Having presented some concepts in the previous chapter, it is time to use them to examine the career of items of information as they are discovered, analysed, ‘cooked’ or ‘processed’ and so transformed into knowledge. The variety of disciplines is linked to the variety of practices, although specific disciplines rarely monopolize a given practice. Borrowing from one discipline to solve problems in another will be a recurrent theme of what follows, as it is of intellectual history in general. This chapter is organized around a sequence of practices, different stages in the process of making and using knowledge. As we have seen, the process of making can be described in German in a single word, Verwissenschaftlichung, sometimes translated as ‘scientification’ but not limited to the natural sciences, so that ‘systematization’ might be a more exact rendering. It is all too easy to assume that these systematizing practices are unchanging. In fact, as scholars increasingly emphasize, they are time-bound, pursued according to different rules and with different kinds of support in different epochs and milieux.1 What follows offers a series of examples illustrating the historicity of these procedures.
The problem of the variety and the irreconcilability of human points of view is an old one. One attempt at a solution is known as ‘objectivity’, which might be described as an attempt to separate knowledge from the knower and thus to present a ‘view from nowhere’. This collective attempt reached its peak over a hundred years ago, before sociologists of knowledge, from Mannheim onwards, began to undermine it. As Lorraine Daston and Peter Galison have recently shown, ‘Scientific objectivity has a history.’2 To be precise, objectivity has several histories, since the story differs in the natural and the social sciences, not to mention the history of history itself.
In the natural sciences, the stress on objectivity, defined as ‘blind sight, seeing without inference, interpretation, or intelligence’, had its heyday in the 1860s and 1870s. Photography was at once a means to this self-effacement of scientists in the service of science and an inspiration for it. The pioneer photographer William Henry Fox Talbot, in his Pencil of Nature (1844), praised the new technology precisely because the images ‘have been formed or depicted by optical and chemical means alone, and without the aid of any one acquainted with the art of drawing’. In short, they are ‘impressed by Nature's hand’. In the twentieth century, by contrast, leading scientists came to emphasize the place of intuition in discovery and the value of trained judgement.
In the case of history, in the seventeenth and eighteenth centuries, the ideal was ‘impartiality’, otherwise known as freedom from ‘bias’. It was in the nineteenth century that historians borrowed the language of scientific objectivity, bending it to their own purposes, notably the attempt to avoid expressions of national prejudice. The cosmopolitan Lord Acton, general editor of the Cambridge Modern History (1902), provided a famous formulation of the ideal (while employing the traditional language of ‘impartiality’) in letters to Cambridge University Press and to the contributors, telling them that ‘Our scheme requires that nothing shall reveal the country, the religion, or the party to which the writers belong’ and that ‘our Waterloo must be one that satisfies French and English, Germans and Dutch alike’. In the 1930s, however, two leaders of the profession in the USA, Carl Becker and Charles Beard, argued that objectivity in historical writing was impossible, no more than a ‘noble dream’.3
Newspapers also claimed to be impartial. For example, the London Courant of 1688 promised to present information ‘with the integrity of an unbiased historian’, ‘representing things as they shall really happen’.4 The ideal was sometimes expressed in the title of the paper, as in the case of The Impartial Reporter, founded in Ireland in 1825. By the twentieth century, the claim was made in the language of objectivity. In the United States in the 1920s, for instance, ‘objectivity became a fully formulated occupational ideal, part of a professional project or mission’.5
Another way of thinking about the relation between investigators on one side and the cultures that they investigate on the other is to employ the language of the sociologist Norbert Elias and oppose ‘involvement’ to the ‘detachment’ that Karl Mannheim (with whom Elias had worked in Frankfurt) believed to be characteristic of what he called the ‘free-floating intellectual’ (freischwebende Intelligenz). In the case of historians, attempts at ‘distance’ from the past have been contrasted with efforts to come close to it. These attempts and efforts have their own history.6 In the nineteenth century, Leopold von Ranke and his followers (including Lord Acton) tended to view the past from a distance, while Jules Michelet and Thomas Carlyle identified themselves with some historical events and their protagonists.
It is convenient to distinguish four main stages in the sequence that runs from acquiring information to using it: gathering, analysing, disseminating and employing, although it will be useful to introduce further distinctions later. Needless to say, the four categories are fluid rather than fixed. Observation, for instance, is not only a means to understand but requires previous understanding in order to be effective. If, say, individuals from Anglo-Saxon England were to visit London today, it would be difficult for them to make sense of much of what they saw.
Although both gathering and analysis are indispensable, analysis has generally had more prestige. In the nineteenth century, mathematics and philosophy were often considered to be ‘higher’ pursuits than natural history because they were analytical while natural history was only descriptive. Analysis was also contrasted with ‘the mere gathering of raw facts’. Indeed, some scientists believed that ‘all the physical sciences aspire to become in time mathematical’. In similar fashion, the Victorian polymath Herbert Spencer declared that sociology stood to history ‘much as a vast building stands related to the heaps of stones and bricks around it’, and that ‘The highest office which the historian can discharge is that of so narrating the lives of nations, as to furnish materials for a Comparative Sociology.’
The acquisition of information includes ‘gathering’ in the literal sense of collecting plants for medical or botanical purposes, collecting rocks as geological ‘specimens’, and so on. These material objects are as near to raw ‘data’ as one can get, but even here the gatherers operate with principles of selection shaped by their culture. In other words, the process of transformation from ‘raw’ to ‘cooked’ has already begun. A similar point about selection can be made still more strongly about other forms of collecting information, by historians studying documents, journalists interviewing politicians and so on. In these cases there is a double filter, since the politician or the author of the historical document selected the information from which the journalist or the historian made another selection, each for their own purposes.
For many centuries, individuals have travelled in search of knowledge. In Arabic, a special phrase exists to describe this kind of travel, talab al-’ilm, and a saying attributed to Muhammad is often quoted: ‘Seek knowledge, even as far away as China.’ The fourteenth-century historian Ibn Khaldun pointed out that learning was revived in the Maghrib thanks to individuals who travelled to centres of learning such as Baghdad and Damascus and returned to transmit what they had learned. Still more ambitious was the fourteenth-century Moroccan Ibn Battuta, who took Muhammad's recommendation literally and made his way to China.7
Within Europe, medieval students, the wandering scholars, moved from one university to another. The custom, known as peregrinatio academica, continued into the early modern period and has of course been revived in our own time. In the twentieth century, scholars travelled in order to conduct ‘fieldwork’, whether to collect botanical or geological specimens or, in the case of anthropology, to study different cultures at close hand. Historians too travel in order to gather knowledge, whether they visit archives or, in the case of oral history, interview informants and record their memories of past events and processes.
As those last examples suggest, gathering knowledge is not limited to picking flowers or picking up shells but extends to observation, asking questions or more generally listening to what people say.
Observation is more than simple looking. It might be described as close looking, a practice penetrated by ideas if not by theories. It comes in a number of varieties, techniques learned in different situations by different kinds of people for different purposes, from astronomers gazing at the stars to physicians offering diagnoses on the basis of symptoms.
Equally important, the practice has changed and developed over the centuries. ‘Observation as both word and practice wandered from rustic and monastic settings to learned publications’ from astronomy in the fifteenth century to medicine in the sixteenth. Observation in the sense of ‘a methodical and empirical approach’ and ‘adherence to detail’ became ‘part of scientific proof and persuasion’.8 The seventeenth century, especially in the Netherlands, has been called ‘the age of observation’. To be more precise, certain kinds of observation were transformed at this time by the invention of the telescope and the microscope. At about the same time, a number of writers, including physicians, claimed to be able to read hearts and minds by observing the changing expressions of the face. In the eighteenth century, more emphasis was placed on clinical observation in medicine. A Dutch learned society (in Haarlem) offered a prize for an essay on the art of observation (1770), while a ‘Société des Observateurs de l’Homme’ was founded in France (1790).
In the nineteenth century, when books on ‘how to observe’ and ‘what to observe’ appeared in print, the famous study of war by Karl von Clausewitz emphasized the importance of military observation, while the historian August von Schlözer discussed what he called ‘the statistical gaze’, by which he did not mean looking at figures but the close observation of states by students of politics. Systematic investigations of social conditions on behalf of the government, including the collection of statistics in the modern sense of the term, formed part of what has been described as the ‘nineteenth-century revolution in government’, dependent on a multitude of ‘experts’ – inspectors, medical officers, imperial administrators, statisticians and other ‘agents of knowledge’, together with social surveys that provided information about poverty, literacy, disease and so on.9 The census is an ancient institution, from China to Israel, but regular censuses of the population of different states only became common practice in the nineteenth century.
A specialized form of observation developed at the turn of the nineteenth and twentieth centuries in order to combat crime, although some of the techniques involved had a wider application. As Carlo Ginzburg noted in the 1970s, the fictional investigator Sherlock Holmes, who claimed that ‘there is nothing so important as trifles’, was a contemporary of Sigmund Freud, who revealed the significance of small slips of the tongue, and of the scholar-physician Giovanni Morelli, who developed a method for attributing paintings to particular artists by examining with care small details such as the forms of drapery or the shapes of human ears, which each painter – whether consciously or unconsciously – represents in a distinctive manner.10
Edmond Locard, sometimes described as the ‘Sherlock Holmes of France’, opened the first forensic laboratory in Lyon in 1910. Locard became famous for his discussion of the silent testimony of the traces left by human activity on material culture. According to his ‘exchange principle’, the criminal will bring something to the scene of the crime, if only a few dropped hairs on a carpet, and leave with something from it, thus offering detectives clues linking an individual, a scene and an event. Today's crime scene investigators make use of a sophisticated technology that was unavailable in Locard's time, but they follow in his footsteps.
A relatively new form of observation has come to be practised by anthropologists and sociologists. The phrase ‘participant observation’ dates from the 1920s and originally referred to a situation where the observer was a member of the group observed, recruited by an outside investigator. Today, on the other hand, it refers to outsiders who join the group they wish to study while remaining as unobtrusive as possible in the circumstances.11 A still more recent form of observation that spread in both the UK and the USA in the 1980s makes use of closed-circuit television cameras, a visible sign of the ‘surveillance society’.
Expeditions in search of knowledge, often funded by governments, go back at least as far as the fifteenth century, when the Spanish monarchs Ferdinand and Isabella supported the attempt by Columbus to find a new route to the Indies, accidentally discovering America on the way. In 1570, the government of Philip II of Spain sent the physician Francisco Hernández to Mexico and Central America to discover new plants with medical uses. It was from the eighteenth century onwards, however, that scientific expeditions to different parts of the globe became frequent, sent for the most part by imperial governments – British, French, Spanish, Portuguese, Russian and so on.12 Astronomers, botanists, naturalists and mineralogists took part in these expeditions, along with artists. For the savants, the purpose of these expeditions was to acquire knowledge for its own sake, while for the organizers, often governments, they formed part of the enterprise of empire. Many specimens were collected; 16,000 plants and seeds were taken to the Royal Botanic Garden in Spain after the Malaspina expedition to the Pacific (1790), while 50,000 specimens were sent back from Rio by the US Exploring Expedition (1838), officially described as undertaken ‘to extend the bounds of science and promote the acquisition of knowledge’.
As a case-study of this trend to the collective gathering of knowledge, we might take Captain Cook's first voyage to the Pacific (1768–71), visiting Brazil, Tahiti, New Zealand and Australia. His mission was undertaken at the request of the Royal Society in order to observe the transit of Venus across the face of the sun, a movement that would allow the distance of the earth from the sun to be calculated. Hence Cook was accompanied by the assistant to the Astronomer Royal. Again at the request of the Royal Society, Joseph Banks, a young gentleman amateur with botanical training (not yet the middle-aged knowledge manager whom we met in the previous chapter), was allowed to join the expedition together with his assistants, including two naturalists (one of them the Swede Daniel Solander, who had been trained by the famous Linnaeus) to collect specimens of plants and animals and two artists to record landscapes and peoples encountered during the voyage.
The expedition was supported not only by the Royal Society but also by the Admiralty, which was more concerned with knowledge that might be useful in imperial projects than with pure science. Cook was ordered by the Admiralty to search for undiscovered territories that Britain might claim and to that end he mapped the coast of New Zealand and part of Australia. His instructions, formulated with precision, commanded him ‘carefully to observe the Nature of the Soil and the Products thereof; the Beasts and Fowls that inhabit or frequent it; the fishes that are to be found in the Rivers or upon the Coast and in what Plenty; and in case you find any Mines, Minerals or valuable stones, you are to bring home Specimens of each, as also such Specimens of the Seeds of Trees, Fruits and Grains as you may be able to collect’.13
The observations made of the transit of Venus did not tell a clear story. However, the expedition did bring back answers to some of the questions formulated in Cook's instructions. Among the ‘Beasts’, they sighted a kangaroo. They discovered many new plants on the coast of Australia (justifying the name ‘Botany Bay’), including a spiky flower that is now known as Banksia. Banks extended his interests beyond botany to include the ‘manners and customs’ of the different peoples encountered on the voyage. It has been noted that his vision of the Pacific was no innocent eye but ‘coloured’ by his classical education. For example, he saw Tahiti as ‘the truest picture of an arcadia’.14 All the same, Banks's interest in learning something of local languages, notably Tahitian, and his careful observation of costume, ceremonies and customs such as cannibalism and tattooing are worthy of note.15 In New Zealand, he even insisted on buying and taking home the head of an enemy recently killed by one of the Maori whom the expedition encountered.16
To be of any use, the information that is gathered needs to be stored and preserved, most obviously by being written down. In the case of the scientific expeditions just mentioned, as in that of individual anthropologists or ethologists, taking ‘field-notes’ was an important part of their task. In the case of censuses and other social surveys, the information collected ended up in registers, files or in more recent times in databases.
As these stores of knowledge multiplied, keeping them safe became a problem to which the archive offered a solution. In early modern Europe, officials often worked at home and as a consequence treated government papers as their private property, inaccessible to their successors. From the point of view of efficient administration, this was a major inconvenience. As Queen Elizabeth wrote to the Master of the Rolls in 1567, ‘It is not meet that the records of our Chancery … should remain in private places and houses.’ Hence governments, following the lead of the Papacy and the Republic of Venice, began to establish archives, complete with keepers and rules governing access. In the nineteenth century, archives gradually opened to the public, ‘archivist’ became a profession, and some archivists insisted on saving documents that governments wished to destroy, as Francis Palgrave, the first head of the English Public Record Office, did in the case of the census returns of 1851. It is only recently, however, that historians, especially historians of knowledge, have come to view archives as an important object of research in themselves as well as a collection of sources for the study of other aspects of the past, so that books and articles on the topic are beginning to proliferate.17
When books were relatively few, as in early medieval Europe, copying manuscripts was an important activity and the libraries of leading monasteries were major sites of knowledge. As books multiplied in the late Middle Ages – and still more after the invention of printing – their storage became an increasingly acute problem. The Vatican Library, one of the most important in Europe at this time, consisted of 2,500 volumes in 1475. In 1600 the imperial library in Vienna included 10,000 volumes, but by 1738, the number had increased to 200,000. The library of the British Museum reached 540,000 volumes by 1856. Today, the Library of Congress contains the mind-boggling number of some 100 million items.18 It is necessary to refer to ‘items’ rather than to books because today's libraries store information in the form of tapes, CDs, videos and so on. So do governments. Photos were used by the Paris police from 1872 onwards to identify criminals, while the Watergate scandal of the 1970s is a reminder of the importance, to governments and investigative journalists alike, of tape-recorded conversations.
All these items take up space and one of the main problems for modern archivists and librarians has been finding room for the endless flow of new ‘acquisitions’. The Italian state archives contained 163,932.57 square metres of shelving in 1906, for instance, while the FBI had accumulated over 65 million cards in its files by 1981. It might be said that the digital revolution came in the nick of time, moving information from the earth to the cloud. By the year 2003, the FBI had a billion files online.
The complementary opposite of preserving knowledge is of course losing it. These losses have a history, including famous events such as the burning of the great library of Alexandria. To loss by accident must be added knowledge that is discarded, books and manuscripts that librarians and archivists ‘de-accession’, in other words throw away. It was in response to the Italian government's plan to discard most of the files from the census of 1928 that the famous statistician Corrado Gini developed a method of sampling. On a grander scale, whole fields of knowledge, from alchemy to phrenology to eugenics, have lost their respectability and virtually disappeared from the academic world, even if they sometimes survive on its margins.19
Today, databases are organized for rapid retrieval, complementing or replacing earlier systems. Human memory is obviously the oldest form of retrieval, sometimes assisted by training in the ‘art of memory’, which in the ancient world and again in the Renaissance involved associating what needed to be remembered with vivid images arranged in an imaginary building such as a memory palace or theatre.20 Memory might be assisted by objects such as the knotted and coloured cords used in Peru under the Incas and known as qipus.
For archivists and librarians, organizing their material in order to facilitate retrieval is an old problem. In large early modern archives such as that of the Venetian government, there was not only a catalogue but also indexes of names and subjects, while other archivists preferred to arrange material in chronological order. In the nineteenth century, a new principle for organizing archives was formulated, the principle of provenance. That is, documents came to be arranged according to the institution that created them, making it easier for researchers to imagine past administrators at work.
Librarians faced similar problems to archivists. In the case of small libraries, consisting of no more than a few hundred books, the solution was simple: to compile a catalogue informing readers on which shelf a given manuscript was to be found. The larger the library, however, the larger the problem. In the most famous library of the ancient world, the library of Alexandria, founded in the third century BCE, the collection consisted of scrolls, so that viewing the contents took much longer than opening a modern book. Searching half a million scrolls was a serious problem, even if it was alleviated by tags attached to the edge of the scroll, by labels on the bins in which the scroll was stored and by an innovation that would turn into a tradition, a catalogue of authors and subjects. These catalogues were first written on scrolls, then in bound volumes and finally on cards arranged in drawers. It was the American librarian Melvil Dewey who standardized the size of the cards, at 125 × 75 millimetres, and formed a company, the Library Bureau, to sell the cards and other equipment such as the filing cabinets that housed them. Scholars too found cards a convenient means of arranging their notes, replacing the more fragile slips of paper they had used in the seventeenth and eighteenth centuries. Even in the digital age, some scholars still make notes in this way.
The subject catalogue was a solution to a problem that raised problems of its own. Paraphrasing Plato, one might say that, in the ideal library, the librarian should be a philosopher or a philosopher should become the librarian. In fact, this combination famously occurred at the library of Wolfenbüttel in Germany, where Gottfried Wilhelm Leibniz worked from 1691 to his death in 1716. Leibniz combined a philosopher's interest in the organization of knowledge with a librarian's interest in classifying books. He preferred a practical to a logical classification, remarking that ‘those who arrange a library very often do not know where to place certain books, being in suspense between two or three places equally suitable’. In all his years as librarian, he was happy to accept the traditional classification of books according to academic disciplines – arts, medicine, law and theology – while introducing new categories where necessary, among them one for books on craft skills, located under mechanica and oeconomica.21
The most famous system for classifying books was developed by Melvil Dewey. It was the so-called DDC or Dewey Decimal Classification, first published in 1876 and gradually expanded and improved in successive editions – eighteen in all. It has been adopted by many libraries in many parts of the world. It also inspired the Belgian Paul Otlet in his attempt to organize knowledge in general, to ‘catalogue the world’ in the service of world peace and, he hoped, world government (active in the first half of the twentieth century, Otlet was a warm supporter of the League of Nations). An enthusiast for new technology, in his case microfilm and the telegraph, Otlet dreamt of freeing the organization of knowledge from the organization of books, planning a collection of images and a sound archive and imagining, in the 1930s, a ‘world network’ of information not far removed in conception from today's Internet.22
The proliferation of books posed problems for readers as well as librarians. It has been said that ‘the half of knowledge is knowing where to find it’. As Samuel Johnson remarked to his friend Boswell, ‘Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it. When we enquire into any subject, the first thing we have to do is to know what books have treated of it. This leads us to look at catalogues, and at the backs of books in libraries.’ In this case too, procedures have changed over time. From the early seventeenth century onwards, printed subject bibliographies helped to orient would-be readers. They multiplied so fast that as early as 1664 one scholar thought it necessary to produce a bibliography of bibliographies. In similar fashion, a dictionary of dictionaries (including encyclopaedias) was published in 1758.
Finding the right book was not enough. There remained the problem of finding information in it. Hence the gradual rise, after Gutenberg, of finding devices such as tables of contents, indexes, or summaries of chapters or paragraphs printed in the margin of the book. As tables, charts and graphs became more common, there emerged the problem of learning to read them, sometimes described as ‘consultation literacy’.
For individuals needing information on particular subjects in a hurry, the encyclopaedia – large or small, general or specialized – has long offered a solution to their problems, not only in the West but in the Islamic and East Asian worlds as well. The Chinese encyclopaedic tradition goes back to the third century CE, while Chinese encyclopaedias reached vast dimensions long before Western ones. The early fifteenth-century Great Handbook (Yongle dadian) involved some two thousand contributors and ran to more than ten thousand volumes, making it too expensive to print, while the Collection of Pictures and Writings (Tushu jicheng) contained more than three-quarters of a million pages, making it in all probability the longest printed book in the world.
How to organize this mass of information posed a problem for the Chinese, just as it did for the editors of the great French Encyclopédie (1751–65) or the Encyclopaedia Britannica (first edition, 1768–71). The traditional method of organization for Western encyclopaedias, as for library catalogues, was by subject, following the university curriculum. However, the editors of the Encyclopédie, with some regret, opted for what they called the ‘dictionary principle’, in other words, entries arranged in alphabetical order. This solution was of course unavailable to the Chinese, who do not use an alphabet. Their traditional arrangement distinguished three large categories (heaven, earth and humanity) with many sub-divisions.
In the last few decades, encyclopaedias have gone online, the Britannica (1994) no less than Wikipedia (2001), forming part of a major shift in the way in which many people seek information, a shift that has prompted one writer to claim that we now live in ‘Search Engine Society’. Searching online, like searching in libraries, requires particular skills. What has been described as ‘search engine literacy’ is replacing older forms of consultation literacy. It includes not only the formulation of fruitful questions but also an awareness of the built-in biases of the engines, which are of course driven by advertising.23
It is time to move from retrieval to analysis. ‘Analysis’ is a technical term with rather different meanings in different disciplines: algebraic analysis, analytical chemistry, analytical philosophy, spectroscopic analysis, tissue analysis, psychoanalysis and so on. In chemistry, for instance, analysis involves breaking down substances into their constituents. By contrast, historical analysis depends on synthesis, the combination of pieces of information like fragments of a jigsaw puzzle in order to construct explanations of events and trends. In sociology and anthropology, ‘functional analysis’, an approach that was widespread in the mid twentieth century, meant – like psychoanalysis – rejecting the explanations of actions given by the actors themselves and offering new explanations that claimed to be more profound.
In what follows, however, the term ‘analysis’ will be used to refer to what I described earlier as ‘cooking’, in other words the process of turning information into knowledge by means of practices such as description, quantification, classification and verification. They too have their history. The seventeenth century, for instance, was the heyday of the so-called ‘geometrical method’, applied to subjects ranging from physics to ethics, politics and even history. Thomas Hobbes's Leviathan (1651) showed that its author was, as his friend John Aubrey described him, ‘in love with geometry’. Spinoza described his Ethics (1677) as ‘proved geometrically’ (ordine geometrico demonstrata), listing axioms and deducing conclusions. In a treatise published in 1679, a French bishop, Pierre-Daniel Huet, tried to establish the truth of Christianity on the basis of ‘axioms’ such as the following: ‘Every historical work is truthful, if it tells what happened in the way in which it is told in many books which are contemporary or more or less contemporary to the events narrated.’ A Scottish theologian, John Craig, both an acquaintance and a follower of Isaac Newton, formulated what he called the Rules of Historical Evidence (1699) in the form of axioms and theorems. Unfortunately these axioms and theorems, like Huet's, turned out to be rather banal, using the language of mathematics and physics to restate commonplaces, for example the principle that the reliability of sources varies according to the distance of the witness from the event recorded.
Description is often contrasted with analysis but the careful description of what has been observed is an indispensable stage in the analytic process. Like observation, description is a practice that might appear to be timeless – yet it has a history, becoming increasingly precise, systematic and specialized. In the West, for example, the tradition of describing places goes back to the ancient Greeks, notably to Strabo, while the precise description of plants and animals goes back to the Renaissance. The description of plants in particular became more detailed, more precise and more methodical, more concerned with the differences between one species of plant and others, hence more and more reliant on illustrations to complement the information conveyed in words.24 It has been argued that seventeenth-century Dutch art was an ‘art of describing’, as opposed to the narrative art of the Italians, and that it was linked both to map-making, an art that the Dutch dominated at this time, and to observation through the microscope, a Dutch invention of the same period.25 In similar fashion, some English artists have been described as careful observers of either nature or society; Hogarth, for instance, with his sharp eye for the details of manners and customs, or Constable, with his attention to the precise forms of trees or clouds.26
Early modern antiquarians, in other words scholars who studied the material remains of the past, offered increasingly precise descriptions of their finds, together with illustrations, as the eighteenth-century English scholar William Stukeley did in the case of Stonehenge. From the late eighteenth century, the police in France and elsewhere became concerned with the precise description of wanted criminals. Like botanists, they turned to drawings (and later, to photographs) to help them in their task. Early modern Venetian ambassadors were required to produce a ‘report’ (relazione) on their return, analysing the strengths and weaknesses of the state in which they had been residing. From the eighteenth century onwards, the many scientific expeditions undertaken also produced masses of reports, describing what had been discovered. In the nineteenth century, the practice of reporting spread more and more widely together with the apparatus of bureaucracy. A series of reports by committees, the prologue to government action, was published in nineteenth-century England: the Sadler Report (1832) on child labour in factories, for instance, the Report of Commissioners Appointed to Inquire into the British Museum (1850) and the Northcote-Trevelyan Report (1854) on the reform of the civil service.
As we have seen, some nineteenth-century scholars looked down on what they called ‘mere’ description, as practised by the naturalists, for instance. In response, two defences of description were formulated. One defence, linking description to interpretation, will be discussed later in this chapter. The other was to make description more precise by means of quantification.
In order to be precise, descriptions needed to include measurements and other numbers. Some eighteenth-century scientists measured and even weighed the earth. In the nineteenth century, chemists undertook quantitative analysis of different substances to discover the relative proportions of their constituents, physicists measured changes in matter and energy and astronomers collected stellar statistics. Alexander von Humboldt became famous for his contribution to different sciences, from botanical geography to geophysics, by means of quantitative methods and a whole arsenal of scientific instruments. Francis Galton played an important part in the development of biostatistics.
Social surveys increasingly included tables of figures, so that the term ‘statistics’ acquired a new meaning as a synonym for what used to be known as ‘political arithmetic’. In the late eighteenth century the French government, assisted by leading mathematicians such as Condorcet, became a pioneer in collecting and analysing statistics.27 New methods of visual display such as the graph and the ‘pie chart’ developed in order to communicate the results of measurement more rapidly and more memorably. In late eighteenth-century Britain, William Playfair, an engineer turned economist, was a pioneer in this domain.28 By the later nineteenth century, statistical offices formed part of the administration in a number of countries in Europe. Scholars in the social sciences followed this lead: economists measured ‘gross national product’, for instance, students of elections (sometimes described as ‘psephologists’) counted votes and sociologists analysed statistics in order to discover the relation between trends in crime, education, health and so on. Anthropologists turned to ‘anthropometry’, measuring bodies, especially skulls, in order to identify different peoples, a technique borrowed by the French police officer Alphonse Bertillon (the son of a statistician) to identify individuals by their physical measurements.
In the humanities, by contrast, quantitative methods were slower to come into use and their relevance remains controversial. In the case of texts, quantitative content analysis (counting the frequency of certain words, for instance) has often been used to identify the authorship of anonymous works. As for history, by the middle of the twentieth century, a substantial group of economic, social and political historians were employing quantitative methods, mockingly described by opponents of the trend as ‘cliometrics’, the vital statistics of the goddess of history.
Precise description assisted the process of classification, as in the well-known case of the eighteenth-century Swedish scholar Carl Linnaeus, whose ambitions were summed up in the title of his most famous work, ‘The system of nature’ (Systema Naturae, 1735). His famous binomial scheme for classifying plants, giving each one two names, one for the genus to which it belonged and the other for the particular species, was published in 1753. Although the system of Linnaeus was controversial, and abandoned by nineteenth-century botanists in favour of a more ‘natural’ system, his method remained an inspiration to scholars working in other disciplines and attempting to classify animals, diseases, minerals, chemical compounds and even clouds. Linguists too engaged in classification, arranging related languages into families such as ‘Indo-European’ or ‘Ural-Altaic’.
Debates about classification in particular disciplines were extended, almost inevitably, to knowledge itself, often imagined as a tree with many branches. The traditional Western system, following the university curriculum, distinguished theology, law and medicine (the three subjects of postgraduate study in the Middle Ages) from the ‘arts’. The arts, otherwise known as the ‘liberal arts’, were divided in their turn into the more elementary trivium, a package of grammar, logic and rhetoric, and the more advanced quadrivium, comprising arithmetic, geometry, astronomy and music. The degree of BA, ‘bachelor of arts’, originally referred to these seven liberal arts. The non-academic knowledges that remained were described in parallel fashion as the seven ‘mechanical arts’. There was some disagreement about the contents of this package, but weaving, agriculture, architecture, metallurgy, trade, cooking, navigation and warfare frequently recur.
Over the centuries many suggestions were put forward for reorganizing the system. Renaissance humanists, for instance, following the example of ancient Rome, stressed what they called the studia humanitatis: grammar, rhetoric, poetry, history and ethics. It is from these studies that the words ‘humanist’ and ‘humanities’ are derived. Francis Bacon advocated a division of knowledges into three major sections, each associated with one of the three ‘faculties’ of the human mind: memory (including history), reason (including mathematics and law) and imagination (including art). Bacon's system was accepted, with modifications, by the editors of the French Encyclopédie, even though they decided to arrange their articles in alphabetical order. Indirectly, Bacon inspired Melvil Dewey's Decimal Classification System, still in use in many of the world's libraries.
There is no equivalent to the Decimal Classification System for use in museums, where the arrangement of objects has sometimes been controversial. In the 1890s, for instance, the anthropologist Franz Boas caused a sensation in the American museum world by criticizing the organization of exhibits in the Smithsonian Institution. The exhibits were arranged in the way customary at that time, according to the assumption of what Boas called ‘a uniform systematic history of the evolution of culture’. What he preferred was what he called ‘tribal arrangement of collections’ in what would become known as ‘culture areas’. The North West Coast Hall, which Boas arranged in the American Museum of Natural History in New York, illustrates his approach and his view of objects as so many witnesses to the nature of the culture in which they were produced. Exhibits, he argued, could ‘show how far each and every civilization is the outcome of its geography and historical surroundings’. Indeed, an object, according to Boas, could not be understood ‘outside of its surroundings’ (or, as we often say today, its context).
To illustrate this point, Boas took the example of a pipe. ‘A pipe of the North American Indians’, he argued, ‘is not only a curious implement out of which the Indian smokes, but it has a great number of uses and meanings, which can be understood only when viewed from the standpoint of the social and religious life of the people.’ Hence Boas liked to show ‘life groups’ in the museum, with waxworks of people in the act of using the objects, in order ‘to transport the visitor into foreign surroundings’ and to allow an appreciation of an alien culture as a whole.29
Classification depended on comparison and contrast. The comparative method became increasingly important in the academic world of the mid nineteenth century. One of its great successes was comparative anatomy, in other words the study of both similarities and differences in the anatomy of different species. Already in the sixteenth century, some scholars had compared and contrasted the skeletons of humans with those of animals, but it was Georges Cuvier who employed the comparative method in Leçons d’anatomie comparée (1800) and other works to reconstruct extinct species of animal, such as the mammoth, on the basis of the fragmentary evidence of fossils.
Philology was another field where the systematic use of the comparative method led to new discoveries, such as the common descent of Sanskrit, Greek and Latin from a lost ancestral language, proposed by the lawyer William Jones (‘Oriental Jones’) when he was living in Calcutta, or the structural similarities between languages with very different vocabularies, such as Hungarian and Finnish. A major work in this field was the Vergleichende Grammatik (‘Comparative Grammar’), published from 1833 onwards by Franz Bopp, a professor of Sanskrit who extended his interests to what he called the ‘Indo-European’ languages.
The comparison of languages stimulated the comparison of religions and mythologies. Jones noted similarities between the Hindu gods and those of the Greeks and Romans. The German philologist Max Müller, a specialist on Sanskrit, published a study of comparative mythology in 1856 and became professor of comparative philology at Oxford in 1868, helping to establish the new field that became known as ‘comparative religion’. A century after Müller, the French scholar Georges Dumézil spent much of his career studying similarities between Indian, Greek, Roman, Norse and Celtic mythologies, all from parts of the world where ‘Indo-European’ languages were spoken. He replaced the earlier concern with similar gods, such as Jupiter and Odin, with a concern with system, which he called ‘ideology’, emphasizing the relation of different gods to praying, fighting and farming, the ‘three functions’ that underlie many social structures.30
The comparative method was used not only to establish genealogies of languages, gods or myths but also to assist explanation. As we have seen, Herbert Spencer wished to establish a new discipline under the title of ‘comparative sociology’. Some historians, not content simply to supply the raw material for sociologists to construct their theories, wrote comparative studies such as the ‘parallel’ between the histories of Spain and Poland published by the Polish historian Joachim Lelewel in 1831. In his System of Logic (1843), the philosopher John Stuart Mill reflected on the use of comparison in the search for causes, emphasizing the importance of what he called ‘concomitant variation’, in other words what is now known as the ‘correlation’ between two sets of data.
It is obviously difficult to distinguish interpretation from description and even from observation. All the same, it is possible to distinguish an interpretive method, or cluster of methods. Where comparison, like the functional analysis that used to be practised in sociology and anthropology, offers a view from outside, the interpretive method, which is common to a number of disciplines, attempts understanding from within. The method, which is thousands of years old and can be found in different cultures, was elaborated and systematized in the study of texts, religious texts such as the Bible and the Koran and secular texts such as Roman law. In Renaissance Europe, one school of lawyers, following what was known as the ‘French style’ (mos gallicus), interpreted laws in an historical manner by examining the way in which the main concepts were used and attempting to reconstruct the intentions of the legislators and the circumstances, or, as we now say, the cultural ‘context’, in which the laws were formulated. The idea of context itself has a history.31
Adopting a similar approach to the Bible was more risky, in both the Catholic and Protestant worlds, but the tendency to interpret the Bible as an historical document, or more exactly an anthology of historical documents, gradually became more powerful in the eighteenth and nineteenth centuries. The parallels between the problems of interpreting the Bible and interpreting texts from ancient Greece and Rome caught the attention of a number of scholars (notably the German theologian Friedrich Schleiermacher) and led to the development of ‘hermeneutics’. This was a general method of approaching texts that emphasized the value of the ‘hermeneutic circle’, interpreting the parts with reference to the whole and the whole with reference to the parts. At the end of the nineteenth century, Sigmund Freud extended the approach to the unconscious mind in his famous Traumdeutung (‘Interpretation of Dreams’, 1899). Systematic attempts to interpret dreams go back to ancient Greece or beyond, but Freud tried to place these approaches to dreams on a new basis.
In the twentieth century, interpretation was extended still further, illustrating once again the transfer or translation of a method from one discipline to others. Art historians began to practise the interpretation of images at two levels, not only ‘iconography’ (concerned with the manifest content of an image) but also what a leading practitioner of the method, Erwin Panofsky, called ‘iconology’ (examining its deeper cultural significance).32 More recently, some musicologists have described themselves as doing ‘musical hermeneutics’, an approach that, ‘like psychoanalysis, seeks meaning in places where meaning is often said not to be found’.33 In archaeology, some British scholars made an ‘interpretive turn’ and described themselves as ‘reading’ artefacts in order to reconstruct past meanings. Whether or not the participants realized it, their movement, also known as ‘contextual archaeology’, followed a German hermeneutical tradition, although its immediate inspiration came from the British philosopher-historian R. G. Collingwood and the American anthropologist Clifford Geertz.34 Rejecting the mode of analysis from outside that is characteristic of the natural sciences, Geertz and his followers turned to the interpretation of human behaviour, treating different cultures as texts that needed to be ‘read’. The practitioners of interpretive anthropology described their own work as ‘thick description’, a form of description that, like iconology, included interpretation. In this way they responded both to the depreciation of ‘mere’ description and to the emphasis on functional analysis in the work of their colleagues.35 Retrospectively, one might extend the idea of thick description to include two historical masterpieces, Jacob Burckhardt's Cultur der Renaissance in Italien (‘Civilization of the Renaissance in Italy’, 1860) and Johan Huizinga's Herfsttij der Middeleeuwen (‘Autumn of the Middle Ages’, 1919).
How do we know that our knowledge is reliable? What counts as proof, or as evidence? Each discipline has to face the problem of verification. Like observation and description, methods of verification have a history, the study of which is known as ‘historical epistemology’, concerned with changes in the justifications for belief and in the methods of acquiring knowledge. A pioneer in this field was the historian of philosophy Ernst Cassirer, whose study of the problem of knowledge in early modern Europe was published in 1906–7. In the preface to this work, Cassirer criticized the assumption that the ‘instruments of thought’ (by which he meant fundamental concepts) are timeless. On the contrary, he argued, each epoch has its own. Recent scholars have gone further in this direction, expanding the idea of ‘instruments of thought’ to include scientific instruments such as telescopes, which have become larger, more sophisticated and more powerful over the centuries.36
A vivid example of past practice, reminding us that ‘the past is a foreign country’, comes from Steven Shapin's provocatively entitled Social History of Truth, in which he argued that trust in the word of a gentleman in seventeenth-century England extended to accounts of experiments conducted and witnessed by natural philosophers.37 On the other hand, the increasing importance of systematically repeated experiments as a confirmation of scientific discoveries offers a famous example of change in methods of verification. It has been argued that this trend exemplifies ‘the rise of the methods of the manual workers to the ranks of academically trained scholars at the end of the sixteenth century’.38 Beginning in physics and chemistry, experimental methods were gradually extended to new fields such as medicine, agriculture, biology and psychology.
The rise of the practice of ‘experiment’, a term related to ‘experience’, was part of a wider change that might be described as the increasing importance of empiricism in the academic world. Academics who claimed to be masters of scientia used to look down on mere ‘empirics’ such as the healers or artisans who practised on the basis of experience alone. Francis Bacon, however, argued for the value of a middle way. ‘Those who have handled sciences’, he wrote, ‘have been either men of experiment or men of dogmas. The men of experiment are like the ant, they only collect and use; the reasoners resemble spiders, who make cobwebs out of their own substance. But the bee takes a middle course: it gathers its material from the flowers of the garden and of the field, but transforms and digests it by a power of its own.’
Another example of change in methods of proof is the medical autopsy, in other words the examination and where necessary the dissection of corpses to determine the cause of death, thus verifying earlier diagnoses that depended on the evidence of symptoms. Autopsy has a long history – it was practised in ancient Egypt – but its place in medicine became increasingly important in the eighteenth century. A third example of major change concerns the law.39 In Europe in the Middle Ages, for instance, in cases of dispute, the old way for courts to discover what had happened was to ask witnesses, usually older local men of good standing and long memories. The new way, competing with the old, was to make use of the evidence of written documents (the word ‘evidence’, which originally meant whatever was clear or conspicuous, extended its meaning in fifteenth-century England to include documents that could be produced in court).40
It took time for people to learn to trust writing. In a dispute between King Henry I and the Archbishop of Canterbury in the early twelfth century, the king's supporters described a letter from the Pope in support of the archbishop as ‘only a sheepskin marked with black ink’, unworthy of comparison with ‘the assertions of three bishops’. In similar fashion an eleventh-century Muslim traveller, al-Beruni, quoted Socrates to the effect that he did not write books because ‘I do not transplant knowledge from the living hearts of human beings to the dead skins of sheep.’ All the same, the value of oral testimony in different contexts was increasingly devalued from the seventeenth century onwards. The upper and middle classes came to associate it with the ignorance of their social inferiors, while the eighteenth-century scholar William Robertson, in his History of America, assumed its untrustworthiness: ‘the memory of past transactions can neither be long preserved, nor be transmitted with any fidelity by tradition’.41 It was only very slowly that belief in the value of oral tradition revived, among nineteenth-century folklorists, for instance, or twentieth-century oral historians, and then only on condition that it was studied critically.
Another major change in the practice of the courts was the shift from the language of proof to the language of probability. Some sixteenth-century Italian lawyers already distinguished between ‘full’ and ‘partial’ proof, but it was only gradually that lawyers developed a set of concepts to describe the partial or weaker forms: ‘presumptive proof’, for instance, ‘moral certainty’, ‘circumstantial evidence’, ‘probable cause’ or ‘beyond reasonable doubt’. When mathematicians and philosophers began to investigate probability in the seventeenth century, they borrowed ideas from the lawyers. John Locke included an important discussion of ‘degrees’ of probable knowledge in the fourth book of his Essay Concerning Human Understanding (1690). In their turn, lawyers made use of Locke's ideas, among them the judge Jeffrey Gilbert, whose Law of Evidence was published in 1754.42
The methods for identifying the individuals responsible for crimes, especially murder, became more systematic in the nineteenth and twentieth centuries with the rise of police forces and professional detectives. Quintilian, an ancient Roman writer on rhetoric, had already noted the importance of bloodstains as a sign of murder, and ‘signs’ (indicia) of this kind were also discussed by early modern lawyers, but the methodical study of what became known in English as ‘clues’ came much later, forming part of what became known as ‘forensic science’.
The legal model of witnesses and testimonies spread to other disciplines. Experiments, for instance, were described in the language of trials. Again, take the case of what is known as ‘textual criticism’, the attempt to reconstruct the original version of a given text. The different manuscript versions of parts of the Bible or printed versions of plays by Shakespeare have often been described as ‘witnesses’, more or less reliable.43 In the Islamic world, establishing the reliability of the hadith (reports of the sayings of Muhammad) depends on the isnad, the chain of witnesses leading back to the person who originally heard a given saying.44
Again, the early modern Catholic Church instituted more rigorous procedures for canonizing saints, a kind of trial in which the so-called ‘devil's advocate’ tried to find weaknesses in the evidence for sanctity. In the case of history, the seventeenth-century English lawyer turned historian, John Selden, described the process of evaluating historical sources as ‘a kind of trial’. In the twentieth century, following the rise of the detective novel, leading British historians such as R. G. Collingwood and Herbert Butterfield described members of their profession as detectives, following clues in order to establish the facts of the case.
The idea of ‘the facts’, distinguished from gossip, conjecture and other forms of unreliable discourse, is central to empiricism in general and to history in particular. It too derives from the courts, which from ancient Rome onwards distinguished matters of fact from matters of law. Francis Bacon, a lawyer turned historian, declared that ‘The Register of Knowledge of Fact is called History.’ Bacon was also a pioneer in extending the use of the term ‘fact’ to natural phenomena, which he described as ‘the deeds and works of nature’, deeds that needed to be verified by observation and experiment. In similar fashion, the history of the Royal Society by Thomas Sprat described its members as concerned with ‘matters of fact’.45 Seventeenth-century English historians used the language of fact when they claimed to offer an ‘impartial’ view of conflicts. In 1652, Oliver Cromwell asked the scholar Meric Casaubon to write a history of the Civil War with ‘nothing but matters of fact, according to most impartial accounts on both sides’, while after the Restoration, Nathaniel Crouch claimed to present an ‘Impartial Account’ of Cromwell, ‘Relating only matters of Fact’.
As in the case of ‘evidence’, so in that of ‘fact’ we find that a term in legal discourse gradually spread much more widely. Emile Durkheim defined sociology as ‘the science of social facts’ (la science des faits sociaux), while his follower Marcel Mauss introduced the idea of the ‘total social fact’ (fait social total).46 On the other hand, scholars who emphasized facts were mocked by colleagues who stressed interpretation as ‘fact-worshippers’.
As a case-study in the problems of verification we might turn to the history of history itself, focusing on Europe in the seventeenth and eighteenth centuries, when some scholars, known at the time as ‘pyrrhonists’ (after the ancient Greek sceptic Pyrrho of Elis), claimed that much that passed for historical knowledge was not knowledge at all. The problem was the failure of historical knowledge to measure up to strict standards of certainty, notably the epistemological standards formulated by René Descartes. This problem was made more acute by the intellectual wars between Catholics and Protestants, in which both sides were more successful in undermining the authority to which the other side appealed (tradition on one side, the Bible on the other) than in supporting their own. It has also been suggested that scepticism was encouraged – indeed popularized – in seventeenth- and eighteenth-century Europe as an unintended consequence of the rise of a new literary genre, the newspaper, since different papers offered conflicting accounts of the same event.47
The sceptics employed two main arguments. In the first place, they emphasized the problem of bias, contrasting Catholic and Protestant accounts of the Reformation or the narratives of battles and wars produced by the two opposing sides, such as France and Spain. In the second place, they accused earlier scholars of basing their accounts of the past on forged documents and of writing about characters who had never existed and events that had never taken place. ‘Did Romulus exist?’ they asked. ‘Did Aeneas ever go to Italy?’ ‘Did the Trojan War take place, or was it just the subject of Homer's “romance”?’
The defenders of historical practice responded to both arguments. In the first place, they suggested that historians could be impartial and simply tell the story of what happened. Ranke's famous formula, ‘what actually happened’, was anticipated by an eighteenth-century German historian who declared that the historian ‘must present a fact just as it happened’ (Er muss die Sache so vorstellen, wie sie geschehen ist). The defenders sometimes claimed certainty, but they were usually content to admit that their conclusions about what happened in the past were no more than probable.
For example, Locke argued in response to the sceptics that some historical statements are more probable than others and that some cannot reasonably be denied. ‘When any particular matter of fact is vouched by concurrent testimony of unsuspected witnesses, there our consent is … unavoidable. Thus: that there is such a city in Italy as Rome; that about 1700 years ago there lived in it a man, called Julius Caesar; that he was a general, and that he won a battle against another, called Pompey.’ Friedrich Wilhelm Bierling, a professor at the university of Rinteln in Saxony who published a treatise on scepticism about the past in 1724, followed Locke in distinguishing levels of probability in history, three in all, from the maximum (that Julius Caesar existed), via the middle level (the reasons for the abdication of Charles V) to the minimum (the problem of the complicity of Mary Queen of Scots in the murder of her husband, or of Wallenstein's plans in the months before his assassination).
In the second place, the defence argued that it was possible to examine the ‘sources’ for history with a critical eye. In 1681, for example, responding to a Jesuit who had questioned the authenticity of royal charters in early medieval France, the Benedictine scholar Jean Mabillon published a treatise discussing the methods of dating such documents by the study of their handwriting, their formulae, their seals, and so on. In this way he showed how forgeries might be detected and the authenticity of other charters vindicated. Hence it might be argued that the negative arguments of the sceptics had a positive effect, making historians more aware of their methods and more critical of their sources than they had been before. They now quoted more documents and offered more references in their footnotes, so that readers could if they wished follow them back to the sources. To do this was to write ‘critical history’, a scholarly slogan of the early eighteenth century.48
The term ‘critic’ has changed its meaning over the centuries. The Latin word criticus originally meant a philologist engaged in the activity now known as ‘textual criticism’, in other words the attempt, mentioned earlier, to establish what an author originally wrote by studying the different manuscripts of a given text, each of them different, since it is impossible to transcribe even a fairly short text without making mistakes. Thanks to principles such as the authority of the earlier manuscript or the more difficult reading, scholarly editors produced emended versions of texts, even though some emendations remained – and still remain – controversial, above all when they concern a sacred text such as the Bible.
Gradually, the practice of textual criticism was extended to the dating of a given text, its authorship (including the detection of forgeries), the sources used by the author and the cultural contexts in which the text was written and transmitted. For example, the Old Testament was the subject of a controversial study by the Catholic scholar Richard Simon, the Histoire Critique du Vieux Testament (1678), discussing the history of the transmission of the text and the possible authorship of different parts of it (Simon was one of the scholars who argued against the tradition that Moses had written the first five books of the Bible). In the mid eighteenth century, another French scholar, Jean Astruc, argued that the book of Genesis, which used two different words for God, Elohim and Yahweh, was based on two earlier texts, now lost. This kind of investigation of the different parts of the Bible became known as the ‘higher criticism’ (as distinct from ‘lower’ emendations of the text).
This higher criticism was extended to other texts, notably to Homer's epics the Iliad and the Odyssey. The Neapolitan historian Giambattista Vico had already argued in his Scienza Nuova (1744) that the two epics had been written by different individuals living in different centuries. The German scholar Friedrich Wolf went further in his Prolegomena ad Homerum (1795), arguing that the Homeric poems had been transmitted orally and written down much later. Literary criticism gradually emerged from textual criticism in the nineteenth century, at a time when the methods of textual criticism were extended from classical and biblical studies to medieval and modern vernacular literatures, notably by Karl Lachmann in his editions of medieval German poems.
Literary criticism combined and still combines a number of intellectual genres. These genres include the editing of literary texts; their interpretation (once again adapting methods for interpreting the Bible and classical writers such as Homer and Virgil); the analysis of literary techniques (formerly studied under the rubric ‘rhetoric’); the history of literary genres; the biography of authors; and ‘literary criticism’ in a narrower sense, the evaluation of novels, poems, plays and so on. The approach advocated in an American movement of the 1940s, the ‘New Criticism’, was a kind of return to philology in the sense of emphasizing the ‘close reading’ of texts, though at the expense of context. Context had its revenge in the form of a later American movement within literary studies, the ‘New Historicism’ of the 1980s.
In the case of history, nineteenth-century German scholars developed the method of ‘source criticism’ (Quellenkritik), the systematic evaluation of testimonies about the past by considering whether their authors had first-hand or only second-hand knowledge of the topic they were writing about. In what became a classic study, Barthold Niebuhr, writing what he called ‘critical history’, rejected the account of the early Roman past given by Livy, who lived centuries later than the events he recounted, and tried to reconstruct the sources that Livy had followed. Inspired by Niebuhr, Leopold von Ranke produced his Zur Kritik neuerer Geschichtschreiber (‘Critique of Modern Historians’, 1824). In this essay Ranke analysed, indeed took apart, the famous history of Italy written in the sixteenth century by Francesco Guicciardini, asking ‘if his information was original, or if borrowed, in what manner it was borrowed and what kind of research was employed to compile it’. These questions have become routine for historians and they have been extended from texts to images and also to the testimonies collected by oral historians.
Other forms of criticism have less to do with the tradition of studying texts. Immanuel Kant's Kritik der reinen Vernunft (‘Critique of Pure Reason’, 1781) examined the limitations of human reason. Following Marx's claim that philosophers have only interpreted the world, while ‘the point is to change it’, the Frankfurt School of philosophers and sociologists offered a critique as well as an analysis of the society that they lived in. Their approach became known as ‘critical theory’, following the publication of Max Horkheimer's Traditionelle und kritische Theorie (‘Traditional and Critical Theory’, 1937). It has inspired recent movements in other disciplines such as ‘critical ethnography’ and ‘critical legal studies’, concerned to change social and institutional systems as well as to study them.
The final stage in the long process of analysis in a number of different fields is to produce a synthesis intended to contribute to knowledge in the sense of understanding. These syntheses often take the form of narratives.
Accounts by travellers, including reports on scientific expeditions, are normally arranged in chronological order. Histories too have traditionally been written in a narrative mode, with exceptions such as the famous ‘portraits of an age’ by Burckhardt and Huizinga, mentioned earlier. Indeed, it has been argued that historical narrative produces knowledge by revealing connections and so making experience comprehensible.49 In the nineteenth century, a turn to narrative became visible in a number of disciplines, including the natural sciences. Darwin's Origin of Species (1859), an ‘evolutionary narrative’ that has been compared to Victorian novels, is simply the most famous example of a scientific work that took this form.50 Narratives of experiments, published in specialist journals, remain an important form for contributions to scientific knowledge.
The major collective exception to the rule that historians write narratives is the revolt against the history of events (histoire événementielle) mounted by the group of historians known as the ‘Annales School’ from the 1930s onwards, notably by Fernand Braudel in a study of the Mediterranean world in the age of Philip II that was published in 1949. Braudel preferred a descriptive-analytical mode in the first and second parts of this massive study, concerned respectively with historical geography and social history, although he did narrate events in the third and final part of his book. In Britain, R. H. Tawney, a sympathizer with the Annales historians, argued in his inaugural lecture as Professor of Economic History at the London School of Economics in 1932 that historians should be concerned with society rather than events, and with analysis rather than narrative. It is of course no accident that economic historians were prominent in the revolt against narrative, since many of their analyses and conclusions did not fit easily into this literary form.
All the same, the philosopher Paul Ricoeur claimed that even Braudel offered his readers a kind of narrative, since the three parts of his book were held together by concern with the long term or longue durée. ‘To understand this mediation performed by the long time-span’, wrote Ricoeur, is ‘to recognize the plot-like character of the whole’.51 Economic historians who study trends, not to mention major events such as the Great Crash of 1929, also have more recourse to narrative than Tawney admitted.
By the end of the 1970s, a revival of narrative was under way among academic historians, fuelled by disillusionment with economic determinism.52 However, as is often the case with revivals, there was no simple return to the past. Suspicion of the oversimplifications of what is often called ‘Grand Narrative’, notably ‘the rise of the West’, together with an increasing interest in the experiences of ordinary people, encouraged a number of scholars to write micro-narratives such as the Italian historian Carlo Ginzburg's Il formaggio e i vermi (‘The Cheese and the Worms’, 1976), in which the protagonist of the story is a sixteenth-century miller. At more or less the same time, interest in narrative spread among scholars working in other disciplines. For example, an interest in stories on the part of sociologists and anthropologists was associated with increasing respect for the intelligence and the experience of the people they study, who were no longer treated as mere ‘objects’ of research but as subjects who understood their own culture better than the ‘social scientists’ who viewed it from outside. Geertz's famous study of the Balinese cockfight, for instance, described it as a text, compared it to plays by Shakespeare and novels by Dostoyevsky and concluded that the fight was ‘a Balinese reading of Balinese experience; a story they tell themselves about themselves’.53
The return of narrative was not confined to academic disciplines but also affected everyday life outside the academy. Take the case of the law, for example, especially in the United States, where what is known as the ‘legal storytelling movement’ developed in the 1980s, associated once again with an increasing concern with ordinary people and the ways in which they make sense of their lives. In similar fashion, an increasing interest in stories in medical circles is associated with a greater concern for the patient's point of view, with the idea that, in some respects, people know and understand their own bodies and their own illnesses better than outsiders, even if these outsiders are qualified doctors.
As often happens with revivals, the new narratives differ from the old ones in important respects. Historical narratives, for instance, used to be Olympian, as if the historian was looking down on events from a distance, like the so-called ‘omniscient narrator’ of many nineteenth-century novels. In contrast, the new narratives often present a variety of voices or points of view, following the model of Rashomon (1950), the famous film by the Japanese director Akira Kurosawa (based on two short stories by a writer of the early twentieth century, Ryunosuke Akutagawa), recounting contradictory versions of an incident leading to a murder. Whatever the intentions of Akutagawa or Kurosawa may have been, current concern among anthropologists and sociologists with what they call the ‘Rashomon Effect’ is to use stories as a means to reconstruct the attitudes and values of the tellers, moving from a conflict of narratives to a narrative of conflicts.54
The rise of newspapers was not only an encouragement to scepticism but also a watershed in the dissemination of knowledge, the third stage in our four-stage sequence. Dissemination is sometimes described, especially in the case of technology, as ‘transfer’, emphasizing movement in one direction. Other scholars prefer to speak of the ‘circulation’ of knowledge, a term implying movement in more than one direction, an assumption that is often more realistic. Interest in the movement or transit of knowledges has increased sharply in the last few years, as a number of important studies bear witness.55 Whether we call it transfer or circulation, we need of course to remember that knowledge received is not the same as knowledge sent, owing to misunderstandings (a relatively neglected part of intellectual history) as well as to deliberate adaptations or cultural translations.
It is easy, all too easy, to tell the story of dissemination in ‘triumphalist’ fashion, as a story of more and more knowledge reaching more and more people thanks to increasingly efficient methods of communication – writing, print, the radio, television and the internet. The general problems raised by triumphalist accounts will be discussed in Chapter 4. Here it may be sufficient to make two points, one about dissemination in general, and the second about forms of communication.
In the first place, there is a long tradition of critics of the dissemination or ‘democratization’ of knowledge. In early modern Europe, for instance, the clergy were uncomfortable with the idea of the laity reading the Bible for themselves; masters of specialist knowledge, from goldsmiths to physicians, objected to the publication of their secrets; while rulers and their advisers saw the spread of knowledge as a threat to the hierarchical social order.
In the second place, even scholars have expressed anxiety about what we now call ‘information overload’. In the fourteenth century, the historian Ibn Khaldun was already complaining that ‘the great number of works available’ was ‘among the things that are harmful to the human quest for knowledge’. Anxiety of this kind became more widespread in the West a century or so after Gutenberg printed his first book.56
In any case, despite the importance of new forms of communication, the most effective means of dissemination remains the oldest one, encounters with people. It has been argued that ‘the transfer of really valuable knowledge from country to country or from institution to institution cannot be easily achieved by the transport of letters, journals and books: it necessitates the physical movement of human beings’. In short, ‘ideas move around inside people’.57
Movements of people include those of students and teachers. The history of education is a long-established part of what is now known as the history of knowledge, with many studies of individual schools and universities and some important overviews.58 As a case-study we might take education in the traditional Islamic world, as presented in books about medieval Cairo and Damascus written by the American historians Jonathan Berkey and Michael Chamberlain. In these cities, the system of higher education was essentially an informal one. The madrasas, schools attached to mosques, offered students stipends, accommodation and lectures, but the most important path to knowledge was to become the disciple of a master or shaykh. According to a twelfth-century treatise, it was ‘essential that the pupils sit in a semi-circle at a certain distance from the teacher’, as a mark of respect.
In this informal system there was no fixed curriculum. The equivalent of a Western degree, the ijaza, a licence to teach, was conferred by the shaykh following an oral examination. ‘Students built their careers on the reputation of their teachers.’ The system was also an open one in the sense that it gave women the opportunity to learn from other women in sessions in private houses. Reading (especially the Koran), copying and writing books were also important activities, but students were supposed to read aloud in a group. Private study was frowned on and books were considered an inferior method of transmitting knowledge. The fourteenth-century jurist Ibn Jama’a declared that ‘One of the greatest calamities is taking texts as shaykhs’ and that ‘knowledge is not gained from books’.59
Close relations between masters and ‘disciples’ can be found in Western culture too, to this day.60 However, they were and are part of a larger system for transmitting knowledge that originated in the Middle Ages: the university. Medieval universities relied heavily on the spoken word, in lectures and also in the formal debates known as ‘disputations’ in which students practised and developed their logical skills. By contrast with the Islamic world of learning, however, writing also had an important place in the system. Lecturers, as their name implies, read texts aloud to students, and the students in their turn wrote down what they heard. Reading and writing gradually became more important at the expense of listening and speaking. All the same, oral communication remains important in Western academic culture even today, as Françoise Waquet has reminded us in a history of lectures, seminars and conferences.61
Oral transmission may be described as ‘performance’. Virtuoso performers already existed at the University of Paris in the twelfth century, notably Peter Abelard, who attracted many students to his lectures and – perhaps equally important for him – away from the lectures of his competitors. In the sixteenth century, the Swiss physician Paracelsus drew attention to himself at the University of Basel by a public burning of the traditional medical treatises that he rejected. An eighteenth-century German professor, Johann Burkhard Mencke, criticized professors who played to the gallery in his De Charlataneria Eruditorum (‘The Charlatanry of the Learned’, 1715), a book that went through several editions in Latin and translation as well as inspiring imitations. Mencke compared these academics to the charlatans who performed on stages in the street in order to sell their medicines, pointing out the tricks to which scholars resorted to gain applause – wearing expensive or eccentric clothes or lecturing with ‘vehement expressions, constant changes of countenance, rude and wandering eyes, whirling arms, shifting feet, suggestive movements of the hips and other parts of the body’.
Had he lived a little longer, Mencke would have been able to make significant additions to his list. From the later eighteenth century onwards, experiments were regularly presented in public as spectacles, a kind of theatre with the lecturer as the showman. Chemistry lent itself to showmanship of this kind and so did electricity, the words of the lecturer being accompanied by flashes and bangs. At Oxford, the eccentric geologist-palaeontologist William Buckland enlivened his presentations by imitating the movements of dinosaurs. In London, John Henry Pepper, a nineteenth-century lecturer on science, was a famous deviser of what would now be called ‘special effects’, such as making ghosts appear on the stage. Today's television dons are the heirs of a long tradition.62
How to test the knowledge that students have acquired is another old problem. Asking them to perform their knowledge in public is an obvious solution, taking different forms: participating in debates, delivering speeches or answering a series of questions. An alternative method is the one that most of us now take for granted. It is the written examination, invented in China and studied by sinologists such as John Chaffee and Benjamin Elman.63
For about a thousand years, from the early Song to the late Qing dynasty, China was administered by the group known in the West as ‘mandarins’, scholar-officials who owed their appointment and so their social status to success in written examinations. The examinations were central to the traditional Chinese knowledge order, and to the social order as well. Records of dreams about examinations suggest that they were central to Chinese culture more generally. The examinations were usually held every three years at the provincial level, and then, for successful candidates, at the national level. They lasted three days. The candidates sat in individual cubicles in the examination hall, writing essays – commentaries on the Confucian classics, such as the Great Learning or the Doctrine of the Mean, which were regarded as sources of wisdom and virtue, but also essays on questions of policy, questions of law, and sometimes on questions of astronomy (over this long period, the suitability of different fields of knowledge for these examinations was debated, and changes were made). The examiners did not know the identity of the candidates whom they graded.
Long years of study were required for success in these examinations, although the survival of printed guides and model essays suggests that many candidates attempted to take short cuts. Some of these printed guides were very small, for smuggling into the examination hall, hidden in one's robes. Despite the possibility of successful cheating, the system was probably the most efficient system of testing knowledge in the pre-industrial world, so that it is no surprise to find that it was imitated in the West, first in eighteenth-century Prussia and then in France, England and elsewhere. In Oxford and Cambridge, for instance, written examinations were introduced in the early nineteenth century, replacing the oral system known as examination viva voce, ‘by the living voice’. In the mid nineteenth century, written examinations on the Chinese model were introduced to test candidates for the British civil service. This may be the reason that senior civil servants are still described as ‘mandarins’.64
Knowledge has often been transmitted by missionaries, whether they were Buddhist, Christian or Muslim. The story of the spread of Buddhism, for instance, is a story of long-distance travel, from India to Sri Lanka, Burma, Thailand, Laos, Cambodia and China and from China to Korea and Japan. The main agents of dissemination were monks. For example, Jianzhen was a Buddhist priest from Tang-dynasty China who spent the last ten years of his life in Japan, where (known as Ganjin) he founded a school and a temple and introduced the Japanese aristocracy to the doctrines of the Buddha. Ganjin's story, told by one of his disciples in the Record of the Eastward Journey of the Great Monk of Tang, was retold by the Japanese novelist Yasushi Inoué in The Roof Tile of Tempyō (1957) from a Japanese point of view, focusing on the official mission to China that set out in the year 732, taking monks to study Buddhism there. As described in the novel, the mission included four young monks who persuaded Ganjin to come to Japan.
Ganjin also introduced the Japanese to secular elements of Chinese culture. Missionaries often disseminate secular knowledge in this way. Indeed, in the nineteenth century in particular, Christian missionaries in Asia, Africa and elsewhere often considered it part of their task to introduce the peoples among whom they worked to Western culture and especially to Western science. For example, John Fryer, a Protestant missionary, founded the Chinese Scientific Magazine (1876) and set up a polytechnic in Shanghai, while another missionary, Alexander Williamson, founded the Society for the Diffusion of Christian and General Knowledge among the Chinese, a society that published scientific books as well as religious ones. Missionaries often founded colleges that disseminated Western learning: the Syrian Protestant College (1866), for instance, St Stephen's College Delhi (1881), Canton Christian College (1888) and so on.
Conversely, missionaries studied the languages and cultures of the regions in which they worked, and when they returned home they often spread the knowledge of those regions. In this respect missionaries have sometimes been compared to anthropologists. Indeed, a few of them ended their careers as anthropologists. A famous example is that of Maurice Leenhardt, a French Protestant pastor in New Caledonia, which had become a French possession in 1853. After working there from 1902 to 1927, Leenhardt returned to France, where he taught at the École Pratique des Hautes Etudes and elsewhere as an expert on Melanesia.65
Missionaries were not alone in travelling to disseminate knowledges. There were secular missions as well. On one side, some Western countries, notably France, sent missions abroad, such as the group of young scholars (including Fernand Braudel and Claude Lévi-Strauss) who were sent to the University of São Paulo in the 1930s. On the other side, governments that considered their country to be backward sent knowledge-finding missions to more ‘progressive’ countries. For example, the Egyptian government sent a group of students to France in 1826, including the young Rifa’a al-Tahtawi, who became a leading Islamic modernizer.66 Again, in 1862 the Japanese government sent a mission to Europe to learn about Western civilization (it may be suspected that the novelist Inoué was thinking of this event when he described the mission to China sent in 732).
Besides missions, informal encounters led to the transmission of knowledges. One famous series of examples concerns the Dutch East India Company, the VOC. Some Dutchmen, Germans and Swedes in the service of the Company took the opportunity to study both nature and culture in Japan and South East Asia, and to write books that spread that knowledge in Europe. Something similar happened in India in the late eighteenth and early nineteenth centuries, the age when the British East India Company effectively ruled much of the country. Some administrators, judges, physicians and surgeons in the service of the Company studied the history, languages and local knowledges of the places in which they served, learning from local scholars and spreading Western knowledges in return, each side adapting what they learned to their own purposes. Appropriately enough, the history of these encounters and exchanges has been written jointly by Western and by Indian scholars.67
The most famous example of this kind of intellectual exchange is surely that of the Welsh lawyer William Jones, who arrived in Calcutta in 1783, founded the Asiatic Society of Bengal, conversed regularly with local scholars (the pandits), argued that Sanskrit, Greek and Latin derived from a common source, and introduced Europe to the Sanskrit drama Sakuntala through his English translation. In similar fashion, the identification of the ‘Dravidian’ family of languages in South India was the result of the ‘conversation’ between British and Indian scholars.68 In the field of medicine, a number of physicians and surgeons in the service of the Company, many of them Scottish, exchanged knowledge with local healers from the Ayurvedic and other traditions. On the other side, in nineteenth-century Bengal, societies were founded by Indian elites who were enthusiastic for Western science: the Society for the Acquisition of General Knowledge (1838), for instance, or the Indian Society for the Cultivation of Science (1876).69
The significance of these exchanges of knowledge remains a matter of debate. Inspired by Michel Foucault and Edward Said, whose work was discussed in Chapter 2, some scholars emphasize the conflict between knowledges, notoriously illustrated by the disqualification of traditional Indian knowledge by Thomas Macaulay, who served on the Supreme Council in India from 1834 to 1838 and claimed in his ‘Minute on Indian Education’ that ‘a single shelf of a good European library was worth the whole native literature of India and Arabia’. These scholars note the political uses of knowledge in the service of either British imperialism or, later, Indian nationalism, as in the case of Jadu Nath Ganguli's National System of Medicine in India (1911), arguing that India needed a ‘system of medicine on national lines’.
In contrast, other scholars stress the harmony of different knowledge orders and the fascination exerted by foreign knowledge, in the case of both Europeans who discovered Indian traditions and Indians who discovered Western science. As Thomas Trautmann argues, studies of the ‘formation of colonial knowledge’ ‘need to consider the kinds of knowledge that were brought to the colonial situation, from both parties to the colonial relation’.70 The problem, then, is to assess the relative importance of Western and Indian contributions, as well as to try to navigate between the two extremes of a political approach to knowledge that risks becoming cynical and reductionist and a non-political approach that runs the danger of naivety.
In the history of knowledge as in history in general, unintended consequences have often been more important than intended ones. More influential than missions in either direction were the experiences of expatriates such as Jones or the travels of scholars who would have preferred not to leave home but were forced to do so, like the Protestants who were expelled from France in 1685 and settled in London, Berlin and the Dutch Republic, or the Jewish scholars who left Germany in 1933 or Austria in 1938 for Britain, the United States and elsewhere.71 The problems of the broken lives of these displaced persons are obvious enough. However, some individuals at least were able to gain a living as mediators between their home culture and the culture that received them. French Protestants in England wrote on English history or translated English books, including the philosophy of John Locke, into French, while German Jewish scholars in the United States translated Max Weber's sociology into English. Again, some scholars who left Russia after 1917 spent the remainder of their working lives explaining Russian culture to the French, British and Americans.
For their hosts, the positive side to this displacement is easier to track, especially when there were enough refugees in a particular discipline and a particular place to constitute a critical mass, introducing psychoanalysis into the United States, for example, or art history into England.72 As in other cases of the migration of skilled people, like the Protestant silk-weavers who left France along with the scholars, one nation was enriched by the losses of another.
Objects such as rocks, plants, stuffed animals, paintings and statues disseminated knowledge as they moved from one part of the world to another and entered collections, for study as well as for display. In early modern Europe, these collections were private, owned by rulers such as the Medici in Florence or by scholars such as the Danish physician Ole Worm. They included works of art alongside works of nature, European coins and medals, Mexican feather-work or Brazilian blowpipes together with shells or crocodiles. Since the French Revolution, public collections have become the dominant form, displayed in museums and galleries and open to visitors, often including schoolchildren. Viewing these objects has become part of many people's education. Indeed, some museums were established for educational purposes, like the South Kensington Museum in London (founded in 1857 in order to show artisans what models they might follow), or the Science Museum that split off from it in 1885.
The transport of texts also disseminated knowledge. Japanese monks returned from their time in China with thousands of scrolls of Buddhist texts, many of them copied by themselves. The rise of paper (much cheaper than parchment) in China, the Islamic world and finally in Europe contributed to the spread of knowledge in the age of manuscript. The invention of printing with moveable type made books more easily available and at a lower price than before. Books were already travelling long distances in the sixteenth century, from Spain, for instance, to Mexico or Peru. So were letters (although they might take a long time to arrive at their destinations), thus creating long-distance networks of knowledge, between Jesuit scholars in Rome, Goa and Beijing for instance, or, more generally, extending the frontiers of the so-called ‘Republic of Letters’.73
The Republic of Letters may be regarded as an imagined community, a college without walls or a network of networks. Most studies of this community begin in the fifteenth century, when the phrase respublica litterarum was coined, and come to an end around the year 1800, when the unity of the Republic was threatened first by nationalism and then by intellectual specialization.
However, a case can be made for extending its history to our own time, distinguishing between four main periods on the basis of changes in modes of communication.74 The first age, from about 1500 to 1800, was that of the horse-drawn republic, when books, letters and scholars themselves all needed horse-power in order to travel on land, and sailing-ships to cross the seas. The second age, 1800–1950, was the age of what we might call the ‘steam republic’. The steam press drove down the price of books, while the railways and the steamships allowed international conferences to become regular events where scholars could exchange information.
In the third age, more or less 1950–90, the growing ease of air travel encouraged the proliferation of small international conferences on specific themes. Today we are living in a fourth age, that of the ‘digital republic’. The Republic of Letters was always a virtual or imagined community, but the acceleration of communication, thanks to e-mail, e-conferencing, and collective e-research, has made its members more conscious of interaction at a distance than they used to be and given a new meaning to the old idea of an ‘invisible college’.
Thanks to changes in communication, the Republic of Letters, originally confined to Western Europe, was gradually extended, to the Balkans, Russia and to some cities in the Americas, North and South, and later to other parts of the world, leading to the movement of knowledges on a global scale. Already in the eighteenth century, thanks to faster sailing-ships, the former students, known as the ‘apostles’ of the Swedish botanist Carl Linnaeus, were able to send him information from the Middle East, Africa, China, North and South America and (in the case of Solander, who sailed with Banks and Cook) Australia.
Offering a simple three-stage model of the diffusion of scientific knowledge, George Basalla once argued that, as in the case of international trade, the periphery exported raw materials, such as the information gathered by Western scientific expeditions to other parts of the world, while the centre (in this case the ‘centres of calculation’ in the West) exported finished products. Only at a later third stage did the production of scientific knowledge spread outside the centres.75 Recent studies have continued to emphasize the links between Western science and Western imperialism, noting for example that ‘The emergence of centres of science, such as museums, gardens, asylums and universities depended on the passage of data, material culture and people across imperial networks.’76
Basalla was writing in the 1960s and since that time his model has often been criticized, especially for three reasons. In the first place, knowledge as well as information moves from the periphery to the centre as well as vice versa. For example, the sixteenth-century Portuguese physician García de Orta, who published a famous study of Indian medicinal herbs in 1563, drew on the local knowledge of Indian healers.77 In the second place, exotic knowledge is domesticated on both sides of the exchange. In other words, it is translated from one language to another and subjected to ‘cultural translation’ in the sense of being adapted to a new environment, producing what has been called hybrid or ‘pidgin-knowledge’. Hence the need felt by scholars today to go ‘beyond diffusionism’.78
In the third place, thinking of knowledges in the plural, it is clear that different knowledges had their own centres. Basalla's model of the spread of Western science naturally privileged Western centres, but a model of the spread of Indian or Chinese knowledges would privilege Indian or Chinese centres for equally good reasons. In any case, the model could and should be refined, as suggested in Chapter 2, by introducing the idea of the ‘semi-periphery’, including colonial cities such as sixteenth-century Goa, where García de Orta lived, conversed with Indian healers and wrote his book, or eighteenth-century Bombay or Calcutta, where British doctors and surgeons in the service of the East India Company learned from their Indian colleagues as well as teaching them.79 Again, the Japanese port city of Nagasaki, including the small island of Deshima where Western traders were confined from the seventeenth to the nineteenth centuries, became a centre for both the Western discovery of Japan and the Japanese discovery of Europe. As the nineteenth-century journalist Fujita Mokichi remarked, ‘Nagasaki was not simply a place for trade in goods with the Dutch, it was also a new port for trade in knowledge.’80
Needless to say, these exchanges of knowledges required the participation of interpreters and translators between languages. What is sometimes called ‘cultural translation’ also took place, with both sides adapting what they had learned to their own needs and circumstances. What was often, though not always, ‘lost in translation’ was what turned that information into knowledge: local classification systems, for instance. The loss is worth bearing in mind when we consider the importance of translation between languages for the spread of knowledge. As has already been noted, the spread of Buddhism from India to China and Japan involved translation between three very different languages: Sanskrit, Chinese and Japanese. Again, much ancient Greek knowledge, especially knowledge of the natural world, reached Western Europe via the Arabs. Translations of Aristotle, for instance, were made from Greek into Arabic and later from Arabic into Latin and sometimes from Latin into French and other vernaculars, so that a text by ‘Aristotle’ might be the translation of a translation of a translation of a text, many times transcribed, of what Aristotle himself had dictated.
In any case, Aristotle's world of small city-republics was very different from the medieval Europe dominated by the Church and by kings, so that the ideas put forward in his Politics, for instance, were often misunderstood. Indeed, one might argue that these ideas needed to be misunderstood in order to be relevant to the fourteenth- or fifteenth-century world in which they were read. In other words, these ideas passed through a process of cultural translation as well as a translation from one language to another.81 The process of the cultural translation of knowledge is even clearer in the case of some non-verbal examples, like the maps made by the Inuit at the moment of their encounter with Europeans in the late eighteenth century, thus documenting ‘a search for cross-cultural equivalences’.82 As discussions of this example suggest, historians have moved from dismissing non-Western maps as inaccurate to viewing them as evidence of different understandings of space.
The dissemination of knowledge takes place not only laterally, spreading across space, but also vertically, moving from scientists, scholars and other experts to the ‘lay’ public. Movements for the popularization of knowledge, especially certain kinds of knowledge, have a long history. In Britain, for instance, the Society for the Promotion of Christian Knowledge was founded in 1698.
The dissemination of knowledge to the laity, including women and children, has been the subject of a number of important recent studies. These studies focus on Germany, France and especially on Victorian England, where the phrase ‘popularizer of science’ was in use by 1848.83 One mode of dissemination was the public lecture, which sometimes drew crowds, as in the case of Alexander von Humboldt's lectures on the cosmos, given in Berlin (1827–8), or Max Müller's lectures on language at the Royal Institution in London (1861). Another was the museum. A third mode of dissemination was of course print. From the sixteenth century onwards, books on medicine with titles such as ‘Medicine for the Common Man’ or ‘Treasury of Health’ were published in vernacular languages, allowing readers to avoid the expense of calling in a physician by practising ‘do it yourself’ healing. Some of them went through many editions.
Books disseminating other kinds of knowledge might also become best-sellers. John Hawkesworth, the author of an account of Cook's first voyage, commissioned by the Admiralty and published in 1773, is supposed to have received an advance of six thousand pounds from the publisher, a huge amount for the time, suggesting that the book was expected to sell very well. The anonymous Vestiges of the Natural History of Creation (1844, actually written by Robert Chambers) appealed to readers both high and low in the social scale.84 A similar point may be made about Macaulay's History of England (1848). Three thousand copies of the first volume were sold in less than a fortnight, while a group of working men from near Manchester wrote to the author to thank him for writing the book, which had been read aloud to them on Wednesday evenings. Magazines also helped to spread knowledge: The Scientific American (founded in 1845), for instance, the Chinese Scientific Magazine (founded by an English missionary, 1876) or the National Geographic Magazine (1888).
In different periods, from antiquity onwards, and in different cultures (especially European, Islamic and East Asian) knowledge has been disseminated by means of what we call ‘encyclopaedias’, volumes great or small that claim to contain within their pages, if not all knowledge, then at least the essentials. Encyclopaedias are the best-known of the many types of book, including specialist works such as dictionaries or guides to practices such as cooking or horsemanship, that are designed not to be read from cover to cover but to be consulted whenever needed. From the sixteenth century onwards, reference books in general and encyclopaedias in particular proliferated both in the West and in East Asia. By the mid eighteenth century, there were so many of them that an enterprising Frenchman produced a Dictionary of Dictionaries.85
A recent study of the consequences of the popularization of knowledge is Mary Elizabeth Berry's Japan in Print, focusing on the early Tokugawa period, better known to Westerners as the seventeenth century, when the rise of cities, especially Kyoto, Osaka and Edo (now Tokyo), was accompanied by the rise of printed books targeting an increasingly wide public – women as well as men, farmers as well as artisans and shopkeepers. Many of the books printed for this public or for part of it provided information. Guides to Edo and Kyoto, for instance, told visitors what to see, from temples to tea-houses, and where to go in search of different goods and services. Instruction manuals gave advice on how to farm, to write letters or poems, to perform the tea ceremony and to bring up children. Cheap encyclopaedias such as Banmin chohoki (‘Everybody's Treasury’, 1692) made their appearance. This ‘library of public information’, as Berry calls it, encouraged a ‘revolution in knowledge’ and led to the rise of a public sphere, an arena for the discussion of public issues, while the publication of maps of Japan helped widen imagined communities from regions to the whole country.86
A discussion of the dissemination of knowledge needs to take account of the complementary opposite theme, the obstacles to this dissemination. There were economic obstacles, for instance: the cost of books, including that of transporting them over long distances, not to mention the so-called ‘tax on knowledge’, as the stamp duty on British newspapers came to be known (imposed in 1712, the stamp duty was finally abolished in 1855). Deliberate censorship of communication also has a long history. The spread of printed books in particular has been viewed unfavourably by many authorities, religious and secular. In Japan, in the age of the ‘library of public information’, books were censored by the government. In China, the tradition of censorship goes back as far as the third century BCE but is best known from the time of the emperor Qianlong in the late eighteenth century, when edicts against ‘seditious books’ and ‘heterodox opinions’ were issued in response to the ‘flood’ of printed texts.87
In early modern Europe, the censorship of printed books was a major preoccupation of the authorities in both states and churches, Protestant and Catholic alike, whether their anxieties concerned heresy, sedition, or immorality. The most famous and widespread censorship system of the period was that of the Catholic Church, associated with the ‘Index of Prohibited Books’. This Index was a printed catalogue of the books that the faithful were forbidden to read, an antidote to the poisons of print and Protestantism. The important indices were those issued by papal authority and binding on the whole Catholic Church, from the mid sixteenth century to the mid twentieth century.88 Bonfires of books could often be seen in the Catholic world: bonfires of Muslim books in sixteenth-century Spain, or of Protestant books in Antwerp, Paris, Florence, Venice and other cities. Protestant censorship was less effective than Catholic censorship not because the Protestants were more tolerant but because they were more divided, fragmented into Lutherans, Calvinists and so on.
Government censorship of books before publication came to an end in England in 1695, in France in 1789, in Prussia in 1850 and in Russia in 1905. All the same, attempts to control what was published did not disappear. Book-burning continued: a notorious example is a series of bonfires in different German cities of books by Jewish, Communist or foreign authors, events organized by the German Students' Union in 1933, soon after Hitler came to power. Today, authoritarian regimes in Iran, Russia and elsewhere ban books, control television programmes and block access to certain websites on the Internet, as in the case of the so-called ‘golden shield’ or ‘great firewall of China’, which was launched in the year 2000.
Just as historians need to study ignorance as the complementary opposite of knowledge, so they need to study concealment as the opposite of diffusion. Rulers have long attempted to preserve the arcana imperii, ‘secrets of state’. The subjects of imperial regimes may try to conceal local knowledge from their new masters. Eighteenth-century Hindu scholars, for example, tried to prevent the British from learning Sanskrit.89 Secret societies attempt to restrict certain knowledges to the circle of the initiated. Specialists, from smiths to masters of ceremonies, do the same, in order to preserve their intellectual capital. It is no accident that the English word ‘mystery’ used to refer to crafts as well as secrets.
As may be imagined, however, serious problems of method arise for historians working in this domain. Failed concealment is obviously easier to study than the success that hides its traces. All the same, the failures offer insights into changing strategies and methods of concealment in the long conflict between the defenders, in other words the individuals, groups and institutions that try to keep certain knowledges secret, and their opponents, who try to gain access either for themselves or for a wider public, often assisted by ‘moles’ inside the system. As in the case of wars, new methods of defence respond to new methods of attack, apparently ad infinitum.
The different stages in this long conflict between concealment and discovery are more visible than usual in the history of codes and ciphers. In the ninth century, the Arab philosopher al-Kindi wrote a guide to decipherment. In the fifteenth century, polyalphabetic ciphers were invented to prevent decipherment by an analysis of the frequency with which certain letters recurred. In the nineteenth century, a more sophisticated mathematical analysis led to the breaking of polyalphabetic ciphers. The twentieth century was the age of cipher machines such as the famous Enigma, with a code that was broken by the joint efforts of Polish cryptographers and a British team located in Bletchley Park.90 In the age of the Internet, we see both new forms of attack such as automated intelligence gathering, and new forms of defence such as automated security systems.
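The contrast between the two kinds of cipher can be made concrete in a short illustrative sketch (the code is a modern reconstruction, not part of the historical record). A monoalphabetic cipher of the kind al-Kindi could break shifts every letter by the same amount, so the ciphertext preserves the plaintext's letter frequencies; a polyalphabetic cipher of the Vigenère type varies the shift with position according to a keyword, flattening the frequency profile:

```python
from collections import Counter

def caesar(text, shift):
    """Monoalphabetic: one fixed shift for every letter, so the
    ciphertext keeps the plaintext's letter-frequency fingerprint."""
    return ''.join(chr((ord(c) - 65 + shift) % 26 + 65) for c in text)

def vigenere(text, key):
    """Polyalphabetic: the shift cycles through the keyword, so
    repeated plaintext letters map to different ciphertext letters."""
    return ''.join(chr((ord(c) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
                   for i, c in enumerate(text))

plain = "ATTACKATDAWN"
mono = caesar(plain, 3)        # 'DWWDFNDWGDZQ'
poly = vigenere(plain, "LEMON")  # 'LXFOPVEFRNHR'

# The four A's in the plaintext all become D under the Caesar shift,
# an obvious target for frequency analysis; under the keyword they
# become L, O, E and N respectively.
print(Counter(mono).most_common(1))  # [('D', 4)]
print(Counter(poly).most_common(1))
```

Frequency analysis of the monoalphabetic ciphertext immediately exposes the shifted ‘A’, whereas the polyalphabetic version distributes it across several letters, which is precisely why breaking such ciphers had to wait for the more sophisticated mathematical analysis of the nineteenth century.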
Another example of this conflict concerns the collection of information by means of surveillance, secret information that is ‘leaked’ to the public from time to time. On one side, we see the efforts of governments, or more recently of large corporations, to gather information and keep it to themselves. In early modern Venice, for instance, ambassadors resident in other states used spies and other informers to collect sensitive information, which went into their confidential reports. Today, the work of spies (known in the jargon as HUMINT, ‘human intelligence’) has been supplemented if not replaced by TECHINT, ‘technical intelligence’. Take the case of the NSA, the National Security Agency of the USA, which collects and even analyses data by means of programmes such as XKeyscore, which searches for information on the Internet (including private e-mails). In industry as well as in politics, espionage has moved from the infiltration of organizations by individuals to the hacking of computers from a distance.
On the other side, historians have discovered that the confidential reports of Venetian ambassadors were often copied and the copies sold in Rome and elsewhere. Filippo De Vivo tells the story of an early seventeenth-century Venetian diplomat who had been posted to England and was shocked to discover, in the Bodleian Library in Oxford, ‘a large volume in manuscript’ that contained fourteen of these reports.91 The recent history of the revelation of official secrets includes a number of episodes in which individuals inside the system supplied information in electronic form; Bradley Manning (as he then was) sent US Army documents concerning the war in Iraq to Wikileaks in 2010, while Edward Snowden sent copies of NSA documents to the Guardian and the Washington Post in 2013. The media of communication change, and so does the amount of information in circulation, but the old conflict between secrecy and transparency continues.
Attempts to keep information secret and attempts to reveal it (whether by moles, journalists or hackers) both raise the question of access, since secrecy presupposes insiders who know the secret as well as outsiders who are excluded. Access to knowledge has long been unequal, especially access to knowledge-creating and knowledge-storing institutions such as universities, archives, libraries and museums. Attempts to widen this access also have a long history. Five hundred years ago, print became a major instrument in these attempts. However, print could not widen access to knowledge by itself. Two obstacles had to be overcome: illiteracy and Latin. Hence the movements, one might even say the campaigns, to spread literacy and to make knowledges available in vernacular languages.
Martin Luther, a fellow-countryman of Gutenberg's, was a leader and has become the symbol of the collective attempt to make religious knowledge, especially knowledge of the Bible, available in the vernacular, an attempt that was central to the movement that we now call the Reformation. In the case of medicine, the role of Luther was played by another German, Paracelsus, who insisted on both lecturing and writing in the vernacular.
In the long term, the movement for what we might call the vernacularization of knowledges was impossible to resist. Until the first half of the seventeenth century, printed encyclopaedias usually appeared in Latin, but after that time they were replaced by encyclopaedias in modern languages, among them the Scottish Encyclopaedia Britannica and the famous French Encyclopédie. The Encyclopédie made another important and controversial contribution to making knowledges more common. It described the practices of many types of artisan in rich detail, with many illustrations. In this way it introduced to a wider public a number of knowledges that had previously been kept from the uninitiated. Making private knowledges public in this way was part of Diderot's campaign against the guild system. He believed that this ‘publicization’ of craft knowledge would make the economy prosper and benefit humanity.92
The ideal of common knowledge took institutional form in nineteenth-century societies such as the British Society for the Diffusion of Useful Knowledge (founded 1826, following similar initiatives in Germany and the USA); in educational institutions for adults such as Mechanics' Institutes, as they used to be called in Britain, or ‘People's High Schools’ (Folkehøjskole), as they were known in Denmark; and also in popular encyclopaedias such as the one published in Britain in 1860 by the Chambers brothers, subtitled ‘A Dictionary of Universal Knowledge for the People’. The popular newspaper, aimed at a wide audience, is another nineteenth-century invention that spread rapidly in the USA, Britain, France, Germany and elsewhere. It depended on the earlier spread of literacy, thanks to universal or almost universal education (in England, from 1870 onwards, at almost exactly the same time as in Japan, where a campaign for modernization was just beginning). It might be argued that parliamentary democracy depended on that invention, which gave ordinary voters the information they needed to make a political choice. As the Liberal Chancellor of the Exchequer, Robert Lowe, remarked in his sardonic manner at the time that the right to vote was extended in England in 1867, ‘We must educate our masters.’
In the twentieth century, at least three revolutions in technology widened access to some knowledges and indeed made the dream of a common knowledge seem attainable at last: radio, television and the Internet. The globalization of knowledges was aided by the spread of English as an international medium of communication, a kind of new Latin, as well as by the spread of images that need no translation. The second half of the twentieth century was also the great age of the democratization of knowledges, thanks in part to radio lectures, televised science, open universities and online encyclopaedias. In the political domain, there were movements for increased freedom of information or transparency in government. Glasnost was promoted by Mikhail Gorbachev soon after he came to power in 1985. This meant reducing ‘the number of officially forbidden topics’ in the press.93 Elsewhere freedom of information acts have gone much further and given the public access to official documents.
So far the examples cited concern knowledges that have become increasingly common. All the same, it is necessary to avoid the assumption of the inevitable spread of knowledges, whether geographically or socially. It is more realistic to view the history of knowledges as a kind of tug of war, a conflict between the forces for widening and the forces for narrowing access. To write in a vernacular language was to widen access to many knowledges in one way, by making them available to social groups that had not learned Latin. However, writing in a vernacular narrowed access in another way, access for foreigners. In Luther's time, Erasmus wrote his books in Latin in order to reach a European audience from England to Poland. His audience was wide geographically but narrow socially, while Luther's was the reverse. Again, paradoxically enough, globalization restricts access to knowledges as well as widening it. It narrows access to zero by destroying some knowledges altogether. Many local knowledges are in crisis, a crisis that may indeed be terminal. To take one of the most obvious examples, many of the world's languages, of which there are about six thousand currently spoken, are in danger of extinction by the end of the twenty-first century, if not before.
In any case, the conquest of access has always been under threat. There were and are three major threats. The first and perhaps the least obvious threat comes from intellectual specialization. Collectively we know much more than ever before but individually we all find it harder and harder to see the big picture. The second threat to common knowledge comes from political regimes. The threat takes two main forms: the negative form of censorship and the positive form of secret or restricted knowledges, generally associated with authoritarian states but in fact virtually ubiquitous, even if in different degrees. The third threat to common knowledge is the trend towards privatization. The idea of the ownership of knowledge is not an invention of capitalism, but the privatization of knowledges has been much extended by capitalists via patents and other forms of intellectual property. For example, pharmaceutical companies have tried to patent traditional indigenous knowledges such as the antiseptic properties of the Indian spice turmeric.
Stewart Brand, best known as the author of the Whole Earth Catalog, coined the phrase ‘information wants to be free’. More cautiously, the economist Kenneth Arrow remarked that ‘it is difficult to make information into property’.94 All the same, some governments and some companies have succeeded, at least temporarily, in this task.
‘Useful knowledge’ has long been a widespread slogan, the focus of organizations and campaigns from the middle of the eighteenth century onwards. In Erfurt, the ‘Academy of Useful Knowledges’ (Akademie gemeinnütziger Wissenschaften) was founded in 1754. In Philadelphia, the American Philosophical Society for the Promotion of Useful Knowledge dates from 1766 and was followed by similar societies in Trenton, New York, Lexington and elsewhere. In Britain, the Society for the Diffusion of Useful Knowledge was founded in 1826. In France, the Journal des Connaissances utiles was established in 1832.
It is of course necessary to ask: useful to whom, or for what? Different knowledges have obviously been employed for many purposes. In early modern Europe, for instance, the study of classical rhetoric was of practical use in the domains of law and politics. Empires could hardly survive without access to detailed knowledge of the terrain and its resources. Geographical knowledge has also been deployed in warfare. Hence the use of topographical engineers in Napoleon's armies, for instance, surveying and mapping Austria, Italy and Russia. Later in the nineteenth century it was the turn of the Prussians, whose victory in the war with France in 1870–1 was described by a geographer as ‘a war fought as much by maps as by weapons’. Since the Gulf War (1990–1), armies have been making use of Geographical Information Systems.
In business as in war, it is as important to discover the plans and the technology of one's competitors as to keep one's own plans and technology secret. In short, knowledge is often employed in the service of control, a point emphasized by Foucault in his famous statement, quoted earlier in this book, that ‘knowledge constantly induces effects of power’.
Foucault's point may be illustrated from the history of the Catholic Church at the time of the so-called ‘Counter-Reformation’ of the sixteenth and seventeenth centuries. The spread of Protestantism administered a kind of wake-up call to the authorities, to which they responded in various ways. In the first place, the Church made greater efforts than before to spread religious knowledge among ordinary people by means of sermons and also, a novelty, by means of catechism classes. The question-and-answer format of the catechism made it easier to test religious knowledge. In the second place, there were systematic attempts on the part of bishops to acquire information about religious practice. To ensure that no one failed to go to confession, censuses were made in each diocese. Bishops were also supposed to conduct ‘visitations’, in other words inspections of each parish, ranging from the physical state of the parish church and its fittings to the behaviour and beliefs of the laity (whether there were any heretics, how many people had been excommunicated or were living with concubines). Standardized questionnaires were issued to allow the information from different sources to be compared.95
In Spain, Italy, Portugal and the Catholic parts of the New World, the efforts of the bishops were seconded by those of the Inquisition, which investigated both belief and behaviour and accumulated over the centuries an impressive ‘data bank’ that is now regularly raided by historians for their own purposes. Among the new religious orders founded during the Counter-Reformation were the Jesuits, an order that grew very rapidly in numbers and established itself in mission fields in many parts of the world, from Canada to Paraguay and from India to Japan. One distinctive feature of the organization of the Jesuits was the extent and the sophistication of their information system. They were a centralized order, ruled by a ‘general’ in Rome, to whom was addressed a series of regular reports or ‘annual letters’ from Jesuit houses and colleges all over the world, thus allowing him to keep a close watch on what was happening in each location and to extend ‘a long arm’ wherever and whenever this became necessary.96
Among Protestants too, the clergy was concerned both with spreading religious knowledge among ordinary people and with acquiring knowledge about them. The first point may be illustrated from the history of two British societies, the Society for the Promotion of Christian Knowledge (SPCK), founded in 1698 to support missionaries, and the British and Foreign Bible Society, founded in 1804 to make the Bible more easily available throughout the world. As for the second point, Protestants like Catholics carried out visitations. In Sweden from the seventeenth century onwards, the clergy made regular visits to the houses of the laity to test all the family members on their ability both to read and to understand the Bible.97
The processes of state formation and the centralization of government in early modern Europe involved the use of increasing amounts of information. Historians have noted the rise of what the Canadian sociologist Dorothy Smith called ‘textually mediated forms of ruling’ such as writing letters, writing and annotating reports, issuing forms and questionnaires and so on, associated with what is variously known as the information state, archive state or paper state – now in the process of transforming itself into the digital state.98 This process may be described as the rise of ‘bureaucracy’ in the original sense of the term, the rule of the bureau, or office, and its officials. These officials both issued and followed written orders and recorded these orders in their files, together with the reports on the political situation at home and abroad that assisted decision-making. The ruler on horseback was gradually transformed into the ruler sitting at his desk, as in the famous cases of Philip II of Spain in the sixteenth century and Louis XIV of France in the seventeenth.
Information was sometimes collected by means of printed forms (used as early as the sixteenth century in Venice to compile the census) and also by the use of questionnaires, as in the case of the Spanish Empire, where the systematic collection of information about the New World began in 1569, when 37-point questionnaires were sent to local officials in Mexico and Peru, followed in 1577 by a printed 50-point questionnaire. As the German historian Arndt Brendecke has pointed out, empiricism was a tool of empire.99
As we have seen, imperial regimes, especially new imperial regimes, have an especially urgent need for information about the lands that form part of their empire. However, since early modern governments were making increasing demands on the population, whether for taxes, military service or religious conformity, they increasingly employed similar methods at home. Jean-Baptiste Colbert, for instance, best-known as Louis XIV's finance minister, might equally well be regarded as a minister of information. For example, he re-established the provincial officials known as intendants, but ‘transformed their functions’, from tax-collectors into observers and informers, producing ‘a massive bank of information’. As Jacob Soll remarks, ‘Detail even extended to counting the number of cows in a given locale.’ Colbert sent out questionnaires, received reports from India and elsewhere, tried to bring learning under the control of the state and established archives so that the information collected could be preserved and retrieved.100
The choice of these European examples does not imply that governments elsewhere did not participate in this general process. In the early modern Mughal Empire, the regime of Akbar was known as ‘government by paper’ (kaghazi raj), a system that was taken over by the East India Company as they began to rule as well as trade. The early modern Chinese government was also a great producer of official paper.101
The centralization of government went still further in Europe from the eighteenth century onwards, when the knowledgeable state gradually became more and more of a surveillance state, whether the surveillance was carried out by human informers or, in recent years, by cameras, microphones and computers. Surveillance was assisted by the demand that individuals carry identification papers of some kind. Passports have long existed, but as a general requirement for travel to foreign countries they go back only to the First World War, while the system was codified at conferences organized by the League of Nations in the 1920s. In many countries identity cards became a requirement for all citizens – in France in 1940, in Germany at about the same time, and so on.102
Interpretations of the utilization of information by the state are controversial. On one side, presenting what might be called a ‘malign interpretation’ of the motives of governments, there is Michel Foucault, emphasizing the desire to control. His supporters would include the British historian Vic Gatrell, citing for example the establishment of the British Habitual Criminals Register (1869), which made it easier to put second offenders back into prison. By contrast, another British historian, Edward Higgs, offers a more ‘benign’ interpretation of the official uses of information. Focusing like Gatrell on the nineteenth century, Higgs argues that information collected by the central government was employed essentially to empower, to defend and diffuse the rights of individuals (to pensions, for example). He suggests that information ‘underpins general rights and liberties within a pluralist society’.103 In early modern times, too, some information had been collected for the purpose of welfare: censuses of mouths to feed in a given city in times of famine, for instance. As is so often the case, each interpretation has something to be said for it, with the relative importance of welfare and surveillance varying with particular regimes.
Studies of the uses of knowledge in business are multiplying. One focus of interest is the merchant's manual. From the later Middle Ages onwards, more and more manuals were produced to provide merchants, especially merchants living abroad, with essential information about keeping their accounts and about the commodities and the weights and measures and currencies that a Venetian, for instance, would find in Florence, in Bruges, in Aleppo and so on, as well as tips on how to avoid being cheated. A kind of practical knowledge, more or less tacit, which had formerly been transmitted by example or by word of mouth to relatives and employees, was now written down, printed, and so made more widely available.104
As enterprises grew larger in the age of trading companies that bought and sold in many parts of the world, their need for written information increased. A famous example of what would now be described as a ‘knowledge-creating company’ is the Dutch East India Company, founded in 1602 and known as the VOC (Vereenigde Oost-Indische Compagnie). The VOC has been described as an early ‘multinational’ and its remarkable success has sometimes been attributed to the efficiency of its communications network, passing information from the centre in Amsterdam to the Asian headquarters in Batavia (now Jakarta), and the branches in Nagasaki, Surat and elsewhere, and even more important, passing information from the local branches to the centre. The Company's maps and charts were constantly updated as new information was gathered. Bribes, described euphemistically as ‘gratuities’, gave the company access to information from both Dutch and foreign diplomats. What was most remarkable about the information system of the VOC was the use of regular written reports that provided essential commercial information, often in the form of statistics: reports from local branches and an annual report from the Governor-General in Batavia to the directors in Amsterdam. By the end of the seventeenth century, sales figures were already being analysed in order to determine the future policy of the company on pricing and the ordering of pepper and other commodities from Asia.105
What was still unusual in what we might call the ‘knowledge policy’ of the VOC was to become commonplace later, especially at the time of the rise of large manufacturing firms in the USA and elsewhere in the late nineteenth century. Like states, these firms were bureaucracies, administered by officials known as ‘managers’. More and more information came into the firm and circulated through it, in the form of statistics, reports, correspondence, written orders, and so on, assisted by the rise of new office technology, from the typewriter and the filing cabinet to the paper clip.106 The late nineteenth century was also the time of the rise of what is now known as ‘Research and Development’ or ‘R & D’, with large firms constructing laboratories and hiring scientists in order to produce new or improved products. In 1876, for instance, the inventor Thomas Edison opened what has been called the first industrial research laboratory in the world in Menlo Park, New Jersey. Chemists were employed to discover synthetic dyes and pharmacologists to discover new remedies.
There was also increasing concern at this time to circulate information about the firm and its products by means of advertising in newspapers, on posters and, later, on the radio. Thanks to aggressive advertising, ‘Pears Soap’ had already become a household name in late Victorian England. By the 1930s, Americans were being interviewed on the street in order to discover the effectiveness of advertising. Systematic ‘market research’ had begun.
In the last few pages, the uses of knowledge in the religious, political and economic domains have been discussed separately. However, the re-employment of knowledge should not be forgotten: both techniques for acquiring information and information itself have sometimes been transferred from one ‘employer’ to another. In early modern Europe, for instance, the questionnaire, a tool for acquiring useful knowledge, was transferred from the Church to the state. In the United States in the twentieth century, the techniques of market research were adapted to political uses, taking the form of public opinion polls. Changes in the display of artefacts in shop windows were imitated by the curators of museums. Card-indexes spread from offices to libraries and to the studies of individual scholars.
Transfers from the political to the academic domain have also taken place. The documents in the archives of government were originally preserved because it was thought that they might be useful in the everyday task of administration. It was only from the French Revolution onwards that government archives were gradually opened to the public, especially though not exclusively to professional historians. The French Archives Nationales were established in 1800; the English Public Record Office opened in 1838; the Spanish archive at Simancas opened in the 1840s; the Vatican Archives opened in 1881, and so on. Following the collapse of the Communist regimes in Europe after 1989, even the files of secret police forces such as the East German Stasi were opened to the public and studies based on this material have begun to appear.
The consequences of employing different kinds of knowledge include some that are unintended and, on occasion, disastrous. As the English poet Alexander Pope memorably wrote, ‘A little learning is a dangerous thing.’ This is the central argument of a book by James Scott, Seeing Like a State (1998). An anthropologist who has carried out fieldwork in South-East Asia and takes a special interest in the problems of peasants, Scott shows ‘how certain schemes to improve the human condition have failed’. From the eighteenth century onwards, so he suggests, there has been a succession of attempts ‘to make a society legible’. To make it legible means not only to collect maps, statistics and other kinds of information, but also ‘to arrange the population in ways that simplified the classic state functions of taxation, conscription and the prevention of rebellion’. Scott begins his account with forestry in Germany, where the state saw forests as sources of revenue, and scientific forestry was concerned with estimating and managing this revenue. ‘The German forest became the archetype for imposing on disorderly nature the neatly arranged constructs of science.’ The trees were planted in straight rows, as if on parade. From the arrangement of trees, the book moves on to the arrangement of people, discussing what the author calls ‘authoritarian High Modernism’ via concrete examples such as the collectivization of agriculture in the USSR, the foundation of Brasília, ‘compulsory villagization’ in Tanzania, and so on. In each case Scott emphasizes the negative consequences of plans backed by state power and imposed without regard to local conditions and problems.
Seeing Like a State might be described as an anthropologist's critique not only of the modern state but also of sociology and more generally of supposedly universal or context-free knowledge. The author argues that ‘certain forms of knowledge and control require a narrowing of vision’, and makes an eloquent plea for the valorization of an alternative knowledge, variously described as local, practical and contextualized, ‘the valuable knowledge that high-modernist schemes deprive themselves of when they simply impose their plans’. More recent studies of the dangers of planning without local knowledge support Scott's argument.107