The recent rapid expansion of knowledge studies in general and of the history of knowledge in particular has led to the proliferation of new concepts. We are faced with what virtually amounts to a new language – not to say ‘jargon’ – so much so that something like a glossary is becoming necessary. As a first step in this direction, what follows will discuss a small group of terms that help us not only to read and write about the history of knowledge but to think about it as well.1 As in the case of glossaries, items will be arranged in alphabetical order.
As studies of colonial situations suggest, knowledges may be plural but they are not equal: that is, they are not treated as equal. Some individuals, groups and institutions (the Church, the state or the university, for instance) are ‘authorities’, in the sense that they have the power to authorize or reject knowledges, to declare ideas to be orthodox or heterodox, useful or useless, reliable or unreliable, indeed to define what counts as knowledge or science in a particular place and time.2
The example of the Inquisition is too well known to need more than a brief reference, like the example of authoritarian states such as Stalin's Russia or Hitler's Germany, but it may be worth lingering for a moment on the case of universities, analysed at length (in the case of Paris in the 1960s) in a classic study by Pierre Bourdieu.3 Some academics, known in Italian as ‘barons’ (baroni), may be described as ‘gatekeepers’ who control appointments, access to research funds and even entry to a given intellectual field, whether they make their decisions on the basis of intellectual merit, ‘correct’ views, or membership of the baron's patronage network. Other scholars, described by Bourdieu as ‘the consecrated heretics’, concentrate on their research and acquire international prestige, but exercise little power in the world of the universities.4
With unexpected tact, Bourdieu omitted the names of individual academics from his analysis, but it is not difficult to fill in at least some of the blanks. One famous example of an academic baron in Paris in the 1960s was the historian Roland Mousnier, professor at the Sorbonne and an opponent of both the Marxists and the historians of the so-called ‘Annales School’, whose aim was to write a new kind of history, with less emphasis on politics than had been customary and more emphasis on the economy, society and culture. Fernand Braudel, leader of the Annales group, was another baron, charismatic and authoritarian, a visionary and an empire builder. Professor at the Collège de France, outside the university system, Braudel might be described as one of Bourdieu's ‘consecrated heretics’. However, he did have a power base in the VIth section of the Ecole des Hautes Etudes and also in the Maison des sciences de l'homme, an interdisciplinary institute that he founded. Braudel combined the gift of spotting talent with the power to make or break careers, while his alliance with another professor at the Sorbonne, Ernest Labrousse, who supervised a record number of doctoral dissertations (42 in all), allowed him to influence the younger generation.5
Long before Bourdieu, an anonymous Victorian satirist encapsulated the idea of academic power in a quatrain put into the mouth of a leading academic baron of the time, Benjamin Jowett, classicist and Master of Balliol College, Oxford.
I come first, my name is Jowett,
All that's knowledge, well I know it,
What I don't know isn't knowledge.
I'm the Master of the College.
Some authorities, notably clerical elites such as the Catholic priesthood and the Muslim ‘ulama, have attempted to establish monopolies of knowledge, or at least of its most prestigious forms in a given culture. According to the Canadian economic historian Harold Innis, each medium of communication has tended to create a monopoly of knowledge. Innis regarded these monopolies as extremely dangerous. However, they were vulnerable to competition from other media, so that ‘the human spirit breaks through’ from time to time. The intellectual monopoly of medieval monks, for example, based on parchment, was undermined by paper and print, just as the ‘monopoly power over writing’ exercised by Egyptian priests in the age of hieroglyphs had been subverted by the Greeks and their alphabet. In the case of Innis, it is difficult to resist the suspicion that the economic historian's interest in competition, in this case between media, was reinforced by the Protestant's critique of ‘priestcraft’ (Innis had planned to become a Baptist minister before turning to an academic career).6
Curiosity, the impulse to know, may appear to be a constant feature of human psychology, but attitudes to that impulse, as well as the meaning of the term ‘curiosity’ and its equivalent in other languages (curiositas in Latin, curiosità in Italian, Curiosität in German, and so on), have changed a good deal over the centuries. Although Aristotle approved of curiosity, as one might have expected, given the wide range of his studies, other ancient writers emphasized its dangers. In the early Christian centuries, Ambrose criticized Cicero for believing that astronomy and geometry were worth knowing, while Augustine regarded curiosity as a vice, associated with pride. For many Christians, the story of Eve and the apple was a warning against the perils of female curiosity in particular.
Medieval philosophers were torn between the positive view of Aristotle and the negative view of Augustine. It was only at the Renaissance that we find a ‘rehabilitation’ of curiosity, a return to Aristotle's positive view, while Francis Bacon put forward the idea of ‘an essential human right to knowledge’.7 Even then, the story of Dr Faustus selling his soul to the devil in return for knowledge (among other things) reminds us that the negative view of curiosity still had adherents. It may have been as late as the Enlightenment that the positive view became dominant, symbolized by Kant's motto, ‘dare to know’ (sapere aude, a quotation from the Roman poet Horace).
To complicate the story, as Neil Kenny has shown, the meanings and associations of terms such as ‘curious’ in English, French, German and other languages were (as they still are) multiple and changeable. In the seventeenth century, these meanings ranged from ‘careful’ to ‘elegant’ and from ‘inquisitive’ to ‘odd’. Only the context tells us that the collegium curiosum founded at Altdorf in 1672 was meant to refer to a club of lovers of knowledge, especially experiment, rather than to a group of eccentrics.8 ‘Cabinets of curiosities’, known in German as ‘cabinets of marvels’ (Wunderkammer), in other words the private museums that became fashionable in early modern Europe, contained objects that provoked wonder because they were strange, rare, made with unusual skill or perceived as ‘exotic’ because they came from distant places.
The rise of the idea of ‘useful knowledge’ in the eighteenth century implied a new critique of knowledge for its own sake, a critique that was worldly rather than religious. In the English Royal Society, for instance, the mathematicians opposed the election of Joseph Banks as President because they feared that he would turn the Society into ‘a cabinet of trifling curiosities’.
In Chapter 1, a distinction was made between ‘information’ that is relatively raw, and knowledge that has been processed or ‘cooked’. A more formal name for this process of testing, elaboration and systematization is ‘scientification’. This word still sounds somewhat ponderous in English as well as evoking the natural sciences at the expense of the humanities, although its German original, Verwissenschaftlichung, has a wider application, to society as well as to knowledge, and has come to be generally accepted. Scientification is often, if not always, an elaboration of everyday practices such as observation, description and classification, making them more precise but at the same time more remote from the experience of ordinary life. The process is sometimes called ‘disciplining’ (in German: Disziplinierung). It is central to the formation of academic disciplines.
Like the idea of discipline in athletics, religion and war, the concept of an intellectual ‘discipline’ is an old one, emphasizing the ascetic side of the scholar's career as well as the need for a kind of apprenticeship until the necessary skills have been internalized. We might describe a discipline as a set of intellectual practices that are distinctive (or, at least, believed to be distinctive) and that are institutionalized in professions such as law or medicine. Academic disciplines have sometimes been compared to nations. They have their own traditions and territories, their ‘fields’ and their frontiers, warning trespassers to keep out. The term for an intellectual ‘field’, campus, can be found in Cicero, and again in early modern scholars such as Johannes Wower, author of De polymathia (1603), but it only became common in the nineteenth and twentieth centuries.9
Systems of disciplines vary with the orders of knowledge of which they form a part. The best-known system of disciplines, and one that has come to dominate the intellectual world, is the Western one, despite the fact that, as a recent study emphasizes, ‘none of the basic activities that each discipline comprises is confined to Europe or even just to “advanced” industrial societies across the world’.10
In the nineteenth century, academic disciplines and fields multiplied at a vertiginous rate. Their autonomy took physical form in departments, separated by location in different buildings or by the walls and floors of the same building. The university became an archipelago, a collection of more or less independent intellectual islands. It has become difficult, though not impossible, to move from one island to another, as the sections on interdisciplinarity and intellectuals will suggest.
Although universities used to be essentially concerned with preserving and transmitting knowledge, creating new knowledge has been one of their main functions ever since the rise of the research university in the nineteenth century. Firms too search for new knowledge in order to improve their products and outstrip their competitors, and encouraging innovation is one of the principal tasks of knowledge managers.
Contributions to the theory of innovation have come from a whole range of disciplines, among them economics (Joseph Schumpeter), sociology (Vilfredo Pareto), geography (Torsten Hägerstrand), psychology (Liam Hudson), urban studies (Richard Florida) and management (Ikujiro Nonaka). Might historians also have something to offer?
In the first place, studying traditions of knowledge, historians are likely to suggest that what is generally recognized as innovation will often turn out, if analysed more closely, to be an adaptation for new purposes of an earlier idea or technique. In short, innovation is a kind of displacement. What makes these displacements happen?
One answer to this question, offered by the Dutch scholar Anton Blok, focuses on the kinds of people who innovate. Blok offers a strong, provocative argument to the effect that people who become famous as innovators do not have more talent than their colleagues, but they do work harder, indeed obsessively so. They behave in this way because they have had to contend with difficulties, often from early childhood (the loss of parents, for example). Innovators, Blok argues, are usually outsiders, geographically (they are provincials), psychologically (they are loners), socially and intellectually. They take more risks than their established colleagues because they have less to lose.11
An alternative approach focuses on groups rather than individuals. Although the mythology of innovation is dominated by individual geniuses, recent studies suggest that the propensity to innovate is a collective as well as an individual phenomenon, depending on interaction and exchange. The most important milieu for creative interaction is a small group, usually a face-to-face group, especially a group that meets regularly. Ideally, this group should be composed of people with common interests but different approaches, differences often linked to their training in different countries or in different disciplines. Displaced ideas often come from displaced people (below, Chapter 3).12
How can such groups be encouraged? In the past, they were often encouraged by the growth of cities. Cities are magnets for immigrants from different places and with different skills and they offer niches or spaces of sociability such as taverns and coffeehouses where discussion can flourish, producing the ‘buzz’ that leads to new ideas. Our problem today is that the increasing size of cities makes it more difficult for different kinds of people to meet.
A history of knowledge is necessarily concerned with different kinds of knowledgeable people inside and outside the university. A concept that recurs in discussions of knowledgeable people is that of ‘intellectual’, principally in the sense of a writer or scholar who speaks out on public issues. A well-known example is that of the novelist Emile Zola at the time of the notorious ‘Dreyfus Case’ in France (1894–1906). Zola led the group that argued that Captain Dreyfus, who had been charged with treason for divulging military secrets to the Germans, was in fact innocent. It was in this context that the French word ‘intellectuel’, which later spread to many languages, was originally coined.13 An earlier and more precise term is ‘intelligentsia’, originally a Russian word referring to writers and scholars who opposed the authoritarian regime of the tsars.14
Another species of knowledgeable person is the expert or the ‘specialist’, a term coined in the mid nineteenth century, originally in a medical context, at a time when medical specialisms were multiplying.15 The term soon came to be used more widely. A very different species is the scholar familiar with a number of different disciplines, the polymath or the ‘generalist’, as the American scholar Lewis Mumford (best known as an architectural critic and a student of cities) liked to describe himself. The term ‘polymath’ came into regular use in the seventeenth century, at a time when scholars were already beginning to be worried by the fragmentation of knowledge, although a few remarkable individuals were still able to make original contributions to a number of different fields. Gottfried Wilhelm Leibniz, for instance, is now remembered as a philosopher, but he also made discoveries in mathematics, history and linguistics.
Since the eighteenth century, following the rise of more and more specialist knowledges, polymaths have often been regarded as an endangered species. They have never become extinct, although they have become less ambitious. It may be useful to distinguish two types of wide-ranging scholar. One is the passive polymath, such as the writer Aldous Huxley, who is said to have read the Encyclopaedia Britannica from cover to cover but made no significant contribution to knowledge himself. The other is the serial polymath, who is trained in one field and later moves to others.
Two well-known examples of serial polymaths are Michael Polanyi and Jared Diamond. Polanyi, a political refugee first from Hungary and then, in 1933, from Germany, was a professor of physical chemistry who turned philosopher, writing about the ‘tacit knowledge’ discussed later in this chapter. Diamond was a physiologist who moved into ornithology but is probably most widely known today for his books on world history, Guns, Germs and Steel: The Fates of Human Societies (1997) and Collapse: How Societies Choose to Fail or Survive (2005). Their universities have been quite accommodating to these changes of fields. Polanyi simply moved from the department of chemistry to the department of philosophy at the University of Manchester, while Diamond transferred from a chair in physiology to a chair in geography at the University of California at Los Angeles.16
Serial polymaths are in a particularly good position for practising interdisciplinarity, in other words taking ideas or methods current in one field and employing them in another. Interdisciplinarity may be regarded as a necessary antidote to specialization. Like the division of labour in general, specialization increases efficiency and so contributes to the growth of knowledge. At the same time what has been called ‘knowing more and more about less and less’, or even ‘knowing everything about nothing’, has sometimes proved to be an obstacle to new discoveries and new theories.17 Living on islands in the academic archipelago encourages intellectual insularity. Hence the continuing need to avoid the intellectual ‘frontier police’, as Aby Warburg, a private scholar best known for his contributions to the study of images, memory and the classical tradition, used to say.
The twentieth century was a time of many attempts to institutionalize interdisciplinarity by means of informal discussion groups like the History of Ideas Club at Baltimore in the 1920s (bringing together philosophers, historians and literary scholars), or more formally by the foundation of organizations such as the Institut für Sozialforschung (Institute for Social Research) at Frankfurt in 1923. Some of these organizations have ambitious aims, like the Institute for the Unity of Science in The Hague (founded in 1936), while others are relatively modest, like the centres of ‘area studies’ founded in the USA with government assistance, largely for political reasons, in the age of the Cold War. The best-known of these institutions is probably the Russian Research Center founded at Harvard in 1947, in which economists and sociologists worked with historians and political scientists, focusing on the USSR.
‘Knowledge management’ is a relatively new phrase that spread in the 1990s, when courses on the subject were established in a number of fields, from business to librarianship. A Journal of Knowledge Management was founded in 1997. An associated idea is ‘science policy’, a concern of governments and also an object of academic study (the Science Policy Research Unit at the University of Sussex was founded in 1966). The phrase ‘knowledge management’ is also associated with the concept of ‘intellectual capital’, viewing information and ideas as a resource or investment that needs to be protected and used wisely. Hence the appointment of Chief Knowledge Officers and Chief Information Officers (CKOs, CIOs) in many firms, also from the 1990s onwards. Ensuring that the information stored on the firm's computers is secure from hacking has become an increasingly important part of their job.
All the same, the story of knowledge management did not begin in the 1990s. The future of knowledge has often been planned and to a lesser extent shaped by individuals in strategic positions outside the academic world. In the seventeenth century, Francis Bacon, Lord Chancellor, had a vision of collective research, requiring an organizer or co-ordinator, while the ‘information master’ Jean-Baptiste Colbert, King Louis XIV's minister of finance, ‘amassed enormous libraries and state, diplomatic, industrial, colonial and naval archives; hired researchers and archival teams; founded scientific academies and journals; ran a publishing house; and managed an international network of scholars’.18 In the eighteenth century, Joseph Banks was active as a kind of knowledge manager, combining his official position as President of the Royal Society with an unofficial but powerful role as adviser to King George III.19 In the nineteenth century, a leading knowledge manager was Friedrich Althoff, a civil servant in Berlin with considerable influence over the appointment of professors and the foundation of research institutes.20
In the twentieth century, Warren Weaver, director of the Division of Natural Sciences at the Rockefeller Foundation, 1932–55, funded projects in genetics, agriculture and medicine and supported the emerging discipline of molecular biology (as he named it) at a decisive moment. On the side of the humanities and social sciences, Shepard Stone, Director of International Affairs at the Ford Foundation, gave money for research to the Free University of Berlin, St Antony's College Oxford and the Institute for European Sociology in Paris, not only to advance knowledge but to fight Communism and improve the image of the United States abroad.21
Awareness of the need for knowledge management is a response to the rise of the so-called ‘knowledge society’ or ‘information society’, a subject of debate by economists, sociologists and management theorists from the 1960s onwards. Economists such as Fritz Machlup noted the increasing numbers of ‘knowledge workers’. Sociologists such as Daniel Bell argued that ‘industrial society’ had been succeeded by ‘post-industrial society’.22 Management theorists suggested that knowledge, which they described as ‘intellectual capital’, made companies more innovative and so more competitive.23 In the digital age, the rise of the knowledge society accelerated. It has been argued, for instance, that capitalism was restructured in the late twentieth century thanks to changes in information technology.
The knowledge society is generally regarded as something quite new, not only by journalists and the general public but also by sociologists such as Manuel Castells, who has written about what he calls the ‘information age’.24 On the other hand, as we saw in Chapter 1, the few historians who have intervened in this debate have tended to stress continuity. Indeed, a Dutch historian has written about the medieval ‘knowledge economy’ as part of what he calls the ‘long road’ to the Industrial Revolution.25
There is an obvious need to avoid two opposite dangers: on one side the simplistic contrast between the present and an undifferentiated past, and on the other an exaggerated emphasis on continuities. To quote the American historian Robert Darnton, ‘every age was an age of information’, but ‘each in its own way’.26 What we need to do is to distinguish these ways: the age of manuscript, for instance, from c.3000 BCE onwards, and the first age of print and paper, running in the West from 1450 to 1750 or thereabouts. After 1750, periodization becomes more difficult and controversial, but we might distinguish five more ages: the age of statistics, 1750–1840; the age of steam and electricity, 1840–1900 (conveying information by the steam press, steamship, railway and telegraph); the age of Big Science, 1900–50; the age of three revolutions, 1950–90 (the third age of discovery, third scientific revolution and third industrial revolution); and our own age, the Age of the World Wide Web, from around 1990 onwards.27
One of the fundamental concepts in the history (or the sociology or the anthropology) of knowledge is that of ‘orders of knowledge’, ‘orders of learning’ or ‘orders of information’. Foucault, provocative as usual, claimed that ‘each society has its regime of truth’ as well as using the less controversial phrase ‘orders of knowledge’.28 These orders are often defined by place (Western, Islamic, and East Asian, for instance) or by period (medieval, modern, and perhaps post-modern). For example, the book that you are reading now has been written from within the British variety of the Western order of knowledge in an age of transition from the ascendancy of print to digital dominance.
The essential point is that the main forms and institutions of knowledge to be found in a particular culture, together with the values associated with them, form a system: schools, universities, archives, laboratories, museums, newsrooms and so on. The connections between different parts of the system are probably most visible to outsiders, while insiders take the whole order for granted. The orders are not planned but they are shaped by the values of the culture as well as by interactions between organizations founded for specific purposes.
In traditional China, for instance, the system was dominated by Confucianism and the civil service examinations. In the Ottoman Empire, the order of knowledge was dominated by Islam and more specifically by the mosque schools or medreses. In the USSR, it was dominated by Marxism and by the Academy of Sciences. Since the knowledge order is part of the larger socio-cultural order, it is no surprise to discover the importance of central control and the dominance of Paris in the twentieth-century French order of knowledge, in sharp contrast to the decentralized system of the United States.
Today, it is becoming increasingly implausible to speak of a single dominant order. In Britain, for example, we see competition between the BBC and its rivals, between different churches and mosques, between different kinds of school and university, and so on, not to mention the increasing use of international search engines online.
In other words, orders of knowledge change, even if the rate of change tends to be slow. European universities reacted only gradually to the rise of printed books, while lectures remain a staple mode of disseminating academic knowledge to this day.29 Again, in the North American order there has been a gradual change in the balance of power between institutions, with the rise of the university to ‘ascendancy’ in the production of knowledge in the late nineteenth century, followed by its decline a century later, as both public and private research institutes or ‘think tanks’ became an increasingly important part of the scene.30
Alternatively, information orders might be defined by the means of communication dominant in a given place or time: oral, written, printed or digital, bearing in mind that when each new medium arrives it does not replace but rather coexists with all the earlier ones. Competition between media often settles down into a division of labour like the one between manuscript and print in early modern Europe, where manuscripts retained importance not only for clandestine communication but also for the circulation of poems and other works by nobles who associated print with the commerce they often despised.31
The concept of orders of knowledge underlies recent comparative studies of what we now call ‘science’ in early modern Europe, China and the world of Islam.32 A penetrating example of this comparative approach is Geoffrey Lloyd's study of ancient Greece and China, in which he notes that, in the study of nature, the Chinese had an advantage over the Greeks in the form of government support, while the Greeks had an advantage over the Chinese in their tradition of discussion and dispute.33
Without the concept of order, or something like it (‘system’, ‘culture’ or ‘regime’), comparisons between knowledges in different places, different periods or different social groups would be difficult indeed. Another advantage of the concept is to warn us against false analogies. For example, a given practice, such as healing the sick or writing about the past, even if it is similar in certain respects in different cultures, may still occupy a different place in each of them. Simplifying somewhat, one might say that in ancient Rome history was written by senators for senators, while in early medieval Europe it was written by monks for monks and today, by professors for students. It is hardly surprising that the questions asked about the past and the answers given to them have changed a good deal over the centuries.
On the other hand, the concept of a knowledge order raises problems as well as solving them. If we divide the world into orders according to geography, should we speak of the Western order (as opposed to the Islamic or East Asian), or the French order (as opposed to the English or the American)? The great problem is that of frontiers. ‘Systems’ are not watertight (or, in this case, ‘information-tight’). Frontiers of knowledge, like the one between the world of Christendom and the world of Islam in the sixteenth century for instance, have generally been porous, since many people travelled while at least some travellers were open to ideas from outside. Without some degree of movement and openness, intellectual change would be limited, if not impossible – yet we know that it happens, often on a massive scale.
Another disadvantage of the concept of order of knowledge is that it implies a homogeneity that does not exist. Viewed more closely, an order fragments into dominant knowledges and subordinate ones, which elites often either condemn as heretical or, in the case of popular knowledges, dismiss as unworthy of attention. In the Ottoman empire, for instance, the dominant order, that of the ‘ulama (in Turkish, ‘ulema) was challenged by that of the Sufis, mystics who prized ma'rifah rather than ‘ilm (above, Chapter 1).34 In China, the Confucian order of knowledge was challenged by Buddhists and Daoists. In short, the concept of an order of knowledge is useful on condition that we recognize that it represents a kind of intellectual shorthand, a helpful simplification of a more complex reality.
‘Practice’ has become a central concept in studies of knowledge, beginning with studies of the history of reading and the history of experiment, spreading to analyses of observation, note-taking and description, and summed up in the two massive volumes edited by Christian Jacob, Lieux de Savoir, described by the editor as contributions to ‘the history and the anthropology of intellectual practices’. The crucial point here is the awareness that habits that seem timeless are in fact subject to change, even if the changes are gradual and generally imperceptible. An important everyday intellectual practice is classification. Classifications differ between cultures and disciplines, but in a given culture or discipline they may appear to be natural, a tendency that Foucault encouraged his readers to question, notably in his famous description, borrowed from the Argentinian writer Jorge Luis Borges, of a Chinese encyclopaedia that divided animals into fourteen categories that included ‘those that belong to the emperor’, ‘embalmed ones’ and ‘those drawn with a fine camel-hair brush’.35 Although this Chinese encyclopaedia never existed, anthropologists have found almost equally surprising contrasts with Western ways of classifying in their examination of ‘folk taxonomies’, the ways in which the different peoples of the world name colours or arrange plants, animals and birds. A well-known study with the provocative title, ‘Why is the cassowary not a bird?’ analysed the logic of the zoological taxonomies of the Karam, a people living in the highlands of New Guinea, explaining that the Karam thought of cassowaries as family members.36
Intellectual practices also include the more or less formal procedures for acquiring, classifying or testing knowledge, such as dissecting bodies, observing stars through a telescope, conducting experiments and so on. Some of these are characteristic of a particular discipline (like diagnosis in medicine), while others (such as comparison) are common to a number of disciplines. Yet others (note-taking, for example) are even less formal and still more widespread. Each of these practices has a history, in the sense of changing over the long term.37 The fact that scientific methods have often if not always developed out of less formal everyday practices is one more reason (besides the desire to avoid ethnocentrism and anachronism) for incorporating the history of science in a broader history of knowledges.
The rise and multiplication of different disciplines has sometimes been regarded from a purely intellectual point of view, as a response to the increasing accumulation of knowledge, especially from the nineteenth century onwards. However, the story has a social aspect as well. Sociologists use the term ‘professionalization’ to refer to a process that includes not only the multiplication of full-time occupations, each with its own kind of knowledge, but also the establishment of bodies that make the rules governing entry to a particular occupation, organize training, maintain collective standards and so on.38 Thus healers turn into physicians organized in colleges, while PhDs become necessary for careers in the academic world. Professional organizations tend to become bureaucratic, in the sense of spelling out the rules for entry, awarding diplomas, adopting formal procedures for appointment, promotion, the funding of projects and so on.
Take the case of nineteenth-century Britain, when the old professions (the Church, the law, medicine, the army and navy) were joined by a number of new ones: engineering, architecture, accounting, surveying, teaching and so on. The Society of Engineers (founded in 1824) was joined by the Institution of Surveyors (1868) and the Institute of Chartered Accountants (1880).
The process of professionalization is accompanied by the development of a technical language or occupational jargon, facilitating communication within the group at the same time as rendering it more difficult when insiders speak to outsiders. A professional ethos develops: pride in one's occupation, which may be viewed as a calling rather than simply a means of making a living, together with loyalty to one's colleagues.
As in the case of ‘knowledge order’, discussed above, the concept of professionalization has costs as well as benefits. It directs attention to what is common in the rise of different professions at the expense of attention to differences. It fits newer professions, like accountancy, better than older ones, like medicine, and it fits the more practically useful occupations better than the humanities.39
Take the opposite cases of librarians and historians. To speak of the professionalization of librarians is relatively unproblematic. Libraries used to be managed by scholars, as in the famous case of the polymath Gottfried Wilhelm Leibniz at the ducal library in Wolfenbüttel. Now they are managed by librarians who have been to library school and belong to a professional association. In the United States, for instance, the American Library Association was founded in 1876 and the first library school was established soon afterwards by Melvil Dewey.40 International conferences are another indicator of professionalization and the first international congress of archivists and librarians took place in 1910. On the other hand, it is more difficult to say when some historians turned professional. There is an argument for choosing the mid nineteenth century, taking the famous example of Leopold von Ranke and his pupils in Berlin, Munich and elsewhere in the German-speaking world. This was the time when historians could find full-time employment in universities or in archives. However, the role of official historian has a much longer history in Europe, going back to the fifteenth century if not before. In any case, for some early modern scholars, history was already viewed as a ‘calling’.41 The foundation of the American Historical Association in 1884 may have made historians more conscious that they formed a group with its own special interests, but it was far from the beginning of professionalization.42
The idea of professionalization is linked to that of expertise. The rise of the words ‘expert’ and ‘expertise’ took place in Britain in the nineteenth century, their first use being recorded in 1825 and 1868 respectively. The new terms are linked to a new trend, the increasing reliance by governments on specialist advice on practical problems such as sanitation, town planning or the management of the economy. John Maynard Keynes, for instance, was a Cambridge economist who advised the government after the onset of the Great Depression in 1929, joining the Economic Advisory Council in 1930.43
The concept of an order of knowledge surely calls for consideration of its complementary opposite, the organization of non-knowledge or ignorance. In fact some scholars have begun to study what they call ‘regimes of ignorance’, in other words what is not known by different kinds of people in certain places or times.44
Anthropologists have studied secrets and secret societies in some West African cultures, for instance, while economists have made analyses of decision-making by firms in conditions of uncertainty. Sociologists have emphasized the paradox that ‘non-knowledge’, like silence, is a resource that has its uses, at least in certain circumstances. The anonymity of examination candidates, for example, encourages fairness in the examiners.45 Conversely, the dangers of ignorance may be illustrated by governments that pursue economic growth or technological change without knowing what the long-term impact of their policies on the environment, and so on society, will be.
Historians have not published many studies of ignorance so far, although they sometimes offer examples of its historical importance, for example when one group attributes ignorance to another, thus justifying imperial rule. In the case of the French Revolution, the problem of the ‘control of the definition of ignorance’ has been discussed: ‘the ability to brand others as ignorant and thereby disqualify them from a voice in the affairs of the city’, in Marseilles for instance. Again, a study of the use of statistics by the German state notes that in 1920, in the crisis of transition from the German Empire to the Weimar Republic, ‘the vacuum of knowledge’ about the state of the German economy ‘was almost complete’.46
All the same, it is not difficult to imagine what might be done in the future, most obviously in the history of empires. In recently conquered territories, for example, conquerors usually knew little about the resources of the lands that they had taken over, or about the cultures of the inhabitants. Surveys made by the Spaniards in the New World, by the British in India or by the French in North Africa might be viewed, in negative as well as positive terms, as more or less successful attempts to plug these holes in the knowledge that was essential to the efficient exercise of power. In these empires, military and political decisions as well as economic ones were taken in conditions of particular uncertainty and sometimes led to disaster.47 From the point of view of the conquered peoples, on the other hand, the ignorance of their masters was a valuable resource.
Karl Marx had already written about the way in which thought, especially what he called ‘ideology’, was shaped by society and its social classes. Offering a milder version of Marx's claim, Mannheim, as we saw in Chapter 1, described knowledge as ‘tied’ to everyday life, situated in a particular time, place and community. Historians of knowledge therefore need to place it, or more exactly replace it, ‘in context’. This is the aim of the Society for Social Studies of Science (1975), as it is of the journal Science in Context (1987).
Where Mannheim thought of social situation mainly in terms of class and generation, later scholars extended the concept further. The American scholar Donna Haraway wrote a famous essay on ‘situated knowledge’ in which she discussed situation in terms of gender. For his part, Michel Foucault viewed situation in terms of place, especially the micro-spaces, such as clinics, factories and prisons, in which knowledge is produced or employed. Indeed, in an interview conducted by geographers, he once admitted to what he called ‘spatial obsessions’.48 In similar fashion the French Jesuit Michel de Certeau, who was, among other things, an historian, published an essay claiming that written history was ‘the product of a place’, in other words the result of a set of social, political and cultural conditions that make some kinds of research possible and others impossible.49 This essay may have inspired the collective study of Lieux de Savoir that was mentioned in Chapter 1.
In the wake of Foucault and Certeau, many scholars have turned to the study of the sites or, as Bacon called them, the ‘seats’ of knowledge, small or large. Some focus on a building such as a clinic or a laboratory, in which particular intellectual practices take place.50 Others are concerned with cities such as Rome, Paris or London, viewed as networks of smaller sites (universities, libraries, monasteries, coffee-houses and so on).51
Another group of studies emphasizes the ‘geopolitics of knowledge’, especially the relation between intellectual centres and their peripheries. According to a powerful if simplistic model, this relation resembles the economic relation between a metropolis – usually if not always a Western metropolis – and its colonies. The places described by Bruno Latour as ‘centres of calculation’ (Paris, for example, or London) import the raw material of information from the periphery, and export the finished product, knowledge, in return.52 Different kinds of knowledge had different centres at different times. In the eighteenth century, for instance, the university city of Uppsala became a centre for botanical knowledge thanks to the presence of Carl Linnaeus.
The centre–periphery model can obviously be criticized as Eurocentric. It has often been assumed that the spread of knowledge has been one-way, from the West to the ‘rest’, despite the many examples of movement in the opposite direction, to Europe from the Islamic world or from China. Again, the model assumes that what the West imported was raw information, although it can be shown that some Europeans in India, China and elsewhere also took over indigenous systems of classification (of plants, for instance). In the third place, the model treats the knowledge that moved as if it did not change in transit, although what was imported was translated into different languages and adapted to different circumstances.53
It might also be useful to modify the model by introducing the notion of ‘semi-periphery’, thinking of colonial cities such as sixteenth-century Goa or eighteenth-century Calcutta, where an important part of the work of translation, adaptation and even publication took place.
Different modes of thinking have been discussed by philosophers for centuries, under terms such as manière de penser or Denkungsart. In the 1920s, they became an object of sociological and historical study. In France, Marc Bloch's Les rois thaumaturges (1924) discussed the practice of the French and English kings of touching sufferers from scrofula in order to heal them. What most interested Bloch was the way in which the belief in the royal power to heal survived all the evidence to the contrary. He later studied the history of medieval ‘ways of thinking’ (façons de penser) more generally, turning away from the great ideas of great thinkers and directing attention to the everyday ideas of ordinary people.54
In Germany, the sociologist Karl Mannheim distinguished between different ‘styles of thought’, as he called them, characteristic of different periods and different nations, noting, for example, the contrast between the French ‘liberal-universal’ and the German ‘conservative-historicist’ styles in the early nineteenth century.55 Almost simultaneously and apparently independently, the Polish biologist Ludwik Fleck used the identical term ‘style of thought’ (Denkstil) to distinguish between what he called different ‘thought collectives’, defining such a collective as ‘a community of persons exchanging ideas’. Fleck pointed out that one's own style of thought (like one's own point of view) seems natural and necessary, while any other style seems odd or arbitrary.56
In the 1950s, the German sociologist Heinrich Popitz and the Polish sociologist Stanisław Ossowski both argued that the social structure is perceived differently by individuals located at different points within it. Pierre Bourdieu went further still in this direction by noting that these differences in perception included those of sociologists themselves.
The division of intellectual labour between centres and peripheries is a reminder of the need to expand the concept of situation to include encounters between cultures, or rather, encounters between individuals and groups from different cultures, each with their own knowledges. Encounters include conquests, producing colonial situations in which knowledges coexisted on unequal terms. The knowledges of the conquerors became dominant, while local knowledges were ‘subjugated’. These subjugated knowledges were often forgotten or at least unacknowledged by members of dominant groups, as in the case of individuals from the West who wrote about or mapped the non-Western world but had little to say about what they had learned from indigenous informants.57
A famous and controversial case-study, inspired by Foucault, of the role of knowledge in the domination of the Middle East by the West (especially the French and British governments) is Orientalism (1978), written by the Palestinian-American critic Edward Said. Said defined orientalism as at once an academic specialism, ‘a body of knowledge in the West’; a ‘style of thought’; and finally, as a ‘corporate institution’ supporting Western dominance.58 This dominance began with Napoleon's invasion of Egypt in 1798, when the French army was accompanied by 167 scholars in a commission for sciences and arts. Their collective research culminated in the publication of a multi-volume Description de l'Egypte (1809–28).59
Said's account is a landmark in studies of the Middle East and extremely critical of earlier studies in the field. It has often been criticized in its turn for reducing Western interest in ‘the Orient’ to the desire to dominate, neglecting the scholars who were driven by disinterested curiosity, as scholars often are.60 Take the case of the Englishman Edward William Lane, who spent many years in Egypt between 1825 and 1849, learned Arabic, dressed as an Egyptian, and published his Manners and Customs of the Egyptians in 1836. According to Said, Lane's book contributed to ‘academic Orientalism’, which in turn contributed to Western dominance. This negative judgement is in sharp contrast to that of Lane's biographer Leila Ahmed, who suggested that he described Egyptian culture and society ‘in the terms in which a member of that culture experienced them’.61
For another example of subjugated knowledges we might take India in the age of rule by the East India Company (1757–1857) and then by the British government (1858–1947). The contribution of knowledge, or more exactly of a variety of knowledges, both indigenous and Western, to British rule in India was obvious enough to the rulers themselves. Two hundred years before Foucault's remarks on power and knowledge, the Governor-General of Bengal, Warren Hastings, declared that ‘Every accumulation of knowledge and especially such as is obtained by social communication with people over whom we exercise dominion … is useful to the state.’
Knowledge has been the central theme of a number of important studies of British India which combine a concern with the macro-level, the collision between two orders of knowledge, and a concern with the micro-level, encounters between individual Britons and Indian informants. The Cambridge historian Christopher Bayly, for instance, emphasized the debt of the British to the ‘information order’ of the Mughal rulers into whose shoes they stepped. The American anthropologist Bernard Cohn distinguished a number of what he called ‘investigative modalities’, forms of enquiry such as travel, surveys, surveillance and the collection of statistics. In this unequal encounter between two epistemological regimes, the British, in Cohn's words, ‘re-ordered’ Indian knowledge.62 It might be more exact to say that knowledge was re-ordered by Indians, working as guides, translators, spies or clerks, and incorporating Western information into their own order or orders of knowledge, as well as by the British, whether they tried, like missionaries, to transform Indian knowledge, or, as administrators, to incorporate information from indigenous informants into their own system.63
In short, the British production of knowledge about India was really a joint production, the result of a dialogue between different groups, ‘though not always in equal measure’.64 It might be useful to think about this situation in terms of cultural negotiation. ‘Negotiation’ is a somewhat elusive concept, but it might be described as a semi-conscious process of response to the ideas of another person or group, a partial appropriation and incorporation of those ideas. In this sense, negotiation should be distinguished from conscious attempts by both missionaries and indigenous scholars to reconcile Western science with Indian traditions, both Hindu and Muslim.
Intellectual innovation develops not only from interaction between disciplines but also from outside the academic system, from practical knowledge or ‘knowhow’. Orders of knowledge include practical, tacit or implicit knowledges, ‘knowing how’ to do something, as opposed to ‘knowing that’ something is the case, the form of knowledge dominant in the academic world. Michael Polanyi, the serial polymath mentioned earlier, made his contribution to epistemology by arguing that ‘we can know more than we can tell’, offering a variety of examples of skills that are difficult to put into words and have to be learned in practice, skills ranging from riding a bicycle to diagnosing illness or tasting wine.65 It would be easy to add to this list: playing the violin, making furniture, cooking, boxing, connoisseurship (dating and attributing works of art) and so on. Polanyi's ‘tacit knowledge’ can also be described as embodied knowledge, as it was by one of the leading analysts of knowledge in the late twentieth century, Pierre Bourdieu.
Bourdieu liked to speak of ‘habitus’, an old concept but one that he developed with characteristic brilliance, describing a set of skills and assumptions that have been so well internalized that individuals are no longer aware that they possess them, whether they are footballers or physicists. A particular habitus allows someone to improvise within a framework of unconscious or semi-conscious rules.66 True to form, Bourdieu studied his own practice, describing it as the result of a ‘cleft habitus’, a conflict between his upbringing in a peasant community in South West France and his later training as a philosopher, anthropologist and sociologist. He also argued that a habitus ‘was not a destiny’, but capable of being transformed by experience.67
Studying embodied practices of tacit knowledge poses serious problems for historians. Take the case of the crafts, the many products of what is sometimes called the ‘mindful hand’ or ‘vernacular epistemology’.68 Artisanal knowledge is literally handed down (the original meaning of ‘tradition’) from master to apprentice by example, almost without words. Hence the study of the crafts depends on fieldwork and participant observation, methods that are impossible to follow in the study of the past.
For example, the British anthropologist Trevor Marchand carried out his ‘fieldwork’ as an apprentice to a master builder in Yemen, helping to construct minarets, and noting that the master ‘found it tremendously difficult to “explain” what he knows, or, more importantly, how he knows’. Learning the craft involved a regular exchange between master and apprentice of the roles of observer and performer.69 In a later study of woodworkers in London, however, Marchand noted that observing and performing were assisted by brief explanations in words. This observation implies that, unlike Polanyi, we should think in terms of more or less tacit knowledges rather than drawing a sharp distinction between tacit and explicit knowledge.70
It is no surprise to find that this part of the history of knowledge has been neglected, at least relatively speaking. What is not put into words is rarely recorded, so that it is difficult to find sources for the study of changes in these practices over the long term. It is also difficult to interpret the sources once they have been discovered. Hence the historian Pamela Smith collaborated with a silversmith to reconstruct the techniques described in a sixteenth-century French manuscript on metal-working.71
However, it is sometimes possible to observe tacit knowledges in the process of becoming explicit thanks to textualization, especially the rise of how-to-do-it books that began to proliferate in the generation after Gutenberg and still flourish today – books about book-keeping, dancing, farming, writing letters, horsemanship and more recently child-rearing, management, and so on. Indeed, it has been argued that the so-called ‘scientific revolution’ of the seventeenth century was the fruit of an encounter between the explicit and tacit knowledges of scholars and artisans. Scientific experiments, for example, were an elaboration of the ‘trial and error’ techniques that were common practice on the part of goldsmiths, for instance.72 We might speak of the new knowledge produced in this way as ‘hybrid’ or ‘translated’.
Practices are both supported and shaped by material culture, especially by what have been called ‘tools of knowledge’. For example, the practices of observation associated with the ‘scientific revolution’ of the seventeenth century depended on new scientific instruments, especially two: the telescope and the microscope. Today, some kinds of scientific research depend on enormous tools such as the Herschel Space Observatory or the Large Hadron Collider in Geneva, built to assist the study of particle physics.
Medium-size tools include blackboards, filing cabinets, microscopes, personal computers, and in early modern times celestial and terrestrial globes and the book-wheels that made it easier for scholars to view two or more open books at once. ‘Little tools of knowledge’ should not be forgotten, including simple things such as pens, inkwells, blotting paper, carbon paper, record cards and paper clips.73 Think too of the special walking-stick carried by the philosopher Thomas Hobbes. Thoughts often came to Hobbes while he was walking and he needed to record them. But how could he write them down when away from his desk? According to his friend John Aubrey, Hobbes always carried a notebook with him. He also ‘had in the end of his cane a pen and ink-horn’ so that no thoughts would be lost.
As in the case of arts and crafts and other forms of knowhow, the academic production of knowledge generally follows traditions as well as sometimes breaking them.74 Historians can scarcely do without the notion of tradition, although they might be well advised to abandon what might be called the traditional notion of tradition, in other words a cluster of practices and modes of thought (whether explicit or tacit) handed down (in Latin, tradere) from one generation to the next.
The problem here, as in the case of the idea of the ‘transfer’ of knowledge from one place to another, is the assumption that what is transferred remains the same. It was in reaction against this false assumption that Eric Hobsbawm put forward his famous idea of the ‘invention of tradition’, originally formulated to describe some cultural movements in Europe between 1870 and 1914, and later extended much more widely by other scholars.75 However, to speak of ‘invention’ is also problematic, since it implies beginning with a blank slate. It is usually more exact to say that traditions are revived, reconstructed or translated in order to fit changing situations, new needs, or, in the case of traditions of knowledge, new discoveries. In the case of the classical tradition, the German scholar Aby Warburg was already practising this approach in the 1920s.76
Traditions are often viewed in a negative way, as so many obstacles to innovation. On the other hand, there are milieux and moments in which the oxymoron ‘traditions of innovation’ seems to be appropriate. Take the case of the French historians known collectively as the ‘Annales School’, a group that has survived for four generations, beginning with the founders, Marc Bloch and Lucien Febvre; continued by Febvre's intellectual heir, Fernand Braudel, and his colleague the economic historian Ernest Labrousse; succeeded by a third generation, including the medievalist Jacques Le Goff; and continued in the fourth generation by Roger Chartier, Bernard Lepetit and others.77 Each generation learned from the previous ones, but each generation developed a distinctive approach, with individual variations.
Despite the claims to universal knowledge on the part of natural scientists, it has been argued that particular styles of thought and practices of research form part of national traditions as well as disciplinary ones.78 Anglo-American empiricism has often been contrasted with the German emphasis on theory. In the humanities, ‘four ways’ of practising anthropology have been studied, respectively British, German, French and American.79 In the case of the history of knowledge itself, three regional traditions (in a broad sense of the term ‘region’) have been particularly influential. The German tradition stems from the sociology of knowledge as practised by Mannheim and others, and draws on the work of German philosophers. The French tradition draws on the sociologist Emile Durkheim as well as on Foucault. The Anglo-American tradition has emerged from the history of the natural sciences. Even today, when many major works produced in all three traditions have been translated, differences of approach remain visible in the ways in which knowledge is gathered and elaborated, processes that will be discussed in the following chapter.
The spread, transfer or dissemination of knowledge has often been discussed. Scholars used to assume that what was disseminated remained more or less the same as it moved from place to place or from person to person. Today, on the other hand, the opposite assumption has become dominant, in other words the idea that what arrives differs in important respects from what set out. It is mediated. Propositional knowledge (‘knowledge that’) needs to be translated into different languages in order to travel, but concepts that are central in one language may be lacking in others, as missionaries to China, for instance, found when they attempted to translate the Christian idea of ‘God’. Hence the need for ‘negotiation’. Indeed, one might say that translation is a kind of negotiation, while negotiation is a kind of translation.80
Translation between languages offers particularly clear examples of the problems of what is known as ‘cultural translation’, in the sense of the adoption and consequent adaptation to one culture of items originating in another. A given ‘culture of knowledge’, large or small, forms a system, and if a new item is introduced into the system it is virtually bound to be modified, even if, in the longer term, the system is modified as well. Cultural ‘transplantation’ is followed by cultural ‘transformation’.81 In short, following a model involves a certain degree of innovation.
Conversely, what is generally recognized as innovation will often turn out, if we analyse it more closely, to be an adaptation of an earlier practice or institution – a free or creative adaptation, but an adaptation nonetheless. In similar fashion, it has been suggested that new ideas come into being by extending or ‘displacing’ old ones.82 Thinking of innovation as displacement draws attention to the role of ‘displaced people’.
One kind of displaced person is the exile or refugee, like the Greeks who fled to the West as the Ottoman Empire expanded in the fifteenth century, the Protestants who left Catholic countries in the sixteenth and seventeenth centuries or the Jewish intellectuals who participated in what has been called the ‘Great Exodus’ from Germany and Austria in the 1930s. The refugees took their intellectual capital with them, as in the case of the ‘skill migration’ of Protestant silk-weavers from France to London, Amsterdam and Berlin.
Looking for a job in their new home, other exiles turned to translation, a form of mediation between their former culture and their new one. In the United States in the mid twentieth century, for instance, German-speaking refugees introduced the ideas of philosophers such as Nietzsche, psychologists such as Freud and sociologists such as Max Weber. They translated texts into English, and they also engaged in ‘cultural translation’, explaining foreign ideas in terms that members of the host culture would understand. The result was a kind of hybridization, most obviously between the American tradition of empiricism and pragmatism and the German tradition of theory, incorporated in the Institute for Social Research at Frankfurt, mentioned above, which was transferred to New York after Hitler came to power and later migrated to California.83
Another kind of migrant intellectual might be described as a ‘nomad’ or a ‘renegade’. Academic nomads or renegades are individuals who were trained in one discipline but migrate to another, taking along with them the habitus of the old discipline but applying or adapting it to the new. Vilfredo Pareto, for instance, was trained as an engineer and carried over ideas from engineering, notably the concept of equilibrium, into the study of economics and sociology. Again, Robert Park, a leading member of the Chicago School of sociology, was active as a newspaper reporter before he entered academic life. He carried with him the habit of investigative reporting, turning it into sociological ‘fieldwork’ in the city. Both Pareto and Park may be described as translators between disciplines.84
Equipped with this conceptual tool-box, we may turn in the following chapter to the many processes that information undergoes as it is turned into knowledge, disseminated in different places among different social groups and employed for a variety of purposes.