‘What must the world be like in order that man may know it?’
Thomas Kuhn, The Structure of Scientific Revolutions (1962)1
No theory of knowledge should attempt to explain why we are successful in our attempts to explain things … there are many worlds, possible and actual worlds, in which a search for knowledge and for regularities would fail.
Karl Popper, Objective Knowledge (1972)2
In 1571 Montaigne retired from his professional life as a judge. He was 37, still young by our standards, but on the threshold of old age by those of the sixteenth century. He was mourning – still mourning – the death of La Boëtie in 1563, and he was preoccupied with thoughts of dying. He intended to spend time with his books – he owned a thousand volumes, a vast collection. On the beams of his library he had painted sixty or so quotations from the classics, all emphasizing the vanity of human life and of human aspirations to knowledge. They were, in effect, an epitome of his reading. He had a medal struck which bore the words ‘Que sçay-je? – What do I know?’ – over the image of a pair of scales. The scales did not represent justice, for they teetered. They represented uncertainty.
Montaigne found no happiness in his new life, and so he turned to writing as a form of therapy, a way of keeping himself company. The result was to be the Essays, of which the first volume, containing Books One and Two, was published in 1580 (a third book was added in 1588, and Montaigne went on revising his essays until his death in 1592). The word essays has come to seem normal and natural to us – students write essays all the time. But when Montaigne used the word it meant an assay or a test. Montaigne was testing himself, exploring himself, studying himself, trying to make sense of himself. In the Essays Montaigne was making a fundamental claim about our knowledge of the world, that knowledge is always subjective, personal. He was also inventing a new literary genre.
In the first edition of his Essays, two essays were of particular importance. At the centre of Book One was an essay on friendship, a prelude to what was originally intended to be the first publication of a remarkable work by La Boëtie, The Discourse of Voluntary Servitude, a work now often regarded as the first anarchist text.3 In the end, Montaigne was unable to publish The Discourse because it had already been published by Protestant rebels and condemned as seditious. La Boëtie wanted to know why we obey authority, and his answer was that we shouldn’t.
At the heart of Book Two (though not this time at the centre – the central essay is entitled ‘On Freedom of Conscience’) was the longest of all the essays, ‘An Apology for Raimond Sebond’, a passage from which, as we saw in Chapter 9, was crucial to later thinking about the laws of nature. Sebond (1385–1436), a Catalan theologian, had written, in Latin, a book providing a rational demonstration of the truths of Christianity, and Montaigne had been requested by his dying father to translate it into French (Montaigne dated the dedicatory epistle to the translation, addressed to his father, to the day of his father’s death, 18 June 1568). Thus the origin of the ‘Apology’ was every bit as private and personal as that of the essay on friendship, and here again we have a pairing of texts – Sebond’s defence of Christianity with Montaigne’s ‘Apology’. But this time it is Montaigne who is the author of the revolutionary text, for the ‘Apology’ was only in outward appearance a defence of Sebond; on closer examination it is a devastating attack on everything he stands for, a sustained critique of religion. Evidently Montaigne’s argument had to be expressed with exquisite care. Even Sebond’s work had fallen foul of the censors, not for its basic thrust, but for the extravagant claims made on its behalf by Sebond in his preface. Since Sebond had harnessed faith and reason together, Montaigne’s critique set out to undermine faith by showing that all claims to knowledge are overstated. What was at stake in the ‘Apology’ was not just the reasonableness of Christian faith, but the reliability of all the claims made by philosophers. The subjects we would now call ‘science’ formed, in the sixteenth century, part of philosophy,i so Montaigne’s ‘Apology’ is, among other things, an attack on the science of his day.
The sources of Montaigne’s scepticism are not difficult to identify. The bitter conflict between Protestants and Catholics, which had led to prolonged civil war in France, to the most terrible massacres and brutalities, had made all claims to truth seem partisan. Humanist learning (Montaigne had been brought up to speak Latin as his first language, so that he would have the learning his father lacked) had brought back to life the beliefs of the pagan Greeks and Romans, offering a real alternative to Christianity. The philosophical disputes of the medieval universities (between the Aristotelianism of Avicenna and the Aristotelianism of Averroes, and between realists and nominalists) had been made to seem parochial by the publication of two texts unknown to the Middle Ages: On the Nature of Things by Lucretius, a work of materialist atheism which Montaigne had studied with great care (his copy, heavily annotated, has recently been identified); and the Outlines of Pyrrhonism of Sextus Empiricus (rediscovered in the 1420s but only published in 1562).4 The discovery of the New World had fatally undermined any claim that there are some things on which all human beings can agree – here were societies practising nudity and cannibalism.
Montaigne’s scepticism had its limits. He did not doubt that you can make wine out of grapes, or find your way from Bordeaux to Paris. Someone had once tried to persuade him that the ancients did not understand the winds in the Mediterranean. Montaigne was impatient with such an argument: did they try to sail east and end up going west? Did they set out for Marseille and find themselves in Genoa? Of course not. He gave no indication of doubting that two plus two make four, or that the angles of a triangle add up to two right angles (although he found paradoxical a geometrical proof that two lines could approach each other for ever yet never meet).5 What he doubted was that you can prove the truth of the Christian religion or of any religion. He doubted that the universe had been created to provide a home for human beings, any more than a palace is built to house the rats that live in it.6 He doubted that there is any principle of morality which can command universal assent, and he doubted that any of our sophisticated intellectual systems make sense of how the world is. Doctors, he felt sure, were more likely to kill their patients than cure them. For nearly a millennium and a half Ptolemy had seemed an entirely reliable expert on all questions to do with geography and astronomy; and then the discovery of the New World had shown that his geographical knowledge was hopeless, and Copernicus had shown that there was at least a viable alternative to his cosmology.7 Our claims to knowledge, Montaigne said, are generally misconceived because we will not acknowledge our limits as human beings. We need to remember that the wisdom of Socrates consisted in acknowledging his own ignorance.8
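The geometrical proof that troubled him concerns asymptotes. A minimal modern illustration (my example, not one Montaigne cites) is the hyperbola:

\[
y = \frac{1}{x}, \qquad \frac{1}{x} > 0 \ \text{for every } x > 0, \qquad \lim_{x \to \infty} \frac{1}{x} = 0,
\]

so the curve draws ever closer to the line y = 0 as x grows, yet never touches it: two lines (in the classical sense, which includes curves) that approach each other for ever without ever meeting.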
Montaigne ended (or almost ended) the ‘Apology’ with a quotation from Seneca: ‘Oh, what a vile and abject thing is Man if he does not rise above humanity.’ ‘A pithy saying,’ he commented, ‘a most useful aspiration, but absurd withal. For to make a fistful bigger than the fist, an armful larger than the arm, or to try and make your stride wider than your legs can stretch, are things monstrous and impossible. Nor may a man mount above himself or above humanity: for he can see only with his own eyes, grip only with his own grasp.’ Of course he could not quite stop there, for the heretical implications were too clear. And so he went on: ‘He will rise if God proffers him – extraordinarily – His hand; he will rise by abandoning and disavowing his own means, letting himself be raised and pulled up by purely heavenly ones.’9 Was this a reluctant addition? Readers of Montaigne are – and always have been – sharply divided between those who think his protestations of Catholic orthodoxy were genuine, and those who think they were merely concessions to the censor. My own sympathies will already be clear.10 After all, Montaigne never gave an example of heavenly inspiration, of divine intervention, without hedging it about with doubts and difficulties. He pointed out that, far from our being made in the image of God, we make our gods in our own image: ‘we forge for ourselves the attributes of God, taking ourselves as the correlative.’11 One moment he insisted he believed in miracles, the next he doubted his own belief. In the end he grounded the obligation to be a Christian in the obligation to obey the laws of the country in which one finds oneself – and, from the point of view of a reasonable person, the content of those laws is entirely arbitrary.12
There is no need to resolve this issue here. What matters for present purposes is Montaigne’s rejection not of the practical knowledge of his day – wine-making and bread-baking – but of its learned knowledge: medicine, geography, astronomy. Montaigne called these various branches of knowledge ‘sciences’. Montaigne’s scepticism, when applied to the sciences of his time, was entirely justified: for there is not a single natural philosophical principle taught in the universities in 1580 that a student in the sciences would still learn today. Montaigne’s arguments against religious belief and against conventional moral certainties are still as sharp as ever they were; but his arguments against the sciences of his day have no purchase against the sciences of our day. Science is now something utterly different from what it was then.
Human beings, Montaigne argued, are imperfect, and so human knowledge is necessarily unreliable. Galen had claimed that the hand of a healthy doctor was the perfect instrument for judging hot and cold, wet and dry – the four qualities that made up the world. If the patient was hotter than the doctor’s hand they were, in absolute terms, hot, and that was that. The world had been divinely ordered so that our sensations of hot and cold corresponded to real qualitative differences. Montaigne would have none of this. We have five senses, but who knows how many we ought to have if we want to know what is really going on? Who knows what we are missing? And of course he is right: bats experience the world fundamentally differently from the way in which we experience it, and it is wrong to assume that echolocation merely enables them to know what we know by a different means, for it may give them insights which we will never have.13 Diderot, in the Letter on the Blind (1749), a work every bit as subversive as Montaigne’s ‘Apology’, was to formulate the view that a blind philosopher would of necessity be an atheist, for they would be quite unable to perceive order and harmony in the universe.14 What we know about the world and what we think we know depend entirely on how we perceive it.
Part of the great transformation that we know as the Scientific Revolution, a transformation which began in earnest the year after Montaigne retired to his library, consisted in improving our senses. The compass enabled sailors to perceive the earth’s magnetic field. The telescope and the microscope enabled scientists to see previously invisible worlds. The thermometer replaced Galen’s hand as a measure of temperature. The barometer made visible the pressure that the air exerts upon us, a pressure the skin cannot feel. The pendulum clock provided an objective measure of a subjective experience – the passage of time. New instruments meant new perceptions, and with them came new knowledge.
Most of these instruments relied, at least in part, on skills in glass manufacture, and all of them provided visual information. Alongside these we can put the mechanical reproduction of text and images by means of the printing press, which transformed the communication of knowledge and established new types of intellectual community. Montaigne’s Essays, which he wrote in his library surrounded by serried ranks of printed books, are themselves testimony to the emergence of a new bookish culture; disseminated in print, they showed each and every reader how to engage in their own project of self-exploration.
There is a tendency to think of the telescope as a scientific instrument and the printing press as something external to science: but the first telescopes were not made by or for scientists, and the printing press transformed the intellectual aspirations of scientists because it was now possible to work with detailed images alongside text. Both began as practical technologies and became scientific instruments. The new science was thus dependent on a few key technologies which functioned, to use Elizabeth Eisenstein’s phrase, as ‘agents of change’.15
The printing press had a further crucial consequence which we can also see reflected in Montaigne’s Essays: it fostered a new critical attitude to authority which led to the insistence that knowledge must be tested and retested. In Montaigne’s case this resulted in a peculiar emphasis on the subjectivity of what we know, its dependence on our personal experience. Inherited knowledge could no longer be accepted without question. But as new knowledge accumulated, the printing press, instead of fostering scepticism, began to make possible a new type of confidence. Facts could be checked, experiments replicated, and authorities set side by side and compared. Intellectual scrutiny could be much more intensive and extensive than ever before. The printing press was the precondition for this new insistence that knowledge, no longer authoritative, might at last become reliable.
The new instruments and the oceans of printed books opened up new experiences and destroyed old authorities. The old history of science, the history of science of Burtt, Butterfield and Koyré, rejected the idea that the new science of the seventeenth century was primarily the consequence of this new evidence; what mattered were new ways of thinking. The new history of science, beginning with Kuhn, tried to ground these new ways of thinking in intellectual communities: the success of new ideas depended upon conflict and competition within and between communities of thinkers. By problematizing the idea that experiments could be successfully replicated, the generation after Kuhn, the generation of Shapin and Schaffer, sought to demonstrate that experience itself is unpredictable, malleable, socially constructed. On their account (and here they parted company with Kuhn) the social history of knowledge is not just one aspect of the history of science; rather the social history of knowledge is the only history that can be written.
Recognizing the inadequacies of postmodern history of science does not mean that we should simply go back to Kuhn or to Koyré. The problem with concentrating on the paradigm shifts that interested them is that you lose sight of the wider environment within which those shifts took place: thus Kuhn gave an account of Copernicanism in which discovery was taken for granted, the telescope barely appeared, and the language in which science was conducted was never mentioned. Kuhn’s approach took the scientific enterprise as given, and so inevitably it missed the process of its formation, which was critical to the belated triumph of Copernicanism. Kuhn failed to see what he was missing because he assumed that science had been invented long before 1543 and because he seriously underestimated the obstacles to the adoption of Copernicanism, obstacles which came from the subordination of astronomy to philosophy. Such an approach might explain local revisions: how Pascal developed a theory of pressure, or Boyle came up with Boyle’s law; it cannot explain the long series of vacuum experiments from Berti to Papin (Newcomen’s atmospheric steam engine was not so much a new beginning as the final conclusion of that extended enterprise), for during that sequence a new culture was constructed, one which sought to resolve intellectual disputes through experimentation. That culture was itself founded in imitation of an earlier enterprise, which sought to resolve disputes regarding the structure of the universe through ever more exact observation, the enterprise of the new astronomy founded by Tycho Brahe. As the mathematicians turned their attention from observation to experiment, from astronomy to physics, they found they required a new set of intellectual tools, a new language. Some of that language – hypotheses and theories – came from astronomy; some of it – facts, and later evidence – came from the law. This new vocabulary was crucial to explaining the status of the new knowledge, and yet it is a language that we have come to take so completely for granted that its invention has become invisible. The presumption has been, either that thinking comes naturally, or that the required intellectual tools for thinking about natural science had all been developed by the ancient Greeks. As we have seen, this is not the case.
Enquiries into science have tended to assume that there are basically three variables to take into account: experience (facts, experiments), scientific thought (hypotheses, theories), and society (social status, professional organizations, journals, networks, textbooks). Kuhn’s concept of a paradigm, which he presented as an amalgam of a practice, a theory, and an educational programme, represented a particular way of interlocking these three variables. This fundamental schema might have come into question with the publication of Ian Hacking’s The Emergence of Probability (1975), which argued that probability thinking provided a powerful intellectual tool which had not existed until the 1660s.ii But it should now be apparent that probability was just one of a series of key intellectual tools that appeared in the course of the seventeenth century: the materials out of which one could construct such a new history of science were not to hand in 1975.
Hacking’s identification of probability theory as a particular mode of thought could, however, have served to clarify the intellectual alternatives that were available before the emergence of probability. No lines written by Galileo are more frequently quoted than these:
Philosophy is written in this very great book which always lies open before our eyes (I mean the universe), but one cannot understand it unless one first learns to understand the language and recognize the characters in which it is written. It is written in mathematical language and the characters are triangles, circles and other geometrical figures; without these means it is humanly impossible to understand a word of it; without these there is only clueless scrabbling around in a dark labyrinth.16
In Galileo’s view the intellectual tools provided by geometry were the only tools required by a scientist. This was a reasonable view, since they were the only tools needed for Copernican astronomy and for Galileo’s two new sciences, the science of projectiles and the science of load-bearing structures.iii In insisting that these were the only tools required, Galileo was dismissing Aristotelian logic as an irrelevance. Since Galileo, of course, all sorts of new languages have been invented with which to do science, including algebra, calculus and probability theory.
It is easy to think that new knowledge comes from new types of apparatus – Galileo’s telescope, Boyle’s air pump, Newton’s prism – not from new intellectual tools.iv Often this is a mistaken view: in a hundred years’ time the randomized clinical trial (streptomycin, 1948) may look much more significant than the X-ray (1895) or even the MRI scanner (1973). New instruments are as plain as pikestaffs; new intellectual tools are not. As a result we tend to overestimate the importance of new technology and underestimate the rate of production and the impact of new intellectual tools. Good examples are Descartes’s innovation of using letters from near the end of the alphabet (x, y, z) to represent unknown quantities in equations, and William Jones’s introduction of the symbol π in 1706. Leibniz believed that the reform of mathematical symbols would improve reasoning just as effectively as the telescope had improved sight.17 Another example is the graph: graphs are now ubiquitous, so it comes as something of a shock to discover that they only began to be put to use in the natural sciences in the 1830s, and in the social sciences in the 1880s. The graph represents a powerful new tool for thinking.18 An absolutely fundamental concept, that of statistical significance, was first propounded by Ronald Fisher in 1925. Without it, Richard Doll would not have been able to prove, in 1950, that smoking causes lung cancer.
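To give a sense of what statistical significance does as an intellectual tool, here is a minimal sketch in Python – the counts are invented purely for illustration, not Doll’s data – using the Fisher exact test as implemented in SciPy:

```python
# Minimal illustration of a significance test on an invented 2x2 table.
# Question: could the excess of smokers among the (hypothetical) lung-cancer
# cases plausibly be a chance fluctuation?
from scipy.stats import fisher_exact

#                     smokers  non-smokers
cases = [90, 10]       # hypothetical lung-cancer patients
controls = [70, 30]    # hypothetical controls

odds_ratio, p_value = fisher_exact([cases, controls])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")
# A p-value well below 0.05 says that an association this strong would rarely
# arise by chance in a sample of this size.
```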
Physical tools work very differently from intellectual tools. Physical tools enable you to act in the world: a saw cuts through wood, and a hammer drives home nails. These tools are technology-dependent. The screwdriver only came into existence in the nineteenth century, when it became possible to mass-produce identical screws; before that, the few handmade screws in use were turned with the tip of a knife blade.19 Telescopes and microscopes depended on pre-existing techniques for making lenses, and thermometers and barometers depended on pre-existing techniques for blowing glass. Telescopes and thermometers do not change the world around them as saws and hammers do, but they change our awareness of the world. They transform our senses. Montaigne said that people can see only with their own eyes; when they look through a telescope (which of course Montaigne never did) they still see only with their own eyes, but they see things they could never see with their unaided eyesight.
Intellectual tools, by contrast, manipulate ideas, not the world. They have conceptual preconditions, not technological preconditions. Some instruments are both physical and intellectual tools. An abacus is a physical tool for carrying out complicated calculations; it enables you to add and subtract, multiply and divide. It is perfectly material, but what it produces is a number, and a number is neither material nor immaterial. An abacus is a physical tool for performing mental work. So too are the Arabic numerals we take for granted. I write 10, 28, 54, not, as the Romans did, x, xxviii, liv. Arabic numerals are tools which enable me to add and subtract, multiply and divide on a piece of paper far more fluently than I could with Roman numerals. They are tools that exist as notations on the page and in my mind; like the abacus, they transform the way I operate on numbers. The number zero (unknown to the Greeks and the Romans), the decimal point (invented by Christoph Clavius in 1593), algebra, calculus: these are intellectual tools which transform what mathematicians can do.20
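A small worked example (mine, purely for illustration) of what positional notation buys:

\[
28 \times 54 = 28 \times 50 + 28 \times 4 = 1400 + 112 = 1512,
\]

a column-by-column routine that any schoolchild can carry out on paper; with xxviii and liv there is no comparable written procedure, which is why a Roman reckoner turned to the abacus or the counting board.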
Modern science, it should now be apparent, depends on a set of intellectual tools which are every bit as important as the abacus or algebra, but which, unlike the abacus, do not exist as material objects, and which, unlike Arabic numerals, algebra, or the decimal point, do not require a particular type of inscription. They are, at first sight, merely words (‘facts’, ‘experiments’, ‘hypotheses’, ‘theories’, ‘laws of nature’, and indeed ‘probability’); but the words encapsulate new ways of thinking. The peculiar thing about these intellectual tools is that (unlike the intellectual tools employed by mathematicians) they are contingent, fallible, imperfect; yet they make possible reliable and robust knowledge. They imply philosophical claims which are difficult, perhaps impossible, to defend, yet in practice they work well. They served as a passage between Montaigne’s world, a world of belief and misplaced conviction, and our world, the world of reliable and effective knowledge. They explain the puzzle that we still cannot make a fistful bigger than a fist, or a stride longer than our legs can stretch, but that we can now know more than Montaigne could know. Just as the telescope improved the capacities of the eye, these tools improved the capacities of the mind.
During the seventeenth century the meaning of key words shifted and changed, and a modern scientific – or rather metascientific – vocabulary slowly took shape. This both reflected and gave rise to new styles of thinking.21 These changes were rarely the subject of explicit debate within the intellectual community, and have generally been overlooked by historians and philosophers (partly because the terms themselves were not new – ‘probability’ is typical in this respect – even if they were now being used in a new way), but they transformed the character of knowledge claims.22
Alongside these intellectual tools we can see the emergence of a community accustomed to using them: the new language of science and the new community of scientists are two aspects of a single process, since languages are never private. What held this community together was not just the new language, but a set of competitive and cooperative values which were expressed in the language used to describe the scientific enterprise (rather than in scientific arguments themselves), expressed in terms of discovery and progress and eventually institutionalized in eponymy. What is striking about these intellectual tools and cultural values is that they have proved to have a history quite unlike that of paradigms. Paradigms flourish; some then die, and others get relegated to introductory textbooks. The new language and the new values of science have now survived for 300 years (500 if we go back to their common origin in ‘discovery’), and there is nothing to suggest they are likely to go out of fashion soon. Just like algebra and calculus, these tools and these values represent acquisitions which are too powerful to be discarded, and which remain not as museum pieces but in constant use. Why? Because the new language and culture of science still constitute (and I believe will always constitute) the basic framework within which the scientific enterprise is conducted. Their invention is part and parcel of the invention of science.
The Scientific Revolution was a single transformative process, the cumulative consequence, not of one sort of change repeated many times, but of several distinct types of change overlapping and interlocking with each other. First, there was the cultural framework within which science was invented. This framework consisted of concepts such as discovery, originality, progress and authorship, and of the practices (such as eponymy) associated with them. An older school of historians and philosophers took this framework for granted, while a newer school has wanted to debunk or deconstruct the concepts rather than explain their significance and trace their origin. This culture emerged at a particular moment in time: before it came into existence there could be no science as we understand the term. Of course the critics are right in that concepts such as discovery are problematic: discoveries are rarely made by a single individual at a precise moment in time. But, like plenty of other problematic concepts (democracy, justice, transubstantiation), they provided, and still provide, a framework within which people made, and still make, sense of their activities and decided, and still decide, how to live their lives. We cannot understand science without studying the history of these foundational concepts.
Alongside this new framework, the printing press was transforming the nature of intellectual communities, the knowledge they could exchange, and the attitude to authority and to evidence that came naturally to them. Next there came new instruments (telescopes, microscopes, barometers, prisms), and new theories (Galileo’s law of fall, Kepler’s laws of planetary motion, Newton’s theory of light and colour). Finally the new science was given a distinctive identity by a new language of facts, theories, hypotheses and laws. Five fundamental changes thus interacted and interlocked in the course of the seventeenth century to produce modern science. Changes in the wider culture, in the availability of and attitude to evidence, in instrumentation, in scientific theories narrowly defined, and in the language of science and the community of language users all operated across different time-scales, and were driven by different, independent factors. But the cumulative effect was a fundamental transformation in the nature of our knowledge of the physical world, the invention of science.
Since each of these changes was necessary for the construction of the new science, we should be wary of trying to rank them. But, if one looks closely, it is apparent that the new science was about one thing more than anything else, and that is the triumph of experience over philosophy. Each and every one of these changes weakened the position of the philosophers and strengthened the position of the mathematicians, who, unlike the philosophers, welcomed new information. The new language of science was above all a language which gave the new scientists tools for handling evidence, or, as it was called at the time, experience. Leonardo, Pascal and Diderot (and Vadianus, Contarini, Cartier, and all the others) were right: it was experience that marked the difference between the new sciences and the old.
Montaigne, too, was right – right to think that the men and women of his day were hopelessly fallible when it came to understanding the world. Since then, the claims of the postmodernists notwithstanding, we have learnt to develop reliable knowledge, even though we as human beings have continued to be as fallible as ever. Of course, our present-day knowledge will prove to be incomplete and limited in the eyes of future generations; we cannot even begin to guess what will one day be known. But there is no prospect of it proving simply unreliable. We can calculate reliably the path a rocket will take as it flies from the earth to Mars. We can sequence human DNA, and identify genetic mutations that cause, for example, diabetes. We can build a particle accelerator. We could not do these things if our knowledge were entirely misconceived – anyone who suggests we could should be met with the same impatience as that with which Montaigne greeted the claim that the ancients did not understand the Mediterranean wind system.
Hilary Putnam claimed in 1975 that realism, the belief that science gets at the truth, ‘is the only philosophy that doesn’t make the success of science a miracle’.23 The thinking is simple: science is very good at explaining what happens and predicting what is going to happen. If scientific knowledge is true this state of affairs needs no further explanation; but if scientific knowledge is not true then only a miracle could bring about such a perfect coincidence between the predictions of scientists and what actually happens. Putnam’s argument was demolished by Larry Laudan, who objected to the claim that successful scientific theories were likely to be true, and he was quite right to do so.24 Plenty of theories which we now regard as plain wrong have been successful in the past. By this I do not mean the theories that always were defective, were recognized as being defective by some people at the time, but nevertheless acquired a widespread following: Hippocratic (humoral) medicine, or alchemy, or phrenology. I mean rather theories which became well-established within the science of their time, were based upon significant evidence, appeared to provide robust explanations, and were successfully used to make novel predictions: theories such as the Ptolemaic system, phlogiston (a substance held, from 1667 until late in the eighteenth century, to be released by combustible substances when set alight), caloric (an elastic fluid which was supposed, in the first half of the nineteenth century, to be the material basis of heat) and the electromagnetic aether (which was held, in the second half of the nineteenth century, to be the medium for the propagation of light).
These cases differ from that, for example, of Newtonian physics. Using Einstein’s theory of relativity you can construct a world – the world of our daily experience – in which Newtonian laws very closely correspond to what actually happens. Astrophysicists still use Newton, not Einstein, to plot the orbits of spacecraft, because although the Newtonian calculations are based on what we would now regard as misconceptions, the differences between them and calculations that recognize the relativity of space and time are too small to be worth worrying about (a rough estimate below suggests just how small). Einsteinian physics can thus be regarded as inheriting the results of Newtonian physics while going far beyond them. But in the cases of caloric or the electromagnetic aether there is no inheritor theory, and we would not now say that these theories, which at one time seemed perfectly well established, were useful approximations of the truth. Nevertheless, it does not follow from the fact that we no longer regard these theories as true, or even useful, that they were never associated with reliable experimental practices; like Ptolemaic astronomy, they were well-founded within certain limits. Laudan’s arguments tell against Putnam’s claim that science gets at the truth, not against the claim that what marks science out is that it is reliable.25 As Margaret Cavendish put it in 1664, comparing the search for the truth to the futile search for the philosopher’s stone which would turn base metal into gold:
although Natural Philosophers cannot find out the absolute truth of Nature, or Natures ground-works, or the hidden causes of natural effects; nevertheless they have found out many necessary and profitable Arts and Sciences, to benefit the life of man … Probability is next to truth, and the search of a hidden cause finds out visible effects.26
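To put a rough number on the Newtonian case mentioned above (my estimate, not the author’s): the leading relativistic corrections scale with the square of the ratio of a body’s speed to the speed of light, and interplanetary spacecraft travel at speeds of the order of 30 kilometres per second, so

\[
\left(\frac{v}{c}\right)^{2} \approx \left(\frac{3 \times 10^{4}\ \text{m/s}}{3 \times 10^{8}\ \text{m/s}}\right)^{2} = 10^{-8},
\]

a discrepancy of roughly one part in a hundred million – far smaller than the other uncertainties involved in planning a flight to Mars.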
Of course reliability is a slippery concept. We only need to turn to the doctors of Montaigne’s day for a cautionary example. They thought they were using their knowledge to cure patients. In fact, their preferred remedies (bleedings and purges) did no good at all.27 They mistook the patients’ spontaneous recovery (thanks to the workings of their immune systems), combined with the placebo effect, for cures brought about by medical therapy (and intelligent bystanders such as Montaigne suspected as much).v In medicine there were no reliable methods of measuring success until the nineteenth century.
But the Ptolemaic astronomers of Montaigne’s day were very different from the Hippocratic doctors. Clavius claimed that eccentrics and epicycles must exist, for otherwise the success of the predictions made by astronomers would be inexplicable:
But by the assumption of eccentric and epicyclic circles not only are all the appearances already known preserved, but also future phenomena are predicted, the time of which is altogether unknown. … it is not credible that we should force the heavens (but we seem to force them, if eccentrics and epicycles are fictions, as our adversaries would have it) to obey our fictions and to move as we wish or as agrees with our principles.28
Clavius was wrong – there are no eccentrics and epicycles – but he was right to claim that he could predict the future movements of the heavenly bodies with a high degree of reliability. Like Clavius, we test our knowledge by doing things with it, which is the fundamental difference between our knowledge and most of the sciences of Montaigne’s day. In comparison to sixteenth-century philosophy, all our sciences are applied sciences, and all our scientific knowledge is robust enough to withstand real-world application, if only in the form of experiment. We can summarize this in two words: Science Works.
If you learn to navigate a boat you will be taught to work with a Ptolemaic system, with a stationary earth and a moving sun, not because this is true but because it makes for an easy set of calculations. So a false theory can be perfectly reliable when used in the appropriate context. If we no longer use epicycles, phlogiston, caloric or aether, it is not because no reliable results can be obtained with those theories; it is because we have alternative theories (theories we take to be true) which are just as easy to use and have a wider range of applications. There are no good grounds for thinking that one day our physical sciences will prove, like Hippocratic medicine, to be learned nonsense; but it is perfectly possible that where they are right they are, like Ptolemy’s epicycles, right for entirely the wrong reasons. Science offers reliable knowledge (that is, reliable prediction and control), not truth.29
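A concrete instance of that ‘easy set of calculations’ (a standard formula of celestial navigation, given here as an illustration rather than drawn from the text): the sight-reduction formula predicts the altitude at which the sun should be observed from an assumed position, working entirely with the sun’s apparent, geocentric coordinates, exactly as if it circled a stationary earth:

\[
\sin h = \sin \varphi \, \sin \delta + \cos \varphi \, \cos \delta \, \cos \mathrm{LHA},
\]

where h is the computed altitude, φ the navigator’s assumed latitude, δ the sun’s tabulated declination and LHA its local hour angle. The formula is reliable in exactly the sense at issue here – it gets you across an ocean – while saying nothing about whether it is the sun or the earth that moves.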
One day we may discover that some of our most cherished forms of knowledge are as obsolete as epicycles, phlogiston, caloric, the electromagnetic aether and, indeed, Newtonian physics. But it seems virtually certain that future scientists will still be talking about facts and theories, experiments and hypotheses. This conceptual framework has proved remarkably stable, even while the scientific knowledge it is used to describe and justify has changed beyond all recognition. Just as any progressive knowledge of natural processes would need a concept akin to ‘discovery’, so as further advances occurred it would need a way of representing knowledge as both reliable and defeasible: terms that do the work done by ‘facts’, ‘theories’ and ‘hypotheses’ would have to play a role in any mature scientific enterprise.
We should end by acknowledging that we have the scientific knowledge we have against all the odds. There is no evidence that the universe was made with us in mind, but by good fortune we seem to have the sensory apparatus and the mental capacities required to make a start on understanding it; and over the last 600 years we have fashioned the intellectual and material tools needed to make progress in our understanding. Robert Boyle asked:
And how will it be prov’d, that the Omniscient God, or that admirable Contriver, Nature, can exhibit Phaenomena by no wayes, but such as are explicable, by the dim Reason of Man? I say, Explicable rather than Intelligible; because there may be things, which though we might understand well enough, if God, or some more intelligent Being then our own, did make it his Work to inform us of them, yet we should never of our selves finde out those Truths.30
God, angels, and extraterrestrials have yet to come to our assistance; yet more and more phenomena have proved explicable by the dim reason of human beings.
Science – the research programme, the experimental method, the interlocking of pure science and new technology, the language of defeasible knowledge – was invented between 1572 and 1704. We still live with the consequences, and it seems likely that human beings always will. But we do not just live with the technological benefits of science: the modern scientific way of thinking has become so much part of our culture that it has now become difficult to think our way back into a world where people did not speak of facts, hypotheses and theories, where knowledge was not grounded in evidence, where nature did not have laws. The Scientific Revolution has become almost invisible simply because it has been so astonishingly successful.