Chapter 6
Evolutionary Epistemology
Popper’s Philosophy of Knowledge
Popper’s Selectionism
Objectivity without Evolution
Objectivity with Evolution
The Bipartite History of Science
The Consequences of the Failure to Grasp the Bipartite Nature of the History of Science
Popper’s Philosophy of Knowledge
Let us return to the beginning of the whole argument in chapter 1. We require a philosophy of science which has a number of characteristics. First, it must not legitimise itself by an appeal to the history of science because we have shown that the history of science is dependent on a philosophy of science. Second, it must account for the fact that in science there has been progress, because, as we have argued, the history of science is notably different from the history of art and literature. Third, it must be capable of explaining the remarkable discontinuity in the history of science: it must be able to account for the fact that the pursuit of science and progress in scientific knowledge is rare, sporadic and exceptional. Fourth – and this requirement follows directly from the second requirement – it must allow that scientific theories are commensurable and that there are reasons other than fashions and whims for replacing one theory by another theory. Finally, last but not least, the philosophy of science we require must allow for the fact that all genuine knowledge has to be confronted with the environment and must not be permitted to be retained by the fiat of a subjective will or the consensus of an elite, no matter how carefully institutionalised in the shape of an Academy or Royal Society. Since conscious knowledge about the world is a continuation of pre-conscious knowledge enshrined in organisms which are adapted to an environment, exposure of knowledge to the environment for criticism and eventual elimination is an essential requirement for every philosophy of science.
If the retention of knowledge depends, according to a philosophy of science, on ‘acceptance’ by a designated body of people rather than on toleration by the environment, that philosophy of science breaks the continuity between evolution and the growth of knowledge and sets up an artificial, qualitative barrier between the ‘fit’ involved in successful evolution and the ‘fit’ involved in true knowledge. However, one qualification is needed. We said that genuine knowledge must be tolerated by the environment. Toleration is indeed all that is possible and necessary. Neither organisms nor conscious knowledge is determined by the environment. Both organisms and conscious knowledge are, in fact, underdetermined by the environment. An adapted organism is simply an organism which survives and is not eliminated by the environment. It is not an organism which fits the environment like a hand in a glove. The organism will survive as long as it is compatible with the environment. The same applies to knowledge. Conscious knowledge always says more than the environment warrants. It is therefore underdetermined. In saying more, it can still be considered a ‘fit’ – or an adaequatio rei et intellectus, to use a medieval, scholastic term – as long as what it asserts is compatible with the environment.
Given these requirements, we can see that Kuhn’s philosophy of science does not qualify. In the first instance, it clearly does not stand on its own feet but claims to be derived from the history of science. Since such a history presupposes a philosophy of science, it cannot legitimise one – least of all the philosophy of science which has been used as the general framework for composing it. Next, Kuhn’s philosophy of science is unhistorical in that it avowedly fails to explain why one paradigm preceded another and why one paradigm was earlier or later than another. It cannot see any connection between the growth of knowledge and the succession of paradigms. What is more, by making any one paradigm incommensurable with any other paradigm, it makes all knowledge (i.e. normal science) relative to the paradigm which happens to be fashionable and thus makes it impossible to weigh the relative merits of the paradigms themselves. As I have argued above, there is something to be said for the profession of such incommensurability when one is dealing with art or with social institutions. There may indeed be no obvious reason why one should compare the institutions of the Trobriand Islanders with those of the Andaman Islanders. But when one is dealing with knowledge, the absence of commensurability is a fatal flaw, for the growth of knowledge is a historical phenomenon – an approximation to a goal – in a sense in which the history of art and literature or the successive emergences of social institutions are not. Again, since ‘language games’ are, partly, ends in themselves (i.e. forms of social exchange and communication used only obliquely to convey information) nobody need blame Wittgenstein for maintaining that one cannot measure the relative merits of one game against another. But scientific knowledge is different. It is neither a closed system nor a game.
It is therefore essential that one theory or paradigm should be capable of being compared with any other; any philosophy of scientific knowledge which expressly precludes such comparison is constructed in such a way as not to be relevant. The account it gives of scientific knowledge is so wide of the mark as to be well-nigh unrecognisable.
By contrast, Karl Popper’s philosophy of science qualifies by all the requirements listed. It is now almost half a century since Popper first proposed that the chief characteristic of scientific knowledge is its falsifiability. To start with, this philosophy of science is based on the incontestable consideration that a general theory can never be verified because observations in its favour must, by necessity, be limited in number. This philosophy does not depend on the observation of the history of science. Its truth depends on nothing but a logical argument. Not even Carnap, who found Popper’s scepticism in regard to induction unacceptable, doubted Popper’s logic. He merely maintained that, in spite of this logic, it made perfectly good sense to say of a certain hypothesis that it was positively confirmed. Psychologically speaking, Carnap is right. Repeated observation of a phenomenon countenances a guess that we have here a law of nature. Nobody would deny this. What is to be denied, however, is the claim that repeated observation justifies rather than countenances the guess and that such justification proves the truth of the guess.
In summary fashion, this philosophy of knowledge explains that the search for knowledge begins with guesses and conjectures which must be framed in such a way that they are at least in theory falsifiable. Their falsifiability assures them of an ‘empirical content’ – that is, when falsifiable, we know that they are about a real world and different from mere imaginings which do not refer to anything objective and are, therefore, not falsifiable. The advance of knowledge depends to a large extent on the boldness of these conjectures. When these conjectures are falsifiable, but not yet falsified, they are said to be provisionally true. In this account of knowledge, there can be no certain knowledge, no certain method for acquiring it, no guarantee that acquired knowledge (i.e. unfalsified knowledge) is true by virtue of having been acquired according to a ‘correct’ method. The rationality of knowledge consists in its exposure to criticism and falsification; not in the presence of a ‘rational’ method for obtaining it, as observation and induction were alleged to be. In short, Popper stood the old Positivism on its head: instead of starting with the collection of observations and working forward by induction to generalisations, Popper maintained that we start with general conjectures, make deductions from them and end up by trying to falsify the deductions.
In rejecting verification as a legitimate basis of knowledge, Popper rejected all forms of Positivism and the assumption – or ought one to call it a presumption? – that meaningful statements must be equivalent to some logical construction upon terms which refer to immediate experience. In order to hold fast to the distinction between metaphysics or superstition and genuine knowledge, a distinction which had been so important for Positivism, Popper suggested that the distinction can be based upon a consideration other than verification. Since, logically, one single falsifying observation falsifies the general theory or law which suggested that the observation be made, Popper held up falsifiability as the criterion by which one can distinguish phoney knowledge from genuine knowledge. If knowledge can, ideally, be falsified and if one can point to the observation which would falsify it, it must be knowledge about the world. In giving up the requirement of verification and substituting the demand for falsification, he did not give up the demarcation between knowledge and non-knowledge. He merely rejected the notion that knowledge is built up piecemeal, by careful collection of observations and by summation of separate observations, from the bottom up. He proposed, instead – without surrendering the possibility of distinguishing meaningful knowledge from meaningless knowledge – that we think of knowledge as something proposed by the mind and then exposed to falsification. One starts with a general theory, deduces particular statements from it and then seeks to discover whether these statements can be falsified by observation. Having a general theory right at the beginning of the growth of knowledge, one knows what one can deduce from it and learns what one ought to observe – that is, where one ought to look. In this position, the ‘knower’ is in an infinitely more advantageous position than the ‘knower’ envisaged by Positivism. 
The ‘knower’ of the Positivists is supposed to start with observations and he is supposed to make them without the assistance or guidance of presuppositions. Presuppositions, on the contrary, in the view of Positivism, are a distorting and disabling element. Supposedly starting with observations, the ‘knower’ is in an impossible and completely unreal situation. Imagine what one might do in response to an invitation to ‘observe’! Confronted by such an invitation, ought one simply to stare straight ahead or gaze at the nearest object in front of one’s eyes? Or should one watch, instead, the space between one’s eyes and the nearest object? When asked to ‘observe’, as the Positivists would have it, people usually and understandably do a double-take and ask, rightly: ‘Observe what?’
Moreover, with falsificationism, one can also initially distinguish between theories. If one is comparing two different theories, one will find the one which has easier, more obvious and more frequent possibilities for falsification preferable.
In its original form, there is a clear difficulty in Popper’s falsificationism. While the logic of the argument upon which it is based cannot possibly be impugned, it is very difficult to find actual examples in the history of science which show that theories are abandoned when they are falsified. Popper does give historical examples. Examples, however, are not a history of science and the Logic of Scientific Discovery is not only not a work of history but completely unhistorical. Its argument is entirely theoretical. In almost all cases where Popper gives examples, it has been found that the history of science can also be told in a different way so that the abandonment of an old theory appears not necessarily due to its actual falsification. The most striking story, of course, is the story about Einstein and the Michelson-Morley experiment. It would make a good Popperian chapter in the history of science if one could show that the experiment falsified the ether theory and that Einstein, aware of this falsification, then proceeded to make a different conjecture. Most histories of science indicate, however, that Einstein had not even heard of the famous experiment and that, whatever the reasons he had for his new theory, the falsification of the old theory was not one of them. The Michelson-Morley experiment, according to many histories of science, was not designed to test the ether theory, let alone to falsify it. It was designed to decide between two competing ether theories and, initially, its results were not even considered a falsification of any ether theory.
In The Logic of Scientific Discovery (first published as Logik der Forschung in 1934), Popper gave some brief examples of what he then thought the history of science was. These snippets of the history of science were composed with the help of simple falsificationism. Popper then said that ‘what compels the theorist to search for a better theory … is almost always the experimental falsification of a theory, so far accepted and corroborated … Famous examples are the Michelson-Morley experiment which led to the theory of relativity …’ 1 Since Popper wrote this, it has been established beyond reasonable doubt 2 that Einstein did not know of the Michelson-Morley experiment when he proposed the Special Theory of Relativity and that therefore one cannot consider the history of science as a progression determined by falsification of accepted theories and the subsequent invention of new theories.
It seems, therefore, that the history of science, however it is composed, does not really bear out Popper’s original account of the process of scientific reasoning or scientific progress. It is not that the account as such is false, but that it seems incomplete. Something was missing. The missing part has something to do with the fact that falsifying observations as such are very difficult, if not impossible. Every observation, as Popper himself keeps insisting, is ‘theory-laden’. If one makes an observation to test, for instance, a prediction made by the General Theory of Relativity, one has to rely on theories about the photosensitivity of certain plates, about the behaviour of the human retina, about the constitution of the measuring instruments – to mention only the most obvious examples. While in pure logic, one falsification renders a universal proposition false, in practice it is almost impossible to make any such simple falsifying observation, for in making it one always depends on other theories and this fact makes it always possible to put the blame for any falsification on those other theories. If one wishes, one can thus avoid falsification endlessly.
There seems to be a large number of difficulties in the concept of falsification, despite its logical impeccability. A falsifying proposition, to start with, has to occupy a privileged epistemological status. Unless such privilege is conceded, one can endlessly wonder whether the proposition which falsified a theory is true. If there is no privilege, the truth of such a proposition can be questioned in spite of the fact that such truth would have to depend on nothing more than a particular observation. Next, in an important sense, all theories which have ever been held have also been falsified because every observation relevant to the theory shows up anomalies. No water ever boils at exactly 100 degrees centigrade – but nobody would take any such anomaly as a falsification of the theory that water boils at 100 degrees centigrade. Moreover, all falsifications are themselves fallible and many practising scientists, though appreciative of the theory of falsificationism, have expressed doubts as to whether it is practically helpful in a problem situation. 3
One could also ask whether the requirement that beliefs about the world be falsifiable is naturally compelling to rational men. 4 On the face of it and out of context, one could not really say that it is ‘compelling’. But if one puts it into a biological context, one can see that falsification is related to elimination and, in such a context, it appears plausible as a mechanism of selection. The reason why it is not naturally compelling outside a biological context is that it depends on observation and that observation, in turn, is dependent on concepts. Hence, there can be errors and delusions and one can say no more than that falsification is accepted by convention as a standard of veracity or empirical content and that the errors, delusions, illusions, misunderstandings and ambiguities which must creep into concepts and words describing the falsifying observation are not minimised or kept at bay by conventions but are standardised by conventional language rules. Thus, everybody makes the same errors and nobody can identify the errors as errors. Errors which are not thus out of step cease to be errors. Alternatively, one can imagine that conventions set up social solidarity and co-operation among men who thus protect themselves indefinitely from the disastrous consequences of holding false beliefs. For example, a pastoral society could cling to the belief that its animals need to be fed only once a month. Under non-social circumstances, any man who held such a belief would cause his animals to be very lean and he would therefore lack food. However, in a social situation, he could make up for his false belief by banding together with twenty other men, stage a military campaign at regular intervals, and steal food from another tribe. Thus, the consequences of error are postponed and the process of falsification is nullified.
However, in a biological context, falsification appears in a different light. In that context, falsification becomes naturally compelling because it is the analogue of elimination in the struggle for survival. An animal of the prairie with weak legs is unlikely to leave many offspring. There is physical elimination. Concepts and their potential errors do not enter into this situation. When one comes to consciously held theories (i.e. disembodied organisms instead of incarnated theories), one must try to simulate such physical elimination as much as possible. Instead of seeking evasive action by running for shelter in conventions or solidarity, one ought to seek to expose one’s theories and beliefs as much as possible to falsification. For sure, concepts and words come into all observations, even into falsifying observations. But one has a choice. One can either assimilate such observations to the struggle for physical survival in biological evolution and correct the concepts that intrude in the light of one’s knowledge that the senses which make observations are adaptive and therefore not totally misleading all the time; or one can draw away from biology and assimilate observations to the protective shelter of social institutions and correct them in the light of one’s knowledge that all human communication, be it for thought or for action, must be based on rules. There can be no doubt that when one is aware of that choice, the reasonable man must opt for the first alternative.
There are other doubts in regard to falsification. Falsificationism by itself is not rich enough as a philosophy of science to enable one to write a history of science. It represents a sort of timeless ideal of rationality, 5 a ‘timeless and universal rational organon’. The history of science, however, has to include all sorts of irrational moments of inventiveness, of psychological and sociological determinants of inspiration so that, at best, the sequence of events which would emerge if one used refutation events as the sole criterion of selection, would show very little similarity to the actual history of science. This last consideration is not an argument against falsificationism; nor can one invoke ‘what actually happened’ in the history of science because, without a criterion of selection, one does not know what actually happened. This last argument merely casts some doubts upon the fruitfulness of falsificationism for the composition of a history of science.
Popper’s Selectionism
Perhaps for this reason and perhaps for other reasons – the history of the development of Popper’s thought has not yet been written – Popper himself enlarged his account of scientific progress. From the mid-1960s on, in all his writings, though falsification as such was not dropped, it came to be absorbed in a wider concept. Popper began to speak of ‘error elimination’. Logically, ‘error elimination’ is not really different from falsification. But in practice, it is a looser procedure and describes the actual progression of scientific knowledge in a more realistic way. Finally, in his Objective Knowledge 6 Popper showed that progressive error elimination is not only the method by which conscious scientific thought proceeds towards truth, but also the method by which all evolution of organisms, from the amoeba to Homo sapiens, progressed. Using ‘P1’ for the initial problem, ‘TS’ for tentative solutions, ‘EE’ for error elimination and ‘P2’ for the new problem, Popper described the fundamental evolutionary sequence of events as follows:

P1 → TS → EE → P2
The sequence, he argued, is not a cycle. The second problem is different from the first because it is a problem in a new situation which has arisen because of the tentative solutions which have been tried out and because of the error elimination which controls them. With this formula, one can write the history of the rejection of the ether theory and the emergence of Einstein’s theories without having to maintain that Einstein invented his new conjecture because an old conjecture had actually been falsified and because he knew of this falsification.
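The schema lends itself to a mechanical illustration. The following sketch is purely my own toy model, not anything Popper proposed: the ‘environment’ is a simple numerical law, the conjectures are guessed slopes, error elimination discards every conjecture that a fresh observation refutes, and the survivors, freely varied, constitute the new problem situation. The function name, the tolerance and the linear ‘law’ are all illustrative assumptions.

```python
import random

def conjectures_and_refutations(environment, rounds=10, seed=0):
    """A toy rendering of the schema P -> TS -> EE -> P.

    'environment' is a function giving the observed value for a test input.
    Conjectures are guessed slopes 'a' for a supposed law y = a * x. Each
    round, a fresh observation eliminates every conjecture it refutes (EE);
    the survivors, slightly varied, form the new problem situation.
    Nothing is ever verified -- conjectures are only tolerated so far.
    """
    rng = random.Random(seed)
    # The initial problem: which slope describes the environment?
    conjectures = [rng.uniform(-10.0, 10.0) for _ in range(200)]
    for _ in range(rounds):
        x = rng.uniform(1.0, 10.0)     # a fresh test situation
        observed = environment(x)      # the environment's verdict
        # EE: discard conjectures whose prediction misses the observation.
        survivors = [a for a in conjectures
                     if abs(a * x - observed) <= 0.1 * abs(observed) + 0.5]
        if not survivors:              # everything refuted: conjecture afresh
            survivors = [rng.uniform(-10.0, 10.0) for _ in range(200)]
        # The new problem situation: surviving conjectures, freely varied.
        conjectures = [a + rng.gauss(0.0, 0.05) for a in survivors]
    return conjectures

# Against an environment obeying y = 3x, the surviving conjectures tend
# to cluster near the slope 3 -- underdetermined, but tolerated.
surviving = conjectures_and_refutations(lambda x: 3.0 * x)
print(round(sum(surviving) / len(surviving), 2))
```

Note that the second problem situation is never identical with the first: the population of conjectures the next observation confronts has itself been shaped by the previous round of elimination, which is the point of saying the sequence is not a cycle.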
Not the least merit of the formula is that it accounts for the negative sociology of science mentioned in chapter 1. In regarding the growth of knowledge as the result of progressive error elimination, Popper must imply that there can be no growth of knowledge under those social conditions in which knowledge is artificially protected from competition with other knowledge because it is used as a social bond. When it is used as a social bond, it must be protected dogmatically. Alternatives cannot be entertained and therefore selection cannot take place. Alternatives are entertained only in those societies in which bonding is neutral and does not have to rely on particular pieces of knowledge. Moreover, the availability of alternatives depends also largely upon the creation of alternatives. Alternative pieces of knowledge are human creations. The more creative people are, the more alternatives will be available for competition. Creativity, in turn – though nobody quite knows what it ultimately depends on – is favoured when people can ‘bisociate’, in Koestler’s sense as explained in his The Act of Creation. It is favoured when people are capable of taking apart ideas that are conventionally associated with one another and of reassembling the bits in a new way – a sort of bricolage, if one prefers Lévi-Strauss’s terminology. Bisociation and bricolage are more likely to occur under these conditions of loose social bonding – in open societies, to use a good old Popperian term. Societies are never wholly open or wholly closed, but there are degrees of openness. The greater the openness, the better the chance for bricolage. Here, then, we have an explanation of the negative sociology of knowledge and can understand how social conditions encourage or discourage the growth of knowledge.
The concept of error elimination likens the growth of knowledge to evolution. Just as in ordinary evolution, organisms are naturally selected, so in the growth of knowledge, certain theories are selected. They are the theories which survive the process of error elimination. As in evolution, there is neither finality nor certainty in such survival: either a change in the environment or a novel observation can eliminate a species or a theory which has so far been selected.
In order to make selectionism viable, one has to show that biological evolution and the growth of articulated knowledge, or of knowledge articulated as general theories, have something in common. The factor they have in common is the fact that both biological evolution and articulated universal theories embody a store of knowledge. This knowledge is the information about the environment which remains after error elimination has taken place. In organisms, this knowledge is stored in the gene pool of a given population; in theories, it is the information encoded in those general theories which survive error elimination. Every organism which has ever appeared on earth is a sort of conjecture; if the information it contains about its environment is compatible with that environment, the organism survives because it will breed faster than organisms which contain information less compatible with the environment. The same is true of conscious theories. Every theory is a conjecture, a proposal. Once made, it is compared to the environment and, if compatible, it survives error elimination. Such information, be it in the gene pools of populations or in the minds of men, is never a portrait of the environment or a mirror-image induced in the organism or the mind by the environment. The proposals are not made in response to the environment. They are thrown up freely, by chance, and are unpredictable and undetermined.
It would not seem an exaggeration to say that organisms are embodied theories about the environment, and that theories held by conscious human beings are disembodied organisms. Consider the following example of knowledge stored in a surviving organism:
Infusoria … by means of their phobic and topic responses … seek an environment containing inter alia a particular concentration of H-ions. The commonest acid found in nature is carbonic acid, the highest concentration of which is found in waters in which paramecia flourish, especially in the vicinity of rotting vegetable matter, because the bacteria that live on this matter give off carbon dioxide. This relationship is so dependable, and the occurrence of other acids, let alone toxic ones, is so rare, that the paramecium manages admirably with one single item of information, which put into words would say that a certain acid concentration signifies the presence of a mass of bacteria on which to feed. 7
With this representation, Konrad Lorenz clearly likens organisms to theories and we can thus see the similarity between selection and falsification. In its original narrow sense, falsifiability was a measure of empirical content. A falsifiable theory obviously had a greater measure of reference to something real in the world than a theory which was not falsifiable and from which one could not deduce statements which, if false, would make the theory untrue. It is essential to bear in mind that, in the narrow sense, falsificationism was designed as a measure of empirical content and not as an instruction for deciding when theories had to be rejected. The real reason for discarding theories is not intimately linked to falsification at all. One rejects some theories and prefers others because one prefers theories which are more universal to theories which are less universal. The less universal a theory, the less its explanatory power. Hence, comparative degrees of universality rather than falsity determine the succession of theories. Being initially concerned with the demarcation problem and the measure of empirical content, the falsification criterion cannot automatically be extended to become the basis for a history of science in which theories are discarded because they are false. The temptation to apply it to the history of science is great and even Popper himself has at times given way to it. But in its narrow form, the falsification criterion should be used to measure degrees of empirical content, and not to write the history of how and why theories succeed one another. However, the great value of the falsification criterion lies in the fact that it can be blown up into selectionism and then used to compose a history of science.
Strictly speaking, narrow falsificationism obscured our understanding of the history of science and of the growth of knowledge. For example – and this example is frequently used by Popper himself – Einstein did not falsify Newtonian Mechanics. Nevertheless, the General Theory of Relativity superseded Newtonian Mechanics because it had greater universality. For that matter, the General Theory of Relativity does not discard Newtonian Mechanics. The latter remains as an alternative for certain kinds of calculations. For this reason, when we come to consider the growth of knowledge, we should amend falsificationism and transform it into selectionism. In doing so, we are drawing attention to the fact that the dismissal of a falsified theory is never mandatory. Falsifiability is an important consideration because it enables us to distinguish between theories according to their empirical content. This is doubly important after the disappearance of verification as a criterion of empirical content or reference to reality. But falsifiability must be qualified and expanded.
In saying that a theory need not be discarded even though it has been falsified, we are not asserting that it is immune from experience. On the contrary. Falsification has shown that it is anything but immune and that it has empirical content. But there may be other considerations for shoring it up with, for instance, an ad hoc hypothesis or whatever. Since the theory is not immune, one has to make a special decision not to discard it, even though one is now permitted to discard it. A theory which is not, in principle, falsifiable, is different from one which is falsifiable, falsified and nevertheless kept. Only a theory which is not, in principle, falsifiable is truly immune. A refusal to discard in the face of falsification is therefore not tantamount to a declaration of immunity.
The entire question as to whether theories which are falsifiable and, when falsified, not discarded, are immune or not, arises only when one takes falsification to be absolute. If one takes falsification to be a command to discard, then a falsified, non-discarded theory must appear immune. But if one takes falsification merely as a measure of content and as a permission to discard, a falsified, non-discarded theory is not immune. Such a theory is not kept because it is immune, but because one has reasons for not availing oneself of the falsification. The question, really, is whether one takes falsification to be a command or a permission to discard. It seems that if one is taking it as a command, one is acting like an inductivist or a Positivist who does not go by his own decision, but claims to be guided by the ‘instruction’ he has received from the ‘outside’. If one discards a theory in response to falsification alone, one is really acting like an old-fashioned Positivist who is discarding a theory because nature has induced him to do so, or because he has slavishly induced from nature that it must be discarded. In short, while falsifiability is a sine qua non of empirical content and reference, it is not the factor which explains the dynamics of theories and the manner in which they succeed one another.
It is uncertain whether Popper himself and most Popperians would be prepared to accept this re-evaluation of the role of falsification. It must retain its important place. But when one is concerned with the growth of knowledge and with the dynamics of theories, it is more fruitful to place selectionism (i.e. an expanded falsificationism) into the centre of the stage. The preoccupation with the growth of knowledge is, as we have seen, not idiosyncratic. On the contrary. When one is concerned with knowledge and cannot explain it the way the old and naive Positivists were wont to, the growth of knowledge becomes a central part of knowledge itself. Popper himself has declared that the growth of knowledge is his main concern 8 but it may seem, to judge from many of his more recent works, that he thinks that the best service is rendered to our understanding of that growth by his theory about Worlds 1, 2 and 3 and the relationship in which he says they stand to each other. Popper distinguishes three worlds. First there is the objective world of material things – World 1; then there is a subjective world of minds – World 2; and a world of objective structures which are the products, though not necessarily the intentional products, of minds or living creatures – World 3. World 3 is a creation of emergence through the interaction of World 1 with World 2. But once World 3 is created, it exists independently. Popper takes such independence very literally. The knowledge which is in World 3 is knowledge written on paper or stored in computers, not events in minds. That knowledge includes as its forerunners, so to speak, the nests built by birds or the webs built by spiders. Although Popper never said so, World 3 is very similar to what Hegel used to call ‘objective spirit’. World 3 is the world of ideas, science, language, ethics and institutions – all creations of the mind.
Once created, these creations create their own problems which we have to discover and for which we have to discover solutions.
We come, then, to the question of the interpretation of Popper’s thought. Should that interpretation veer towards the Worlds 1, 2 and 3 theory? Or towards selectionism and evolution? I do not think that these two interpretations are incompatible, but it does seem to be a question of emphasis. Both directions follow upon falsificationism – but the second, I think, more cogently than the first. In my view, the concern with evolution is more fruitful when one comes to an understanding of the growth of knowledge than the concern with Worlds 1, 2 and 3. At any rate, I would remind the reader that Popper himself declares that there is more to good theories than their authors have put into them; one may therefore feel free in one’s attempts to steer the momentum of Popperian thinking towards evolution rather than towards the problems involved in the distinctions between Worlds 1, 2 and 3. Popperian thought – and this is one of its major merits – is concerned with openness and indeterminism. It must of necessity, therefore, remain open towards the possibility of an evolutionary interpretation regardless of whether Popper himself treats this particular interpretation as his main concern or not. Equally important is another tenet of Popperian thought. One need not, he says, logically define every concept or idea until one is certain of it and knows that it is incapable of refutation. The concept of selectionism, in the way we have expanded it from falsificationism, should be regarded in the light of this openness. Popper himself and many of his followers have not always heeded this openness and have frequently aimed at a certainty and finality about indeterminism and uncertainty which is both unobtainable and unnecessary. As I remarked in an earlier chapter, we must remain as uncertain about our knowledge of knowledge as we are about knowledge itself.
Objectivity without Evolution
Selectionism is enlarged falsificationism. In linking the philosophy of science to evolution and in thus making epistemology evolutionary, Popper has also established his philosophical realism on a more secure basis. Ordinarily, realism is very difficult to establish. We can never compare a proposition about the world with the world. We always have to compare it with another proposition, so that we never really come face to face with ‘reality’.
The early philosophy of Popper – his falsificationism – sought support for philosophical realism in the correspondence theory of truth, especially in the form in which it was put by Tarski. Tarski’s truth definition shows how the internal structure of complex sentences can contribute to their meaning and he hoped to show how semantic notions, including a semantic notion of truth, can be part of a physicalist (read: philosophical realist) scheme. Language expressions, he hoped, would support physicalism. The young Popper was worried by the fact that the correspondence theory of truth was hard to prove – for instance, what could actually correspond to a statement of fact? Tarski was a revelation for him. Tarski, he thought, had finally shown that true statements are really true about reality and that propositions (linguistic expressions) can correspond to facts (non-linguistic phenomena). This revelation made a deep and lasting impression on Popper and he has never wavered from Tarski. In Objective Knowledge there is a moving account of the day on which Tarski 9 explained his theory of truth, while he and Popper sat on a bench in Vienna’s Volksgarten.
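Tarski’s ‘Convention T’ – the adequacy condition at the heart of the definition of truth that so impressed Popper – can be stated compactly. The following is a standard textbook rendering, added here for reference; it is not a quotation from Tarski or Popper:

```latex
% Tarski's Convention T (material adequacy condition): a definition
% of truth for a language is adequate only if it entails every
% instance of the schema
X \text{ is true} \iff p
% where X is a (structural-descriptive) name of a sentence and p is
% that sentence itself. The canonical instance:
\text{``Snow is white'' is true} \iff \text{snow is white.}
```

It is this schema which seemed to Popper to connect a linguistic item (the named sentence) with a non-linguistic state of affairs, and so to underwrite correspondence.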
Tarski’s theory, however, is of very doubtful value. It is very hard to see how a mere semantic argument can help to establish philosophical realism. The semantic difficulty apart, a simple correspondence theory of truth, whether in the Tarski form or any other, would have, for example, great difficulty with the phenomenon of light. If the proposition ‘light is a wave’ is true if and only if light is a wave, how can that proposition be ‘true’ when we know that light is not only a wave but also a particle? In so far as Tarski’s analysis is a purely formal or semantic one, it is not affected by the peculiar ‘logic’ of Quantum Mechanics and its uncertainties. But these uncertainties and that peculiar logic show that any concept of truth which is based on a straight either/or distinction between truth and falsehood is irrelevant and not applicable. Quine said that this kind of attribution of truth to statements does not presuppose any notions other than those already used inside the statement itself. We understand no more when we claim that a statement is true, than we do when we understand that statement itself. 10 Popper himself was, and always had been, aware of these difficulties but he has never ceased to stress the importance of Tarski’s definition of truth for his own philosophical or ‘metaphysical’ realism. If science is the search for truth, then one has to believe that the truth sought is truth about the real world. Tarski seemed to provide the necessary support. However, as we shall see, there are biological considerations in favour of at least hypothetical realism which obviate reliance on Tarski’s correspondence definition of truth. 11
Support was indeed necessary. We are entitled to know whether what we know really corresponds to a world or whether what we know is a sort of figment of our mind, a figment which is so fictitious that it also protects us against the possibility of ever finding out that it is a figment by fictitiously providing those answers to any tests and attempted falsifications which confirm us in the belief that there is a real world apart from our knowledge. This is a very old philosophical problem and, in spite of innumerable attempts, it has not been possible to provide a viable, conclusive solution.
Broadly speaking, and leaving strict philosophical logic aside, one could assume that knowledge is ‘objective’ to the degree to which one can demonstrate that the observer’s standpoint has no part in it and is excluded from it. This sounds seductive, but cannot stand up to criticism. In order to ascertain whether the observer has been excluded, one has to know what the stuff he has to be excluded from is like. In other words, the implementation of this requirement of objectivity presupposes knowledge of an objective world. And so we come across the age-old hermeneutic circle; in order to know ‘objectively’ and in order to be sure that what one knows is knowledge of an objective world, one has to know beforehand what one wants to know. This circularity has bedevilled all epistemologies from Plato to the present time. Any epistemology which requires a clear distinction between knower and known so that it can make sure the one does not intrude into the other, has to assume that both the knower and the known are known so that they can be distinguished. Or, at least, either the knower or the known have to be known in advance – in which case, one can consider that the other is what is not included in the one, or the other way round. Experience has shown that even this reduced demand is impossible to meet. Compare, for example, the two great competing ontologies of the seventeenth century, the ontology of Newton and the ontology of Huygens. It will be completely apparent from such a comparison that each ontology is intimately linked to a specific epistemology and that there is no way in which one can separate either ontology from its epistemology. Newton postulated a world in which there were bodies inside an empty isotropic space and a time which is the same for all points in that space. In this world, causal connections between bodies depend on their position in space at a certain time. 
The simplest conception of causal efficacy must be the assumption of instantaneous causal action at a distance. This ontology also controls the conception of light as transport of corpuscles or particles, for one cannot conceive of processes other than the transport of particles. By contrast, in Huygens’s ontology, processes are the results of pushes and pulls. Hence, there has to be a medium which transmits these impulses – a continuum which makes this kind of causal efficacy possible. If there is a continuum, light could be waves transmitted in that continuum. Alternatively, if one assumes that light is a wave, one is committed to an ontology in which there is a medium in which waves can travel. 12 Both ontologies are clearly mind-dependent (i.e. they owe a lot to the theories about light and causality). In short, we cannot achieve objectivity by separating the world from the mind. In order to do so, we would have to separate our knowledge of the world (mind) from the world. This cannot be done because we cannot have knowledge of the world without having knowledge of the world.
In the past, philosophers have reacted in different ways to this circularity. Some, from Plato onwards, have been dogmatic about ontology. They have simply assumed that they know what the world was like and then proceeded to construct an epistemology that would do justice to that ontology. They have thought that there are two questions when in reality there is only one. They have thought that there is, first, the question: What is the world like? Second, there comes the question: How can we know it? There is, however, only one question – the first. Once it is answered, the second question is automatically answered. In other words, the dogmatism about ontology makes the search for a suitable epistemology redundant.
The next strategy is reductionism. From Aristotle onwards, philosophers have tried to assume that our sensations and consciousness can be reduced to events in the real world. They must assume, one can only suppose, that there is some direct causal operation which works upon the mind and causes the mind to have certain awarenesses which can be formulated as sentences – or, possibly, the other way round. Although this strategy for vindicating the claim that our knowledge is knowledge of a really objective world seems based on all manner of superstitions about causality, it has enjoyed very wide support, right down to the present day. Looked at in the cold light of reason, there can be no conceivable justification for thinking that such reductions are possible, let alone exhaustive. In all its varieties of sensationalism, phenomenalism, presentationism, representationalism, it always comes to the same thing. A sense experience is evidence about the mind or person who has it; it can by no stretch of the reasonable imagination be considered evidence about an objective world, alleged to have ‘caused’ it.
Next, we come to the sceptics, whose most convincing modern representative is Quine. There is no real ground for believing that there is an objective world to which our knowledge refers; and, similarly, one cannot really make an ultimate distinction between true knowledge and false knowledge, between superstition and science. But, he adds, you ‘should not venture farther from sensory evidence than you need to’ 13 and one ought to have an ontic preference, but no more than a preference, for observations. How far, one ought to ask, ‘need’ one venture from sensory evidence? Quite a lot, Quine seems to be saying, and, if one does not ‘need’ to, at least, one may:
Physical objects are conceptually imported into the situation as convenient intermediaries – not by definition in terms of experience, but simply as irreducible posits comparable epistemologically to the gods of Homer. I believe in physical objects, and not in Homer’s gods. But in point of epistemological footing the objects and the gods differ only in degree. 14
After the sceptics, we come to the sociologisers. Here, we meet a very broad spectrum of opinions which are, however, all agreed on one point. Since we seemingly have no access to standards which enable us to distinguish absolutely the knower from the known and which would thus enable us to exclude the mind from the world and to recognise what is objectively there, we must apply, in our search for an objective world, the standards of a social group and defer to the epistemic authority of a speech community, however defined. Accepting that our ways of speaking about the world cannot receive any justification from some sort of non-linguistic knowledge about the way words and things are related to each other, these people ground the harmony of thought and reality on the rules of our grammar. In the face of the certainty that we can have no knowledge of the presence of something by virtue of which a sentence can be true or meaningful, this strategy may seem radical – but it has a lot to commend it. In the hands of Wittgenstein, it always seemed eminently reasonable because Wittgenstein steadfastly refused to examine its social and political implications. In the hands or minds of his innumerable followers, this strategy appears neither innocent nor reasonable. For once one’s attention is drawn to the fact that our knowledge of the objective world depends on our membership of a community and on obedience to the rules prevailing there, one has necessarily to examine the ethics of the community, the justice of its organisation, the political goals it pursues, and similar questions. And from there it is only a short step to the observation made so cogently in this context by Michel Foucault – that all questions of knowledge are really questions of power, 15 for the composition and governance (even the intellectual governance) and epistemic authority of any community must, in the last analysis, boil down to an investigation of the power that controls it. 
At one end of the spectrum, our sociologisers follow Wittgenstein and show wise resignation. In the middle of the spectrum, they embark upon the search for the sociological determinants of our knowledge and the social reasons for the construction of reality; in doing so, they display a total epistemic narcissism. Knowledge, they are saying, is nothing but a reflection of my society in my mind. At the far end of the spectrum, they are into politics and power. Richard Rorty gently reminds us of the politics involved in considering koala bears for membership of these communities; and Michel Foucault, less gently, glories in being able to show that questions of knowledge turn out to be questions of power. Rorty wonders whether we should exercise our power and include koala bears but exclude pigs from these intellectual edict-making communities. Foucault hints openly at the possibility that the proletariat may seize power in order to force us to accept its edicts as the basis for knowledge.
If it is so difficult to establish that there is an objective world from which one could exclude the mind of the observer, some have argued, one might try to proceed in the opposite direction and establish what the mind of the observer is like. Then, so the argument goes, one can know what to subtract from knowledge. After all, if one wants to separate two things – mind and world – it is enough to separate out either one and take what is left to be the other. Having failed in our attempt to separate an ontology, let us look at the possibility of separating the mind.
The chances of success are equally unpromising. They are made especially unpromising by a phenomenon which is playing a major part in modern knowledge. Both in Quantum Mechanics and in Thermodynamics, there is a well-known phenomenon which makes it look as if the processes of nature were actually controlled by the human mind and by our knowledge of these processes. This is not just a matter of a well-known distortion effect which simply shows that when an observer intrudes into a room of observed subjects, the subjects are affected by the intrusion. The phenomenon I am referring to is infinitely more complex and does not depend just on a simple intrusion. It looks as if an intrusion may have taken place when in reality there may have been none. There is no telling at the moment what the correct answer might be; but whatever it will be, the phenomena in question show how difficult it is to be sure what is an intrusion of the mind into a natural process. All we can see is that there are physical processes which look as if a mind had intruded and which appear indistinguishable from processes into which no mind has intruded.
I am referring, of course, to the Uncertainty Principle in Quantum Mechanics and to the Second Law of Thermodynamics. As far as Quantum Mechanics is concerned, there is an uncertainty about the location of, say, electrons. It looks as if that uncertainty results from the fact that in order to locate the electron, it has to be hit by a sub-atomic particle and, in being so hit, it is deflected from the path it was occupying. Now, electrons may just be the sort of ‘entities’ which behave in this seemingly peculiar fashion. On the other hand, their behaviour may be due to the intrusion of an observer. Whichever way this matter may one day be resolved, the appearance of an observer’s intrusion will always be there – which shows how difficult it is to separate the mind out of nature. The laws of Quantum Mechanics may be objective ; but they look, at the same time, as if they are not.
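The uncertainty in question is standardly written as the Heisenberg relation; the formula below is the textbook statement, added here for reference:

```latex
% Heisenberg's uncertainty relation for position and momentum:
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
% In the standard formalism this inequality follows from the
% non-commutation of the position and momentum operators,
% [\hat{x}, \hat{p}] = i\hbar, rather than from any particular
% act of measurement.
```

That the relation is derivable from the formalism itself, independently of any description of a measuring apparatus, is precisely what makes the ‘intrusion of an observer’ reading so difficult to confirm or to rule out.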
Or take the Second Law of Thermodynamics. Since the increase in entropy stipulated by the Second Law is an increase in disorder and randomness, it looks as if the increase in entropy is not objective but merely the result of our ignorance, of our inability to keep track of ever-more seemingly random movements of molecules. The faster and more randomly they move, the greater our ignorance of the system. Hence, it looks as if there is no increase in entropy at all; but merely an increase in our ignorance. On the other hand, the increase in entropy could be perfectly objective and real. Nevertheless, it is indistinguishable from an increase in our ignorance in regard to the system. Again, there is no telling how this matter may be resolved one day. If it is resolved in favour of objectivity, it will nevertheless continue to look as if it were a matter of our ignorance. This example shows again how tricky it is to disentangle mind from the world, the knower from the known. The known may be out there, objectively and in itself; but it is so cussed that it looks as if a knower had intruded into it even when he has not!
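The double reading of entropy described above is made vivid by Boltzmann’s statistical definition, given here for reference:

```latex
% Boltzmann's statistical definition of entropy:
S = k_B \ln W
% W is the number of microstates compatible with the observed
% macrostate. The more microstates we are unable to distinguish,
% the larger W and the higher the entropy -- hence entropy can be
% read equally as an objective quantity and as a measure of our
% ignorance of the system's detailed state.
```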
Leaving nature’s cussedness aside, one can take a sweeping glance at attempts to separate out the mind of the knower, rather than an ontology. However, a glance at the major traditional philosophers who have attempted such a description of the human mind will not be encouraging. Plato sought to determine the features of the mind from his observation that we have only uncertain knowledge of reality and its objects. The mind, therefore, must be something which adds the required certainty. Descartes determined the features of the mind from his observation that he had certain knowledge of one single fact – that is, of the fact that in thinking he was thinking. Kant determined the features of the mind from his observation that we have certain knowledge of Newton’s Mechanics. He determined what the mind was by showing what it must be like to have that kind of knowledge. Hegel derived the features of the mind from his observation that the knowledge the mind has of itself not only changes, but has grown.
In the twentieth century, these grand strategies for the discovery of a mind which knows reality have been replaced by more refined analyses of how words refer to things and of what has to be satisfied for this reference to be unequivocal. This, in turn, has led to discussions of whether we know things or processes, as well as to discussions of whether we know things and baptise them or whether we choose names in accordance with our knowledge of things. In attending the numberless conferences devoted to this question and in perusing countless books and papers about it, I have never yet found a sensible discussion as to how the notion of reference would fare when seen in the context of, say, Quantum Mechanics. It would be a sobering experience for the participants and antagonists in this debate to reflect upon the physicists who do not get a picture of the atom or of Quantum processes from their new matrix mechanics, i.e. who have nothing to which words like ‘atom’ and ‘Quantum process’ could refer. On the contrary, matrix mechanics was invented precisely to avoid making a statement which could refer to anything physical. Waves, in Quantum Mechanics, are not material events and the word ‘wave’ therefore does not refer to something that is happening, but to a ‘wave of probability’. Our knowledge of Quantum Mechanics shows that one can have quite a lot of knowledge, even though one’s knowledge does not have a precise focus of reference. That knowledge can be couched in such a way that it does not refer at all to specifically locatable events or objects. If this is so, does it matter whether ‘referring’ is a kind of christening or a summing up of observations? 16
Next, we keep discussing whether our thoughts represent judgments or whether they are our judgments. Are language-users guided by mental fact or are they simply and mindlessly following a rule for the use of language? Are they doing, in speaking, two things or, as Wittgenstein maintains, one thing? Alternatively, are we in speaking expressing a private language or does ‘using a language’ simply mean ‘following a rule?’ And then, there always comes the question as to whether our concepts, no matter how they are related to judgments, are mental or not. If mental, one has to provide a description of what is non-mental and this leads back to the older controversies about the nature of mind and this, in turn, leads back to a consideration of whether what is special to mentality is the power to abstract and go beyond the observation of physical particulars. 17 Though these contemporary discussions have the air of being precise and the appearance of being promising, they have as yet not led to a single view which goes in any important way beyond the results or non-results of the earlier grand strategies. The volume of these discussions is inversely proportional to the lack of results. ‘If only we could get it right!’ the participants in the discussion seem to be saying. ‘If we could get the relationship between objects and words, between concepts and judgments right, we would know how objective our knowledge is.’ Unfortunately, one cannot even tell what ‘getting it right’, in this context, would mean. Would it mean consensus? If so, then whose consensus? They all talk as if, with a little more effort, the problem could be solved. But where precisely is the problem? And supposing there is a problem, would one not be obliged to conclude from the impossibility of agreement that it is spurious? If there were a problem, one or the other view would eventually gain support and compel agreement. 
The impossibility of reaching agreement seems to indicate that there is no standard by which the cogency of any of these views could be assessed, for if there were agreement on what the problem was, one could judge which of the many answers about reference, individuals, translation or concept could be considered the best answer. These arguments either make sense or they don’t – ‘sense’ not meaning ‘meaning’ but ‘logically cogent’. If they did, they would compel assent; if they didn’t, there would be no point in putting them forward. The fact that they have not compelled assent should be taken as a demonstration that they do not make much sense. The argument about private language is very similar to the medieval debate about the number of angels who could dance on the point of a needle. The debate depended entirely on the definition of an angel as a non-extended substance. The debate about private language depends entirely on one’s definition of language. The other debate about reference is not even in the same class. Its resolution depends on absolutely nothing. There is no logical point which has been overlooked and which might decide it one way or the other; there is no evidence at which one might point which, if produced, would settle the matter. It is sensible and understandable to have conflicting opinions on such interesting topics as politics and art, ethics and religion. But it is very difficult to see the reason why people should carry on having different opinions on so unedifying a subject as reference when there clearly is no conceivable way in which the debate could be advanced or improved and when one cannot even imagine what sort of argument would count as an advance or what could be gained by a settlement.
If, on the other hand, one takes one’s stand upon the alleged empirical basis of scientific theory, one seems to be committed to an endless string of problems in cognitive psychology. The fact is, however, that we have scientific knowledge. What we are lacking is an explanation of this fact. Most philosophers go about their business seeking to justify that knowledge. Seeing that it exists, there seems to be little point in seeking to justify it. The real question we ought to answer is how it has come into being. Knowledge, like life itself, is its own justification. What we want to know is how it has grown. If one looks at the relationship between mind and reality in the first instance by looking at the mind which is supposed to be doing the knowing of reality, one obviously cannot produce any arguments in support of the supposition that what we know is knowledge of something out there, called reality.
Once the debate has reached this conclusion, a very prevalent strategy is to invite the participants to form a closed circle – a speech community or a language game – or to accept some other framework or paradigm, and to allow that circle to exercise epistemic authority. Obviously, arguments of this kind can lead nowhere, but the cultivation of speech habits within the fixed precincts of a framework is viable. For inside the circle, one can do something one cannot do outside the circle. One can decide who conforms to the habits of the circle and who does not. This invitation to form circles and to close them by making the prisoners submit to the epistemic authority of the circle was an approach favoured not only by Wittgenstein himself, but also by Quine, Sellars and Kuhn; it solves nothing, however. But it does inject law and order into the debate. In any case, with the voluntary submission to the framework, one can now be confident that such knowledge as is held by the prisoners is knowledge justified by the framework. There is no longer any need for any of the prisoners to go hungry for knowledge, though there seems to be no end to the starvation of free citizens.
For people who think that knowledge is only knowledge if and when it is justified, it becomes very attractive to accept an invitation to join one or the other of such closed circles. In refusing the status of knowledge to guesses, conjectures and hypotheses, these people are bound, sooner or later, to be left high and dry – the only thing which amazes is that it took so long for them to realise that the justified and legitimate emperor had no clothes on; and that if one cannot get oneself to be a democrat and live with conjectures, one would indeed have to submit to the epistemic authority of one or the other closed circle and use that authority to legitimise and justify such knowledge one would be allowed to have. After all, it is better to submit to a prevailing norm than to go hungry.
Nevertheless, this strategy and the invitation which results from it has to be rejected. The participants in the debate find the invitation attractive because they cling to the conviction that nothing can qualify as knowledge which is not justified. Having failed to find justification by defining reality, mind, reference, meaning, speech acts, etc., they now are tempted to seek more modest justification by hiding behind the authority of a closed circle. This temptation ought, however, to be resisted because the disease can be cured in a different way. But even if it could not, I myself would rather die than say ‘yes’.
In view of all these difficulties and doubtful strategies for the resolution of these difficulties, it is no exaggeration to say that philosophy has shown itself quite unequal to the task. Philosophically, the problem of defining either mind or an objective world has always been, and remains, intractable.
Objectivity with Evolution
The development of modern biology and of the Darwinian theory of evolution by natural selection shows a way out of this impasse. Where philosophy has been powerless, biology can help. It provides the missing link in the arguments about objectivity. The great merit of Popper’s philosophy of knowledge is that it avails itself of this opportunity offered by biology.
Popper’s philosophy of knowledge does not solve the problem of objectivity, but it shows that it need not be solved and that it can be by-passed. It is able to do so because of its use of Darwinian evolution. If we accept that we are here in the world, we must accept that the world is the sort of world which has brought about our existence. Our presence, therefore, is not only a guarantee of an objective reality as the result of which we are; it is also evidence of the fact that that objective reality must be of a certain kind, for if it were different, we would not be here. There may well be other worlds, and they may well all be totally different from our wildest dreams. But there must be at least one world which has made our evolution possible. In a different world, the evolution of Homo sapiens would not have been possible. Being here implies not only those billions of years which have preceded the emergence of life on earth, but also the particular interaction between life and environment which is envisaged in the theory of natural selection. If there were no objective environment, there would have been nothing to do the selecting as a result of which we are here the way we are. In basing his philosophy of knowledge on evolution, Popper can simply afford to disregard the quest for objectivity and avoid all the pointless efforts which have been made by philosophers to show that when we know, our knowledge corresponds to something objectively present. With the theory of evolution, we simply take the presence of an objective world for granted. Its presence cannot be proved ‘objectively’; it must, however, be presumed, for the emergence of life is the result of the selective pressures of that objective reality. For this reason, we speak of ‘hypothetical realism’. 18
Feyerabend states categorically – and, incidentally, uses this statement to justify his cognitive anarchism – that ‘the world which we want to explore is a largely unknown entity.’ 19 In the light of evolution, this statement appears to be simply false. The world we want to explore is not ‘largely unknown’ at all. It is the sort of world which has produced the sort of beings who want to explore it.
At first glance, it might look as if with this argument we are back with the old circularity which surrounds the problem of ontology. In order to devise a correct method for knowing and in order to form an intelligent opinion as to whether anything discovered by this method is ‘objective’, we would have to know in advance what we are setting out to discover. In short, we have to have an ontology. But we cannot get an ontology unless we have knowledge of the world or, at least, of the sort of thing the world is. We have encountered this circular problem already in chapter 1 , when we were looking for a substantive conception of rationality. If we could have an ontology, we realised then, we could call the method best designed to discover the world our ontology alleges to exist, a rational method. We realised then that this kind of circularity prevents us from having a substantive conception of rationality (i.e. a conception of the right method which will lead us to a desired end). However, in the present context, we might have a second string to our bow. Let us try again by formulating the problem in a different way. Instead of concerning ourselves with rationality, let us concern ourselves with the relation between knower and known.
It has always been supposed that the real philosophical problem is to find out what kind of mind we ought to have (i.e. what kind of method we ought to use) in order to find out something about the world. If one starts by asking about mind and method required, one seems to presuppose that one knows what sort of thing the world is – that is, that one has an ontology up one’s sleeve. Here, we are back with the circularity involved in all attempts to discover an ontology. The classic case is Kant. He took it that Newton was right and that the world is the sort of world Newton said it was – a box of space with lots of masses moving in that box. Kant then realised that the problem left for the philosopher was to explain what kind of mind a man would have to have in order to get to know this Newtonian world. 20
In the past, philosophers have shown great self-assurance in making assertions about the stuff the world was made of. They may have been backward in knowledge and science, but they have always been confident about ontology. Some said the world was made of atoms; others maintained that it had been hand-crafted by God and that man, in particular, had originally been made of clay; others maintained that it consisted of sentient monads; some asserted that it consisted of nothing but processes. In the present century, philosophers have become more self-conscious about their ontologies. Let us look at some more recent attempts and list them in ascending order of plausibility, where ‘plausibility’ means ‘least strain’ on our credulity. Obviously, that system of thought is best which needs least ontology.
At the very bottom of such a list, we must place the recent attempt by P. Smith to establish ‘realism’. 21 A statement refers to black holes, Smith says, if there are black holes; to atoms, if there are atoms. In this scheme, there is obviously a very heavy burden placed on ontology. 22 This kind of ontology requires a lot of knowledge in advance and its plausibility must, therefore, be zero. Next, we place the formal attempts at an ontology by Gustav Bergmann. Here, it is asserted that what exists in the full sense are universals or characters, be they relational or non-relational. Then we need individuators and some entities like ‘exemplification ties’ which are said to subsist. If we want to make an attempt at the ontology of ‘two yellow spots’, we would need at least the universal ‘yellowness’; a conception of singularity (i.e. in this case ‘two’ singulars, for there are two spots in nature); and a tie of exemplification, for each existing spot exemplifies the universal ‘yellowness’. This is a very rough summary and does not do justice to Bergmann’s logical acumen. The attempt is to be rated above that of Smith, but is not all that plausible because it requires a fairly clear knowledge of the number of yellow spots in nature and amounts to little more than a logical categorisation of what we know.
A little higher on the list we place a recent attempt by Roy Bhaskar in his A Realist Theory of Science. 23 Bhaskar tries to provide an ontology of transcendental realism. His efforts look promising because he is aware of what he calls the ‘epistemic fallacy’, 24 which reduces ‘being’ to knowledge. But in order to avoid this fallacy, he has to postulate that there be ‘closures’, i.e. that only what is inside an experimental situation is causal. 25 Then he goes on to postulate ‘atomic’ facts so that he can separate causal laws from patterns of events because, ontologically, patterns should not be reduced to laws. 26 He also has to separate the fact from its causal efficacy 27 and then he lapses into Aristotelian terminology: ‘once a tendency is set in motion it is fulfilled unless it is prevented’ 28 and ‘only things and materials and people have powers.’ 29 He also distinguishes between the actuality of causal laws and the facts themselves. 30 The attempt is courageous and valiant. But with so many postulates and assumptions, one feels betwixt and between. Assumptions and postulates are not ‘knowledge’, and one concedes that the epistemic fallacy is thus indeed avoided. But the making of assumptions must be guided by hidden knowledge and must correspond to something which Bhaskar knows and we do not.
Next in ascending order comes the attempt of Keith Lehrer. 31 ‘Scientific theories and descriptions that are simple, precise, comprehensive, coherent and predictively fecund have a better chance of being true than those that lack these virtues.’ 32 If this statement were taken out of context, one would have to assign a low place to Lehrer on our list, because the statement seems to postulate that the world is the sort of thing of which statements with the above qualities are correct. But in context, the statement is very disarming, for Lehrer continues that his confidence in the virtue of these qualities is not dependent on an ontology but derives from a ‘subjective’ commitment, and that one could not demonstrate the falsity of an assertion that theological description is better. ‘Here we arrive,’ he says, ‘at the bottom rock of subjectivity on which all else rests.’ He then goes on to consider the two fashionable and popular objections to subjectivism – a recourse to ‘empirical evidence’ and a recourse to the epistemic authority of a speech community – and rejects both. He is thus left with his bottom rock of subjectivity. Although Lehrer has an ontology, he admits that it is a subjective act of faith; since he rejects the two popular strategies for grounding such an act of faith on Positivism or speech communities, we must rate his ontology above Bhaskar’s because it has less content on account of its avowed subjectivity. There is content, but since it is undogmatic and subjective, it is totally disarming.
Next in order of ascent let us place, for argument’s sake, the ontology of Popper. There is no real ontology in Popper at all. It is extremely modest and Popper has at times referred to it as nothing more than a ‘manner of speaking’. 33 However, we can reconstruct a sort of minimum ontology for Popper. If it is said that theories must be framed in such a way as to be falsifiable, one must assume that the world is the sort of world which would falsify untrue statements. This means that one supposes that the world is not the sort of world in which, for example, the voice of God could falsify anything. This kind of ontology makes a minimal strain on our credulity and must therefore rate high on our list.
From Popper’s ontology, it is a very short step to the ‘Anthropic Principle’, which tops our list because it imposes no strain at all on our credulity. The ‘Anthropic Principle’ states 34 no more than that the world must be the sort of thing which has produced us by evolution, so that we can sit here and talk or think about it and perceive it. Instead of starting by asking what sort of mind is required to know the world – a question which invariably leads on to a search for an ontology, because one cannot know what mind is required unless one knows what kind of world it is supposed to know – the ‘Anthropic Principle’ tells us to take it the other way round. We do not ask a question about the mind or about method; we ask what kind of world the world must be to have been able to produce the sort of mind we have. In thus completely reversing the order of questions, the ‘Anthropic Principle’ does away with the need for an ontology. It simply recognises that we would not have a mind to know, let alone a method for acquiring knowledge, unless we had evolved. The only question, therefore, is the question about the nature of the world – the question which scientists of all shades and complexions have always asked. By taking evolution seriously, the ‘Anthropic Principle’ shows us that what we took to be the problem of ontology is really the problem of what the world is like. There is only one question to be asked, not two: a question about the nature of the world, without a second question about the nature of ontology. Knowledge and ontology, with evolution, become one and the same thing. The ‘Anthropic Principle’ thus not only tops our list but eliminates the reason for all attempts at ontology; and it is totally plausible because, in doing away with ontology altogether, it does not strain our credulity. The evolving world, it states, has evolved in such a way as to produce, among other things, the mind we have.
The realism involved in the ‘Anthropic Principle’ is ‘hypothetical’ because, since we have a mind, it is a reasonable hypothesis that there must be a real world which, by a process of evolutionary selection, has produced our mind.
Ironically, hypothetical realism provides an answer to one of Wittgenstein’s many aphorisms. ‘The most difficult thing in philosophy,’ he wrote in his Remarks on the Foundations of Mathematics, VI, 23, ‘is to maintain realism without empiricism.’ Wittgenstein started on the assumption that we know that there is a real world because we experience it empirically. Realising, however, that there is no way in which we can accumulate perceptions and declare that their sum total is evidence for the existence of a real world which somehow corresponds to our perceptions, he concluded that it must be very difficult to be a ‘realist’. Hypothetical realism has solved the problem. It says that we are realists because we are here. If there had not been a real world to select us for survival by providing the selective pressures, we would not be here to wonder about it. Hypothetical realism is realism without empiricism, realism without sense experience as a foundation of knowledge. 35
The whole matter hinges on the role of perception. If one means by perception the sort of gathering of observations which are gradually incorporated into one’s mind, retained by learning and summed up as knowledge, one remains involved in all the countless difficulties and quandaries discussed in the preceding section. Empiricism, we have seen there, cannot be a foundation of knowledge of an objective world. If, however, one takes it the other way round, the problem disappears. If what we call knowledge is the result of selective pressures, we can be sure that what we know is the result of the world which has done the selecting. Darwinism and Popperian philosophy of knowledge have a common denominator. In both theories, we find that proposals are made to the environment and that the false or non-adaptive proposals are eliminated. The common denominator is: approximation to the truth by error elimination. Darwin discovered that this mechanism is at work in the evolution of living organisms; and Popper, that it is at work in the evolution of knowledge. If Darwin had known Popperian terminology, he might have formulated his theory of evolution by saying that it consists of conjectures and refutations. With this theory, neither Darwin nor Popper need occupy themselves with fruitless attempts to show how we can learn by piecemeal observations; how we can inductively sum up these piecemeal observations to tell us something which limited experiences cannot tell us; and how, by painful memory work, we can learn to incorporate these summations into our store of knowledge or into the structure of our living cells. Least of all do they have to worry whether and how what is thus laboriously built up can have veracity and represent an objective world. In both theories, experience plays a negative, eliminating role: it falsifies wrong conjectures.
When one is comparing neo-Darwinian theories of evolution with Popper’s philosophy of knowledge and is seeking the common denominator, there is room for argument. One could argue, for example, that the common denominator consists simply in the possibility of thinking of theories as disembodied organisms, and of organisms as embodied theories. In this case, falsification appears as the common denominator. Popper himself goes further than this. He explains that there is a similarity between the mechanism in genetic adaptation, in adaptive behaviour and in scientific discovery: the common denominator here is problem-solving . 36 This common denominator appears very attractive when one accepts that
evolution proceeds like a tinkerer who, during millions of years, has slowly modified his products, retouching, cutting, lengthening, using all opportunities to transform and create … Making a lung with a piece of oesophagus sounds very much like making a skirt with a piece of granny’s curtain. 37
I prefer, however, yet a different common denominator. F. Jacob writes:
Evolution is built on accidents, on chance events, on errors. The very thing that would lead an inert system to destruction becomes a source of novelty and complexity in a living system. An accident can be transformed into an innovation, an error into a success. 38
This passage reads as if it had been written by Popper, for Popper shows that we learn from our errors. Once a theory is falsified and an error eliminated, we are nearer a correct theory with greater verisimilitude. The more errors we make, the greater the progress. Unconscious organisms take ages to benefit from their errors and to yield better-adapted organisms, because they have to wait to be physically eliminated and for offspring to try out variations. Human beings, capable of consciously formulating theories, can achieve a tremendously fast turnover because they can abandon false theories and do not have to wait for coming generations to test the ‘adaptiveness’ of theories by physical trial and error in the actual survival of organisms. Conscious human beings can even make errors on purpose and do not have to wait for a chance error to happen. Here, then, we find that the common denominator is error-making .
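The shared logic of blind variation and selective error elimination can be sketched as a toy simulation. Everything in it – the ‘environment’, the numerical form of a ‘conjecture’, the tolerance – is an invented stand-in chosen for illustration, not anything drawn from the text:

```python
import random

random.seed(0)

def environment(x):
    # The hidden regularity of the world: the "reality" that does the selecting.
    return 3 * x + 1

def compatible(slope, tolerance=0.5):
    """A conjecture survives only if none of its predictions is falsified."""
    return all(abs(slope * x + 1 - environment(x)) <= tolerance
               for x in range(10))

# Blind variation: propose conjectures freely (they are underdetermined by
# the environment), then let the environment eliminate the errors.
survivors = [g for g in (random.uniform(0, 6) for _ in range(1000))
             if compatible(g)]

# What remains is not a replica of the environment, merely compatible with
# it: every survivor lies near, but not necessarily at, the true slope 3.
assert all(abs(g - 3) < 0.06 for g in survivors)
```

Note that the survivors are never instructed by the environment; they are only tolerated by it, which is the sense of ‘compatibility’ the chapter develops below.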
Errors are eliminated or corrected by experience and, in the last analysis, ‘experience’ must mean ‘observation’ or ‘interpretation of observation’; and, in this sense, Popperian philosophy of knowledge is a form of empiricism. But there are empiricisms and empiricisms. When we recognise the crucial role of observation in Popperian philosophy of knowledge, we are not thinking of the sort of observation which conventional empiricists believe to be constitutive of knowledge or the inductive summation of which they hold to be knowledge. In short, in Popperian philosophy observation is the end of the process, not the beginning.
Nevertheless, though negative and some kind of end game or final move, it does play a crucial part – like an ultimate court of appeal. Though not primary and though it is not the starting-point in our acquisition of knowledge, it is essential. For most conventional and traditional empiricists, observation even in this negative role presents a formidable problem. Why, one might legitimately ask, should one accord such a privileged status to observation and allow it to falsify? What is so special about observation and/or sense experience? An empiricist like Ayer is simply dogmatic:
We say that a sentence is factually significant to any given person, if, and only if, he knows how to verify the proposition which it purports to express – that is, if he knows what observations would lead him, under certain conditions, to accept the proposition as being true, or reject it as being false. 39
In this formulation Ayer does not link observation to inductivism. His requirement as to what would make an observation valid applies equally to a falsifying observation. Why, then, is observation the ultimate test of falsity? Why not revelation or authority or intuition? It has been argued that one ought to give preference to observation because observations tend to be inter-subjective and can be shared by many people. This argument cannot hold. The inter-subjectivity of observation is highly questionable when one considers such things as optical illusions; and when we come to the possibility that an observation may have to be interpreted, we are on even more uncertain ground. For that matter, there are many illusions which are perfectly inter-subjective. Moreover, both revelation and intuition, not to speak of authority, are inter-subjective because of the power of suggestion and auto-suggestion. On the other hand, alchemists used to appeal to observations as evidence of the truth of their procedures for producing gold. But since their observations often consisted of as many as 700 different steps or reactions of chemicals under heat, they have notoriously proved not to be inter-subjectively testable.
Quine, as we have seen above, is less dogmatic than Ayer. He suggests nothing more than that we should give a slight preference to observation. Why should we? All demands that we ought to pay attention to observation seem to be without foundation. Observation merely tells us, after all, that something is happening to or in our nervous system.
There is, nevertheless, a perfectly good reason why we should pay attention to observation and give it more than just slight preference over other methods of falsifying statements. The theory of evolution conjectures that the nervous system, complete with its sense organs, is adapted to the environment. Thus, we know that experience by observation is not just an event inside our nervous system. The theory of evolution, and that theory alone, puts teeth into the demand for falsification by observation. Without evolution, there could be no reason why we should prefer observation to authority or intuition or revelation as a standard for falsification. Evolution transforms the appeal to observation from a dogma into a reasoned criterion or test of when we do, and when we do not, come into contact with the outside world, i.e. with something that is not just an event inside our nervous system. A modern philosophical St Paul might therefore say that observation may be faith and hope; but without evolution, it is of no avail. In this way, observation for falsification purposes is vindicated by evolution. There are good reasons why it should be preferred to other methods. There is no need for Ayer’s dogmatism and no ground for Quine’s scepticism. None of these arguments, however, can or should be construed to mean that we should begin with observation and with nothing but observation and insist that all we know be reduced to observation. The theory of evolution can only vindicate the primary importance of observation in regard to falsification; it cannot establish observation as a source of knowledge. If one regards observation as a source of knowledge, one cannot appeal to evolution, for evolution tells us that such knowledge as is embodied in organisms is the result of selective retention of adaptive proposals, not the result of accumulated learning by observation. 40
In Evolutionary Epistemology, knowledge is compatible with reality – that is, the knowledge which has survived error elimination. It does not have to represent reality or tell us what reality is like and does not have to be a complete ‘fit’ to reality. All that is required is that it should be compatible with reality. Nevertheless, we know that there is a ‘real’ world other than our knowledge of it because what we ‘know’ has been selected by it and is tolerated by it. Our knowledge of a ‘real’ world has been left standing by that world because the two are not incompatible, even though that knowledge is neither objectively nor certainly known. For all that is required for the selection of either organisms or theories is that they should be compatible with reality. Organisms as well as knowledge are merely minimum adaptations to reality. Neither an organism fit to survive, nor a theory which has escaped the process of error elimination, tells us what that reality is exactly like. But they tell us what is compatible with it. For this reason, the correspondence theory ceases to be crucial for philosophical realism. Popper would not agree; but his own arguments in favour of Evolutionary Epistemology make it superfluous.
The idea of ‘compatibility’ has to be used with discernment. Contradictions are compatible with any proposition. This, indeed, is the formal definition of ‘compatibility’. But it would clearly be wrong to say that a contradictory theory is a good theory just because it is compatible with everything. A theory which is compatible with reality in the present sense must make some assertions or have some implications which are not compatible with reality. An organism which is compatible with its environment is not compatible with all environments. Reality, like Kant’s Ding an sich, cannot be known. But unlike the Ding an sich, it makes itself felt. It selects. It weeds out the non-fits and thus cannot but bend the growth of knowledge towards itself. In this view, the mind is part of reality and, in principle, not different in its relation to reality from the relation of every organism to reality.
Evolutionary Epistemology also throws light on another problem which has been troublesome in Popperian thought. In order to explain the growth of knowledge and to show how new theories supersede old theories by explaining new facts as well as old facts explained in the earlier theory, Popper has introduced the concept of verisimilitude. This concept is logically not an easy concept. A theory can be either true or false or plausible or unfalsified. But it is very hard to accept that it can be verisimilitudinous (i.e. that it can be like the truth) without being actually true. If this is hard to grasp, then the notion that there are degrees of verisimilitude is even harder to grasp. We can give good reasons for thinking that the General Theory of Relativity is nearer the truth than Newton’s Mechanics; but we cannot easily demonstrate that this is so.
Evolutionary Epistemology cannot provide such a demonstration, either. But it can shed light on the meaning of the concept of verisimilitude and thus explain it without providing a precise logical status for it. A comparison with the notion of verisimilitude in biology can shed considerable light on this matter. A tick is dependent on its ability to detect the presence of mammals. It is programmed to let itself fall from a twig when it ‘smells’ butyric acid and to cling to an object it encounters, provided that object has a temperature of 37 degrees Celsius. It so happens that the odour of butyric acid and the specific temperature to which the tick reacts are together the minimum definition of a mammal. 41 We say, therefore, that the tick, though its picture of mammals is very incomplete, has knowledge which has some verisimilitude to mammals. But this knowledge has less verisimilitude than our own knowledge about mammals. This difference in the degrees of verisimilitude can best be illustrated in a negative way. It is easier to simulate a mammal to mislead a tick than it is to simulate a mammal to mislead a human being. In this example, the degree of verisimilitude is not determined by a degree of verisimilitude in representation. The tick has no representation of a mammal. It merely has some basic information. For that matter, a human being’s knowledge of a mammal is, strictly speaking, not a representation of a mammal either, but knowledge of a large number of qualities of mammals, which is brought into operation for the purposes of response every time a sufficient number of these qualities are detected to be present. It would not make much sense to describe the list of qualities as a pictorial representation of the mammal. If one had to rely on pictures of mammals, one would have to have not only a separate picture for each species and sub-species, but literally a picture of every single member of every single species.
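The tick’s minimal ‘theory’ of a mammal, and the contrast with a richer human theory, can be rendered as two predicates. The function names, the list of qualities and the thresholds are illustrative inventions, not anything in the biological literature the text cites:

```python
def tick_mammal(smells_butyric_acid: bool, surface_temp_c: float) -> bool:
    """The tick's entire 'theory' of a mammal: two cues, nothing more."""
    return smells_butyric_acid and abs(surface_temp_c - 37.0) < 1.0

def human_mammal(has_fur: bool, is_warm_blooded: bool,
                 suckles_young: bool, breathes_air: bool) -> bool:
    """A human's richer theory checks many more qualities."""
    return has_fur and is_warm_blooded and suckles_young and breathes_air

# A warm bottle smeared with butyric acid deceives the tick...
assert tick_mammal(smells_butyric_acid=True, surface_temp_c=37.0)
# ...but the same decoy fails the human's longer list of qualities.
assert not human_mammal(has_fur=False, is_warm_blooded=True,
                        suckles_young=False, breathes_air=False)
```

The point of the sketch is that neither predicate is a picture of a mammal: each is a list of checks, and the longer list is simply harder to deceive.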
Since human knowledge of mammals has greater verisimilitude than a tick’s knowledge, human knowledge is better. But it is better not in terms of its representational power. The biological analogy helps us to detach the notion of verisimilitude from the notion of representation. As long as we think of verisimilitude in terms of representation, all inquiries about degrees of verisimilitude will lead to inquiries about accuracy of depiction. Accuracy of depiction or of pictorial reproduction is irrelevant. It is important to salvage the notion of verisimilitude because it helps us to sharpen our concept of progress. When we compare two theories, we should prefer the theory which has the greater verisimilitude – provided we do not mean by verisimilitude, verisimilitude of pictorial representation.
The difficulties which have arisen in regard to the concept of verisimilitude all stem, predictably, from the fact that a non-representational and non-inductive theory of knowledge cannot really claim that one can compare theories by comparing the degree to which they ‘represent’. All attempts to formalise the concept of verisimilitude cannot hide this initial conceptual contradiction between non-representational knowledge and verisimilitude. A non-formal, intuitive approach will help; and it will confirm that the notion of degrees of verisimilitude is essential.
Verisimilitude is the epistemological analogue of adaptation in evolution. Adaptations are fits to the environment, but never perfect fits. A good adaptation is one that is compatible with the environment, which includes, of course, competitors. It need not be more and is more only in rare cases. Hence the notion of verisimilitude in knowledge. A theory is never perfect, never wholly true, never the whole truth and, what is more, even if it were, one could not prove that it is: a theory has, at best, verisimilitude. Just as there are degrees of compatibility with the environment, there are degrees of verisimilitude. Whatever the logical difficulties in the concept, it is conceived in the spirit of Evolutionary Epistemology and derives its fruitfulness from evolutionism. Neither organisms nor theories are induced or determined by the environment. They are free and undetermined proposals or conjectures. Those that have no verisimilitude at all do not survive. A conjecture which is dead right and therefore either a complete fit or totally true is against all odds. Even then, one means no more than that it is compatible with all parts of the environment, not that it accurately pictures the whole environment. All in all, Evolutionary Epistemology shifts our attention from preoccupation with accuracy of depiction to degrees of compatibility (i.e. to verisimilitude).
Most conjectures, be they organisms or theories, are merely verisimilitudinous. They say more than the environment warrants. 42 They are underdetermined. 43 A mallard duckling ‘knows’ that it must follow the first quacking object it sees, whereas the environment merely justifies that it should follow it if it is its mother. All humans ‘know’ that the sun will rise tomorrow, whereas the environment we have experienced does not justify such a theory. It merely tolerates such a theory. All knowledge, even the knowledge encoded in the body of a paramecium, consists of unjustified expectations and transcends experience. The environment tolerates mallard ducklings prepared to follow any object which quacks with the right call-note in front of them because, in practically all cases (unless Konrad Lorenz is present and does the quacking), that object will be their mother. Both our mallard ducklings and our human scientist jump to a conclusion which is not warranted by the environment. If they jump to a conclusion which is incompatible with the environment, the mallard ducklings will be eliminated and the scientist must eliminate his theory. There is no mechanism known to us which would allow the environment to determine the knowledge held by the mallard ducklings or by the scientist in exactly the right way. The environment cannot directly produce a replica or a mirror-image of itself in living matter, not even when that living matter has a conscious mind. For this to happen, unconscious living matter would have to be able to encode whatever it ‘learns’ in its gene pool; we know now that this form of Lamarckism cannot take place. 44 Conscious living matter would have to possess some kind of clairvoyance which produces information about non-observable and non-observed events such as future events. 45
Knowledge about the world, in all cases, depends on universals: in non-conscious beings, upon genetically programmed expectations of regularities; in conscious beings, upon conjectured predictions of regularities. But in neither case can we expect living matter to pick up the exactly correct instructions about these regularities from the environment, because there is no mechanism available to do it.
When we are thinking of non-representational verisimilitude, we are reminded of the idea of J. von Uexküll 46 that every organism cuts out a special part of the environment, which part then becomes its reality. The rest of the environment simply does not exist for that organism. A frog’s eye announces only changes in illumination and the outlines of curved objects in movement. Nothing else is of interest or concern to the frog. A dog’s world is largely a smelt world; a bat’s world is a heard world; and man’s world is, to a very large degree, a seen world. 47 In the world of a paramecium there are fewer events than in our world. But those few events which take place are just as real as those which take place in our world. 48
As far as metaphysical realism is concerned, the absence of such a mechanism of instruction by the environment or direct transfer of information is a positive advantage. If there were such a mechanism, it would be very difficult to know whether all details had been accurately reproduced or encoded in organisms and/or theories; for if there were such a mechanism, nothing less than complete accuracy of information would be acceptable. But since we know that there is no such mechanism, we are left to conclude that the organism and the theory which have survived have been selected for survival or retention because they are compatible with reality. This conclusion is a very solid argument in favour of the presence of such a reality – for without it, there would have been nothing to do the selecting. From this perspective, Kant appears both right and wrong. Right, because he knew that the thing in itself, the noumenal world, is not accurately portrayed in our knowledge. Wrong, because he did not and could not, living as he did before Darwin, grasp that through natural selection the categories of our understanding have been selected for survival by the things in themselves, so that we can be confident that whatever we know is not as unrelated and as uninformative about the things in themselves as his total dichotomy between phenomena and the noumenal world would suggest. It was one of the many merits of Hegel to have spotted this weakness in Kant and to have argued that in evolutionary perspective (even though his evolutionary perspective was pre-Darwinian and un-Darwinian), there cannot be such a complete dichotomy and that all our knowledge reflects at least indirectly some real information about the noumenal world. 49
The nature of the environment constrains the sort of adaptations which are possible. An adaptation which is a minimum compatibility is one which contains the absolute minimum of information about the environment required by the organism in question. This information has, epistemologically speaking, a very low degree of verisimilitude and, since it is compatible with a very large number of features of the environment, is very hard to falsify. The better the adaptation and the tighter the fit, the more verisimilitudinous the information it represents; hence, it is easier to falsify. Its empirical content grows and it is more readily subject to error elimination because there are fewer features of the environment which are compatible with it. Some human organisms are capable of putting forward conjectures and theories about their environment which are more than minimally compatible with that environment. These conjectures therefore show a high level of verisimilitude and, by that token, have a high empirical content, which makes them more subject to error elimination and criticism by that environment than the information stored, say, in the gene pool of a population of paramecia, which contains only the absolute minimum information necessary for survival. There are lots of things a paramecium can do without being eliminated by the environment as an error: it has very low verisimilitude. For example, it does not have to make a bee-line for that part of the water which contains its food. It is sufficient for a majority of the members of the population to get there sooner or later. It can, for example, afford to ignore the oxygen in the water and the water’s transparency as long as it is informed about its penetrability. By contrast, Quantum Mechanics or the DNA theory have a very high verisimilitude: almost everything remotely relevant to them will eliminate them if either theory makes one or two false predictions.
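The inverse relation between compatibility and falsifiability can be made concrete in a small counting exercise. The binary ‘features’, the dictionary encoding of a ‘theory’ and the two example theories are purely illustrative stand-ins:

```python
from itertools import product

# An "environment" is one assignment of four binary features; here we
# enumerate all 16 possible environments.
environments = list(product([0, 1], repeat=4))

def compatible_with(theory, env):
    """A theory is a dict of feature-index -> required value; it is
    compatible with any environment on which it places no false constraint."""
    return all(env[i] == v for i, v in theory.items())

minimal_theory = {0: 1}                    # paramecium-like: one constraint
rich_theory = {0: 1, 1: 0, 2: 1, 3: 1}     # high empirical content

survives_min = [e for e in environments if compatible_with(minimal_theory, e)]
survives_rich = [e for e in environments if compatible_with(rich_theory, e)]

# The minimal theory is compatible with many environments, so it is hard
# to falsify; the rich theory is compatible with only one, so almost any
# environment will eliminate it.
assert len(survives_min) == 8
assert len(survives_rich) == 1
```

The rich theory’s single surviving environment is the counting analogue of the chapter’s claim that high verisimilitude means high empirical content and easy falsification.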
Selectionism, in short, is an enlargement of falsificationism. Falsificationism was a theory of knowledge. Its great initial strength and viability derived from the fact that it obviated the need for a specific epistemology, i.e. a correct account of how we come to know. If one has a falsificationist theory of knowledge, one asserts something about knowledge which makes it unimportant to know exactly how this knowledge is acquired. Epistemology used to be of great importance as long as people believed that there was a right way and a wrong way of acquiring knowledge. Falsificationism made knowledge of the right way superfluous. Selectionism is a theory about knowledge. The enlargement of Popper’s philosophy from falsificationism to selectionism can be expressed in Popper’s own words: ‘The book [The Logic of Scientific Discovery ] was meant to provide a theory of knowledge.’ 50 Forty years later, he said of himself that he had become totally absorbed in the study of the growth of knowledge. 51 At first, he was interested in the demarcation of knowledge from non-knowledge, i.e. metaphysics. Later, the interest shifted from the demarcation of knowledge to the growth of knowledge.
Selectionism is an enlargement of falsificationism also in a different sense. Theories do not get selected because they fit or are right, but because they are not incompatible with a niche in the environment. In falsificationism, one used to say that a theory is provisionally true until it is falsified or that ‘truth’ is an unfalsified hypothesis. Selectionism enlarges this formulation: a true theory is a theory which is compatible with the environment. It increases in truth or becomes more verisimilitudinous if it is compatible with a larger part of the environment. In this sense, Einstein’s theory is better than Newton’s because it explains more; Homo sapiens is an improvement on a cat because man is more flexible – that is, he can maintain himself in a very large variety of different environments. A theory ceases to be compatible when the environment changes or when another theory appears which is more compatible in the sense of being compatible with more. Being more compatible, it competes successfully with the older theory. In this way, we arrive at the Popperian notion of preference. In this notion, theories do not have to be justified and we do not drop a theory because we can no longer justify it. Theories are abandoned because another theory is preferred. In order to make such comparisons possible, we have to assume that all theories are commensurable. They are commensurable because they are all competing in the same environment. In Popper’s philosophy of science, for this reason, the commensurability of theories is of paramount importance. Any philosophy of science which holds that theories are incommensurable with one another and that preference for any one theory is determined by a mere paradigm change, seems to assume that there are different environments to which they are a fit.
In Popper’s Evolutionary Epistemology, progress is seen primarily as an increase in universality. A theory is better than its competitor if it is more universal.
The proper way of understanding an increase in universality is to see it as an increase in the explanatory power of the later theory. The later theory is more universal when it explains several new phenomena as well as all the phenomena which used to be explained by the older theory. Thus, Einstein’s theory is more universal than Newton’s theory not because one can deduce Newton from Einstein and not vice versa, but because the phenomena explained by Newton’s theory can be explained by Einstein’s theory plus many phenomena which could not be explained by Newton’s theory. For this to be acceptable, we have to say that theories are commensurable via the phenomena they explain, but not by a simple confrontation of one theory with another.
This whole problem arises because we have discarded representation as a criterion of truth. If we had not discarded it, we could easily argue that progress consists in a growth of accuracy of depiction. But having discarded representation as a criterion for any theory, we cannot use this argument. Take gases. The Boyle-Charles law for a perfect gas does not represent gases more ‘accurately’ than Dalton’s law of partial pressures or Graham’s law of diffusion. But it is more universal in that it explains the behaviour of gases in terms of tiny molecules, each one of which is subject to the laws of Newton’s Mechanics. The Boyle-Charles law represents progress towards greater universality because it explains not only new phenomena, but also the phenomena explained by Graham’s law and by Dalton’s law.
Kuhn objects to this notion of progress in terms of greater universality. If the two theories involved in this progress are to be commensurable, he says, we have to have a language into which at least the empirical consequences of both can be translated without loss or change. 52 In other words, he insists that in all cases there is complete meaning-variance so that one can never say that a theory explains phenomena which used to be explained by another theory. In Kuhn’s view, any theory which is alleged to explain some new phenomena as well as the old phenomena explained in an earlier theory, does not do so because the old phenomena have a meaning which is strictly and exclusively dependent on the older theory which explained them. If that theory is dropped, the same phenomena cannot be reproduced and therefore cannot be said to be explained by the new theory. If the new theory explains something similar, it is still only something similar, but never the same.
This argument derives its strength from the generally conceded fact that all observations are theory-laden. There is no such thing as a strictly neutral phenomenon. But its great weakness is that it forgets that though there are no phenomena which are completely neutral, there are always phenomena which are neutral relative to the two theories to be compared. These ‘neutral’ phenomena are not absolutely neutral, but neutral relative to the two theories. They are, of course, theory-laden by a third theory which is more or less independent of the two theories to be compared. Thus, one can compare two theories successfully without translating the empirical consequences of one theory into the consequences of the other. Or, by the same token, when one compares two theories, one obtains automatically a method for translating the empirical consequences of one theory into the empirical consequences of the other.
One can discern a relationship between explanatory power, degree of universality of a theory and verisimilitude. An increase in any of these will bring about an increase in the other two. Let us start with explanation. Explanation does not consist in the revelation of a mysterious force or an unmasking of hidden events. 53 Explanation, Popper says, is brought about when one can deduce a prognosis from an initial condition with the help of a general law. Ideally, one can explain the fact that I am now typing by saying that whenever I type the letters ‘I t y p e’ on a certain typewriter on a certain desk at a given time, I am pushing my fingers against the respective keys on that particular typewriter. Thus, one would take the writing of the letters as the initial condition, the pushing of the fingers as the prognosis, and the phrase which begins with ‘whenever …’ as the general law. This law would have very little generality and refer, in fact, to nothing but the event in question. Nevertheless, one would take the whole deduction as an explanation of ‘pushing the fingers’. However, the general law invoked has a very low degree of verisimilitude because it ceases to be true a second after the time specified in it, since it refers to a particular moment of my typing. Similarly, its explanatory power is very poor, because, in referring only to one particular moment, it cannot explain why I keep pushing my fingers down a moment later. Finally, it has an almost zero degree of universality because it contains specific reference to a time and a place and could not apply to a different typewriter. As soon as the ‘general’ law is enlarged to become more general, the verisimilitude will increase and the explanatory power will increase. Hence, we see that in our desire to gain more knowledge, we are seeking more verisimilitude and an increase in the generality of the laws or generalisations we employ. 
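The structure of Popper’s deductive model of explanation described above (initial condition plus general law yields the prognosis) can be caricatured in a few lines of code. Everything in the sketch below is my own illustrative encoding of the typewriter example, not anything found in Popper; it merely shows why the ‘general’ law of the example has almost zero universality.

```python
# Toy sketch (illustrative assumption, not Popper's own formalism) of the
# deductive model of explanation: a prognosis is explained when it can be
# deduced from an initial condition with the help of a general law.

def explains(law, initial_condition, prognosis):
    """The 'law' is modelled as a function from conditions to predicted
    outcomes; the deduction succeeds if applying it to the initial
    condition yields exactly the prognosis."""
    return law(initial_condition) == prognosis

# The 'general' law of the typing example: whenever these letters are
# typed on this typewriter at this desk at this time, the fingers push
# the keys. It names a specific time and place, so its universality is
# almost zero.
narrow_law = lambda cond: (
    "fingers pushed" if cond == ("typing 'I type'", "this desk", "now") else None
)

# The deduction goes through for the one event the law mentions...
assert explains(narrow_law, ("typing 'I type'", "this desk", "now"), "fingers pushed")
# ...but change any specific detail and the law explains nothing at all.
assert not explains(narrow_law, ("typing 'I type'", "another desk", "now"), "fingers pushed")
```

Enlarging the law (dropping the reference to a particular desk or moment) would make it apply to more conditions, which is exactly the growth in universality, explanatory power and verisimilitude the paragraph above describes.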
Confronted by a choice between two laws, we must naturally prefer the more general one and thus opt automatically for the one which has more verisimilitude. In this sense, the desire for knowledge contains an inbuilt sense of direction and indicates which of two competing theories is to be preferred. The preference does not have to be made in response to a straight falsification. It is, more often, the result of a comparison between theories and a preference for the theory which has greater universality. Since the rejected theory has less verisimilitude than the preferred theory, we call the rejection an error elimination. In this sense, progressive error elimination will lead to greater verisimilitude and an increase in universality.
With the wider formula of selectionism rather than with narrow falsificationism, we establish the continuity of the growth of knowledge with the evolutionary process in a neo-Darwinian sense. Popper here offers a philosophy of science which does not depend for its truth on a study of the history of science; but which, on the contrary, makes the writing of the history of science possible. The formula takes account of the fact that the growth of knowledge is a historical phenomenon and that that growth tends towards a goal, even though it can never actually reach it. Finally, it takes account of the fact that progress is not an accumulation of knowledge. Any later theory, indebted as it is to an earlier theory, is commensurable with an earlier theory in the sense that both theories are tentative solutions of a problem and that, in the course of history, the problem itself is being transformed. There are no closed systems here and no language games which succeed one another in a higgledy-piggledy, ahistorical fashion. Science, in Popper’s philosophy of science, is a historical phenomenon. 54 In the early stages, the selection of valid hypotheses is done by the environment. In the later stages, provided the social conditions are suitable, it is done by conscious criticism. The criticism is the criticism of human minds which are, in turn, the products of natural selection. Thus, there is no need to define in advance what sort of entity the human mind might be.
The final and perhaps greatest merit of the formula consists in the way in which P → TS → EE → P explains that the growth of knowledge, though undetermined and unplanned and not designed, nevertheless, by the sheer accumulation of error eliminations, moves in the direction of truth. The strategy of the argument is perhaps not new. It is certainly reminiscent of the old argument of the invisible hand which, in the absence of conscious and willed design, nevertheless leads towards an optimisation of economic returns. At any rate, in the eighteenth century this argument was first used in the field of economics when Adam Smith demonstrated that one need not follow a plan in order to bring about the achievement of a design. Darwin, too, used this argument. He was not, as he kept insisting, arguing against design. The sheer constraints of the environment upon evolving organisms were bound to produce design. He attacked, however, the argument from design – that is, the belief that since there was design in evolution, there must have been a plan, a divine Providence or some other kind of preconceived determination of design. Neo-Darwinists have therefore been able to characterise evolution as a process which, though lacking in intentionality and foresight, is nevertheless creative and productive of design. This results from the constraints of the environment upon chance mutations. The Popperian formula for the sequential growth of knowledge is the only description of the growth of knowledge which shows that despite the absence of the planned accumulation of sense observations (the sort of accumulation Bacon and his countless followers had envisaged), the progression moves nevertheless towards increasing truth about the world.
If there were to be a debate as to whether Kuhn (see end of chapter 4 ) or Popper has done for science what Darwin did for biology, Popper would win hands down. For that matter, though it was Kuhn, as we have seen, who complained about ‘resistance’ to his theory of science in terms which are almost reminiscent of the way in which Freud used to complain about resistance to his theory of infantile sexuality, it is Popper – not Kuhn – who has encountered a certain amount of resistance. Resistance to Popper can be explained in institutional terms and has nothing to do with the actual content of his theories. At the time when Popper could have been accepted, philosophers in Britain were dominated by the linguistic philosophy of Ryle and Austin; on the continent, Heidegger and Sartre occupied centre-stage. In America, they were looking either towards Britain or towards the continent or came, during the Second World War, under the influence of Carnap and the Vienna Circle’s Positivism and inductivism which seemed to follow so well upon American pragmatism and operationism. Scientists, though the main beneficiaries of Popper’s philosophy, are rarely interested in philosophical questions. Those who are, have acknowledged the liberating effect of Popper’s philosophy of knowledge on their work. 55 With Popper’s philosophy, they no longer have to wrestle with the discrepancy between their practice – in which theories are given priority and observations are seen to be theory-laden – and the philosophy of Positivism, Baconian or other, which tells them that they ought to start with observations and with nothing but observations. But the liberating effect of Popper’s philosophy on philosophy-minded scientists was offset after the Second World War by the new generation’s barrage of Marxism, neo-Marxism, revisionist Marxism and structural Marxism.
To Marxists of all persuasions who have established a real corner in sociology and the sociology of knowledge, Popper’s philosophy appears indistinguishable from Positivism because, like Positivism, it is concerned with knowledge which claims to be knowledge of a real world. Throwing Popper in with the Positivists is like saying that tablespoons are like elephants because neither can climb trees! Marxists, however, whatever their particular colour, are committed to seeing all knowledge as an ideology and confine themselves therefore to those aspects of knowledge which can usefully be explained as a function of a social structure rather than as assertions about the real world. Those aspects of knowledge which cannot be usefully so explained, are forced by them into a Procrustean bed and assimilated to knowledge which can be explained as a function of social structures. Either way, Marxists and Marx-inspired social scientists cannot afford to distinguish between Popper and Positivism because such a distinction would be too damaging to their initial commitment. Here again, Popper is meeting with enormous resistance.
The Bipartite History of Science
In the Popperian view of the growth of knowledge, the growth consists of two separate movements which interact. There are, first, chance mutations (in the case of evolving organisms) and conscious theories or conjectures about the world (in the case of human knowledge). Second, these tentative solutions are incessantly exposed to selective pressures by the environment. Those which are a ‘fit’ are selected for survival and the others are eliminated as ‘errors’. These are blind conjectures and selective retentions. The element of chance and blindness is present in the mutations and the conjectures. But the selective process is not governed by chance. On the contrary, the selection by the environment of ‘correct’ or ‘fitting’ mutations is the anti-chance factor in evolution as well as in the growth of knowledge. If evolution and the growth of knowledge are continuous, then one must expect to find that the growth of knowledge, like evolution itself, is divided into two parts.
This conclusion has a bearing on the history of science and on the way in which it ought to be studied and written. It forces us to accept that there are really two histories of science. There is, first, the history of the chance mutations or conjectures and inventions; and, second, the history of the selective process. Agassi has convincingly and, I think, conclusively shown that the writing of the history of science depends on the vagaries of the philosophy of science. There are as many histories of science, he seemed to be saying, as there are philosophies of science. ‘The actual critical process [of the growth of knowledge]’, P.K. Feyerabend wrote, ‘is very complex and has never been analysed to everyone’s satisfaction. But of one thing we can be sure: experience plays a very fundamental role in it.’ 56 The complexity of the process can now be somewhat reduced – or, at least, be divided into two parts. The first part remains, undeniably, incredibly complex. But the second part, the selective process, can be shown to be a fairly reasonable, not to say rational, story. The first part consists of the intricate by-ways of inventiveness. The second part consists of the rational reasoning which leads to error elimination and selection. When we thus divide the history of science into two parts to correspond with the chance factor and with the anti-chance factor, the story becomes less complex and less mysterious.
Broadly speaking, the distinction between the two parts corresponds to the distinction between conjectures and refutations. In the first part, there will be all the stories of how conjectures were being made. In the second part, there will be the stories of how they were refuted so that the second part will also show which conjectures survived because they failed to be refuted. I would all the same hesitate to use these labels for part one and part two. Part one must contain a great deal of general history as well as biography and will show much more than the formulation of conjectures. Part two will certainly not just be the story of attempts at refutation. It will contain stories of doubts and of the weighing-up of competing conjectures because error elimination, rather than straight falsification, will be the content of most of the stories it contains. If one has to give names to the two parts, I would suggest that part one be called the story of inventions – to correspond with the story of mutations in biology; and part two, the story of selective retentions – to correspond with the story of natural selection in biology.
The question of names apart, it is essential to realise that Popperian philosophy of science enables us to make the distinction between the two parts of the story. This is the whole of the contribution the philosophy of science makes to the history of science. Once the distinction is made, the histories in part one and in part two can be written as all histories are written. They have to depend on general laws so that single events can be located and linked together. But these general laws do not have to be general laws of scientific method or general laws about the behaviour of people labelled ‘scientists’. They are judiciously collected and reasonably examined general laws and one does not need a special philosophy of science to write the stories of part one and part two. The philosophy of science one needs in order to write the history of science has taught us that the history of science must come in these two interacting parts. The distinction between the two parts is the specific and necessary contribution of Popper’s philosophy of science to the writing of the history of science. Once this is established, the historian of science can avail himself of any general law or any set of general laws as his judgment thinks fit. The stories in part one and the stories in part two will themselves not be relative to any further philosophy of science, and not even to Popper’s philosophy of science. It is the distinction between the two parts and the mode of their interaction which is relative to Popper’s philosophy of science, not the contents of part one and part two.
The distinction between the two parts must not be confused with the well-established distinction between the internal and external history of science. The internal history of science is the story of experiments and theories and debates about theories. The scenario is fairly rational and the protagonists are always scientists. The external history of science includes the psychological and sociological background, the institutional conditions which make science possible, as well as any other factor which may or may not be relevant. Whatever the uses of such a distinction are, it clearly does not run parallel to the distinction between the two parts here proposed. Internal history would, for example, include conjectures and inventions, provided they are of a ‘scientific’ nature, and exclude them only when they are, for instance, hermetic, alchemical or Neoplatonic. It would also include the process of error elimination and selection. The subjects of internal history are thought-contents and their logical relations. External history, on the other hand, would include inventions only when they are of a non-scientific character, and would deal mainly with the non-scientific determinants of science and its growth such as psychological pressures, economic conditions and social institutions. It would concern itself with the sociology of science without necessarily attempting to reduce the internal history to the external history. It describes the causally related social, psychological and other non-cognitive factors which influence the development of science. The distinction between internal and external is probably useful, though perhaps a trifle scholastic. If one considers that quite important decisions to select or adopt a theory can be the result of events in external history, one must conclude that the distinction is somewhat artificial and not all that enlightening. However, it is well established.
In 1970, it seemed new: E. McMullin 57 used the distinction, but used the words ‘HSi’ and ‘HSii’ for internal and external, respectively. Whittaker’s book on ether, McMullin said, is HSi (internal history) and P. Williams’s book on Faraday is HSii (external history). Since then, the distinction and the labels ‘internal’ and ‘external’ have become common usage. 58 However common the usage, there is a divergence of meaning in the terms as used by different authors. For Kuhn, internal history of science is concerned with the substance of science. It can be studied if the historian sets aside the science he knows and relies on the journals and textbooks of the period, thus leaving out the innovators who changed the direction of science. External history is concerned with the activity of scientists as a social group. Mary Hesse’s use of the distinction is quite different. Lakatos 59 uses the terms in yet another sense. Internal history, in his mind, is the rational reconstruction of the arguments used; external history, by contrast, is concerned with the psychology of invention. In his terminology, ‘external’ refers to the actual history of science and ‘internal’ to the ideal history of science. There would be no need to argue with people who find the distinction useful, were it not for the fact that it is often believed that external history is empirical while internal is not. A historical narrative, no matter what its subject is, is ‘empirical’ in the sense that it uses evidence. It is ‘non-empirical’ or transcends experience in that it employs meta-historical general laws for keeping its particular events lined up and linked. This is as true of any internal history as of any external history and whatever the distinction between the two, it does not correspond to the distinction between empirical and non-empirical.
Scholastic or not, the distinction between external history and internal history has nothing to do with the distinction which follows from Popper’s Evolutionary Epistemology, between the inventive and the selective parts of the scientific enterprise. Writing the history of science in two parts, let us first indicate all those events which belong to the process of invention and conjecture. Here, we have a bewildering mass of evidence. It ranges from the psychoanalysis of scientists, of their dreams and fantasies, to the prevalence of traditional metaphors and religious myths. Kepler was devoted to sun mysticism and derived his imagery and his determination from his conviction that the sun must be in the centre. Harvey was obsessed by circles, circular motion, the ubiquity of cycles. At the same time, he lived in an age when pumps were being tried and used all over the place. Here, we find a technological determinant of his conjecture that the heart must be a pump. The technology, in turn, was rooted in the economics of the seventeenth century which, in turn, was linked to the politics and social developments of his age. Galileo received his training in a school of design 60 and this may have helped him to turn his mind away from the humanistic philosophy of the age of the Renaissance in order to read in the ‘book of nature’ – a metaphor which was also widely used by humanists who read that book in a very different way. Historians writing part one of the history of science have rightly wondered what might have caused Newton to conceive the possibility of action at a distance. According to some, his early childhood separation from his beloved mother was crucial: during the separation, he became familiar with the experience of longing for his mother who was a long way away. 61 Alternatively, it is also possible that we should here remind ourselves of the enormous power of astrological thought. 
Newton may well have derived the idea of attraction over a distance from his knowledge of astrology which had taught that distant stars can influence human destiny. With the causative role of early childhood, we are into psychology; with the causative influence of astrology, we are into the history of ideas. And who knows whether Virchow did not owe a debt for his theory about cells to his political convictions about self-governing republics? Maxwell was inspired, so it seems, by statistical calculations performed by a French sociologist who tried to cope with demographic problems. 62 Leonardo may have derived his courage to dissect human bodies from his observation that other people, too, had broken the limits of convention and decency. If an archbishop can conspire to murder the Medici brothers during High Mass in front of the main altar of the cathedral, why should I, he may have reasoned, refrain from dissecting dead bodies? 63 Einstein lived in an age of great social and political upheaval. One of his close friends from Zurich went back to Vienna to murder the mayor of that city. 64 Obviously, tradition was not being observed and some foreigners in Switzerland, at that time, were given to daring experiments of thought – a point charmingly made in Tom Stoppard’s Travesties .
The history of these inventions and conjectures is limitless and irrational. There are no boundaries, no methods. One never knows what belongs to the history of science here and what does not. The history of the Hermetic tradition in the sixteenth century, though superficially part of the history of magic and obscurantism, nevertheless overlaps with the history of scientific conjectures, as Frances Yates has beautifully shown. The history of metaphysics, as Burtt has demonstrated, is part of it, though linked to an older, non-scientific system of thought. Paracelsus may not have been a scientist, but without him, as Debus has shown, the story of science would have been very different. Magic, sectarianism, mythology, alchemy – all overlap with inventions which eventually found their way into science, as is clearly recognised by Mary Hesse in her Revolutions and Reconstructions in the Philosophy of Science . 65 And as Koestler in his ominously entitled The Sleepwalkers has shown, the inhibitions and depressions of Copernicus may have been the psychological roots of his determination to show that the earth is not at the centre of the planetary system. Also belonging to part one are Merton’s famous theories on the connection between sixteenth-century Puritanism and science, as well as the ideological considerations which prompted the founders of the Royal Society apologetically to espouse Bacon’s inductivism as the paragon of scientific progress. 66 Part one further encompasses the stories about Malthus’s influence on Darwin, Manuel’s book on Newton, the stories of Kekulé’s dream and of Poincaré’s streetcar, as well as the story of the peculiar personal friendships of the members of the Phage group 67 and Watson’s Double Helix . Popper himself gave a good description of part one:
The initial stage, the act of conceiving or inventing a theory, seems to me neither to call for logical analysis nor to be susceptible of it. The question how it happens that a new idea occurs to a man – whether it is a musical theme, a dramatic conflict, or a scientific theory – may be of great interest to empirical psychology; but it is irrelevant to the logical analysis of scientific knowledge. 68
Popper here is saying that the ‘how’ of part one is distinct from the selective process described in part two, but he should not be interpreted as meaning that part one is not part of the history of science.
There is no need to multiply examples. This part of the story – the chance part, the blind conjecture part – is untidy. There is no boundary to it and one can write it only if one does not neglect the nooks and crannies of private obsessions, strange beliefs, and the wider pressure of politics, technology and sociology.
If part one is concerned with the inventions and the reasons for the inventions, whatever they might be, it will in all likelihood become clear that the inventions and conjectures are not made entirely at random. First, they are more likely to be made under social conditions in which knowledge is not pre-empted for social bonding and is thus released from the bondage in which such use keeps it. Second, in any one society, there is a great likelihood that inventions should be couched in terms of available metaphors or correspond to the deeper psychic currents in the minds of individual scientists. These correlations can be studied and classified and they force one to the conclusion that the inventions and conjectures are not absolutely random, but only random relative to the selective procedure (i.e. to part two of the story). When one is looking at part one, the conjectures are not random, or at least not very random, but tend to cluster around major themes represented by the cultural context or the psychology of the people involved. The inventions are not rational and are not made in pursuit of reason or any correct method because there is no correct method known. But this does not mean that they are arbitrary or random.
The importance of psychological and metaphysical motivation is well presented by Gerald Holton. 69 The foundations of Quantum Mechanics were laid under metaphysical auspices because Planck was fascinated by the Absolute. Einstein believed that God does not play dice; Niels Bohr was dominated by the vision of the complementarity of Yang and Yin. In Descartes’s mind, inertia was somehow linked to the immutability of God, 70 and Kepler was deeply immersed in solar mysticism. 71 These examples and many others show that inventiveness is not random in relation to the cultural context, even though it must be clear that this absence of randomness does not constitute a rationale for scientific discovery: for in part two of the story, this inventiveness must always appear as random.
The second part – the story of the anti-chance part, of the selective process – is much easier to write. Harvey may have been inspired by the technology of pumps and obsessed by circles and mythical formations. But the manner in which his theory was selected for retention and Galen’s theory eventually crowded out is a process of pure rationality. Here, the non-chance factor is paramount. It is inconceivable that Harvey’s theory should not have been selected and that a false theory, in competition with it, should have retained the attention of scientists. Just as the environment selects for retention and survival those organisms whose populations’ gene pool stores ‘correct’ information, so the scientific environment selects for temporary retention those theories which fit experience. The process may not always be instantaneous, but it is inevitable. This does not mean that truth will out and that one only has to sit back and wait until truth makes itself manifest and wins the day. Nothing could be further from the truth. One first needs the irrational, accidental, unplanned part of the story – the conjectures. Only conjectures which have actually been made can be selected. If they are not made, the truth will not out. But once they have been made, once conjectures are offered for competition and comparison, the truer or more verisimilitudinous ones will survive.
Popper’s early falsificationism was not rich enough to be used as a meta-historical theory for writing the history of science. Popper himself, before he was an evolutionist and when he was still confined to falsificationism in the narrow sense, wrote, as the remark quoted above shows, that the events of part one – the psychology of imagination and all social pressures, etc. – are not part of the history of science. The Logic of Scientific Discovery, first published in German in 1934, was therefore an unhistorical book or, as Lakatos once complained, ‘highly ahistorical’. 72 In the context of falsificationism, this is completely justified. History cannot prove the truth of a philosophy of science – so there was no need to refer to it. The reason why history cannot be used to sustain a philosophy of science has nothing to do with the fact that one cannot derive an ‘ought’ from an ‘is’. It is due to the fact that history itself has to be written and that one cannot write it without a philosophy of knowledge. The Logic of Scientific Discovery is, in this sense, a book with an infinitely better strategy of argument than Kuhn’s Structure of Scientific Revolutions, which relies almost wholly on history. Moreover, falsificationism by itself would yield only part two of the history of science – and part two without part one is incomplete as a history of science. There was therefore every reason, pace Lakatos, why The Logic of Scientific Discovery was an unhistorical book and why, in spite of many subsequent editions, it has remained one.
Nevertheless, the distinction between parts one and two which follows from Popper’s Evolutionary Epistemology is foreshadowed in falsificationism. It runs parallel to Reichenbach’s distinction between the ‘context of discovery’ and the ‘context of justification’. In Reichenbach’s world, the former was of no interest. People who paid attention to it were judged guilty of ‘the genetic fallacy of psychologism’. Not being an evolutionist, Reichenbach could only imagine that people who showed an interest in the genesis of scientific ideas thought that the ‘how’ of an idea amounted to a justification of the idea. It did not occur to him that the ‘context of discovery’ might indeed be the story of how ideas come into being without those ideas being ‘justified’ just because they have come into being; and that the ‘context of justification’, rather than consisting in attempts at proving the truth of the ideas which have come into being, might consist of the story of eliminating the false ones.
Evolutionary Epistemology has clarified the distinction. As D.T. Campbell writes, the history of science is basically a ‘descriptive epistemology’. 73 It is concerned with the question of how people proceed when they acquire knowledge. Traditionally, philosophers have taken little interest in this question because the ‘how’ by itself is no guarantee of correctness. But when one looks upon scientific knowledge as knowledge which is always open to criticism, the ‘how’ (i.e. the method of actual invention) is highly significant because it is that part of the story which provides the material upon which the selective process of part two is to be exercised. The ‘how’ is relevant and important, even though the question whether the actual method of invention is impeccable or not is neither here nor there. The ‘how’ merely shows how knowledge is invented in order to be exposed to the critical selective process described in part two. When we think of science as progress, we can see that its history comes in two parts: there is, first, the story of theory-formation; and, second, the story of theory-appraisal. 74
The Consequences of the Failure to Grasp the Bipartite Nature of the History of Science
The failure to grasp the bipartite nature of the history of science has had some curious results. If at first there was a reluctance to attribute any real importance to part one and a tendency to dismiss interest in it as ‘the genetic fallacy’, the pendulum seems now to have swung in the opposite direction. There are many philosophers who make the opposite mistake and consider part one the most important part of the story. Since part one, correctly understood as what Gerard Radnitzky calls ‘theory-formation’, does not contain the story of ‘theory-appraisal’, and since they know that appraisal and evaluation are an important part of the history of science, these philosophers have made many different attempts, mistaking part one for the whole of the enterprise – a sort of intellectual synecdoche, as it were – to show that part one, too, contains efforts at appraisal.
Let us start with the simplest of these attempts. It is often argued that there are rules for research methods and that the story of theory-formation should show how and whether these rules have been followed. 75 There is nothing wrong with the suggestion as such, except that it is unnecessary. Moreover, there is a wide divergence of opinion as to what ‘rules’ have been, or ought to have been, followed. If one consults Einstein’s description of how he came by some of his best ideas, one will realise how wide of the mark the rational ‘rules’ suggested by other writers on the subject are:
What precisely is ‘thinking’? When at the reception of sense-impressions, memory pictures emerge, this is not yet ‘thinking’. And when such pictures form series, each member of which calls forth another, this too is not yet ‘thinking’. When, however, a certain picture turns up in many such series, then – precisely through such return – it becomes an ordering element for such a series, in that it connects series which in themselves are unconnected. 76
Such an account of method is very different from the ‘rules’ culled by Noretta Koertge from Robert Pirsig’s advice as to how to maintain a motorcycle in his Zen and the Art of Motorcycle Maintenance. These rules state that, having chosen an auspicious day, one should first look at the obvious; next, at the probable; and so forth. The point is not whether Einstein and Pirsig agree; it is that Noretta Koertge is wrong in thinking that it matters whether one follows ‘rules’ at all. Thinking only of part one, she cannot entertain the possibility that the history of science can be understood, let alone written, without reference to rules which ensure that the outcome of the enterprise is sound. True, the attempt to bring Zen into play is a very sophisticated advance over Bacon’s recommendation that one ought to accumulate observations until they tell their own story. The wisdom of Zen and the folly of Baconian inductivism are tragically underlined by the story of the discovery of the molecular structure of DNA. While Rosalind Franklin was following Bacon’s rules and was getting nowhere, Crick and Watson were more relaxed and stumbled upon the great conjecture. 77
The insistence that the story of theory-formation should show which rules have been followed and that rules have been followed derives directly from the failure to grasp that ‘appraisal’ is in part two. If part two is neglected or not recognised, then it would indeed be important to show that theory-formation has proceeded according to rules – for if no rules are followed, one could not distinguish between good and bad theories. And if no rules are followed, there is at least a demand that theories should be shown to have been induced from observation of the environment so that they can be seen as a rational response to the environment and distinguished from irrational responses to the environment.
The distinction between part one and part two was first clearly made by Darwin, for organic evolution – for this is, in essence, the meaning of ‘natural selection’ as distinct from artificial selection or breeding. It was later applied to the evolution of knowledge by Popper, where it is seen to operate as critical and conscious selection. Without the distinction, one must always fall back on the notion that theory-formation at least ought to pre-select what comes finally to be offered for rational appraisal and hence ask oneself what the rules for pre-selection are. For when there is no distinction, one could not tell the difference between good and bad theories unless one can see that some theories are formed according to certain rules and others are not. If, however, the vital distinction between formation and appraisal, random variation and natural selection (or, in the growth of knowledge, between wild invention and critical selection), is grasped, it becomes clear that there is not the slightest need for rules and not even for pre-selection on the theory-formation level.
More spectacular than attempts at establishing rules of discovery – attempts which are not basically different from Bacon’s advice that one ought to proceed inductively and systematically and collect observations to make sure of the soundness of the results – are the efforts by Feyerabend and Lakatos. Feyerabend, taking part one for the whole, took one close and careful look at it and sized it up correctly. There are no rules in part one. Therefore, he recommended, there ought to be no rules. The logical hiatus between the ‘is’ and the ‘ought’ may be forgivable when one considers that part one is full of stories of how scientists made true discoveries – or rather, how they made discoveries and stumbled upon theories which later turned out to be true. Somehow or other, part one tells us, science gets there. We know, though Feyerabend neglects this, that science gets there because of the stories in part two. Feyerabend simply looks at part one. He sees that, among other things, by hook or by crook, by luck or by accident, some of the inventions in part one seem to be the right ones. And so he recommends that we simply allow anarchy to reign freely. Part one contains inventions which eventually form part of knowledge because the events of part two tell how they came to be selected. Looking only at part one, Feyerabend thinks it is the whole story. Since part one must also contain inventions which turn out to be true, he suggests that there is no point in introducing rules for distinguishing between right inventions and wrong inventions. In chapter 18 of his Against Method, 78 he happily declares that all boundaries between science and non-science are dissolved because in part one there are indeed no boundaries. Feyerabend reminds one of the infamous Inquisitor in the Albigensian Crusade at the beginning of the thirteenth century. When the King of France had surrounded one of the cities held by heretics, he asked the Inquisitor for advice as to what to do next.
The Inquisitor advised that the king have all inhabitants killed indiscriminately: ‘God,’ he added, ‘will recognise His own.’ Similarly, Feyerabend advises that we should happily invent to our heart’s content. Somehow or other, God will select the true inventions and consign the wrong ones to hell. With this advice, Feyerabend rejects not only Popperian falsificationism, but also Popperian selectionism.
Apart from the basic misunderstanding which consists in taking part one for the whole of the story, there is also a fundamental error of judgment in Feyerabend’s advice. Alchemy, magic, witchcraft, astrology, Hermetism, Neoplatonism – all play a part in the story of part one. The reason why they do and why they have often proved so useful in the growth of knowledge is that, in part one, they are always open to criticism. The story of how they were criticised is, however, in part two. If one looks at part one only, one tends to forget that. Where astrology and magic, Neoplatonism and Hermetism occur, they mostly occur in a social context in which they are not only useless for the growth of knowledge, but directly contrary to it. For the most part, these so-called systems of knowledge occur in societies or communities which protect them artificially from criticism and error elimination. Acolytes and aspirants have to be ‘initiated’: they are usually sworn to silence and secrecy, made to promise obedience to authority, and are often kept in seclusion so that discussion and contact with alternative knowledge is minimised. In these places, these pieces of knowledge are accepted dogmatically and are, therefore, not really pieces of knowledge. Feyerabend, in wiping out the distinction between science and witchcraft, is neglecting the fact that what makes witchcraft so intolerable is not the witchery in itself, but the fact that in most cases it is practised dogmatically. When witchcraft or astrology or alchemy were espoused by men like Paracelsus, who lived in an environment in which all knowledge was sooner or later exposed to criticism, witchery and alchemy did not really matter: exposed to criticism, they eventually withered away. What matters in all these cases is the mode of knowledge, not the content of knowledge. There is no harm in trying a bit of alchemy if one is prepared to consider rational criticism of it.
But if one disregards criticism, then even Newtonian Mechanics becomes non-knowledge. ‘What does mark a man’s beliefs as prejudices and superstitions,’ Stephen Toulmin writes, ‘is not their content, but the manner of holding them.’ 79 Steven Weinberg begins his The First Three Minutes 80 with some critical remarks about the Edda myth of origins. Gary Zukav intersperses his splendid popularisation of modern physics with comments on the similarity of physics to various Buddhist doctrines. 81 I think that both authors somehow miss the point. There remains always a very real difference between the Edda and Buddhism, on one side, and our modern knowledge, on the other. It does not matter whether the contents of these mythical stories are identical with the content of our modern knowledge or not. What does matter is that the myths are believed in dogmatically and are not exposed to criticism, whereas our modern knowledge is undogmatic and constantly exposed to criticism.
It is significant that Feyerabend, before rejecting Popperian falsificationism, did not stop to consider selectionism and Popper’s Evolutionary Epistemology which led to the bipartite division of the scientific enterprise. His concentration on part one and the intellectual synecdoche by which he takes the part for the whole, result from his failure to appreciate the significance of Popper’s evolutionism. This failure is inexcusable, for he wrote long after the appearance of Popper’s works on Evolutionary Epistemology.
There is more than just a failure to take notice of Popper. Feyerabend takes care to barricade himself behind an assumption which makes it impossible for him to take evolution into account and consider its implications. As we have seen, evolution establishes hypothetical realism – for there has to be a real world which does the selecting. In knowledge, theories are selected or retained after clashes with that real world. If one does not consider evolution, the difficulties of establishing that there is such a real world are well-nigh insurmountable. In not considering evolution and in thus disregarding hypothetical realism, Feyerabend glories in the proclamation that there is no ‘underlying substance’, 82 that all we ever know is what theories tell us or what various styles of art depict – there is no opportunity to test theories or styles against an ‘underlying substance’ and thus distinguish the veracity or verisimilitude of one from another. Had he taken evolution into account, he would have noticed that we could not be here, as the result of natural selection, had there not been an underlying substance to do the selecting. It is perfectly true, as he says, that we can have no knowledge of that underlying substance as such, or as it is in itself. But it is not true, as he concludes, that there is none. If there were none, the particular interaction which we describe as natural selection by the environment could not have taken place and Feyerabend would not be here to deny the interaction. His entire epistemological anarchy falls to the ground as soon as one takes evolution into account. It can sound faintly reasonable only if one believes that we were put here by God, created on the sixth day by a divine will. In this case, hypothetical realism would indeed be without foundation and one could not have confidence in the natural selection of organisms or the critical selection of verisimilitudinous theories.
One would then be left with all sorts of pieces of knowledge about witchcraft and alchemy and Quantum Mechanics without a viable way of choosing between them. In such a situation, ‘anything goes’, as Feyerabend would have it. But one can only suppose that for a philosopher who believes that man was suddenly put on earth on the sixth day of creation, anything goes indeed!
A similar and similarly inexcusable disregard of Popper’s evolutionism is to be found in Lakatos. Lakatos shared with Feyerabend the view that part one is the whole of the history of science – the only view the two had in common. I understand that Feyerabend and Lakatos were planning to write a book on rationality together and that Lakatos was prevented by other commitments from carrying this plan out. When one considers the enormous differences between Feyerabend’s and Lakatos’s rejections of Popper’s falsificationism, the mind boggles when one tries to imagine what that book might have been like.
More realistic than Feyerabend and less willing to put his trust in divine Providence ‘to know His own’, Lakatos reasoned in a diametrically opposite direction. Since knowledge has grown and since the history of inventions and experiments is all we have, there must have been internal controls built into these inventions and experiments. Falsification, Lakatos argues, can never take place before the emergence of a better theory. 83 But under most circumstances, a new theory would not emerge before the falsification of an old theory. Why indeed, in strict and narrow falsificationism, should it? With this sort of criticism, he put his finger on one of the weak aspects of Popper’s initial, narrow falsificationism. As we have seen, narrow falsificationism does indeed present a real problem. Since all observations are theory-laden, there can be no single observation which one must take as falsification of a theory because such an observation would be dependent on a theory. If the theory it is dependent on is the theory to be falsified, one would have shown no more than that the theory in question had unforeseen consequences which turned out to be false. In that case, one is really forced to compare the theory in question with itself. If the theory the observation is dependent on is a different theory, one is rejecting the theory in question because one believes the other theory (the theory on which the crucial observation is dependent) preferable – so that one is comparing one theory with another theory. Stark falsification by the observation of a single falsifying instance is something which is very unlikely to occur. Whatever one is doing, one is comparing theories. This weakness of narrow falsificationism was spotted long before Lakatos, by Kuhn. 84 However, Lakatos threw out the baby with the bathwater because he did not realise that these weaknesses had been eliminated by the broadening of falsificationism into selectionism and by evolutionism, which told the story of the controls of inventions in part two.
Lakatos argued that inventions come in, or follow from, discrete and discontinuous chunks – which are like quanta, we might add – but that, unlike quanta, these chunks can peter out gradually and fade away. Any such chunk is a research programme. A research programme can lead either to progressive or to degenerative problem shifts. These shifts are the controls. Looking at part one and its stories of inventions, Lakatos proposed that inventions result from particular research programmes, e.g. Cartesian metaphysics, that is, the mechanistic theory of the universe. Such programmes encourage certain theories and certain experiments and discourage others. As long as such a research programme shows a progressive problem shift, it will be continued. When it begins to show a degenerative problem shift, it will be abandoned and a different programme will eventually take its place. 85 In this way, he thought that the mere story of inventions carries its own correctives in its wake. He sought to show that the history of science is at once descriptive (i.e. there have always been research programmes) and normative (i.e. these programmes produce their own controls in the form of problem shifts: if the shifts are degenerative, the programmes have always been abandoned). In this way, he avoided reliance on falsification as a controlling agent and as the mechanism which makes knowledge grow.
This argument has a strangely Hegelian ring to it. Hegel, too, sought to find the inbuilt controlling mechanisms of development. Once there is development, Hegel was saying, it cannot go wrong because it follows inherent rules which compel it to be right. In Hegel’s view, the development of thought is controlled by dialectical inferences which lead from thesis to antithesis to synthesis. Development and correct development are synonymous. For Lakatos, development consists in changes in research programmes and such development must lead to progress because we have an ability to distinguish between degenerative and progressive shifts in each research programme. Both Hegel and Lakatos tried to show that the weeding-out process is part of, or inside, the growth process. Lakatos was so confident that one can detect whether shifts are degenerative or not without further ado and without first writing a history of science, that he simply recommended that journals be closed to, and funds withheld from, scientists who pursue research inspired by programmes which show degenerative shifts. 86
Kinship with Hegel may be a merit rather than a fault. The real fault of Lakatos’s philosophy of knowledge is its failure to avoid historical circularity. We cannot know whether these shifts are degenerative or progressive unless we know what the history of science has been. If one embarks on research with a research programme in mind, one will soon see whether the problems are shifting in a degenerative or progressive direction. But one cannot establish the norm that one ought to have research programmes because history teaches us to; nor can we prove from historical observation that scientists in the past, unbeknown to themselves, have worked with research programmes and that, in so far as they have been successful, they have been successful because they worked in accordance with a research programme. This means we first have to write the history of science. Such writing cannot be done unless we first have a philosophy of science which helps us to select and link the separate events. Lakatos wanted to show that the existence of research programmes, with their built-in controls, is exhibited by history. We learn of them, he said, by studying history. Like Kuhn, he overlooked the fact that we cannot study history in order to find out which philosophy of science we ought to espouse. He tried to strengthen his argument by insisting that the history of science which would exhibit the role of research programmes was the so-called ‘internal’ history rather than the ‘external’ history. This distinction, as I argued above, is tenuous and not fruitful. 87 But even if it is maintained, not even internal history is sufficiently well established and known to dispense with an initial philosophy of science. Even an internal history of science will differ according to whether it is written from the standpoint of inductivism or conventionalism or research programmes or falsificationism. 
This conclusion implies that one has to decide upon the truth of the philosophy of science of research programmes before one writes history, even though this history is internal history only. Lakatos believed that the methodology of research programmes is an exception and that the philosophy they embody is revealed by a study of the history of science. In his calmer moments, he was quite aware of the intricacy of the whole problem he had created and explained that in order to evaluate rival logics of discovery, one has to evaluate the rival histories of science they lead to. 88 He did not even gloss over the fact that not even the internal history of science stands on its own feet: ‘the history of science,’ he admitted, ‘is always richer than its rational reconstruction.’ 89 But when the crunch came, in the end, he tried to lift himself up by his own bootstraps by claiming that rival methodologies can be evaluated by evaluating the rival histories of science to which they lead, 90 thus proving that he was not aware of the need to satisfy at least the ‘Postulate of Sufficient Variety’ discussed in chapter 2 above. He was oblivious of the fact that if that postulate is not satisfied, any effort to evaluate a methodology in terms of the history it leads to is a circular enterprise. Whichever way one looks at it, he is running round in circles – an activity doubly regrettable because so unnecessary. One look at Popperian evolutionism rather than exclusive concentration on falsification and its problems, would have led Lakatos to grasp that the scientific enterprise is in two parts and that therefore his effort to introduce ‘design’ into part one is superfluous. Design, I repeat, results from the exposition of the events in part one to the events in part two. Design, in this lap of evolution, means progress towards truth.
A Lakatosian research programme is not a single theory enunciated by somebody at a certain time so that its presence is obvious, absolute and can be located. A research programme, on the contrary, is an over-arching set of assumptions which can only be detected if one studies a longish span of the history of science. One will then discover that certain scientific enterprises have been guided by a set of problems and assumptions called a research programme. The detection of the so-called Cartesian research programme is the result of historical study, not its presupposition. The mechanistic assumption of this research programme – that the universe is a huge clockwork – put it in competition with the Newtonian research programme. The Cartesian programme, unlike the Newtonian programme, discouraged research on action at a distance. 91 My point is that only historical knowledge can establish that this was so. But history has to be written before one can study it. The view that theory-formation is guided by research programmes presupposes the writing of history; the view that theory-formation about action at a distance was guided by the presence of a particular research programme presupposes a particular version of the history of science. Long before one can even start to assess whether a problem shift is progressive or degenerative, one has to have a history in order to know whether there was a research programme and, if there was, what the research programme was.
Let us look at examples. In most histories of science, we find that the Cartesian research programme exhibited degenerative problem shifts and was, for this reason, eventually abandoned in favour of the Newtonian programme which suggested research and theories about action at a distance. There is, however, nothing absolute about this particular history of science, even though it appears in a great many books. There was, in fact, a very good and plausible alternative to it. One could, for example, link Descartes’s idea that things are shapes and that physics is part of geometry to Leibniz’s idea that things are not only shapes but shapes and forces. This view leads to a new research programme in which one seeks to explain matter by a theory of space. 92 The degenerative shifts in that research programme were strong at first but have not led to a total eclipse. Eventually, the research programme of Leibniz came to be revived by Einstein’s search for an overall theory of forces. This research programme, in which there was no room for theories about action at a distance, was enormously successful in its theory of gravitational fields in which it takes time for gravity to go from one place to another. 93 If one reads this version of history, neither the changes nor the eventual progress can be explained in terms of degenerative and progressive problem shifts. On strictly Lakatosian lines, Einstein’s views on gravity, resulting from a research programme which had shown degenerative shifts for nearly two centuries, ought to have been kept out of journals and denied funds – which, to say the least, would have been a pity!
Or consider a different example. A few years ago, Brian Easlea published a book on the history of knowledge in the seventeenth century. 94 In this book, we find that magic was a research programme and that as the seventeenth century went by, it was abandoned because of degenerative problem shifts. However, and this is the crucial point, these problem shifts are not degenerative because they led to fewer and fewer problems and to less corroboration, but because there was a shift in psycho-metaphysical awareness. Magic as a research programme had been sustained and continued to show progressive shifts as long as people accepted a stance towards their environment which Easlea characterises as ‘female submission’. When such submission gave way to ‘male-oriented appropriation’, the problem shifts in the programme became degenerative. Obviously, then, these problem shifts are not absolutely degenerative or absolutely progressive but are so only relative to a standard. In Easlea’s history, the standard was whether people wanted to sever ties with Mother Earth in order to pursue a compulsive drive to prove their masculinity and virility by exploiting Mother Earth. 95 Anybody who has read his Jung and his Toynbee or his Alan Watts, 96 let alone his Freud, will recognise the viability of Easlea’s meta-history. In reading the history which results from this meta-history, one can see that the criteria which enable us to decide whether problem shifts are degenerative or progressive depend on the history one has written, and are imported into it by the meta-history used. In some histories, the great competition was between the Descartes-Leibniz research programmes and the Newton research programme. In other histories, of which Easlea’s is a good example, the competition was between the research programmes of magic and the Bacon-Descartes-Leibniz-Newton research programme.
In the former, the criterion which determined whether a problem shift was degenerative or progressive was the question whether the attraction between bodies could be explained by the nature of bodies or not. In the latter, the criterion which determined whether shifts were degenerative or progressive was the question of which vision of nature enabled masculine men to achieve power over Mother Nature and to control her. In Easlea’s history, once magic was going downhill, there was nothing much to choose between the Descartes-Leibniz programme and the Newton programme. The real competition was between programmes which promised power over nature; in this competition, the Baconian (non-mathematical) experimental philosophy, institutionalised in the 1660s in the Royal Society, was to win. 97 None of this helps us to pass judgment on any of these versions of history; but it does show firmly that our knowledge of research programmes and their degenerative and progressive problem shifts has to be derived from our knowledge of history. Like Kuhn and unlike Popper, Lakatos cannot make a contribution to our knowledge of the growth of knowledge.
In view of the inherent difficulties in the notion of falsification when it comes to the writing of the history of science, Lakatos’s proposal that we consider the history of science as the history of research programmes has found wide support – even Popper himself has at times adopted the expression ‘research programme’. 98 There have been several histories of science which have quite successfully described developments as degenerative and progressive shifts of research programmes. These histories seem to bear out Lakatos’s contention that the dynamics of theory-change are determined not by falsification but by degenerative and progressive problem shifts in research programmes. Consider, for example, Peter Clark’s paper on atomism versus thermodynamics. 99 Until around 1880, Clark shows, the succession of atomic-kinetic theories constituted a series of progressive problem shifts. But during the last part of the nineteenth century, Clark says, the progressive shifts ceased. Hence the appearance of a new research programme – phenomenological thermodynamics – which started to show progressive problem shifts. Kuhn comments that this story sounds quite plausible, but that one could also write the history of the acceptance of the new research programme by telling how the old programme had contained doubts all along and how the new programme emerged because of these doubts – not because of the degenerative shifts in the old programme. 100 In other words, the detection of whether shifts are progressive or degenerative is only possible after one has written a history of the subject.
And still on the history of thermodynamics, John Blackmore 101 has recently cast doubt on the idea that Boltzmann was engaged in a research programme which showed degenerative shifts and has suggested that these degenerative shifts are only apparent to historians who put ‘too much stress on temporary and technical deficiencies which failed to bother people at the time.’ Whatever the views of historians, it is clear that any judgment as to whether shifts are progressive or degenerative must always come after a history has been written, never before. As I argued above, it is not only the detection and evaluation of shifts in a research programme that are consequent upon a history. Even the very presence of research programmes can only be ascertained after the history has been written. This becomes clear if one looks at the debate between the caloric theory of heat and the vibration theory of heat. If one writes the history of the debate and the eventual triumph of the vibration theory in Lakatosian terms, one would say that the caloric theory began to show degenerative shifts and that therefore the vibration theory began to establish itself. However, one can also write the history of the debate without reference to research programmes. One can simply show that the vibration theory gained ground because it received open support from atomic theory, whereas the caloric theory did not. Here, we can see the dynamics of theories as the result of a coming together of two different theories which turned out to support each other.
Popperian philosophy of science, in connecting the growth of knowledge to organic evolution in general, offers in this way a new basis for the history of science. By encouraging us to divide the history of science into two parts, it makes the whole task more manageable. True, the first part will remain difficult and untidy because it must contain reports about psychological vagaries and sociological conditions which have, in themselves, nothing to do with the story of science. But the second part, the part which deals with the selection of conjectures by error elimination, will become tidier. Here, the subject-matter is circumscribed by rational and non-chance or anti-chance pressures. Using Popperian philosophy of science, the historian of science can, ideally, divide his task into two and make, at least on the second front, a positive and rationally intelligible contribution to the growth of knowledge. The first part cannot be left out. If it were, the story would cease to make sense. But by dividing the labour and by characterising clearly the differences between the two tasks, one can set about writing the history of science with a greater measure of certainty than the scene surveyed by Agassi would lead one to suspect.