12 The Aim of Science (1957)

To speak of ‘the aim’ of scientific activity may perhaps sound a little naive; for clearly, different scientists have different aims, and science itself (whatever that may mean) has no aims. I admit all this. And yet it seems that when we speak of science we do feel, more or less clearly, that there is something characteristic of scientific activity; and since scientific activity looks pretty much like a rational activity, and since a rational activity must have some aim, the attempt to describe the aim of science may not be entirely futile.

I suggest that it is the aim of science to find satisfactory explanations, of whatever strikes us as being in need of explanation. By an explanation (or a causal explanation) is meant a set of statements of which one describes the state of affairs to be explained (the explicandum) while the others, the explanatory statements, form the ‘explanation’ in the narrower sense of the word (the explicans of the explicandum).

We may take it, as a rule, that the explicandum is more or less well known to be true, or assumed to be so known. For there is little point in asking for an explanation of a state of affairs which may turn out to be entirely imaginary. (Flying saucers may represent such a case: the explanation needed may not be of flying saucers, but of reports of flying saucers; yet should flying saucers exist, then no further explanation of the reports would be required.) The explicans, on the other hand, which is the object of our search, will as a rule not be known: it will have to be discovered. Thus, scientific explanation, whenever it is a discovery, will be the explanation of the known by the unknown.

The explicans, in order to be satisfactory (satisfactoriness may be a matter of degree), must fulfil a number of conditions. First, it must logically entail the explicandum. Secondly, the explicans ought to be true, although it will not, in general, be known to be true; in any case, it must not be known to be false even after the most critical examination. If it is not known to be true (as will usually be the case) there must be independent evidence in its favour. In other words, it must be independently testable; and we shall regard it as more satisfactory the greater the severity of the independent tests it has survived.

I still have to elucidate my use of the expression ‘independent’, with its opposites, ‘ad hoc’ and (in extreme cases) ‘circular’.

Let a be an explicandum, known to be true. Since a trivially follows from a itself, we could always offer a as an explanation of itself. But this would be highly unsatisfactory, even though we should know in this case that the explicans is true, and that the explicandum follows from it. Thus we must exclude explanations of this kind because of their circularity.

Yet the kind of circularity I have here in mind is a matter of degree. Consider the following dialogue: ‘Why is the sea so rough today?’ - ‘Because Neptune is very angry’ - ‘By what evidence can you support your statement that Neptune is very angry?’ - ‘Oh, don’t you see how very rough the sea is? And is it not always rough when Neptune is angry?’ This explanation is found unsatisfactory because (just as in the case of the fully circular explanation) the only evidence for the explicans is the explicandum itself.2 The feeling that this kind of almost circular or ad hoc explanation is highly unsatisfactory, and the corresponding requirement that explanations of this kind should be avoided are, I believe, among the main motive forces of the development of science: dissatisfaction is among the first fruits of the critical or rational approach.

In order that the explicans should not be ad hoc, it must be rich in content: it must have a variety of testable consequences, and among them, especially, testable consequences which are different from the explicandum. It is these different testable consequences which I have in mind when I speak of independent tests, or of independent evidence.

Although these remarks may perhaps help to elucidate somewhat the intuitive idea of an independently testable explicans, they are still quite insufficient to characterize a satisfactory and independently testable explanation. For if a is our explicandum - let a be again ‘The sea is rough today’ - then we can always offer a highly unsatisfactory explicans which is completely ad hoc even though it has independently testable consequences. We can even choose these consequences as we like. We may choose, say, ‘These plums are juicy’ and ‘All ravens are black’. Let b be their conjunction. Then we can take as explicans simply the conjunction of a and b: it will satisfy all our requirements so far stated.
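The logical trick here can be made explicit in a toy model. The sketch below treats statements as sets of possible worlds (an illustrative assumption, not Popper's own formalism), with entailment as set inclusion: the conjunction a·b trivially entails the explicandum a, and also has ‘independently testable’ consequences b - yet those consequences are chosen at will and bear in no way on a.

```python
from itertools import product

# Model statements as the sets of possible worlds in which they are true.
# A world is a triple of truth values for three atomic facts (a toy model):
# (sea is rough, plums are juicy, ravens are black)
worlds = set(product([True, False], repeat=3))

a = {w for w in worlds if w[0]}            # explicandum: 'the sea is rough'
b = {w for w in worlds if w[1] and w[2]}   # 'plums juicy' and 'ravens black'

explicans = a & b                          # the ad hoc conjunction a·b

def entails(p, q):
    # p entails q iff every world making p true also makes q true
    return p <= q

print(entails(explicans, a))   # the explicandum follows trivially
print(entails(explicans, b))   # 'independently testable' consequences exist
print(entails(b, a))           # yet b tells us nothing whatever about a
```

Both conditions so far stated are formally met; what is missing, as the next paragraph argues, is a universal law connecting the consequences to the explicandum.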

Only if we require that explanations shall make use of universal statements or laws of nature (supplemented by initial conditions) can we make progress towards realizing the idea of independent, or not ad hoc, explanations. For universal laws of nature may be statements with a rich content, so that they may be independently tested everywhere, and at all times. Thus if they are used as explanations, they may not be ad hoc because they may allow us to interpret the explicandum as an instance of a reproducible effect. All this is only true, however, if we confine ourselves to universal laws which are testable, that is to say, falsifiable.

The question ‘What kind of explanation may be satisfactory?’ thus leads to the reply: an explanation in terms of testable and falsifiable universal laws and initial conditions. And an explanation of this kind will be the more satisfactory the more highly testable these laws are and the better they have been tested. (This applies also to the initial conditions.)

In this way, the conjecture that it is the aim of science to find satisfactory explanations leads us further to the idea of improving the degree of satisfactoriness of the explanations by improving their degree of testability, that is to say, by proceeding to better testable theories; which means proceeding to theories of ever richer content, of higher degrees of universality, and of higher degrees of precision. [See notes 3 and 6 to selection 10 above.] This, no doubt, is fully in keeping with the actual practice of the theoretical sciences.

We may arrive at fundamentally the same result also in another way. If it is the aim of science to explain, then it will also be its aim to explain what so far has been accepted as an explicans; for example, a law of nature. Thus the task of science constantly renews itself. We may go on for ever, proceeding to explanations of a higher and higher level of universality - unless, indeed, we were to arrive at an ultimate explanation; that is to say, at an explanation which is neither capable of any further explanation, nor in need of it.

But are there ultimate explanations? The doctrine which I have called ‘essentialism’ amounts to the view that science must seek ultimate explanations in terms of essences: if we can explain the behaviour of a thing in terms of its essence - of its essential properties - then no further question can be raised, and none need be raised (except perhaps the theological question of the Creator of the essences). Thus Descartes believed that he had explained physics in terms of the essence of a physical body which, he taught, was extension; and some Newtonians, following Roger Cotes, believed that the essence of matter was its inertia and its power to attract other matter, and that Newton’s theory could be derived from, and thus ultimately explained by, these essential properties of all matter. Newton himself was of a different opinion. It was a hypothesis concerning the ultimate or essentialist causal explanation of gravity itself which he had in mind when he wrote in the Scholium generale at the end of the Principia: ‘So far I have explained the phenomena ... by the force of gravity, but I have not yet ascertained the cause of gravity itself ... and I do not arbitrarily [or ad hoc] invent hypotheses.’3

I do not believe in the essentialist doctrine of ultimate explanation. In the past, critics of this doctrine have been, as a rule, instrumentalists: they interpreted scientific theories as nothing but instruments for prediction, without any explanatory power. I do not agree with them either. But there is a third possibility, a ‘third view’, as I have called it. It has been well described as a ‘modified essentialism’ - with emphasis upon the word ‘modified’.4

This ‘third view’ which I uphold modifies essentialism in a radical manner. First of all, I reject the idea of an ultimate explanation: I maintain that every explanation may be further explained, by a theory or conjecture of a higher degree of universality. There can be no explanation which is not in need of a further explanation, for none can be a self-explanatory description of an essence (such as an essentialist definition of body, as suggested by Descartes). Secondly, I reject all what-is questions: questions asking what a thing is, what is its essence, or its true nature. For we must give up the view, characteristic of essentialism, that in every single thing there is an essence, an inherent nature or principle (such as the spirit of wine in wine), which necessarily causes it to be what it is, and thus to act as it does. This animistic view explains nothing; but it has led essentialists (like Newton) to shun relational properties, such as gravity, and to believe, on grounds felt to be a priori valid, that a satisfactory explanation must be in terms of inherent properties (as opposed to relational properties). The third and last modification of essentialism is this. We must give up the view, closely connected with animism (and characteristic of Aristotle as opposed to Plato), that it is the essential properties inherent in each individual or singular thing which may be appealed to as the explanation of this thing’s behaviour. For this view completely fails to throw any light whatever on the question why different individual things should behave in like manner. If it is said, ‘because their essences are alike’, the new question arises: why should there not be as many different essences as there are different things?

Plato tried to solve precisely this problem by saying that like individual things are the offspring, and thus copies, of the same original ‘Form’, which is therefore something ‘outside’ and ‘prior’ and ‘superior’ to the various individual things; and indeed, we have as yet no better theory of likeness. Even today, we appeal to their common origin if we wish to explain the likeness of two men, or of a bird and a fish, or of two beds, or two motor cars, or two languages, or two legal procedures; that is to say, we explain similarity in the main genetically; and if we make a metaphysical system out of this, it is liable to become a historicist philosophy. Plato’s solution was rejected by Aristotle; but since Aristotle’s version of essentialism does not contain even a hint of a solution, it seems that he never quite grasped the problem.5

By choosing explanations in terms of universal laws of nature, we offer a solution to precisely this last (Platonic) problem. For we conceive all individual things, and all singular facts, to be subject to these laws. The laws (which in their turn are in need of further explanation) thus explain regularities or similarities of individual things or singular facts or events. And these laws are not inherent in the singular things. (Nor are they Platonic Ideas outside the world.) Laws of nature are conceived, rather, as (conjectural) descriptions of the structural properties of nature - of our world itself.

Here then is the similarity between my own view (the ‘third view’) and essentialism; although I do not think that we can ever describe, by our universal laws, an ultimate essence of the world, I do not doubt that we may seek to probe deeper and deeper into the structure of our world or, as we might say, into properties of the world that are more and more essential, or of greater and greater depth.

Every time we proceed to explain some conjectural law or theory by a new conjectural theory of a higher degree of universality, we are discovering more about the world, trying to penetrate deeper into its secrets. And every time we succeed in falsifying a theory of this kind, we make a new important discovery. For these falsifications are most important. They teach us the unexpected; and they reassure us that, although our theories are made by ourselves, although they are our own inventions, they are none the less genuine assertions about the world; for they can clash with something we never made.

Our ‘modified essentialism’ is, I believe, helpful when the question of the logical form of natural laws is raised. It suggests that our laws or our theories must be universal, that is to say, must make assertions about the world - about all spatiotemporal regions of the world. It suggests, moreover, that our theories make assertions about structural or relational properties of the world; and that the properties described by an explanatory theory must be, in some sense or other, deeper than those to be explained. I believe that this word ‘deeper’ defies any attempt at exhaustive logical analysis, but that it is nevertheless a guide to our intuitions. (This is so in mathematics: all its theorems are logically equivalent, in the presence of the axioms, and yet there is a great difference in ‘depth’ which is hardly susceptible of logical analysis.) The ‘depth’ of a scientific theory seems to be most closely related to its simplicity and so to the wealth of its content. (It is otherwise with the depth of a mathematical theorem, whose content may be taken to be nil.) Two ingredients seem to be required: a rich content, and a certain coherence or compactness (or ‘organicity’) of the state of affairs described. It is this latter ingredient which, although it is intuitively fairly clear, is so difficult to analyse, and which the essentialists were trying to describe when they spoke of essences, in contradistinction to a mere accumulation of accidental properties. I do not think we can do much more than refer here to an intuitive idea, nor that we need do much more. For in the case of any particular theory proposed, it is the wealth of its content, and thus its degree of testability, which decide its interest, and the results of actual tests which decide its fate. From the point of view of method, we may look upon its depth, its coherence, and even its beauty, as a mere guide or stimulus to our intuition and to our imagination.

Nevertheless, there does seem to be something like a sufficient condition for depth, or for degrees of depth, which can be logically analysed. I shall try to explain this with the help of an example from the history of science.

It is well known that Newton’s dynamics achieved a unification of Galileo’s terrestrial and Kepler’s celestial physics. It is often said that Newton’s dynamics can be induced from Galileo’s and Kepler’s laws, and it has even been asserted that it can be strictly deduced from them.6 But this is not so; from a logical point of view, Newton’s theory, strictly speaking, contradicts both Galileo’s and Kepler’s (although these latter theories can of course be obtained as approximations, once we have Newton’s theory to work with). For this reason it is impossible to derive Newton’s theory from either Galileo’s or Kepler’s or both, whether by deduction or induction. For neither a deductive nor an inductive inference can ever proceed from consistent premisses to a conclusion that formally contradicts the premisses from which we started.
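The contradiction can be exhibited numerically. In the two-body form of Kepler’s third law that follows from Newton’s theory, T² = 4π²a³ / (G(M + m)), the ratio T²/a³ depends on the planet’s mass m, whereas Kepler asserts it is the same constant for every planet. A minimal sketch (standard values for the constants; this is an illustration, not a derivation in the text):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # mass of the Sun, kg
m_earth = 5.972e24     # kg
m_jupiter = 1.898e27   # kg

def newton_k(m_planet):
    # Newton's two-body correction of Kepler's third law:
    # T^2 / a^3 = 4*pi^2 / (G * (M + m)) -- it varies with the planet's mass
    return 4 * math.pi**2 / (G * (M_SUN + m_planet))

# Kepler's law asserts T^2/a^3 is the SAME constant for all planets;
# Newton's theory says it is not.
ratio = newton_k(m_jupiter) / newton_k(m_earth)
print(f"Jupiter/Earth ratio of T^2/a^3: {ratio:.6f}")
# The deviation (about 0.1%) is why Newton strictly contradicts Kepler,
# while recovering him as an approximation in the limit m -> 0.
```

The discrepancy is tiny for the solar system, which is precisely why Kepler’s law succeeded as an approximation; but it is a genuine contradiction, not a special case.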

I regard this as a very strong argument against induction. Here, however, I am not so much interested in the impossibility of induction as in the problem of depth. And regarding this problem, we can indeed learn something from our example. Newton’s theory unifies Galileo’s and Kepler’s. But far from being a mere conjunction of these two theories - which play the part of explicanda for Newton’s - it corrects them while explaining them. The original explanatory task was the deduction of the earlier results. Yet this task is discharged, not by deducing these earlier results but by deducing something better in their place: new results which, under the special conditions of the older results, come numerically very close to these older results, and at the same time correct them. Thus the empirical success of the old theory may be said to corroborate the new theory; and in addition, the corrections may be tested in their turn - and perhaps refuted, or else corroborated. What is brought out strongly, by the logical situation which I have sketched, is the fact that the new theory cannot possibly be ad hoc or circular. Far from repeating its explicandum, the new theory contradicts it, and corrects it. In this way, even the evidence of the explicandum itself becomes independent evidence for the new theory. (Incidentally, this analysis allows us to explain the value of metrical theories, and of measurement; and it thus helps us to avoid the mistake of accepting measurement and precision as ultimate and irreducible values.)

I suggest that whenever in the empirical sciences a new theory of a higher level of universality successfully explains some older theory by correcting it, then this is a sure sign that the new theory has penetrated deeper than the older ones. The demand that a new theory should contain the old one approximately, for appropriate values of the parameters of the new theory, may be called (following Bohr) the ‘principle of correspondence’.

Fulfilment of this demand is a sufficient condition of depth, as I said before. That it is not a necessary condition may be seen from the fact that Maxwell’s electromagnetic wave theory did not correct, in this sense, Fresnel’s wave theory of light. It meant an increase in depth, no doubt, but in a different sense: ‘The old question of the direction of the vibrations of polarized light became pointless. The difficulties concerning the boundary conditions for the boundaries between two media were solved by the very foundations of the theory. No ad hoc hypotheses were needed any longer for eliminating longitudinal light waves. Light pressure, so important in the theory of radiation, and only lately determined experimentally, could be derived as one of the consequences of the theory.’7 This brilliant passage, in which Einstein sketches some of the major achievements of Maxwell’s theory and compares it with Fresnel’s, may be taken as an indication that there are other sufficient conditions of depth which are not covered by my analysis.

The task of science, which, I have suggested, is to find satisfactory explanations, can hardly be understood if we are not realists. For a satisfactory explanation is one which is not ad hoc; and this idea - the idea of independent evidence - can hardly be understood without the idea of discovery, of progressing to deeper layers of explanation: without the idea that there is something for us to discover, and something to discuss critically.

And yet it seems to me that within methodology we do not have to presuppose metaphysical realism; nor can we, I think, derive much help from it, except of an intuitive kind. For once we have been told that the aim of science is to explain, and that the most satisfactory explanation will be the one that is most severely testable and most severely tested, we know all that we need to know as methodologists. That the aim is realizable we cannot assert, neither with nor without the help of metaphysical realism which can give us only some intuitive encouragement, some hope, but no assurance of any kind. And although a rational treatment of methodology may be said to depend upon an assumed, or conjectured, aim of science, it certainly does not depend upon the metaphysical and most likely false assumption that the true structural theory of the world (if any) is discoverable by man, or expressible in human language.

If the picture of the world which modern science draws comes anywhere near to the truth - in other words, if we have anything like ‘scientific knowledge’ - then the conditions obtaining almost everywhere in the universe make the discovery of structural laws of the kind we are seeking - and thus the attainment of ‘scientific knowledge’ - almost impossible. For almost all regions of the universe are filled by chaotic radiation, and almost all the rest by matter in a similar chaotic state. In spite of this, science has been miraculously successful in proceeding towards what I have suggested should be regarded as its aim. [See also the end of selection 7 above.] This strange fact cannot, I think, be explained without proving too much. But it can encourage us to pursue that aim, even though we may not get any further encouragement to believe that we can actually attain it; neither from metaphysical realism nor from any other source.

I

In this paper [this selection and the next] I wish to solve some of the problems, old as well as new, which are connected with the notions of scientific progress and of discrimination among competing theories. The new problems I wish to discuss are mainly those connected with the notions of objective truth, and of getting nearer to the truth - notions which seem to me of great help in analysing the growth of knowledge.

Although I shall confine my discussion to the growth of knowledge in science, my remarks are applicable without much change, I believe, to the growth of pre-scientific knowledge also - that is to say, to the general way in which men, and even animals, acquire new factual knowledge about the world. The method of learning by trial and error - of learning from our mistakes - seems to be fundamentally the same whether it is practised by lower or by higher animals, by chimpanzees or by men of science. My interest is not merely in the theory of scientific knowledge, but rather in the theory of knowledge in general. Yet the study of the growth of scientific knowledge is, I believe, the most fruitful way of studying the growth of knowledge in general. For the growth of scientific knowledge may be said to be the growth of ordinary human knowledge writ large.1

But is there any danger that our need to progress will go unsatisfied, and that the growth of scientific knowledge will come to an end? In particular, is there any danger that the advance of science will come to an end because science has completed its task? I hardly think so, thanks to the infinity of our ignorance. Among the real dangers to the progress of science is not the likelihood of its being completed, but such things as lack of imagination (sometimes a consequence of lack of real interest); or a misplaced faith in formalization and precision (which will be discussed below in section v); or authoritarianism in one or another of its many forms.

Since I have used the word ‘progress’ several times, I had better make quite sure, at this point, that I am not mistaken for a believer in a historical law of progress. Indeed I have before now [see selection 23 below] struck various blows against the belief in a law of progress, and I hold that even science is not subject to the operation of anything resembling such a law. The history of science, like the history of all human ideas, is a history of irresponsible dreams, of obstinacy, and of error. But science is one of the very few human activities - perhaps the only one - in which errors are systematically criticized and fairly often, in time, corrected. This is why we can say that, in science, we often learn from our mistakes, and why we can speak clearly and sensibly about making progress there. In most other fields of human endeavour there is change, but rarely progress (unless we adopt a very narrow view of our possible aims in life); for almost every gain is balanced, or more than balanced, by some loss. And in most fields we do not even know how to evaluate change.

Within the field of science we have, however, a criterion of progress: even before a theory has ever undergone an empirical test we may be able to say whether, provided it passes certain specified tests, it would be an improvement on other theories with which we are acquainted. This is my first thesis.

To put it a little differently, I assert that we know what a good scientific theory should be like, and - even before it has been tested - what kind of theory would be better still, provided it passes certain crucial tests. And it is this (metascientific) knowledge which makes it possible to speak of progress in science, and of a rational choice between theories.

II

Thus it is my first thesis that we can know of a theory, even before it has been tested, that if it passes certain tests it will be better than some other theory.

My first thesis implies that we have a criterion of relative potential satisfactoriness, or of potential progressiveness, which can be applied to a theory even before we know whether or not it will turn out, by the passing of some crucial tests, to be satisfactory in fact.

This criterion of relative potential satisfactoriness (which I formulated some time ago,2 and which, incidentally, allows us to grade theories according to their degree of relative potential satisfactoriness) is extremely simple and intuitive. It characterizes as preferable the theory which tells us more; that is to say, the theory which contains the greater amount of empirical information or content; which is logically stronger; which has the greater explanatory and predictive power; and which can therefore be more severely tested by comparing predicted facts with observations. In short, we prefer an interesting, daring, and highly informative theory to a trivial one.

All these properties which, it thus appears, we desire in a theory can be shown to amount to one and the same thing: to a higher degree of empirical content or of testability.

III

My study of the content of a theory (or of any statement whatsoever) was based on the simple and obvious idea that the informative content of the conjunction, a·b, of any two statements, a and b, will always be greater than, or at least equal to, that of either of its components.

Let a be the statement ‘It will rain on Friday’; b the statement ‘It will be fine on Saturday’; and a·b the statement ‘It will rain on Friday and it will be fine on Saturday’: it is then obvious that the informative content of this last statement, the conjunction a·b, will exceed that of its component a and also that of its component b. And it will also be obvious that the probability of a·b (or, what is the same, the probability that a·b will be true) will be no greater than that of either of its components.

Writing Ct(a) for ‘the content of the statement a’, and Ct(a·b) for ‘the content of the conjunction a and b’, we have

(1)    Ct(a) ≤ Ct(a·b) ≥ Ct(b).

This contrasts with the corresponding law of the calculus of probability,

(2)    p(a) ≥ p(a·b) ≤ p(b),

where the inequality signs of (1) are inverted. Together these two laws, (1) and (2), state that with increasing content, probability decreases, and vice versa; or in other words, that content increases with increasing improbability. (This analysis is of course in full agreement with the general idea of the logical content of a statement as the class of all those statements which are logically entailed by it. We may also say that a statement a is logically stronger than a statement b if its content is greater than that of b - that is to say, if it entails more than b.)
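These two laws can be checked in a toy model. The sketch below treats statements as sets of possible worlds under a uniform measure, with content measured crudely as the number of worlds a statement excludes; both the model and the content measure are illustrative assumptions, not Popper’s own formalism.

```python
from itertools import product
from fractions import Fraction

# Toy model: worlds are pairs of truth values for
# ('rain on Friday', 'fine on Saturday'), all equally probable;
# a statement is the set of worlds in which it is true.
worlds = list(product([True, False], repeat=2))

a = {w for w in worlds if w[0]}   # 'It will rain on Friday'
b = {w for w in worlds if w[1]}   # 'It will be fine on Saturday'
a_and_b = a & b                   # the conjunction a·b

def p(s):
    # logical probability: fraction of worlds in which s is true
    return Fraction(len(s), len(worlds))

def ct(s):
    # informative content, crudely: how many worlds s excludes
    return len(worlds) - len(s)

# The conjunction says more ...
assert ct(a) <= ct(a_and_b) >= ct(b)
# ... and is therefore less probable.
assert p(a) >= p(a_and_b) <= p(b)

print(ct(a), ct(a_and_b), p(a), p(a_and_b))
```

In this model a excludes two of the four worlds while a·b excludes three, and the probabilities drop from 1/2 to 1/4 accordingly: content up, probability down.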

This trivial fact has the following inescapable consequence: if growth of knowledge means that we operate with theories of increasing content, it must also mean that we operate with theories of decreasing probability (in the sense of the calculus of probability). Thus if our aim is the advancement or growth of knowledge, then a high probability (in the sense of the calculus of probability) cannot possibly be our aim as well: these two aims are incompatible.

I found this trivial though fundamental result about thirty years ago, and I have been preaching it ever since. Yet the prejudice that a high probability must be something highly desirable is so deeply ingrained that my trivial result is still held by many to be ‘paradoxical’.3 Despite this simple result the idea that a high degree of probability (in the sense of the calculus of probability) must be something highly desirable seems to be so obvious to most people that they are not prepared to consider it critically. Dr Bruce Brooke-Wavell has therefore suggested to me that I should stop talking in the context of ‘probability’ and should base my arguments on a ‘calculus of content’ and of ‘relative content’; or in other words, that I should not speak about science aiming at improbability, but merely say that it aims at maximum content. I have given much thought to this suggestion, but I do not think that it would help: a head-on collision with the widely accepted and deeply ingrained probabilistic prejudice seems unavoidable if the matter is really to be cleared up. Even if, as would be easy enough, I were to base my own theory upon the calculus of content, or of logical strength, it would still be necessary to explain that the probability calculus, in its (‘logical’) application to propositions or statements, is nothing but a calculus of the logical weakness or lack of content of these statements (either of absolute logical weakness or of relative logical weakness). Perhaps a head-on collision would be avoidable if people were not so generally inclined to assume uncritically that a high probability must be an aim of science, and that, therefore, the theory of induction must explain to us how we can attain a high degree of probability for our theories. (And it then becomes necessary to point out that there is something else - ‘truthlikeness’ or ‘verisimilitude’ - with a calculus totally different from the calculus of probability with which it seems to have been confused.)

To avoid these simple results, all kinds of more or less sophisticated theories have been designed. I believe I have shown that none of them is successful. But what is more important, they are quite unnecessary. One merely has to recognize that the property which we cherish in theories and which we may perhaps call ‘verisimilitude’ or ‘truthlikeness’ [see the next selection] is not a probability in the sense of the calculus of probability of which (2) is an inescapable theorem.

It should be noted that the problem before us is not a problem of words. I do not mind what you call ‘probability’, and I do not mind if you call those degrees for which the so-called ‘calculus of probability’ holds by any other name. I personally think that it is most convenient to reserve the term ‘probability’ for whatever may satisfy the well-known rules of this calculus (which Laplace, Keynes, Jeffreys, and many others have formulated, and for which I have given various formal axiom systems4). If (and only if) we accept this terminology, then there can be no doubt that the absolute probability of a statement a is simply the degree of its logical weakness, or lack of informative content, and that the relative probability of a statement a, given a statement b, is simply the degree of the relative weakness, or the relative lack of new informative content in statement a, assuming that we are already in possession of the information b.

Thus if we aim, in science, at a high informative content - if the growth of knowledge means that we know more, that we know a and b, rather than a alone, and that the content of our theories thus increases - then we have to admit that we also aim at a low probability, in the sense of the calculus of probability.

And since a low probability means a high probability of being falsified, it follows that a high degree of falsifiability, or refutability, or testability, is one of the aims of science - in fact, precisely the same aim as a high informative content.

The criterion of potential satisfactoriness is thus testability, or improbability: only a highly testable or improbable theory is worth testing, and is actually (and not merely potentially) satisfactory if it withstands severe tests - especially those tests to which we could point as crucial for the theory before they were ever undertaken.

It is possible in many cases to compare the severity of tests objectively. It is even possible, if we find it worth while, to define a measure of the severity of tests. By the same method we can define the explanatory power and the degree of corroboration of a theory.5

IV

The thesis that the criterion here proposed actually dominates the progress of science can easily be illustrated with the help of historical examples. The theories of Kepler and Galileo were unified and superseded by Newton’s logically stronger and better testable theory, and similarly Fresnel’s and Faraday’s by Maxwell’s. Newton’s theory, and Maxwell’s, in their turn, were unified and superseded by Einstein’s. In each such case the progress was towards a more informative and therefore logically less probable theory: towards a theory which was more severely testable because it made predictions which, in a purely logical sense, were more easily refutable.

A theory which is not in fact refuted by testing those new and bold and improbable predictions to which it gives rise can be said to be corroborated by these severe tests. I may remind you in this connection of Galle’s discovery of Neptune, of Hertz’s discovery of electromagnetic waves, of Eddington’s eclipse observations, of Elsasser’s interpretation of Davisson’s maxima as interference fringes of de Broglie waves, and of Powell’s observations of the first Yukawa mesons.

All these discoveries represent corroborations by severe tests - by predictions which were highly improbable in the light of our previous knowledge (previous to the theory which was tested and corroborated). Other important discoveries have also been made while testing a theory, though they did not lead to its corroboration but to its refutation. A recent and important case is the refutation of parity. But Lavoisier’s classical experiments which show that the volume of air decreases while a candle burns in a closed space, or that the weight of burning iron filings increases, do not establish the oxygen theory of combustion; yet they tend to refute the phlogiston theory.

Lavoisier’s experiments were carefully thought out; but even most so-called ‘chance discoveries’ are fundamentally of the same logical structure. For these so-called ‘chance discoveries’ are as a rule refutations of theories which were consciously or unconsciously held: they are made when some of our expectations (based upon these theories) are unexpectedly disappointed. Thus the catalytic property of mercury was discovered when it was accidentally found that in its presence a chemical reaction had been speeded up which had not been expected to be influenced by mercury. But neither Oersted’s nor Röntgen’s nor Becquerel’s nor Fleming’s discovery was really accidental, even though each had accidental components: every one of these men was searching for an effect of the kind he found.

We can even say that some discoveries, such as Columbus’s discovery of America, corroborate one theory (of the spherical earth) while refuting at the same time another (the theory of the size of the earth, and with it, of the nearest way to India); and that they were chance discoveries to the extent to which they contradicted all expectations, and were not consciously undertaken as tests of those theories which they refuted.

V

The stress I am laying upon change in scientific knowledge, upon its growth, or its progressiveness, may to some extent be contrasted with the current ideal of science as an axiomatized deductive system. This ideal has been dominant in European epistemology from Euclid’s Platonizing cosmology (for this is, I believe, what Euclid’s Elements were really intended to be) to that of Newton, and further to the systems of Boscovic, Maxwell, Einstein, Bohr, Schrödinger, and Dirac. It is an epistemology that sees the final task and end of scientific activity in the construction of an axiomatized deductive system.

As opposed to this, I now believe that these most admirable deductive systems should be regarded as stepping stones rather than as ends:6 as important stages on our way to richer, and better testable, scientific knowledge.

Regarded thus as means or stepping stones, they are certainly quite indispensable, for we are bound to develop our theories in the form of deductive systems. This is made unavoidable by the logical strength, by the great informative content, which we have to demand of our theories if they are to be better and better testable. The wealth of their consequences has to be unfolded deductively; for as a rule, a theory cannot be tested except by testing, one by one, some of its more remote consequences; consequences, that is, which cannot immediately be seen upon inspecting it intuitively.

Yet it is not the marvellous deductive unfolding of the system which makes a theory rational or empirical but the fact that we can examine it critically; that is to say, subject it to attempted refutations, including observational tests; and the fact that, in certain cases, a theory may be able to withstand those criticisms and those tests - among them tests under which its predecessors broke down, and sometimes even further and more severe tests. It is in the rational choice of the new theory that the rationality of science lies, rather than in the deductive development of the theory.

Consequently there is little merit in formalizing and elaborating a deductive non-conventional system beyond the requirements of the task of criticizing and testing it, and of comparing it critically with competitors. This critical comparison, though it has, admittedly, some minor conventional and arbitrary aspects, is largely non-conventional, thanks to the criterion of progress. It is this critical procedure which contains both the rational and the empirical elements of science. It contains those choices, those rejections, and those decisions, which show that we have learnt from our mistakes, and thereby added to our scientific knowledge.

VI

Yet perhaps even this picture of science - as a procedure whose rationality consists in the fact that we learn from our mistakes - is not quite good enough. It may still suggest that science progresses from theory to theory and that it consists of a sequence of better and better deductive systems. Yet what I really wish to suggest is that science should be visualized as progressing from problems to problems - to problems of ever increasing depth. For a scientific theory - an explanatory theory - is, if anything, an attempt to solve a scientific problem, that is to say, a problem concerned or connected with the discovery of an explanation.

Admittedly, our expectations, and thus our theories, may precede, historically, even our problems. Yet science starts only with problems. Problems crop up especially when we are disappointed in our expectations, or when our theories involve us in difficulties, in contradictions; and these may arise either within a theory, or between two different theories, or as the result of a clash between our theories and our observations. Moreover, it is only through a problem that we become conscious of holding a theory. It is the problem which challenges us to learn; to advance our knowledge; to experiment; and to observe.

Thus science starts from problems, and not from observations; though observations may give rise to a problem, especially if they are unexpected; that is to say, if they clash with our expectations or theories. The conscious task before the scientist is always the solution of a problem through the construction of a theory which solves the problem; for example, by explaining unexpected and unexplained observations. Yet every worthwhile new theory raises new problems; problems of reconciliation, problems of how to conduct new and previously unthought-of observational tests. And it is mainly through the new problems which it raises that it is fruitful.

Thus we may say that the most lasting contribution to the growth of scientific knowledge that a theory can make are the new problems which it raises, so that we are led back to the view of science and of the growth of knowledge as always starting from, and always ending with, problems - problems of an ever increasing depth, and an ever increasing fertility in suggesting new problems.

I

[In the preceding selection] I have spoken about science, its progress, and its criterion of progress, without even mentioning truth. Perhaps surprisingly, this can be done without falling into pragmatism or instrumentalism: it is perfectly possible to argue in favour of the intuitive satisfactoriness of the criterion of progress in science without ever speaking about the truth of its theories. In fact, before I became acquainted with Tarski’s theory of truth,1 it appeared to me safer to discuss the criterion of progress without getting too deeply involved in the highly controversial problem connected with the use of the word ‘true’.

My attitude at the time was this: although I accepted, as almost everybody does, the objective or absolute or correspondence theory of truth - truth as correspondence with the facts - I preferred to avoid the topic. For it appeared to me hopeless to try to understand clearly this strangely elusive idea of a correspondence between a statement and a fact.

In order to recall why the situation appeared so hopeless we only have to remember, as one example among many, Wittgenstein’s Tractatus with its surprisingly naive picture theory, or projection theory, of truth. In this book a proposition was conceived as a picture or projection of the fact which it was intended to describe and as having the same structure (or ‘form’) as that fact; just as a gramophone record is indeed a picture or a projection of a sound, and shares some of its structural properties.2

Another of these unavailing attempts to explain this correspondence was due to Schlick, who gave a beautifully clear and truly devastating criticism3 of various correspondence theories - including the picture or projection theory - but who unfortunately produced in his turn another one which was no better. He interpreted the correspondence in question as a one-to-one correspondence between our designations and the designated objects, although counterexamples abound (designations applying to many objects, objects designated by many designations) which refute this interpretation.

All this was changed by Tarski’s theory of truth and of the correspondence of a statement with the facts. Tarski’s greatest achievement, and the real significance of his theory for the philosophy of the empirical sciences, is that he rehabilitated the correspondence theory of absolute or objective truth which had become suspect. He vindicated the free use of the intuitive idea of truth as correspondence to the facts. (The view that his theory is applicable only to formalized languages is, I think, mistaken. It is applicable to any consistent and even to a ‘natural’ language, if only we learn from Tarski’s analysis how to dodge its inconsistencies; which means, admittedly, the introduction of some ‘artificiality’ - or caution - into its use.)

I may perhaps explain the way in which Tarski’s theory of truth can be regarded, from an intuitive point of view, as a simple elucidation of the idea of correspondence to the facts. I shall have to stress this almost trivial point because, in spite of its triviality, it will be crucial for my argument.

The highly intuitive character of Tarski’s ideas seems to become more evident (as I have found in teaching) if we first decide explicitly to take ‘truth’ as a synonym for ‘correspondence to the facts’, and then (forgetting all about ‘truth’) proceed to explain the idea of ‘correspondence to the facts’.

Thus we shall first consider the following two formulations, each of which states very simply (in a metalanguage) under what conditions a certain assertion (of an object language) corresponds to the facts.

(1) The statement, or the assertion, ‘Snow is white’ corresponds to the facts if, and only if, snow is, indeed, white.

(2) The statement, or the assertion, ‘Grass is red’ corresponds to the facts if, and only if, grass is, indeed, red.

These formulations (in which the word ‘indeed’ is only inserted for ease, and may be omitted) sound, of course, quite trivial. But it was left to Tarski to discover that, in spite of their apparent triviality, they contained the solution of the problem of explaining correspondence to the facts.

The decisive point is Tarski’s discovery that, in order to speak of correspondence to the facts, as do (1) and (2), we must use a metalanguage in which we can speak about two things: statements; and the facts to which they refer. (Tarski calls such a metalanguage ‘semantical’; a metalanguage in which we can speak about an object language but not about the facts to which it refers is called ‘syntactical’.) Once the need for a (semantical) metalanguage is realized, everything becomes clear. (Note that while (3) ‘“John called” is true’ is essentially a statement belonging to such a metalanguage, (4) ‘It is true that John called’ may belong to the same language as ‘John called’. Thus the phrase ‘It is true that’ - which, like double negation, is logically redundant - differs widely from the metalinguistic predicate ‘is true’. The latter is needed for general remarks such as, ‘If the conclusion is not true, the premisses cannot all be true’ or ‘John once made a true statement’.)
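Instances such as (1) and (2) all fit a single pattern - what Tarski called Convention T. Written schematically (a sketch, where X is a metalinguistic name of a sentence and p is that sentence itself, or its translation into the metalanguage):

```latex
% Schema (T): an adequate definition of truth must entail every
% instance of
%   X is true  if, and only if,  p
% where X names a sentence of the object language and p is its
% translation into the metalanguage. For example:
\[
  \text{`Snow is white' is true} \;\Longleftrightarrow\; \text{snow is white.}
\]
```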

I have said that Schlick’s theory was mistaken, yet I think that certain comments he made (loc. cit.) about his own theory throw some light on Tarski’s. For Schlick says that the problem of truth shared the fate of some others whose solutions were not easily seen because they were mistakenly supposed to lie on a very deep level, while actually they were fairly plain and, at first sight, unimpressive. Tarski’s solution may well appear unimpressive at first sight. Yet its fertility and its power are impressive indeed.

II

Thanks to Tarski’s work, the idea of objective or absolute truth - that is truth as correspondence with the facts - appears to be accepted today with confidence by all who understand it. The difficulties in understanding it seem to have two sources: first, the combination of an extremely simple intuitive idea with a certain amount of complexity in the execution of the technical programme to which it gives rise; secondly, the widespread but mistaken dogma that a satisfactory theory of truth would have to be a theory of true belief - of well-founded, or rational belief. Indeed, the three rivals of the correspondence theory of truth - the coherence theory which mistakes consistency for truth, the evidence theory which mistakes ‘known to be true’ for ‘true’, and the pragmatic or instrumentalist theory which mistakes usefulness for truth - these are all subjectivist (or ‘epistemic’) theories of truth, in contradistinction to Tarski’s objectivist (or ‘metalogical’) theory. They are subjectivist in the sense that they all stem from the fundamental subjectivist position which can conceive of knowledge only as a special kind of mental state, or as a disposition, or as a special kind of belief, characterized, for example, by its history or by its relation to other beliefs.

If we start from our subjective experience of believing, and thus look upon knowledge as a special kind of belief, then we may indeed have to look upon truth - that is, true knowledge - as some even more special kind of belief: as one that is well founded or justified. This would mean that there should be some more or less effective criterion, if only a partial one, of well-foundedness; some symptom by which to differentiate the experience of a well-founded belief from other experiences of belief. It can be shown that all subjectivist theories of truth aim at such a criterion: they try to define truth in terms of the sources or origins of our beliefs [see selection 3 above], or in terms of our operations of verification, or of some set of rules of acceptance, or simply in terms of the quality of our subjective convictions. They all say, more or less, that truth is what we are justified in believing or in accepting, in accordance with certain rules or criteria, of origins or sources of our knowledge, or of reliability, or stability, or biological success, or strength of conviction, or inability to think otherwise.

The objectivist theory of truth leads to a very different attitude. This may be seen from the fact that it allows us to make assertions such as the following: a theory may be true even though nobody believes it, and even though we have no reason for accepting it, or for believing that it is true; and another theory may be false, although we have comparatively good reasons for accepting it.

Clearly, these assertions would appear to be self-contradictory from the point of view of any subjectivist or epistemic theory of truth. But within the objectivist theory, they are not only consistent, but quite obviously true.

A similar assertion which the objectivist correspondence theory would make quite natural is this: even if we hit upon a true theory, we shall as a rule be merely guessing, and it may well be impossible for us to know that it is true.

An assertion like this was made, apparently for the first time, by Xenophanes who lived 2,500 years ago [see p.31 above]; which shows that the objectivist theory of truth is very old indeed - antedating Aristotle, who also held it. But only with Tarski’s work has the suspicion been removed that the objectivist theory of truth as correspondence with the facts may be either self-contradictory (because of the paradox of the liar), or empty (as Ramsey suggested), or barren, or at the very least redundant, in the sense that we can do without it (as I once thought myself).

In my theory of scientific progress I might perhaps do without it, up to a point. Since Tarski, however, I no longer see any reason why I should try to avoid it. And if we wish to elucidate the difference between pure and applied science, between the search for knowledge and the search for power or for powerful instruments, then we cannot do without it. For the difference is that, in the search for knowledge, we are out to find true theories, or at least theories which are nearer than others to the truth - which correspond better to the facts; whereas in the search for theories that are merely powerful instruments for certain purposes, we are, in many cases, quite well served by theories which are known to be false.4

So one great advantage of the theory of objective or absolute truth is that it allows us to say - with Xenophanes - that we search for truth, but may not know when we have found it; that we have no criterion of truth, but are nevertheless guided by the idea of truth as a regulative principle (as Kant or Peirce might have said); and that, though there are no general criteria by which we can recognize truth - except perhaps tautological truth - there are something like criteria of progress towards the truth (as I shall explain presently).

The status of truth in the objective sense, as correspondence to the facts, and its role as a regulative principle, may be compared to that of a mountain peak which is permanently, or almost permanently, wrapped in clouds. The climber may not merely have difficulties in getting there - he may not know when he gets there, because he may be unable to distinguish, in the clouds, between the main summit and some subsidiary peak. Yet this does not affect the objective existence of the summit, and if the climber tells us ‘I have some doubts whether I reached the actual summit’, then he does, by implication, recognize the objective existence of the summit. The very idea of error, or of doubt (in its normal straightforward sense) implies the idea of an objective truth which we may fail to reach.

Though it may be impossible for the climber ever to make sure that he has reached the summit, it will often be easy for him to realize that he has not reached it (or not yet reached it); for example, when he is turned back by an overhanging wall. Similarly, there will be cases when we are quite sure that we have not reached the truth. Thus while coherence, or consistency, is no criterion of truth, simply because even demonstrably consistent systems may be false in fact, incoherence or inconsistency do establish falsity; so, if we are lucky, we may discover inconsistencies and use them to establish the falsity of some of our theories.5
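The asymmetry invoked here rests on an elementary logical fact: an inconsistent system entails every statement whatever, true and false alike, and so cannot be true. The standard derivation (ex falso quodlibet) can be sketched as:

```latex
% From a contradiction, any statement b whatever follows:
%   1. a            (from a and not-a)
%   2. a or b       (1, or-introduction)
%   3. not-a        (from a and not-a)
%   4. b            (2, 3, disjunctive syllogism)
\[
  a,\ \lnot a \;\vdash\; a \lor b
  \qquad\text{and}\qquad
  a \lor b,\ \lnot a \;\vdash\; b.
\]
```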

In 1944, when Tarski published the first English outline of his investigations into the theory of truth (which he had published in Poland in 1933), few philosophers would have dared to make assertions like those of Xenophanes; and it is interesting that the volume in which Tarski’s paper was published also contained two subjectivist papers on truth.6

Though things have improved since then, subjectivism is still rampant in the philosophy of science, and especially in the field of probability theory. The subjectivist theory of probability, which interprets degrees of probability as degrees of rational belief, stems directly from the subjectivist approach to truth - especially from the coherence theory. Yet it is still embraced by philosophers who have accepted Tarski’s theory of truth. At least some of them, I suspect, have turned to probability theory in the hope that it would give them what they had originally expected from a subjectivist or epistemological theory of the attainment of truth through verification; that is, a theory of rational and justifiable belief, based upon observed instances.7

It is an awkward point in all these subjectivist theories that they are irrefutable (in the sense that they can too easily evade any criticism). For it is always possible to uphold the view that everything we say about the world, or everything we print about logarithms, should be replaced by a belief-statement. Thus we may replace the statement ‘Snow is white’ by ‘I believe that snow is white’ or perhaps even by ‘In the light of all the available evidence I believe that it is rational to believe that snow is white’. The possibility of replacing any assertion about the objective world by one of these subjectivist circumlocutions is trivial, though in the case of the assertions expressed in logarithm tables - which might well be produced by machines - somewhat unconvincing. (It may be mentioned in passing that the subjectivist interpretation of logical probability links these subjectivist replacements, exactly as in the case of the coherence theory of truth, with an approach which, on closer analysis, turns out to be essentially ‘syntactic’ rather than ‘semantic’ - although it can of course always be presented within the framework of a ‘semantical system’.)

It may be useful to sum up the relationships between the objectivist and subjectivist theories of scientific knowledge with the help of a little table:

SUBJECTIVIST OR PSYCHOLOGICAL OR        OBJECTIVIST OR LOGICAL OR
EPISTEMOLOGICAL THEORIES                ONTOLOGICAL THEORIES

truth as property of our state of       truth as correspondence
mind - or knowledge or belief           with the facts

subjective probability (degree of       objective probability (inherent in
rational belief based upon our          the situation, and testable by
total knowledge)                        statistical tests)

lack of knowledge                       objective randomness
                                        (statistically testable)

lack of knowledge                       equiprobability (physical or
                                        situational symmetry)

In all these cases I am inclined to say not only that these two approaches should be distinguished, but also that the subjectivist approach should be discarded as a lapse, as based on a mistake - though perhaps a tempting mistake. There is, however, a similar table in which the epistemological (right hand) side is not based on a mistake:

truth                                   conjecture

testability                             empirical test

explanatory or predictive power         degree of corroboration (that is,
                                        report of the results of tests)

verisimilitude

III

Like many other philosophers I am at times inclined to classify philosophers as belonging to two main groups - those with whom I disagree, and those who agree with me. I also call them the verificationists or the justificationist philosophers of knowledge (or of belief), and the falsificationists or critical philosophers of knowledge (or of conjectures). I may mention in passing a third group with whom I also disagree. They may be called the disappointed justificationists - the irrationalists and sceptics.

The members of the first group - the verificationists or justificationists - hold, roughly speaking, that whatever cannot be supported by positive reasons is unworthy of being believed, or even of being taken into serious consideration.

On the other hand, the members of the second group - the falsificationists - say, roughly speaking, that what cannot (at present) in principle be overthrown by criticism is (at present) unworthy of being seriously considered; while what can in principle be so overthrown and yet resists all our critical efforts to do so may quite possibly be false, but is at any rate not unworthy of being seriously considered and perhaps even of being believed - though only tentatively.

Verificationists, I admit, are eager to uphold that most important tradition of rationalism - the fight of reason against superstition and arbitrary authority. For they demand that we should accept a belief only if it can be justified by positive evidence; that is to say, shown to be true, or, at least, to be highly probable. In other words, they demand that we should accept a belief only if it can be verified, or probabilistically confirmed.

Falsificationists (the group of fallibilists to which I belong) believe - as most irrationalists also believe - that they have discovered logical arguments which show that the programme of the first group cannot be carried out: that we can never give positive reasons which justify the belief that a theory is true. But, unlike irrationalists, we falsificationists believe that we have also discovered a way to realize the old ideal of distinguishing rational science from various forms of superstition, in spite of the breakdown of the original inductivist or justificationist programme. We hold that this ideal can be realized, very simply, by recognizing that the rationality of science lies not in its habit of appealing to empirical evidence in support of its dogmas - astrologers do so too - but solely in the critical approach - in an attitude which, of course, involves the critical use, among other arguments, of empirical evidence (especially in refutations). For us, therefore, science has nothing to do with the quest for certainty or probability or reliability. We are not interested in establishing scientific theories as secure, or certain, or probable. Conscious of our fallibility we are interested only in criticizing them and testing them, in the hope of finding out where we are mistaken; of learning from our mistakes; and, if we are lucky, of proceeding to better theories.

Considering their view about the positive or negative function of argument in science, the first group - the justificationists - may be also nicknamed the ‘positivists’ and the second - the group to which I belong - the critics or the ‘negativists’. These are, of course, mere nicknames. Yet they may perhaps suggest some of the reasons why some people believe that only the positivists or verificationists are seriously interested in truth and in the search for truth, while we, the critics or negativists, are flippant about the search for truth, and addicted to barren and destructive criticism and to the propounding of views which are clearly paradoxical.

This mistaken picture of our views seems to result largely from the adoption of a justificationist programme, and of the mistaken subjectivist approach to truth which I have described.

For the fact is that we too see science as the search for truth, and that, at least since Tarski, we are no longer afraid to say so. Indeed, it is only with respect to this aim, the discovery of truth, that we can say that though we are fallible, we hope to learn from our mistakes. It is only the idea of truth which allows us to speak sensibly of mistakes and of rational criticism, and which makes rational discussion possible - that is to say, critical discussion in search of mistakes with the serious purpose of eliminating as many of these mistakes as we can, in order to get nearer the truth. Thus the very idea of error - and of fallibility - involves the idea of an objective truth as the standard of which we may fall short. (It is in this sense that the idea of truth is a regulative idea.)

Thus we accept the idea that the task of science is the search for truth, that is, for true theories (even though as Xenophanes pointed out we may never get them, or know them as true if we get them). Yet we also stress that truth is not the only aim of science. We want more than mere truth; what we look for is interesting truth - truth which is hard to come by. And in the natural sciences (as distinct from mathematics) what we look for is truth which has a high degree of explanatory power, in a sense which implies that it is logically improbable truth.

For it is clear, first of all, that we do not merely want truth - we want more truth, and new truth. We are not content with ‘twice two equals four’, even though it is true: we do not resort to reciting the multiplication table if we are faced with a difficult problem in topology or in physics. Mere truth is not enough; what we look for are answers to our problems. The point has been well put by the German humorist and poet Busch, of Max-and-Moritz fame, in a little nursery rhyme - I mean a rhyme for the epistemological nursery:8

Twice two equals four: ’tis true,
But too empty, and too trite.
What I look for is a clue
To some matters not so light.

Only if it is an answer to a problem - a difficult, a fertile problem, a problem of some depth - does a truth, or a conjecture about the truth, become relevant to science. This is so in pure mathematics, and it is so in the natural sciences. And in the latter, we have something like a logical measure of the depth or significance of the problem in the increase of logical improbability or explanatory power of the proposed new answer, as compared with the best theory or conjecture previously proposed in the field. This logical measure is essentially the same thing which I have described above as the logical criterion of potential satisfactoriness and of progress.

My description of this situation might tempt some people to say that truth does not, after all, play a very big role with us negativists even as a regulative principle. There can be no doubt, they will say, that negativists (like myself) much prefer an attempt to solve an interesting problem by a bold conjecture, even if it soon turns out to be false, to any recital of a sequence of true but uninteresting assertions. Thus it does not seem, after all, as if we negativists had much use for the idea of truth. Our ideas of scientific progress and of attempted problem-solving do not seem very closely related to it.

This, I believe, would give quite a mistaken impression of the attitude of our group. Call us negativists, or what you like: but you should realize that we are as much interested in truth as anybody - for example, as the members of a court of justice. When the judge tells a witness that he should speak ‘The truth, the whole truth, and nothing but the truth’, then what he looks for is as much of the relevant truth as the witness may be able to offer. A witness who likes to wander off into irrelevancies is unsatisfactory as a witness, even though these irrelevancies may be truisms, and thus part of ‘the whole truth’. It is quite obvious that what the judge - or anybody else - wants when he asks for ‘the whole truth’ is as much interesting and relevant true information as can be got; and many perfectly candid witnesses have failed to disclose some important information simply because they were unaware of its relevance to the case.

Thus when we stress, with Busch, that we are not interested in mere truth but in interesting and relevant truth, then, I contend, we only emphasize a point which everybody accepts. And if we are interested in bold conjectures, even if these should soon turn out to be false, then this interest is due to our methodological conviction that only with the help of such bold conjectures can we hope to discover interesting and relevant truth.

There is a point here which, I suggest, it is the particular task of the logician to analyse. ‘Interest’, or ‘relevance’, in the sense here intended, can be objectively analysed; it is relative to our problems; and it depends on the explanatory power, and thus on the content or improbability, of the information. The measures alluded to earlier are precisely such measures as take account of some relative content of the information - its content relative to a hypothesis or to a problem.

I can therefore gladly admit that falsificationists like myself much prefer an attempt to solve an interesting problem by a bold conjecture, even (and especially) if it soon turns out to be false, to any recital of a sequence of irrelevant truisms. We prefer this because we believe that this is the way in which we can learn from our mistakes; and that in finding that our conjecture was false, we shall have learnt much about the truth, and shall have got nearer to the truth.

I therefore hold that both ideas - the idea of truth, in the sense of correspondence with facts, and the idea of content (which may be measured by the same measure as testability) - play about equally important roles in our considerations, and that both can shed much light on the idea of progress in science.

IV

Looking at the progress of scientific knowledge, many people have been moved to say that even though we do not know how near we are to or how far we are from the truth, we can, and often do, approach more and more closely to the truth. I myself have sometimes said such things in the past, but always with a twinge of bad conscience. Not that I believe in being over-fussy about what we say: as long as we speak as clearly as we can, yet do not pretend that what we are saying is clearer than it is, and as long as we do not try to derive apparently exact consequences from dubious or vague premisses, there is no harm whatever in occasional vagueness, or in voicing every now and then our feelings and general intuitive impressions about things. Yet whenever I used to write, or to say, something about science as getting nearer to the truth, or as a kind of approach to truth, I felt that I really ought to be writing ‘Truth’, with a capital ‘T’, in order to make quite clear that a vague and highly metaphysical notion was involved here, in contradistinction to Tarski’s ‘truth’ which we can with a clear conscience write in the ordinary way with small letters.9

It was only quite recently that I set myself to consider whether the idea of truth involved here was really so dangerously vague and metaphysical after all. Almost at once I found that it was not, and that there was no particular difficulty in applying Tarski’s fundamental idea to it.

For there is no reason whatever why we should not say that one theory corresponds better to the facts than another. This simple initial step makes everything clear: there really is no barrier here between what at first sight appeared to be Truth with a capital ‘T’ and truth in a Tarskian sense.

But can we really speak about better correspondence? Are there such things as degrees of truth? Is it not dangerously misleading to talk as if Tarskian truth were located somewhere in a kind of metrical or at least topological space so that we can sensibly say of two theories - say an earlier theory t1 and a later theory t2 - that t2 has superseded t1, or progressed beyond t1, by approaching more closely to the truth than t1?

I do not think that this kind of talk is at all misleading. On the contrary, I believe that we simply cannot do without something like this idea of a better or worse approximation to truth. For there is no doubt whatever that we can say, and often want to say, of a theory t2 that it corresponds better to the facts, or that as far as we know it seems to correspond better to the facts, than another theory t1.

I shall give here a somewhat unsystematic list of six types of case in which we should be inclined to say of a theory t1 that it is superseded by t2 in the sense that t2 seems - as far as we know - to correspond better to the facts than t1, in some sense or other.

(1)    t2 makes more precise assertions than t1, and these more precise assertions stand up to more precise tests.

(2)    t2 takes account of, and explains, more facts than t1 (which will include for example the above case that, other things being equal, t2’s assertions are more precise).

(3)    t2 describes, or explains, the facts in more detail than t1.

(4)    t2 has passed tests which t1 has failed to pass.

(5)    t2 has suggested new experimental tests, not considered before t2 was designed (and not suggested by t1, and perhaps not even applicable to t1); and t2 has passed these tests.

(6)    t2 has unified or connected various hitherto unrelated problems.

If we reflect upon this list, then we can see that the contents of the theories t1 and t2 play an important role in it. (It will be remembered that the logical content of a statement or a theory a is the class of all statements which follow logically from a, while I have defined the empirical content of a as the class of all basic statements which contradict a.10) For in our list of six cases, the empirical content of theory t2 exceeds that of theory t1.

This suggests that we combine here the ideas of truth and of content into one - the idea of a degree of better (or worse) correspondence to truth or of greater (or less) likeness or similarity to truth; or to use a term already mentioned above (in contradistinction to probability) the idea of (degrees of) verisimilitude.

It should be noted that the idea that every statement or theory is not only either true or false but has, independently of its truth value, some degree of verisimilitude, does not give rise to any multivalued logic - that is, to a logical system with more than two truth values, true and false; though some of the things the defenders of multivalued logic are hankering after seem to be realized by the theory of verisimilitude (and related theories).

V

Once I had seen the problem it did not take me long to get to this point. But strangely enough, it took me a long time to put two and two together, and to proceed from here to a very simple definition of verisimilitude in terms of truth and of content. (We can use either logical or empirical content, and thus obtain two closely related ideas of verisimilitude which however merge into one if we consider here only empirical theories, or empirical aspects of theories.)

Let us consider the content of a statement a; that is, the class of all the logical consequences of a. If a is true, then this class can consist only of true statements, because truth is always transmitted from a premiss to all its consequences. But if a is false, then its content will always consist of both true and false consequences. (Example: ‘It always rains on Sundays’ is false, but its consequence that it rained last Sunday happens to be true.) Thus whether a statement is true or false, there may be more truth, or less truth, in what it says, according to whether its content consists of a greater or a lesser number of true statements.

Let us call the class of the true logical consequences of a the ‘truth content’ of a (a German term ‘Wahrheitsgehalt’ - reminiscent of the phrase ‘there is truth in what you say’ - of which ‘truth content’ may be said to be a translation, has been intuitively used for a long time); and let us call the class of the false consequences of a - but only these - the ‘falsity content’ of a. (The ‘falsity content’ is not, strictly speaking, a ‘content’, because it does not contain any of the true consequences of the false statements which form its elements. Yet it is possible to define its measure with the help of two contents.) These terms are precisely as objective as the terms ‘true’ or ‘false’ and ‘content’ themselves. Now we can say:

Assuming that the truth content and the falsity content of two theories t1 and t2 are comparable, we can say that t2 is more closely similar to the truth, or corresponds better to the facts, than t1, if and only if either

(1)    the truth content but not the falsity content of t2 exceeds that of t1, or

(2)    the falsity content of t1, but not its truth content, exceeds that of t2.
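This comparative definition can be sketched in code by letting finite sets of statement labels stand in for the (in reality infinite) consequence classes. Everything here - the function names, the toy statements ‘A’, ‘B’, ‘X’, and the comparability assumption that one content always includes the other - is an illustrative invention, not part of the text:

```python
# A toy sketch of the comparative definition of verisimilitude.
# Consequence classes are really infinite; finite sets of statement
# labels are a deliberate simplification.

def truth_content(consequences, truths):
    """The true logical consequences of a theory."""
    return consequences & truths

def falsity_content(consequences, truths):
    """The false logical consequences of a theory."""
    return consequences - truths

def corresponds_better(c2, c1, truths):
    """t2 corresponds better to the facts than t1 iff either
    (1) t2's truth content properly exceeds t1's while its falsity
        content does not, or
    (2) t1's falsity content properly exceeds t2's while t1's truth
        content does not (comparability is assumed throughout)."""
    t2T, t2F = truth_content(c2, truths), falsity_content(c2, truths)
    t1T, t1F = truth_content(c1, truths), falsity_content(c1, truths)
    return (t1T < t2T and t2F <= t1F) or (t2F < t1F and t1T <= t2T)

truths = {"A", "B", "C"}   # the true statements of this toy world
t1 = {"A", "X"}            # one true and one false consequence
t2 = {"A", "B", "X"}       # adds a true consequence, no new falsehood
print(corresponds_better(t2, t1, truths))  # True
```

If t2 had instead added a further false consequence, neither condition would hold and the comparison would fail, exactly as the definition requires.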

If we work with the (perhaps fictitious) assumption that the content and truth content of a theory a are in principle measurable, then we can go slightly beyond this definition and can define Vs(a), that is to say, a measure of the verisimilitude or truthlikeness of a. The simplest definition will be

Vs(a) = CtT(a) - CtF(a)

where CtT(a) is a measure of the truth content of a, and CtF(a) is a measure of the falsity content of a. (A slightly more complicated but in some respects preferable definition can also be formulated.11) It is obvious that Vs(a) satisfies our two demands, according to which Vs(a) should increase (1) if CtT(a) increases while CtF(a) does not, or (2) if CtF(a) decreases while CtT(a) does not.
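Under the same toy assumption of a finite statement space, the measure Vs(a) = CtT(a) - CtF(a) can be sketched by measuring each content as the fraction of that space it occupies. The function, the statement labels, and the fractional measure are all illustrative stand-ins for Popper’s ‘in principle measurable’ contents:

```python
# A minimal numeric sketch of Vs(a) = CtT(a) - CtF(a), assuming a
# finite space of statements so each content is a simple fraction.
# All names and numbers are illustrative only.

def vs(consequences, truths, universe):
    ct_t = len(consequences & truths) / len(universe)   # CtT(a)
    ct_f = len(consequences - truths) / len(universe)   # CtF(a)
    return ct_t - ct_f

universe = {"A", "B", "C", "X", "Y"}   # all statements considered
truths = {"A", "B", "C"}               # the true ones among them

# Demand (1): truth content grows, falsity content does not.
print(vs({"A", "B"}, truths, universe) > vs({"A"}, truths, universe))  # True

# Demand (2): falsity content shrinks, truth content does not.
print(vs({"A"}, truths, universe) > vs({"A", "X"}, truths, universe))  # True
```

The two printed checks correspond directly to the two demands stated above: adding a true consequence raises Vs, and dropping a false one does too.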

VI

Three non-technical points may be made. The first is that our idea of approximation to truth, or of verisimilitude, has the same objective character and the same ideal or regulative character as the idea of objective or absolute truth. It is not an epistemological or an epistemic idea - no more than truth or content. (In Tarski’s terminology, it is obviously a ‘semantic’ idea, like truth, or like logical consequence, and, therefore, content.) Accordingly, we have here again to distinguish between the question ‘What do you intend to say if you say that the theory t2 has a higher degree of verisimilitude than the theory t1?’, and the question ‘How do you know that the theory t2 has a higher degree of verisimilitude than the theory t1?’

We have so far answered only the first of these questions. The answer to the second question depends on it, and is exactly analogous to the answer to the analogous (absolute rather than comparative) question about truth: ‘I do not know - I only guess. But I can examine my guess critically, and if it withstands severe criticism, then this fact may be taken as a good critical reason in favour of it.’

My second point is this. Verisimilitude is so defined that maximum verisimilitude would be achieved only by a theory which is not only true, but completely comprehensively true: if it corresponds to all facts, as it were, and, of course, only to real facts. This is of course a much more remote and unattainable ideal than a mere correspondence with some facts (as in, say, ‘Snow is usually white’).

But all this holds only for the maximum degree of verisimilitude, and not for the comparison of theories with respect to their degree of verisimilitude. This comparative use of the idea is its main point; and the idea of a higher or lower degree of verisimilitude seems less remote and more applicable and therefore perhaps more important for the analysis of scientific methods than the - in itself much more fundamental - idea of absolute truth itself.

This leads me to my third point. Let me first say that I do not suggest that the explicit introduction of the idea of verisimilitude will lead to any changes in the theory of method. On the contrary, I think that my theory of testability or corroboration by empirical tests is the methodological theory that gives point to this new metalogical idea. The only improvement is one of clarification. Thus I have often said that we prefer the theory t2 which has passed certain severe tests to the theory t1 which has failed these tests, because a false theory is certainly worse than one which, for all we know, may be true.

To this we can now add that after t2 has been refuted in its turn, we can still say that it is better than t1, for although both have been shown to be false, the fact that t2 has withstood tests which t1 did not pass may be a good indication that the falsity content of t1 exceeds that of t2 while its truth content does not. Thus we may still give preference to t2, even after its falsification, because we have reason to think that it agrees better with the facts than did t1.

All cases where we accept t2 because of experiments which were crucial between t2 and t1 seem to be of this kind, and especially all cases where the experiments were found by trying to think out, with the help of t2, cases where t2 leads to other results than did t1. Thus Newton’s theory allowed us to predict some deviations from Kepler’s laws. Its success in this field established that it did not fail in cases which refuted Kepler’s: at least the now known falsity content of Kepler’s theory was not part of Newton’s, while it was pretty clear that the truth content could not have shrunk, since Kepler’s theory followed from Newton’s as a ‘first approximation’.

Similarly, a theory t2 which is more precise than t1 can now be shown to have - always provided its falsity content does not exceed that of t1 - a higher degree of verisimilitude than t1. The same will hold for t2 whose numerical assertions, though false, come nearer to the true numerical values than those of t1.

Ultimately, the idea of verisimilitude is most important in cases where we know that we have to work with theories which are at best approximations - that is to say, theories of which we actually know that they cannot be true. (This is often the case in the social sciences.) In these cases we can still speak of better or worse approximations to the truth (and we therefore do not need to interpret these cases in an instrumentalist sense).

VII

It always remains possible, of course, that we shall make mistakes in our relative appraisal of two theories, and the appraisal will often be a controversial matter. This point can hardly be overemphasized. Yet it is also important that in principle, and as long as there are no revolutionary changes in our background knowledge, the relative appraisal of our two theories, t1 and t2, will remain stable. More particularly, our preferences need not change, as we have seen, if we eventually refute the better of the two theories. Newton’s dynamics, for example, even though we may regard it as refuted, has of course maintained its superiority over Kepler’s and Galileo’s theories. The reason is its greater content or explanatory power. Newton’s theory continues to explain more facts than did the others; to explain them with greater precision; and to unify the previously unconnected problems of celestial and terrestrial mechanics. The reason for the stability of relative appraisals such as these is quite simple: the logical relation between the theories is of such a character that, first of all, there exist with respect to them those crucial experiments, and these, when carried out, went against Newton’s predecessors. And secondly, it is of such a character that the later refutations of Newton’s theory could not support the older theories: they either did not affect them, or (as with the perihelion motion of Mercury) they could be claimed to refute the predecessors also.

I hope that I have explained the idea of better agreement with the facts, or of degrees of verisimilitude, sufficiently clearly for the purpose of this brief survey.

In this paper, I propose briefly to put forth and to explain the following theses, and to indicate the manner of their defence.

(1)    The solution of the problem of interpreting probability theory is fundamental for the interpretation of quantum theory; for quantum theory is a probabilistic theory.

(2)    The idea of a statistical interpretation is correct, but is lacking in clarity.

(3)    As a consequence of this lack of clarity, the usual interpretation of probability in physics oscillates between two extremes: an objectivist purely statistical interpretation and a subjectivist interpretation in terms of our incomplete knowledge, or of the available information.

(4)    In the orthodox Copenhagen interpretation of quantum theory we find the same oscillation between an objectivist and subjectivist interpretation: the famous intrusion of the observer into physics.

(5)    As opposed to all this, a revised or reformed statistical interpretation is here proposed. It is called the propensity interpretation of probability.

(6)    The propensity interpretation is a purely objectivist interpretation. It eliminates the oscillation between objectivist and subjectivist interpretations, and with it the intrusion of the subject into physics.

(7)    The idea of propensities is ‘metaphysical’, in exactly the same sense as forces or fields of forces are metaphysical.

(8)    It is also ‘metaphysical’ in another sense: in the sense of providing a coherent programme for physical research.

These are my theses. I begin by explaining what I call the propensity interpretation of probability theory.1