4

SCIENCE: PROBLEMS, AIMS, RESPONSIBILITIES

I

The intellectual history of man has its depressing as well as its exhilarating aspects. For one may well look upon it as a history of prejudice and dogma, tenaciously held, and often combined with intolerance and fanaticism. One may even describe it as a history of spells of religious or quasi-religious frenzy. It should be remembered, in this context, that most of our great destructive wars have been religious or ideological wars – with the notable exception, perhaps, of the wars of Genghis Khan, who seems to have been a model of religious toleration.

Yet even the sad and depressing picture of religious wars has its brighter side. It is an encouraging fact that countless men, from ancient to modern times, have been ready to live and to die for their convictions, for ideas – ideas which they believed to be true.

Man, we may say, appears to be not so much a rational animal as an ideological animal.

The history of science, even of modern science since the Renaissance, and especially since Francis Bacon, may be taken as an illustration. The movement inaugurated by Bacon was a religious or semi-religious movement, and Bacon was the prophet of the secularized religion of science. He replaced the name ‘God’ by the name ‘Nature’, but he left almost everything else unchanged. Theology, the science of God, was replaced by the science of Nature. The laws of God were replaced by the laws of Nature. God’s power was replaced by the forces of Nature. And at a later date, God’s design and God’s judgements were replaced by natural selection. Theological determinism was replaced by scientific determinism, and the book of fate by the predictability of Nature. In short, God’s omnipotence and omniscience were replaced by the omnipotence of nature and by the virtual omniscience of natural science.

It was also in this period that the phrase ‘deus sive natura’ – which may perhaps be translated as ‘God, or what is the same, nature’ – was almost casually used by the physicist and philosopher Spinoza.

According to Bacon, nature, like God, was present in all things, from the greatest to the least. And it was the aim or the task of the new science of nature to determine the nature of all things, or, as he sometimes said, the essence of all things. This was possible because the Book of Nature was an open book. All that was needed was to approach the goddess Nature with a pure mind, free of prejudices, and she would readily yield her secrets. Give me a couple of years free from other duties, Bacon somewhat unguardedly exclaimed in a moment of enthusiasm, and I shall complete the task – the task of copying faithfully the whole Book of Nature, and of writing the new science.

Unfortunately, Bacon did not get the research grant for which he was looking. The great Foundations did not yet exist and as a consequence, sad to say, the science of nature is still unfinished.

Bacon’s naïve and amateurish optimism was a source of encouragement and inspiration for the great scientific amateurs who founded the Royal Society, modelling it after the central research institution envisaged by Bacon in his New Atlantis.

Bacon was the prophet, the great inspirer of the new religion of science, but he was not a scientist. Yet the inspiration and the influence of his new theology of nature were at least as great and as lasting as those of his contemporary Galileo, who might be described as the true founder of modern experimental science. More especially, Bacon’s naïve view concerning the essence of natural science, and the distinction or demarcation drawn by him between the new natural science on the one hand and the old theology and philosophy on the other, became the main dogma of the new religion of science. It is a dogma to which scientists as well as philosophers have tenaciously adhered down to our own day. And it is only in recent years that some scientists have become willing to listen to those who criticize this dogma.

The Baconian dogma I have in mind asserts the supreme merits of observation and the viciousness of theorizing speculation. I shall call this dogma, briefly, by the name ‘observationism’.

According to Bacon, the nature or essence of the method of the new science of nature, the method which distinguishes and demarcates it from the old theology and from metaphysical philosophy, can be explained as follows:

Man is impatient. He likes quick results. So he jumps to conclusions.

This is the old, the vicious, the speculative method. Bacon called it ‘the method of anticipations of the mind’. It is a false method, for it leads to prejudices. (The term ‘prejudice’ was coined by Bacon.)

Bacon’s new method, which he recommends as the true way to knowledge, and also as the way to power, is this. We must purge our minds of all prejudices, of all preconceived ideas, of all theories – of all those superstitions, or ‘idols’, which religion, philosophy, education, or tradition may have imparted to us. When we have thus purged our minds of prejudices and impurities, we may approach nature. And nature will not mislead us. For it is not nature that misleads us but only our own prejudices, the impurities of our own minds. If our minds are pure, we shall be able to read the Book of Nature without distorting it: we have only to open our eyes, to observe things patiently, and to write down our observations carefully, without misrepresenting or distorting them, and the nature or essence of the thing observed will be revealed to us.

This is Bacon’s method of observation and induction. To put it in a nutshell: pure untainted observation is good, and pure observation cannot err; speculation and theories are bad, and they are the source of all error. More especially, they make us misread the Book of Nature – that is, misinterpret our observations.

Bacon’s observationism and his hostility to all forms of theoretical thought were revolutionary, and were felt to be so. They became the battle cry of the new secularized religion of science, and its most cherished dogma. This dogma had an almost unbelievable influence upon both the practice and the theory of science, and this influence is still strong in our own day.

In order to show that this dogma did not express the general belief of scientists contemporary with Bacon, I shall once again briefly contrast Bacon with Galileo.

Bacon, the philosopher of science, was, quite consistently, an enemy of the Copernican hypothesis. Don’t theorize, he said, but open your eyes and observe without prejudice, and you cannot doubt that the Sun moves and that the Earth is at rest.

Galileo, the great scientist and defender of the Copernican ‘System of the World’, paid homage to Aristarchus and Copernicus precisely because they were bold enough to produce speculative theories which not only go beyond, but also contradict, all that we believe ourselves to know from observation.

I may perhaps quote a passage from Galileo’s Dialogue Concerning the Two Chief World Systems:1

I shall never be able to express strongly enough my admiration for the greatness of mind of these men who conceived this [heliocentric] hypothesis and held it to be true. In violent opposition to the evidence of their own senses and by sheer force of intellect, they preferred what reason told them to that which sense experience plainly showed them … I repeat, there is no limit to my astonishment when I reflect how Aristarchus and Copernicus were able to let reason conquer sense, and in defiance of sense make reason the mistress of their belief.

This is Galileo’s testimony to the way in which bold and purely speculative scientific theories may free us from our prejudices. Bacon, on the contrary, held that these new theories were speculative prejudices, that theoretical thinking always creates prejudices, that only its abandonment can help us to free ourselves from prejudices, and that thought can never achieve this.

Before turning to criticize the Baconian dogma, and to replace it by a very different view of experimental as well as theoretical science, I wish to add a final remark about Bacon.

Bacon, I suggest, was not a scientist but a prophet. He was a prophet not only in the sense that he propagated the idea of an experimental science, but also in the sense that he foresaw, and inspired, the industrial revolution. He had the vision of a new age, of an industrial age which would also be an age of science and of technology. Referring to the accidental discovery of gunpowder, and of silk, he spoke of the possibility of a systematic scientific search for other useful substances and materials, and of a new society in which, through science, men would find salvation from misery and poverty. Thus the new religion of science held a new promise of heaven on earth – of a better world which, with the help of new knowledge, men would create for themselves. Knowledge is power, Bacon said, and this idea, this dangerous idea, of man’s mastery over nature – of men like gods – has been one of the most influential of the ideas through which the religion of science has transformed our world.

II

I shall now very briefly criticize Bacon’s anti-theoretical dogma and his view of science, and then turn to my own view of science – and in particular of experimental science – which I propose to put in its place.

1  The idea that we can purge our minds of prejudices at will and so get rid of all preconceived ideas or theories, prior to, and preparatory to, scientific discovery, is naïve and mistaken. It is mainly through scientific discovery that we learn that certain of our ideas – such as those of the flat earth or the moving sun – are prejudices. We discover the fact that one of the beliefs we held was a prejudice only after the advance of science has led us to discard it. For there is no criterion by which we could recognize prejudices in anticipation of this advance.

2  The rule ‘Purge yourself of prejudice!’ can therefore have only the dangerous result that, after having made an attempt or two, you may think that you have succeeded – with the result, of course, that you will stick more tenaciously to your prejudices and dogmas, especially to those of which you are unconscious.

3  Moreover, Bacon’s rule was ‘purge your mind of all theories!’ But a mind so purged would not only be a pure mind: it would be an empty mind.

4  We always operate with theories, even though more often than not we are unaware of them. The importance of this fact should never be played down. Rather, we should try, in each case, to formulate explicitly the theories we hold. For this makes it possible to look out for alternative theories, and to discriminate critically between one theory and another.

5  There is no such thing as a ‘pure’ observation, that is to say, an observation without a theoretical component. All observation – and especially all experimental observation – is an interpretation of facts in the light of some theory or other.

This last remark leads me to a crucial point – the point which I should be inclined to call ‘Bacon’s problem’. It is this.

6  Bacon was aware of the general tendency to interpret observed facts in the light of theories, and he was keenly awake to the very real dangers of this tendency. He saw that if we interpret the observed facts in the light of preconceived theories or ‘prejudices’, then we are liable to confirm and to strengthen these prejudices by our observations, whatever the actual facts may be. Thus prejudices make it impossible for us to learn from experience: they form an impassable barrier to the advancement of science through observation and experiment.

The point is so important that it should be illustrated by some examples.

What Bacon had in mind was something like this. Let a man hold some religious creed – say, the Zoroastrian or Manichaean heresy which sees our world as an arena of conflict between a good and an evil power. Then all his observations will only confirm his belief. In other words, he will never be able to correct it by experience, or to learn from experience.

There is a modern secular parallel to this theological example. Take a man who believes in the theory that all history is a history of class struggle, and that modern history is the history of the struggle between virtuous proletarians and vile capitalists. If he holds this belief, then whatever he observes or experiences and whatever the newspapers report or fail to report will be interpreted by him in terms of this belief, and will therefore tend to reinforce it.

Or take a third example. Psychoanalysts tend to speak of what they call their ‘clinical observations’, and of the fact that these observations invariably support the psychoanalytic theory. These clinical observations are, however, always interpreted: they are interpreted in accordance with established psychoanalytic theory. This raises the question: Is it legitimate to claim that the observations support the theory? Or to put it in another way: Can we conceive of any human behaviour that we could not interpret in psychoanalytic terms? If the answer to this question is ‘no’, then we can say, prior to any observation, that every conceivable observation will be interpretable in the light of psychoanalytic theory and that it will, thereby, appear to support it. But if this can be said prior to any observation, then this kind of support must not be described as genuinely empirical or observational.

This, I suppose, is the difficulty that Bacon felt. The only escape from it that he could devise was the impracticable proposal to purge our minds of all theories, and adhere to ‘pure’ observation.

III

With this, I will now leave Bacon’s views in order to give you my own view of the matter. I will first propose a simple solution of Bacon’s problem.

My solution consists of two steps.

First, every scientist who claims that his theory is supported by experiment or observation should be prepared to ask himself the following question: Can I describe any possible results of observation or experiment which, if actually reached, would refute my theory?

If not, then my theory is clearly not an empirical theory. For if all conceivable observations agree with my theory, then I cannot be entitled to claim of any particular observation that it gives empirical support to my theory.

Or in short, only if I can say how my theory might be refuted, or falsified, can I claim that my theory has the character of an empirical theory.

This criterion of demarcation between empirical and non-empirical theories I have also called the criterion of falsifiability or the criterion of refutability. It does not imply that irrefutable theories are false. Nor does it imply that they are meaningless. But it does imply that, as long as we cannot describe what a possible refutation of a certain theory would be like, that theory may be regarded as lying outside the field of empirical science.

The criterion of refutability or falsifiability may also be called the criterion of testability. For testing a theory, like testing a piece of machinery, means trying to fault it. Thus a theory that we know in advance cannot possibly be faulted or refuted is not testable.

It should be made quite clear that there are many examples in the history of science of theories which at some stage of the development of science were not testable but which became testable at a later stage. An obvious example is atomic theory. An example within modern physical theory which would deserve a detailed discussion is the theory of the neutrino.

When this theory was first proposed by Pauli, it was clearly not testable. It was even said, at one time, that the neutrino is so defined that the theory cannot be tested. About thirty years later the theory was not only found to be testable, but to pass its test with flying colours. This should be a warning to those who are inclined to say that non-testable theories are meaningless (a view which has often but mistakenly been attributed to me) or that they have no ‘cognitive significance’.

So much for the criterion of the empirical character of a theory. It does not completely solve Bacon’s problem. But it allows us to reject many of those unjustifiable claims to observational support that so worried Bacon.

The criterion of refutability, or falsifiability, or testability, is only the first step in the solution of Bacon’s problem. As we have seen, this step is taken by asking a scientist who claims that his theory is supported by experiment or observation, ‘Is your theory refutable? And what experiment or observation would you accept as a refutation?’

If the answers to these questions are satisfactory, then, and only then, can we proceed to take the second step in our solution to Bacon’s problem. It amounts to this.

Observations or experiments can be accepted as supporting a theory (or a hypothesis, or a scientific assertion) only if these observations or experiments are severe tests of the theory – or, in other words, only if they result from serious attempts to refute the theory, and especially from trying to find faults where these might be expected in the light of all our knowledge, including our knowledge of competing theories.

I believe that this, in principle, solves Bacon’s problem.

The solution amounts to this. Agreement between theory and observation should count for nothing unless the theory is testable, and unless the agreement is found as the result of serious attempts to test it. But testing a theory means trying to find its weak spots. It means trying to refute it. And a theory is testable only if it is (in principle) refutable.

IV

Let us look at a few examples. Psychoanalysis would become refutable only if it denied that certain possible or conceivable forms of human behaviour do, in fact, occur.

Newton’s theory of gravity is highly testable, for example, because its theory of perturbations predicts certain deviations from Kepler’s planetary orbits, and this prediction may be refuted. Einstein’s theory of gravity is highly testable because it predicts certain deviations from Newton’s planetary orbits, and this prediction may be refuted. It also predicts the curvature of light rays and the retardation of atomic clocks in strong gravitational fields, and again these predictions may be refuted.

There is a difficulty with Darwinism. While Lamarckism appears to be not only refutable but actually refuted (because the kind of acquired adaptations which Lamarck envisaged do not appear to be hereditary), it is far from clear what we should consider a possible refutation of the theory of natural selection. If, more especially, we accept that statistical definition of fitness which defines fitness by actual survival, then the theory of the survival of the fittest becomes tautological, and irrefutable.

Darwin’s great achievement was this, I believe. He showed that what appeared to be purposeful adaptation may be explained by some mechanism – such as, for example, the mechanism of natural selection. This was a tremendous achievement. But once it is shown that a mechanism of this kind is possible, we ought to try to construct alternative mechanisms, and then try to find some crucial experiments to decide between them, rather than foster the belief that the Darwinian mechanism is the only possible one.

Or let us take as an example a theory more closely related to experimental work: the theory of synaptic transmission. The chemical theory of transmission (as against the competing electrical theory) passed a severe test when acetylcholine was artificially applied to the contact region of the muscle fibre. The fact that it triggered the impulse like a firing nerve could be claimed in support of the chemical theory.2

The view here presented may be summed up by saying that the decisive function of observation and experiment in science is criticism. Observation and experiment cannot establish anything conclusively, for there is always the possibility of a systematic error through systematic misinterpretation of some fact or other. But observation and experiment certainly play an important part in the critical discussion of scientific theories. Essentially, they help us to eliminate the weaker theories. In this way they lend support, though only for the time being, to the surviving theory – that is, to the theory which has been severely tested but not refuted.

V

The modern view of science – the view that scientific theories are essentially hypothetical or conjectural, and that we can never be sure that even the best established theory may not be overthrown and replaced by a better approximation – is, I believe, the result of the Einsteinian revolution.

For there never was a more successful theory, or a better tested theory, than Newton’s theory of gravity. It succeeded in explaining both terrestrial and celestial mechanics. It was most severely tested in both fields for centuries. The great physicist and mathematician Henri Poincaré believed not only that it was true – this of course was everybody’s belief – but that it was true by definition, and that it would therefore remain the invariable basis of physics to the end of man’s search for truth. And Poincaré believed this in spite of the fact that he actually anticipated – or that he came very close to anticipating – Einstein’s special theory of relativity. I mention this in order to illustrate the tremendous authority of Newton’s theory down to the very last.

Now the question whether or not Einstein’s theory of gravity is an improvement upon Newton’s, as most physicists think it is, may be left open. But the mere fact that there was now an alternative theory which explained everything that Newton could explain and, in addition, many more things, and which passed at least one of the crucial tests that Newton’s theory seemed to fail, destroyed the unique place held by Newton’s theory in its field. Newton’s theory was thus reduced to the status of an excellent and successful conjecture, a hypothesis competing with others, and one whose acceptability was an open question. Einstein’s theory thus destroyed the authority of Newton’s, and with it something of even greater importance – authoritarianism in science.

Those of you who are my contemporaries may remember the days when complete authority was claimed by the secular religion of science. Hypotheses were recognized as playing a role in science, but their role was heuristic and transitory: science itself was believed to be a body of knowledge. It did not consist of hypotheses, but of proved theories – proved theories like that of Newton.

It is interesting in this context that Max Planck tells the story that, when he was an ambitious young man, a famous physicist tried to discourage him from studying physics with the remark that physics was about to reach its ultimate completion, and that there were no longer any great discoveries to be made in this field.

This period of authoritarian science has passed, I suppose forever, owing to the Einsteinian revolution.3 It is interesting in this connection to note that Einstein himself did not hold that his general theory was true – though he did believe that it was a better approximation to the truth than Newton’s, and that a still better approximation, and of course also the true theory (if ever found), would have to contain, in their turn, general relativity as an approximation. In other words, Einstein was clear from the very first about the essentially conjectural character of his theories.

As I have said earlier, it was part of the religion of science before Einstein to claim authority for science. Admittedly, there were a few heretics, notably the great American philosopher Charles S. Peirce, who said before Einstein that science shared the fallibility of all human endeavours. Yet Peirce’s fallibilism became influential mainly after the Einsteinian revolution.

VI

I have mentioned these historical facts merely because I wish to stress that the change from the authoritarian theory of scientific knowledge to an anti-authoritarian and critical theory is quite recent. This also explains why the view that the method of science is essentially the method of critical discussion, and of a critical examination of competing conjectures or hypotheses, is still felt by many to be inappropriate to the experimental sciences – why so many people still feel that what is based upon careful laboratory work has more than merely hypothetical status.

To combat this view, I may choose an example from chemistry. If you had asked an experimental chemist, before the discovery of heavy water, what branch of chemistry was most secure – least likely to be overthrown or corrected by new revolutionary discoveries – he would almost certainly have said, the chemistry of water. In fact, water was used in the definition of one of the fundamental units of physics, the gram, forming part of the centimetre–gram–second system. And hydrogen and oxygen were used as the theoretical and practical basis in the determination of all atomic weights.

All this was completely upset by the unexpected discovery of heavy water, from which we may learn the lesson that we can never know which part of science will have to be revised next.

Or take a still more recent example from physics: the breakdown of parity. This was one of those cases in which it turned out after the event that there had been many observations – photographs of particle tracks – from which we might have read off the result, but that the observations had been either ignored or misinterpreted. Much the same thing had happened before when the positron was discovered, and before this when the neutron was discovered. Earlier still, before the discovery of X-rays, it happened to William Crookes himself, the inventor of the Crookes tube with the help of which X-rays were subsequently discovered.

VII

I may now perhaps sum up the first part of my talk by restating all the controversial things I have been saying in a number of theses, which I shall try to put in as challenging a form as I can.

1  All scientific knowledge is hypothetical or conjectural.

2  The growth of knowledge, and especially of scientific knowledge, consists in learning from our mistakes.

3  What may be called the method of science consists in learning from our mistakes systematically: first, by taking risks, by daring to make mistakes – that is, by boldly proposing new theories; and secondly, by searching systematically for the mistakes we have made – that is, by the critical discussion and the critical examination of our theories.

4  Among the most important arguments that are used in this critical discussion are arguments from experimental tests.

5  Experiments are constantly guided by theory, by theoretical hunches of which the experimenter is often not conscious, by hypotheses concerning possible sources of experimental errors, and by hopes or conjectures about what will be a fruitful experiment. (By theoretical hunches I mean guesses that experiments of a certain kind will be theoretically fruitful.)

6  What is called scientific objectivity consists solely in the critical approach: in the fact that if you are biased in favour of your pet theory, some of your friends and colleagues (or failing these, some workers of the next generation) will be eager to criticize your work – that is to say, to refute your pet theories if they can.

7  This fact should encourage you to try to refute your own theories yourself – that is to say, it may impose some discipline upon you.

8  In spite of this, it would be a mistake to think that scientists are more ‘objective’ than other people. It is not the objectivity or detachment of the individual scientist but of science itself (what may be called ‘the friendly-hostile cooperation of scientists’, that is, their readiness for mutual criticism) which makes for objectivity.

9  There is even something like a methodological justification for individual scientists to be dogmatic and biased. Since the method of science is that of critical discussion, it is of great importance that the theories criticized should be tenaciously defended. For only in this way can we learn their real power. And only if criticism meets resistance can we learn the full force of a critical argument.

10  The fundamental role played in science by theories or hypotheses or conjectures makes it important to distinguish between testable (or falsifiable) and non-testable (or non-falsifiable) theories.

11  Only a theory which asserts or implies that certain conceivable events will not, in fact, happen is testable. The test consists in trying to bring about, with all the means we can muster, precisely these events which the theory tells us cannot occur.

12  Thus, every testable theory may be said to forbid the occurrence of certain events. A theory speaks about empirical reality only in so far as it sets limits to it.

13  Every testable theory can thus be put into the form ‘such and such cannot happen’. For example, the second law of thermodynamics can be formulated as saying that a perpetual motion machine of the second kind cannot exist.

14  No theory can tell us anything about the empirical world unless it is in principle capable of clashing with the empirical world. And this means, precisely, that it must be refutable.

15  Testability has degrees: a theory which asserts more, and thus takes greater risks, is better testable than a theory which asserts very little.

16  Similarly, tests can be graded as being more or less severe. Qualitative tests, for example, are in general less severe than quantitative tests. And tests of more precise quantitative predictions are more severe than tests of less precise predictions.

17  Authoritarianism in science was linked with the idea of establishing, that is to say, of proving or verifying, its theories. The critical approach is linked with the idea of testing, that is to say, of trying to refute, or to falsify, its conjectures.

VIII

I now turn to the second part of this talk, devoted to problems and their role in science.

Science begins with observation, says Bacon, and this saying is an integral part of the Baconian religion. It is still widely accepted, and still repeated ad nauseam in the introductions to even some of the best textbooks in the field of the physical and biological sciences.

I propose to replace this Baconian formula by another one.

Science, we may tentatively say, begins with theories, with prejudices, superstitions, and myths. Or rather, it begins when a myth is challenged and breaks down – that is, when some of our expectations are disappointed. But this means that science begins with problems, practical problems or theoretical problems.

Before going on to develop my thesis here more fully, I may perhaps say a few words about the term ‘expectation’, which I have just used.

In walking down steps it sometimes happens that we suddenly discover that we expected another step (which was not there) or, on the contrary, that we expected no other step (while in reality there was one). The unpleasant discovery that we were mistaken makes us realize that we had certain unconscious expectations. And it shows us that there are thousands of such unconscious expectations. A similar example is this: if we sit and work in a room in which a clock can be heard ticking, we may hear that the clock has suddenly stopped. This makes us conscious of the fact that we expected it to go on ticking – even though we may not have been conscious of hearing it.

A study of animal behaviour teaches us that animals similarly adjust their behaviour to impending events and become disturbed if the expected event does not happen.

We may say that an expectation, conscious or unconscious, corresponds, on the pre-scientific level, to what we call, on the scientific level, ‘a conjecture’ (about an impending event), or ‘a theory’.

In my views about the methods of science and especially about the role of observation4 I disagree with almost everybody except Charles Darwin and Albert Einstein. Einstein, incidentally, explained his views on these matters concisely in his Herbert Spencer Lecture, delivered in Oxford in 1933, and entitled On the Method of Theoretical Physics.5 There he told his audience not to believe those scientists who say that their methods are inductive.

Since, as I said, I disagree about these matters with almost everybody, I cannot hope that I shall convince you and I shall not try to do so. All I shall attempt is to draw your attention to the fact that there are some people who hold views on these matters differing widely from the usual ones, and that men like Darwin and Einstein are among them.

My thesis, as I have already indicated, is that we do not start from observations but always from problems: from practical problems, or from a theory which has run into difficulties – that is to say, a theory which has raised, and disappointed, certain expectations.

Once we are faced with a problem, we proceed by two kinds of attempt. We attempt to guess, or to conjecture, a solution to our problem. And we attempt to criticize our usually somewhat feeble solutions. Sometimes a guess or a conjecture may withstand our criticism and our experimental tests for quite some time. But as a rule, we find that our conjectures can be refuted, or that they do not solve our problem, or that they solve it only in part. And we find that even the best solutions – those able to resist the most severe criticism of the most brilliant and ingenious minds – soon give rise to new difficulties, to new problems. Thus we may say that our knowledge grows as we proceed from old problems to new problems by means of conjectures and refutations – by the refutation of our theories or, more generally, of our expectations.

I suppose that some of you will agree that we usually start from problems. But you may still think that our problems must have been the result of observation and experiment, since prior to receiving impressions through our senses, our mind is a tabula rasa, an empty slate, a blank – for there can be nothing in our intellect which has not entered it through our senses.

But it is just this venerable idea that I am combating. I assert that every animal is born with many, usually unconscious, expectations – in other words with something closely corresponding to hypotheses, and thus to hypothetical knowledge. And I assert that we always have, in this sense, inborn knowledge to start from, even though it may be quite unreliable. This inborn knowledge, these inborn expectations, will, if disappointed, create our first problems. And the ensuing growth of knowledge may therefore be described as consisting throughout of corrections and modifications of previous knowledge – of previous expectations or hypotheses.

Thus I am turning the tables on those who think that observation must precede expectations and problems. And I even assert that observation cannot, for logical reasons, be prior to all problems, although, obviously, it will sometimes be prior to some problems – for example to those that spring from an observation which has disappointed some of our expectations or which has refuted some of our theories.

Now this fact – that observation cannot precede all problems – may be illustrated by a simple experiment which I wish to carry out, by your leave, with yourselves as experimental subjects. My experiment is to ask you to observe, here and now. I hope you are all cooperating and observing! Yet I fear that some of you, instead of observing, will feel a strong urge to ask: ‘What do you want me to observe?’

If this is your response, then my experiment was successful. For what I am trying to illustrate is that in order to observe, we must have in mind a definite question which we might be able to decide by observation. Charles Darwin knew this when he wrote: ‘How odd it is that anyone should not see that all observation must be for or against some view …’.6

I cannot, as I said before, hope to convince you of the truth of my thesis that observation comes after expectation or hypothesis. But I do hope that I have been able to show you that there may exist an alternative to the venerable doctrine that knowledge – especially scientific knowledge – starts from observation. (The still more venerable doctrine that all knowledge starts from perception or sensation or sense-data, which, of course, I also reject, is, incidentally, at the root of the fact that ‘problems of perception’ are still widely considered to form a respectable part of philosophy or, more precisely, of epistemology.)

IX

Now let us look a little more closely at the way in which we get acquainted with a problem.

We start, I say, with a problem – a difficulty. It is perhaps a practical problem, or a theoretical problem. Whatever it may be, when we first encounter the problem we cannot, obviously, know much about it. At best, we have only a vague idea what our problem really consists of. How, then, can we produce an adequate solution? Obviously, we cannot. We must first get better acquainted with the problem. But how?

My answer is very simple: by producing a very inadequate solution, and by criticizing this inadequate solution. Only in this way can we come to understand the problem. For to understand a problem means to understand why it is not easily soluble – why the more obvious solutions do not work. We must therefore produce these obvious solutions and try to find out why they will not do. In this way, we become acquainted with the problem. And in this way we may proceed from bad solutions to slightly better ones – provided always that we have the ability to guess again.

A very trivial example of this method of attempting to solve a problem by trial and the elimination of error is the task of dividing a largish number – say 22376 – by another one – say 2784. Our usual method is to guess the first figure of the quotient – our guess may be that it is 7 – and to try out whether our guess was correct. If our guess was 7, we easily find that we were in error, and that we have to replace 7 by 8. There are many less trivial mathematical problems for which the standard method of solving them is to start with a guess and subsequently to correct the error made.7
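The guess-and-correct procedure described above can be made concrete. The sketch below is an illustrative toy, not anything from the original text: the function name and the stepping strategy are my own assumptions, chosen only to mirror the "guess, detect the error, correct it" pattern of the division example.

```python
def quotient_by_correction(dividend, divisor, guess):
    """Find the integer quotient by starting from a guess and
    correcting it step by step, as in trial-and-error division."""
    # If the guess is too large, the product overshoots the dividend:
    while guess * divisor > dividend:
        guess -= 1
    # If the guess is too small, a larger one would still fit:
    while (guess + 1) * divisor <= dividend:
        guess += 1
    return guess

# Popper's numbers: 22376 divided by 2784, with 7 as the first guess.
q = quotient_by_correction(22376, 2784, 7)
r = 22376 - q * 2784
print(q, r)  # the erroneous guess 7 is corrected to 8, remainder 104
```

The point of the sketch is only that each failed trial (here, the overshooting or undershooting product) tells us in which direction to revise the conjecture.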

These examples should make it clear that the method of trial and error-elimination is utterly different from the so-called (but in my view nonexistent) ‘method of induction by repetition’. Nevertheless, the two have often been confused.

In simple mathematical problems the solution can always be found after a small number of trials and errors, or even after only one. But this is not, of course, generally true of mathematical problems (some of which are insoluble). And it is certainly not true of problems in the empirical sciences. Yet it is generally true that the best if not the only method of learning something about a problem is to try first to solve it by guessing and then to try to pinpoint the mistakes we have made.8

This, I think, is what is meant by ‘working on a problem’. And if we have worked on a problem long enough, and intensively enough, we begin to know it, to understand it, in the sense that we know what kind of solution does not do at all (because it simply misses the point of the problem) and what kind of requirement would have to be met by a serious attempt at a solution. In other words, we begin to see the ramifications of the problem, its sub-problems, and its connection with other problems.

At this stage our tentative solutions may be submitted to the criticism of others – to critical discussion, that is – and perhaps even be published.

Or if you are an experimentalist, you may now proceed to test your solution. If it is the solution of a practical problem of experimentation, you will try it out in various experiments. If it is a conjecture, a hypothesis, you will test it with the help of experiments.

These experimental tests are, of course, again part of the process of critically ‘working on a problem’: of getting to know it, of getting acquainted and really familiar with it, and thus perhaps of improving one’s chances of finding, some day, a satisfactory and illuminating solution.

However this may be, the really important point I wish to make is this. If the question is asked, ‘What is it to understand a problem?’, my answer is that there is only one way to learn to understand a serious problem – whether it is now purely theoretical or a practical problem of experimentation. And this is to try to solve it, and to fail. Only if we find that some facile and obvious solution does not solve our problem do we begin to understand it. For a problem is a difficulty. Understanding it means experiencing this difficulty. And this can only be done by finding out that there is no easy and obvious solution to it.

Thus we become acquainted with a problem only when we have many times tried in vain to solve it. And after a long series of failures – of attempts yielding solutions which turn out to be unacceptable – we may even become experts in this particular problem. We shall have become experts in the sense that, whenever somebody else offers a new solution – for example, a new theory – it will be either one of those solutions which we have tried out in vain (so that we shall be able to explain why it does not work) or it will be a new solution. In this case we may be able to find out quickly whether or not it gets over at least those standard difficulties which we know so well from our unsuccessful endeavours.

My point is that even if we persistently fail to solve our problem, we shall have learned a great deal by having wrestled with it. The more we try, the more we shall learn about it – even if we fail every time. It is clear that, having become in this way utterly familiar with a problem – that is, with its difficulties – we may have a better chance to solve it than somebody who does not even understand the difficulties. But it is all a matter of chance: in order to solve a difficult problem one needs not only understanding but also luck.

Thus, like science itself, which begins and ends with problems and progresses through wrestling with them, the individual scientist should also begin and end with his problem and wrestle with it. Moreover, while wrestling with it, he will not merely learn to understand the problem, but he will actually change it. A change of emphasis may make all the difference – not only to our understanding, but to the problem itself, to its fertility and significance, and to the prospects of an interesting solution. It is important for a scientist to be awake to these changes and shifts, and not to make them either unconsciously or surreptitiously. For it often happens that a reformulation of a problem can reveal to us almost the whole of its solution.

X

My view of the significance of problems for the methodology or theory of scientific knowledge may perhaps be summed up by the following considerations.

The theory of knowledge – and especially the theory of scientific knowledge – is constantly faced with a near paradox which may be brought home to us by the clash of the following two theses.

First thesis: Our knowledge is vast and impressive. We know not only innumerable details and facts of practical significance, but also many theories and explanations which give us an astonishing intellectual insight into dead and living subjects, including ourselves, and human societies.

Second thesis: Our ignorance is boundless and overwhelming. Every new bit of knowledge we acquire serves to open our eyes further to the vastness of our ignorance.

Both of these theses are true, and their clash characterizes our knowledge-situation. The tension between our knowledge and our ignorance is decisive for the growth of knowledge. It inspires the advance of knowledge, and it determines its ever-moving frontiers.

The word ‘problem’ is only another name for this tension – or rather, a name denoting various concrete instances of it.

As I suggested above, a problem arises, grows, and becomes significant through our failures to solve it. Or to put it another way, the only way of getting to know a problem is to learn from our mistakes.

This applies to pre-scientific knowledge and to scientific knowledge.

My view of the method of science is, very simply, that it systematizes the pre-scientific method of learning from our mistakes. It does so by the device called critical discussion.

My whole view of scientific method may be summed up by saying that it consists of these three steps:

1  We stumble over some problem.

2  We try to solve it, for example by proposing some theory.

3  We learn from our mistakes, especially from those brought home to us by the critical discussion of our tentative solutions – a discussion which tends to lead to new problems.

Or in three words: problems – theories – criticism.

I believe that in these three words the whole procedure of rational science may be summed up.9

XI

Having discussed problems and their growth at some length, I now turn to theories. I shall discuss the question: What is meant by saying that we ‘understand’ a scientific theory?

This question has been much discussed, and it has been suggested that we should not speak at all about ‘understanding’ theories – that the idea that we can understand a theory is out of date. It has also been suggested that those who, like myself, speak about ‘understanding’ mean either the understanding of a crude mechanism, like a clock, or else ‘understanding’ in the sense of being able to draw a picture, or make a model, of the process in question. And it is then pointed out, and quite correctly, that modern physical theory no longer confines itself to clockwork mechanisms, or to picturable processes. From this it is concluded – wrongly, I believe – that the whole idea of ‘understanding’ a theory is out of date. And this conclusion is widely accepted, not only by physicists but also by some biologists.

I do not think that the conclusion is correct. And I do not see any reason why understanding a theory should be any more out of date than understanding a problem: a process which I have described without appealing to models or pictures which may be intuited or visualized.

Understanding a theory, I suggest, means understanding it as an attempt to solve a certain problem. This is an important proposition, and one which too few people understand.

What is the point of, say, Newton’s theory? It is an attempt to solve the problem of explaining Kepler’s and Galileo’s laws. Without understanding the problem situation that gave rise to the theory, the theory is pointless – that is, it cannot be understood.

Or take as an example Bohr’s theory (1913) of the hydrogen atom. This theory described a model, and was therefore intuitive and visualizable. Yet it was also very perplexing – not because of any intuitive difficulty, but because it assumed, contrary to Maxwell’s and Lorentz’s theory and to well-known experimental effects, that a periodically moving electron, a moving electric charge, need not always create a disturbance of the electromagnetic field, and so need not always send out electromagnetic waves. This difficulty is a logical one – a clash with other theories. And no one can be said to understand Bohr’s theory who does not understand this difficulty and the reasons why Bohr boldly accepted it, thus departing in a revolutionary way from earlier and well-established theories.

But the only way to understand Bohr’s reasons is to understand his problem – the problem of combining Rutherford’s atom model with a theory of emission and absorption of light, and thus with Einstein’s photon theory, and with the discreteness of atomic spectra. The understanding of Bohr’s theory does not lie in visualizing it intuitively but in gaining familiarity with the problems it tries to solve, and in the appreciation of both the explanatory power of the solution and the fact that the new difficulty that it creates constitutes an entirely new problem of great fertility.

The question whether or not a theory or a conjecture is more or less satisfactory or, if you like, prima facie acceptable as a solution of the problem which it sets out to solve is largely a question of purely deductive logic. It is a matter of getting acquainted with the logical conclusions which may be drawn from the theory, and of judging whether or not these conclusions (a) yield the desired solution and (b) yield undesirable by-products – for example some insoluble paradox, some absurdity.

XII

It may be appropriate at this stage to say something about the acceptance of theories – much discussed by philosophers of science as the question of ‘verification’.

To begin with, I wish to make it quite clear that I regard the question of the acceptance of a theory or conjecture as one whose importance is much over-rated (quite apart from the fact that I don’t believe in the verification or verifiability of theories, but this I shall not discuss here).

Consider just one example. Einstein proposed his theory of general relativity, he defended it patiently against violent criticism, he suggested that it was an important advance, and that it should be accepted as an improvement on Newton’s theory – but he never accepted it himself in that sense of ‘accepted’ which almost all philosophers of science regard as important. What I mean is this. Philosophers of science speak as if there were a body of knowledge, called science, which consists, in the main, of accepted theories. But this seems to me utterly mistaken, and a residue of the dreams of authoritarian science prevailing in the days when people thought that we were just on the verge of completing the task of science, a thing that Bacon believed in 1600, and that some competent physicists still believed in 1900, as Max Planck has told us.

It seems to me that most philosophers of science use the term ‘accepted’ or ‘acceptable’ as a substitute for ‘believed in’ or ‘worthy of being believed in’. There may be a lot of theories in science that are true and therefore worthy of being believed in. But according to my view of the matter, this worthiness is no concern of science. For science does not attempt positively to justify or to establish this worthiness. On the contrary, it is mainly concerned with criticizing it. It regards, or should regard, the overthrow of even its most admirable and beautiful theories as a triumph, an advance. For we cannot overthrow a good theory without learning an immense amount from it and from its failure. As always, we learn from our mistakes.

The overthrow of a theory always creates new problems. But even if a new theory is not yet overthrown, it will, as we have seen from the example of Bohr’s theory, create new problems. And the quality, the fertility, and the depth of the new problems which a theory creates are the best measures of its intrinsic scientific interest.

To sum up, the question of the acceptance of theories should, I propose, be demoted to the status of a minor problem. For science may be regarded as a growing system of problems, rather than as a system of beliefs. And for a system of problems, the tentative acceptance of a theory or a conjecture means hardly more than that it is considered worthy of further criticism.

XIII

I have not said anything about induction so far, and I should not have said anything were I not afraid to disappoint some of those who came to hear a philosopher on scientific method – and thus on induction.

So I must say now that I do not believe there is such a thing as an inductive method or an inductive procedure – unless indeed you decide to use the name ‘induction’ for that method of critical discussion and of attempted refutations which I have described here.

I never quarrel about words, and I have of course no serious objection if you wish to call the method of critical discussion ‘induction’. But if you do, then you should be aware of the fact that it is very different from anything that has ever been called ‘induction’ in the past. For induction was always supposed to establish a theory, or a generalization, while the method of critical discussion does not establish anything. Its verdict is always and invariably ‘not proven’. The best that it can do – and this it does rarely – is to come out with the verdict that a certain theory appears to be the best available (that is to say, the best so far submitted to examination and discussion), that it appears to solve much of the problem it was designed to solve, and that it has survived the severest tests that we were able to devise. But this does not, of course, establish the theory as true (that is to say, as corresponding to the facts, or as an adequate description of reality) – although we may say that what such a positive verdict amounts to is that, in the light of our critical discussion, the theory appears to be the best approximation to the truth so far attained.10

In fact, the idea of ‘better approximation to the truth’ is at once the main standard of our critical discussion and an aim we hope to attain as a result of that discussion. Among our other standards is the explanatory power of a theory, and its simplicity.11

In the past, the term ‘induction’ has been used mainly in two senses. The first is repetitive induction (or induction by enumeration). This consists of often repeated observations and experiments, which are supposed to serve as premises in an argument establishing some generalization or theory. The invalidity of this kind of argument is obvious: no amount of observation of white swans establishes that all swans are white (or that the probability of finding a non-white swan is small). In the same way, no amount of observed spectra of hydrogen atoms on earth establishes that all hydrogen atoms emit spectra of the same kind. Theoretical considerations, however, may suggest the latter generalization, and further theoretical considerations may suggest that we should modify it by introducing Doppler shifts and Einsteinian gravitational red-shifts.

Thus repetitive induction is out: it cannot establish anything.

The second main sense in which the term ‘induction’ has been used in the past is eliminative induction – induction by the method of eliminating or refuting false theories. This may look at first sight very much like the method of critical discussion that I am advocating. But in fact it is very different. For Bacon and Mill and other exponents of this method of eliminative induction believed that by eliminating all false theories we can finally establish the true theory. In other words, they were unaware of the fact that the number of competing theories is always infinite – even though there are as a rule at any particular moment only a finite number of theories before us for consideration. I say ‘as a rule’, for sometimes an infinite number is before us. For example, it was suggested that we should modify Newton’s inverse square law of attraction, replacing the square by a power differing slightly from the number 2. This proposal amounted to the suggestion that we should consider an infinite number of slightly different corrections to Newton’s law.
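The infinite family of rival theories alluded to above can be written out explicitly. A standard way of expressing the proposed modification of Newton's law of attraction is

```latex
F(r) \;=\; \frac{G\, m_1 m_2}{r^{\,2+\varepsilon}}, \qquad \varepsilon \in \mathbb{R}, \quad |\varepsilon| \ll 1 .
```

Since $\varepsilon$ ranges over a continuum of values, every choice yields a logically distinct theory; no finite sequence of eliminations could single out one member of this family as the sole survivor.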

The fact that there is always an infinity of logically possible solutions to every problem is a decisive fact for the philosophy of science. It is one of those things that makes science such a thrilling adventure. For it renders inefficient all merely routine methods. It means that scientists must use imagination and bold ideas, though always tempered by severe criticism and severe tests.

It also shows, incidentally, the mistake of those who think that the aim of science is merely to establish correlations between observed events, or observations (or, worse, ‘sense data’). What we aim at in science is much more. We aim at discovering new worlds behind the world of ordinary experience: such as, perhaps, a microscopic or submicroscopic world – gravitational, chemical, electrical, and nuclear forces, some of them, perhaps, reducible to others, and others not. It is the discovery of these new worlds, of these new undreamt-of possibilities, which adds so much to the liberating power of science. Correlation coefficients are not interesting if they merely correlate our observations. They are interesting only if they help us to learn more about these worlds.

XIV

Let me conclude this part of my talk with a practical proposal.

There is a tradition still alive in the writing of scientific papers which I have dubbed ‘the inductive style’. I am sure you all know it, and some of you may still practise it. A very well-known form of it is to write a paper by first describing the experimental arrangements, then the observations, possibly a curve which may link them up, and perhaps concluding (in small print) with a hypothesis. This inductive or Baconian style has a long and glorious history: great and world-shaking papers have been written in this style – for example, Sir Alexander Fleming’s paper reporting his first observations of penicillin.

But we all know that Fleming did not merely observe effects: he knew many things beforehand. He knew about Ehrlich’s hopes, and the possibility of antibiotic substances had been discussed by biologists for years. And Lady Fleming has told us in a paper, which, I believe, is not yet published, how greatly her late husband was interested in these questions, and in the medical possibilities of such substances.

Thus Fleming was not a passive observer of an accident. So far as it was an accident, it was one that happened to a well-prepared mind – a mind aware of the possible significance and desirability of ‘accidents’ of this kind. But an innocent reader of Fleming’s paper would hardly suspect it. And this is the result of the traditional inductive style, which, in its turn, comes of a mistaken view of scientific objectivity.

Now the practical proposal I wish to make is this. We should, as a matter of course, give the widest freedom to scientists to write papers as they think fit. But we could nevertheless encourage a new style, a style totally different from the traditional one.

A paper written in this new style might be in the following form:

It would start with a brief but clear statement of the problem situation as it stood before the research was started, and with a brief survey of the position reached so far in the discussion. It would then proceed to state briefly any hunch or conjecture related to the problem that may have motivated the research, and say which hypotheses the research hoped to test. Next it would outline the experimental arrangements, adding, if possible, reasons for choosing them, and the results. And it would conclude with a summary which would state whether any tests had been successful, whether the problem situation had changed in the opinion of the author, and if so, in what way. This part would also contain new hypotheses, if any, and perhaps some comment on how they could be tested.

Papers have been written in this style, some of them upon my suggestion. They were not all kindly received by the editors. But I believe that in the present situation of science in which high specialization is about to create an even higher Tower of Babel, the replacement of the inductive style by something like this new critical style is one of the few ways in which mutual interest and mutual contacts between the various fields of research can be preserved, or rather recreated. And I hope that the interest of the intelligent layman may also be rekindled in this way.

All this, of course, is merely a proposal open to discussion. But these matters ought to be discussed. For there does not seem to have been much discussion of questions like this for a long time – perhaps not even since Bacon, almost 400 years ago.

XV

I now come to the brief concluding part of my talk, entitled ‘Responsibilities’.

Anybody who says anything today about the human or social responsibilities of scientists is expected, I am afraid, to say something about the bomb. So let me get the bomb out of the way first, for what I really wish to discuss has nothing to do with it.

Far be it from me to belittle the danger of nuclear warfare. The danger is terrible, as we all know, and the prospects of avoiding this kind of warfare are not as good as one could wish. This being so, we should try to make the best of a very unpleasant situation. It seems very likely that we shall have to live for a long time under the shadow of the bomb, and the only thing most of us can do, as far as I am able to see, is to accept the situation.

One of the things which we should avoid as far as possible is to become hysterical about it, and to proclaim loudly that this danger is the responsibility of us all.

There is very good reason for saying that road accidents are the responsibility of us all, because we are all users of roads, and we are all liable to make mistakes at times, as drivers or as pedestrians. But with the possible exception of a very small number of political or military leaders, we cannot do anything sensible about the danger of nuclear warfare.

In saying this I am taking a line which is rather the opposite of that which many worthy and well-informed people are taking. There has appeared, for example, quite recently a leading article in that interesting periodical, The Bulletin of the Atomic Scientists, which first developed a philosophical argument against fatalism and determinism, and went on to conclude that we are all responsible for what is going to happen – that the situation is a most urgent and desperate one, and that we should all do something about it as quickly as possible.

The author did not say what we should do. I suppose he thought that everybody should do his best according to his or her particular situation.

I think this author was wrong. I do not think it would be helpful if millions of citizens began to feel that they just had to do something about the bomb, and that they would be irresponsible, and fail in their duty as citizens, if they did not do something to prevent nuclear warfare. It seems to me possible even that an outbreak of this kind of feeling (which I personally would be inclined to describe as hysterical) might well add to the danger of nuclear attack.

A fact of life that we had better face is that sometimes we are involved in situations about which we should be ready to do whatever can be done, but about which we happen to be unable to do anything.

I do not wish to be dogmatic, and any practical suggestion or proposal should be most carefully discussed. This holds, of course, for the proposal called ‘unilateral disarmament’. Yet even though I have always been a great admirer of Bertrand Russell as a philosopher, I feel that such proposals as unilateral disarmament have nothing whatever to recommend them. It seems strange to me that the propagators of unilateral disarmament never consider the possibility that if they were more successful in their propaganda, so that our determination to resist were seriously weakened, they might easily precipitate a nuclear attack. After all, there can be little doubt that the 18 years of uneasy nuclear peace enjoyed by us were very largely due to our readiness to fight. In other words, practical experience has shown that nuclear armament, dangerous as it is, may postpone the outbreak of nuclear warfare – perhaps for a sufficiently long time to lead to controlled disarmament. On the other hand, Hiroshima and Nagasaki have shown that if only one side in a conflict possesses atom bombs, it may well decide to use them in order to bring the conflict to an end (and to a speedy end, if possible, before the other side decides to build up – or to rebuild – an atomic arsenal).

Without having even the slightest inclination towards fatalism, I feel, like the famous ‘man in the street’, that those who cannot do anything about this danger should recognize this fact and learn to live with the danger as well as they can.

But I do think that, quite apart from the bomb, there are many sides to the present uneasy situation about which we can do something, and about which scientists, more especially, can do much, in purely peaceful ways.

Both our society and that of the Russians have a common background – the secularized religion of science. I mean the Baconian belief which grew during the Enlightenment that man may, through knowledge, liberate himself – that he may free his mind from prejudice and parochialism.

Like every great idea, this idea of self-emancipation through knowledge has, as we now know, its obvious dangers. Yet it is a very great idea. At any rate, we have embraced it. And though we can refine it, and develop it, we certainly cannot repudiate it now without condemning a large part of humanity to death by starvation.

Marxism calls itself a science. It is not a science, as I have tried to show elsewhere. Yet in calling itself a science, it pays homage to science and to the idea of self-emancipation through knowledge. Much of its seductive power is connected with this fact.

At any rate Marxism, even though it has produced a ruthless dictatorship and an arrogant contempt for freedom and for individual human beings, is committed, like ourselves, to the idea of self-emancipation through knowledge – through the growth of science.

Thus there is a field of peaceful competition here, and one in which we can hardly fail if we enter it whole-heartedly. The most important task for scientists in this competition is, of course, to do good work in their own particular fields. The second task is to shun the danger of narrow specialization: a scientist who does not take a burning interest in other fields of science excludes himself from participation in that self-liberation through knowledge which is the cultural task of science. A third task is to help others to understand his field and his work, and this is not easy. It means reducing scientific jargon to the minimum – that jargon in which many of us take pride, almost as if it were a coat of arms or an Oxford accent. Pride of this kind is understandable. But it is a mistake. It should be our pride to teach ourselves as well as we can always to speak as simply and clearly and unpretentiously as possible, and to avoid like the plague the suggestion that we are in possession of knowledge which is too deep to be clearly and simply expressed.

This, I believe, is one of the greatest and most urgent social responsibilities of scientists. It may be the greatest. For this task is closely linked with the survival of an open society and of democracy.

An open society (that is, a society based on the idea of not merely tolerating dissenting opinions but respecting them) and a democracy (that is, a form of government devoted to the protection of an open society) cannot flourish if science becomes the exclusive possession of a closed set of specialists.

I believe that the habit of always stating as clearly as possible our problem, as well as the present state of the discussion of the problem, would do much to help towards the important task of making science – that is to say, scientific ideas – better and more widely understood.

NOTES

1  The quotation is taken from ‘The Third Day’. The translation is my own. Cp. Stillman Drake’s translation, Dialogue Concerning the Two Chief World Systems, University of California Press, Berkeley and Los Angeles, 1953, pp. 327f.

2  See, for example, J.C. Eccles, The Physiology of Nerve Cells, Johns Hopkins University Press, Baltimore and Oxford, 1957, pp. 182–4.

3  When writing this I ought to have remembered the strange period from about 1929 or 1930 to 1932 or 1933, now easily forgotten, when the same feeling as described by Planck emerged again, though only for a short time, among some leading physicists. It is described by C.P. Snow in The Search where a Cambridge physicist whom he describes as ‘one of the greatest mathematical physicists’ and as ‘Newton’s successor’ is made to say: ‘In a sense, physics and chemistry are finished sciences.’ (Penguin edition, London, 1965, p. 162. See also p. 88 for suggestions about the identity of the physicist.) A somewhat similar attitude may be discerned in R.A. Millikan’s Time, Matter and Values, University of North Carolina Press, Chapel Hill, 1932, p. 46. The ‘finished science’ of those days was the electrical theory of matter, that is, the theory of protons and electrons: the structure of matter was to be explained by electrical forces (and even gravitation might well in the end be reduced to electricity). This theory of matter, which completely dominated the first third of the century, has slowly and almost silently disappeared – certainly without causing anything like a violent, or even a conscious, revolution. (It should be remembered, in this context, that at that time quantum mechanics was the theory of electrons and of their behaviour in electrical fields, especially in the electrostatic fields of positively charged nuclei.)

4  What follows here, up to and including the first three paragraphs of section IX, is taken over, with very little change, from my Herbert Spencer Lecture of 1961. When I was giving the present lecture, I did not intend to publish the Spencer Lecture. But I have now published it as chapter 7 of my Objective Knowledge.

5  Albert Einstein, On the Methods of Theoretical Physics, Clarendon Press, Oxford, 1933. (Also in Albert Einstein, The World as I See It, translated by Alan Harris, Watts, London, 1940.)

6  More Letters of Charles Darwin, edited by Francis Darwin and A.C. Seward, Appleton, New York, 1903, volume I, p. 195. Darwin’s comment ends with the words (which I admit weaken it as a support of my thesis) ‘if it is to be of any service!’

7  Cp. for example the so-called ‘Transportation Problem of Linear Programming’. See S. Vajda, An Introduction to Linear Programming and the Theory of Games, Methuen, London, 1960.

8  Cp. G. Polya, How to Solve It, Princeton University Press, Princeton, NJ, 1948.

9  The criticism by which we try to discover the weak spots of our theories leads to new problems. And by the distance between our original problems and these new problems we can gauge the progress made. Cp. my Conjectures and Refutations, p. 313.

10  I may perhaps note, in this context, that the much-suspected term ‘truth’ in the sense of ‘correspondence to the facts’ has been rehabilitated (and shown to be innocuous) by Alfred Tarski, and that, using Tarski’s theories, I have tried to do the same service to the terms ‘better approximation to the truth’ and of course ‘less good approximation to the truth’. (See chapter 10 and the addenda of my Conjectures and Refutations.)

11  The explanatory power of a theory is discussed in my Logic of Scientific Discovery, as are some relevant meanings of the term ‘simplicity’ as applied to theories. More recently I have found it enlightening to interpret the simplicity of a theory as something that must be related to the problems which the theory is supposed to solve.

Revised version of an address to the Plenary Session of the 47th Annual Meeting of the Federation of American Societies for Experimental Biology, Atlantic City, NJ, 17 April 1963, first published in Federation Proceedings, 22, 1963, pp. 961–72.