Chapter 1
The Historicity of Knowledge
The Dynamics of Alternatives
The Faculty of Abstraction
The Increase in Universality
Progress in Knowledge and in Art
The Mechanics provided by Criticism
Descriptive Epistemology
The Sociology of Knowledge
The Dynamics of Alternatives
Everybody agrees that knowledge has grown and that it was not just there, one fine day. Knowledge is therefore a historical phenomenon, something that happens and changes as time passes. But there are two different ways in which one can see that growth, and there is no agreement at all as to which of these two ways is the right one. One can, first, take it that the relationship between the passage of time and the growth of knowledge is more or less random; or, one can take it that the relationship between the passage of time and the growth of knowledge is not random.
The first view is the view of Positivism in any of its many varieties. For Positivists, the advent of knowledge itself is the product of history. After millennia of superstition, be it theological or metaphysical, the Enlightenment brought rational knowledge. Once the Enlightenment has brought about knowledge, the growth of knowledge ceases to have a history. In the view of Positivism, when people make mistakes about knowledge, the mistaken view is simply discarded. It is true that there is a history of science in the sense that there is a saga of discarded ideas. But this history is itself only of anecdotal interest, a mere curiosity. It is not itself part of the enterprise of growing knowledge. The history of science, in this view, is irrelevant to science.
For all those people to whom Positivism is not a valid account of the origin and growth of knowledge, the matter is not so simple. Old-fashioned empiricists or Baconians or Positivists believed that a theory was the sum total of its verifications and had to be discarded as soon as these verifications failed. To these people, the history of science is co-extensive with the sequence of true theories. This kind of history writes itself and is, at the same time, both banal and redundant. To people who believe that theories are held until they are falsified, the history of theory-holding becomes more interesting and more relevant. For they have to ask themselves why some theories were held in the first place, i.e. before they were falsified; and what, at some stage or other, prevented or precluded their falsification.
But people who hold that some theories are held even after they have been falsified, and that some theories prove, up to a point, immune to falsifying instances, will see that the history of science is not only interesting but moves into the centre of the stage. For they must ask themselves why some theories are comparatively immune from falsification or even observation and why they continue to be held; why they are considered immune; and why and when such immunity may cease.
As we have advanced slowly and gradually from naive Positivism to a view of knowledge which states that there is valid knowledge, even though it cannot be exhaustively accounted for in observational terms, the preoccupation with the history of science has grown. But there is more to the growing preoccupation with the historicity of science than the move away from Positivism. In earlier days, when philosophers like John Locke first sought to account for the genesis of knowledge, they used physics as their model. Early physics was very much a push-me-pull-you affair, a universe in which masses moved in response to a stimulus. In this kind of universe, knowledge came to be thought of as something like energy transfer. A body emitted light; the light hit the retina; the retina sent messages to the mind; and so, the mind ended up by having knowledge of the source of the light. The quality and veracity of knowledge was considered to be proportional to the purity of the energy transfer. In essence, knowledge was considered to be the effect the known object had on the mind of the knower. The known caused knowledge, or induced knowledge, in the mind of the knower.
We know today that this account of knowledge on the model of physics is only partially correct. To be sure, there is always a certain amount of energy transfer. But once the energy reaches the nervous system of the knower, the process becomes infinitely complex and interpretative. Feedback starts to operate inside the nervous system and what emerges in the end as ‘knowledge’ bears very little resemblance to the ‘object’ which can be said to have caused the initial energy transfer. Rather, we now know that the acquisition of knowledge can be understood much better when we use a model from biology instead. Biological evolution does not proceed by energy transfer and the emergence of new organisms is not induced or constrained by the environment. On the contrary, the initiative lies inside the organisms. They make proposals to the environment and the environment selects those proposals which are viable. Similarly, we can think of knowledge as a theory about the world and evaluate the theory, in order to distinguish true theories from false theories, by trial-and-error pattern matching. 1 The biological model is a better guide to the acquisition and growth of knowledge than the physical model and, for that matter, even knowledge about the physical universe must be seen to have been acquired on the biological model.
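The variation-and-selection picture just described can be caricatured algorithmically. The sketch below is purely my illustration, not the author's model: conjectures are proposed blindly from within, and an "environment" of observations does nothing but select whichever proposals prove viable.

```python
import random

# A deliberately toy illustration (my sketch, not the author's model):
# knowledge grows by blind variation and environmental selection, not by
# energy transfer from object to knower. A "theory" here is a candidate
# rule y = a * x; the "environment" is a fixed set of observations.

observations = [(x, 3 * x) for x in range(1, 6)]  # the world obeys y = 3x

def misfit(a, data):
    """How badly a proposed rule y = a * x disagrees with observation."""
    return sum(abs(a * x - y) for x, y in data)

def propose(parent):
    """The initiative lies with the knower: a blind variation on a theory."""
    return parent + random.choice([-1, 0, 1])

random.seed(0)
theory = 0  # an initial, poorly adapted conjecture
for _ in range(100):
    candidate = propose(theory)
    # The environment does not instruct; it merely selects the viable.
    if misfit(candidate, observations) <= misfit(theory, observations):
        theory = candidate

print(theory)  # the surviving conjecture has adapted to the regularity
```

The point of the analogy is that the surviving rule is never transferred from the data into the knower; the observations only eliminate the proposals that do not fit.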
In this way, we can see that the knowledge we have is not something that is determined or constrained by the world we have knowledge of. If one starts with that idea, one will immediately recognise the historicity of all knowledge because one must see that correct or true or viable knowledge comes in sets of alternatives. Just as there is more than one solution to a watery environment – there are fish, amoebas, crabs, reptiles, etc. – so one can see that one can, for example, describe the planetary system by Newtonian Mechanics or by Einstein’s General Theory of Relativity. When there are alternatives, the succession of alternative views constitutes the historical dimension of knowledge. Some alternatives are earlier than others, some presuppose others, and even discarded ones – such as the phlogiston theory or the theories of alchemy – must be seen as having their limited uses for very limited observations. They are not so many ‘errors’, but partial successes.
As soon as one understands that knowledge is not a representation of the world and a blow-by-blow account of the world in all its details, one recognises that the choice between Ptolemy and Copernicus is not as easy as it sounds. One can use Ptolemy in order to account for the observed movements of the sun and the planets. Ptolemy’s system is extremely cumbersome, but provided one does not expect it to explain why apples fall to the ground as well, it has its uses. In other words, it is discarded because of its cumbersomeness and because of its lack of universality, but not because it is directly found not to be a true representation of the way things are. When one sees Ptolemy and Copernicus as alternatives, one begins to study science in its historical dimension. The reason why Ptolemy came to be discarded and Copernicus accepted is part of scientific knowledge because that reason is not simply a matter of substituting the true for the false. It is a matter of comparing alternatives.
When Herbert Butterfield introduced the history of science into academic education in his lectures at Cambridge University and published his famous book in 1949, he was a great pioneer. However, he was still living in the age of innocence. The history of science, to him, was educationally important and had to be studied because science was and is a force in culture. But there is nothing in that book which would have surprised or disturbed Positivists like Carnap or Mach, or made any reader suspect that the history of science is actually part of science. The discovery of the historical dimension of science is comparatively recent. It was brought home to people only with the demise of the old belief that knowledge was simply truth about the world; that false knowledge gradually was discarded; and that, provided the world itself did not change, the truth about it could not change. Whitehead’s famous epigram that a science which hesitates to forget its founders is lost is a perfect expression of this old view of knowledge, in which the more true simply supersedes the less true and the false. But the moment it was recognised that such certainty in regard to representational truth and accuracy of detailed depiction was inapplicable to knowledge, criteria other than pure truth pushed themselves forward. When one has to weigh and compare theories according to the consensus they command, according to their differential explanatory power, their verisimilitude, their relevance, their coherence and their correspondence with each other, the history of science becomes the history of alternative theories and moves into the centre of the stage. With every one of these new criteria of knowledge, past knowledge was never completely discarded; rather, it remained relevant or, at least, remained as an evolutionary preparation for a later theory.
In this view of the history of knowledge, even the ancient belief in the rationality of God can be seen as a necessary step in the evolution of mathematical reasoning about nature.
On the old view, the history of science was irrelevant. Phlogiston, Ptolemy, alchemy, the ether, absolute space, etc., were all so many errors of judgment and of perception. They found their place with the devil, with the three-tiered universe of biblical imagination and with the immortality of the soul in the long history of superstition. The problem that was of real interest to contemporary philosophers was the distinction between reality and appearance and the degree of veracity to be attributed to those theories which scientists had pronounced to be true. Was there a reality behind the appearance described by those theories? Were those theories all we knew or all that existed? If the world was in reality the way Plato had described it in his Timaeus, why did it appear the way Aristotle had described it in his De Anima? In other words, philosophers used to weigh and compare theories. But they did not see them as alternative accounts of the world. They took one theory to be a description of the world as it really was; and another as a description of the world as it appeared. The question they worried about was the question of which was which.
The realisation that phlogiston and Ptolemy, ether and alchemy were actually based on quite a lot of correct observation, even though they could not be exhaustively accounted for by observation, and that neither Relativity nor Quanta – let alone quarks and quasars – could be exhaustively reduced to observation statements, brought a change. It began to dawn on philosophers that the heart of the problem was not: What is appearance and what is reality? Instead, it became: How is one to see the succession of alternative theories and how can one distinguish between them? At the moment in which the debate about appearance and reality was replaced by a debate about alternative theories, the history of science became part of science.
The Germans have given a special word to the historicity of science: Theoriendynamik – the dynamics of theories. With the conception that knowledge has a historical dimension and that the theories knowledge consists of have a dynamism and succeed one another, even though they are all theories about the same world, one can abandon the older idea that different theories are different ‘appearances’ or different portraits of reality. Hence, now that we can think of the dynamics of theories, we need no longer worry whether an appearance is an appearance of reality or whether, as has at times been argued, it is reality itself; and, if an appearance of reality, whether it is a deceptive appearance even though, as an appearance, it is true as a theory; or, if not deceptive, whether the reality behind it is inferred or ‘real’. Instead, we can now see that the real problem is, first, to determine the nature of the dynamics, i.e. to see how one can explain the succession of theories, weigh alternative theories and explain their succession by showing that the more ‘rational’ theories keep on stealing a march on the less ‘rational’ ones; second, to determine the meaning of ‘rational’ in this context; and third, to determine whether alternative theories can be compared and whether the history of the sequence of alternatives is random or not, i.e. whether it is a story in which one damned thing happened after another.
On this view of knowledge as the sum of alternative solutions, knowledge has a history. But on such a view, that history has no direction. Alternatives come and go and are substituted for one another. In order to grasp that that history of substitutions and exchanges has a genuine historicity, one has to remind oneself of another feature of knowledge. One has to understand knowledge as knowledge of universals or of universal regularities. We count as knowledge only information about universal regularities, even though that knowledge need not be about absolutely unlimited universalities. Information about simple ‘here and now’ details and unique perceptions is not knowledge. One might even question whether ‘here and now’ knowledge is possible at all.
Not even particular statements are descriptions of nature. Any particular statement, in so far as it employs words (or concepts), is making use of words which transcend particular phenomena. For this reason, not even a concrete particular statement can be descriptive and its validity cannot be judged in terms of its degree of pictorial accuracy or in terms of its power to ‘represent’ or mirror. Not even historical knowledge is descriptive or representational, no matter how particular the several sentences of which it consists are. In any case, it does not represent the time-span in which the events have happened in the way a mirror mirrors or a picture depicts. It selects; and the most one can say is that the events which have taken place do not contradict certain parts of the time-span.
Since knowledge is always knowledge of regularities and has therefore to be couched in terms of universal laws, it follows that knowledge cannot be representational. Knowledge is neither a map nor a mirror nor a portrait. Once this is admitted, we can again see the historical element in knowledge. If knowledge is knowledge of general laws, then the growth of knowledge is not just an accumulation of detailed and particular observations, but a growth of the universality of the general laws. We speak of progress, when the particular facts explained by a general law can be explained by a new general law which explains not only the particular facts already explained, but also other particular facts which had not been explained by an old general law. Knowledge of regularities cannot be representational because a universal law does not represent anything we can observe. At best, we can observe only a limited number of instances. A general law asserts something about regularities and therefore cannot represent what we can observe. With this argument we can eliminate from consideration all those thorny problems which occupied Positivists in general and Mach in particular. Mach was deeply concerned to distinguish between presentationalism, of which he approved, and representationalism, of which he disapproved. The former was the idea that the world is presented directly to consciousness and that the appearances are the external world and that observation of particular instances is real knowledge. Statements of regularities, he said, are merely shorthand devices to sum up myriads of direct observations. Representationalism, on the other hand, was the idea that the external world is not directly presented to consciousness and that appearances are something mental and that the external world is something one can reliably infer from these appearances. If one takes this view, there is no reason, he said, why one should imagine that any inference of this kind is ‘reliable’. 
Since we can consider neither presentationalism nor representationalism a correct account of knowledge, the whole question as to which of these two views is the right one is without interest.
The Faculty of Abstraction
All but the most rabid empiricists are aware that there is a large component in our knowledge which is a priori. We are presumed to be able to learn a language and we are capable of expectations. Some people even believe that we have innate ideas with a specific content, though this is more open to doubt. If we had no expectations before we have experience, we could never make comparisons between experiences and distinguish similar experiences from dissimilar experiences. If one is looking, for example, at a circle and cannot form the expectation that there are other circles like it, one will look at other circles without being able to recognise them as circles. For without the expectation that ‘circularity’ is a regular occurrence, one will look at the next circle with completely innocent and fresh eyes and thus allow oneself to be diverted by all the differences, no matter how minute, between the first circle and the second circle; and will conclude, very reasonably, that they are not really alike. Hence, our very ability to make comparisons and distinctions and to classify experiences under headings depends on our ability to have expectations of certain regularities.
As I have said, there is really no doubt about this matter. Even those rabid empiricists who believe that the mind is a bucket and that everything, even our expectations, is poured into it, accept that we have expectations of regularities. They would merely add that these expectations have to be ‘learnt’.
There is, however, considerable doubt and large areas of disagreement as to the source of these a priori faculties, as well as in regard to the question of whether there really are regularities in nature to correspond to our expectations of regularities. Let us discuss these two separate questions in turn, starting with the question of whether there is anything in nature which corresponds to our expectations.
At one end of the scale there are people like Sir Arthur Eddington, who could not bring himself to deny the a priori component but who considered it entirely subjective. In his view, the a priori component in knowledge – which he recognised, for example, in the General Theory of Relativity – vitiated its veracity because he considered it to be ‘wholly subjective’ 2 and something which one should hope would eventually be eliminated. Obviously, he did not believe that it corresponded to anything in nature. Next, we find philosophers like Husserl and Frege, who invoked a priori concepts or categories in order to escape from the subjective clutches of psychologism. Unlike Eddington, they held firmly to a priori categories – but not because they were sure that they corresponded to anything in nature. They thought such categories a good ploy to avoid the extremes of subjectivity. Next are those neo-Positivists who, having given up the attempt to reduce a priori expectations to observational experiences, argue that observational phenomena have the properties they would have if they were embedded in a world in which there really are regularities, even though, literally, they are not so embedded – or, at least, can never be shown to be so embedded.
Then we come to Kant. Kant was quite certain that there are a priori categories and forms of understanding. However, he was also certain that they could not correspond to nature as it is in itself. They are in our minds and in nature as experienced by our minds. Next, we come to Plato. Plato postulated that the mere fact that we have knowledge of ephemeral, particular events is proof that we must have a priori knowledge of expectations. They must, he argued, be in nature; for if they were not, we would have no knowledge of nature. This argument looks potent, but is circular. For the nature he said we have knowledge of is precisely the sort of nature one cannot have knowledge of unless one has a priori knowledge of certain expectations. On this interpretation of Plato, he too is one of those philosophers who are not so sure that the regularities we seem to have expectations of are really embedded in nature.
When we turn our attention to society rather than to nature, we are immediately struck by the fact that there are regularities. But these regularities are manmade and artificial. They consist in legal norms, in positive legal enactments, in customs and conventions. Nobody has ever denied that they are real and that they exist, whether we watch them or not. But then, they are different kinds of regularities and not the sort of regularities Kant, Plato and Eddington were thinking of. They are obviously there only because we put them there. ‘Being a member of a society’ means that we have certain expectations about the behaviour of other members of that society and about our own likely behaviour.
Finally, at the opposite end of the spectrum there are those philosophers who believe dogmatically that regularities are embedded in nature. However, the only reason they can produce for this belief is the belief that God made nature to have regularities. For example, they say, we can use mathematics to describe the world because God Himself was a mathematician.
It now turns out that we can look towards biology to settle the dispute and to assure us that there are regularities in nature and that the a priori expectations we have of them are not only justified, but also that their presence in our minds or nervous system – whichever one prefers – can be explained by the theory of evolution by natural selection.
When we are thinking about the biology of evolution we must assume that there are regularities in nature. Without such regularities there could be no adaptation by natural selection. Every organism which is adapted is adapted to the regularities in its environment. It makes no sense to say that it is adapted to the infinitely random vagaries of all possible details and particulars of its environment. To be adapted to all possible variations and individual particularities would not only require a store of information which would transcend the storage capacity of even the most complex organism, let alone of individual cells. It would also make adaptation impossible because it would reduce it to successful learning; or, alternatively, it would require us to think that any organism is infinitely adaptable and could adapt itself to any environment whatsoever.
The blind, fixed avoidance response of a Paramecium incorporates only a single element of information about the objects blocking its path; namely, that there is at that point an insurmountable obstacle to locomotor progression. 3
Infusoria, for example, by means of their phobic and topic responses, seek an environment containing, inter alia, a particular concentration of H-ions. The commonest acid found in nature is carbonic acid, the highest concentration of which is found in waters in which paramecia flourish, especially in the vicinity of rotting vegetable matter, because the bacteria that live on this matter give off carbon dioxide. This relationship is so dependable, and the occurrence of other acids, let alone toxic ones, is so rare, that the paramecium manages admirably with one single item of information, which, put into words, would say that a certain acid concentration signifies the presence of a mass of bacteria on which to feed. 4
To quote Konrad Lorenz again:
The mechanism which enables so many animals to develop conditioned responses is an adaptation to the physical fact of energy-transformation. Preparatory or avoidance response to the conditioned stimulus preceding the biologically relevant one is only adaptive (in terms of species-survival) when the two stimuli occur in sequence with reliable regularity. This is only the case when both are links in the same causal chain. The mechanism of the conditioned response takes in only one piece of information about this relationship – that the effect follows the cause in time. And this ‘cognitive feat’ is of tremendous survival value! In addition, it is actually a correct piece of information, since it remains quite true even when viewed from the higher level of human causalistic thought. 5
We can see from these examples that evolution by natural selection and adaptation would not be possible, let alone conceivable, unless we were able to assume that there really are regularities in nature. The organisms which have the right expectations about them are ‘adapted’ and get selected. Those that do not have the approximately correct expectations, cannot be selected for survival. They are too unadapted. From the single-cell organisms upwards to the presence of Homo sapiens we can be sure that the process of evolution has been possible because there are regularities in nature.
Now that the question about the presence of regularities in nature has been settled by biology – we would not be here, in other words, if there were no regularities – we can turn to the next question: How has it come about that we know about these regularities and have expectations about them? Obviously, the answer that we learn about them by observation must, by the nature of the case, be ruled out. These expectations, wherever they come from, cannot come a posteriori. The whole reasoning behind the doctrine that our knowledge of regularities is post rem is not only false, but actually absurd. There is first a logical consideration which leads to this conclusion. No matter how many observations any organism is capable of, the number of actual observations must always be limited. One can therefore not get knowledge of real regularities from a limited number of observations. Even myriads of observations can at best only be summed up as ‘so many’ observations of a regularity. There can be no knowledge that the regularity will occur tomorrow and that we are entitled to ‘expect’ it to occur. Next, there is a psychological consideration which leads to the same conclusion. Without an expectation, we could not even look for a regularity or recognise one if it occurred. Suppose we observed the sun rise once. Unless we were able to have an expectation of a regular repetition of that sunrise, we would not be able to recognise the second event as a sunrise. The first day it would appear red; the next day it would seem golden; the third day the sun might be hidden by a cloud. Unless we expected the sun to rise, we would be bound to be misled by these particular differences into thinking that whatever was happening on the horizon was not a repetition of the first event. Unless we had an expectation, we could not make the necessary abstraction and dismiss the differences in accidental colouring as irrelevant.
Having thus dismissed the possibility that expectations of regularities are lodged in our minds a posteriori because they are post rem , we are nevertheless left with a bewildering array of explanations as to how such a priori knowledge got into the mind. One of the obstacles to a settlement of this question comes from the fact that philosophers have concentrated on the mind of Homo sapiens and have always overlooked the fact that cognition is not specifically human, but part of the entire evolutionary process.
Aristotle reasoned that a priori knowledge of universals, i.e. expectations, must be something spiritual. Particular observations, he said, are observations of physical phenomena. Since no sum of particular observations can amount to an expectation of regularities, our expectation of such regularities must be ‘spiritual’. 6 Next we come to the view that we have no expectations at all and that what we designate as our a priori expectation of the regularity of occurrences is flatus vocis . This view is partly based on the false idea that all there is, is a ‘here and now’ of particular occurrences. If this idea were viable, we would not have evolved to be here to have this idea. Partly, this idea is based on the notion that there well may be regularities but that, by the nature of the case, we can have no knowledge of regularities. This notion is not as absurd as it may seem, for it is indeed very mysterious that we have a priori expectations – at least, it is mysterious as long as one avoids looking at biology.
Plato thought of abating the mystery by claiming that we gain knowledge of a priori expectations before we are born, when our soul is not yet encased in a body. In this unembodied state, he argued, the soul is capable of looking at these regularities and of taking them in. Though birth and incarnation cause the soul to suffer a certain clouding of these expectations, they can be polished up by careful memory-work. As against Plato, Kant believed that we simply happen to have them when we are alive.
Whitehead believed that our expectations, justified or not, derive from a historical accident. He argued that in the early days of Western culture people believed in an omnipotent God who had made the universe and regulated it according to immutable laws. This notion is most probably false. But, he continued, we have become so used to the idea that there are immutable regularities that we have been able to turn an ancient superstition to good account. We have given up the idea that God made the world; but we have retained the notion that there are regularities in the world. And so, science was born out of religion. 7 Our religious past, he maintained, endowed us with an instinctual habit of thought about regularities and order. In this way, Whitehead would really explain our a priori knowledge as a historical accident. Consider, in contrast, the position of knowledge in ancient China. Here, too, myth taught that there was an order and that there were regularities both in nature and in society. Everything in the universe, according to this myth, was in spontaneous harmony. This harmony was such that there were no specific laws or regularities. Hence, educated Chinese in the eighteenth century, confronted, through the arrival of the Jesuits, with knowledge of European science and its contention that nature was regulated by general laws, dismissed such science as a form of naivety inspired or conditioned by an anthropocentric presumption that there were regularities. For this reason, science could not take root in Chinese culture. 8 If Needham and Whitehead are right, the whole question as to the presence of a priori expectations in our knowledge is a cultural accident.
In the West, we were just lucky that we had inherited a science-promoting myth; in China, on the other hand, people were just unlucky in that the myth they had inherited was not science-promoting or philocognitive, even though, as Needham is best placed to inform us, they obtained quite a lot of knowledge in a different way.
If Whitehead explained our a priori knowledge as a lucky historical accident, we also find a number of sociological explanations which see it less as an accident and more as a product of our cultural evolution. Ever since Durkheim, many people have believed that it was through religious belief that man realised that the material world could not be apprehended solely through sense-data. The first step towards the power of abstraction came when man realised that a clan and its totemic emblem, though very different in terms of sense-data, partook of the same essence. From that moment on, both religious and scientific thinking have sought to translate the realities of direct observation into an intelligible language. This idea has been taken up by countless investigators. For example, we find that Alfred Sohn-Rethel, as recently as 1970, attempted to explain the power of abstraction as the result of the market economy. When people had to exchange the products of their labour for their ‘equivalents’ in the marketplace, they were forced to abstract, for the products of their labour were, as their observation told them, different from the goods or services they exchanged them for. 9 Even more recently, Eric Gans has presented another sociological theory about the origin of language in general and the origin of designating particular objects by general words. In this process, we can see the emergence of abstract thinking and generalisation. In very primitive tribes thousands of years ago, people had inchoate desires of aggression which were displayed in violent sacrifices. Then there came a moment of crisis when members of the tribe, desiring to seize the victim’s remains, were suddenly held in check by the realisation that in this free-for-all sacrifice, every member might be the next victim. Thus there begins a communal holding back which, in turn, becomes a source for reciprocal awareness through which actual and generalised violence becomes mediated violence.
The most effective instrument of mediation is the growth of language – the realisation that objects can be designated and that one word can stand for a variety of events. Thus, violence is abated and contained and men learn to grasp the significance of universal words, i.e. of speech acts which cover more than one particular instance. 10
It does not really matter whether one finds these or other Durkheim-derived theories attractive or plausible. What matters is the cogency of the attempt to explain the presence of a priori knowledge, of expectations and of our ability to abstract. Plato had a mythical explanation; Kant had no explanation at all. Whitehead considered it a matter of luck. From Durkheim on, we have been invited to look towards cultural evolution and the history of social structures for an explanation.
At the present time, when one is taking the work of Konrad Lorenz, Egon Brunswik and Donald Campbell into consideration, one can observe that a paradigm shift is taking place. We are now invited to abandon these sociological explanations and consider biological explanations instead. The research programme seems to be changing. But lest we prejudge the issue, I would like to point out that the change is taking place according to perfectly rational criteria. The change is not taking place because the sociological research programme is ceasing to yield fruits, as Imre Lakatos would have it. On the contrary. Though there is a change, the old sociological programme is yielding ever new fruits – as the abundance of sociological literature on the subject and the whole pursuit of the sociology of knowledge proves. Nor is there an impending shift because, as Kuhn would have it, the older generation of sociologists is dying out. On the contrary. The number of younger people engaging in the sociology of knowledge is far greater than the number of sociologists of the older generation. The change is coming because the biological explanation is wider than the sociological explanation. Everything that was explained by the sociological explanation, from Durkheim on, can be explained by the biological derivation of a priori expectation. Moreover, the biological explanation also covers a vast array of phenomena in biology and evolution and links the presence of expectations in the human mind or nervous system to the ability for abstraction one finds in infusoria or in the primitive tick, an arachnid parasitic on dogs, sheep and cattle. The new biological theory of the origin of our power to abstract and our ability to have expectations is, therefore, a genuine progress.
Curiously enough, Plato’s and Kant’s views are compatible with the new biological paradigm. In the new paradigm, we find that a priori expectations are indeed present at birth. Where Kant did not and could not have given an explanation, and where Plato gave a mythical explanation by postulating that the soul had pre-existed its encasement in a body, the biological explanation shows that the presence of a priori expectations comes through evolution. There are regularities in nature and those cells which are adapted, in the sense that they behave as if they expected the regularities in their environment, have a better chance of survival than those which don’t. In this sense, through the process of natural selection and heredity, the ability to have expectations of regularities is phylogenetically a posteriori. Organisms ‘learn’ from the environment. However, they do not learn as individuals but as species, because only those that are adapted to expecting the regularities survive and have offspring. This process makes the a priori expectation of regularities ontogenetically a priori. Every individual organism is born with the appropriate expectation. In this way, the regularities which exist in nature are eventually transferred to the organisms which survive by natural selection and the order of nature becomes the nature of order. With the evolution of sexual reproduction, the process of incorporation of a priori expectations is vastly speeded up. Sexual reproduction presupposes kinship recognition and this means that individuals must be able to abstract sufficiently from the amorphous mass of stimuli they encounter: they must be capable of distinguishing the members of their own species with which they can mate successfully from all other individuals, animal or material. Hence, with sexual reproduction, there is a special premium on the ability to abstract and generalise.
In this process, phylogenetic and ontogenetic experience are combined and thus we can explain what has puzzled empiricists and rationalists for centuries. The rationalists held fast to the idea that we have a priori expectations, but could not explain why. The empiricists held fast to the idea that we learn from observations, but could not explain why. In the biological model of the growth of knowledge, these two processes of cognition are beautifully meshed. The biologist has taught philosophers of all persuasions that experience and a priori expectations derive from different layers of the evolution of organisms and that phylogeny and ontogeny are meshed to make up adaptation. The old empiricist, of course, has to accept that learning is not induction or ingestion, but the selective elimination of those organisms which have not happened to have the right responses. Similarly, the old rationalist has to accept that while it is true, as he contended, that there is a priori knowledge in every individual, that knowledge is not absolutely innate but the result of phylogenetic experience, i.e. phylogenetically a posteriori: it appears, therefore, as ontogenetically a priori. 11
However, the expression ‘phylogenetically a posteriori’ is, strictly speaking, misleading. There can be no suggestion of earlier organisms gathering painfully, by inductive accumulation, experiences of regularities. All that is meant is that expectations of regularities evolve together with the species. Organisms which have poor or no expectations are eliminated and, in this way, as the species evolves so do the expectations of regularities. In every species, the correct expectations have become adapted because all organisms with incorrect expectations or without expectations have been eliminated as a result of their reproduction rate being slower than that of the organisms with more fitting expectations. We ought, therefore, simply to speak of ‘phylogenetic experience’ rather than of expectations which are ‘phylogenetically a posteriori’.
When one thinks of knowledge as knowledge of regularities, one recognises the a priori element which enables one to make abstractions so that one recognises similarities and differences in particular observations and classifies them and considers them relevant or irrelevant to one’s expectations, as the case may be. Knowledge, therefore, is essentially in the shape of general laws, i.e. in the shape of statements about regularity. It is true that there is other knowledge. For example, our knowledge of the constitution of the atmosphere as a ratio between oxygen and nitrogen is not the knowledge of general laws. But it would be impossible to achieve the knowledge of that ratio if we were not able to pick out oxygen and nitrogen by their respective regularities in the first place. Similarly, we have knowledge that something is the planet Venus without necessarily knowing general laws about the planet Venus. But we could not identify a certain star as a planet if we did not know the regularities pertaining to planets. Therefore, though much knowledge is not couched in terms of general laws, it must always be derived from knowledge couched in terms of general laws.
If one meant by knowledge the mere accumulation of particular and unrelated pieces of information, the passage of time would be connected with such accumulation in a banal and trivial sense only – that is, in the sense that it takes time to accumulate particular pieces of information. Knowledge, in brief, is not a dictionary – though one could not even compile a dictionary without the ability to abstract and recognise similarities and differences and disregard what one judges to be accidental or insignificant details. Genuine knowledge is knowledge of the regularities. Every organism, even the most simple and most primitive amoebic cell, has an inbuilt ability in its rudimentary nervous system (or in what takes its place) to register information about regularities. By contrast, the ability to become consciously aware of individual characteristics and particular instances requires a well-developed nervous system and, in the end, a finely attuned sense of discrimination. The ability to pick out particular instances as such is of no great value for survival and is probably, even as far as purely theoretical knowledge is concerned, of very limited interest. The faculty which really matters is the almost Platonic faculty to pick out particular instances, recognise the instances they are similar to, and discriminate whether these similarities are of consequence or not. Such sorting out depends on an expectation of regularities; for only if one knows beforehand what to expect can one form a judgment as to which similarities are significant. A fly has certain similarities with an aeroplane; but it would be of little interest to study flies and aeroplanes unless one had a theory about aerodynamics which one wanted to test and examine. By contrast, if one had a theory about passenger transport, the similarities between flies and aeroplanes would become negligible.
We are here concerned with the power to make abstractions, i.e. to recognise differences and similarities. This ability depends on the ability to disregard irrelevant details, that is, to be able to abstract from individual observations and experience. We could not do so if we did not have expectations of similarities and differences. Certain experiences might tell us that flies are very small aeroplanes. Abstraction tells us that this is not so. We could not abstract unless we expected differences and kept on looking for them. Without this ability to abstract and disregard irrelevant details in the environment, the paramecium could not have survived; or rather, those paramecia which were lacking in this ability to have expectations of similarities are not here to tell the tale. The power to abstract and to behave ratiomorphously (Egon Brunswik’s term) is the result of evolution and the basic form of adaptation to the environment. In conscious human beings this remarkable ability is highly developed. But even here, ‘long before one can formulate reasons for it, one often notices that a particular complex of events appears interesting or fascinating. Only after a while does one begin to suspect that there is something regular about them.’ 12 One can document this observation from the biographies of great scientists. ‘What precisely is thinking?’ Einstein wrote. 13 ‘When at the reception of sense impressions, memory pictures emerge, this is not yet “thinking”. And when such pictures form series, each member of which calls forth another, this too is not yet “thinking”. When, however, a certain picture turns up in many such series then – precisely through such return – it becomes an ordering element for such series, in that it connects series which in themselves are unconnected. Such an element becomes an instrument, a concept.’ I. Bernard Cohen described Newton’s ‘style’ in a slightly different way.
Newton, Cohen writes, displayed his style in an alternation of two phases of investigation. In the first phase, he deduced the mathematical consequences of an imaginative construct which was founded on a simplified and idealised natural system. Next, he compared the deductions with observations of nature and somewhat altered the initial construct accordingly, and then deduced again the mathematical consequences, and so on. 14 Phase one begins usually with nature simplified and idealised, i.e. with expectations of regularities and the omission of details. 15 One can see the same process at work in a completely different description of Newton’s mind. Frank E. Manuel relates how Newton, deprived in early childhood of his beloved mother, kept longing for her and was powerfully drawn to distant persons. Since this yearning never found expression in sexuality, it could have ‘achieved sublime expression in an intellectual construct whose configuration was akin to the original emotion.’ 16 There might well have been a relationship between this longing and a later intellectual structure in which a sort of impulse to attraction is a key term descriptive of a force. The picture of attraction at a distance turned up again and again. Newton, therefore, disregarded the details so that he could make the attraction to his distant mother and the attraction of the planets to the sun similar, in spite of obvious differences in almost all details.
The Increase in Universality
We count as knowledge only knowledge of universal regularities. We have explained how we can think of such knowledge without imagining that it is built up laboriously from the summation of particular observations. We can now detect that knowledge of universal regularities embodies the notion of progress or, more precisely, embodies automatically the idea that the growth of knowledge is unilinear in the direction of progress.
Progress consists in the formulation of theories which are more and more universal, so that with the help of one theory we can understand or explain phenomena which used to be separate and for the understanding of which we used to need two or three different theories. Progress in this sense is a matter of increasing depth, not of an increase in reclaimed land. ‘I suggest,’ Popper writes, ‘that whenever in the empirical sciences a new theory of a higher level of universality successfully explains some older theory by correcting it , then this is a sure sign that the new theory has penetrated deeper than the older ones.’ 17 Progress in the sense of an increase in depth means that Newton was a progress over Galileo and Kepler; and Einstein, a progress over Newton. The DNA theory of molecular biology is a progress over Mendelism. 18
The demand for the greatest possible universality of general laws is not a demand for a virtuoso performance or a form of misguided monism which requires all knowledge to be deducible from one single formula. The demand follows directly from the notion of radical criticism. It would be perfectly feasible to rephrase any expectation of regularities, even the meanest one, as a general law. ‘When diaphragms contract, lungs will fill with air.’ The knowledge expressed in this general law is very elementary. It explains why lungs fill with air. But it then leaves something else unexplained, i.e. the general law about lungs and diaphragms. Why, we may ask, do lungs fill with air when the diaphragm contracts? Critical judgment must seek to explain, and the best explanation is to obviate the unexplained general law by producing another general law of higher universality which enables us to deduce the correlation between lung-filling and diaphragm-contraction. If we can deduce this correlation as well as many other things from the new general law, we will call it more universal than the law which merely correlated lung-filling and diaphragm-contraction. Thus, the move towards higher universality and the preference for laws of higher universality derive directly from the criticism that the initial general law did not explain much and stood, in turn, in need of an explanation. The more general laws are subsumed under fewer laws of greater universality, or the more different phenomena can be explained from fewer general laws of high universality, the less there is left to be explained. In this fashion, knowledge grows by the universalisation of knowledge. As general laws of high universality make general laws of low universality superfluous, our ignorance decreases when fewer and fewer low-universality laws remain to be explained.
The growth of knowledge is therefore not a growth in pictorial or representational accuracy, but a decrease in the kind of knowledge which stands itself in need of further explanation.
For an old-fashioned or traditional empiricist, ‘progress’ will mean a transition to a theory that provides more empirical tests for its basic assumptions, and this means that progress consists in greater accuracy and more justification. A little reflection will show that this notion of progress is unimportant or even misleading. All knowledge enables one to make predictions about the future. But it is possible to measure the quality of one’s knowledge and the quality of one’s predictions by two different standards – one sensible, and the other absurd. Let us look at the absurd one first. One can measure the quality of knowledge by the accuracy of its predictions. Imagine weather forecasts in New Zealand. New Zealand consists of islands surrounded by the Pacific Ocean; when the forecasters and meteorologists are following their knowledge, their forecasts show a very high degree of incorrectness. This derives from the geographical situation of the country, which makes accurate prediction almost impossible. Suppose that somebody in New Zealand forecasts the weather by tossing a coin. It is likely that his predictions will have a higher degree of accuracy than the predictions based on meteorological knowledge. If one measured the quality of ‘knowledge’ by the accuracy of the predictions it yields, one would in New Zealand have to prefer the coin-tossing method to the meteorological method. 19
The other standard of measuring the quality of knowledge is not based on the accuracy of the predictions one can make, but on the smallness of the number of laws one needs in order to make predictions. The fewer general laws needed, the better the knowledge. One can see that it is more sensible to speak of progress when one has to use fewer general theories. Such progress consists then in an increase in explanatory power. If one speaks of progress whenever there is an increase in the accuracy of one’s predictions, one would often have to consider a method such as coin-tossing for weather prediction to be a progress over meteorological knowledge – which clearly would be an absurdity. If knowledge is always knowledge of general laws, and if knowledge grows by the formulation of more and more universal laws, knowledge cannot be representational. Every statement of regularities always transcends the finite observations we can make. No general law, whatever its degree of universality, can be said to depict or represent a state of affairs.
Newton’s breakthrough is a special instance of this contention. Newton did not just observe nature and did not just produce mathematical principles; but he produced mathematical principles of nature. Nothing in nature was described accurately by these mathematical principles and the old question whether nature has to be mathematical to be described by Newton’s or anybody else’s mathematical principles does not arise. The growth of science takes place by the increasing universality of theories. Theories are supplanted by theories which have greater universality, and it is in this that the growth of knowledge consists. Theories are never, or hardly ever, supplanted by theories which have a comparable range in universality. The growth, in this sense, is guided by the purely rational consideration that a more universal theory is preferable to a less universal theory. Consider as an example the case of optical phenomena. When some of these phenomena could not be explained by Newton’s particle theory of light, one had the choice of giving up Newton’s particles or Newton’s optics. The choice was made in favour of Maxwell’s conception of light and against the idea that light is particles because at that time Maxwell’s conception of light as waves was a more unifying and universal theory than the theory that light consisted of particles. The so-called paradigm shifts do not take place at random. A paradigm is not dropped because it happens to explain x, y and z instead of a, b and c. The substitution is more likely to take place when the new paradigm has a greater universality and explains not just x, y and z, but also w, x, y and z. As is well known, Kuhn has argued that there is no progress in paradigm shifts. But he is able to maintain this argument only because he never compares paradigms according to their range and their explanatory power.
As soon as one takes the universality of paradigms into consideration, one can see that there is usually a rational ground for the shift and that the shift is in the direction of greater universality.
Or consider the history of the nature of heat. The caloric theory of heat had been able to explain all sorts of things, but it received its death-blow from the understanding of the atomic nature of matter. The kinetic theory of gases as developed by Boltzmann and Maxwell linked phenomena of heat to molecules in continual motion. Here, we find that progress is made because there was a shift towards a paradigm of greater universality.
If progress is a growth in universality, we must presume that greater universality is greater truth. Truth itself is not the factor which provides the momentum, for it would be very hard to define truth as anything other than accuracy of individual or particular observations. If one presumes that the momentum in the growth of knowledge is furnished by a desire for the achievement of truth, one will remain ineluctably bound to deriving knowledge from knowledge of particular observations rather than from knowledge of regularities. Nevertheless, truth does come into the matter because when we explain four phenomena rather than two by one single theory, we must presume that we can do so because the theory which explains four rather than two phenomena has greater truth. So while truth is not itself a guiding principle, it is a value which is ultimately linked to universality. Our preference for greater universality follows from our insight that knowledge has to be knowledge of universals and not just detailed information about myriads of particular instances. Knowledge enables us to deduce detailed information. Hence, the more universal knowledge is, the greater the amount of detailed information we can deduce from it. The progress in the growth of knowledge is therefore not an intentional move towards greater truth, but we presume that greater truth is an unintended outcome of that growth.
In broad terms, the growth of knowledge is very similar to evolution itself. Evolution takes place because natural selection makes use of materials which had evolved earlier and which had been put at its disposal. 20 Helium develops from hydrogen and lungs from gills – never the other way round – and such development is an irreversible process. In knowledge there is a similar irreversible process when planetary motion and the temperatures of gases are explained by one single theory when before there had been two different theories. Progress is ascertained when a new theory explains old observations as well as new observations. A new theory which explains only new observations, leaving old observations unexplained, would have very little to commend it and would certainly not be considered improved knowledge.
Progress in Knowledge and in Art
Knowledge differs from all other human pursuits and institutions by virtue of its progressiveness. In all other pursuits – such as art, literature, morals and social institutions – there are changes, but no progress. If one sees progress in any of these pursuits it derives from the extraneous declaration that there is a value in some forms of art and not in others, or in some social institutions and not in others. If one does not set up such a heteronomous goal for art, literature, morals and social institutions, one cannot discern progress in the changes they are subject to. The elevation of heteronomous values for art, literature and morals is, at best, fairly arbitrary. If one believes that the goal of art is the production of ‘beauty’, one can certainly discern progress in some of the changes of style – except that one will end up by realising that ‘beauty’ itself is a value which is subject to paradigm shifts. If beauty will not do, one could try, for art, ‘accuracy of depiction’ and then assess progress in terms of an approach to accuracy of depiction. However, ‘accuracy of depiction’ is itself a heteronomous standard for art because it is not essentially part of the artistic enterprise, but only one of its many possible goals. By this heteronomous standard, Andy Warhol’s Coca-Cola would be a progress over Picasso or Jackson Pollock. On a more sophisticated level, Suzi Gablik 21 has argued that one can see the history of Western art as a series of cognitive revolutions, the last of which is the modern period in which the artist has at long last learnt to reason purely in terms of verbal propositions without recourse to images or other concrete materials. Titian and Rembrandt may have aesthetic merit and be praiseworthy, but are on a lower level of development than Donald Judd or Sol LeWitt. Modern art, in this analysis, is at long last on the level of thought exhibited by adults.
This approach really depends on the establishment of art as a cognitive enterprise and on the consequent elimination from art of aesthetic criteria. If one abides by aesthetic criteria, one cannot see the changes of style of which the history of art consists as progress; this must lead to the acceptance, in the history of art, of relativism. If one is afraid of relativism, one will be driven to attempts to assimilate art to cognition. The intellectual progress of E.H. Gombrich is a telling example of what happens when fear of relativism intrudes into the history of art. In his early book Art and Illusion, 22 Gombrich argued that no artist has an ‘innocent eye’. An artist, he claimed, can never start with the observation or imitation of nature; all art, therefore, must remain conceptual. The manifestations of visual expression, on this view, are conventions. If there is a shift of conventions – a paradigm shift – it cannot be counted as progress and cannot be explained in rational terms as a ‘better’ solution to a problem or as a solution of a larger number of problems. In this view, Gombrich committed himself to a view of the history of art in which differences of style are utterly relative and beyond rational criticism. In a more recent collection of essays, The Image and the Eye, 23 Gombrich, wary of relativism, prefers to assimilate art to cognition and is moving in the direction of treating artistic expression as a form of knowledge. According to a relativist view of art, the depiction of the moon as a square is as good as a depiction of the moon as a circle. If aesthetic considerations prevail, there is no reason why these two depictions should not be equally good, provided they satisfy certain aesthetic considerations. But if one treats artistic expression as a form of knowledge, then the moon as a square is not as accurate as the moon as a circle.
In this last volume, Gombrich is trying to look upon the history of art as a non-relativistic enterprise which aims at conformity to visual perception. Gombrich now starts with the visual image, considers it to be inherently truthful because of the evolutionary pressures which would have eliminated men had their visual perceptions been consistently inaccurate or misleading, and then proceeds to judge artistic production in terms of visual perception. In this way, he goes diametrically against the argument in his Art and Illusion, but manages to establish that there is progress in art and succeeds in eliminating relativism from the succession of styles.
The question at the heart of this matter is whether one can reasonably consider art as a form of cognition or not. If one does, then Gombrich’s assimilation of art to knowledge and the consequent detection of progress in art is valid. However, if one considers art to be guided by aesthetic considerations, this assimilation to knowledge is unacceptable and one is left with the conclusion that there is no progress in art. In such a case, Gombrich’s attempt to set up visual perception as a criterion for art must remain an attempt to set up a heteronomous goal. 24
The denial of progress in art does not necessarily mean that changes in style are entirely random and incapable of rational explanation. There are styles of art which create real problems. Impressionism, in solving the problems posed by romantic realists or naturalists, eventually created a new problem. Realists such as Delacroix or Ingres had managed to paint people and objects that looked ‘real’. But in so doing they had done violence to the mechanisms of visual perception. We do not see outlines of landscapes, the impressionists argued, but bits and pieces of colour in different shapes; if we do manage to spot a human being or a landscape, it is only by summing up the total impression created by the bits and pieces. Thus, they concentrated on the spots of colour. This, in turn, created a new problem because when we have lots of bits of colour, the canvas to be filled by them has to be organised in its own terms and cannot be considered a medium to be used. Cézanne accordingly concentrated on the patterns of colour which covered the canvas and, in so doing, opened the way for cubism. The cubists, in turn, explored the shape of the bits of colour and organised them in formal patterns, even though often enough they ceased to be clearly recognisable people or landscapes. And from here there was only one little step to the utter formalisation of art and to abstract art. Here, we have a development of styles of painting which is the result of problem solutions. However, one cannot really detect progress. For there is no reason why we should consider Kandinsky or Picasso a progress over Ingres and Delacroix. There certainly has been no net gain by Picasso, even though we can see that he ‘solved’ problems posed by the alleged naturalism of Delacroix.
Or consider the law of progress in art discovered by Wölfflin. 25 According to this law, art develops from a geometric or linear style to a convoluted or ‘painterly’ style. Wölfflin had no difficulty in showing that in Europe the art of the Renaissance had changed from the linear style of, say, Fra Angelico to the painterly style of Rembrandt. Had he looked for examples outside Europe he would have found confirmations. The changes in style in Moghul tombs in India can be explained very well by Wölfflin’s law. The earlier Humayun tombs are linear. From there we get the more convoluted composition of the Taj Mahal and, finally, the utterly ‘painterly’ appearance of the tomb of Aurangzeb’s wife in Aurangabad. This development can always be explained as the result of exploration. First, one draws in lines and outlines; then one experiments with the colours inside the outlines; eventually, one leaves out the outlines and produces forms and shapes by an organisation of the colours alone. However, there is again no progress in these changes. The changes are the result of experiments suggested by the techniques of drawing and of using colour and of the chemistry and viscosity of the materials one uses for making colours. One’s preferences for linear art or for painterly art will be governed by heteronomous standards which in themselves have nothing to do with art as such and are not an essential part of art. The difference between art and knowledge is obvious. In knowledge, we are aiming at universality right from the start. An increase of universality is an essential part of improved knowledge and the progress which is produced by such an increase is not a progress in accordance with a heteronomous standard, but a progress in the autonomous standard of knowledge itself.
Without the inbuilt notion of universality, changes in knowledge and the entire historicity of knowledge could never be seen as growth and progress. There are in fact many philosophies of science – and I am not thinking only of the extreme case of Positivism and its view of all changes in knowledge merely as discarding errors – which do not consider the inbuilt notion of universality and which see, therefore, no progress in knowledge at all. Without the inbuilt notion of universality one is very readily left with the laconic view that knowledge is not different from art and literature. Positivism had its simple idea of progress. It believed that progress resulted from the elimination of error, and that errors kept being eliminated because careful observation of nature and of people would constrain one to eliminate error. Progress would thus be obtained by the minute reproduction and duplication of everything one picked up by observation. Knowledge was seen as a second world, a mirror-image of the world or a reflection of the world in the mirror of the mind. Once one lets go of this simplistic Positivism in which knowledge is a duplication of what is ‘out there’, one is left without an appreciation of progress – unless one understands the role of universality. When that role is not understood, one is left with a variety of choices each of which eliminates the possibility of progress in knowledge. Among the alternatives to a Positivistic determination of knowledge by observation, we can take our pick: according to some, knowledge consists in unfalsified guesses; according to others, knowledge is what happens to be advantageous to the ruling class; and others, again, think that knowledge is determined by the social relations or the social structure to which the people who hold it are subject. None of these views considers the universality factor and all are therefore left to see no progress in the growth of knowledge or the changes to which knowledge is subject. 
We discover here a stance which is diametrically opposed to that of Gombrich. Where Gombrich wanted to combat relativism by assimilating art to knowledge, we find here a number of thinkers who want to establish relativism by assimilating knowledge to art and by wiping out the reality of the distinction.
Some centuries ago, the belief in alchemy and witchcraft was fashionable. In some places in the modern world, it is the custom to believe in voodoo. Some millennia ago, people had confidence in sympathetic magic. In the modern world, where Western culture dominates, some people believe in Quantum Mechanics and General Relativity. Not so very long ago, the ancestors of these same people used to believe that Newtonian Mechanics was true. There is no accounting for tastes and all these fashions are much of a muchness. Who knows, we might find black magic or hermetic cosmology attractive before long. The attentive reader may think that I am here parodying Paul Feyerabend. In fact, I am merely reporting the theories of so respectable and staid a professional anthropologist as Mary Douglas. Little has changed in a thousand years, she writes. 26 We still talk of pollution and danger. A thousand years ago, we were afraid of witches. Today, we are afraid of industrial and nuclear pollution. Some may imagine that we have ceased to fear witches because we have found out that witchcraft is imaginary and cannot be practised. But, according to Mary Douglas, the truth is that we have stopped persecuting witches (persecution was a form of social control) because we have found alternative methods of social control. There has been no growth in knowledge, merely a shift of the locus of authority from the centre to the borders, so that we now fear pollution from industry where before we feared pollution from witches. With this kind of argument, the pursuit of knowledge is completely relativised and shows no more progress than the pursuit of art or literature or the changing pattern of social control on which all the other pursuits depend.
Once the distinction between the pursuit of knowledge and all other human activities has become blurred, it becomes also increasingly difficult to distinguish between different kinds of knowledge such as black magic and Quantum Mechanics. And, finally, it appears as a puzzle why Newtonian Mechanics should have been replaced by General Relativity or, for that matter, why Aristotle’s theory of motion should have given way to Galileo’s theory. ‘Viewed sub specie aeternitatis scientists (even physical scientists) are a fickle lot. The history of science is a tale of multifarious shifts of allegiance from theory to theory.’ 27 The problem appears precisely when one is looking at science sub specie aeternitatis instead of looking at it as a historical phenomenon. The difficulty is compounded if one assimilates the pursuit of knowledge to all other human pursuits, for then one can see nothing but changes and the change from Newton to Einstein must appear mysterious. Only when one sees the pursuit of knowledge as a special phenomenon and understands that it is a pursuit which shows progress rather than mere change 28 can one grasp how misplaced the expression ‘fickle’ is in this context.
So far, knowledge has been characterised by two features: alternative pieces of knowledge (rather than error and truth or strictly incompatible alternatives), and universality. The presence of alternatives has led us to the essential historicity of knowledge and the presence of universality in all knowledge has shown that the notion of progress is inherent in the conception of knowledge, and that the historicity which depends on the presence of alternatives is unilinear in the direction of increasing universality.
The terms ‘science’ and ‘knowledge’ have been used interchangeably, as if they were synonyms. This usage was intentional. In common usage, ‘knowledge’ is a wider concept than ‘science’. However, it seems to me that nothing but confusion can arise when one is seeking to maintain a rigid distinction between the two terms. If one maintains a distinction, one will be tempted to include in knowledge false knowledge (hermetic cosmology or witchcraft, for instance) and tempted to imagine that science is that part of knowledge which is absolutely true knowledge, knowledge of which one can be certain. Such temptations ought to be resisted and one of the surest ways of resisting them is by treating ‘knowledge’ and ‘science’ as synonymous.
Any decision between competing pieces of knowledge should be taken to be a question of preference, not a matter of either/or. One’s preference is determined by comparing the degrees of universality of the knowledge in question and consistent preference of the more universal produces progress. Such progress is rational because it results from the unrelenting criticism 29 of the less universal and the consistent preference of the more universal.
Knowledge is also characterised by other qualities. First, there is a common-sense preference for quantifiable and precise knowledge. But quantification and precision are only valuable because they make criticism easier. They have no intrinsic value. Second, one tends to prefer knowledge which leads to predictions, and by that standard much of Freudian psychoanalytical knowledge is ruled out. Nevertheless, it can often enlighten and enrich, 30 and no knowledge should be ruled out simply because it does not lead to predictions. Third, one tends to prefer knowledge which has been ‘accepted’ to theories which are held by people in solitude. But if one prefers knowledge which has been ‘accepted’, one allows that the people who have accepted it have removed it from criticism by their ‘acceptance’. This is not to say that knowledge which has withstood criticism should not be ‘accepted’. It is merely to say that ‘acceptance’ as such cannot be a characteristic of knowledge. Fourth, one tends to regard certainty as an essential quality of knowledge. But since knowledge is knowledge of universal regularities, no such knowledge can be in any sense certain. Fifth, many people regard knowledge as cumulative. But since the progress of knowledge consists in our ability to explain more and more of the known particular facts by fewer and fewer general laws, knowledge cannot be said to grow by the accumulation of information about particular events. The above list of properties of knowledge is a list of accidental properties which are valuable if and only if they make criticism easier or more convenient. In no instance ought one to criticise any knowledge because it fails to live up to preconceived and dogmatic standards of quantification or certainty; or because it does not yield predictions or does not add new particular pieces of information to the store we already have.
The Mechanics provided by Criticism
We are thus left, in addition to the notion of alternatives and universality, with the notion that knowledge can count as knowledge only when it is criticised. The best definition of knowledge, therefore (leaving aside alternatives and universality), is that we count as knowledge every theory which is left standing when all conceivable criticism has been temporarily exhausted.
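The definition just given describes a procedure, and that procedure can be caricatured in a toy simulation (a purely hypothetical sketch; the theories and criticisms are invented for illustration and are not proposed by the text): conjectures are generated, each is exposed to every available criticism, and whatever is left standing counts, for the time being, as knowledge.

```python
# Toy model of 'knowledge is what is left standing when all conceivable
# criticism has been temporarily exhausted'. The theories and criticisms
# are hypothetical stand-ins invented for illustration.

def surviving_theories(conjectures, criticisms):
    """Keep every conjecture that no available criticism eliminates."""
    return [t for t in conjectures if not any(c(t) for c in criticisms)]

# Two rival conjectures, encoded by what each one entails.
conjectures = [
    ("all swans are white", True),    # entails: only white swans exist
    ("some swans are black", False),  # does not entail that
]

# A criticism is any test that returns True when it refutes a theory;
# here, the observation of a black swan refutes the first conjecture.
def black_swan_observed(theory):
    name, entails_only_white = theory
    return entails_only_white

survivors = surviving_theories(conjectures, [black_swan_observed])
print([name for name, _ in survivors])  # ['some swans are black']
```

Note that survival is only temporary: a new criticism raised tomorrow may eliminate today's survivor, which is all the definition claims.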
If criticism is to play such a crucial role in the characterisation of knowledge, and if it is to join the presence of alternatives and the inbuilt notion of universality as senior partner, so to speak, one will have to make sure one understands what one means by it and whether it is the same as ‘rationality’.
On principle, in making criticism the crucial quality of knowledge, one is saying that in knowledge all statements are open to criticism. This initial move invites an immediate rejoinder: the statement that all statements are open to criticism is open to criticism. If it is not, the counter-move continues, it ceases to be part of our knowledge and, what is more, it ceases to be part of our knowledge of knowledge. 31
I do not consider this counter-move to be significant. The counter-move may well be legitimate and, indeed, should be legitimate. But it does not follow from such legitimacy that it will lead to valid criticism of the statement that all statements are open to criticism. All it says is that the statement that all statements are open to criticism could be criticised. This amounts to saying that the statement that all statements can be criticised is not a dogma or an article of faith and that it does not enjoy any privileged status. If and when it has been criticised validly, we will have to think again. This I take to be the meaning of W.W. Bartley’s Panrationalism. As he himself says, Panrationalism is more a matter of attitude than of logic. 32 This Panrationalism also draws our attention to the intimate connection between criticism and rationalism. We say, if we are Panrationalists, that it is rational to criticise everything and to hold on to only those statements which have so far withstood criticism. In this view, ‘reason’ does not denote a substantive faculty or a correct method of arriving at statements which are true; but a negative quality. When one is rational, one is open to criticism and an absolutely limitless invitation to criticism is the essence of rationality. Any view held in defiance of criticism, for whatever reason at all, is an irrational view. If rationality is not a substantive faculty like ‘the dictate of right reason’ or ‘obedience to observation’, but merely total criticism with no holds barred, it is perfectly conceivable that two people who make opposite conjectures and hold incompatible views are both rational as long as they will encourage and tolerate and digest criticism of their views.
In Panrationalism, nothing is exempt from criticism and all criticism is legitimate. It is just as legitimate to criticise by saying, ‘I have a gut feeling that…’ as by saying, ‘It is self-evident that…’ as by saying, ‘Observation shows that…’ There is no criticism that is privileged or more ‘rational’ than any other. Somebody may criticise by appealing to authority; somebody else, by appealing to tradition or to consensus. It is then open to the opponent to rebut and an appeal to, say, authority is only reprehensible if it implies a denial of Panrationalism – that is, if it implies an invitation that the authority appealed to must not be criticised. No criticism can be ruled out in advance. One has to allow any criticism and see what happens. One cannot lay down guidelines that the criticism that position A is contrary to observation is a ‘better’ criticism than the criticism that position A is contrary to gut feeling.
Mostly when invitations for criticism are issued, it is implied that criticism must follow certain ‘rules’. For example, one may start by pointing out certain inconsistencies in the views of one’s opponent. Then one may demand to know what observations entitle him to hold his views, and so forth. In short, one behaves as if there were guidelines for criticism and as if criticism meant simply a standard scrutiny of one’s opponent. If all standards are met, it is assumed, criticism must stop. But all guidelines (does the view satisfy experience? is it consistent? is it formulated correctly in the linguistic sense?) are nothing but disguised attempts to define correct knowledge in advance. If there were such guidelines, one would have to know what kind of knowledge one is seeking; one would have to know, in other words, what one seeks to know. Such guidelines are then nothing but standard checks to see whether any proposed knowledge actually conforms to one’s definition of knowledge.
The crux of the matter is that one cannot, in advance of having knowledge, say what knowledge is and then determine the royal road for getting there. In order to find knowledge we must proceed ‘rationally’. But we do not mean by proceeding ‘rationally’ that we know beforehand what we want to find and that in employing the correct method (say, observation) we will be sure to get there. On the contrary, ‘rationality’ is just a word to describe the correct way of finding out what is going on by using unlimited criticism. Let us go briefly through all possible meanings of ‘rational’ to see what we will be left with when all trivial or false uses have been eliminated.
To be rational is to make valid inferences. In this sense, the word ‘rational’ has a substantive meaning, but the procedure is trivial.
We also say that we are ‘rational’ when we are behaving intelligently. Intelligence, however, is not a rational faculty at all. It is a state of mind in which we are aware of the fact that we have a choice between at least two possible courses of action or thought and we choose the ‘right’ one. If we are guided by experience and if we can make a proper judgment by drawing on our evaluation of the consequences, we are acting intelligently. Such a choice cannot, however, be described as a rational choice.
One could, however, define arbitrarily what would be the ‘right’ choice. There are two possible ways of defining ‘right’ in this context. We could define ‘right’ as what is compatible with sense observation (leaving the question of whether there ought to be falsification by observation, verification by observation, corroboration by observation, etc., aside). Or we could say that to make the ‘right’ choice is to obtain the consensus of a speech community, of the participants in a language game, of professional colleagues, etc.
If we decide on the first definition of ‘right’, we are pretending to know what the world is really like. We assume we have an ontology and that the world is the sort of entity which can best be fathomed by observation. This way of defining ‘right’ must be dismissed because it forces us to pretend that we really know what we are setting out to find. The inadvisability of such a procedure becomes very clear when we try, for argument’s sake, a different ontology. Suppose somebody says that the world was created by God. It would, in this case, be ‘rational’ and ‘right’ to abide by the authoritative verdict of a person who claims that God had revealed Himself to him. In this case, tradition and an authoritative interpretation of that tradition would appear more ‘rational’ than observation.
If we decide on the second definition, we next have to decide who is a rightful member of the community whose consensus we will go by and why a majority decision of such a community should be accorded privileged status. Following Wittgenstein, many philosophers have advocated that we must abide by the rulings of such a community. Pragmatically, this is quite commendable. But one must resist the temptation (which Wittgenstein and his many diverse followers have not been able to resist) to mistake that kind of consensus for the truth about the world. Nevertheless, if one sets up the rules obtaining in a community as the ultimate criterion and prefers that criterion to an ontology, one will be forced to conclude that ‘rational’ behaviour in thought and action is ‘obedience to the rules’.
It takes very little reflection to see that in the two cases examined – deference to observation and deference to rules – one is using the word ‘rational’ to describe the means to reach an end and not to describe the ‘correct method’; at least, it describes ‘correct method’ only in so far as the end to be attained is independently known. ‘Rationality’, in both cases, is not a procedure to find the truth, but a method for getting to a goal which, in turn, is set up independently and not subject to rational scrutiny. Though this kind of rationality is a substantive procedure, it is a procedure which will not go far towards the acquisition of knowledge.
It is also possible to turn this whole matter around. One can say that one abides by observation as something rational because the rules of the community one is a member of decree that the word ‘observation’ be used in a certain way. Here, we find that ‘observation’ is a function of being a member of a community and gets its credentials from the rules obtaining in that community. Conversely, one can say that the people who abide by observation as something rational form a community, and that the membership of that community is defined by ‘willingness to abide by observation’. People who are so willing are people who believe, irrationally, in a certain ontology, i.e. in the ontology which makes observation the most likely method to make true statements about the world.
No matter which way one turns this matter around, one is always left with the same conclusion: rationality is a means to an end. The end is either an ontology, belief in which defines membership of a community; or membership of a community, which membership decrees that the word ‘observation’ be used in the way the community decrees. Either the ontology in question is the non-rational and unquestioned goal or the membership of a community is the non-rational, unquestioned, ultimate, brute fact. Either way, the exercise of ‘rationality’ is a means to an end which is itself not capable of rational examination.
Thus, we conclude that if we make the first definition of ‘right’, we are in reality using the word ‘right’ to describe a means to an end: that is, a means for discovering the true qualities of the world which we seem to know before we even start our journey of exploration. If, on the other hand, we are making the second definition and define ‘right’ as obedience to the consensus or rules of a community, we are also using ‘rational’ and ‘right’ as a means to an end, i.e. the end of determining by whose decision we will abide. In either case, the value or validity of the end itself is extra-rational.
Thus, we come to the last and only reasonable meaning of ‘rational’. It means free, total, uninhibited and unlimited criticism. The only rule for rational behaviour is that no criticism can be ruled out. 33 In this sense, ‘rationality’ is not a substantive faculty which, if used, yields the desired truths. It is a purely negative exercise, something performed on and against positions and statements. We say a person is rational when he is prepared to offer his non-rational thoughts or behaviour to criticism. Rationality, then, has nothing to do with discovering thoughts or assuming stances, but with the criticism of these thoughts and stances.
It may seem as if there were a prima facie similarity between scepticism and rationalism. Scepticism and rationality have, in fact, something in common. They both agree that we must reject the view that knowledge is true if we have a compelling or good reason for it or if we have arrived at it by the correct road. But then comes the difference. Rationality leaves knowledge standing if there is no good reason for rejecting it. Scepticism, on the other hand, is dogmatic. It doubts or rejects even when there is no good reason for rejection. Scepticism, in other words, is merely the dogmatic opposite of dogmatism.
Panrationalism is an extension or extrapolation from biological evolution. In evolution, proposals are made and the decision as to which organism will be selected is left to the consequences of the proposal. There is no initial plan and no correct way in which mutations will have to occur, not even a preliminary sizing-up as to which mutations (i.e. proposals) are likely to have a better chance. Any philosophy of science which leaves the ultimate decision entirely to criticism and forgoes preliminary knowledge as to what one wants to know, how one ought to go about it and what procedure is most certain to lead to the right goal, is constructed on the model of biology. By contrast, any philosophy of science which sees the acquisition of knowledge as the result of ‘right’ procedure and which validates knowledge by reference to, and in terms of, the procedure employed in finding it, is constructed on the model of physics, where causation is energy transfer. 34 In such a philosophy of science, correct knowledge is the result of an acknowledgment of a stimulus. Provided the stimulus is allowed to transfer its energy without hindrance or distortion, the resulting knowledge must be correct knowledge of the object in which the stimulus originated. The philosophy of science extrapolated from biological evolution operates without causal connections between knower and known. It assumes that there are proposals and that these proposals are relentlessly criticised.
In following the example of evolution, we can also get a clearer appreciation of what we mean by rationality. Rationality, we argued, is not a substantive faculty which allows us to follow a procedure which would be ‘right’ and which would lead to the desired results. It is a negative faculty of relentless criticism. However, in taking biology and evolution as our guide, we can also see that criticism which appeals to observation and uses observations to falsify pieces of knowledge (regardless of whether they are general theories or particular statements) is preferable to any other kind of criticism. If one were to advance such a view in cold blood – that is, without biology and in vacuo, so to speak – as if it were absolutely true, one would stand on very shaky ground. Why, a critic might legitimately ask, is a referral to observation and falsification preferable to an appeal to authority or to a referral to tradition or intuition? In vacuo one cannot answer such a question; and this is the reason why all the old philosophies, from Bacon to Locke right down to the Vienna Circle, must break down. They imagine, without any good reason, that sense experience or observation is somehow privileged. However, when one is speaking in a biological context and understands knowledge as a relationship between an organism which has evolved in a certain environment and that environment, one can see that observation does indeed occupy a privileged position. Moreover, it occupies this privileged position when it is used to falsify, for falsification is a procedure in harmony with the general process of evolution. The proposals which are made in the form of theories are not induced from, or induced by, observations; nor are they incorporated, as Lamarck would have us believe, 35 through observation (or learned adaptation), into either organism or theory.
The proposals are thrown up spontaneously and, having been thrown up, are checked, tested and possibly falsified by the environment. For this reason, the use of observation for falsification is biologically justified. We thus get a double presumption. First, there is a preference for observation as against other sources of knowledge; and second, there is a mandatory preference for using observation to falsify rather than to verify or induce because, like organisms, nervous systems propose knowledge to the environment but do not build it up piecemeal by making random observations.
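The contrast drawn here – blind proposal followed by environmental falsification, rather than induction from observations – can be sketched in a toy loop (a hypothetical illustration; the ‘hidden law’ and the numeric ranges are invented for the purpose): candidate regularities are guessed at random, never read off from the data, and observation serves only to eliminate.

```python
import random

# Toy sketch of blind variation plus selective elimination (illustrative).
# The environment embodies a hidden regularity, y = 3x + 1; candidate
# laws (a, b) are proposed blindly, never induced from observations.
random.seed(0)

def environment(x):
    return 3 * x + 1  # known to the guesser only through its verdicts

def falsified(candidate, trials=10):
    """Observation is used negatively: one counter-instance eliminates."""
    a, b = candidate
    return any(a * x + b != environment(x) for x in range(trials))

# Proposals are thrown up spontaneously ...
proposals = [(random.randint(0, 5), random.randint(0, 5)) for _ in range(200)]

# ... and the environment's only role is to weed out the unfit.
survivors = {c for c in proposals if not falsified(c)}
print(survivors)  # any survivor can only be (3, 1), the unfalsifiable guess
```

The point the sketch preserves is the asymmetry: nothing in the loop builds the law up from the observations; the data can only kill candidates, and whatever is never killed is retained.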
However, we must end this discussion of rationality with a word of caution. In evolution, the consequences of non-fitting proposals are relentless and merciless. As soon as Homo sapiens arrives on the scene, there is a fundamental change which has important consequences for the philosophy of knowledge one derives by extrapolation from biological evolution. Man is capable of creating societies which differ from species in that they are artificially circumscribed by culture. Any culture interferes with the process of critical selection of theories because it consists, in an important sense, of the nurture of a body of knowledge which enables people to gauge in advance what they set out to discover. Proposals in the form of theories are therefore not exposed to radical and relentless criticism but are shielded by the prevailing knowledge embodied in the culture. Unlike organisms, theories are protected artificially from the consequences of falsity. For this reason, it has been possible for false theories to flourish for millennia without falling victim to rational criticism. In a culture, rational criticism cannot be permitted lest the knowledge which forms part of the social bond be endangered. The only rational thinking which is permitted is of the means–end type. The end is set by the prevailing culture and can therefore not be falsified by further criticism. This phenomenon has been well observed and described by the many sociologists of knowledge who have been able to show how, in any one culture, the knowledge considered valid is a fairly rational outcome of the intelligent means available in that particular culture.
Culture also shields us in a different sense from the consequences of false theories. Every culture creates a mutual-aid system which makes it impossible for people to suffer directly the consequences of their folly and of their cognitive failures. It organises co-operation between its members and this co-operation enables societies to flourish in spite of their maintenance of false knowledge. We will return to this topic in chapter 7.
Reason, then, is the negative activity of criticism – and, moreover, the negative force of total criticism. If one exempts any proposition from criticism, one is not rational. Truth is what is left standing when all criticisms are exhausted for the time being. In that sense, reason and truth are interdependent. There can be no such thing as an irrational truth. Truths are found by methods other than rational procedures. They become rational truths if and in so far as they withstand criticism. Thus, when we speak of the rationality of science, we do not mean that science is the sum total of all the propositions we have been led to by following the dictates of reason; rather, we mean that we believe those propositions to be true which have not fallen victim to criticism.
If reason is not a substantive force but merely an attitude (i.e. we should speak of having a rational attitude to knowledge) rather than an ability to find knowledge, it is ultimately linked with the notion of radical criticism. Criticism can demolish knowledge, but never assure us that what is left is known with certainty; for what is left can always be criticised the following day. A rational attitude to knowledge is an attitude one has after the knowledge has been invented; not a device for inventing it.
By the same token, genuine knowledge can never be certain knowledge. It is common usage to equate science with certainty. If we follow the correct method, it is alleged, the findings will be securely established and we will have certain knowledge as opposed to superstition, guesswork, hearsay, revelation or subjective musings – all of which are the results of seemingly incorrect ways of finding knowledge. The real reason why they lead to incorrect knowledge is that they are often exempt from criticism or that they are insufficiently criticised; not that they have been discovered by following an ‘incorrect’ method. These and similar methods are not suspect in themselves, but are made suspect by the fact that they are either linked to the avoidance of criticism or are considered proof against criticism because they are alleged to yield true knowledge.
It makes no sense to use the word ‘scientific’ as an adjective. One should never deny scientific status to any piece of knowledge as distinct from any criticism one can make of its contents. Followers of Marx defend the ‘scientific’ status of Marxist knowledge; opponents of Marx reject the claim that Marxism has ‘scientific’ status. In this debate, the discussion centres upon the question whether the truths of Marxism have been arrived at by the ‘correct’ method. If so, Marxism has ‘scientific’ status and one need not examine whether its theories are true. Such a debate is senseless. It is based on the idea that there is a ‘correct’ method and that one can distinguish in advance between scientific knowledge and non-scientific knowledge. If one does, one implies that there is a special kind of knowledge which has been reached by the ‘correct’ method. What one is really saying is that since the knowledge has been reached by the ‘correct’ method and is therefore ‘scientific’, it is not open to criticism. If one wants to use the word ‘scientific’ as an adjective at all, one can use it to describe only that knowledge which has been left standing when all criticism has been exhausted. Since such exhaustion can be only temporary, there is really no point in using the adjective.
The consequences of the conclusion that there can be no certain knowledge must also be applied to our knowledge of knowledge. There are not only limits to our knowledge of the world and of ourselves, but also limits to our knowledge of knowledge. If our knowledge of the world remains open-ended, so does our knowledge of that knowledge.
There is a certain presumption that our cognitive apparatus, being an adaptive response to the world, does not get its messages and responses totally wrong. If it did, we would not be here. Organisms which make habitual mistakes are not here to wonder about it. A monkey who keeps misjudging the distance from one branch to the next would soon be a dead monkey and have very few offspring to perpetuate that kind of error about the environment. However, Homo sapiens is not just a monkey. He conceptualises his ‘knowledge’ and, in conceptualising, can easily make mistakes. Unlike the monkey’s perceptions, these concepts, regardless of whether they are right or wrong or misleading, can be used to construct cultures. They can, for example, be used as social bonds. All men who subscribe to certain concepts constitute a certain culturally defined society. People in such a society will practise co-operation and mutual aid and thus effectively protect themselves from the disastrous consequences of the mistaken concepts they are using as social bonds rather than as instruments for orienting themselves in the world.
In view of this process by which Homo sapiens has been able to exempt himself from the relentless natural selection to which all other organisms are subject, it is frequently suggested that Homo sapiens has only one way open to him. Since he cannot rely on natural selection to weed out his conceptual mistakes, he has to give his allegiance to whatever concepts happen to be current in the culture of which he is a part and to whose protection he owes his survival in the face of his mistakes. This suggestion is the main burden of Richard Rorty’s Philosophy and the Mirror of Nature. 36 Having lost (though Rorty does not spell this out but takes it for granted) the stern corrective selectivity of nature, man can avoid mistakes only by voluntary submission to the epistemic authority of his culture. Above all, he must submit by depriving philosophers of their arrogant habit of criticising cultures.
Without being able to contradict the argument that man has indeed lost the corrective selectivity of nature through his ability to use concepts for the construction of cultures, and mindful of the fact that even our knowledge of knowledge is uncertain, I would nevertheless argue for a different strategy. Rather than recommending submission to the authority of our culture, I would plead that we see to it that our cultures are reorganised in such a way as to remove artificial protective barriers – such as dogmatic adherence to certain kinds of knowledge or dogmatic rules governing the use of words – and create instead as open a situation as possible in which there is competition between alternative theories and between alternative usages of words and concepts. In such a situation, the mistakes made in the formation of concepts can be criticised and corrected.
To be sure, not even the availability of alternatives is the same as suffering the direct consequences of mistakes in the animal world. The consideration of alternatives and the selection of the least criticised one is quite different from natural selection. Nature kills the organism which is not adapted. The most we can do is to kill the theory we can criticise. If we don’t, we will still survive. Thus, the consequences of mistakes are never really dire – unless we imagine ourselves in a totally unreal situation in which we have no families and no friends and no social support at all. No matter how open a culture, the minimum support which would protect us against a mistaken selection of concepts would always be there: when one has critically chosen one of the available alternatives, one would still be protected from the immediate consequences in case one has chosen the wrong one. Seeing that we cannot achieve certainty, and least of all certainty about knowledge, we must leave the matter here and merely confess that, in leaving it, we are more honest than those followers of Rorty who are prepared to seek refuge behind the epistemic authorities of their culture, thus using a stratagem to create the semblance of certain truthfulness.
Descriptive Epistemology
The connection between knowledge and its history is intimate. Even the most daring scientist and researcher must watch other researchers and it is only from the history of science that he can learn what constitutes an experiment; how far he may go in inventing ad hoc hypotheses in order to salvage a theory contradicted by observations; under what circumstances he may dismiss an observation as an accident; and what the implications of incompatibility are. One could give hundreds of examples. But these brief indications suffice to show that, in an important sense, the scientist learns his craft from history, i.e. from the examples of others. This does not mean that scientists do not learn from their contemporaries as well. It merely means that scientists have to learn their craft and that since much of their craft was practised in the past, they learn from history as well as from their fellow-scientists. Even learning from an older teacher is like learning from history.
History is a sort of schooling. At the same time, it is also more than schooling. The philosophy of science is no longer ‘abstemiously restricted to a logical analysis of the status of scientific truth,’ D.T. Campbell writes. ‘While there is a strong interest in explicating the normative decision rules science should use in deciding between theories, this gets mixed up with arguments about which decision rules science has used, implicitly or explicitly, in presumably valid decisions in the past, and thus can be seen as a hypothetical, contingent search for normative rules … For such theory of science, the history of science is fundamentally relevant.’ 37 For this reason, some of the finest books on science are cast in a historical mould: L.N. Cooper’s Introduction to the Structure and Meaning of Physics, 38 R. Gregory’s Mind in Science, 39 François Jacob’s The Logic of Life, 40 as well as Jonathan Miller’s more popular The Body in Question. 41 It is true that there are still a few philosophers of science who persist with a purely formal approach to the philosophy of science. This approach has been christened the ‘Received View’ and is presented in the introduction to Frederick Suppe’s The Structure of Scientific Theories. 42 But when one looks at Suppe’s alternatives to the Received View, one finds Toulmin, Kuhn, Hanson, Popper and Feyerabend. These philosophers of science have nothing in common except the fact that their alternatives are all history-oriented. Each one of them has a philosophy of science which deals with, and accounts for, the historical dimension of science. It would seem that in the philosophy of science we are faced by an impending paradigm shift away from Suppe’s ‘Received View’. We cannot tell yet quite in what direction the paradigm is going to shift. The present book is meant to determine this direction.
But whatever the ultimate direction, it seems bound to move towards an acceptance of the historicity of science and away from the static Received View which has enabled philosophers of science to treat science in a purely formal and systematic way, as if it had no history or as if its history were an accidental by-product and of no relevance to science itself.
If the history of science is basically a sort of descriptive epistemology because it is concerned with how people have acquired knowledge, it can, by itself, be no guarantee that the knowledge so acquired is true. For this reason, philosophers of science, and especially scientists themselves, have often shown no interest in the history of science. But this is only one side to the question. The other side consists in the fact that we have acquired knowledge and in learning how we have succeeded in doing so, we can learn something about the constitution of the world which has made it possible for us to do so. To quote D.T. Campbell again: ‘I also want descriptive epistemology to include the theory of how these processes could produce truth or useful approximations to it: in what possible worlds, in what hypothetical ontologies, would which knowledge-seeking processes work?’ 43
Look at the emergence of the molecular theory of gases. At first, it seems, it was an ingenious metaphor, culled from French sociology. 44 This ‘how’ was no indication of truth. But tests enabled the metaphor to be declared literally true. It became a dead metaphor 45 and now these molecules can even be observed by means of electron microscopy. The study of this particular history of science involved description as well as normative considerations; the normative precepts derived from it tell us something about the constitution of the world in which one can arrive at some truth by starting with metaphors. ‘The rationality of science,’ S. Toulmin says, ‘cannot depend solely on the formal validity of the inferences drawn within the scientific theories of any given time … We can recognise the source of science’s explanatory power only if we come to understand also what is involved in the processes of conceptual change: in particular, how the character of these processes can give authority to new concepts, new theories, even new methods of thought, inquiry and argument.’ 46 The concept of field, for example, ‘was introduced by Faraday to aid him in visualising the effects of charges on other charges,’ writes L.N. Cooper. 47 ‘Maxwell attempted to visualise the electric field as a mechanical stress in the medium of space, the ether. Since that time, the electric field has acquired a significance deeper than any such mechanical interpretation and, like some other concepts, such as momentum or energy, finally becomes more important than the specific theories out of which it came.’
There is a peculiar irony here. For a Positivist, the nature of the material itself decides all questions of truth, experiment and ad hoc hypotheses. It is only to the non-Positivist that these questions are not themselves part of the subject-matter of science. For this reason, however, the non-Positivist becomes the very man who has to find out what actually happened in the history of science (i.e. he is the man who has to ask a question which bears a notoriously Positivist stamp). To the Positivist, on the other hand, this notoriously Positivist-sounding question is a matter of peripheral and negligible curiosity.
If this kind of connection between knowledge and history is intimate, there is another kind of link which is constitutive of the very conception of knowledge. The human nervous system – and I will take it for granted that we mean by knowledge a certain relationship between that nervous system and the environment – has evolved on a large planet moving with a comparatively low velocity. This means that for all practical purposes, and certainly so far as the mere survival of that nervous system is concerned, the nervous system is adapted to large masses and low velocities. Its powers of perception as well as its faculties of concept-formation are therefore primarily formed in attempts, by trial and error, to adapt to this kind of environment in which bodies move and in which their position in space and time is always known. 48 When sophisticated apparatus and sophisticated interpretations of the observations made with sophisticated apparatus became known, it turned out that human beings have considerable difficulty in adjusting their powers of perception and intuition to the idea that there are subatomic particles moving with velocities approaching the speed of light, particles whose position and momentum cannot both be known with certainty: if we know a particle’s position precisely, we cannot also, at the same time, know its momentum, and vice versa.
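A minimal formal statement of that limit (Heisenberg’s relation for position and momentum) makes the scale of the difficulty plain:

```latex
% Heisenberg's uncertainty relation: the uncertainties in position
% (\Delta x) and momentum (\Delta p) cannot both be made arbitrarily small.
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2},
\qquad \hbar \approx 1.055 \times 10^{-34}\ \mathrm{J\,s}
```

Because ħ lies some thirty-four orders of magnitude below the scale of everyday action, a nervous system adapted to large masses and low velocities never encounters the bound, which is why the idea resists intuition.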
In inviting the reader to imagine what it might be like to live on an atom, I am inviting him to imagine what it might be like to have evolved on an atom or electron and to be adapted to it. This is quite different from inviting somebody to imagine that he is an atom or that he is the size of an atom or electron so that he can witness what happens when electrons jump from one orbit to another. The second invitation has been proposed as a thought experiment to help with the ontological problems created by Quantum Mechanics. 49 I am not here concerned with a thought experiment, but with the important fact that there is an essential link between the fact that we are here on earth and not on an electron and the faculties at our disposal for understanding that we are here.
Our nervous system has evolved as a successful adaptation to a space which contains bodies and which is something absolute, not something which results from the relations between bodies. When asked to consider that space is a ‘property’ of bodies, our mind boggles and an intuitive grasp of such a consideration becomes well-nigh impossible. This indicates that there is a sort of historical significance in the fact that Euclidean geometry and absolute space were discovered before we began to use Riemannian space and the Uncertainty Principle. It is indeed inconceivable that it could have been the other way round. By contrast, we could try a thought experiment. What would the history of science have been like for creatures existing on sub-atomic particles moving with the speed of light? One could argue convincingly that, for such creatures, the discovery of Riemannian space and of the Uncertainty Principle would have been easy and natural; probably as natural as, or no more difficult than, the discovery of Euclidean space and of the principles of causality presupposed by the laws of gravity in absolute, Newtonian space was for human beings living on a comparatively large planet moving slowly round the sun. However this may be, it would seem that there is a real and significant link between the growth of knowledge and the progression from Euclid to Newton, and from Newton to Quantum Mechanics. It is no accident, given the evolutionary pressures our nervous system is subject to, that the progression of knowledge was not the other way round. We could, in other words, not conceivably have started with Quantum Mechanics.
Physicists frequently state that the description of their theories in ‘plain language will be a criterion of the degree of understanding that has been reached.’ 50 Professional scientists may not take such statements too literally; but they enshrine an important truth about the historicity of the whole scientific enterprise. First, there comes common-sense understanding; later, sophisticated mathematical formulation. However, since our senses and the nervous systems which control them are adaptations to the common-sense environment, the eventual reduction of complicated concepts to common-sense language is by no means just a matter of scientific popularisation and of attempts at science for the layman. It is, on the contrary, an essential part of the whole scientific enterprise. Vision has become specially adaptive, and more adaptive than smell, to bipedal creatures. On top of vision we get the adaptiveness of consciousness and of the words that are linked to consciousness. This trio – eyes, consciousness and words – determines nine-tenths of our perception of the world. Thus, we are creatures to whom the presence of a limited object such as a chair or a table is the referent of a word and an entity which is located at a given time in a certain space. When we see the same entity somewhere else, we must conclude that it has moved and that time must have elapsed. In this way, our evolutionary conditioning militates all along against an intuitive grasp of Quantum Mechanics .
Frege believed that true knowledge must be based on changeless, ahistorical properties and relations. This view is untenable. First, our neurophysiology is the product of natural selection by an earth environment. This means that, at best, its mental imagery is compatible with macro-events. It cannot grasp or formulate concepts which are an exact fit to macro-events, let alone to micro-events. Hence, it can never be a precise mirror of nature; only a lamp which partially illuminates nature. Next, knowledge is always growing knowledge. Like species themselves, any bit of knowledge is unstable and can be eliminated. Hence, there cannot be unchanging and ahistorical properties of knowledge. To say that there are, is like saying that no matter what the species, light sensitivity is always the same and always interpreted by its owner in the same way. In the growth of knowledge, we always operate with such loose concepts as ‘force’ or ‘energy’. At one point, Newton succeeded in giving a precise mathematical definition to ‘force at a distance’. It turned out 200 years later that while the concept had been defined with the utmost precision, there was nothing it referred to because Newton’s gravity (i.e. ‘force at a distance’) was instantaneous and there can be nothing faster than light. The concept of ‘maximum velocity’ was loose, too. Then it came to be defined precisely; and now some scientists are beginning to wonder whether light does move with the maximum velocity possible.
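The precise definition Newton gave, referred to above, is the inverse-square law of gravitation, in which the force depends only on the instantaneous separation of the two masses:

```latex
% Newton's law of universal gravitation between masses m_1 and m_2
% at separation r; G is the gravitational constant.
F = G\,\frac{m_1 m_2}{r^2},
\qquad G \approx 6.674 \times 10^{-11}\ \mathrm{N\,m^2\,kg^{-2}}
```

Since r enters the formula with no time delay, a change in one mass’s position would alter the force on the other instantaneously; once Special Relativity forbade any signal faster than light, the precisely defined quantity was left, as the text says, with nothing to refer to.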
All efforts to establish that our knowledge is based on unchanging and precise concepts and their relations to propositions, etc., are efforts to establish rules, in advance, for right knowledge and wrong knowledge. Both Frege and his opponents – Kripke, for instance – are, ultimately, inductivists. They believe they can arrive at truth not by error elimination, but by following the correct procedures for formulating propositions. Some of their propositions, they grant, may turn out to be empirically false. But they think that half the battle is won if the words have the precise referent. Knowledge being what it is, nothing is won – even if the words had the precise referent.
We are quite used to speaking of ‘curved space’ when light is bent as it passes near the sun. Such an expression must be a metaphor to bring the complicated calculation involved in General Relativity within the layman’s reach. But the apparatus used for the measuring of the effect of the bending is not a metaphor. Not only does it exist on earth, where it can be handled, but the correctness and accuracy of the observations involved depend on what the human eye can see with the help of that apparatus. In the end, even in physics and chemistry, data have to be analysed with the help of the eye. True, we have electron microscopes and sophisticated scanning devices. But, in the end, the pointer readings have to be made by the human eye. In physiology and biochemistry, many experiments are designed to elucidate ‘invisible’ spatial structures. But the ultimate goal of these experiments is to produce three-dimensional pictures or a series of pictures of atoms in space. ‘Visual perception,’ John Ziman writes, ‘plays a vital part in the most aristocratic branch of experimental science – the physics of elementary particles.’ 51 In the bubble chamber, high-energy charged particles make visible tracks which can be photographed in order to be examined by the human eye and inspected for unusual ‘events’. These and many other considerations show that the historical connection between common-sense reasoning and the sophisticated mathematical formulation of theories has always to be kept alive and that the unilinear development of scientific theories from ordinary observation and learning to abstract theories is an essential and inevitable ingredient in scientific knowledge. The connection is indeed historical in essence and in its nature. It is not just a ‘historical’ accident in the sense that we might be able to conceive that the physics of elementary particles could have been generated in some other way. The connection has evolved and could continue to evolve. It is not static.
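The measured effect of the bending discussed at the start of this paragraph is small but definite; for a light ray grazing the sun, General Relativity predicts a deflection of

```latex
% Deflection of starlight grazing the solar limb (M the sun's mass,
% R its radius), as tested by the 1919 eclipse observations.
\delta = \frac{4GM}{c^{2}R} \approx 1.75''
```

It was precisely by comparing photographic plates by eye that the 1919 eclipse expeditions confirmed this prediction, which illustrates the point about visual perception.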
All efforts to give a permanently valid description of the connection between formal languages and natural languages are therefore misguided; and so is the Fregean enterprise, and everything which goes with it, of determining the eternally valid links between language and reality, reference and referent.
The historicity of knowledge is further underlined by the development of modern physics. For a long time now, we have looked to physicists for information about what the world is really like. In an important sense, the trust that physicists would find out was linked to some form of materialism. If the world is made of matter, then it was reasonable to expect that people who study the behaviour of matter would be able to tell us what the world is like. Physics was at first phenomenally successful in the quest for the real world. Whenever there were discrepancies between intuitive observation and the findings of physicists, one explained them in terms of the differences between appearance and reality. The world, we used to say, appears one way and is another. It may be as in Plato’s Timaeus, but appears as in Aristotle’s De Anima. 52 Motion may appear to be the result of a push, as Aristotle observed; but is in reality a permanent condition unless stopped, as Galileo reasoned.
Eventually, however, the belief that one was tracking down matter disintegrated. Matter turned out not to be a stuff, but particles; and atomic particles revealed themselves to be more and more sub-divisible until finally the sub-atomic particles turned out to behave as if there were no local causes, no absolute locations, and as if communications between them were instantaneous and faster than light. As if this were not enough, even the old gap between matter and life and between matter and mind seemed capable of being bridged – not because life and mind turned out to be material events, but because matter turned out to be less and less like solid stuff: even when fairly solid, matter emerged as capable of negentropy and self-organisation and thus similar to the activity performed by what we thought were purposeful minds. If we stick with the distinction between appearance and reality, we are now faced with a huge discrepancy between the world as it appears – the world with local causes and determined movement of particles which have velocity and, at any moment in time, a determined location – and the world as it is alleged to be in reality according to Quantum Mechanics and the Principle of Indeterminacy. These two worlds are so much at variance that the old explanation, that the one is the world of appearance and the other the world of reality, will no longer sound plausible. On one hand, we have Quantum Mechanics and, on the other, a macroscopic world in which we are living, in which everything has a cause, in which one can locate particles, and in which we are rationally entitled to insist that events are either like bullets or like undulations but that they cannot be, under any circumstances, both at the same time. There have been many attempts to bridge the gap.
We have been given theories about hidden variables and infinite worlds to resolve the paradoxes of Schrödinger’s cat and Wigner’s friend, paradoxes which arise when one is looking with eyes formed and evolved in one world at the events taking place in the other world.
It will be helpful if one applies historical reasoning to this problem. Our cognitive apparatus has evolved in a macroscopic world and we are living in a mesocosm 53 and are adapted to it. The stuff which has evolved is made of the stuff to which we are adapted. This historical process is quite intelligible and has led us to perceptions in which local causes are paramount and ultimate. Knowledge of this ultimateness and paramountcy is a historical product of the interaction of molecules with each other, from the primeval soup in which life originated to the evolution of the nervous system of Homo sapiens . This historical process is not an illusion or an appearance of an underlying, different reality. It is simply what has happened when molecules behaved in a certain way. The knowledge which we derive from the historical end product is a sort of self-reference of molecules. Molecules, organised in a certain manner, will react to other molecules in an adaptive manner, i.e. tell us about local causes and the relationship between velocity and location in time and space.
However, these same molecules can also stand in a different relationship to each other. When, instead of letting them evolve as living systems vis-à-vis non-living molecules and vis-à-vis other living systems, we start bombarding them with sub-atomic particles and split them up into the sub-atomic particles they consist of, we simply get a totally different kind of relationship between electrons and protons and muons, etc. The difference in these relationships is not due to the fact that one is an error or that one is an appearance of an underlying reality; neither is it that one is governed by hidden variables and the other is not. The difference results from two lines of evolution. In one line, we get living systems in relation to their environment; and in the other, sub-atomic particles in relation to each other. Though both sets of relationships are made up of the same ‘stuff’, they reveal themselves in different ways. If one bombards sub-atomic particles with sub-atomic particles, one will get one set of relations between them. If one allows atoms to form molecules and then lets molecules self-select each other to produce living matter, one will find that the relationship between such molecules is different from the relationship which emerges when sub-atomic particles are related to sub-atomic particles. Looking at the problem in this historical way, the variance between the mesocosm we are conscious in and the microcosm we are only conscious of is not only not surprising, but is precisely what one would expect. At bottom, there is only one world. But depending on whether the knowledge we have of it is one kind of self-reference or a different kind of self-reference, this one world refers to itself in at least two different ways. In one way, the self-reference is that of molecules as living organisms to each other and to non-living molecules; and in the other way, the self-reference is that of sub-atomic particles to sub-atomic particles.
In one self-reference, there are local causes and in the other, there are not. 54 One arrives at this view of the problem by realising that our knowledge of local causes is neither a chimera nor an error, and that the absence of local causes is not a fault in our understanding of Quantum Mechanics 55 but that the knowledge of local causes is the result of a very specific historical process to which molecules have subjected themselves. If one studies the sub-atomic components of these molecules by bombarding them with other sub-atomic components, there are no local causes.
The relationship between scientific knowledge and its history is made even more complex by the fact that the growth itself is no validation of the knowledge acquired, not even of knowledge tentatively and temporarily held. Both in individual life-spans and in mankind as a whole, knowledge is acquired, discovered and learnt. All this takes time and is slow. Allowing for the fact that some of the preconditions for all these acquisitions are to a certain extent genetically programmed, we are merely shifting the problem of the historicity of science further back. For the genetic programming in, say, a human being, is itself the result of millennia of evolution by natural selection which is at bottom a process of knowledge-acquisition. At any rate, taking the genetic preconditions into account, there still remains the fact that the process of subsequent learning and acquisition cannot be automatically a validation of the knowledge acquired. Biological organisms which make too many mistakes in this acquisition get eliminated by natural selection. But theories learnt and formulated by conscious human beings have to be scrutinised artificially. The mere acquisition and survival, in this particular case, cannot be a guarantee of validity. Hence, the need for a philosophy of science and, more specifically, for a philosophy of science which takes cognisance of the fact of the historicity of science.
The Sociology of Knowledge
Finally, we come to an almost accidental link between knowledge and history. Ever since it was discovered that cognition is not completely constrained by nature and that there is a gap between what we know and what our observations might oblige us to know, philosophers have realised that the way the gap is filled might be determined by sociological factors; and, further, that if it is filled by sociological factors, its content must vary with the passage of time, so that what we know changes with the passage of time. Thus was born the sociology of knowledge.
The sociology of knowledge consists of a formal part and a substantive part. The formal part seeks to derive our ability to abstract and to form expectations about the regularities in the environment from man’s social experience. Examples of this derivation have been discussed above, in the section entitled ‘The Faculty of Abstraction’.
The substantive part of the sociology of knowledge deals with the actual content of knowledge and seeks to explain it as a function of social experience or social self-interest. Since thought is not rationally determined, the sociology of knowledge says, it must have a social or existential basis. 56 There is ample room for speculation and exploration here. Some sociologists have tried their hand at explaining the rise of Protestantism as the result of the penetration of sixteenth-century regimes by vested interests. This penetration weakened the regimes and made the Catholic belief in the immanence of God implausible. 57 Other sociologists have attempted to explain the support of scientific research at the expense of magic, astrology and alchemy in the seventeenth century as the result of the Puritan spirit of prudential calculation. 58 There is a famous book which relates the rise of Newton’s Mechanics entirely to the growth of the bourgeoisie in early-eighteenth-century Britain, 59 and another in which the coming of Quantum Mechanics is explained in terms of the social turbulence of the German Weimar Republic. The Uncertainty Principle, we are told, sounded plausible and quite certain in a society in which all traditions and values had become uncertain. 60 Similarly, we have often been told that the reception of the theory of natural selection in the nineteenth century owed a lot to the aspirations of the liberal bourgeoisie because the theory appeared to countenance capitalistic competitiveness. Cartesian mechanism, to look at another example, with its emphasis on single elements and the clock-like relations between them, mirrored a society of alienated individuals and the division of labour governed by iron laws of economic self-interest. 
And again, the Central Dogma of molecular biology (DNA instructs proteins and proteins carry out these instructions) appeals to people who are used to a society in which intellectual labour dominates mere production and design dominates execution. It is the science, the argument goes, of the white-collar elitist. In yet another example, we are told that seventeenth-century mechanical philosophers insisted on the passivity of matter, on the view that particles do not move or stop by themselves, and on a divine ordering of natural law, because these views seemed plausible in the light of their latitudinarian social and religious convictions. 61
In a famous pronouncement, Marx generalised this explanation of knowledge by saying that the ruling ideas in each epoch are the ideas of the ruling class. The German socialist philosopher Ernst Bloch maintained that formal logic is a bourgeois invention to validate the status quo. Formal logic, he says, never yields new truths. By logic, one can make explicit only what was implicit in the premisses. Hence, formal logic is the paradigm of conservatism and ought to be rejected by revolutionaries who are trying to change society. 62
On a more systematic basis, Mary Douglas has endeavoured to explain cosmological theories in terms of the social structures of the societies in which they are prevalent. Societies with a strong centre of authority have one kind of theory; societies with a weak centre, another. 63 Even more systematically, Berger and Luckmann have produced a famous book in which they seek to show that our entire knowledge of reality is a function of our social experience and varies with our social experience. 64 In all these pursuits there is a very strong realisation of the links between history and knowledge; for as social structures change with history, so does the knowledge which appears plausible to people experiencing the structure or is inspired by the structure.
The rise of the sociology of knowledge is intimately linked to the decline of Positivism. As long as people thought that knowledge is what observation obliges us to believe, there was no need to seek an explanation for why certain people hold certain pieces of knowledge outside observation. At most, one went to the psychology of perception and the mechanisms of speech acts to explain delusions of perception and errors of expression. The great founders of the sociology of knowledge, from Marx on, have all been Positivists of sorts and have therefore argued that our knowledge of nature, being based on observation, is exempt from sociological explanation because it can be explained satisfactorily in observational terms. Even Durkheim argued that there was no need to doubt the objectivity of scientific knowledge of nature just because we know that our ideas of time, space and cause, and our ability to classify, are constructed out of social elements. 65 Mannheim, too, considered knowledge of nature exempt and capable of standing on its own feet. While other knowledge varies and fluctuates, knowledge of the immutable laws of nature grows by simple accumulation which represents the growth of knowledge. As far as our knowledge of nature was concerned, there was no need to run to sociology to explain its presence and its growth. 66 Even the younger generation of sociologists of knowledge openly confess to some kind of Positivism. David Bloor, for example, argues that we can distinguish between perception and thinking in our knowledge. In so far as knowledge is grounded on perception, there is no call for a sociology of perception. It is only when we are faced by the ‘thinking’ part, i.e. by the theoretical component, that a sociological explanation is required. The theoretical is the social; the observational is governed by nature. 67
There is, then, a long tradition in the sociology of knowledge in which knowledge of nature is identified with Positivism and observation; on this view, only non-observational knowledge as it displays itself in religion, art, literature, morals and politics is in need of a sociological explanation. At first glance, one might therefore think that our entire rejection of Positivism as an explanation for the acquisition and growth of knowledge of nature would drive us all the more firmly into the arms of the sociology of knowledge. If not even knowledge of nature itself (contrary to the ideas of Durkheim and Mannheim) can be exhaustively justified in terms of observation or reduced to protocol sentences, it must be explained as the function of something other than experience of nature. The recourse to the sociology of knowledge, it would seem, becomes all the more urgent, the firmer one is in one’s rejection of Positivism.
There is, however, a third way out. If we all reject Positivism nowadays, we still have a choice between a sociological explanation of knowledge and a biological explanation of knowledge. The correct rejoinder to the sociologists who would welcome us into their parlour now that we have given up Positivism is that the non-experiential and a priori parts of knowledge can be explained biologically and that the recourse to a sociological explanation is unnecessary. It is perfectly true that one cannot justify any theory exhaustively and conclusively in observational terms. But it does not help and seems unfruitful to argue that therefore the theory in question must be explained as the dictate of class interest or a reflection of a social structure in which the people holding it are living. If one chose this recourse to sociology, one would soon deprive knowledge of its relational character and reduce it, as indeed the sociology of knowledge does, to some kind of narcissistic reflection of the known in the mirror of one’s own social existence. If biology did not work and if the biological derivation of the non-experiential component in all knowledge had to be given up, we might indeed be left with nothing better than a recourse to sociology. But until such time as the biological explanations are refuted, we should not wilfully and voluntarily embrace a sociology of knowledge which impoverishes our knowledge of knowledge. As long as we have biology up our sleeve, such impoverishment is needless.
If the main link between knowledge and the recourse to a sociological explanation merely results from a disregard of biology, there is also a more sinister link. On the old view of knowledge, knowledge was the result of correct observation and correct reasoning. Provided one did not make mistakes and provided one did not allow oneself to be side-tracked by neurotic obsessions and phobias, one would be entitled to expect that correct reasoning and correct observation would lead to correct knowledge. However, for some time now we have recognised that there can be no such ‘correct’ reasoning and no such ‘correct’ observation because in order to distinguish between ‘correct’ and ‘incorrect’ observation and reasoning, one would have to have an ontology – i.e. one would have to know in advance what the world one wants to know is like. Since one cannot have such an ontology without knowledge, the derivation of the ‘correct’ method of acquiring knowledge from prior knowledge of an ontology has proved futile. The reasonable move, once this is recognised, is to admit that knowledge is not acquired by the pursuit of a ‘correct’ method; rather, it is what is left standing when criticism has been exhausted. We have seen in the above section on ‘The Mechanics provided by Criticism’ that this move is the sensible answer to the demise of Positivism.
The sociologists of knowledge, however, have refused to accept this answer. Instead, they have continued to think that if there is knowledge, it must be the result of a ‘correct’ method of getting it. Seeing that there can be no ‘correct’ method, they have jumped to the conclusion that the reason why it is not correct and why it does not yield universally true knowledge is that it is the dictate of class hatred or of a prevailing mode of production, or reflects a social structure. In other words, the contribution of the sociology of knowledge to this whole debate has consisted in an attempt to unmask and expose the real reason why people have pursued a certain method to acquire knowledge. Instead of giving up the notion that if one has knowledge there must be a correct way of getting it, the sociologists of knowledge have engaged in what appears as an enlightening manoeuvre: if one exposes the real reason behind scientific method, one will know more than one did before the real reason was exposed. In this sense, the sociology of knowledge has been able to parade as a form of enlightenment because it has purported to reveal what is behind the so-called ‘correct’ method. In the present argument one must, however, conclude that the pursuit of such a strategy is not enlightening but a form of deception. Genuine enlightenment would consist in the recognition that there is no correct method; not in the revelation that there is an ulterior purpose in the pursuit of the correct method.
Suppose, however, that the pursuit of knowledge is not a rational pursuit and not the result of following the dictates of substantive reason. Suppose, instead, that we pursue knowledge by allowing proposals and hypotheses to be put forward and then expose them to criticism. In such a situation, we consider those beliefs to be knowledge which have withstood criticism. Truth is what is left over after criticism has been exhausted. On such a view of the growth of knowledge, there is no denying the gap. But there is no need for a systematic investigation as to how the gap was filled and what factors had to be invoked to follow the path of reason from observation and experience to perfect knowledge. On the view that the growth of knowledge is the result of criticism rather than the result of obedience to the dictates of substantive reason, and on the view that rationalism is the pursuit of criticism rather than an enslavement to the rules of ‘reason’ – the sociology of knowledge becomes superfluous.
Nevertheless, sociology has a very important contribution to make to our knowledge of the growth of knowledge. Its contribution does not consist in the substantive explanation of how the content of knowledge is determined sociologically, but in a negative explanation of why knowledge is rare and intermittent and why it needs very special social conditions in which it can grow. It is common knowledge that the growth of knowledge is very intermittent and discontinuous, and that it happens only in very few places. Most of it occurred during the last three centuries, though there was a certain amount in ancient Greece. It is therefore not far-fetched if one seeks an explanation for this curious phenomenon in history or, more specifically, in the history of social structures.
While certain basic knowledge about the weather, the growth of seeds, the nurture of animals, and similar matters, is subject to the selective pressures of the environment – people who believe that animals reproduce non-sexually cannot survive if they are dependent on meat and other animal products – there are many pieces of knowledge held consciously which have very little direct bearing on physical survival. They can be held or discarded regardless of the environment in which the people who hold them are living. Nevertheless, they are frequently used for a very useful function. They are used as a social bond so that societies can be formed with defined members and these societies can survive because defined membership makes co-operation and division of labour possible. Membership, we might say, is defined by subscription to certain beliefs about God, nature, the universe, man and his destiny, and so forth. One might describe this kind of social bonding as catechismic, for membership of such a society depends on being able to give the ‘correct’ answers to a catechism. Clearly, in such a society the contents of the beliefs used as a catechism are not available for criticism and therefore cannot be examined. They are held dogmatically. Such dogmatism should, however, not blind us to the fact that it performs a very useful and essential function in keeping the society together. One might say that knowledge, in such cases, is being used for non-cognitive purposes. One could compare such non-cognitive purposes with the non-monetary purposes for which money is frequently used. Money is intended as a means of exchange. But given modern opportunities for communication, it has turned out that it can also be used as a commodity and that its value, in such cases, is not determined by the availability of the goods and services it represents, but by the supply and demand for money itself. 
There is no denying that the non-monetary use of money serves a useful purpose, just as the non-cognitive use of knowledge can serve a useful purpose. In such catechismic societies, one has to protect knowledge artificially from criticism. Knowledge has to be elevated to the rank of dogma. The easiest way of doing this is by stopping communication with outside societies in which other knowledge is used, possibly also catechismically. If communication has to occur, one has to take other steps to protect one’s own knowledge, lest it be criticised and, at least in part, abandoned. If it were to be abandoned, the bonds of society would become loosened. If knowledge were submitted to trial and error, this would be tantamount to dissolving the social system. In catechismic societies, therefore, people have to adopt a mercantilistic attitude to knowledge: they practise cognitive mercantilism and thus exempt knowledge from the pressures of a free market.
Though there are many societies which are quite literally catechismic because they require a confession of faith or a subscription to a set of beliefs as a qualification for membership, there are many other societies which are only metaphorically so. In these societies, there is a catechism but no formal declaration is required for membership. On the contrary, members are born into the society and in being born into it they become automatically committed through nurture and possibly even through heredity to certain axioms, values, sentiments and beliefs which remain impervious to experience and indifferent to contradiction. They are present in the society before the individual is born and will continue after his death. In this sense, they are not psychological features but social constraints. To an outsider, it will appear as if every baby, at birth, is baptised to a certain catechism; but in reality, the catechism involved is merely a hypostatisation. In reality, it is part of the social constraint which shapes every individual’s life and does not permit questioning because of the character of the social order concerned. 68 It follows, therefore, that only in societies where the social order is non-catechismic (i.e. cognitively neutral) can beliefs and theories be examined critically. For trial-and-error testing one needs the presence of alternative theories. When a social system consists of a given set of theories about the world, it is impossible for alternatives to be entertained. But when a social system is open or neutral, then the presence and entertainment of alternative theories will make no difference to the social system. In this way, the progression of scientific knowledge is accidentally related to social systems. This is not to say, as so many people have alleged, that one can take one look at the structure of a social system and determine from that structure what kind of theories the people in that social system will believe to be true. 
On the contrary, the truth of any theory is quite unaffected by the social structure of the society in which it is held to be true. But it does mean that, since knowledge depends on the possibility of trial and error, and since trial and error depends on the presence of alternative theories which, if the old ones do not pass the trial, can be substituted and, in turn, subjected to further trial and error, there are certain kinds of society in which genuine knowledge cannot grow. If one recognises this, one will still not be able to determine what special kind of knowledge can grow in societies in which knowledge can grow; but one will be able to detect the kinds of societies in which beliefs must be held dogmatically because they are part of the very social structure itself, the kinds of societies in which genuine knowledge cannot progress. By the same token, we can determine the optimum social conditions for the growth of knowledge, although none of these conditions in itself will enable us to say what kind of knowledge will grow or, after it has grown, why this particular kind of knowledge grew under these particular conditions. This connection between social conditions and the opportunities for the growth of knowledge constitutes a sort of negative sociology of knowledge.
This negative sociology provides two explanations. First, it explains the optimum conditions for the growth of knowledge. These conditions are at their best when cognitively and intellectually there is something resembling perfect competition. This means that there are no exigencies of social bonding which would exempt any piece of knowledge or belief from radical criticism. This, in turn, would mean that there is absolutely no belief or knowledge which is required to fulfil an extraneous social or psychological function, such as encouraging co-operation or providing emotional comfort. This, in turn, would require a situation in which co-operation is derived from and based upon a different mechanism (i.e. different from the community-forming power of shared beliefs and shared rituals). It would also require a situation in which people are either not in need of emotional comfort from certain beliefs – that is, they are so integrated emotionally and so regenerate that comfort is unnecessary – or that comfort is provided by some force other than solacing beliefs. However this may be, one can clearly see that it is just as difficult to obtain a social field in which there is perfect intellectual competition as it is to construct a field in which there is perfect economic competition.
Second, negative sociology explains why the growth of knowledge has been so rare and so intermittent in human history. In most of the situations we find in history, the degree of perfect competition has been so low that knowledge was not able to grow. The presence of perfect competition – even of competition approximating perfection – is very rare. In this way, negative sociology can explain the absence of the growth of knowledge and explain why in a few societies, at certain times, there has been a growth of knowledge. Unlike the conventional, positive sociology of knowledge, the negative sociology of knowledge does not claim to be able to explain the content of knowledge or the reasons why knowledge with content A is, in certain societies, preferred to knowledge with content B.
The negative sociology of knowledge provides an explanation for the growth of knowledge which is in marked contrast to the explanation given, for example, by Joseph Needham. 69 According to Needham, archaic, practical knowledge can be transformed into experimental and growing knowledge provided a number of specific institutional changes are made in society. There has to be a removal of those class barriers which separate artisans from theoreticians; the development of a special attitude of curiosity about nature and society; and, finally, the growth of a specific ideology in which quality is reduced to quantity, a mathematical reality is affirmed to lie behind all phenomena, and a space and time uniform throughout the universe are proclaimed. One can see that if such changes were to occur, a certain kind of knowledge would be likely to emerge. But in specifying the kind of knowledge likely to emerge (doctrines about uniformity of space and time, proposals to reduce quality to quantity), one is claiming to know in advance what the world is like and what kind of knowledge will be most likely to do justice to it. Such a claim, as we pointed out above, is not legitimate because it prejudges the issue by laying down guidelines for a successful inquiry. The negative sociology suggests, instead, that knowledge is most likely to grow when the social order is maintained in a non-catechismic way – that is, when it is sustained by bonds which are not made up of bits and pieces of knowledge. In such an order, knowledge can be set free and exposed to unrelenting criticism. This unrelenting criticism is, however, a luxury which most societies, depending on pieces of knowledge for their social bonding, cannot afford to enjoy.
Negative sociology of knowledge also suggests that societies in disorder – for example, Renaissance Florence 70 or the German Weimar Republic – are likely to be as favourable to the growth of knowledge as societies in which social bonds are cognitively fairly neutral. Periods of disorder and social disequilibrium have a destructiveness of their own. But they may countenance a great growth of knowledge. Take Leonardo da Vinci as an example. We may suppose that when he noticed that the Archbishop of Pisa had conspired to have the Medici brothers Lorenzo and Giuliano murdered during Mass in front of the high altar of the Cathedral of Florence, he concluded that he was living in a social disorder in which people were obviously taking liberties. Hence, he decided that it was in order for him to take liberties, too, break a taboo and study anatomy by dissecting corpses.
It is frequently argued that the discontinuity in the growth of knowledge and the stubborn survival of false but dogmatically adhered-to pieces of knowledge are due to the slow evolution of the human mind and its cognitive apparatus. The minds of mankind, it is alleged, have developed slowly. At first, their powers of perception, of reasoning and of logic were like those of young children. Piaget has documented the slow development of these faculties in children and shown how these powers improve and reach full maturity only after puberty. But long before Piaget, as long ago as the seventeenth century, philosophers like Francis Bacon, in the Preface to his De Sapientia, maintained that the earliest men were capable of only pre-logical and confused thinking, that they were unable to tell the difference between a metaphor and a literal description. The only thing Piaget has contributed to this line of argument is the evidence which seems to indicate that the mental growth of every child recapitulates during the first dozen years the mental development of mankind. We will leave aside the question of whether Piaget’s researches were initially guided by the thought that children must be like primitive men or whether he found independently that the mental and cognitive faculties of children were similar to those of primitive human beings.
The view that primitive men cannot think logically and failed to categorise their perceptions correctly has had, ever since Bacon, a large number of supporters. In the eighteenth century, it played a crucial part in the thought of Vico, Herder and Rousseau. 71 In the nineteenth century, it became the starting-point for the systematic history of human thought in Hegel, and was used in a slightly different way by the great nineteenth-century evolutionists, Comte, Tylor and Frazer. They held that mankind had been prey to magic and religion before ascending to the liberating power of science because it simply takes time for human beings to grow up. In the twentieth century, the theory that primitive humans are incapable of rational thought and that their cognitive faculties are therefore stunted, has been revived by Lévy-Bruhl and Ernst Cassirer. The latest and most carefully documented study to this effect, by C.R. Hallpike, 72 appeared as recently as 1979. His is an important book which covers the whole ground more systematically than any of its predecessors and provides a great deal of ethnological evidence to support the claim that the mental and cognitive faculties of primitive men are not sufficiently developed to allow for the growth of knowledge, let alone for the development of science, with its abstractions, generalisations and its deductive reasoning.
Needless to say, the evaluation of all these findings is greatly influenced by the personal values of the observers. To rationalists like Comte and Frazer, this primitive mentality was contemptible. They rejoiced in the fact that we had left it behind. Romantics like Wordsworth and Herder welcomed it and developed nostalgia for it. They deplored the fact that we had left it behind and regretted that children, in growing older, abandon their straight intuitive powers. In between, there are the values of Lévy-Bruhl and Cassirer, who both took a more balanced view of the development. They appreciated the advantages of pre-logical thinking for community life and for being in tune with nature itself. They also appreciated the development of cognitive and logical faculties in modern man without regarding this development as a fall from grace, as the romantics had done.
In the twentieth century, on the other hand, many scholars, conscious of the ‘racist’ implications of the view that primitive men think pre-logically and modern men do not, often prefer to stress the continuity of mental faculties – either by saying that both primitive men and modern men are capable of logical thought, or by saying that both primitive men and modern men are swayed by non-logical thinking. While one must appreciate the ethical considerations which prompt such a refusal to spot a difference in logical power between primitive men and modern men, such a view does not exactly help to explain why knowledge took so long to grow. Nothing was done to resolve this debate when Quine showed 73 that the supposed cornerstones of modern rational knowledge – the dogma that there is an absolute distinction between analytic and synthetic truth, and the dogma that all meaningful statements can be reduced to terms which refer to immediate experience – have very shaky foundations. In abandoning these two dogmas, Quine writes, the supposed boundary between speculative metaphysics and natural science becomes blurred and brings about a shift towards mere pragmatism. Thus, ‘primitive’ and ‘modern’ cease to be the antithetical terms which might have explained the difference between primitive man’s persistent harbouring of false knowledge and the modern growth of knowledge. If Quine is right, not even modern men can take an ability for rational knowledge for granted, and such differences in knowledge as there undoubtedly are must be explained pragmatically. A pragmatic view may have much to commend it. In taking it, however, one must face the fact that the next question is likely to be a political one. If the prevalence of superstition is a matter of cultural accident and if the growth of critical knowledge is also a cultural accident, one recognises the justice of Foucault’s remark that questions of knowledge turn out to be questions of power: Who issues the decrees?
Who controls funds? Who wields influence? Surprising though it may sound, people who start with Quine may end with Foucault. 74
Lévi-Strauss, on the other hand, approaches the question in a different way, combining a strict egalitarian rationalism with some of the pragmatic considerations which Quine suggested are necessary. There are, he admits, cognitive differences between children and adults as well as between primitive men and modern men. But all human beings, he says, are capable of counting and of making binary distinctions: totemic animals are not for eating but for counting. The fact that primitive men use totemic animals for counting and for making binary distinctions rather than abstract numbers and digital computers is to be explained by the peculiarities of their cultures. Unlike Quine, Lévi-Strauss is confident that the logical potential in man is universal and can be taken for granted. If it displays itself in different and seemingly incompatible ways in different cultures, the differences can be explained by the accidental contents of the cultures – not by an appeal to a pre-logical mental faculty, as the romantics from Bacon to Lévy-Bruhl and Cassirer would have it. Such evolution as there has been concerns only the content of cultures, not human nature. Take away the totem from a primitive man and replace it by an abacus or a digital computer, he is saying, and you will soon see that men are all alike in their rational faculties.
The arguments on all sides are transparently shot through with evaluations of childhood and nostalgia or with contempt for childhood and primitive life – though it has to be admitted that Quine’s argument claims to be logical and is certainly not based on ethnographic data. One must also consider the purely psychological evidence often advanced by Evans-Pritchard. When all is said and done, we find that there are innumerable people in the modern world, fully grown up and living in industrialised societies, who are as superstitious and as incapable of logical inference as any Zande or Nuer. Evans-Pritchard and other British social anthropologists have very successfully explained how the seemingly strangest beliefs of primitive people are part and parcel of their social system and, therefore, as long as one is standing inside that social system, not in the least ‘irrational’. Given certain social institutions of accountability, for example, some witchcraft beliefs seem almost inevitable and certainly quite plausible. This method of explaining what are to us untenable beliefs, however successful for the insider and for the practising field anthropologist who makes himself an insider, begs the question. Our question is not whether, under certain special circumstances, a belief in witchcraft is not as strange and improbable as it might appear to us. Our question is why and when certain people ceased to hold strange and improbable beliefs. The method of looking at every belief from inside the society in which it is held, and from the inside only, is unable to answer this question.
The negative sociology of knowledge suggests that the mind of primitive man is primitive because it lives perpetually under shelter. Primitive minds are not exposed to competing claims and to competing concepts. They do not have to evaluate and compare even when they are free to exercise choices. Without competition there is no criticism and no critical selection. As a result, no concepts and no beliefs are abandoned for better or more suitable ones. This protectionism, rather than a special structure of the mind of early man, makes for the primitive nature of his thought.
Whether one is prepared to follow Quine or Evans-Pritchard, Lévy-Bruhl or Lévi-Strauss, one must make sure to ask the right question. The right question is whether all human beings have an ultimate rational baseline or not. If they do, the absence of logic and science in so many societies must be explained by our negative sociology. Negative sociology explains their absence by the fact that in primitive societies logical reasoning and abstract concept-formation are inhibited because counting and binary distinctions are inhibited. The inhibition is due to the fact that the prevailing beliefs are artificially protected because they are used catechismically for non-cognitive purposes, as social bonds. If they do not have such a baseline, the absence of logic and abstract, consistent concepts, and the prevalence of intuition and metaphor (i.e. of what strikes us, modern observers, as metaphor) must still be explained by a negative sociology. If primitive men were naturally pre-logical in their thinking, negative sociology of knowledge provides an explanation for the persistence of such pre-logical cognitive habits. In using these habits as social bonds, primitive men were not exposing themselves to criticism and thus found in their social order an artificial protection of those habits which under open competition would soon have been weeded out. In both cases, the lack of development of logic or the continued presence of metaphorical intuitions must be explained by lack of competition and critical selection. This lack is due to the fact that metaphors or logic – whichever – are artificially protected by the exclusion of alternatives. They are used to define the boundaries of societies and not treated as instruments of cognition. The factor which has changed with time as we approach Western modernity, whichever way one looks at it, is not human nature but the nature of the social constraints under which it is operating.
The best available explanation for the discontinuity of the growth of knowledge and for the variety of its occurrence must, therefore, be sought in changes in social structure and can best be explained by a negative sociology of knowledge. In early and primitive societies, people could not afford to allow such knowledge as they had about the weather, the growth of plants, sickness and the seasons, etc., to be exposed to criticism. They had to use it to define their societies and social boundaries and this use was more important for human survival than veracity. People who subscribed to that knowledge, whatever it was, were insiders; and people who did not were outsiders. Knowledge and subscription to knowledge is a more flexible form of bonding than the kind of bonding provided by blood relationships or by such physical characteristics as race and skin-colour. The bonds circumscribed by blood relationships are far too narrow and would oblige one to keep on marrying and choosing wives and husbands from far too narrow a circle. The bonds circumscribed by race or colour are far too wide and would include, in any geographical area, far too many people and include them indiscriminately. To make co-operation, mutual help and division of labour effective, the society has to be large; but not too large. Knowledge and the ritualistic practices which follow from it are the best available form for this kind of bonding and hence there grew up the universally practised principle extra ecclesiam nulla salus . The prostitution of knowledge for this purpose is bad; but its adaptive value in the history of mankind has been enormous. Clearly, knowledge so protected could not be subject to criticism and therefore had to remain at a very low level of adequacy and truthfulness. The prevalence of what we describe as primitive thought or pre-logical thinking is therefore a secondary phenomenon. It is the result of social protectionism. 
The requirements of some social structures inhibit the growth of knowledge in two ways. First, they protect false beliefs because adherence to those beliefs constitutes a social bond; and second, they prevent, in doing so, a critical appraisal of the differences between metaphor and less metaphorical expressions. The first inhibition concerns content; the second, the cognitive apparatus itself.
While there is obviously considerable doubt as to whether man’s rational potential is universal, whether it has evolved gradually and, if it has evolved, what the most suitable conditions for such an evolution might have been, a negative sociology of knowledge manages to by-pass this entire debate. Whichever side one is on, one is always left with the same conclusion; there is no need to make up one’s mind on this question or to take sides. The negative sociology accepts the findings of scholars like Hallpike: primitive men are pre-logical. However, there is no need to decide whether they are naturally so (as Bacon and Lévy-Bruhl claim) or whether they are so because of the accident of their local culture (as Lévi-Strauss claims). Either they are inhibited from displaying their natural, logical potential because they have to use the beliefs they have as social bonds: they cannot expose them to competition and criticism and remain caught with whatever happens to be available. They are stuck with totemism not because they are pre-logical, but because they cannot afford to look at alternatives lest their social bonding break down. Or they are prevented from developing forward towards logical reasoning because the kind of social order they are living in makes trial and error impossible. If the first alternative applies, they are logical but cannot avail themselves of their logicality because they cannot change the content of their culture according to the dictates of their logic. If the second alternative applies, they are pre-logical and cannot move forward towards logic because their social order inhibits the sort of discussion which alone can promote the growth of logic. Either way, there is no growth of knowledge – regardless of whether pre-logical behaviour is natural or a cultural accident.
All one should add is that the primitive mentality which is inimical to the growth of knowledge is not confined to people who are technically known as ‘primitive’.
When, for a number of reasons, societies developed which were not entirely dependent on subscription to a given system of knowledge for the definition of their bonds, it became possible to criticise knowledge and release it from the bondage in which it had been kept. Thus we find that wherever more neutral forms of bonding have developed, knowledge began to grow. Most traditional knowledge gave way to criticism, new knowledge was invented only to give way to further criticism, and so forth. This cannot, of course, be taken literally. There is no society – not even the most modern, post-industrial, urban mass society of atomised individuals – in which the bonds are completely neutral. And even if there were such a society, radical criticism would not be practised by everybody all the time. Even in such societies, radical criticism is really practised only by ‘scientists’, that is, by some people. Negative sociology merely states that societies of such practitioners of radical criticism cannot exist when most of the knowledge which is available or held is pressed into the service of social bonding. When it is not, societies of criticism practitioners are tolerated. These societies – or, better, sub-societies – are held together by the shared practice of criticism, not by a particular belief; not even by the belief that one ought to practise criticism. The practice of radical criticism is not based on the belief that radical criticism is ‘right’. If it were, one would have to call such practice a commitment to a belief which, in turn, cannot be criticised. There is, however, no such commitment. Radical criticism is the simple operation of reason. While reason is not a substantive force which can tell us what is the right thing to do, it is self-sustaining or self-supporting; for it would simply be against reason to accept any knowledge without criticising it.
Finally, one should not underrate the positive contribution made by the dogmatic preservation of knowledge and by its artificial protection and the comparative stunting of the cognitive apparatus that goes with it. During the millennia of such protection, people developed the art of writing and the whole conceptual apparatus which accompanied it. In this way, even the dogmatic protection of knowledge helped to prepare the ground for the cognitive apparatus on which the eventual growth of knowledge came to depend.
The negative sociology of knowledge indicates that both Milton and Mill were wrong. Milton, in one of his most purple passages, pleaded for freedom of thought because, he argued, truth will eventually survive and triumph over everything else. Mill concurred, though he struck a more cautious note: to silence discussion, he said, is a presumption of infallibility. The negative sociology of knowledge makes a different case. Milton was wrong because truth is not something which is simply there, held by some lucky people, so that, when there is freedom, untruths will disappear and truth will be left standing. The negative sociology of knowledge says that nobody knows what the truth is – not even what it might be. But if there is completely free discussion, some opinions or theories will emerge temporarily because people cannot, for the time being, think of cogent criticisms. Mill was equally wrong. When discussion was silenced, people did not silence it because they thought themselves infallible, but because the knowledge they protected artificially from discussion and criticism was too precious: it was being used not as knowledge but as a social bond. The question of infallibility did not come up. Explicit claims to infallibility can come up – and then presumptuously – only when there is free intellectual competition; and in such a situation they cannot stand up to criticism and will be laughed out of court.