The Problem of Knowledge
With me, the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would anyone trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?
Charles Darwin
Knowing one’s thoughts no more requires separate investigation of the conditions that make the judgement possible than knowing what one perceives.
Tyler Burge
As we know, there are known knowns. There are things we know we know. We also know there are known unknowns. That is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don’t know we don’t know.
Donald Rumsfeld
The Hollywood movie The Matrix imagines a future in which humans have been enslaved by machines. From birth to death, they sleep imprisoned in pods where they are fed through tubes and washed by robots. While they slumber, they believe that they have normal jobs and families, but in fact their lives are conducted in a computer-generated simulation of late twentieth-century America. For reasons known only to the machines, it is best that all humans occupy this closest sustainable approximation of Utopia – the benighted citizens of the virtual Third World presumably being high-tech cardboard cut-outs. The simulation is so detailed and faithful that no one suspects that they do not live in the physical world. Everyone has virtual hobbies and virtual relationships and, as one character remarks, steak tastes just as good in ‘the Matrix’ as the real thing (not that he has ever tasted the real thing). Such a paranoid fantasy may yet be the fate of humanity. There is a theory that the reason SETI (Search for Extra-terrestrial Intelligence) has so far failed to detect any signals from alien civilizations is not that sufficiently advanced cultures sooner or later destroy themselves in war, but that they decide to spend their time in a virtual reality paradise as soon as they discover how to devise one. By plugging themselves into a virtual reality of their own design, they are able to leave behind the inevitable frustrations of life in the natural world and exist free from pain and death. The period when they would be broadcasting signals would last from their invention of radio to the development of technology that can render virtual experiences at least as good as the real thing – let’s say, 150 years. The thought that life in the Matrix might not be so bad after all is a comforting one because, according to one thinker, we may be in it already.
The Swedish philosopher Nick Bostrom puts the chance at around one in five.1 He sees three possibilities for the future of humanity: either we will become extinct before the ‘post-human’ era in which we are able to create the Matrix, or we will decline to create a significant number of simulated people when we get there, or we are already living in the Matrix. The latter depends on the prospect of computing power continuing to increase until true artificial intelligence has been created, which Bostrom rates as very likely. This may be disputed at the outset, but the philosopher is used to making predictions, having worked as an adviser to the European Union on scientific research and the CIA on long-term security risks. Born in 1973, Bostrom is very young to be a respected philosopher. As a fifteen-year-old he wandered into his local library when bored one day and picked out a book at random: Thus Spake Zarathustra by Friedrich Nietzsche. Reading Nietzsche transformed his attitude to school, and in his undergraduate days he studied three full-time programmes simultaneously. He now spends his time philosophizing about a ‘trans-human’ future in which machine intelligence has far outstripped that of its creators and in which humans have merged with their technology and uploaded their consciousness into digital computers. These interests have garnered him more appearances in the mainstream media over the past few years than any other living philosopher bar Peter Singer and Noam Chomsky.
Bostrom maintains that once we can simulate consciousness, we may then decide to simulate worlds for artificial minds to inhabit, and might even place them within re-creations of human history without them knowing. In such a future, most minds might belong not to flesh-and-blood creatures like ourselves, but to digital individuals living inside artificial worlds. The task of constructing the artificial world could be made easier by furnishing it only with those parts that its inhabitants need to know about. For example, the microscopic structure of the Earth’s interior could be left blank, at least until someone decides to dig down deep enough, in which case the details could be hastily filled in as required. If the most distant stars are hazy, no one is ever going to get close enough to them to notice that something is amiss. Other philosophers have even suggested that quantum indeterminacy is a feature of the limited resolution of our simulated world. And how are we to discern whether our own world is real or simulated? It is a simple matter of probability. If one day every PC user had such a simulation running on their computer, the ratio of simulants to ‘real’ people could be a billion to one. Bostrom tells us:
If betting odds provide some guidance to rational belief, it may also be worthwhile to ponder that if everybody were to place a bet on whether they are in a simulation or not, then if people use the bland principle of indifference, and consequently place their money on being in a simulation if they know that that’s where almost all people are, then almost everyone will win their bets. If they bet on not being in a simulation, then almost everyone will lose.2
Assuming that, one day, we will be able to create artificial minds and artificial worlds – and assuming that we will be inclined to create numerous simulations of human history – the vast majority of conscious beings who will ever have lived will never have set foot in the physical world. It is very likely that any given person – oneself included – is among them.
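The arithmetic behind this ‘bland principle of indifference’ is simple enough (what follows is an illustrative sketch, not Bostrom’s own notation). If $N_{\text{sim}}$ conscious beings live inside simulations and $N_{\text{real}}$ live in the physical world, then someone with no further clue as to which group they belong to should assign

\[ P(\text{I am simulated}) \;=\; \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{real}}} . \]

At the ratio of a billion simulants to one ‘real’ person imagined above, this comes to $10^{9}/(10^{9}+1) \approx 0.999999999$ – which is why, if everyone bet on being in a simulation, almost everyone would win.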
Bostrom believes that since we are ignorant of the purpose of our simulated world, there is no point in trying to please its programmer. However, others have suggested ways in which we might try to do this. The American economist Robin Hanson advises: ‘If you might be living in a simulation then all else equal you should care less about others, live more for today, make your world more likely to become rich, expect to and try more to participate in pivotal events, be more entertaining and praiseworthy.’3 In case our descendants’ tastes should vary from ours:
one should emphasize widely shared features of entertaining stories. Be funny, outrageous, violent, sexy, strange, pathetic, heroic, …in a word ‘dramatic’. Being a martyr might even be a good thing for you, if that makes your story so compelling that other descendants will also want to [simulate] you…If our descendants sometimes play parts in their simulations, if they are more likely to play more famous people, and if they tend to end simulations when they are not enjoying themselves, then you should take care to keep famous people happy, or at least interested. And if they are more likely to keep in their simulation the people they find more interesting, then you should try to stay personally interesting to the famous people around you.4
Plato first suggested the idea of a virtual world in the fourth century BC. The Greek philosopher compared the physical realm to a cave in which people were chained, their backs to the entrance. Having spent their entire lives able to see only the shadows cast on to the far wall of their prison by people walking past the cave, they mistake these flickering shapes for real human beings. According to Plato, our everyday perceptions work in the same way, in that we see around us only the shifting reflections of a higher realm. There could be no question of having ‘knowledge’ of the physical world since we can only truly know, he argued, that which was truly real. ‘Truly real’ objects were eternal and unchanging, rather like numbers or a concept such as ‘Gold’, which was to be distinguished from imperfect worldly instances of gold. The plane in which these perfect entities resided was beyond the reach of our eyes and ears. Nonetheless, Plato promised that it could be explored with the power of reason. The French philosopher René Descartes put out a similar message in the seventeenth century. He imagined a ‘Malicious Demon’, who was able to construct an entirely fictitious environment around us, such that our every belief would be mistaken. Reason could again come to our aid: if we could only establish the existence of God as a necessary truth, then this kernel of knowledge would underwrite our everyday beliefs about the world around us. Descartes thought that any sufficiently ‘clear and distinct’ perception would have to be true because, unlike the demon, the Almighty would not lead us astray in those beliefs that he seems so eager to press upon us. Unsurprisingly, it proved no easier to demonstrate beyond doubt the existence of God than the existence of tables and chairs. Plato’s ‘higher realm’ too is the subject of speculation, rather than an object of knowledge.
In 1787, Immanuel Kant wrote that ‘it still remains a scandal to philosophy and to human reason in general that the existence of things outside us…must be accepted merely on faith, and that if anyone thinks good to doubt their existence, we are unable to counter his doubts by any satisfactory proof’.5 Two centuries later, this ‘scandal’ has still not been resolved, but it can be said that philosophical society has become more permissive since Kant’s day. The ever-present possibility of error is no longer counted as enough by itself to threaten our claims to possess knowledge. Although philosophers still cannot prove that ‘scepticism’ is false, and that the external world is real rather than an illusion, they have demonstrated that knowledge is at least possible. Most of them no longer seek indubitable foundations, such as the existence of God, upon which to rebuild the superstructure of our understanding. To lower our expectations in this way is not a calamitous defeat if we overcome the notion that in order to be said to know something, we must also know that we know it. It was this assumption that led philosophers on a fruitless search lasting 2,500 years.
Plato set the rules for the chase with his ‘tripartite’ account of knowledge. He demanded, first, that the proposition in question be true; second, that one believes it; and, third, that one can provide a justification for one’s belief. The last condition was necessary to differentiate real knowledge from mere ‘true belief’ – that is, an unsupported opinion that happens to be correct. Knowledge cannot be left to fortune, and nor can we allow the truth to be attained by a lucky guess. True belief resembles knowledge and in many cases can be just as useful as the real thing, but Plato argued that it lacks the stabilizing anchor that justification provides. This instability was demonstrated in the conduct of the political leader Anytus. Although he shared with Plato’s teacher Socrates a dislike of the paid gurus known as ‘sophists’, unlike Socrates, he was unable to give any good reasons for his opinion. Anytus’ suspicions were correct and protected his purse from the sophists’ charlatanry, but ultimately they were based on prejudice. Because his judgements of character were irrational they were unreliable, and they eventually led him to indict Socrates for corruption of the young and condemn the philosopher to death. According to Plato, beliefs held without reason tend to behave like the statues of Daedalus, which were so lifelike that they ran away in the night. Plato was concerned with ‘locking’ the truth into place. To achieve this, the mental state of possessing knowledge needed to mirror its objects: just as they were eternal, perfect and unchanging, our knowledge of them had to be unshakeable and beyond revision. The problem then was deciding what exactly constitutes a good enough justification to know something to be true.
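Stated schematically – a standard modern formalization rather than Plato’s own wording – the tripartite account says that a subject $S$ knows a proposition $p$ if and only if:

\[ \text{(i) } p \text{ is true;} \quad \text{(ii) } S \text{ believes that } p; \quad \text{(iii) } S \text{ is justified in believing that } p. \]

The examples that follow put pressure on this definition by supplying all three conditions while still, intuitively, falling short of knowledge.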
Unfortunately, beliefs are frequently true purely by chance, even if they are justified. Imagine that I am waiting for the results of the 2004 US presidential election and see a television news announcer declare victory for ‘George Bush’. I therefore believe that the Republican candidate has won the election, and indeed he has. But, unbeknownst to me, when I switched on the news channel it was actually running an old video of George Bush Sr’s victory in 1988. My belief in Bush Jr’s victory is both true and justified, but since it is also the result of an accidental misperception it cannot be counted as knowledge. To use a different example, I possess a wristwatch that has always told the correct time. One afternoon I look at it and see that it is half past four. It is indeed half past four, and my belief to that effect is true, as well as being justified by recourse to my timepiece. However, the watch is in fact broken, having stopped at 4.30 that morning, and it was pure chance that I next checked it at precisely 4.30 in the afternoon. I have a justified true belief about what time of day it is, but it would be odd to describe this as knowledge, since after looking at my watch I would have believed the time to be half past four whatever the real time was.6
We might counter that these beliefs could not really have been justified all along, since our reasons for holding them proved to be fallible. But if we take away the subjective character of justifications and demand that they always deliver veracity then we will be left knowing nothing at all, since we very rarely possess indubitable evidence for our beliefs. The problem with all accounts of justification is that they concern the relationship between oneself and one’s beliefs, whereas what we need is an account of the connection between one’s beliefs and the worldly facts. Ever more elaborate justifications might make us feel more secure about our beliefs, but that is no good if they are not true. The American philosopher Alvin Goldman argued that knowledge depends both on what goes on in the head and on its relationship – its causal relationship – to what goes on in the world. He argued that if we want to bring the concept of knowledge within the remit of the natural sciences, so that it sits alongside such well-understood quantities as tables, genes, colours and temperatures, then we should view knowledge as a natural relationship between the external world and the knowing mind. Goldman has since spent forty years refining his approach to this task, though he assured me over the telephone that he has many interests ‘outside philosophy’ and listed ‘cognitive science, neuroscience, social psychology, political theory, law’, adding ‘Oh, and sports’. In 1967, he proposed that to know something is for one’s belief to be causally related to the object of that belief.7 Causal theories were all the rage in the 1960s and were applied to subjects such as perception, memory and action. Goldman wrote his dissertation on the latter, arguing against those philosophers who held that the reasons for an action were fundamentally different from that behaviour’s causes. He then decided to apply the same thinking to the problem of knowledge. At the time there was a sharp distinction between questions of how to justify our beliefs and what were called ‘questions of discovery’. Questions of discovery concerned how one came to an idea or a belief, and these were put in the category of psychology rather than philosophy. The dominant view was that the mental mechanisms that brought one to a certain condition had nothing to do with the matters of justification that the study of knowledge was all about.
Goldman asked us to suppose that a geologist notices deposits of solidified lava around an area of countryside and comes to believe that a nearby mountain must have erupted there several centuries ago. Assuming that there was indeed such a volcanic event, then whether her belief is knowledge depends on the causal process that induced it. If there is an unbroken causal chain between the eruption and the geologist’s perception of the lava, then she knows that the mountain erupted. Suppose alternatively that at some point in between the eruption and her perception, an open-cast mining company removed all the lava. A hundred years later, someone ignorant of the eruption decided, for whatever reason, to scatter lava around the area to make it look as though a volcano had once erupted there. In this case, the causal chain has been broken. The geologist’s belief is not knowledge because the fact of the eruption was not the cause of her believing that the volcano had erupted.
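In schematic form – a common textbook summary of Goldman’s 1967 proposal, not a quotation from it – the causal theory says:

\[ S \text{ knows that } p \iff p \text{ is true, } S \text{ believes that } p, \text{ and the fact that } p \text{ is causally connected, in an appropriate way, with } S\text{'s believing that } p. \]

In the volcano case the appropriate chain runs eruption $\rightarrow$ lava $\rightarrow$ the geologist’s perception $\rightarrow$ her belief; the mining company and the later scatterer of lava between them break that chain.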
Goldman’s account was the first ‘externalist’ theory of knowledge – so called because what turns a belief into knowledge is partly something external to one’s mind. Another aspect of externalism in the theory of knowledge is the reliability or unreliability of the mental operations used in forming one’s belief. According to externalism, such reliability is essential for a belief to qualify as knowledge. But people may not know the mental operations by which their beliefs are formed, or whether these mental operations are reliable. People often seem to know things without being able to say how. Perhaps a contestant on the Who Wants to be a Millionaire? game show has a strong feeling that Lima is the capital of Peru, although they have completely forgotten how they came by this information. Yet they feel so sure about it that they are willing to wager £500,000 that they are right. It seems unfair to judge that they do not really know the answer just because they cannot remember that they once read it in Encyclopaedia Britannica. They can provide no justification for their belief, but neither was it a lucky hunch. Even where no justification can be unearthed, if the ‘luck’ comes thick and fast enough, one might suspect more than coincidence, for the contestant may have a reliable process informing their beliefs without realizing it.
According to the American philosopher Fred Dretske, and most pet owners, animals such as cats and dogs can be said to possess knowledge – even though sophisticated justifications for their beliefs may never cross their animal minds. If canines do not require a conscious apprehension of their methods, then neither do humans. He explains:
If an animal inherits a perfectly reliable belief-generating mechanism, and it also inherits a disposition, everything being equal, to act on the basis of the belief so generated, what additional benefits are conferred by a justification that the beliefs are being produced in some reliable way? If there are no additional benefits, what good is this justification? Why should we insist that no one can have knowledge without it?8
It is no simple matter to characterize the ‘reliable’ processes of which Goldman and Dretske speak. In the earlier example, my belief in the Republican victory in 2004, though true, was not caused by the object of that belief – namely, George W. Bush’s victory in that election – but by his father’s success in 1988. However, the 2004 result may have been the cause if George Bush Jr’s victory was what prompted the broadcasters to show a repeat of his father’s election. In this case, there is a genuine causal chain from George Bush Jr’s victory to my belief about the present, yet my belief would still not be knowledge because too much luck is involved. It seems that not just any causal relationship will do. Had I known that I was watching a rerun, I would not have formed the belief that George W. Bush had won the election, so perhaps we should say that we possess knowledge only if there is no other information that would have changed our mind had we come across it. I would obviously have changed my mind about the time had I known that my watch had stopped.
However, things can get out of hand when we ask just what else we need to know in order to possess true knowledge. For example, I might read in the newspaper that the president had been assassinated.9 The report, we shall imagine, is accurate, but had I read any other paper that day, or watched the television or listened to the radio, I would have got the impression that the president had survived, because the president’s aides had been busy putting out propaganda that their boss was alive and well. By pure chance, my sole source of information was the only one publishing the truth. Perhaps I was lucky not to see the propaganda that had misled everyone else, but what if propaganda existed that was recorded but never broadcast? Or what if such propaganda was conceived by one of the president’s spin-doctors but was never discussed with his colleagues? In a sense, I would be lucky if this propaganda never materialized. Or what if there never was such an aide? Would I be ‘lucky’ because, if there had been, then I would have believed his or her lies? It seems that we cannot but get ‘lucky’ every time we succeed in forming a true belief, no matter what process leads us there.
Goldman’s ideas were adapted by Robert Nozick, the man sometimes cited after his death in 2002 as President Ronald Reagan’s favourite philosopher because of his view that so-called ‘social justice’ was incompatible with freedom. A libertarian, Nozick pointed out that any state subsidy for a certain group funded by taxation must entail nothing short of forced labour for the rest of us in order to pay for it. He was the son of Russian immigrants who had come to America to avoid just such a scenario, and from his unpromising beginnings as an overweight, nervous boy growing up in Brooklyn he became one of the world’s greatest philosophers. Looking like a bushy-eyebrowed Gregory Peck in a roll-neck sweater, he was also, by all accounts, the most handsome. Nozick gave up political thought early in his career to concentrate on more abstract areas of philosophy, such as the question of knowledge. He agreed that some causal links were too arbitrary to underpin knowledge. He favoured adding a further condition to knowledge: that you know something if and only if you would not have believed it had it been false. One’s beliefs must be very sensitive to changes in the truth for them to count as knowledge. Imagine a father who refuses to believe that his son is guilty of a terrible crime, and then is vindicated when his son’s innocence is comprehensively proven in court. The father did not truly ‘know all along’ that his son was innocent: even if the verdict had gone the other way he would still not have believed his son was guilty. His belief was built on faith, not evidence. Under Nozick’s account, faith turns out to be a poor method of deriving true beliefs, as it is completely insensitive to changes in circumstance. Faith does not, in Nozick’s terminology, ‘track the truth’. Even when the facts change, faith stands still.
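Nozick’s proposal is usually summarized in four conditions (a standard formalization using subjunctive conditionals, rather than Nozick’s full apparatus): $S$ knows that $p$ if and only if

\[ \text{(1) } p \text{ is true;} \quad \text{(2) } S \text{ believes that } p; \quad \text{(3) if } p \text{ were false, } S \text{ would not believe that } p; \quad \text{(4) if } p \text{ were true, } S \text{ would believe that } p. \]

The doubting father fails condition (3): were his son in fact guilty, he would believe in his innocence all the same, so his true belief does not ‘track the truth’.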
Nozick wondered if, when people get older and more comfortable in their views, they cease to be sensitive to the facts and their cognitive states pass from knowledge to belief. Even if the old are wiser and possess more truths, perhaps, he mused, it is the young who have knowledge. On the other hand, sensitivity to truth may be measured over far longer periods. Perhaps the laws of natural selection have favoured minds whose beliefs eventually ossify, finding this to be the best way of ensuring a longer-term relationship with the facts. Perhaps we might otherwise become oversensitive, ending up as nervous wrecks who can’t believe in anything. Sensitivity to the truth does not mean fragility. The most sensitive are not necessarily those who are most ready to change their minds at the slightest provocation. The meek will not inherit the truth any more than they will the earth. Being sensitive to the truth can mean being extremely insensitive to background noise and other irrelevancies. Where, for Plato, it was the justification of our beliefs that tethered the ‘statues of Daedalus’ and bound us to the truth, for externalists such as Nozick it is a natural, inherited propensity to believe certain kinds of things because of our evolutionary needs.
Some philosophers object that an evolutionary process would in fact deliver nothing of the sort, because it is geared to ensuring not that our beliefs are true, but that they enable us to better survive and reproduce. It is on these grounds that the Christian thinker Alvin Plantinga denies that it is possible to be ‘an intellectually fulfilled atheist’. Although he now teaches at the Catholic Notre Dame University in Indiana, Plantinga is a Calvinist who abandoned a generous scholarship from Harvard to study at Calvin College, from where he graduated in 1954. As America’s foremost religious thinker, Plantinga has earned two entries in Daniel Dennett’s spoof dictionary, The Philosophical Lexicon: ‘alvinize, v. To stimulate protracted discussion by making a bizarre claim. “His contention that natural evil is due to Satanic agency alvinized his listeners”’, while ‘planting, v.’ is ‘To use twentieth-century fertilizer to encourage new shoots from eleventh-century ideas which everyone thought had gone to seed; hence plantinger, n. one who plantings’.
With his tall, lean frame and ‘chin curtain’ beard, he certainly looks the part. Echoing Darwin’s doubt, Plantinga insisted to me when I met him at the Notre Dame campus:
If you’re an atheist and a naturalist [someone who does not believe in miracles] then you have to ask yourself the question, ‘How likely is it that the faculties which were designed by natural selection to promote reproduction, fitness and survival and not to promote true belief will be cognitively reliable – that is, reliable in providing more true beliefs than false beliefs?’ I think the answer is: not very likely at all.
His argument is that under a materialist conception of the world, the particular contents of our beliefs have no causal role to play in our behaviour, because ‘All that counts is that the neurophysiology be right – that the right muscle contractions occur, the right neural events occur. It doesn’t matter what content gets associated one way or another with that’ – rather as it doesn’t matter whether grass looks red or green to us, so long as our perception helps us to identify it as fertile pasture. For example, suppose someone in humanity’s prehistory sees a tiger one day and believes he should run away as fast as he can. This belief has greatly aided man’s survival, but it need not be the same belief as we would have had in the same circumstances. ‘Perhaps,’ suggests Plantinga, ‘he liked the idea of being eaten, but believes that tigers are vegetarian and runs away looking for a better prospect, or perhaps he thinks tigers are cuddly pussycats and that running away is the best way to play with them.’ As far as natural selection is concerned, all that matters is that he runs away. There is selection pressure to form a belief that one should run away, but no such pressure that one should run away for one particular reason rather than another. With so many possibilities – each equally effective in helping us to survive in a tiger-infested jungle – it is unlikely that the caveman’s belief happens to be the true one. From this, argues Plantinga, ‘it is a short step to doubting all or most of our beliefs – including our belief in evolution itself, thus rendering the naturalist project self-defeating’.10
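Plantinga’s later, more formal presentations compress this argument into a single probabilistic claim (the notation is his; the gloss here is mine):

\[ P(R \mid N \wedge E) \ \text{is low or inscrutable,} \]

where $R$ is the proposition that our cognitive faculties are reliable, $N$ is naturalism and $E$ is the claim that those faculties were produced by unguided evolution. Anyone who accepts $N$ and $E$, he argues, thereby acquires a defeater for $R$ – and so for everything else they believe, including $N$ and $E$ themselves.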
Religious believers have no such worries, as God has given them their cognitive faculties so that they might know the Truth and ascend to heaven on its back – not merely so that they can avoid predators and find sustenance. However, it is not so easy to drive a wedge between the content of our beliefs and their consequences. If different beliefs can lead to the same behaviour in all conceivable circumstances, then it would be fair to conclude that they are not in fact different. So far, we may not have encountered the predicament in which alternative beliefs about tigers diverge, but given the countless number of situations in which generations of our ancestors found themselves, it is fair to assume that they would have come across the important ones at some time or another. When the tiger catches our caveman it will be easy enough to see that the beast is far from the playful vegetarian he imagined. The belief that tigers are highly dangerous, on the other hand, is not so susceptible to reversal. Put simply, true beliefs about man-eaters are more likely to survive than false beliefs because there are fewer opportunities to refute them. By contrast, Plantinga’s God could not give us an accurate view of the world by giving us true beliefs at the outset, since changes in our environment will render any comprehensive ‘pre-programming’ obsolete. He would need to be constantly intervening in his creation to plug the holes in our knowledge. To a philosophical naturalist, our sensitivity to such changes depends on how difficult it has been for our species to survive in its environment. A swiftly changing world may be expected to produce sharper minds than an Earth without seasons. There are, therefore, some grounds for suggesting that if the world were a better place, we would be less well-equipped to appreciate it.
The peculiar problem posed by scepticism in all its forms – from the Matrix to Plato’s Cave – is that it lurks in situations to which we cannot be sensitive. We would still hold all our current beliefs even if we were pod-dwellers or the playthings of Descartes’s deceitful demon. That we were in such a predicament would not be a truth that could be ‘tracked’, so Nozick would say that even if we believed it, we could not know it. By definition, such scenarios admit of no means by which we could correct any mistaken opinions about them. But even those truths that we can be said to know must remain at a certain distance from us, since the mechanisms that connect us to the facts and give us knowledge are simply more facts. So long as the truth to which we are connected is external to us, what links it to us will also be external, even though there may be glimpses of the nearest end of the tether. The desire for certainty – the desire for an answer to the ‘scandal’ alleged by Kant in 1787 that the existence of the world around us must ultimately be taken on faith – represents a false dream. Thanks to Goldman and his followers, we can talk with confidence about how we might come to acquire knowledge. But the foundations of knowledge are outside us, just as its objects are – that is, in the processes of the natural world, rather than within us as Kant hoped. It is worth asking what kind of knowledge could be underwritten from within. We could achieve certainty only if the objects of our knowledge were figments of our imagination – that is to say, if the external world itself were brought within us. The desire for certainty amounts to the desire to become Descartes’s demon.