She …
… speaks things in doubt
That carry but half sense. Her speech is nothing.
Yet the unshaped use of it doth move
The hearers to collection; they aim at it,
And botch the words up to fit their own thoughts.
—Gentleman (on Ophelia’s madness), William Shakespeare, Hamlet
To understand delusions we need to work out the neurological or neurochemical changes linked to them. But we also need to interpret what is going on at the human or psychological level. Delusions themselves involve interpretations, although deviant ones, of what the deluded person experiences.
Delusions may be distortions of either intuitive or reflective interpretation. Perhaps there is not always a sharp boundary between the two. As with night and day, there may be dusk in which the intuitive and the reflective shade into each other. Where exactly the boundary falls is relatively unimportant: nearly all of the relevant cases are clearly on one side or the other. In at least one direction there is interaction. Intuitive distortions sometimes cause distortions of the reflective interpretation needed to back them up.
“Delusion” is hard to define. It is common to think of delusions as false beliefs, irrationally based and stubbornly held. This is not utterly wrong, but it is too simple.
Instead of attempting a watertight definition, I will outline a cluster of central features of delusions, or at least of the delusions for which psychiatric help may be relevant.
There are problems about how strongly some delusional “beliefs” are really held. There is “double bookkeeping.” The person may say, “The staff here are poisoning my food,” and then happily go off to lunch. Apparent statements of belief may be undermined by a mocking demeanor or a manic cackle.
One psychiatrist’s patient claimed to have had a baby at Buckingham Palace. This claim, if persisted in by a woman with no royal connections, seems to be a false belief amounting to a delusion. However, there are possible interpretations of her saying “I had a baby at Buckingham Palace” that do not involve her believing the statement. She could be toying with the idea that it is true, or acting out a fantasy about being a princess. It could be something said to mislead or annoy the psychiatrist. It could be some kind of joke. Things said by people seeing psychiatrists can have the ambiguities of comments by Shakespearean clowns or fools.
Even where belief does come in, there is a range of possibilities. Perhaps the person believes the delusion quite literally, or believes only part of it, or wavers about it at different times. There is also nonliteral “belief,” also found in people without a psychiatric diagnosis. Joan Didion described her “year of magical thinking” after her husband John died: “Of course I knew John was dead … Yet I was myself in no way prepared to accept this news as final; there was a level on which I believed that what had happened remained reversible.” Clearing out his clothes, she found a reluctance to give away his shoes: “I stood there for a moment, then realized why: he would need shoes if he was to return. The recognition of this thought by no means eradicated the thought. I have still not tried to determine (say, by giving away the shoes) if the thought has lost its power.”1
A woman with a diagnosis of schizophrenia had a delusion linked to vomiting. Her later account was explicit about her “belief” not being literal: “I got the idea that in taking food I was in a sense eating the body of my youngest child. I did not believe this to be the literal case, but the aversion to food was because of this association.”2 Many delusions are better described as being “on the belief spectrum” rather than simply as beliefs.
Delusional beliefs are often false, even bizarrely so. But we all have false beliefs, and delusions sometimes may happen to be true. Some paranoid people really are persecuted. What matters is whether experience or other evidence is interpreted in ways likely to produce beliefs that track reality.
We all have only limited rationality.3 So coming to beliefs by imperfect means of tracking reality does not mark off deluded people from the rest of us. Irrationality is a matter of degree. Delusional beliefs, though, are based on extremely poor means of tracking reality.
Typically they involve overinterpretation, often to an extreme. David C. Boyles describes this as part of his experience of schizophrenia: “Then the songs on the radio started to be singing directly to me. There was a messaging theme from all the songs. I was supposed to choose. I was stuck in between two worlds … Things in the environment were taking on new meanings. I noticed black birds flying in circles in the sky. It was a symbolism to me, of evil waiting to catch its prey.”4 When the phone rang at night, he rejected his brother’s telling him it was a friend, thinking “it was evil calling, coming to get me.” When he was driving at night, the brightness of the oncoming headlights was a “contact from God.”5
What about fanatical followers of dubious religious or political systems of belief? Psychiatric delusions are only a minority of stubbornly held irrational beliefs. Delusions need to be distinguished from such beliefs coming from indoctrination or other social influences. In the psychiatric context, delusions are stubbornly held irrational beliefs that come from some personal distortion of thought. In The God Delusion, Richard Dawkins treats belief in God as a delusion in the sense of an irrational false belief. He does not see it as a psychiatric delusion, but as one heavily influenced by indoctrination and social pressures. As his book shows, he sees it as a delusion for which the cure is argument, not psychiatric treatment.
Delusions are stubbornly held. Few psychiatrists expect to reason someone out of a delusion. If the victim of a delusion does question it, checking sometimes seems only to generate new experiences apparently confirming it: “Then I began to have the feeling that other people were watching me. And, as periodically happened throughout the early stages, I said to myself that the whole thing was absurd, but when I looked again the people really were watching me.”6
Different kinds of delusion may need different explanations. Hearing “voices” may have different causes from delusions of alien control, where the person thinks some of his or her actions come from the will of someone else. “Thought insertion,” where the person believes “This thought is in my mind, but is put there by someone else,” may need a different explanation from Cotard’s delusion: “I am dead.”
Some delusions are only about a specific topic. Some of these are bizarre. People with Capgras delusion think a family member or friend has been replaced by an identical impostor. Some, equally specific, are much less peculiar. People with rejection sensitivity are sometimes deluded about other people’s attitudes toward them: “The doctors and nurses in this hospital have all taken against me.” At the other extreme are wide-ranging delusional systems, where the individual sees the entire world through a ramified system of incorrect beliefs. Some delusions last a long time; others are unstable and fleeting.
Some delusions (including thought insertion, delusions of alien control, Cotard’s delusion, and others) involve distorted awareness of oneself or one’s thoughts or actions. Others, such as delusions of persecution, are not about oneself in the same immediate way. Different delusions may involve different distortions of perception or thought. A single delusion may have more than one cause. And what explains the origin of a delusion may not explain its content.
Some great scientists and philosophers, including Wilhelm Ostwald and Ernst Mach, were at first skeptical about the existence of atoms. Einstein traced this back to their “stick to the facts” attitude: “This is an interesting example of the fact that even scholars of audacious spirit and fine instinct can be obstructed in the interpretation of facts by philosophical prejudices. The prejudice—which has by no means died out in the meantime—consists in the faith that facts by themselves can and should yield scientific knowledge without free conceptual construction. Such a misconception is possible only because one does not easily become aware of the free choice of such concepts, which, through verification and long usage, appear to be immediately connected with the empirical material.”7
In this obliviousness to interpretation these thinkers are, on a grander scale, like the rest of us. Most of us do not know—or else forget—Proust’s thoughts about the intellectual processes involved in such an apparently simple thing as seeing a face. We do not think about the possible role of mirror neurons in seeing what other people are doing. Few people notice how much they impose interpretations on what they see or hear. So interpretations can seem obvious, not open to serious challenge.
A Muslim taxi driver asked me what my religion was. When I said I did not believe in God, he was astonished. How could I possibly not believe? A large issue for a taxi ride, but I said something about reluctance to believe things without good evidence. He expressed further astonishment. How could I not see the evidence in everything around us? He unconsciously made the much debated assumption that order or beauty in the world must come from a designer. The striking thing was the unawareness of any assumptions or interpretation on his part. He just saw the work of God in all around him. Interpretation was invisible to the taxi driver as it was to Ostwald and Mach.
Often we do know about our reflective interpretations. Playing social chess, we consciously estimate other people’s reactions to what we do. But intuitive interpretation, lying behind the way we just “see” things, is rarely conscious. Some people are completely unaware of these processes of interpretation. Others notice them only in reflective moments. Because interpretations usually seem to be just how things are, it is easy to be taken in by a false intuitive interpretation. And for deluded people, as for taxi drivers and great scientists, it is hard to be persuaded of a mistake.
Delusions of alien abduction are a case in point. Some people have awoken from sleep to find they cannot move. They interpret this as having been paralyzed by aliens who are abducting them. There is a plausible alternative account of their paralysis. In some stages of sleep, motor output from the brain is blocked, so that our body does not do things like lash out at people who threaten us in dreams. This sleep paralysis normally goes away when we wake up. But under some conditions there is a delay, resulting in a frightening experience of waking up unable to move.
It is very hard to change people’s minds about having been a victim of alien abduction. In her study of this, Susan Clancy gives various explanations, of which the first is that “the abduction memories and the concomitant emotions feel real to them.”8 Why do the abduction memories feel real? The invisibility of interpretation probably plays a role. We see these people as having an experience of paralysis and wrongly interpreting it as part of alien abduction. But to them the experience presented itself as an experience of being abducted by aliens, with the element of interpretation hidden from view. So any skeptic seems to be denying that they had the experience that they remember so vividly. If this is right, it may be a factor in other delusions as well.
One account of delusions sees them as rational responses to strange experiences. They are thought to come from the person’s trying to make sense of abnormal experiences caused by some neurological or neurochemical failure.9 This model has been applied to hearing “voices,” delusions of alien control, and thought insertion.
On this account, the “voices” people hear come from a breakdown in the brain mechanisms that let us distinguish real sounds from imagined ones. Christopher Frith suggests that some symptoms of schizophrenia reflect a disorder of self-awareness. They may come from a breakdown in the system that underlies our normal awareness of our own goals and intentions.10 Delusions of alien control involve a failure to realize that my action was brought about by my own intention. People who hear “voices” ascribe the products of their own imagination to external speakers. And those with thought insertion do not recognize that they themselves are the authors of certain thoughts rising up in their own minds.
Christopher Peacocke has suggested that what the person lacks an awareness of might not be intentions and goals. The missing awareness may be of something more fundamental. To think a thought is a kind of action, similar in this way to physical activity, normally active rather than passive. But persons with these delusions experience their thoughts or actions as passive. This misplaced sense of passivity may be the key to both thought insertion and “alien control.” The deluded person might know his or her intentions but just not realize that he or she is actively carrying them out.11
These are all failures in the brain’s self-monitoring mechanisms that normally give rise to the sense of agency. When I decide to drink some water, the brain monitors this intention. My awareness of it accompanies my lifting the glass to my lips. This is what keeps our actions on track. If the monitoring fails, I may find myself lifting the glass, apparently without intending to do so. The idea of being controlled by someone else may seem a possible explanation of this.
This model may fit thought insertion. Having a thought can be seen as a kind of performance, again coming from an instruction that is monitored. Of course it does not feel as if a thought is preceded by such an instruction. When we see the leaves fall, a thought about Hopkins’s poem “Spring and Fall” (addressed to young Margaret) might come to mind. We are not aware of any process coming between seeing the leaves and the thought. But as John Campbell has suggested, there must be some process, no doubt unconscious, by which seeing the leaves causes the thought. Perhaps our thoughts are kept on track by a monitoring process, as our actions are. Breakdown of the monitoring system could make a thought seem not to be under one’s own normal control. As with “alien control” of actions, the sense of passivity could create an impression that the thought comes from “outside.”12
Why are delusions so persistent, clung to so hard? Some of them have a peculiar vividness and intensity. The writer with the pen name “John Custance” was diagnosed with manic-depressive psychosis. He took from a psychiatry textbook the phrase “heightened sense of reality.” This fitted times when “the outer world makes a much more vivid and intense impression on me than usual.” One aspect of this he noticed was heightened visual experience. Others were rapid leaps from one idea to another, and a sense of ineffable revelation in which it seemed that “all truth, all the secrets of the Universe were being revealed, as though I had some clue, some Open-Sesame to creation.”13
Part of the stubbornness of delusions could be that this vividness and sense of revelation give such a powerfully authentic impression that skepticism stands little chance against them.
Our knowledge of some things about our bodies and our mental lives comes from some kind of inner “tag.” I know which hand is my right one, without being able to say how I know. I know a name is on the tip of my tongue without understanding how. Things are tagged in many different ways. Some tags may be linked to delusions. They include tagging things as familiar or strange, tagging things as being or not being part of “me,” and tagging thoughts or actions according to what I intend them to be.
Some delusions involve disturbance of our usual ability to recognize people. Obviously this could be linked to a failure of our normal systems for recognizing faces. But sometimes the explanation could involve tagging.
One component of recognition can be thought of as a “familiar/strange” tagging system. Among disorders of recognition, perhaps the most dramatic is the previously mentioned Capgras delusion: the belief that someone close has died and been replaced by an identical impostor. A related delusion is that someone’s appearance has changed so they now look just like someone else. And there is the Fregoli delusion (named after an actor and impersonator) of a persecutor who turns up disguised as various other people. Perhaps a “familiar/strange” tagging system, working too hard, generates these indescribable but intense and conviction-laden experiences.
Andrew Young and others have put forward an account of a failure that could be involved in Capgras delusion.14 There is evidence for the existence of at least two anatomically different systems involved in recognition. There is “overt” recognition, based on analysis of information about the person’s appearance, voice, and so on. And there is “covert” recognition, involving an emotional response that tags the person as familiar. Young suggests that the covert system may be out of action in those with Capgras delusion, with the result that, although they know the person in front of them looks and sounds exactly like their husband, wife, or partner, the absence of the normal emotional response gives a strong tag of unfamiliarity, leading to thoughts about an impostor.
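The structure of this two-route account can be put schematically. Here is a toy sketch (in Python; the function, the thresholds, and the numbers are invented for illustration and are not part of Young’s model):

```python
def recognition_judgment(overt_match: float, covert_affect: float) -> str:
    """Toy sketch of a two-route account of recognition.

    overt_match:   how well appearance and voice fit the known person (0 to 1)
    covert_affect: strength of the emotional familiarity response (0 to 1)
    """
    if overt_match > 0.9 and covert_affect > 0.5:
        return "my wife"  # normal recognition: both routes agree
    if overt_match > 0.9 and covert_affect < 0.1:
        # the Capgras situation: perceptual identity without felt familiarity
        return "someone who looks exactly like my wife, but is not her"
    return "a stranger"  # intermediate cases are left out of this toy

# With the covert route out of action, even a perfect perceptual match
# is tagged unfamiliar:
print(recognition_judgment(overt_match=1.0, covert_affect=0.0))
```

The point of the sketch is only the conjunction: the impostor thought needs both an intact overt match and a missing covert response.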
Another system that may be disrupted is the tagging of things as being or not being part of me. The idea of things being tagged as “me” or “not me” gets some support from the difficulty of describing experiences of peculiarly vivid awareness of “being me” that people sometimes have. In The Idiot, Fyodor Dostoyevsky draws on his own experience in describing the intensity of Prince Mishkin’s consciousness just before the onset of an epileptic fit. It was “purely and simply an intense heightening of self-awareness … and, at the same time, the most direct sense of one’s own existence taken to the highest degree.”15
It is striking that Dostoyevsky does not specify any visual, tactile, emotional, or other feature of the experience. There is no equivalent of Proust’s madeleine dipped in tea that might serve as a vehicle for this direct sense of one’s own existence. The lack of such a vehicle would fit with the cause of the pre-epileptic experience being some kind of boosted functioning of an unconscious tagging system.
Gerard Manley Hopkins had moments of heightened self-awareness. He emphasized the distinctiveness of the experience. He also stressed its incommunicability—something that would be expected if it came from unconscious tagging rather than from interpreting some sensory or emotional feature of the experience. He said, “When I consider my self-being, my consciousness and feeling of myself, that taste of myself, of I and me above and in all things, which is more distinctive than the taste of ale or alum, more distinctive than the smell of walnutleaf or camphor, and is incommunicable by any means to another man (as when I was a child I used to ask myself: What must it be to be someone else?). Nothing else in nature comes near this unspeakable stress of pitch, distinctiveness, and selving, this self-being of my own.”16
As with Dostoyevsky, the incommunicability of the nature of Hopkins’s intense experience of the self suggests an unconscious tagging system. If there is such a “me/not me” tagging system, its failure or malfunctioning could be part of the explanation of the inarticulate certainty that “I am dead” found in those with Cotard’s delusion.
Sometimes in dreams or imagination we make mistakes that can be seen in two ways. I dream I am having a conversation with Mahatma Gandhi, but the image of the face is of Jawaharlal Nehru. Was I dreaming of Gandhi but making a mistake about his face, or was I dreaming of Nehru but getting his name wrong? I may simply know the answer to this: “It was Gandhi—I just got the face wrong.” The dream is mine and my sense of what was going on overrides the visual discrepancy.
There is a system of labels or tagging bound up with what I take myself to be doing. If the person was tagged as Gandhi, then that is what my image meant even if I did get the face wrong. (It is said that Warden Spooner, after preaching a sermon in New College Chapel, corrected himself: “Every time I said ‘Aristotle,’ I should of course have said ‘Saint Paul.’ ” Even an eccentric preacher is the authority on what he meant to talk about.)
Normally we cannot explain what this tagging consists in by citing a feature of the experience. There is no equivalent of the caption appearing below someone’s face on television. (“It may be Nehru’s face, but underneath it says it is Gandhi.”) Tagging seems to involve no conscious interpretation of any sign. Whatever goes on in the process of tagging is unconscious. All we are aware of is the end result: our conviction that it is Gandhi. No one is going to persuade us that we are wrong. If tagging is linked to delusions, they might be clung to so strongly because of some carryover of this unshakeable conviction.
Why do people cling to delusions so stubbornly? Perhaps the heightened sense of reality plays a role. So, perhaps, does the unshakable certainty given by tagging. Even if both factors are relevant, there are still unanswered questions about delusions’ deep roots. When one delusion is given up, why does another often emerge to replace it? This substitution suggests a “wellspring” model, in which delusions rich in detail keep bubbling up to the surface of the mind. To say the wellspring must be some kind of unconscious mental activity is not to explain it. (Though there may be some link with whatever unconscious processes generate dreams, which also are often startlingly specific.) It is hardly news to say that there are processes here of which we understand almost nothing.
Some accounts of delusions emphasize “poor reality testing”—the misinterpretation of evidence. Other accounts suggest that delusions may reflect differences of epistemological stance—distortions deeper and more central in the person’s way of interpreting the world.
If delusions are partly defined as beliefs that are based on bad tracking of reality, to explain them by “poor reality testing” is close to tautological. To have content, the explanation has to cite fairly specific distortions of perception or thought. One possibility is that delusions come from highly exaggerated versions of distortions to which we are all prone.
“Normal” people weigh evidence in ways that are systematically skewed. When people are given a description of another’s personality (shy, meek, tidy, and so on) and asked to guess the probability of his being a farmer or a librarian, they tend to see which of their two stereotypes the personality fits. They ignore the fact that there are many more farmers than librarians.17
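The neglected factor is the base rate, which Bayes’ theorem makes explicit. A minimal worked example, with invented numbers (not from the original study): suppose the personality sketch fits 90 percent of librarians but only 10 percent of farmers, and suppose farmers outnumber librarians twenty to one. Then:

```latex
\frac{P(\text{librarian}\mid\text{sketch})}{P(\text{farmer}\mid\text{sketch})}
  = \frac{P(\text{sketch}\mid\text{librarian})}{P(\text{sketch}\mid\text{farmer})}
    \times \frac{P(\text{librarian})}{P(\text{farmer})}
  = \frac{0.9}{0.1} \times \frac{1}{20} = \frac{9}{20}
```

Despite the strongly “librarian” sketch, the odds still favor farmer, roughly 20 to 9. Ignoring the second factor, the base rate, is exactly the distortion described.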
Our judgment is distorted by irrelevant factors, giving too much weight to the first case (“anchoring”) or to cases involving people we know or who are famous (“salience”). We more easily accept evidence fitting our preconceptions than evidence against them. We underinterpret evidence, not seeing the wood for the trees. Or we overinterpret it, jumping to conclusions, projecting patterns onto it, or seeing causes in mere conjunctions.18
Where “poor reality testing” seems the right account, exaggerated versions of “standard” distortions such as salience and anchoring may help to generate and maintain delusions. Overinterpretation may play a role in paranoia. There is evidence that people with delusions are more ready than others to jump to conclusions, even about things not relevant to their delusions.19
There are some questions left unanswered by this approach. For instance, why are delusions so specific? “Poor reality testing” does not explain this. One woman thought she was being persecuted by spectacle-wearing doctors and nurses, who used their glasses to refract too much light into her eyes.20 She was not just overestimating the probability of this happening. Why persecution by means of light? Why with spectacles?
Another remaining question is why many delusions are so bizarre. There are thoughts that other people are robots, or that the whole world depends on me, or that my best friend has been replaced by an identical impostor. Being so bizarre, these beliefs suggest something more radically wrong than cognitive biases or unusual experiences. And the belief of people with Cotard’s syndrome that they are dead goes beyond the bizarre to the paradoxical or impossible.
People’s accounts of their delusions are sometimes so bizarre as to be almost unintelligible. Philosophers such as Ludwig Wittgenstein, W. V. O. Quine, and Donald Davidson in different ways have stressed the links between meaning and belief. If, on my interpretation of what you say, your beliefs come out as unintelligible or highly irrational, this raises the question: Have I misinterpreted you? If it all makes more sense on a different account of what you meant, should not that account be preferred?
John Campbell has pointed out that this kind of rationality constraint on how we interpret people raises a problem about interpreting delusions.21 Can someone who says he is dead, despite walking and talking, really understand “dead” as we do? Can someone who claims her husband has been replaced by an impostor, and does nothing to test this by discussing past shared experiences, really understand the meaning of what she says?
In his discussion of this question, Campbell quotes a person with schizophrenia who said that his words bear two meanings: the meanings they ordinarily have and the meanings he is trying to use them to express. There is no obvious answer to the general question of whether deluded people have a proper grasp of the meaning of their claims. Perhaps “grasping the meaning” is a family of achievements, and each may admit of degrees.
Such beliefs as that other people are robots, or that the whole world depends on me, do not demonstrate simply a poor grasp of evidence. Perhaps they reflect, even more, some difference in the framework brought to bear on interpreting it.
People’s beliefs form a system that functions in a holistic way. When predictions based on things we believe turn out to be mistaken, we have choices about which of our beliefs to change. Some beliefs may have very shallow roots. We thought it was not summer, but one swallow changes our mind. Or some belief may be tenaciously clung to, regardless of the case against it. As Cardinal Newman said of his religious belief, ten thousand difficulties did not make a doubt.
We need the right balance between responses that are too rootless and those that are too rigid. The extreme of rootlessness is having such a tenuous commitment to a belief system that all of it is destroyed by some slight evidence against any part of it. The extreme of rigidity is clinging to a belief system so tenaciously that no evidence is ever allowed to change any of it. But what is the right balance? Can we start to explore this by looking at cases where the balance has been absurdly misjudged?
There are some striking instances of how the rigid approach can explain away evidence. Confronted by the fossil evidence for evolution, Philip Gosse argued that, to test our faith, God had arranged fossils to look as if evolution had happened. His fixed point was the truth of the Bible story, and the rest of his thinking had to be skewed to fit this.
Another, political, instance comes from the British Communist Party in 1939. The Central Committee of the Party had to discuss the Nazi-Soviet pact. With the pact came an order from Moscow: they were to withdraw support from the war against Hitler and work for Britain’s defeat. Many members had joined because the Party seemed to provide serious opposition to the Nazis. The new policy would make them go against their deepest political instincts.
Many of them had also adopted, as a fixed point in their system of beliefs, the idea that the Soviet Union could do no wrong. The transcripts of the debate show them agonizing as they tried to retain this fixed point in their system by skewing other beliefs. Some bending and squeezing would make it easier to see the Soviet Union as right. Suppose democracy and fascism were not importantly different. Or suppose the British Empire was as bad as Nazi Germany. Or Germany was so weak as not to be a threat. Or suppose Britain and France were worse aggressors than Nazi Germany. None of these claims was plausible, but each was adopted by some members of the Central Committee in the effort to defend the fixed point in their system.22
Gosse on evolution, and this response to the Nazi-Soviet pact, are two extreme cases of the holism of a belief system being exploited to protect a particular belief. In these cases the evidence is tortured by stretching and squeezing to make it fit the fixed point in the system. The implausibility is obvious. The same is true of many delusions, as with the identical impostor in Capgras delusion.
Few philosophers would be surprised by the thought that there is an overlap between philosophy and psychiatry. Perhaps less attractive is the thought of an overlap between philosophy and madness. Some people become interested in philosophical questions as their sanity starts to crumble.
The sailor Donald Crowhurst disguised his failure in a round-the-world race by sending radio reports of false positions to suggest he was winning. Worry about the fraud being detected, and a long lonely time in a hostile sea, may have led him to lose touch with reality. As this happened he started to write in a logbook a section headed “Philosophy.” His thoughts included these: “The process we call Mathematics. The flower of basic intelligence. Ideas can be manipulated. If manipulated under a correctly formulated set of rules, they produce new results which clearly reveal aspects of the original concept which, though valid, would not have been so clearly revealed. The idea that Mathematics is the language of God, however, possesses more poetry than abstract validity.” (So far, a recognizable if unenticing philosophical thought about mathematics. But a worrying note creeps in.) “It should be restated as Mathematics is perhaps the only certain common ground man TODAY occupies in the Kingdom of God.” (And soon sanity is clearly lost.) “That these sentences would at first sight apparently be devoid of physical meaning is hardly surprising, for if we had a complete understanding of their meaning we would indeed have arrived at the stage it is now the object of the exercise to predict.”23
In this Donald Crowhurst was not alone. One of the striking features of people on psychiatric wards is how much their conversation is about topics also discussed in philosophy journals. Is the physical world the only world? Does it exist outside my mind? Could other people be unconscious robots? Is there a God? Do we have free will? Is telepathy possible? The atmosphere of the discussion is different, but the topics overlap.
One thing some people on psychiatric wards have in common with philosophers is awareness that the commonsense interpretation of the world is not the only one. It can seem that people on psychiatric wards take seriously forms of skepticism that philosophers discuss only academically.
John Custance, whose “heightened sense of reality” and “ineffable sense of revelation” during his manic episodes have been noted, was also interested in philosophy. He was a striking case of someone whose sense of revelation led him to treat his belief in the significance of his abnormal experiences as a fixed point. To preserve this belief, he changed other beliefs, including philosophical ones, undeterred by considerations of plausibility.
He described one of his abnormal experiences. In one room of the hospital were framed pictures, whose glass reflected the window on the opposite wall and the buildings outside. Over a period of weeks he found that the reflections became distorted: “The chimneys left the vertical plane and moved round to the horizontal, eventually to forty or fifty degrees below the horizontal, while the reflection of the building itself became correspondingly curved, until the whole vertical structure formed a sort of inverted U. This puzzled me greatly. I don’t think I was horrified at first. What could it mean? My vision was otherwise quite normal; I could play badminton, billiards, and so on. But whenever I sat in one of those chairs and looked at the prints, I could see this strange phenomenon. Certainly I was bewitched. But that was no new discovery; it did not frighten me more than I was frightened in any case. Then, suddenly, the answer came. Bishop Berkeley was right; the whole universe of space and time, of my own senses, was really an illusion.”24
John Custance was unusual in knowing a bit about Berkeley’s philosophy. But perhaps, without mentioning philosophers, this kind of strategy may be adopted by others trying to understand their strange experiences? An interesting discussion by Louis Sass links psychiatric delusions with philosophical discussions of skepticism.25 Sass looks afresh at the much discussed case of the German judge, Daniel Paul Schreber, who in 1903 published a notably articulate account of his psychiatric illness.26
Schreber had a ramified delusional system in which he heard voices and sometimes saw two suns. He had a unique relationship with God, who depended on him and who contacted him through “nerves.” Schreber had the solipsistic view that the world and other people depended in various ways on his own mind. Some events were miracles that depended on him, while other people often had only a problematic existence. “The human forms I saw during the journey and on the platform in Dresden I took to be ‘fleeting-improvised-men’ produced by miracle.”27 But at times his grip on his own existence as the person who had his experiences seemed precarious; he would describe the experiences impersonally, as though they occurred but did not belong to anyone.
Louis Sass takes up the strand of solipsism in Schreber and compares it with Wittgenstein’s discussion of solipsism. Wittgenstein suggested that solipsistic thoughts are more likely to arise when a person is passive. When we walk around, perhaps knocking things over and picking them up, we are more likely to be aware of objects’ independent reality than when we are sitting still and staring. Schreber’s delusions were embedded in a life that fit Wittgenstein’s view. Apart from brief and reluctant walks, Schreber liked to sit still all day at the same place in the garden.
More importantly, Wittgenstein makes conceptual points about the paradoxes of solipsism. Arguments for solipsism are usually conceptual rather than empirical. The evidence for the existence of things other than me is not poor. The solipsist says that no evidence is enough to prove their existence beyond doubt. The same line is taken about proving that other people have experiences. By setting the standard of proof unattainably high, the solipsist makes it impossible to defend the view that things and experiences exist outside my mind. Wittgenstein’s response is that if you make it impossible for others to have experiences, it becomes empty to say that they belong to you: “If as a matter of logic you exclude other people’s having something, it loses its sense to say that you have it.”28 Sass links this up with the way Schreber’s impersonal descriptions suggest only a weak sense of himself as the person having them.
Sass’s application of Wittgenstein suggests a new use for philosophical discussions of the implications of deviant beliefs. These implications may suggest possible experienced consequences for deluded people who actually hold those beliefs. Perhaps this applies especially to people with a distorted sense of themselves and of their own agency. But it may be possible to build on Sass’s approach by asking about more general links between philosophical beliefs and psychiatric disorder. The links could go either way. A deviant epistemology might distort the interpretation of normal experiences; or, as with Custance, beliefs about epistemology might be altered to defend the validity of a delusive experience. Either way, it seems worth looking at what happens to deluded people’s sense of plausibility.
Perhaps someone with Capgras delusion has the neurological deficit that Young and others describe: when a familiar person appears, the usual emotional response is lacking. But more than this must be missing. If you are someone I usually warm to, but today I feel no emotional response to you, I may look for an explanation. Perhaps I have a hangover. Perhaps today you are using some perfume I do not like. Perhaps things said last time have left some chilliness between us. If none of these seems true, I will go on looking. But one explanation I will not be tempted by is that you have been replaced by an identical impostor. As a convincing story it ranks below the school excuse that “the dog ate my homework.” Capgras delusion carries with it the loss not only of an emotional response but also of a sense of plausibility.
In delusions, as well as pressures to distort, there is not the usual restraining thought: “Surely all this is too unlikely?” To think about what may be going wrong, it would be helpful to have a clearer account of the undistorted constraints of plausibility.
We often identify delusions by their bizarre implausibility. The distortions of thinking in Gosse’s interpretation of the fossil evidence, and in the 1939 Communist Party Central Committee’s interpretation of the political situation, have a mainly social rather than a psychiatric explanation. They come from such strong indoctrination (or self-indoctrination) in a religious or political ideology that thinking is distorted to protect the ideology’s central beliefs. Here the religious and political cases contrast with the psychiatric ones. But what the “social” and the “psychiatric” distortions of thought have in common is a disregard for the constraints of plausibility. It is this that makes such thinking a very poor guide to tracking reality.
Linking these different distortions of thought by their shared disregard of implausibility raises questions. What are the normal plausibility constraints? And why should they be respected? When is it reasonable to give up a deeply entrenched belief because of new evidence against it? When is it reasonable to use the entrenched belief as a reason to doubt the evidence? When is it reasonable to accept someone’s testimony and when is it not? Should we prefer a simple and elegant theory that fits nearly all the evidence, or a complex and untidy account that fits all of it? How much evidence is needed to turn a hypothesis into a fact?
One hope has been that science and philosophy, partly by extrapolation from obvious cases, might be able to explain what plausibility is. Perhaps they might even generate rules to steer people toward the more plausible interpretations of the world. Such an inquiry might highlight misguided strategies of thought underlying delusions.
If epistemology and philosophy of science gave this clear guidance, we would have a map of the plausibility constraints on beliefs. We might then see whether a deluded person lacks the whole map or only some parts of it. But those parts of philosophy disappoint this hope. Books on philosophy of science are not rulebooks for scientists trying to choose between hypotheses.
Jerry Fodor has argued that the holistic nature of reflective interpretation makes it unlikely that we will find specific strategies for deciding about the plausibility of rival beliefs.29 Reflective interpretation is indeed holistic, but the pessimistic conclusion may not follow. It seems worth trying to approach the general question by looking at some ways in which we do draw a line between plausible and implausible beliefs.
Our minds are finite and we have to answer questions in a limited time. So we consider only some aspects of a problem. We choose between a limited number of strategies. And we consider only a few possible answers as serious candidates. Does this bounded rationality have an underlying coherence, justifying the exclusions we make rather than other possible ones? Or do we separate the plausible from the implausible by many different strategies, each justified by having been found to work roughly but quickly in a context that is irreducibly specific and local?
Some implausibility detectors appeal to very general parts of our system of belief. Doubts about a claimed miracle may appeal to the general reliability of scientific laws. And, as in this case, different general belief systems often influence what is seen as likely.
But other implausibility detectors are highly specific. At Paddington Station, I ask the price of a rail ticket to Oxford. The man behind the glass says it is 407 pounds but when bought on a Tuesday it comes with a lettuce as a free gift. The resulting mental alarm bells have not been triggered by the scientific worldview. The warning comes from specific beliefs about the likely costs of tickets, and about likely promotional offers. If the man then asks to borrow the pair of socks I am wearing, the plausibility of his testimony plummets even further toward zero. One of the reasons it is hard to say whether certain scenes in Dostoyevsky or in Franz Kafka are closer to dreams or to madness is that both dreams and madness escape the normal plausibility constraints.
A possible clue to the experience of delusions comes from dreams. They too combine rational thinking with tolerating the bizarre. Dostoyevsky describes waking memories of how ingeniously in a dream we outwitted our enemies: “You guessed that they were perfectly aware of your trick and were just pretending not to know your hiding-place; but again you outwitted and cheated them, all this you remember clearly.” He goes on: “But why was it that your reason was able to reconcile itself to the obvious absurdities and impossibilities with which your dream was crammed? One of your killers turned into a woman before your very eyes, then from a woman into a shy and hideous little dwarf—and you accepted it at once as an established fact, with barely a hesitation, and at this very moment when your reason, on the other hand, was at a pitch of intensity and demonstrating extraordinary power, shrewdness, perception and logic?”30
Reasoning can persist, split off in dreams from the lost normal sense of plausibility. If this can happen in dreams, it is less surprising to find it also in madness. And, outside of either dreams or madness, reasoning and intellectual analysis can function independently from (at least some of) the plausibility constraints.
In epistemology the standard form of reasoning about beliefs is Socratic. A belief is challenged first by questions designed to make the person state it more explicitly and to give reasons for it. Then unwelcome logical consequences are drawn out from the belief or from its supporting reasons. Epistemology works by spelling out the costs of different systems of belief. Unwelcome consequences are an implicit invitation to abandon or modify a belief. But the fact that they are unwelcome does not itself come from logic. It comes from an intuitive sense of implausibility.
Logic alone is enough to exclude inconsistent belief systems, but not to choose between consistent ones. Someone with no intuitive sense of plausibility can still produce a map of the costs of belief systems. But the map on its own will not generate decisions about which costs are acceptable or unacceptable. Something extra is needed. That “something extra” is relevant to delusions.
Epistemology without intuitive plausibility weightings is inconclusive. This is paralleled by the “frame problem” in artificial intelligence. If an intelligent machine is designed to perform a simple task like fetching a package, and given access to any information it wants about the alternative strategies, it may never actually start the job. Without any way of excluding irrelevant information or questions, it will have an indefinitely large number of preliminary calculations to carry out. After several years it may still be working on such calculations as that going out of the door will not start a snowstorm in Russia, or will not make any camels die in Egypt.31 Difficulties in designing a relevance detector for such a machine have suggested that, in people, emotional responses may function as relevance prompts. There may be no general intellectual strategy for seeing what is relevant. Instead, we might go by how things “feel.”
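A toy sketch may make the contrast vivid (Python, with invented facts and function names; this illustrates the shape of the frame problem, not any actual AI system):

```python
# Toy illustration of the frame problem: a naive planner checks every
# known proposition before acting; a "felt relevance" filter does not.

FACTS = [
    "going out the door will not start a snowstorm in Russia",
    "going out the door will not make any camels die in Egypt",
    "the package is at the front desk",
    # ... a realistic knowledge base holds millions more such propositions
]

def deliberate(fact: str) -> None:
    pass  # placeholder for costly reasoning about one proposition

def naive_planner() -> str:
    # Work grows with the size of the whole knowledge base, not the task.
    for fact in FACTS:
        deliberate(fact)
    return "fetch the package"

def felt_relevance(fact: str) -> float:
    # Stand-in for an emotional "feel" for relevance; a crude keyword
    # match plays the role of the prompt here.
    return 1.0 if "package" in fact else 0.0

def intuitive_planner() -> str:
    # Considers only what "feels" relevant; the rest is never examined.
    for fact in FACTS:
        if felt_relevance(fact) > 0.5:
            deliberate(fact)
    return "fetch the package"

print(intuitive_planner())  # examines 1 fact, not the whole knowledge base
```

Without some prior weighting like felt_relevance, the loop in naive_planner has no principled stopping point; the suggestion is that in us, emotion supplies the weighting.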
Antonio Damasio has argued for the cognitive role of emotional responses. His patient “Elliot” had undergone surgery to remove a brain tumor.32 As a result he was incapable of completing tasks on time. Asked to read and classify some documents, “he might spend a whole afternoon deliberating on which principle of organization should be applied: Should it be date, size of document, pertinence to the case or another?” Elliot’s state was the frame problem come to life. His intellectual abilities were intact, except that he was unable to plan activities over time or to make decisions. Damasio concluded that Elliot’s emotional coldness stopped him valuing some options above others and “made his decision-making landscape hopelessly flat.”
This may be like the plausibility constraints on beliefs. The “something extra” needed in addition to logic may not be an intellectual strategy. It may depend on the emotional “feel” of an idea. (Not on the great emotions of love, hatred, anger, or fear, but on calmer passions: “There is something fishy about this”; “That sounds really cool”; “It has a good feel to it”; “I don’t like the sound of that”; “There is something not quite kosher about this offer.”)
Perhaps deluded people’s lack of a sense of plausibility is linked to problems of “personal chemistry” in everyday life. Their intuitive and emotional “feel” for other people is often weak. And one of the symptoms of schizophrenia is having great difficulty in making decisions. The indecisiveness of schizophrenia is reminiscent of Elliot. It too may come from an emotional blankness that makes it hard to assign values to different options. The evaluative weight deluded people give to things is often bizarre. K. W. M. Fulford speaks of “evaluative delusions.” One patient forgot to give his children their pocket money. He called this “the worst sin in the world,” saying that he was “worthless as a father” and that his children would be better off if he was dead.33 In addition to making it hard to assign values to options, disrupted emotional intuition could make it hard to assign plausibility to beliefs.
When thinking about plausibility, we often use the metaphor of weight. How weighty is a certain argument? How much weight should be given to this testimony? The metaphor can be used to contrast two kinds of failure in the sense of plausibility. Bad cognitive strategies can give a belief (or some evidence or an argument) either too much weight or too little weight.
Some beliefs are given great weight. There are everyday beliefs we cannot seriously give up. I have no more than two hands. Trees do not make jokes. Chairs do not leap away to avoid being sat on. These beliefs are so heavy that we cannot pick them up and move them. If I seem to experience joking trees, I will wonder whether I am dreaming, drugged, or having a psychiatric breakdown. It is right to give weight to these beliefs, as they have vast empirical support.
But a belief may be given too much (sometimes much too much) weight. A clear nonpsychiatric case was the belief of the British Communist leaders in 1939 that the Soviet Union could do no wrong. In psychiatric disorders, the belief “I am dead” may be treated as too heavy to move in the way that “Trees don’t make jokes” is. One explanation of this heaviness could be some distortion of a tagging system whose normal operation gives certainty without supplying evidence open to conscious scrutiny.
When people think about philosophy, nihilism can be a temptation. There are so many alternative ways of thinking about the world (and so many arguments about them to evaluate) that it can seem impossible to choose among them. None of them seems to have any more weight than any other. Far from being too heavy to move, their lightness makes them both seem unreal and absurdly easy to pick up. A student in a philosophy examination who feels this lightness might choose any opinion more or less at random.
Something like this could happen to people whose psychiatric disorder has disrupted their emotional and intuitive feel for plausibility. When thinking is as “light” as this, someone may just “choose” any old version of the world, without feeling a real commitment to it. This would fit with the “double bookkeeping” of some people who have delusions. It would also fit with the sense of mockery, the sense of the person not really being serious about the belief, that sometimes comes across.
The approach here centers on six conjectures. The first three are about the “heaviness” of distortions of intuitive interpretation. The other three are about distortions of reflective interpretation.
First conjecture: Because the interpretation that shapes our perceptual experiences is invisible, challenges to a delusive interpretation may be defeated by a strong intuitive conviction that the alien abduction (or whatever it may be) was directly experienced. This makes the belief “heavy”: hard to shift.
Second conjecture: The heaviness of some delusive intuitive interpretations might be linked to the heightened sense of reality that delusional experiences sometimes give.
Third conjecture: The heaviness may be linked to unconscious tagging, which delivers “certainty” without evidence open to scrutiny.
Fourth conjecture: When holism plays a strong role in our reflective interpretation, intense feelings of intuitive conviction, if treated as fixed points in a belief system, may lead to distortions in other parts of the system. These distortions could affect beliefs about standards of plausibility.
Fifth conjecture: Delusional beliefs depend on the loss or impairment of the normal feel for what is plausible.
Sixth conjecture: The failure of the plausibility constraints may itself be part of the disruption of emotional intuition so common in psychiatric disorder.
These conjectures aim at a part of the “human” interpretation of delusions that, while drawing on neurological information, is intuitively intelligible in psychological terms. The six conjectures are all empirical claims, needing empirical testing. I put them forward in the hope that they can advance knowledge—even if, as Karl Popper taught us, this is achieved most often by inviting refutation.