21

Brain, Mind, and Agency

I do not find the neurobiological theory of mental illness as helpful to my recovery because it deprives me of any sense of self-determination and responsibility. When I think I am a group of chemical reactions, each with its own scheme and plan, I feel dehumanized and powerless. I feel that I am thinking, feeling and acting at the whim of those chemicals, not through any effort or responsibility of my own.

—Dan Fisher, “Humanity and Voice in Recovery from Mental Illness,” in From the Ashes of Experience: Reflections on Madness, Survival and Growth, ed. Phil Barker et al.

Doesn’t his mind need help as well as his brain?

—From a woman’s letter about her brother’s psychiatric treatment

Some psychiatrists, encouraged by scientific advances, may hope to avoid the philosophical issues discussed in this book. Pieter Geyl famously called history “an argument without end.” Isn’t philosophy even more so? For further progress do we need anything beyond the increasingly detailed causal accounts coming from epidemiology, genetics, pharmacology, and neuroimaging?

The pull of this antiphilosophical view is easy to feel. But it depends on not asking questions that matter. It is the vision held by people living in a city with buildings so solid that they forget they are in an earthquake region. Psychiatry’s conceptual framework rests on the meeting point of two of the hardest and deepest problems in philosophy: the mind-brain problem and the problem of free will. The city is built at the intersection of two fault lines.

The challenge the mind-brain problem still poses to science and philosophy makes it a major fault line underneath our conceptual scheme. We are far from resolving the disputed issues about how conscious thoughts, feelings, and decisions relate to states of the brain, and about why consciousness evolved at all. Luckily psychiatrists can to some extent keep these problems about mind and brain at arm’s length.

For psychiatry, by far the more serious fault line is the cluster of problems about agency and free will. These start to be noticeable when society has to respond to things done by people with psychiatric problems. When what someone does is partly the product of their mental disorder, psychiatrists, lawyers, and many others find it hard to draw the boundaries of responsibility. How far, when, and in what ways does having a psychiatric disorder absolve people from responsibility for what they do?

Here I will make some comments about the mind-brain problem before turning to issues of agency as they arise in psychiatry.

Mind and Brain

Thomas Nagel famously asked the question “What is it like to be a bat?” We do not have sonar, the “radar” that tells bats about nearby objects. If sonar fails and the bat can’t locate food on the wing, we have some idea what this feels like. We do know perceptual failure. But we have never experienced the detection of objects by sonar. One day we may fully understand the brain mechanisms involved. But even then we will not know the subjective “feel” of it.1 Even when all the neuroscience is understood, this subjectivity seems an unknown residue.

This applies to us as well as bats. When the mechanisms of human color vision are fully mapped out, a neuroscientist born blind could know all about them. If later the neuroscientist is given sight, the experience of color might still be a surprise: “This is amazing. I had no idea it was like this.” If a complete neuroscientific account leaves this out, it seems not to be a complete account. If so, there is a huge question. What is this “extra” subjectivity, and how does it relate to the neuroscience?

How did consciousness evolve in a material universe? At one level the answer is not hard. Visual, auditory, and other experiences are useful, telling us things we need to know about the world. The experience of pain is a warning not to repeat harmful encounters. And so on. But this explanation may not go deep enough.

A person and a computer might carry out the same simple but long calculation: perhaps multiplying one immensely long number by another. A human might find the experience tedious, exhausting, or frustrating. We might ensure that the person has a break during the task, or is not asked to do it often. We have no concern about the computer. This is not because computers calculate so quickly. It is because we think there is no answer to the question “What is it like to be a computer?” We do not think they have experiences.

This is why the easy explanation of the evolution of consciousness might not go deep enough. It is useful to be able to process visual and auditory information about the world and its opportunities and dangers. But computers can do all this without, we believe, having experiences. Perhaps we are wrong about computers. If not, the usefulness of all this information only explains why computer-like processing might have evolved. It fails to explain why in our version (or in some other species) subjective experiences are bundled up with these processes.

The problem is dramatized by the idea of “zombies”—beings that look and behave like us, and whose brains process information in ways neuroscientifically indistinguishable from ours, yet with the single but large difference that they have no subjective experiences.2 Why has evolution thrown up humans rather than zombies?

Intuitively it seems conceivable that zombies could exist. If this is right, the materialist account of consciousness in terms of the brain and its functions seems incomplete. In this department, zombies have as much as we do. Something else has to explain why we have experiences they lack.

One materialist response is to challenge the intuitively plausible thought that zombies could exist. If zombies could do all that we can do, our states of consciousness would be unnecessary: mere by-products that do no work, feeding back no input to the brain. Each experience would be purely a product of its corresponding brain state. So experiences could not causally influence even each other. But we notice, compare, and think about our experiences. Noticing and comparing are also experiences. It has been suggested that a theory that makes experiences unable to interact with each other could not account for what we do here.3 The claim is that, despite appearances, we can rule out the version of our consciousness needed to make zombies possible.

This is only one of the arguments against zombies. This one, like the others, has been challenged. The debate continues. No consensus has been reached on whether or not conscious states transcend the neuroscientific story. The problem for materialists is explaining why we and not zombies evolved, and how subjective experiences fit into a purely neuroscientific account. The problem for those who think conscious states involve something more than this is to explain what these nonphysical properties are, and how they can influence chemical and other events in the brain. Scientifically, isn’t such interaction wildly implausible?

In our present state of knowledge, psychiatry needs the language of mind-brain interaction. But this need not involve any theoretical commitment to the implausibilities of dualism.

Everyday and Other Varieties of “Mind-Brain Interaction”

Although interactionist dualism may be a dubious theory, our lives are saturated with the body influencing the mind. A blow or an injury causes pain. Being tired or hungover can make us irritable. Drugs can relieve pain or cause euphoria. And so on. Equally platitudinous is everyday interaction the other way. Our mental state is one of noticing it is time to go somewhere, and our brain sends signals to the muscles needed for us to get up.

Other instances are more striking. Dr. Oliver Sacks described the physical response to stress of a patient who had come in for a checkup: “On one of these return-visits I was dismayed to see a rather violent chorea, grimacing, and tics, which he had never shown previously. When I inquired if there was anything making him uneasy, he replied that he had taken a taxi to hospital, and that the taxi-meter was continually ticking away: ‘It keeps ticking away,’ he said, ‘and it keeps me ticcing too!’ On hearing this, I immediately dismissed the taxi, and promised Mr. E. we would get another one and pay all expenses. Within thirty seconds of my arranging all this, Mr. E.’s chorea, grimacing and ticcing had vanished.”4

What in Freud’s time was called “hysteria” involved apparent physical disorders such as paralysis, with no obvious organic cause. The Freudian view was that hysterical patients suffer largely from reminiscence: the memory of traumatic events was repressed and then burst out in the form of physical symptoms.5 The diagnosis of hysteria is obsolete, but physical symptoms apparently caused by psychological problems are now classified as “conversion disorders.” If genuine, these cases dramatically show the impact of the mental on the physical. But it is obviously difficult to be sure there is no organic cause. Malingering and other accounts have to be considered.6 “Conversion” as a mental impact on the body is dramatic but debatable.

Less debatable are the adaptive capacities of “neuroplasticity.” The brain responds to new demands made on it and to how we use it.7 When brain injury damages a region used for a particular skill or process, the same region in the opposite hemisphere sometimes takes over. Or brain regions processing input from one sense might be reassigned to another. The visual cortex of people who have been blind for a long time may take over processing Braille and other input from touch.

The brain’s plasticity includes the way a specialized region can enlarge with use. One case involves the hippocampus. We draw on “maps” laid down in the posterior hippocampus to find our way about. (The anterior hippocampus is involved in mapping somewhere new.) London taxi drivers need a huge mental map of the streets of London. To get their license, they have to pass “the Knowledge”—a test of their use of the mental map to choose the best routes. Later their daily work uses and maintains the map.

Scans detect the density of brain matter. Compared to controls, taxi drivers have greater density in the posterior hippocampus, the area used to access maps already laid down.8 Could it be that people who already have this enlargement are those who choose to drive taxis? There is evidence against this. The density increases with years spent driving taxis. It does seem to grow with use.

On the other hand, the taxi drivers’ anterior hippocampus, used for new mapping, had less dense gray matter than that of the controls. The two findings seem to reflect that the drivers make much greater use of a map already laid down and that they do not lay down new maps. It seems that growth that would have gone elsewhere can be hijacked by a more active region. How we use our brains can change their physical properties.

Mind-Brain Interaction and Dualism

These cases all show what is conventionally seen as the influence of mind on brain. This sounds like dualism: interaction between nonphysical (mental) states or events and physical ones. Perhaps the last systematic attempt at a general dualist account of mind and brain was by a neurophysiologist (J. C. Eccles) and a philosopher (Karl Popper) in 1977.9 As Popper put it, the brain is owned by the self, rather than the other way round: “The active, psycho-physical self is the active programmer to the brain (which is the computer), it is the executant whose instrument is the brain. The mind is, as Plato said, the pilot.”10

Eccles suggested how this might work. He saw the self-conscious mind as “an independent entity that is actively engaged in reading out from the multitude of active centres in the modules of the liaison areas of the dominant cerebral hemisphere.” It “selects from these centres in accord with its attention and its interests and integrates its selection to give the unity of conscious experience from moment to moment. It also acts back on the neural centres. Thus it is proposed that the self-conscious mind exercises a superior interpretative and controlling role upon the neural events by virtue of a two-way interaction across the interface.”11 This interaction is described in terms of cortical modules “open to the self-conscious mind both for receiving from and for giving to … Each module may be likened to a radio transmitter-receiver unit.”12

The puzzle in this theory is about the transmitting and receiving. Detail is given about areas of the cortex involved, neuronal modules, and so on. But what kind of “signal” is transmitted? Is it physical? Is it electrical or chemical, or of some other kind? Where does it go to? Because the mind is supposed not to be part of the brain, there is no physical place for the signal to go. Perhaps the signal is not physical but mental? But how should we think of physicochemical events in neurons being converted into a mental signal? And when the brain “receives” a mental signal, how does this cause changes in neurons? For Descartes, interaction involved “animal spirits” going between mind and brain at the pineal gland. The module account is the same story, updated for the age of radio or cell phones. The unanswered questions add up to a strong reason for skepticism about this kind of dualism.

Spinoza’s View: Not Dualism, but Inner and Outer Perspectives

The mind and body are one and the same thing, which is conceived now under the attribute of thought, now under the attribute of extension. The result is that the order, or connection, of things is one, whether Nature is conceived under this attribute or that; hence the order of actions and passions of our body is, by nature, at one with the order of actions and passions of the mind.

—Spinoza, Ethics

Modern materialists believe that the reality of the mind is states and processes of the brain and nervous system, without any dualist residue. Some have adopted an “eliminative” version, according to which the “folk psychology” of subjective experiences can in principle (and, one day, in practice?) be replaced by talk only about brain states and processes. Others, equally skeptical of dualism, think this denies the problem of subjective experiences by fiat, and that the losses in the “elimination” of “folk psychology” are more obvious than the gains.

It seems that an adequate solution would bypass dualism without denying or downgrading subjective experience. One approach worth exploring can be seen as a kind of “noneliminative” materialism. Philosophers as different in outlook as Spinoza and Bertrand Russell have argued that the solution to the mind-body problem is to see subjective experiences and bodily or brain states as two aspects of the same thing. In itself this claim does not solve the mind-body problem. How do we know that we are dealing with two “aspects” of something, rather than with two things or processes? And can we sidestep the question “aspects of what?” Despite these questions, the Spinozist approach seems worth exploring. A defensible version of it might mean we could escape the dubious metaphysical interactions of dualism and the procrustean “eliminations” of some versions of materialism.

Some have argued that modern physics gives an account of the physical world that might support this Spinozist approach. Michael Lockwood says that philosophers writing on the mind-brain problem have usually seen it as a task of trying to fit a problematic mind into an unproblematic physical world. For them all the “give” is on the side of the mind. Lockwood sees this as a prejudice. “Quantum mechanics has robbed matter of its conceptual quite as much as its literal solidity. Mind and matter are alike in being profoundly mysterious, philosophically speaking.”13

The intuitively paradoxical aspects of quantum physics include “superposition”: a particle exists in a way that includes its alternative possible states at the same time, until measurement pins it down to just one of the possibilities. Some particles interact in ways that result in their “entanglement” after they are separated: when measurement pins one particle to a particular state, measurement then pins the other particle down to a corresponding alternative state. This could allow information to be sent from one place to another without going through the space in between. David Deutsch has suggested that such phenomena create the possibility of “quantum computers.” One version would allow vastly many simultaneous computations to influence each other and so produce a shared output: “In such computations, a quantum computer with only a few hundred qubits [quantum bits of information] could perform far more computations in parallel than there are atoms in the visible universe.”14

Michael Lockwood has speculated that the physical basis of consciousness may be a brain whose interactions make a quantum computer possible.15 I am nowhere near understanding quantum theory and take no view on whether this suggestion is true. The big models in neuroscience tend to follow current or predicted information technology. In the early twentieth century the brain was seen as a telephone exchange. In the mid-twentieth century it was a computer. The quantum computer model is in this tradition. Like its digital computer predecessor, it may create a fruitful research paradigm, even if it is at best a partial picture. If it does turn out that consciousness needs a brain that can support a quantum computer, we still may not have solved the mind-brain problem. A new question (Why only with quantum computing?) will be added to the question of why consciousness evolved at all.

There is a more general point. Lockwood pursues the thought that mind and matter are both mysterious. He suggests that physics might learn from introspective psychology: “If mental states are brain states, then introspection is already … telling us that there is more to the matter of the brain than there is currently room for in the physicist’s philosophy.” Developing an idea of Bertrand Russell’s, itself similar to Spinoza’s view, Lockwood says that consciousness “provides us with a kind of ‘window’ on to our brains, making possible a transparent grasp of a tiny corner of a material reality that is in general opaque to us.” He suggests that the gulf, even in the brain, between mental and physical events is an illusion coming from two different kinds of access to the same reality. We know brain events from the outside, using our senses and instruments. We know mental events “from the inside, by living them, or one might almost say, by self-reflectively being them.”16 The epigraph to his book is an ancient Chinese aphorism: “We are that in which the earth comes to appreciate itself.”

Whether the quantum computing suggestion about consciousness is right or wrong, we can reject the nineteenth-century view of matter as “inert.” That view made the origin of life a puzzle, solved by an unnecessary nonmaterial “vitalism.” Current models show how amino acids and more-complex molecules needed for life could emerge spontaneously on Earth. Matter turned out to have the potential for the development of life. There is a parallel point about mind. It is a platitude that consciousness emerged from material brains and nervous systems. As matter has the potential for life, so life has the potential for consciousness.

How consciousness emerged is one of the great unsolved philosophical and scientific questions. We may hope that, as with the emergence of life, “interaction” with something nonphysical will be unnecessary. We do not know that interactionist dualism is impossible. But its unanswered questions make it implausible. In psychiatry it is desirable to be able to talk about neuroplasticity and other forms of everyday “interaction” of mind and brain without any theoretical commitment to dualism. In the rest of this book I will talk of mental states being “embodied” in neurophysiological or neurochemical states. My pragmatic hope is that this will be useful in thinking about psychiatry, without implying that the mind-body problem has been solved.

Mind-Brain “Interaction” without Dualism

A familiar central part of the program of neuroscience is mapping mental states, events, and functions onto the brain. Another way of describing this is as showing which brain states embody which mental states, events, or functions. For the philosophical mind-brain problem, this talk of “mapping” and “embodiment” is totally lacking in explanatory power. Is “embodiment” interaction or identity? How does a brain state “embody” a thought or a feeling? Could there be zombies who have the brain states without the embodiment? This language leaves all the deep questions unanswered.

But for thinking about neuroscience and psychiatry, this philosophically unilluminating language serves the purpose. We can talk about the taxi drivers’ ability to navigate the London streets being embodied in the posterior hippocampus without denying a residual mind-brain problem. It allows us to talk easily of mental and brain events in the same sentence, facilitating the pluralist, rather than reductionist, explanatory models that psychiatry often needs.17 It also allows room for the effectiveness of psychotherapy even if a materialist view of the mind is correct.

Some materialist account may be the whole truth. Even if so, in our present state of ignorance it is not wrong to talk in ways that sound like interactionist dualism. This choice of language carries no theoretical commitment to dualism. In principle the talk about mental states can be replaced by talk about whatever brain states turn out to “embody” them. Placebos are a paradigm of mind-brain “interaction.” In some conditions the belief that you have been given medication often seems to contribute to improvement. The mechanisms include the release of opiates, action on the immune system, and so on.18 “Mapping” beliefs onto the brain states embodying them will outline a neuroscientific account of this mind-brain “interaction.” It will not solve the deep mind-body problem. But it does help neuropsychiatry dispense with mysterious transmissions to and from a nonphysical world.

Agency

Psychiatry cannot so easily bracket off the deep questions about agency and free will. How far do various degrees and kinds of psychiatric disorder impair free agency? The questions are partly empirical. What limitations do these disorders impose? They are partly philosophical. How do we decide what is a limitation? How is an irresistible impulse different from one that simply was not resisted? Which limitations are incompatible with free agency? If free agency is a matter of degree, how do we draw up the scale? At what point on the scale is a person’s responsibility eliminated or diminished?

Quite apart from moral blame or legal punishment, these questions about freedom of mind and action are important from the person’s own perspective. Part of human flourishing is being able to make decisions, to act on them, and so to have some control over your own life.

Thinking, Feeling, and Acting at the Whim of Those Chemicals?

All these things indeed show clearly that both the decision of the mind and the appetite and determination of the body by nature exist together—or rather are one and the same thing; which we call a decision when it is considered under, and explained through, the attribute of thought, and which we call a determination when it is considered under the attribute of extension.

—Spinoza, Ethics

Dan Fisher says that the neurobiological approach to psychiatry leaves him with the sense that he is thinking, feeling, and acting at the whim of those chemicals. Is he mistaken? How far do genetic and neuroscientific causal accounts leave room for input from the beliefs, desires, reasons, loves, hatreds, fears, and hopes at the heart of how we see ourselves and each other?

We have the experience of deciding to make a phone call and then doing so. Through such experiences we see ourselves as agents: we see our conscious decisions as causing our actions, or at least many of them. But neuroscience has thrown up some disconcerting questions about how far conscious decisions really control actions.

A classic study by Benjamin Libet and colleagues has been interpreted to show that a conscious decision comes after the action is initiated. An electrical change in the brain (the “readiness potential”) takes place 550 milliseconds before a voluntary act. But people report first awareness of any urge to act only 200 milliseconds before the act itself.19 Libet’s own interpretation of this time lag was that the conscious experience of deciding or willing does not initiate the act, but that the 200 milliseconds left between awareness and action leave time for a conscious veto.20

The interpretation of the experiments is controversial. How accurate is the reported onset of awareness of the urge to act? And what is the “readiness potential”? Libet took it to be the initiation of action. But we know so little about the neural complexities of action that this is uncertain. In democratic countries the civil service makes plans corresponding to the programs of different political parties in readiness to implement the policies of the winning party. (The administrative “readiness potential”?) This does not mean that a new government has to rubber-stamp a decision already taken by the civil service, or that it has at best a hasty veto. Libet’s interpretation might be right. But we will not know until neuroscience can tell whether the readiness potential embodies an unconscious decision or only preparation to implement a future decision.

Intuitively we think of our experience of the world as unmediated: that we simply see, hear, smell, or feel the world as it is. We now know that this picture leaves out the complex processes of interpretation that create our experience. These processes are unconscious, but neuroscience shows they are the real causal story of our perception. Something similar is clearly true of our actions. We do not know intuitively what goes on in the brain when we decide to call someone.

Daniel Wegner, invoking a wider range of evidence, follows Libet in suggesting that “the feeling of conscious will” is an illusion: we are wrong to think that what we experience as conscious decision causes our actions.21 His case is partly based on our ignorance of the causal processes. He also cites cases where we are mistaken about our own involvement or lack of involvement in producing actions. (These include alien hand syndrome, automatic writing, and being “possessed by spirits.”) Causation has to be established empirically, by finding conjunctions between events and then controlling for various factors to establish which are causal. The supposed causal link between conscious will and action is poorly based because it is assumed rather than investigated. Wegner suggests that the neural systems are the real causes. He thinks the experience of deciding or willing is probably a preview we are given so that we know, and can tell others, what we are about to do. It is a gauge on the control panel rather than part of the mechanism.

Wegner is clearly right that the full causal story of action includes brain mechanisms of which we are not conscious. It is more controversial that what we experience as a decision has no causal impact. Everything depends on how such experience is mapped onto the brain: in which brain states, events, and functions our experiences of agency are embodied. On this there is an intuitive appeal in the views of Spinoza and Michael Lockwood that conscious states are the brain states seen from “inside.” (Though the metaphor—if it is a metaphor—of “inside” is a reminder of the still unsolved mind-brain problem.) If the brain processes embodying these experiences are a key part of the neuroscientific causal account of action, “conscious will” is not an illusion after all.

We know too little about the neuroscience of action to be certain one way or the other. But the “illusion” view seems less plausible than the standard one. Imagine designing an aircraft in a world without radar or satellites, so that the pilots have to radio other people about their flight path. On one proposed design, the plane is actually controlled by a robot rather than by the pilot. The robot also signals to the pilot’s brain which levers to pull, and so forth. So pilots, sitting in cockpits actually disconnected from anything that would control their planes, believe they are flying them. They can report their position and direction to others. If pilots are as efficient as robots, it is hard to see any reason why this absurd “disconnected cockpit” complication should be chosen rather than the simpler standard design. It is equally hard to see why the elaborate “disconnected consciousness” model of human decision, rather than the standard version, would have emerged in evolution.

So it is not unreasonable to think that our decisions do in general control our actions. Like Dan Fisher, we do not want to see ourselves as passive, just observing what happens as our brains do our thinking and deciding for us. Luckily the arguments and evidence thought to support the passive view are unconvincing. We need not give up on the idea that we are free agents. But there are still serious difficulties about the scope and limits of free agency. Does this freedom go “all the way down”? As Spinoza himself asked, to what extent are our decisions and the thinking behind them at the mercy of causal factors outside our control? There is the residual worry that, one level further down, we still may be “at the whim of those chemicals.”

The Strategy of This Part of the Book

The account of agency and responsibility will be developed in three stages. First I will sketch the central framework of thought underpinning judgments of responsibility. Then I will relate this to how various disorders—addiction, for instance—create internal constraints, limiting or distorting motivation and action. Finally, I will turn the focus to people influenced by psychiatric conditions that are not easily described as internal constraints. Some conditions change people at their very core, in their basic desires and values. In contrast to the reluctant addict, some people diagnosed with “personality disorders” might not be constrained to do things against their will. Rather, the disorder shapes who they are, including shaping their will. How far should they be held responsible? Should we see the disorder as a piece of bad luck, for whose consequences they are not responsible? Or should we say that people who choose to do bad things because they have a bad will or a bad character, however this came about, are exactly those we should blame for what they do?