Hopes for the Future of Psychiatry
All interpretation of people has limitations. My interpretation of the Broadmoor interviews has some striking ones. On the basis of meetings that lasted perhaps an hour or two with each person, I have given an interpretation of their values and of how their childhoods may have shaped them.
Longer interviews might have brought out more, or corrected some of the impressions given here. If I had lived for a month with the people I talked to, I would have developed a deeper sense of what they are like. Even in interviews, other questions might have been more revealing. Other interviewers asking the same questions might have elicited different answers. Perhaps most of all, different interviewers, even given identical answers, would in some ways interpret them differently. Are some interpretations better than others? If so, why, and how can we tell?
We face problems of interpretation in everyday life with other people. These problems arise acutely for psychiatrists trying to understand people they hope to help. Someone with a psychiatric disorder is a person, with beliefs, hopes, emotions, human needs, values, and an idea of his or her own self: a personal approach to the world and a whole inner life worth trying to understand.
How should we try to understand another person’s inner life? This part of the book is about the often underestimated role of our human interpretation of other people. It is triggered by obvious questions about the reliability of interpretations like those I have given of the interviews. But it also has broader purposes as part of a discussion of psychiatry. Borrowing from psychology and neuroscience, I will outline some of the mental equipment we draw on to get an intuitive feel for another person. One aim is to suggest that intuitive interpretation, inevitably part of psychiatry, is more reliable than is sometimes feared. Also, spelling out some of our processes of interpretation may help us understand how they go wrong in psychiatric disorders.
The discussion starts, in this chapter, with hopes for the scientific side of psychiatry. The power of computers to organize vast bodies of information may give a more fine-grained view, either supplementing or replacing Procrustean diagnostic categories. This view should improve our patchy understanding of the causes of disorders. Another hope comes from attempts to restructure diagnosis on a biological basis: rethinking psychiatric disorders as failures in genetic and neuroscientific systems. These give grounds for optimism.
But their success will still leave out the intuitive human interpretation that is the other side of psychiatry. The following chapters discuss what this human interpretation is, and why its development should also be one of the hopes for the future.
Many psychiatric disorders are less bad to have now than they were in the past. Over recent decades more powerful medications with less severe (though rarely negligible) side effects have been developed. Even so, many disorders are contained, not cured. Is this simply because research still has a long way to go? Or are there flaws in the framework of psychiatric thinking?
The dominant classification of psychiatric disorders is in the Diagnostic and Statistical Manual of Mental Disorders. This has been developed and refined in successive editions (from DSM-I to DSM-5). The whole project has been predictably controversial. Do some of the categories wrongly medicalize what are just parts of the human condition? Do the DSM categories reflect external pressures from health insurers? Are they influenced too much by the pharmaceutical industry?1
Along with these ethical and political issues, there are questions about whether the categories are scientifically arbitrary. A hint that this may be so comes from the problem of “wandering diagnosis”: symptoms can change in ways that suggest that one disorder is morphing into another and perhaps back again. “It did look like bipolar disorder at first, but now it looks more like schizophrenia. Perhaps it is schizoaffective disorder. We will see more clearly as it runs its course.” Wandering diagnosis raises questions about the distinctness and usefulness of the diagnostic categories.
Why these categories? Where do they come from? They are not obvious natural kinds, as the elements of the periodic table are. The categories can seem like the colonial boundaries in Africa: lines drawn from outside, sometimes uniting very different tribes in one territory, sometimes dividing a single tribe between different territories. These problems are real. Even so, DSM or some alternative is important for research. Without standard categories, research would be blurred. Any systematizing account may be better than none.
I will use the standard psychiatric categories as starting points for discussion, but with more than a tinge of skepticism. The crude map of the colonial boundaries needs to be replaced by a finer grid. This grid will be partly biological. But a view of psychiatry as being 80 percent neurobiology has its own conceptual rigidity. There is a need to reflect the complex two-way interplay between our biological and social dimensions. The conceptual map needs to be more fine-grained, and also to reflect the fluidity of how actual people change and develop.
The power of computers to analyze vast amounts of information may help us keep the gains of DSM for research without its arbitrariness. An enormous database can go beneath the diagnoses to more fine-grained information: not about “people with schizophrenia,” but about “recently bereaved women who hear voices saying they are worthless.” Time and cost may limit how detailed a history can be. But the well-resourced ideal psychiatry of the future would aim for the most detailed and precise description possible of likely relevant factors.
Although we do not know in advance what factors are causally relevant, any description has to be selective. “Her sister is taller than she is” could be relevant, but without some special clue it is doubtful that it is. The obvious questions include the ones of a standard case history: symptoms, the narrative pattern of the developing disorder, relevant genetic and neuroscientific information, medical treatment and the person’s responses to it, and his or her personal history, including family, work, and social relationships. If available, evidence about the person’s temperament and general psychological characteristics could be relevant, as could their interests, their religious or other beliefs, their values, and their hopes and fears.
Particular symptoms are more fine-grained than diagnostic categories—“having delusions” rather than “schizophrenic.” Some symptoms are more fine-grained than others—“hearing voices” rather than “having delusions.” Knowing what the voices say is more fine-grained still. And obviously some narratives are more fine-grained than others. “Violent and emotionally labile” is less informative than “He was emotionally warm arriving at his mother’s house, saying how much his old room meant to him, and telling her about his new job, but only an hour later he was crying and he killed her cat.”
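The contrast between the two levels of query can be made concrete with a small sketch in Python. The record fields and values here are invented purely for illustration; a real psychiatric database would be vastly richer, and hedged with strict confidentiality.

```python
# A toy illustration of querying beneath the diagnostic box.
# All field names, values, and records are hypothetical stand-ins.

records = [
    {"diagnosis": "schizophrenia", "sex": "F", "recently_bereaved": True,
     "symptoms": ["hears voices"], "voice_content": "worthlessness"},
    {"diagnosis": "schizophrenia", "sex": "M", "recently_bereaved": False,
     "symptoms": ["delusions of persecution"], "voice_content": None},
]

# Coarse-grained: the broad diagnostic category.
coarse = [r for r in records if r["diagnosis"] == "schizophrenia"]

# Fine-grained: "recently bereaved women who hear voices
# saying they are worthless."
fine = [r for r in records
        if r["sex"] == "F"
        and r["recently_bereaved"]
        and "hears voices" in r["symptoms"]
        and r["voice_content"] == "worthlessness"]

print(len(coarse), len(fine))  # -> 2 1
```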
A huge, fine-grained database may highlight patterns impossible to see when research is limited to the broad diagnostic categories or based on the limited experience of individual psychiatrists. There is a hopeful parallel with the way computers warn general practitioners about dangerous drug combinations when they are prescribing medication. It is good not to have to rely on what one doctor has noticed through experience.
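The mechanics of such a warning system can be sketched very simply. In this hypothetical Python fragment the interaction table and drug names are placeholders, not real pharmacological data:

```python
# A minimal sketch of flagging dangerous drug pairs at prescription time.
# The interaction table is a toy stand-in for real pharmacological data.

from itertools import combinations

DANGEROUS_PAIRS = {                 # hypothetical entries, for illustration only
    frozenset({"drug_a", "drug_b"}),
    frozenset({"drug_b", "drug_c"}),
}

def interaction_warnings(prescription):
    """Return every dangerous pair found among the prescribed drugs."""
    return [pair for pair in combinations(prescription, 2)
            if frozenset(pair) in DANGEROUS_PAIRS]

print(interaction_warnings(["drug_a", "drug_b", "drug_d"]))
# -> [('drug_a', 'drug_b')]
```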
The wandering diagnosis problem may partly reflect something deeper in the biology of psychiatric disorder.
In the early days of mapping the human genome there was optimism that there might be simple causal links between genes and psychology. Some had hopes of discovering “the gay gene” or “the” gene for autism. That picture was far too simple.
The first complication comes from gene expression. Every cell in a person’s body has the same DNA. But in different organs, such as the liver or the heart, different parts of the DNA are expressed, or turned on. Epigenetics is about this “turning on.” Epigenetic changes alter which parts of the DNA are expressed. Causes of these changes include childhood abuse. Some causes come before birth. If a pregnant woman is stressed by her partner’s violence, this can cause epigenetic changes in her child, and changes in the receptor for the stress hormone cortisol might still be found in the child in adolescence. Some epigenetic changes are temporary; others can last a lifetime.2
In the case of schizophrenia, the early optimism was linked to the hope for a “magic bullet” to counteract “the” gene responsible. The shooting image can run both ways. It was hoped that the magic bullet would eliminate the single sniper whose own firing caused schizophrenia by disrupting normal neurochemistry and psychology. This was again too simple. So far each genetic variant that has been found relevant to schizophrenia explains only a very small part of the disease’s genetic component. More than a hundred variants have now been found. The single marksman has been replaced by more than a hundred different snipers, each making different contributions to the disruption.
And some of these genetic variants also increase the risk of different psychiatric conditions, such as bipolar disorder. One response to this is to ask how “different” these conditions are: “The biology of psychotic illnesses may fail to align neatly with the classic Kraepelinian distinction between schizophrenia and manic-depressive illness.”3 In the United States the National Institute of Mental Health has set up a “Research Domain Criteria Project” aimed at moving psychiatric research away from traditional diagnostic boxes. The new focus is on possibly interacting “domains”: genetically influenced neurobiological systems whose malfunctioning may lie behind psychiatric disorder.4
Some of our current boxes may turn out to match the underlying reality no better than the medieval “humors.” But this approach will not necessarily eliminate all the diagnostic boxes. Some of them may turn out to reflect real clustering of affected people on a number of the biologically based dimensions. In such a case the traditional diagnosis would be vindicated, although the details coming from the dimensional approach would give a finer-grained understanding of the causal mechanisms.
The huge fine-grained database of the psychiatric future may well enable both social context and personal history to be integrated with the key neurobiological dimensions. Does this mean that computer psychiatric diagnosis and treatment will surpass the human version, as computers now defeat chess grand masters?
Thinking about this can draw on the competitive history of supercomputers and human chess-playing skills. In 1996 the supercomputer Deep Blue lost to the grand master Garry Kasparov. But in 1997, when IBM had doubled the processing capacity, Kasparov lost to Deep Blue. He says that the ten years from 1994 to 2004 were a chance to study the relative strengths and weaknesses of computers and grand masters.5 They were the years of serious contest. Before, computers were too weak. Later they were too powerful.
Computer chess strength comes from seeing the outcome of every possible first move, followed by the outcome of every possible second move in each scenario, and so on as possibilities proliferate with more moves. Seeing all the possibilities down each branch goes far beyond what a person can do. With only twenty possibilities for each move, a depth of five moves on each side (twenty to the power of ten) gives over ten trillion outcomes. Computers spot disastrous outcomes a grand master may not. The massive predictive power can be enhanced by a huge database of moves and consequences in past games.
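The arithmetic behind that figure is simple enough to spell out. Here is a small Python fragment, assuming (unrealistically, as a simplification) a uniform twenty legal moves in every position:

```python
# Counting the leaves of the game tree under the simplifying
# assumption of exactly twenty legal moves in every position.

BRANCHING = 20
PLIES = 10           # five moves by each side

outcomes = BRANCHING ** PLIES
print(f"{outcomes:,} outcomes")  # 10,240,000,000,000 -- over ten trillion
```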
Computers at this level win by brute force of processing capacity. No intellectual subtlety or imagination comes into it. As Kasparov puts it, they are good where we are weak, and vice versa.
The ten years of serious contest were used to investigate human chess skills pitted against the computers. Kasparov set up, and played in, games of “Advanced Chess.” Two human contestants each had access to a computer and a database of millions of games. This made some things easier. The players shed the burden of having to remember previous outcomes of a move. The computer warned of tactical blunders, leaving the players free to think creatively about strategy. The exclusion of issues about memory and tactical blunders made strategy even more decisive.
In a 2005 “freestyle” chess tournament, teams could be any combination of people and computers. People with computers—sometimes just laptops—defeated even the strongest computers alone. The best teams mixed human strategy and computer tactical acuity. The winning team was not a grand master with a massive computer, but two amateurs with three computers, each “coached” to go deeply into the possibilities of different key positions. As Kasparov put it, a good process mattered more than the strength of either the players or their computers.
This outcome is cheering to those of us whose tribal loyalty makes us want our team, humans, to win against computers. Human intelligence devised the coaching strategy that defeated the brute force of more powerful computers.
But we should be cautious. Many have tried to show that there are things we can do that in principle machines cannot. Computers have already surpassed some of the supposed limitations. Perhaps such limitations do exist: the biologically evolved human brain may have properties that cannot be replicated by any combination of hardware and software. But this is not obvious. It will be still less obvious as we insert biological material into computers, blurring the boundary between them and us.
Chess-playing computing has developed beyond brute force to include elements of human strategic intuition. Brute force examines every one of the branching tree’s billions of moves, despite almost all of them being nonstarters. To add elements of human strategy to computer tactical power, the branching tree must be pruned. The vast number of absurd moves should be discarded, leaving room for deeper scrutiny of promising ones.
When should moves be discarded or kept? Obviously the impact on both players has to be assessed. How should we evaluate an outcome of a move as good or bad? Some approaches use general criteria: the number of moves left open, the chances of making a capture, and how well defended the position is. One alternative sets up a module for each of a set of objectives, such as taking a particular piece. The module scans the database to see which moves have previously brought the objective closer. Then more strategic choices about pruning may come in. Should different modules be activated in different stages of the game? What chances does a move give of a capture at an acceptable cost, or of checkmate?6
There are two ways to improve computer chess-playing. One stays with brute force but increases the calculating capacity. The other copies human intuitive strategies, pruning the branching tree in the light of increasingly sophisticated rankings of outcomes.
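The second route can be illustrated with a toy example. The following Python sketch runs alpha-beta pruning over a small hand-built game tree; the numeric leaves stand in for the heuristic rankings (mobility, capture chances, defense) discussed above. It is an illustration of the pruning idea, not anything like a real chess engine.

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf, stats=None):
    """Minimax with alpha-beta pruning over a nested-list game tree.
    Leaves are numbers standing in for a heuristic evaluation."""
    if not isinstance(node, list):        # leaf: evaluate the position
        if stats is not None:
            stats["leaves"] += 1          # count how many leaves we examine
        return node
    value = -math.inf if maximizing else math.inf
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta, stats)
        if maximizing:
            value = max(value, score)
            alpha = max(alpha, value)
        else:
            value = min(value, score)
            beta = min(beta, value)
        if alpha >= beta:                 # prune: the rest of this branch
            break                         # cannot change the outcome
    return value

tree = [[3, 5, 2], [1, 8, 6], [4, 0, 7]]  # depth two, branching factor three
stats = {"leaves": 0}
print(alphabeta(tree, True, stats=stats))       # -> 2, the best guaranteed score
print(stats["leaves"], "of 9 leaves examined")  # -> 6 of 9: three were pruned
```

Even on this tiny tree a third of the leaves are never examined; on a tree of ten trillion outcomes, pruning of this kind is what makes deeper, more selective search affordable.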
That brute-force computers can surpass humans is hardly more interesting than that cars can surpass the fastest runners. We know that human strategy using weak computers can defeat stronger computers. We also know that computers can be enhanced by many—perhaps all—human strategic skills. One day they may reproduce even our human skills of interpretation. As with much artificial intelligence, one of the most intriguing questions is what the process of replicating human strategies in computers can tell us about the nature of our own intuitive thought. Our intuitiveness is, at least for now, a distinctively human form of intelligence.
In psychiatry, as in chess, brute-force computing may be powerful. Perhaps one day researchers trawling the huge fine-grained psychiatric database will find ways to prevent or cure severe mental disorders. If so, the brutish method will not worry us. For now we have nothing like the near-complete neurobiological and psychological knowledge required. As in chess, the best strategy usually needs the fluidity of human interpretation as well. Gail Silver, diagnosed with borderline personality disorder, expresses a thought still likely to be echoed for many years: “The Registrar goes up on the old pedestal!! HE is going to sort out all this mess and muddle AND make me all better! Well, actually he is not. He hasn’t got that magic pill or that magic wand I so want him to have. What he has got is the ability to help me begin to understand what is going on for me and the patience while I struggle to talk.”7