CHAPTER THIRTEEN

HOW TO FIX THE CRACKS?

Toward the end of my long series of interviews, I walked through one of London’s most exclusive neighborhoods toward the illustrious Belgrave Square. The sky lent a crisp, blue backdrop to the tapestry of autumnal leaves rippling in the trees overhead. As I reached the end of Pont Street and turned into the grand, Regency-era square, there on my left, nestled against the Austrian Embassy, sat the grande dame of British psychiatry—the Royal College of Psychiatrists. Like a posh wedding cake, it proudly ascended five stories from the broad and leaf-dappled pavement, painted in a creamy magnolia with Doric columns holding up the portico entrance. It was stately, elegant, and oddly edible.

After making a couple of failed attempts on the intercom system, I put my ear closer to the brass speaker and tried to decipher the whispered and garbled fizzing. At once the system kicked in. “Push the door!” someone boomed. Shocked, I pushed far too hard, almost falling into the lobby. A demurely dressed receptionist smiled innocently. “Can I help you?”

“I have an appointment with Professor Sue Bailey,” I mumbled, rubbing my ear.

“Ah, yes, Dr. Davies.” The receptionist smiled. “Please do wait in the members room. She’ll be with you shortly.”

Five minutes later, my hearing restored, I was escorted up five flights of stairs to Professor Bailey’s office, which is neatly tucked away on the top floor. It is a modest office compared with what I had come to expect, especially considering that Professor Bailey is, after all, the president of the Royal College of Psychiatrists.

Bailey is a straight-talking and no-nonsense psychiatrist with a grounded Manchester accent. Her handshake is firm, her voice steady, and I sensed right away when she said “I know your work” that she wanted to cut right to the chase.

I had come to see the president of the Royal College of Psychiatrists to discuss the future of psychiatry. I started by raising with her a crucial event that was about to occur in the profession. This concerned an editorial that would soon be published in the British Journal of Psychiatry, one of the most respected and widely read psychiatry journals internationally. The editorial was written by twenty-nine senior consultant psychiatrists, all members of the Royal College of Psychiatrists, and all of whom expressed deep concern about the current state of psychiatry. Their article is significant because it runs counter to the growing professional belief, many times articulated in the same journal, that the current crisis in psychiatry can best be solved by reaffirming psychiatry’s identity as a discipline essentially concerned with studying, diagnosing, and pharmacologically treating “brain-based” mental diseases.

Although the authors of the editorial do not deny that the brain sciences and psychopharmacology have a role to play in psychiatry, they insist that the dominant “medical model,” which sees the diagnosis of brain disorders and the prescription of drugs as psychiatry’s primary task, has not only failed the test of science but is also failing to deliver the clinical results it promised. They therefore argue that the “medical model” must take a backseat to interventions and methods that really do work. In essence, the editorial requests a paradigm shift away from the medical model and toward an approach that prioritizes healing relationships with people, helping people find meaning in their lives, and using therapies and other social/humanistic interventions as the first line of treatment.

Once I had outlined the editorial’s main arguments to Sue Bailey, I asked her whether she would like to comment.

“Okay,” said Bailey, sitting up presidentially, “I prefer not to comment because I think their vision is quite limited, really. So while I am very fond of them, I think we can have a better vision than the vision they’ve got.”

I gently pointed out that their position simply builds upon the mounting evidence that the current medical model is under considerable strain. Not only have biological markers still to be found for the majority of mental disorders, but the medical model is also leading to the widespread over-medicalization of our problems and to the over-prescribing of often dangerous and ineffective drugs. I asked what she made of those charges.

“Let’s pick those points off,” said Bailey impatiently. “It is quite interesting that over-medicalization is leveled at psychiatry [i.e., the idea that psychiatry is wrongly recasting many normal and natural human responses into medical conditions requiring drug treatment]. It would probably be better leveled at primary care and GPs. When you go into a profession where you want to help people, and you don’t have the tools to help them, the temptation is to medicalize them.”

I tried to ask Bailey why she held GPs responsible when it was psychiatry that had put these drugs on the map, promoted their value to GPs, and dramatically expanded the number of mental disorders for which these pills can be prescribed. But as I began to speak, Bailey raised her hand, indicating she had more to say. And while what she said did not make for easy listening, it is still worth quoting in full:

“So let’s focus back on the critical psychiatrists. They are saying that many problems [treated as mental health problems] are not medical issues. They may be right, but does it actually make any difference to the person who is in distress? The person is in distress for some reason, and that may be because they have had a bad day, or it may be that they have been traumatized and abused by Jimmy Savile,209 for instance. And it’s now in the newspapers and they are worried about it.

“Now, it may not need medical treatment, but they need some support, to be listened to. I have no problem with that. I have a problem with [Bailey names a senior critical psychiatrist] accusing us of over-medicalizing problems. But you’d need to talk to a neuroscientist about that. There isn’t much evidence about biomedical markers [for mental disorders]. I am more on the side of social science; that is what I do, I do qualitative research. But actually there is increasing evidence of biomedical markers. But it would be really useful if the neuroscientists on one end, the social scientists somewhere in the middle, and the critical psychiatrists at the other end would all get over themselves and actually look at this properly.

“You’ve got a human in front of you who has come to you for a reason, and who wants help. And the job is to listen to them and to try and disentangle it, and some of it will be about social support, some will be about advice about how they are living their life, and some of them temporarily will have a distress—however that is diagnosed in the new classification system. Some of them will sit there with a rancid depression that no one has yet taken any notice of. What we are about is a patient-doctor dyad that’s trying to understand the dilemma that comes into the room, and that’s what we have got.

“And I think that’s what we do in mental health. We care and treat. So over-medicalization, yes, may be happening, but at the other end we’ve got [suffering people]. I mean, this morning I’ve been to Deaths in Custody where 200 young people have committed suicide whilst in juvenile prisons. So the father of one of these very bravely came and told the young man’s story, and probably the main reason why he committed suicide was that nobody recognized he had ADHD …

“So I suppose I don’t have a lot of time for people who are fixed to one theory and one point of view. We are in the business of understanding human nature and then doing what we can, within the evidence base, whether it’s from a medication evidence base, a talking therapies evidence base, an alternative therapies evidence base—we do the best we can.”

I took a deep breath. I was not sure where to go from here. I was struggling to understand what Bailey was actually saying. Was she saying that GPs are medicalizing our normal and natural responses to the problems of living, but not psychiatrists? And was she saying that the writers of the editorial are “fixed to one theory” by arguing that psychiatry relies too heavily on the medical model, which privileges diagnosis and medication?

As the interview continued, I therefore tried harder to pinpoint precisely where the institutional leader of British psychiatry stood. As we went back and forth, what gradually emerged was that Bailey viewed the writers of the editorial as indeed “fixed to one theory,” a theory that she felt contradicted her more eclectic vision: that we must use medication, diagnosis, the insights of social science, psychotherapy, neurobiology, and social care as combined resources to help patients.

When I pointed out that the authors of the editorial also seemed to share this more eclectic view (aside from their request for the medical model to take more of a backseat), her eyes suddenly narrowed. “I say I am different from them because they are zealots in their own model. And I think any zealot-driven model is a bad idea.”

This seemed to me a strong statement to make. So I asked Dr. Sami Timimi, a British consultant psychiatrist, a director of medical education in the NHS, and a contributor to the editorial, whether he felt such a statement was fair.

“Positioning critical psychiatry as extreme and zealous,” responded Timimi coolly, “seems to me to reflect a possible misunderstanding of our position. She seems to think that we are proposing an alternative, one-dimensional model that can be applied across psychiatry. But that is what we are arguing against, because that is what the medical model has done: prioritized a narrow focus on the diagnosis of symptoms and prescribing. It also sounds like she is having great difficulty imagining that psychiatric services could exist if diagnosis, for example, didn’t play a central role.

“Now as far as I can see, all the evidence says that diagnosis hasn’t helped in terms of forwarding the science; there is virtually no concrete evidence linking our diagnostic categories to either biological or even psychological markers. But even if we put that to one side and look at how clinically useful our current diagnostic system is, all the evidence suggests that in terms of helping us make useful clinical decisions, the current diagnostic framework doesn’t help at all. In fact, it seems to do the opposite.”

What the evidence shows, according to Timimi, is that what matters most in mental health care is not diagnosing problems and prescribing medication, but developing meaningful relationships with sufferers with the aim of cultivating insight into their problems so that the right interventions can be individually tailored to their needs. Sometimes this means giving meds, but more often it does not. The problem with simply putting labels on people, Timimi believes, is that it often ends up medicalizing problems that are not medical in nature. And this is not helped by successive expansions of the DSM and ICD, which encourage practitioners to wrongly recast more and more emotional troubles as mental disorders warranting pharmaceutical treatment.

As this last criticism had also been made by many other people I’d interviewed (you’ll recall the chair of DSM-IV, Allen Frances, saying DSM-IV created three new false psychiatric epidemics), I decided to ask Sue Bailey whether she agreed that the expansion of DSM and ICD was a driver of medicalization.

Bailey seemed irritated by the question. “Look, I think there are frankly better things people should be doing with their time. I haven’t actually got a lot of truck with these discussions, if I am honest with you. The majority of people I look after are living in poverty, with inequality, and have experienced abuse. They’ve got undiagnosed, unrecognized mental illnesses. So I actually think that we should focus on the reality of what we can do as doctors rather than having erudite discussions about the various situations of what DSM should have done.”

Again, I was surprised to hear this. So I quoted Bailey’s response to Dr. Timimi. He was less surprised than incredulous. “These debates about medicalization are debates about real things that affect real people in everyday practice. I think what she said shows a staggering intellectual complacency, and a real desire to avoid thinking about the clinical implications of medicalization.”

After all, for the twenty-nine authors of the editorial, the more people whose suffering is wrongly medicalized, the more will be prescribed often dangerous and inefficacious drugs, the more will suffer the stigma of unnecessarily being labeled “mentally ill,” and the fewer will be offered non-medical alternatives. For authors of the editorial like Timimi, then, it is understandable why Bailey’s dismissal of medicalization provoked incredulity, especially because, as Timimi summarized, the evidence shows that the medical model (without which there’d be no medicalization) is not working.

To illustrate this final point, Timimi revealed the results of some research he had recently undertaken. He compared two different mental health teams working in the NHS. One team followed the usual medical model—where diagnosis and drug treatment took precedence—while the other team adopted a “non-diagnostic” approach—where medication is given only sparingly, where diagnosis is hardly used at all, and where individual treatment plans are tailored to the person’s unique needs. Timimi then gave me two clinical examples of how the “non-diagnostic” approach actually works in practice.

In the first example, a young man enters the consulting room displaying behavior that would traditionally warrant the diagnosis of ADHD. But rather than assign the diagnosis, the psychiatrist invites the mother in and takes some family history. It turns out that the son and mother had been living for many years with domestic violence, until the abusive man eventually left. But the boy had been so scarred by the experience that his behavior was now understandably chaotic.

Rather than diagnosing and medicating his behavior, the psychiatrist focused on helping the mother and son gain insight into why the boy was struggling. In one session, for instance, while the psychiatrist was reflecting on the mother’s feelings of guilt and lack of confidence, the mother began to cry. Straight away the boy started rebuking the psychiatrist; he saw the psychiatrist as yet another man hurting his mother. This event opened up a conversation about how hypervigilant the boy had needed to become, and how this was hampering his life. By talking and working things through, together they developed an explanatory framework that began to make sense of why the boy might be struggling with authorities and with being angry generally. As this work continued, the boy, without drugs or a diagnosis, gradually began to improve.

In the second example, the same psychiatrist met the family of a boy who had many learning difficulties but who had worked out how to cover this up, usually by acting the class clown. Again, rather than using the common approach of assigning a diagnosis and prescribing meds, the psychiatrist centered on working with the boy’s local school. In the end, the psychiatrist helped the boy move to another school where more classroom support could be offered and where the boy could enjoy a fresh start. Once again, this simple non-medical strategy worked to good effect.

The psychiatrist in both examples spent far more time with his patients than would have been required had he merely assigned a diagnosis and prescribed medication. But the results were worth the additional time, effort, and of course money. In fact, when Timimi assessed how well patients did in the non-diagnostic approach compared to those treated via the medical model, the differences were dramatic: Only 9 percent of patients treated by the non-diagnostic approach continued needing treatment after two years, compared with 34 percent of patients who were being treated via the medical model. Furthermore, only one person from the non-diagnostic approach ended up having to be hospitalized, whereas over fifteen people in the medical-model team were referred for in-patient hospital treatment. Finally, the non-diagnostic approach led to more people being discharged more quickly and to the lowest patient “no show” rate out of all the mental health teams in the county.

“So we know the non-diagnostic approach was effective because we measured the outcomes,” said Timimi. “It had the lowest use of medication, the lowest use of inpatient treatment, but the highest recovery rate. So research like this shows that a non-diagnostic approach can not only work, but it can work a lot better than the current medical one we have.”

This view was further confirmed for Timimi when, after his study, he became consultant to one of the less successful mental health teams in his county, where the medical model was firmly entrenched. What he found was that most patients had been in the service for many years, had become psychologically dependent upon their pills, and were thoroughly acculturated into thinking of their problems in terms of diagnoses like ADHD, bipolar disorder, depression, and so on. The medical model seemed to be helping create patients with chronic conditions, conditions that were products not of their biology but of how their problems were being medically managed. The medical model was, in effect, creating the circumstances in which patients stayed unwell for longer.

Timimi’s approach is not unique. Many of the psychiatrists contributing to the editorial embrace similar methods, as do increasing numbers of others. When I interviewed Dr. Peter Breggin, another internationally renowned critical psychiatrist, based in the United States, it was clear he had long ago rejected the medical model, and to very good effect. “The model I prefer to use is a person-centered team approach,” said Breggin, “where the prescriber and the therapist work with the family and the patient. This approach is centered around the person, and what the patient really wants, feels, and needs.”

For Breggin, this did not mean simply diagnosing and medicating people outside their context, but sidestepping the medical model and focusing on the matrix of relationships in which the person habitually operates. “If I get a child who is labeled ADHD and is on stimulants, I just work with the family in the office. I take the kid off the drug and work with the family in an honest and caring atmosphere, which is something kids love, working out what went wrong. Even if the child is ‘psychotic,’ I guarantee that in time these problems can be fixed if we all work together responsibly. And we are getting amazing results.”

For Breggin, most problems are created by the contexts in which people live, and therefore require contextual, not chemical, solutions. “People who are breaking down are often like canaries in a mineshaft,” explained Breggin. “They are a signal of a severe family issue. And sometimes the one who is breaking down is being scapegoated; sometimes they are the most sensitive, creative member of the family, or sometimes they are the one person in the family with a really different personality. You don’t know what is going on, often, but with work you can see the dynamics that have developed in the family that are pulling things down.” For Breggin, because the medical model fails to take context seriously—whether the family or the wider social context—it overlooks the importance of understanding and managing contexts to help the person in distress.

Another consultant psychiatrist and contributor to the editorial, Dr. Pat Bracken, echoed the value of this more person-centered approach. “One of the reasons psychiatry is in crisis is that we are simply overprescribing meds. Lots of people say that the only thing they get from psychiatry is pills. But there are serious questions about whether this approach has delivered. Has it alleviated distress, has it helped people enduring states of madness or depression, has it helped them to move on in their lives? There are serious people standing back from psychiatry, looking at the evidence and saying, hang on a minute here, there is no evidence that this massive expansion of drugs is working. In fact, there is growing concern that this enormous tidal wave of prescribing is actually causing major problems, not least by increasing mortality rates of people experiencing mental illness.” (For more information on this, please see the appendix.)

Bracken argues, in keeping with the editorial, that psychiatry must therefore readjust its relationship to psychopharmacology. “But this is not the same as being anti-drugs,” Bracken was keen to emphasize. “We are very clear in the editorial that there is a role for using medicine to ease human distress. Rather, we believe we need to get balance back into the situation again.”

For Bracken, the balance will return only by dethroning the medical model from the helm of psychiatry, where problems are simply understood as symptoms and signs of underlying illnesses. But he also recognizes this is a tall order. “The medical model was established in the asylums in the early twentieth century,” continued Bracken, “so it has been at the heart of psychiatry’s identity for a long time. This could have changed in the 1970s–’80s when psychiatry moved out of the asylums. At this time the natural process would have been for psychiatry to have become more interested and involved with the social sciences, with efforts to look at what kind of environments are important for people to recover and flourish in.

“But instead, at that very moment, the pharmaceutical industry started to target psychiatry in a way that has allowed the dominance of the medical paradigm to continue. In fact, I think the influence of pharma has made the medical focus in psychiatry even narrower, largely because money speaks. When there is money to support academic departments, psychiatry departments, etc., that all promote this approach, well then, it is not surprising that is what continued.”

Bracken’s message was clear: “We are not saying there is no valuable practice going on in psychiatry. Our point is that when you actually look at how people are helped, it is not all about medication and diagnosis; we are rather focusing on issues to do with negotiation of meaning and context, prioritizing relationships with people, working democratically with other agencies and service users. We are not abandoning our typologies in that move, but we are seeing the use of drugs, the use of therapies, the use of diagnosis, even, as secondary to something more primary. And when psychiatrists in their individual practices do that, I think that’s what people say is a good psychiatrist. So we should start turning the paradigm round, start seeing the non-medical approach as the real work of psychiatry rather than as incidental to the main thrust of the job, which is about diagnosing people and then getting them on the right drugs.”

2

It was clear to me what the future of psychiatry would be if the critical psychiatrists had the power. But as this group was on the institutional outskirts, what I now wanted to know was how the future would look for those with power at the center. Would the future look anything like what the critical psychiatrists hope for, with their desire to relegate the medical model?

I put this question to Sue Bailey, who once again offered a strong response. “The risk [of challenging the medical model] is that we end up without a voice for mental health,” she said. “Despite the best efforts of some senior members of this college, mental disorder is still not recognized by the United Nations and World Health Organization as a non-communicable disease [i.e., as a serious medical illness like heart disease, diabetes, or cancer].”

Getting this recognition was crucial, Bailey believed, because it would finally force governments to take mental health issues seriously, which would in turn increase provision for mental health care. But so long as the profession remained internally divided, mental disorders were less likely to be granted this coveted disease status. “By having the neuroscientists and the critical psychiatrists fighting,” explained Bailey, “I would go as far as to say they are causing harm, because our argument about the importance of mental health is as strong as [the argument about the importance of] cancer. But cancer specialists have these polarized debates in a closed room, and yes, with a bit of blood spilt on the carpet. But when they come out, they speak about cancer in one voice. We do not speak about mental health in one voice, but until we do, with enough common ground, then mental health will not get the attention or service provision it deserves.”

The main problem with Bailey’s quest for professional consensus is that the forces of disagreement are deeply entrenched. How can you get the brain-disease psychiatrists and the critical psychiatrists to agree on a common vision of mental distress when their understandings of suffering, of etiology, of treatment, of diagnosis are fundamentally at odds? It is like asking Buddhists and Christians to agree that their views of the afterlife are, at bottom, really the same.

Furthermore, Bailey’s request for professional agreement is hobbled by her hope that those critical of the medical model will help form a consensus that would ultimately allow mental disorder to attain “disease status.” But why would the critical psychiatrists do that? Their whole point is that to think of suffering in “disease terms” is to fundamentally misunderstand what we are dealing with.

3

Once I had closed the door of the Royal College of Psychiatrists behind me and stood again outside on Belgrave Square, I felt a sudden and gloomy wave of certainty that Sue Bailey’s position was an impossible one. In many ways, I could understand her exasperation—the reconciliation she sought was so improbable that her solution offered very little hope. I would therefore need to turn to others I had spoken to, people unconstrained by having to juggle many competing interests, who had thought through alternative ways in which psychiatry could be reformed.

When I summarized the various views I had heard, they broadly boiled down to four propositions. So let me briefly state them here:

1. Psychiatry must cultivate greater professional modesty about what it can know, treat, and achieve.
2. The relationship between the pharmaceutical and psychiatric industries must be reformed.
3. The training of our future psychiatrists must change.
4. The public needs to become better informed about the current state of psychiatry and, if the mental health industry does not reform, be prepared to vote with their feet.

I realize that an entire book could be written about each of these propositions. But I have nothing like that kind of time. So allow me to just give you a very brief summary of each in turn.

The first proposition I heard again and again. For example, I interviewed Professor Thomas Szasz, one of the most influential critical psychiatrists of the twentieth century and author of the now classic The Myth of Mental Illness. He had always argued that the biological philosophy of suffering underpinning psychiatry was causing more problems than it solved, so I asked him what his philosophy looked like.

“My understanding of emotional suffering,” answered Szasz in considered tones, “and I hope I won’t be misunderstood, is no different from the traditional understanding of the Jews, Christians, or Muslims of emotional suffering. Suffering is life. God didn’t put us on this earth, assuming that he did, to be happy. Life, as I put it humorously, is not a picnic. It makes no difference if you are a king or a pope or a tyrant.”

I asked Szasz what it is about our period that makes it normal for everyone to be medicated when misery strikes. Why do we believe as a culture that suffering must be removed chemically, rather than understood in many cases as a natural human phenomenon, and possibly something from which we can learn and grow if worked through productively?

“In one sentence, and my statement is not original, our age has replaced a religious point of view with a pseudoscientific point of view,” responded Szasz. “Now everything is explained in terms of molecules and atoms and brain scans. It is a reduction of the human being to a biological machine. We don’t have existential or religious or mental suffering anymore. Instead, we have brain disorders. But the brain has nothing to do with it, except that it is an organ necessary for thinking.”

“So by reducing everything to the physical,” I clarified, “have we distorted our understanding of the meaning and potential purpose of emotional discontent? We have turned it into a malady from which we need to be cured?”

“You put it perfectly. And that’s why people keep looking to novels, to writers, to the cinema, and to the theater to see emotional stories acted out and lived out. They don’t consult psychiatric textbooks.”

Sadly, a month after I interviewed Szasz he died at his home in Syracuse. He was ninety-two. I was probably the last person to have interviewed him. But among what must have been some of his closing words on psychiatry, he left us with the following statements, which I summarize: It was hubris for medicine to try to manage realms of life it was never designed to treat. It had become deluded in its belief that its physical technologies, its electroshock machines, and its laboratory-manufactured molecules could solve the deeper dilemmas of the soul, society, and self. Professional grandiosity had blinded both the industry and a growing public to psychiatry’s serious philosophical and technical limitations, and this had been compounded by the demise of traditional systems of meaning that once provided alternative solutions to the riddles of human despair.

If Szasz was correct that psychiatry is misguided about what it could hope to achieve, I now wondered how our culture would alternatively respond to the millions of suffering people who each year seek out psychiatric drugs and explanations for their pain. I decided to put this difficult question to consultant psychiatrist Pat Bracken, who had also written extensively on the limitations of the dominant biomedical approach.

“There are no easy answers,” said Bracken with a sigh. “This widespread suffering may actually be a social phenomenon. Go back a few decades and you would have seen a much more central role for the church. The Sri Lankan anthropologist Gananath Obeyesekere has talked about this a lot, about the crucial role of culture in handling people’s distress, giving people words, giving people paths, giving people rituals through which they can find some peace in this world. Religion has often played that role. But in a post-religious secular society, what happens when we don’t have religion to do those kinds of things? What do we do then?

“While turning to medicine or therapy may be appropriate in some cases, this is a far bigger problem than can be answered by medicine or psychotherapy alone. What complicates things more is that we also live in a capitalist society, where there is always going to be someone trying to sell you something—whether a drug or a psychotherapeutic session. In fact, some people would argue that capitalism can only continue by constantly making us dissatisfied with our lives so that there is always something new to sell us. It is constantly in the business of churning up our desires. You know, if everybody said I am very happy with my television, my car, and everything else I’ve got, and I am perfectly content with my lifestyle, the whole economy would come shattering down around us.

“And that’s one of the ironies of our current recession—we are constantly saying we must get the economy going, we must consume more, we must buy more, but this all relies on our dissatisfaction. So I don’t have an easy answer for the question of what we can do with the huge numbers of the ‘worried well’ who now rely on psychiatric drugs. But what I am saying is that it is a vast sociological, anthropological, and almost spiritual problem for human beings. So the idea that medicine is going to come up with a neat answer is far from the truth. In fact, the belief that it can is also behind the rise of antidepressants and other drugs. But the only people who have benefited from that are those working in the drug companies.”

Bracken felt that his many years working as a consultant psychiatrist had taught him that what we customarily call mental illness is not illness in the medical sense. It is often a natural outcome of struggling to make our way in a world where the traditional guides, props, and understandings are rapidly disappearing and where negative experiences often blight our lives. Our dominant worldview, meanwhile, is driven by barely perceptible capitalist imperatives: to work, to earn, to attain profit, to succeed, to consume. Not all mental strife is therefore due to an internal malfunction; much of it is the outcome of living in a malfunctioning world.

The solution is therefore not yet more medicalization, but an overhaul of our cultural beliefs, a re-infusing of life with spiritual, religious, or humanistic meaning, with emphasis on the essential involvement of community, and with whatever helps bring us greater direction, understanding, courage, and purpose. This is something way beyond what the medical model can offer, with its technological outlook and its financial entanglements with key industries in the capitalist machine.

“What may also be needed,” said Ethan Watters, “is for people to become de-enthused.” So long as people continue to defer to psychiatric myths of biological breakdown and chemical salvation, the status quo in psychiatry will remain, including its many worrying excesses. “So if psychiatry itself starts to lose some of its status,” continued Watters, “by having to start proving its legitimacy in terms of outcome studies, then that’s all for the better, because consumers will know a little more about what they are getting and the enthusiasms will weaken.” Watters’s view is that as people become more aware of psychiatry’s excesses, yes, it will lose some legitimacy; but insofar as this will bring public expectations back into line with what psychiatry can actually deliver, the change should be welcomed.

While it was interesting for me to hear these various pleas for greater professional modesty, it also struck me how at variance they were with what was actually happening on the ground. We now know, for example, that consecutive editions of the DSM and ICD keep expanding the number of diagnoses believed to exist. We know that psychotropic prescriptions are rising year on year. We know that public dependence on psychiatry is at an all-time high, and we know that the alternative systems of meaning through which we once managed and understood our discontent (religious, philosophical, spiritual, etc.) are losing their appeal for increasing numbers of people. So if greater professional modesty does not seem to be on the horizon, where do we go from here?

This moves us to proposition number two: reforming the relationship between the pharmaceutical and psychiatric industries. We have seen how the medical model would never have attained such power and influence without the financial backing of the pharmaceutical industry. But we have also seen how the full extent of this patronage has not been made fully transparent to the public. Again and again I heard from critics a demand for this culture of concealment to end.

In this area, at least, there may be a sliver of good news. I have already spoken about the Sunshine Act in the United States, soon to be implemented by the Obama administration, under which doctors will increasingly be forced to declare their pharmaceutical ties. This may discourage some of the excesses we have seen by shaming the unscrupulous into more ethical behavior. In Britain the situation is less clear. Sue Bailey, for example, assured me that the EU has now formed an “Ethical Life Science Group,” which will keep better track of industry payments to institutions and doctors, and that the Royal College is now “conducting its own internal survey asking members to report whether the organization they work for receives industry money.” These changes are to be welcomed. But do they go far enough?

My belief is that until we have a national online register where you can freely check what a given psychiatrist, researcher, psychiatric department, or mental health organization is being paid, and by whom, internal surveys count for very little, because the figures will remain a professional secret. After all, you have a right to know whether the psychiatrist who has just prescribed you or your child a powerful drug is being paid by the company that makes that drug. You also have a right to know whether a mental health organization that speaks favorably about antidepressants receives yearly donations from antidepressant manufacturers. Until there are public websites where such payments are made fully transparent, and which therefore enable the full extent of the problem to become clear, the real debates about how to reform industry ties won’t even begin.

Should there be limits placed on what doctors receive yearly? To what extent should industry payments be donated to charity? To what extent should industry service be undertaken on an unpaid, voluntary basis (with companies reimbursing the NHS for doctors’ time)? These are no doubt thorny issues, which warrant long and hard debate. But right now these debates are not only avoided, they aren’t even being proposed in the places that count.

This brings us to the third proposition: the changes needed in the training of our future psychiatrists. At present, much psychiatric training in the UK and United States pays only cursory lip service to academic critiques of the bio-psychiatric worldview. Serious anthropological, sociological, or philosophical critiques of the medical model are seen at best as interesting sidelines to what psychiatrists actually do. What is generally not imparted is a thorough and lasting social, critical, or historical awareness of what trainees are participating in, how they are participating in it, and in what ways this participation ultimately sustains practices disadvantageous to the patient community. Trainees are not educated to doubt or even question the system in any constructive way, but only to be certain in its application. As one trainee recently put it to me, “The critiques are all very well, but I did not come into medicine to critique medicine but to do medicine.” The assumption here is that what one thinks has little bearing on what one does.

Until a new, critically reflective generation of psychiatrists emerges, nothing will change. But right now such robust critical thinking is far from being instilled in the new generation. As Pat Bracken said frankly regarding training in the UK, “What I hear from the trainees working with me is that the exams are very much heavily skewed toward learning facts, diagnostic categories, and causal models all framed in the medical model, as though you can teach psychiatry in the same way as you teach respiratory medicine or endocrinology.”

This point was also echoed by the consultant psychiatrist Duncan Double, who has studied psychiatric training in Britain. As he writes, in Britain today there is still “an orthodox medical approach to the problems of interpreting and treating mental disorders,” and “any challenge to this orthodoxy is suppressed by mainstream psychiatry.”

Double provided a couple of examples of how this orthodoxy plays itself out. He recalled a consultant psychiatrist who took a critical approach himself, but who confessed that he nevertheless trained his juniors conventionally to prepare them for the Membership examination of the Royal College of Psychiatrists. What troubled this psychiatrist most was not just that he was acting against his convictions, but that by the time these trainees had passed their professional tests their critical sensitivities had been eroded. Double also recalled a consultant in psychiatry who had once ruefully remarked that she had become “irretrievably biological” in her approach. Although this was regarded as an acceptable outcome of her training, she felt unable to deal with any criticism of psychiatry. In short, her training had closed her off not only to the limits of what she was doing, but also to any seriously considered non-biomedical alternatives.210

When I interviewed Duncan Double I asked him for more specifics, beyond the anecdotal, about how training looks today. I asked whether today’s trainees are obliged to read, for example, works by Dr. Paula J. Caplan, Professor Irving Kirsch, Professor Joanna Moncrieff, Professor David Healy, and others. “No, those sorts of books would be largely ignored,” responded Double, as though I had said something very naïve.

“But how about other critical perspectives?” I pressed. “Would there be any critical scrutiny of, say, the construction of the DSM, psychiatry’s relationship to the pharmaceutical industry, of the biomedical philosophy of suffering, of psychiatry’s wider socio-cultural history?”

“There would be very little of this, really,” said Double, who then explained that a serious problem with introducing these perspectives is that outside criticism of the profession is often too readily dismissed by many as a kind of anti-psychiatry. “The problem with this dismissive view,” continued Double, “is that young psychiatrists are often fearful of being identified with critical positions because they think it may actually affect their progression in their careers.”

If we accept that propositions one, two, and three are far from being realized, we could be forgiven for thinking that psychiatry’s future, and therefore the future of the entire mental health system, will continue along the same lines as at present. And this is why the fourth and final proposition seems to many the most important of all: the public needs to become better informed about the current state of psychiatry and, if the mental health industry does not reform, be prepared to vote with their feet.

Time and again, commentators on both sides of the Atlantic, who despaired about the likelihood of internal reform, raised this crucial point with me. As Breggin put it, “The only thing that is going to change things is if people literally stop going. My own belief is that this is because psychiatry is a money-making self-contained machine, which is by definition resistant to change from the inside.”

“Supposing you are right,” I said to Breggin, “then how do you get people to stop going? How is that going to happen?”

“I think we need braver journalists and authors, dissident psychiatrists and psychologists,” answered Breggin. “This has to be an educational movement, a political movement. We need grassroots disillusionment among professionals, among consumers, and the sciences. From this we can only hope there will be manifested new kinds of organizations, research, and journals.”

Could this be the route to reform, then—a reliance on us? From everything I have learned from all my encounters, I have to say that, as inadequate as this solution feels, it may well offer the most hope in the coming years. As Timimi put it, “The things that get powerful institutions to change don’t usually come from inside those institutions. They usually come from outside. So anything that can put pressure on psychiatry as an institution to critique its concepts and reform its ways must surely be a good thing.”

4

It is 2:00 a.m. on November 29, 2012. I am sitting alone at my kitchen table, the lights are dimmed, the shutters closed, and I am writing these words with only a large pot of coffee to prop me up. My wife and baby daughter are tucked up safely in bed. I wish I were there too. But I know this will be a long night. My publishers expect this book by the morning. There have been too many nights like this in recent months, too much fretting, too much vacillation, too few hours to spare. Don’t get me wrong, I have enjoyed the journey I have been on immensely; but even so, I am now looking forward to a little rest, and most of all to spending more time with my family.

But even during those times when I have been away from them, during my various lockdowns in airport lounges, my disrupted nights in strange hotels, my long and uncomfortable journeys by car, plane, and bus, I haven’t in truth ever been entirely alone. Because you, the reader, have always been there. You have been at the front of my mind as I have struggled to honestly and clearly communicate the revelations set down in these pages. I know that not everything I have said will resonate with you, and that you will formulate your own opinions. But I also hope that some of the facts here disclosed will leave their mark on your views and future actions.

I believe that psychiatry is not the enemy, that the people I have disagreed with are not the enemy. No, I believe the only enemy is anything that actively tries to conceal the inconvenient facts. And the good news is that today in psychiatry the cell of concealment is cracking. The truths are finally seeping out. What role you choose to play in furthering this seepage is naturally a matter for you alone. But insofar as I have succeeded in making what is not generally known more freely available, I have to admit it is with a lighter heart that I finally bring this book to a close and bid you good night.

Appendix

Antipsychotics (neuroleptics)—Breaking the Brain?

As I have not really discussed the effects of antipsychotic medications in the main body of this book, and because this is such an important topic, let me at the very least say a few brief words before I close. What follows can in no way do justice to the vast body of research exploring the pros and cons of antipsychotic medication—their benefits, their side effects, and their harmful neurological effects. As I can only provide the most cursory survey here, I have placed a short list of useful books at the end of this appendix for those interested in pursuing this topic further. In what follows I draw upon those books, especially the excellent work of Robert Whitaker.

While it is true that antipsychotics can help certain people some of the time, inflated claims about their ability to “cure” mental disorders are as unsubstantiated as are claims made for the “curing” powers of antidepressants. Like the claims for antidepressants, those made in favor of antipsychotics must also be weighed against the facts. And any serious reading of the research literature on antipsychotics reveals that the facts are problematic.

Antipsychotics (or neuroleptics, as they are more technically known) are administered for what are considered to be the more severe forms of mental disorder, including schizophrenia, bipolar disorder, delusional disorder, and psychotic depression. These drugs are broadly classed under first-, second-, and third-generation headings—third-generation being the newest wave of antipsychotic medications. All antipsychotics are said to work in a similar way: most purport to block receptors for dopamine (the chemical thought responsible for psychotic experiences) in different brain pathways. It is important to note, however, that just as for the antidepressants, there is no research confirming that antipsychotics fix any known brain abnormality or that they “rebalance” brain chemistry to some optimal level.211 Rather, like any mood-altering substance, they simply alter the brain’s functioning, and in ways that are still mostly opaque to researchers.

Despite the uncertainty as to how antipsychotics work neurologically, studies of their effects over the short term do reveal them to have moderately better results than placebos at reducing certain psychotic symptoms and at helping people stabilize after a psychotic episode. That conclusion is now rarely disputed. What is disputed, however, is the precise value of this “stabilized” state, especially for people who may have recovered spontaneously or may have responded well to other therapeutic support. For instance, using antipsychotics over the short term does not just lead to a reduction in the intensity of psychotic symptoms. Their effects are less specific than that: they also diminish other physical and emotional functions integral to all our mental activity. Psychiatrist Peter Breggin has referred to this state of diminished activity as “deactivation.”

To give a sense of what the very common experience of deactivation looks like for people taking antipsychotics, let me quote a patient’s description of deactivation documented in the British Medical Journal. While this patient felt his psychotic symptoms had somewhat abated after antipsychotic use, he also bemoaned other disturbing changes: “My personality has been so stifled that sometimes I think the richness of my pre-injections days—even with brief outbursts of madness—is preferable to the numbed cabbage I have now become … In losing my periods of madness I have had to pay with my soul.”212

It is of course up to the individual patient to decide whether paying with one’s soul is an acceptable price for the mitigation of their symptoms, assuming first that the patient is in an able enough emotional state to make this decision for himself, and second that the patient has been provided by his doctor with the information necessary for making an informed decision (which generally does not happen).

So let’s provide some of this information here, by way of assessing the long-term effects of antipsychotic treatment. First, despite many claims that neuroleptics improve the outcome of schizophrenia, the actual evidence for this is scant.213 On the contrary, research conducted over consecutive decades since the 1950s indicates that long-term usage may actually make outcomes worse for the majority of patients. A particularly pertinent suite of studies in the 1970s revealed this paradoxical conclusion.

William Carpenter and Thomas McGlashan at the National Institute of Mental Health explored whether patients fared better off drugs. Their study, published in the American Journal of Psychiatry, showed that 35 percent of the non-medicated group relapsed within a year of discharge, compared with 45 percent of the medicated group. This finding was confirmed in a later study by Jonathan Cole, published in 1977 in the American Journal of Psychiatry, which reviewed all of the long-term studies on schizophrenic outcomes. It concluded that at least 50 percent of all schizophrenia patients could fare well without drugs, and it therefore requested a serious reappraisal of current prescription practices.

This was quickly followed by a study published in 1978 in International Pharmacopsychiatry by Maurice Rappaport of the University of California. He followed eighty young males diagnosed with schizophrenia over a period of three years. All had initially been hospitalized, but not all had been administered antipsychotics. He showed that those not treated with antipsychotics, again, had by far the best outcomes: of those treated with antipsychotics, a full 73 percent were rehospitalized, compared with only 47 percent of those who were not given meds. Rappaport thus concluded that “many unmedicated-while-in-hospital patients showed greater long-term improvement, less pathology at follow-up, fewer rehospitalizations, and better overall functioning in the community than patients who were given chlorpromazine while in the hospital.”214

Studies like these gained further legitimacy in the 1980s, when two researchers from the University of Illinois, Martin Harrow and Thomas Jobe, began a long-term study of sixty-four newly diagnosed schizophrenia patients. Every few years the researchers assessed their progress. They asked whether the patients’ symptoms had decreased, whether they were recovering, how they were getting on with their lives, and whether they were still taking antipsychotics. After observing the patients for two years, they noticed that the experience of those still taking antipsychotics began to diverge from that of those who weren’t. By four and a half years into the study, nearly 40 percent of those who had stopped taking medication were “in recovery” and more than 60 percent were working. Of those still taking medication, by contrast, only 6 percent were “in recovery” and few were working.

As the years unfolded these dramatic differences remained: at the fifteen-year follow-up assessment, 40 percent of those off drugs were in recovery, versus 5 percent of the medicated group. These results were published in 2007 in The Journal of Nervous and Mental Disease and led the authors to conclude that patients who had remained on medication did far worse than patients who had stopped their medication—this latter group were more successful in their lives and at work.215

Research like this has raised serious concerns that have often been sidelined by many working within psychiatry. One of particular importance pertains to the precise long-term damage that antipsychotics can cause the brain. That is to say, can the neurological changes antipsychotics induce explain the worse outcomes of medicated patients?

Writer Robert Whitaker has closely investigated this question in his book Anatomy of an Epidemic. He builds upon the research of two physicians, Guy Chouinard and Barry Jones of McGill University, which provides a neurological explanation for why patients taking antipsychotic medication long-term regularly fare worse than non-medicated groups. They start from the accepted position that antipsychotics work by blocking dopamine receptors in the brain, for which, they theorize, the brain then compensates by increasing the number of dopamine receptors (by up to 30 percent). This leaves the brain hypersensitive to dopamine once the medication is stopped—a hypersensitivity believed responsible for further psychotic experiences and thus further “relapses.” Whitaker summarizes the problem thus:

Neuroleptics [antipsychotics] put a brake on dopamine transmission, and in response the brain puts down the dopamine accelerator (the extra D2 receptors). If the drug is abruptly withdrawn, the brake on dopamine is suddenly released while the accelerator is still pressed to the floor. The system is now wildly out of balance, and just as a car might careen out of control, so too the dopaminergic pathways in the brain. The dopaminergic neurons in the basal ganglia may fire so rapidly that the patient withdrawing from the drugs suffers weird tics, agitation, and other motor abnormalities; the same out-of-control firing is happening with the dopaminergic pathway to the limbic region, and that may lead to “psychotic relapse or deterioration.”216

What Whitaker argues is that Chouinard and Jones’s work had, in theory, ascertained why the outcomes of people taking antipsychotics were comparatively so poor: medication escalates the likelihood of relapse by leaving the brain hypersensitive to dopamine once the person has ceased taking it. As such relapses are then treated with yet more medication, which in turn raises the likelihood of yet another relapse, a vicious spiral of worsening mental health ensues.

It is not just the dopamine system that Whitaker argues is being damaged by long-term antipsychotic use. Recent studies on macaque monkeys, for instance, have shown that after two years on either haloperidol or olanzapine, there was an observable shrinkage in the monkeys’ brain tissue.217 And similar structural changes have also been discerned in humans. One study showed that antipsychotics could increase the volume of the basal ganglia and, as with the monkeys, reduce gray matter in certain brain regions.218

Furthermore, there is no definitive research indicating that these brain changes are in any way positive. Just like the brain changes sustained by long-term drug or alcohol abuse, these changes may well be both deleterious and irreversible. For example, some patients who take antipsychotics long-term develop what is called “tardive dyskinesia” (TD), a gross motor dysfunction characterized by sudden, uncontrollable movements of voluntary muscle groups (e.g., constant movements of the mouth, tongue, jaw, and cheeks; constant tongue protrusion or squirming and twisting). The problem with this condition is that it persists after the drugs are withdrawn, which has been taken as evidence that they cause permanent brain damage. Whitaker argues that antipsychotic usage may therefore not be curing the brain but damaging it. This leads him to offer the following reflections:

This does not mean that antipsychotics don’t have a place in psychiatry’s toolbox. But it does mean that psychiatry’s use of these drugs needs to be rethought, and fortunately a model of care pioneered by a Finnish group in western Lapland provides us with an example of the benefit that can come from doing so. Twenty years ago, they began using antipsychotics in a selective, cautious manner, and today the long-term outcomes of their first-episode psychotic patients are astonishingly good. At the end of five years, 85% of their patients are either working or back in school, and only 20% are taking antipsychotics.219

Outcomes for serious mental illness remain disheartening, despite the increasing use of antipsychotics. Many psychiatrists blame these poor results on the nature of the disorders themselves. As one senior psychiatrist put it to me, “Schizophrenia and bipolar disorder are chronic, life-long conditions, so relapses are to be expected.” This psychiatrist did not ask whether such chronicity could be partly or entirely drug-related, an interpretation supported by studies showing that schizophrenia often has much better outcomes in places where antipsychotics are less aggressively and less frequently prescribed.

I have hardly been able to do full justice to the controversial and counterintuitive points discussed above, but fortunately there are many sources that can help you explore these important matters further. Here are some particularly well-researched and trustworthy books:

Breggin, P. R., and Cohen, D. (1999) Your Drug May Be Your Problem: How and Why to Stop Taking Psychiatric Drugs. Massachusetts: Perseus Books.

Healy, D. (2008) Psychiatric Drugs Explained. London: Churchill Livingstone.

Moncrieff, J. (2009) The Myth of the Chemical Cure: A Critique of Psychiatric Drug Treatment. London: Palgrave Macmillan.

Whitaker, R. (2010) Anatomy of an Epidemic. New York: Broadway Books.