In Chapters 1 and 2 we tried to demonstrate that depression has been recognized throughout history. In ancient times, the problem was called melancholia to signify its association with black bile and humoral imbalances. Treatments that existed were largely targeted at restoring balance, for example, through the use of herbs and concoctions, of purgatives to flush out toxins from the gut, or of leeches to remove impurities from the blood. In the Middle Ages, when underlying physical causes of melancholia were rejected, treatments were targeted at the supernatural or evil forces that were assumed to have possessed the person, leading to punitive interventions ranging from the use of the earliest forms of straitjackets to witch-hunts. In the 17th and 18th centuries, mechanical and circulatory theories of the causes of melancholia increased in popularity. Treatments included mechanical devices to induce vomiting (e.g. whirling chairs) or swing chairs to excite the patient and overcome apathy.
The earliest individual treatments that were devised had little or no chance of succeeding, as the ideas about the causes of depression bore minimal resemblance to what we now think (as discussed in Chapter 4). As such, the major intervention for many centuries involved the removal of the individual with melancholia from their home environment. The first known asylums came into existence in Baghdad in about AD 705, and Muslim physicians were renowned for their humane approaches to patients. In Europe, monasteries were the main source of care until asylums began to be introduced in the 1300s. However, the primary role of these institutions was to provide custodial care to keep people with mental illnesses away from society, and it was not until the 18th century that reformers such as Pinel in France and William Tuke in England began to change the role of asylums into more therapeutic environments.
By the turn of the 20th century, when Kraepelin’s classification of mental disorders held sway, individuals with manic depression and melancholia were still more likely to be admitted to an asylum. However, the diagnosis of depression was now applied to a much broader spectrum of individuals, many of whom had conditions that fitted Freud’s view of the neuroses, and more and more of these individuals were treated in outpatient settings.
To give a flavour of the evolution of interventions for depression in the 20th century, we will first discuss the treatments that were introduced for people admitted to asylums, such as sedation (barbiturates, insulin coma therapy) and physical treatments (electroconvulsive therapy, transcranial magnetic stimulation, and psychosurgery). This is followed by a discussion of the development of the medications used today for inpatients and outpatients, such as antidepressants and the mood stabilizer lithium. Finally, we discuss psychotherapies, which are primarily used as a treatment for outpatients.
The early drugs used in psychiatry were sedatives, as calming a patient was probably the only treatment that was feasible and available. Sedation also made it easier to manage large numbers of individuals with small numbers of staff at the asylum. Morphine, hyoscine, chloral, and later bromide were all used in this way. The induction of sleep with bromide was first tried as a psychiatric treatment by Neil Macleod, a psychiatrist trained in Edinburgh, in 1899. He used bromide for a patient suffering from acute mania, who slept for days and then awoke ‘cured’. However, bromide sleep was soon abandoned as bromide was found to be toxic and there were fatalities. In the 1920s, a Swiss psychiatrist, Klaesi, used barbiturates to induce prolonged sleep and as a way of calming patients to improve the rapport between patient and doctor and increase the possibility that they could engage in psychotherapy. This treatment became popular, but again a number of deaths led to its discontinuation, although the outpatient use and abuse of barbiturates continued for many decades. In the USA, Harry Stack Sullivan (a psychoanalyst) also suggested the use of alcohol to calm patients sufficiently to allow them to participate in psychotherapy.
Insulin coma therapy came into vogue in the 1930s following the work of Manfred Sakel, a doctor working in a private sanatorium in Berlin. He noted that injections of insulin (a hormone that regulates blood glucose levels) made patients being treated for opiate addiction less agitated. If the doses of insulin were increased, the patients went into a coma, after which they were much calmer and less irritable. Sakel initially proposed this treatment as a cure for schizophrenia, but its use gradually spread to mood disorders, to the extent that asylums in Britain opened so-called insulin units. These were specially designed to administer the treatment for one to three hours at a time, often persisting for two to three months (or sixty or more sessions). Recovery from the coma required administration of glucose, but complications were common and death rates ranged from 1 to 10 per cent.
Insulin coma therapy was initially viewed as having tremendous benefits, but later re-examinations have highlighted that the results could also be explained by a placebo effect associated with the dramatic nature of the process or, tragically, by permanent brain damage: deprivation of glucose supplies to the brain may have reduced the person’s reactivity simply because it had caused lasting injury.
A number of ‘shock therapies’ (later called electroconvulsive treatment or ECT) are described throughout the ages. However, their development from the 1930s onwards was initially stimulated by the now disproved hypothesis that epilepsy could not coexist with schizophrenia or other severe mental disorders such as manic depression. This notion led to the assumption that inducing convulsions in those with severe mental disorders could reduce their symptoms. It was an Italian professor of psychiatry, Ugo Cerletti, and his assistant, Lucio Bini, who were the first to use electricity, rather than chemicals such as camphor, to produce an epileptic fit in humans.
Despite the hypothesis for the mechanism of action of ECT being wrong, it was noted to be effective in reducing symptoms of depression and became very widely used in the 1940s and 1950s. Originally, ECT was given without anaesthetic and was associated with complications such as bone fractures because of the dramatic epileptic fits that were produced. Unsurprisingly, it became a feared treatment and was widely regarded as punitive. There are many graphic descriptions in Western literature of its use as a punishment and of its lasting, negative effects (e.g. A Clockwork Orange; One Flew over the Cuckoo’s Nest), and several famous writers, such as Sylvia Plath in The Bell Jar, describe their negative personal experiences of ECT. The unmodified versions of ECT also had side effects such as memory loss, about which the writer Ernest Hemingway complained bitterly.
Although public surveys suggest some softening of attitudes towards the modified type of ECT used today, it remains the source of much controversy. It is primarily used for severe depression or mania if these problems do not respond to other treatments. The procedures have been refined radically from the earliest primitive interventions depicted in the cinema. For example, an anaesthetic is given so that the patient is unconscious, there is no longer any visible evidence of a convulsion, and the process is closely monitored by measuring the electrical activity of the brain. Whilst these modifications have made ECT somewhat more acceptable to patients and their families, the lack of clarity about how ECT works means that concerns remain. The present hypothesis is that the seizure makes the receptors (docking systems for chemical messengers in the brain) of the brain cells more sensitive to the effects of the messenger chemicals, which in turn send stronger signals around the nervous system and help to correct defective functioning in the neurotransmitter and hormone systems (described in Chapter 3).
In modern psychiatric practice, the main reason for using ECT is that it can produce rapid improvements in symptoms. Hence it is often used when individuals are so depressed that they cannot even eat or drink properly and the depression is no longer a psychiatric crisis but a medical emergency. Interestingly, in those with less severe or less life-threatening episodes of chronic depression, a new treatment, called transcranial magnetic stimulation (TMS), is increasingly being used. This does not require anaesthesia; an electromagnetic coil is placed over the scalp and uses magnetic fields to stimulate nerve cells in the brain to improve symptoms of depression.
The development of psychosurgery as a treatment for mental illness arose from evidence that industrial accidents causing brain damage could be associated with changes in temperament and could render people calmer than they had appeared previously. From the 1890s onwards, observers suggested that similar brain changes could be reproduced surgically by severing the connections between the frontal lobes and the rest of the brain, and that this could be used as a treatment for severe anxiety and depression as it would reduce emotional responsivity. In 1935, the Portuguese neurologist António Egas Moniz described a surgical procedure called ‘leucotomy’ in which part of the frontal lobes of the brain was destroyed using an instrument called a ‘leucotome’. Moniz claimed great success for leucotomy operations and was awarded a Nobel Prize in 1949.
In the USA, Walter Freeman and James Watts developed the technique further into what they called ‘lobotomy’. Lobotomy did make patients calmer, but there was a high price to pay, as it also impaired their judgement and social skills and could cause personality changes. Concern about the risk of abuse of this operation was expressed by the public and in literature and films. A classic and distressing portrayal of its misuse appears in Ken Kesey’s novel and the film One Flew over the Cuckoo’s Nest, where lobotomy is performed on the rebellious Randle McMurphy to punish and control him after he has attacked the leader of the inpatient unit, Nurse Ratched. Few individuals who saw the film can fail to be anti-psychosurgery.
The advent of other treatments and greater scrutiny of the reasons for psychosurgery have been associated with a dramatic decrease in its use in most countries in the last sixty years. During the early 1950s, prefrontal leucotomy was performed on about 14,000 individuals in the United Kingdom, with operations on women outnumbering those on men by about two to one. By the 1970s fewer than 100 operations were performed each year in the United Kingdom, and the current figure is considerably lower (10–20 operations per year). Its use is carefully regulated and the procedure is only carried out in specialized centres for highly selected cases after extensive assessments. These defined circumstances usually involve highly distressing and severely debilitating chronic depression or obsessive compulsive disorder for which no other treatment produces any benefit. The procedure has also changed radically, with the crude approaches used in the early operations being superseded by stereotactic surgery, a computer-guided procedure that places small electrodes within a selected part of the brain associated with emotional control.
The newest surgical procedure introduced in psychiatry is vagus nerve stimulation (VNS), which was originally introduced for individuals with treatment-refractory epilepsy. Although VNS is not technically a psychosurgery procedure, as it does not involve surgery to the brain itself, it does involve the surgical implantation of a pacemaker-like device in the body. A wire attached to the device allows the delivery of brief electrical impulses (about 30 seconds in duration) to the left vagus nerve in the neck. The vagus nerve has numerous connections to many key regions of the brain, and it is believed that stimulating it modifies the activity of some of the brain areas involved in regulating mood. The evidence for the use of VNS is equivocal and it is not recommended in all countries as a treatment for depression. A further drawback is that the response is slow: the benefits of VNS may not become apparent until nine or more months after the device has been implanted. Currently, its use is reserved for carefully selected individuals with treatment-resistant depression.
The general public have long been suspicious about the use of physical treatments for depression that we have described and have repeatedly expressed fears that the treatments will be misused. However, the main reason for the demise of physical treatments was the discovery of drugs to treat specific psychiatric illnesses.
By the 1950s, pharmacology for general medical conditions was developing rapidly. Psychiatrists were also very keen to find drug treatments for use in their speciality, but most discoveries arose as offshoots from research in general medicine. For example, in 1951 Henri Laborit, a surgeon in the French Navy, wanted to find a way of reducing surgical shock, which he thought was mainly a consequence of the anaesthetics then in use. Laborit began to experiment with antihistamines, came across chlorpromazine, and noticed that patients who had been given this drug became less anxious and indifferent to emotional distress or pain. This finding was brought to the attention of Pierre Deniker, a psychiatrist, and Deniker and his colleague Jean Delay started to use chlorpromazine in the Hospital of Saint Anne in Paris.
Deniker and Delay reported that chlorpromazine was very helpful for patients with schizophrenia, mania, and very severe depression. Indeed, individuals who had been institutionalized for years were discharged to live normal lives in the community, leading to overoptimistic predictions that this represented a revolutionary treatment that would lead to the closure of psychiatric hospitals. Although chlorpromazine is more relevant to the treatment of schizophrenia than depression, its discovery and introduction for patients with mental disorders changed the face of psychiatric practice and kindled a new-found enthusiasm to find other medications. One of the first antidepressants, called imipramine, had a chemical structure similar to antihistamines.
With the rise of the monoamine theory of depression came the introduction first of the tricyclics (so called because the chemical compounds comprised three interconnected chemical rings) and then of the monoamine oxidase inhibitors (so called because they blocked the activity of the enzyme monoamine oxidase). These classes of medication increased the amount of monoamine available in the synapse, although different medications sometimes acted more on one monoamine than another. The monoamine oxidase inhibitors were more complicated to prescribe, as they could interact with foods in the normal diet (such as cheese), and so were less widely used than the tricyclics, but both types of drug remained the mainstay of treatment for many years.
The next antidepressants to be introduced again increased the amount of monoamines available in the brain but produced effects in a slightly different way from the first generation drugs. The new drugs were called selective serotonin reuptake inhibitors (the SSRIs), of which the most (in)famous is Prozac. Initially viewed as a significant advance with easier prescribing regimes and different side effect profiles from the old drugs (that made the new medications more acceptable to some patients), the SSRIs and all the so-called second generation antidepressants introduced since then have increasingly been scrutinized and criticized by patients and professionals. These negative reactions are fuelled partly by claims that the benefits may have been overstated because of biased reporting of research results, partly because of the marketing strategies that try to extend the use of the drugs to broad populations of patients, and also because of concerns that SSRIs may increase rather than decrease self-harm in some people or be addictive for others. Some of the concerns about SSRIs have not stood the test of time, but, as noted in Chapter 4, suspicions remain about the relative benefits and risks of these medications and the public and media demonstrate huge ambivalence towards these medications (see Figure 9).
9. Media representations of Prozac.
It was an Australian psychiatrist, John Cade, who discovered that lithium carbonate could be used as a mood stabilizer for severe depression and for mania. In the 1940s, Cade developed a theory that a toxin was responsible for causing mental illness and that the illness abated when the toxin was excreted in urine. Working at the Bundoora Repatriation Hospital in Melbourne, he started to experiment by injecting guinea pigs with urine from manic patients to see if this caused manic symptoms to develop. He used lithium to dissolve what he thought was the toxin (uric acid) so that it could be injected. Cade’s hypothesis was not proven, but he noticed that the guinea pigs receiving the lithium solution became less energetic and slowed down. Cade suggested that lithium could be used to treat mental illness and started to use it in a number of patients with mania, schizophrenia, and depression. He found that lithium had a remarkable effect upon mania but limited effects upon the other conditions. Acute symptoms of mania were effectively cured, and indeed Cade gave lithium to his brother, who had manic depression.
Cade’s work did not initially lead to major changes in the treatment of mania. It was only some years later when Mogens Schou, a Danish psychiatrist, undertook a scientific trial that confirmed Cade’s observation that lithium calmed manic patients. However, there was a further delay before lithium was officially approved for use in patients. This was partly because in the early days it was not clear what a therapeutic dose of lithium should be and too high a dose could lead to potentially fatal lithium toxicity. It was also true that there was little incentive for Pharma companies to produce lithium tablets as lithium was a naturally occurring substance, and so no drug company could patent it or realistically make any profit from it.
Nowadays, lithium is widely used, mainly for bipolar disorder, and it is more effective as an anti-manic than as an antidepressant drug. It is not uniformly the treatment of choice as a mood stabilizer, as its prescription has to be accompanied by regular blood testing and monitoring to prevent toxicity. As such, it is more popular in some countries than others (e.g. it is more widely prescribed in Europe than in the USA). Other medications that may stabilize cell membranes are also prescribed as mood stabilizers, including drugs initially introduced as anti-convulsants (e.g. sodium valproate).
From time to time the idea has been expressed that we should harness the potential effects of a natural salt such as lithium (see Box 7). The argument is that we could increase everyone’s exposure to it with schemes akin to the prevention of tooth decay by fluoridation of water. Such calls often follow media articles such as the one in December 2009 when a report from a Japanese study in Oita suggested that suicide rates were lower in areas where the amount of lithium in tap water was higher—leading to suggestions that lithium should be put in the drinking water.
Lithium was marketed as a tonic in the 1920s.
Charles Leiper Grigg from the Howdy Corporation invented a tonic/hangover cure containing lithium citrate which he called ‘Bib-Label Lithiated Lemon-Lime Soda’. The name was subsequently changed to 7 UP (although it no longer contains lithium)!
Psychological interventions and the notion of talking being part of the therapeutic approach to asylum patients were described prior to the 1930s. However, it was Freud who clarified that talking with patients was not simply a vehicle for expressing empathy and support. He suggested that if the conversation was guided by an underlying psychological theory it could be used to bring about a talking cure. This was (and is) called psychoanalysis and, whilst current views on psychoanalysis are quite polarized, the introduction of a non-physical, non-drug treatment of depression represents one of the most important innovations. We briefly consider Freudian approaches and then discuss current psychotherapy interventions and some of the issues that may act as barriers to wider use of therapies.
Freud utilized his theories of the mind and ego defences and argued that it was important to target the symptoms that he believed represented unconscious conflicts. The therapy was often long with several sessions per week for many years. During therapy the patient lay on a couch while Freud sat behind the person’s head and therefore out of their field of vision. The patient was encouraged to talk of anything that came to mind (a process Freud called free association) or to describe dreams. The therapist was trained to be ‘like a blank canvas’ onto which the patient could project issues from their past or could relive relationship conflicts. The therapist’s skill was in making interpretations about what the patient said or did during therapy. Freud suggested that this process allowed the patient to gain understanding and insight into unconscious conflicts in their life that had generated the symptoms they were experiencing. Developing insight was believed to lead to resolution of symptoms and allow the patient to continue on a path of more healthy personal development.
Freud’s critics point to multiple weaknesses in the model he proposed, and it is easy to see that there are many flaws in this approach. However, it is worth remembering the era in which Freud began to practise psychoanalysis: a cursory glance at the rationale given for the physical treatments of the day, and at their nature, would lead any reasonable observer to conclude that those interventions were equally defective. Perhaps a more telling observation is that most physical treatments have evolved more over time than psychoanalysis has. A further valid criticism of psychoanalysis is that it risked being a rather exclusive club: not only did most patients need to be able to fund private therapy sessions several times a week for many years, but they also needed to be able to express their emotions and difficulties in detail, perhaps indicating a certain level of income and education. This led to claims that the best candidates for these talking therapies were ‘YARVIS’ patients: young, articulate, rich, verbal, intelligent, and successful. Further concerns revolved around the notion that, whilst the development of insight and self-awareness may be helpful, it does not automatically promote change in how people act or cope.
Many of the current briefer interventions that are now available, such as counselling, interpersonal therapy (IPT), and cognitive behaviour therapy (CBT), are suitable for a broader group of depressed patients than Freudian analysis. Furthermore, interventions such as IPT and CBT extend beyond helping people understand their actions and reactions to include specific techniques that explicitly focus on changing behaviours and reducing the risk of future episodes of depression. These therapies also emphasize that the patient and therapist are collaborators in the process of change with a more equal relationship than that adopted in psychoanalysis (where the therapist is clearly in a position of power). Also, new therapies are evolving that combine elements of more than one therapy model, for example cognitive analytic therapy (CAT) combines some of the ideas of psychoanalysis and CBT.
Mindfulness is a more recent addition to mainstream therapy, and is essentially a modern take on meditation as practised in many religions throughout history. Mindfulness therapy encourages individuals to develop moment-by-moment awareness of bodily sensations, thoughts, feelings, and the environment. The therapy uses relaxation and other integrated interventions to help people take a non-judgemental approach towards their thoughts and feelings and to reduce their stress through acceptance and adaptation. If continued as a long-term habit, it can prevent relapse, especially in those who have previously experienced repeated episodes of depression.
Media articles suggest that therapy is more popular than medication for the treatment of depression. However, enthusiasm for therapies among the public at large is not universal, and research evidence suggests that about 30 per cent of patients do not want therapy or do not complete a course of it. Interestingly, this percentage is very similar to the rates of refusal or dropout from treatment with antidepressant medications. A barrier to the use of all therapies is that not everyone who is depressed wishes to engage in a talking treatment; moreover, a desire to receive therapy does not guarantee that an individual will have a good outcome from this approach.
A further barrier to increasing access to therapies is the fact that some respected scientists and many scientific journals remain ambivalent about the empirical evidence for the benefits of psychological therapies. Part of the reticence appears to result from the lack of very large-scale clinical trials of therapies (compared to international, multi-centre studies of medication). However, a problem for therapy research is that there is no large-scale funding from big business for therapy trials, in contrast to Pharma funding for medication studies. Until funding is available to undertake long-term, multinational, multi-centre studies of therapies there will continue to be a delay in the accumulation of robust evidence about how best to employ therapies in clinical practice.
It is hard to implement optimum levels of quality control in research studies of therapies. A tablet can have the same ingredients and be prescribed in almost exactly the same way in different treatment centres and different countries. If a patient does not respond to this treatment, the first thing we can do is check whether they received the right medication in the correct dose for a sufficient period of time. This is much more difficult to achieve with psychotherapy, and it fuels concerns about how therapy is delivered and about potential biases related to researcher allegiance (i.e. clinical centres that invented a therapy tend to report better outcomes than those that did not) and generalizability (our ability to replicate the therapy model exactly in a different place with different therapists).
Some of the critiques of the evidence-base for therapies are far-fetched, but it is certainly true that at times the benefits, acceptability, and ease of delivering high-quality therapy have been overstated. It is also clear that there has been a lack of attention to side effects or adverse effects of therapies, and recent surveys, such as those carried out by Glynis Parry and colleagues in the United Kingdom, suggest that up to one in ten individuals report negative reactions to therapy.
Overall, the ease of prescribing a tablet, the more traditional evidence-base for the benefits of medication, and the lack of availability of trained therapists in some regions means that therapy still plays second fiddle to medications in the majority of treatment guidelines for depression.
The mainstay of treatments offered to individuals with depression has changed little in the last thirty to forty years. Antidepressants are the first-line intervention recommended in most clinical guidelines, although it is increasingly recognized that brief therapies are an important option. Perhaps the most noticeable change in recent years is the shift away from the ‘doctor knows best’ approach towards a recognition that individuals have the right to express their treatment preferences and be involved in a process of shared decision-making. This shift is allied to the increased emphasis on personalized medicine, and the need to modify treatments to make them more relevant to each individual patient. As such, these issues will be briefly discussed.
Much of the treatment research in the 20th century focused on finding antidepressants that overcame the symptoms of an acute illness episode. It was stated that it takes about two weeks for medication to start to work, six weeks for people to feel significantly better, and that the medications should be continued for at least three to six months in an effort to minimize the risk of a relapse. This approach exposed three issues. First, individuals were not always good at sticking with a medication regime, and not everyone completed a course of treatment. Second, medication only works for as long as an individual takes it; once it was stopped, the risk of relapse rose significantly. Third, depression is a highly recurrent disorder and treating the acute episode is really only part of the equation, so treatment approaches needed to incorporate strategies to avoid further episodes as well. Clearly these problems needed to be addressed at a system level and at an individual level.
The increasing recognition that depression is a life-course illness has led to several attempts to copy the systematic health services employed for chronic physical illnesses such as diabetes or hypertension. These chronic disease management models involve several key elements that are useful in helping people with depression: an emphasis on long-term outcomes, not just acute episodes; a greater expectation that primary care or community health services will provide a ‘call and recall’ system, so that the service is proactive in supporting and monitoring the person’s progress and any barriers to treatment (rather than leaving everything up to the patient); clearer treatment pathways (including how to decide to move to the next step in the treatment process); and shared care guidelines providing transparency about which individuals should be referred to specialist services and which can best be helped by primary care or other services.
Such systems of health care and treatment for depression have been implemented with varying degrees of success in different countries. The main benefits have been to help clinicians and individuals with depression to take a longer-term view of the problem and to offer a better method for deciding which treatments to use for different individuals. The downside is that the system is still not sufficiently sensitive to individual preferences and the personal differences that may critically influence treatment outcomes.
What makes one person with depression adhere to taking antidepressants for months or even years and another stop the medication after a few days? For years, the received wisdom was that it was the side effects of medication. However, although the newer antidepressants have different side effects from the earlier medications, the actual percentage of individuals who complete a course of treatment has remained the same for around fifty years (at about 60 per cent). Furthermore, studies suggest that about 5 per cent of non-adherers never had the prescription dispensed by a pharmacist (so they clearly could not have experienced any side effects). An alternative explanation of non-adherence with antidepressants was that individuals with severe depression perhaps lacked ‘insight’ into the need for treatment, and so, it was argued, the illness impeded their awareness of what would help them and reduced their ability to adhere. However, this 60 per cent adherence rate is very similar to that reported for people with chronic physical illnesses, who do not have any loss of insight. Lastly, it was argued that these individuals did not want medication and wanted the option of therapy; but, as already noted, the refusal or dropout rate from therapies is similar to that reported for medication. So, the most plausible conclusion is that individual differences, rather than some shared ‘group experience’ or herd instinct, explain what is seen in the real world.
One of the best ways to understand the phenomena described above is to explore health belief models. In their simplest form, these models explore how people understand and react to illness and what they think about treatments. Although the content of an individual’s health beliefs may reflect their culture and background, there are five recurring themes that allow some predictions to be made about how a person will engage with different treatment approaches. The key questions that people think about in regard to their illness experiences are: What is the problem? What caused it? How long will it last, and will it come back? What consequences will it have for my life? And can it be treated or controlled?
To give a simple example, someone may believe that the problem they have is depression; that it is caused by a chemical imbalance in the brain; that it can be cured by taking a medication to correct that problem; and they may be worried that it may recur with negative effects on their social and work life. It is highly likely that this person will accept a prescription for an antidepressant and adhere to treatment for quite an extended period of time.
Someone who is not convinced that the problem is depression, who views their current state as indicative of personality weakness and believes that ‘pulling themselves together’ will resolve their problems once and for all, may be ambivalent about any sort of treatment. Alternatively, someone may agree that they have depression but may emphasize the role played by childhood trauma in undermining their self-esteem and describe that they know that they are very sensitive to feeling down in response to relationship stress. This individual may want help with their depression, but might decline medication (or question its utility), preferring instead to attend therapy.
The examples given are somewhat black and white, but they highlight that it is not just what treatment has been shown to work in a clinical treatment trial, but what treatment makes sense to any individual at the time they seek help. It goes without saying that the clinician has to strive to collaborate with a patient and take on board their perspective so that both parties can develop a shared understanding of the problem and make joint decisions about the course of action. This often requires a willingness by clinicians to modify their consulting style, and of course some find this more difficult than others. Interestingly, this philosophy is not as new as some individuals suggest; as long ago as 1878 a physician called William Osler reportedly said that ‘The good clinician treats the disease; the great clinician treats the person.’