As I mentioned in Chapter 1, MBSR and its many offspring (with MBCT being perhaps the most successful) were originally started as clinical programs. MBSR was designed at what was then called the Stress Reduction Clinic at the University of Massachusetts Medical Center; MBCT was explicitly developed as a program to prevent relapse in previously depressed individuals—mindfulness and meditation as a form of therapy. The clinical work in this area has focused mostly on people suffering from depression (or the recently depressed who might be likely to relapse into depression) or various forms of anxiety disorders, as well as people suffering from chronic pain (as a primary ailment or as a secondary aspect of a medical condition, e.g., during cancer treatment) or chronic stress.
In the previous chapter, I discussed the primary methodology of clinical research: Randomized controlled clinical trials are the gold standard. How do studies on the therapeutic effects of mindfulness measure up against that standard?
Not so well, it turns out. Many of the published meta-analyses—and there are a few1—document the process by which studies were selected. This includes detailing the reason why particular studies, once retrieved from the library, were ultimately excluded from the analysis. Arguably the most stringent meta-analysis in the field is the one by Madhav Goyal and colleagues2: It focuses only on randomized controlled trials with active controls. (Goyal et al.’s focus was on meditation in general, so the paper included not only mindfulness training but also mantra practice.) The researchers initially retrieved an incredible 18,753 papers; they ended up with no more than 47 in the final analysis—discarding about 99.75% of the original set. There were a number of reasons for rejection. Some were trivial: Studies that simply did not fit, like work focusing on children or adolescents, research done on healthy volunteers, or studies that did not contain the kind of clinical measures Goyal et al. were interested in. Noteworthy was the large number of papers that did not contain original data, that is, review papers or position papers—a full 45%.3 Of the papers that contained original research, 20% did not include a control group and 26% did not randomize participants. Thus about half of the empirical research falls well short of the accepted standard in the field of clinical research.
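For readers who like to see the arithmetic spelled out, here is a minimal sketch (in Python, using only the round numbers quoted above); the exact accounting is of course Goyal et al.’s, not mine.

```python
# Back-of-the-envelope arithmetic for the selection funnel described above.
retrieved = 18_753   # papers initially retrieved
included = 47        # papers surviving the final selection

print(f"discarded: {1 - included / retrieved:.2%}")   # -> 99.75%

# Of the papers that did contain original data:
no_control = 0.20        # no control group at all
no_randomization = 0.26  # controlled, but participants not randomized

print(f"short of the RCT standard: {no_control + no_randomization:.0%}")  # -> 46%, i.e., "about half"
```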
Let’s delve into a few of those meta-analyses, starting with the most comprehensive analysis, the one by Goyal and colleagues. Goyal et al. examined changes in mental health in clinical populations (specifically, changes in depression, anxiety, stress and distress, mood and affect, substance use, eating and sleeping patterns, pain, and body weight). “Clinical population” was defined broadly here: About a third of the studies targeted individuals suffering from psychological distress (viz., depression, anxiety, stress, or substance abuse), while the rest targeted individuals with medical problems that often lead to psychological distress (viz., asthma, breast cancer, cardiovascular disease, chronic pain, diabetes, organ transplant, or tinnitus; typically each study tackled only one problem). As mentioned, Goyal et al. looked at interventions using mindfulness meditation, mantra meditation, or both—my interest here, given the topic of this book, is in mindfulness interventions, so I report results on this type of meditation only.4 Only randomized controlled trials with active controls were included, yielding a small set of studies: 38 articles that focused on mindfulness interventions. The average sample size was 74 people per study for a total of 2,895 participants. Goyal et al. analyzed the data separately for nonspecific active control and specific active control. Recall from the previous chapter that “nonspecific active control” refers to a control condition that is not a known form of therapy (an example would be an educational intervention); “specific active control” is a known form of therapy (e.g., relaxation therapy or psychotherapy). Comparing the effects of mindfulness to the effects of nonspecific control conditions tells us whether mindfulness has an effect over and beyond placebo. Comparing the effects of mindfulness to the effects of specific control conditions tells us whether mindfulness has an effect that is larger (or smaller) than the effect of treatment-as-usual.
The first set of results concerns the comparison of mindfulness programs with nonspecific controls. In general (but not always), the effects of mindfulness training exceed those of placebo treatment: For anxiety, the effect size was 0.38 SD (7 studies); for depression, 0.30 SD (8 studies); for stress, 0.04 SD (7 studies, this effect was not significantly different from zero); for negative emotions, 0.33 SD (11 studies); for measures of positive emotions (e.g., well-being, positive mood), 0.28 SD (4 studies, not significant); for measures of quality of life as it related to health, 0.28 SD (3 studies, the effect was not significant); for measures of sleep quantity or quality, 0.14 SD (4 studies, not significant); and for pain, 0.33 SD (4 studies). Thus mindfulness clearly has a real effect in clinical populations, over and beyond its possible effect as a placebo, on quite a few outcomes that do matter—depression, anxiety, negative emotions, and pain. There are also several important outcomes that do not show significant effects: sleep, stress, mood, and quality of life. The effect sizes are generally a little smaller than the effect sizes we observed in nonclinical studies in the previous chapter (around 0.30 SD, implying that mindfulness-trained people do better than 62% of nontreated individuals). Particularly nice to note is that the effect size of mindfulness on depression (0.30 SD) is larger than the typical effect of drug treatment: The effect of antidepressants for depressed patients with mild to moderate symptoms compared to nonspecific treatment is 0.11 SD; for those with severe depression, it is 0.17 SD.5 For anxiety, however, both drug treatment (placebo-controlled effect size of 0.80 SD for panic disorder and 0.90 SD for generalized anxiety disorder) and standard psychological treatment (placebo-controlled effect size of 0.73 SD after cognitive behavioral therapy) seriously outperform mindfulness treatments (0.38 SD).
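Where does a figure like “62%” come from? It follows from the standard normal model behind effect sizes (the quantity is sometimes called Cohen’s U3). Here is a minimal sketch, assuming normally distributed outcomes, of how an effect size translates into the share of untreated people whom the average treated person outperforms:

```python
# Convert an effect size (in SD units) into the share of the untreated group
# that the average treated person outperforms, assuming normal distributions.
from statistics import NormalDist

def share_outperformed(effect_size_sd: float) -> float:
    return NormalDist().cdf(effect_size_sd)

print(f"{share_outperformed(0.30):.0%}")  # -> 62% (the typical effect above)
print(f"{share_outperformed(0.38):.0%}")  # -> 65% (the anxiety effect vs. nonspecific control)
```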
How does mindfulness compare to specific active control treatments? For anxiety, the effect size was –0.07 SD (10 studies; a negative effect size means that the control group performed better, but do remember that the effect is not significant, meaning that it is for statistical purposes equal to zero); for depression, 0.11 SD (11 studies); for measures of stress, 0.03 SD (6 studies); for positive affect, –0.04 SD (4 studies); for measures of quality of life, 0.05 SD (5 studies); for sleep, –0.14 SD (2 studies); and for pain, –0.06 SD (4 studies). The only significant effect (and it was a small effect) was for depression.
This result means that, for almost all mental health outcomes (depression is the sole exception), mindfulness does not work any better (or any worse) than treatment as usual. This is bad news for those who were hoping that mindfulness would be a therapeutic magic bullet, curing a wide variety of psychological ailments with much more ease than standard treatments do. It does not.
The result does mean that mindfulness training is a viable alternative to traditional therapies—it is just as effective as treatment-as-usual. This is an important finding because some of the treatments-as-usual can have unpleasant side effects. For instance, the most commonly prescribed class of antidepressants (selective serotonin reuptake inhibitors such as Zoloft, Paxil, or Prozac, sometimes also prescribed for anxiety disorders) can lead to nausea, restlessness, dizziness, reduced sexual desire, difficulty reaching orgasm, insomnia, and/or weight gain or weight loss. Xanax, commonly prescribed for anxiety disorders, has side effects such as drowsiness, dizziness, insomnia, memory problems, poor balance or coordination, slurred speech, and/or loss of concentration.6 Goyal et al. are keen to point out (and other meta-analyses of clinical trials confirm this) that not a single instance of negative side effects has been noted in any of these mindfulness studies.7 So mindfulness is a more than acceptable alternative for those who do not tolerate these drugs very well and/or for those who prefer a more contemplative approach over more traditional therapeutic tactics. This observation works both ways, of course: There is no reason whatsoever to prefer mindfulness programs over other forms of treatment. Thus people who find themselves unhappy in a mindfulness program should also feel free to step out and try something else. (Meditating just because you’ve heard it is good for you, without enjoying it, does not seem like a very sustainable practice anyway.)
Mindfulness can also be used as a complementary intervention, that is, as an intervention that is added to treatment-as-usual. In a review paper on this topic, Chiesa and Serretti8 note that the evidence is scarce but somewhat encouraging. In the four studies that compared the combination of MBCT and treatment-as-usual with treatment-as-usual on its own, relapse rates for depression were half as large for the combined treatment compared to treatment-as-usual on its own. In the two studies that looked at the severity of depressive symptoms, however, combined treatment was not superior to treatment-as-usual.
Finally, a few meta-analyses (each looking at a small number of studies) have focused more specifically on mindfulness as a treatment for the negative psychological effects of medical problems. One of these9 looked at the effects of MBSR in cancer patients (most of these studies concerned women with Stage II breast cancer). Across three randomized controlled trials, the mean effect on mental health (anxiety, depression, and stress) was 0.35 SD; for physical health (immunity levels, dietary fat, hormonal indices, as well as self-reported health), it was 0.17 SD, which was not statistically significant.
Another meta-analysis10 focused on chronic pain patients (including fibromyalgia and rheumatoid arthritis) and included both MBSR and ACT interventions (with MBSR comprising about 80% of all studies). Averaging across all controlled studies included in the analysis, the interventions had statistically significant effects on pain (0.37 SD; 10 studies), depression (0.32 SD; 9 studies), anxiety (0.40 SD; 5 studies), physical well-being (0.35 SD; 6 studies), and quality of life (0.41 SD; 6 studies).
A third analysis11 examined the effects of mindfulness interventions on psychosis. Psychosis is a mental condition in which a person loses touch with reality. Symptoms include delusions (false beliefs that are held with strong conviction, even in the face of evidence to the contrary) and hallucinations (sensations that seem very real but have no basis in reality; typically hearing voices or seeing things that aren’t there); often the symptoms also include disorganized speech and disorganized behavior or a state of immobility. The meta-analysis found evidence for effects on the so-called negative symptoms of psychosis (i.e., the disruptions to normal emotions and behaviors, things like flat affect, lack of pleasure, difficulty interacting with other people; 0.56 SD; three studies). Mindfulness-trained participants were also less likely to be rehospitalized (0.60 SD; two studies). There were, however, no effects on so-called positive symptoms (i.e., hallucinations, delusions, or thought disorders; 0.19 SD, not significant; four studies).
All of these meta-analyses show promising effects, but it bears repeating that they are based on a very small number of studies.
The Goyal et al. meta-analysis examined mental health outcomes regardless of the underlying mental or physical health issue. The outcomes measured in such an analysis are not necessarily the targets of the intervention. For instance, MBCT is a treatment that was explicitly designed to keep formerly depressed individuals from relapsing. Its target is thus relapse prevention; the most honest way to see if MBCT works would then be to look at relapse rates. It would be nice if other effects were present as well—say, a reduction in anxiety or an increase in well-being—but this would not be critical in assessing its success. Goyal et al.’s meta-analysis does not make that distinction; the risk is that the (hopefully large) effect on target measures is confounded with the (possibly smaller) effect on nontarget measures and thus potentially diluted.
Piet and Hougaard12 found that the relapse rate after MBCT training in six randomized active controlled trials was 38% in MBCT participants versus 58% for control participants—a reduction in the risk of relapse by 34%. Importantly, MBCT was even a little more effective for participants who were at a higher risk for relapse: For participants with three or more prior episodes of depression, risk of relapse was reduced by 43% (36% for MBCT participants vs. 63% for controls). Moreover, MBCT was at least as effective as a maintenance dose of antidepressant medication in the two studies that examined this question. Clearly, MBCT does not reduce the relapse rate to zero—that would be a bit much to ask for anyway—but it performs better than standard treatment.
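In case the step from the raw relapse rates to the quoted percentages is not obvious: these are relative risk reductions. A quick sketch of the arithmetic, using the numbers above:

```python
# Relative risk reduction: by what fraction does MBCT lower the relapse rate
# compared to the control condition?
def relative_risk_reduction(treated_rate: float, control_rate: float) -> float:
    return 1 - treated_rate / control_rate

print(f"{relative_risk_reduction(0.38, 0.58):.0%}")  # all participants -> 34%
print(f"{relative_risk_reduction(0.36, 0.63):.0%}")  # three or more prior episodes -> 43%
```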
Khoury and colleagues13 provide additional meta-analytic data that show that effects are larger on target measures. They did not restrict the analysis to controlled studies but focused on progress from before to after a mindfulness intervention. They found an effect size of 0.57 SD across all measures (72 studies) but somewhat larger gains for more targeted interventions: 0.66 SD for depression in studies targeting depression (6 studies) and 0.72 SD for anxiety in studies targeting anxiety (10 studies). For studies with passive control, the average after versus before effect was 0.53 SD (67 studies). Effects were identical for depression in studies targeting depression (eight studies; effect size = 0.53 SD), but were 1.00 SD for anxiety in studies targeting anxiety (four studies). The evidence is not spectacular, but there is at least a hint that people make the most progress on the type of outcome that is the explicit aim of the specific intervention.
Finally, Hofmann and colleagues14 gathered studies on mental health; like Khoury et al., they focused on before to after progress. They found that mindfulness interventions were best at reducing anxiety in individuals with anxiety disorders (effect size = 0.97 SD; seven studies) and less effective in reducing anxiety in cancer patients (0.64 SD; eight studies) or people suffering from pain (0.44 SD; five studies); there was only a small, statistically not significant reduction in anxiety in depressed individuals (0.12 SD; one study). Mindfulness interventions worked well at reducing depression in depressed individuals (effect size = 0.95 SD; four studies) and did less well at reducing depression in anxious individuals (0.75 SD; six studies), individuals suffering from chronic pain (0.51 SD; six studies), or cancer patients (0.45 SD; seven studies). Thus progress is largest in the area of life where people are suffering most.
In sum, there is reason to be cautiously optimistic: Mindfulness-based therapy seems to have targeted effects. That is, mindfulness therapy aimed at reducing relapse reduces relapse (and more so than treatment-as-usual), people generally make the most progress on the type of mental health outcome that is the explicit target of the specific intervention, and mindfulness training is most effective at lightening the burden where it is felt the most.
How long does a person need to be in mindfulness training for it to be effective?
In their meta-analysis, Khoury and colleagues found a small but significant effect of the duration of treatment (i.e., the number of contact hours between therapist and clients) on measures of psychological distress. For every hour increase in duration, the effect size went up by 0.01 SD. The MBSR standard of practice15 is 34 contact hours; increasing that number to 44 would then increase the effect size by 0.10 SD. This suggests that, at least for MBSR and MBCT-style programs, some of the effective work is done in class. We don’t know why—there is no research on this—but part of the effect may be due to “modeling,” that is, teaching by example (from the point of view of the therapist) and learning by imitation (from the point of view of the participant). An indication that this may be the case comes from studies where the therapist herself had received mindfulness training—these studies had, on average, an effect that was 0.13 SD larger than studies where the therapist wasn’t trained in mindfulness.16
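The extrapolation from 34 to 44 contact hours above is nothing more than a linear projection of Khoury et al.’s slope, and should be read as such; a minimal sketch of that back-of-the-envelope calculation:

```python
# Linear dose-response projection: each extra contact hour is assumed to add
# about 0.01 SD to the effect size (the slope reported by Khoury et al.).
SD_PER_CONTACT_HOUR = 0.01

def projected_gain(hours_from: float, hours_to: float) -> float:
    return SD_PER_CONTACT_HOUR * (hours_to - hours_from)

print(f"{projected_gain(34, 44):.2f} SD")  # extending 34 h to 44 h -> 0.10 SD
```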
One other meta-analysis17 looked at the effect of the number of in-class contact hours within MBSR (30 studies); no effect was found. (The authors also, incidentally, found that the number of contact hours in these studies was much lower than the standard of 34: It varied between 6 and 28, with a mean of 19.) One possible explanation for the discrepancy is that the Khoury et al. meta-analysis may have had a wider range of durations (the paper doesn’t tell us), implying that durations longer than 28 hours are critical and/or that short programs are particularly ineffective. Another possibility is that some types of treatments may take longer to kick in than others, and if the longer treatment is more effective simply because it is a better treatment, this will show up in the Khoury et al. results as an effect of duration. I hate to use the cliché yet again, but—yes—more research is necessary.
Interestingly, in the Khoury et al. meta-analysis, the number of hours practiced at home did not predict treatment success. This study, however, used the number of hours prescribed by the program for the analysis—no doubt due to the absence of actual data on the amount of homework practice. People being people, it is unlikely that any participant followed their mindfulness coach’s prescription to the letter.
There is one review paper, by Lisa Christine Vettese and colleagues,18 that directly addresses this question. They examined dosage by looking at the effect of home practice as reported by the participants themselves. Another difference is that they looked at this within individual studies. The Khoury et al. and Carmody and Baer meta-analyses looked at practice across studies—does Study A, with a shorter duration, yield a smaller effect than Study B, with a longer duration? That is a different question from whether people who are enrolled in Study A (or Study B) and practice diligently do better than the slackers in Study A (or Study B). I would argue that the latter analysis makes more sense from a scientific standpoint because it keeps all other aspects constant. (It is likely that there are other differences between Study A and Study B than just duration.)
One interesting conclusion from Vettese et al.’s review was that very few studies have examined this question: Vettese et al. collected 98 applicable studies, but only 24 of those looked at the relationship between how well participants practiced at home and how that helped alleviate their psychological distress. Of those 24, 15 allowed for the calculation of amount of practice in the form of minutes per day—on average, participants practiced 28 minutes per day during the program, ranging from 5 minutes per day in one study to 58 minutes per day in another. (These numbers are testimony to the determination of the participants: half an hour of meditation and/or other mindfulness practices per day seems very respectable to me.) Five studies also reported practice at follow-up (with follow-up times ranging from two months to four years), which varied from 5 minutes per day (practiced less than once per week) to 18.7 minutes per day (practiced every day). Of the initial 24 studies, 13 (just over half) showed at least moderate support for a relationship between the amount of home practice and psychological outcome. Vettese et al. did not crunch the numbers, but I calculated an average correlation over all 24 studies of r = .15.19 This correlation is very close to the corresponding correlation in nonclinical samples we saw in the previous chapter (r = .18 for trait mindfulness and .17 for stress)—a small but positive association suggesting that how much you practice has a modest influence on the effect you can expect to score.
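For readers who wonder how an average correlation over a set of studies can be computed, the sketch below shows one common textbook approach (transform each r to Fisher’s z, take a sample-size-weighted mean, transform back); it is not necessarily the way I crunched these particular numbers, and the r and n values in it are invented purely for illustration.

```python
# One common way to average correlations across studies: Fisher's z transform,
# weighted by sample size. The (r, n) pairs below are hypothetical.
import math

studies = [(0.25, 40), (0.10, 60), (0.05, 90), (0.22, 35)]  # (correlation, sample size)

def mean_correlation(studies):
    weighted = [(math.atanh(r), n - 3) for r, n in studies]  # variance of z is 1/(n - 3)
    z_bar = sum(z * w for z, w in weighted) / sum(w for _, w in weighted)
    return math.tanh(z_bar)                                  # back-transform to r

print(f"r = {mean_correlation(studies):.2f}")
```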
Mindfulness training thus appears to be an effective form of therapy for mental health issues—it works better than placebo, its effectiveness is on par with that of standard treatment, and it works better for preventing relapse after depression than treatment-as-usual does. It also has positive effects on mental health for people afflicted with a diverse array of medical conditions.
In the previous chapter, we saw that many mechanisms have been proposed for effects on well-being, but only trait mindfulness has gained some empirical traction in nonclinical samples. It is probably fair to say that there has been more attention to the “why” question in clinical research. In their review paper, Jenny Gu and colleagues identified 20 studies on mechanisms; Anne Maj van der Velden and colleagues collected 23.20 These studies, too (and for present purposes, alas), cover a wide variety of mechanisms: Only two mechanisms have been investigated in more than three studies (if we take three studies as the absolute minimum needed to conduct a meta-analysis).
The first of these oft-researched mechanisms is our old friend trait mindfulness—the self-reported ability to be present in and open to the present experience. Gu et al. collected 12 studies that looked at mindfulness and allowed for the computation of correlations. Seven of these examined levels of depression as the main psychological outcome; two focused on stress, two more on anxiety, and one on negative affect. In these studies, the effect of the mindfulness intervention on trait mindfulness was, on average, 0.72 SD; changes in mindfulness were correlated with changes in psychological outcome (r = .36). On average, changes in mindfulness after the intervention explained 61% of the changes in the mental health outcome. Another meta-analysis21 found this effect also between studies; that is, studies that resulted in stronger effects on trait mindfulness also showed more positive effects on mental health. Technically, we cannot claim that changes in mindfulness lead to changes in psychological outcomes—these are correlations, not a cascade of changes over time. I cannot think of a good reason, however, to assume that the cascade we saw in nonclinical studies (mindfulness practice leads to higher levels of trait mindfulness, which leads to heightened well-being) wouldn’t also be present in these clinical groups.
The second mechanism is what Gu et al. label “repetitive negative thinking”—this includes both rumination and worry, which fall under the self-regulation category in Vago and Silbersweig’s categorization of effects (as we saw in the previous chapter). Recall that rumination and worry tend to be particularly active in people who struggle with depression. Gu et al. found that the stilling of repetitive negative thinking explains a good amount of the changes in psychological outcome after a mindfulness intervention. They compiled six studies: three on depression and one each on stress, anxiety, and global symptoms of psychopathology. The effect of the mindfulness intervention on repetitive negative thinking in these studies was 0.65 SD; decreases in repetitive negative thinking were correlated with positive psychological outcomes (r = .33). On average, changes in repetitive negative thinking after the intervention explained 44% of the changes in the psychological outcome. If we assume a cascade (mindfulness practice causes you to ruminate less, which eases your symptoms), this means that a good portion of the effectiveness of mindfulness interventions is due to lowered levels of worry or rumination. Note that these reductions in repetitive negative thinking are important in their own right: For many people who have them, such persistent worries and seemingly unstoppable negative thought spirals are in themselves rather distressing. Finding some relief from their relentless onslaught is a welcome change.
Trait mindfulness and the ability to regulate thought and emotion (or at least not to lose oneself in rumination and worry) are useful skills to have, techniques that you can apply throughout life. Skill learning is wonderful, because you might hope that once the tool is in the toolbox, it is always at hand, helping you fight off further mental onslaughts. From this point of view, mindfulness training should be expected to have long-term effects.
There are two meta-analyses on long-term(ish) effects of mindfulness training on mental health. The first, by Stefan Hofmann and colleagues,22 gathered 19 studies and looked at outcomes at the final follow-up compared to before the intervention. The mean follow-up length was 27 weeks after the end of training. The effect for anxiety (17 studies) was 0.60 SD (compared to 0.83 SD right after training); for depression (18 studies), the effect size was also 0.60 SD (compared to 0.50 SD right after training), suggesting, first, that effects are still measurable half a year after the training program and, second, that the effects do not differ much from those right at the end of training—they go down for anxiety (by 0.23 SD) and up a bit for depression (by 0.10 SD).
The second meta-analysis, by Bassam Khoury and colleagues23 (the group of studies partially overlaps with Hofmann et al.’s), had an average follow-up length of 29 weeks. The effect size for the difference between the last follow-up and pretest (24 studies) was 0.57 SD (compared to 0.55 SD right after training). Mindfulness programs were very effective for the treatment of anxiety (0.91 SD, six studies, compared to 0.89 SD right after the intervention) and depression (0.75 SD, two studies, compared to 0.69 SD right after treatment). The conclusion is that, as with the Hofmann et al. analysis, mindfulness training leads to long-term effects that do not differ (in this case, hardly at all) from the effects scored right after the training.
Khoury et al. also looked at comparisons between control treatment and mindfulness interventions at follow-up. This answers the question of whether or not mindfulness programs are more (or less) effective than standard forms of treatment: At follow-up, mindfulness treatment (17 studies) showed an effect size of 0.43 SD compared with passive control (effect size = 0.44 SD right after the training); the effect size was 0.24 SD compared with active control (30 studies; effect size = 0.34 SD right after the training). Seventeen studies compared mindfulness training with other psychological treatments. Mindfulness did better than supportive therapy24 (effect size = 0.34 SD, three studies), but its effects were not different from those of relaxation (five studies), psycho-education (three studies), and traditional cognitive or behavioral therapy (six studies). The conclusion here is that, as was the case with effects measured right after the treatment, the effects of mindfulness practice are similar to the effects of other kinds of therapy; the exception is that mindfulness does better than supportive therapy.
Bottom line: The effects of mindfulness training in clinical populations are quite stable over time, at least for half a year or so.
You might wonder if meditation and mindfulness interventions are safe.
The website of the National Center for Complementary and Integrative Health (NCCIH, a subdivision of the National Institutes of Health)25 states, perhaps a bit mysteriously: “Meditation is considered to be safe for healthy people. There have been rare reports that meditation could cause or worsen symptoms in people who have certain psychiatric problems, but this question has not been fully researched.”
While the last part of the final sentence is undeniably true—the question of negative side effects hasn’t been fully researched—the first part of that sentence may be a bit puzzling. As we saw earlier in this chapter, the large-scale meta-analysis by Goyal and colleagues—which concentrated on clinical samples—uncovered not a single case of harm.26 Their database included 41 clinical trials; nine of these explicitly reported on the question of harm. One of these nine looked specifically for toxicities to hematologic, renal, and liver markers and found none; seven others explicitly reported that they found no harmful events; one did not comment on harm. Thus the NCCIH statement of caution may be a bit overcautious—so far, no clinical trial has reported that new symptoms were caused by, or existing symptoms worsened as a consequence of, mindfulness training. Your local MBSR or MBCT program is likely beneficial and unlikely to cause you harm. (Nothing can ever be fully guaranteed, of course.)
The NCCIH statement, however, covers more than mindfulness training—it is a statement about meditation in general. There are indeed a few papers that suggest that meditation can lead to negative side effects.27 These include case studies of increased negativity (increased anxiety, unnecessary critical judgment), disorientation (being confused about who you are, “loss of self,” spacing out), addiction to meditation (yes, it can happen), interpersonal problems (such as unwarranted feelings of superiority, increased discomfort with friends), mania (an unexplained period of great excitement and euphoria), and psychotic episodes (losing touch with reality). This has led some psychiatrists28 to advocate for careful screening, without, alas, giving clear criteria as to what to screen for and how.
The most comprehensive study on this topic is an ongoing project by Willoughby Britton and colleagues. They conducted interviews with nearly 40 people who were expressly recruited because they had experienced adverse effects of meditation (mostly done in a religious [Buddhist] context rather than as part of a therapeutic endeavor); many of them experienced impairments in daily functioning lasting from six months to more than 20 years. The results from this study haven’t been published yet, but an article in The Atlantic,29 a fascinating read, previews some of the findings, many of them centering on disorientation, often occurring during periods of intense practice such as retreats.
It isn’t clear what proportion of meditators are at risk or what the risk factors are. Many of those affected, it seems, are younger adults, in the age range where psychotic breakdowns occur more frequently than in any other age group, so the question is whether meditation is actually responsible for the episodes. The lack of strong data makes it hard to predict who is vulnerable or if there is a specific moment in meditation training when a person is most vulnerable. If you teach meditation or mindfulness, it would certainly be a good idea to keep an eye out for possible struggles your participants might be having; if you meditate and find yourself losing touch with yourself or with reality, you might want to discuss this with your teacher or your family doctor. A complicating factor is that some Buddhist traditions consider some of these adverse effects—especially disorientation—signs of progress. Thus there is some discussion about whether these adverse effects are really adverse and whether or not they should be treated.30 Another complicating factor is that, according to one diary study31 of a small group of MBSR participants, everyone is likely to struggle with distress related to the practice at some point or other during training. These temporary setbacks seem to resolve themselves as participants acquire a more observing, less reactive self.
Again, the results from clinical trials strongly suggest that these problems are unlikely to be widespread; so far, they are absent from the clinical applications of mindfulness.
The main conclusion from the research on clinical applications of mindfulness is that meditation can indeed be used as a form of medication—it has measurable effects on depression (both symptom severity and the risk of relapse), anxiety, mood, pain, and psychological problems that occur as reactions to medical issues, and it is generally safe. It is, however, clearly not a magic bullet or a “Buddha pill,” as one recent book calls it.32 A fair conclusion would be to say that it works, but not spectacularly so.
First, the effects of mindfulness programs aren’t of very large magnitude: around 0.30 SD in comparisons with nonspecific controls, that is, after taking out the placebo effect. These effects are largely on par with the effects found on well-being in nonclinical groups. Note that the effects tend to be a bit larger for targeted interventions and that mindfulness has its largest effect on the aspect of life that the particular patient is struggling with most. For depression, mindfulness appears to work better than drug treatment; for anxiety, it does not.
Second, the effect decreases to zero when compared with specific active controls, implying that mindfulness programs do just as well as other known and respected treatments for depression, anxiety, and the like. On the plus side, this makes meditation a viable alternative to other therapies. Its usefulness is increased by the finding that—in clinical contexts—there seem to be no known adverse effects. Mindfulness can also be helpful for medical conditions—the psychological side effects of cancer, chronic pain, and some aspects of psychosis.
As far as we know now, the effects are due to two factors: an increase in trait mindfulness and a decrease in rumination and worry. There is some evidence for a dose–response relationship (more practice, better results), and the gains scored right after training are largely maintained over a period of half a year after training.
Many questions remain. First, we still know very little about the possible cascade of effects. Studies that look at multiple outcomes on a day-by-day basis and keep track of actual time practiced and its relationship to specific outcomes would be very helpful here.
Second, the stability of training effects over a half-year period is something that deserves more scrutiny. In the previous chapter, we found that at least some effects of mindfulness practice were limited in time—practicing leads to greater mindfulness, which leads to greater well-being over the course of, at most, a few days. Vettese et al.’s review suggests that once the actual training program is over, time spent in practice drops off, and so there must be something else at play to explain the maintenance of practice effects. The likely ingredient is trait mindfulness—mindfulness as a skill that can and will be deployed whenever a threat arises, kicking in, for instance, when you find yourself ruminating or thinking anxious thoughts and thus stopping problems close to the root. It would be worthwhile to investigate this claim, maybe using a time-sampling approach (e.g., ping former participants on their phones a few times a day and record their experiences and their reactions to those experiences).
Third, we know very little about individual differences in effectiveness in these clinical programs. Is mindfulness training a good idea for everyone suffering from depression, anxiety, psychosis, or medical problems? If it is not, is there a way to find out before the patient begins an eight-week program or at least early in the program?