Chapter 3

Science in Practice

Kelly Koerner, PhD

Evidence-Based Practice Institute

Evidence-based practice (EBP) originated in medicine to prevent errors and to improve health care outcomes (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996). In psychology EBP is defined as “the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” (American Psychological Association Presidential Task Force on Evidence-Based Practice, 2006). In an evidence-based approach to decision making (Spring, 2007a, 2007b), the practitioner should:

  1. Ask important questions about the care of individuals, communities, or populations.
  2. Acquire the best available evidence regarding the question.
  3. Critically appraise the evidence for validity and applicability to the problem at hand.
  4. Apply the evidence by engaging in collaborative decision making regarding health with the affected individual(s) and/or group(s). (Appropriate decision making integrates the context, values, and preferences of the care recipient, as well as available resources, including professional expertise.)
  5. Assess the outcome and disseminate the results.

EBP seems to be a straightforward process: get the relevant evidence, discuss it with the client, and then carry out the best practice. Yet doing so requires overcoming two sets of significant challenges: (1) finding and appraising evidence relevant to many clinical decisions is difficult, and (2) clinical judgment is notoriously fallible.

Challenges with Using the Evidence Base to Inform Clinical Decisions

To adopt an evidence-based approach to treating a client’s specific problems, practitioners should prepare by reviewing the relevant research literature to identify the most effective assessment and treatment options, and they should continue to evaluate evidence claims as scientific knowledge accumulates and evolves. Yet doing so can be difficult or impossible.

Research evidence comes to us more easily than ever before: passively through the day-to-day use of social media or actively when we use a search engine for a specific client-related question. In both cases, however, it’s not the quality or merits of the research evidence that drive what we see. Regularly cited articles become ever more likely to be cited, creating an impression of greater quality and masking other evidence (the Matthew effect; see Merton, 1968). Search engines grant higher page positions based on algorithms unrelated to evidence quality.

Consequently, for a balanced evaluation of evidence, practitioners must increasingly rely on experts to distill scientific findings into rigorously curated, aggregated formats, such as practice guidelines, lists of empirically supported treatments, evidence-based procedure registries, and the like. Expert aggregations use an evidentiary hierarchy: meta-analyses and other systematic reviews of randomized controlled trials (RCTs) at the top; followed by individual RCTs; followed by weaker forms of evidence, such as nonrandomized trials, observational studies, case series reports, and qualitative research.

Not only is this fixed evidentiary hierarchy itself controversial (Tucker & Roth, 2006), but the existing literature also provides little evidence to guide the selection of conditional plans that have a high chance of success: If a client presents marker A, will intervention B predictably and consistently produce change C? For example, say a late-twenties professionally employed Latina woman seeks treatment for depression. Based on the evidence, behavioral activation could be a good choice (Collado, Calderón, MacPherson, & Lejuez, 2016; Kanter et al., 2015). However, if in addition to depression the client has common co-occurring problems such as insomnia or marital conflict, the guidance is either absent or confusing: some evidence guides the practitioner to treat insomnia and depression concurrently (Manber et al., 2008; Stepanski & Rybarczyk, 2006), while other evidence supports combining depression treatment and marital therapy to help with depression and marital satisfaction (Jacobson, Dobson, Fruzzetti, Schmaling, & Salusky, 1991). If additional common problems are added, such as problem drinking or child behavior problems in the home, the literature provides little or no guidance. Evidence to directly inform decision making for even common branches, such as those regarding sequencing versus combining treatments, is scarce.

In part, the lack of data to inform clinical decisions is an unavoidable consequence of research challenges. Science takes time. The study of psychopathology and psychotherapeutic change is complex. The practitioner’s need for nuanced evidence may always outstrip what is practically possible in even the most practice-focused research agenda. But in important ways, the lack of evidence to guide routine clinical decisions is due to more pernicious problems with the methods used to conduct psychotherapy research.

For historical reasons, the research methods used to study behavioral interventions borrowed heavily from methods and metaphors used to develop and test pharmaceuticals. In this predominant psychotherapy-as-technology stage model, stage I consists of basic science being translated into clinical applications. Pilot testing and feasibility trials begin on new and untested treatments, and treatment manuals, training programs, and adherence and competence measures are developed. In stage II, RCTs that emphasize internal validity evaluate the efficacy of promising treatments. In stage III, efficacious treatments are subjected to effectiveness trials and are evaluated with regard to their external validity and transportability to community settings (Rounsaville, Carroll, & Onken, 2001). Important updates have reinvigorated the stage model (Onken, Carroll, Shoham, Cuthbert, & Riddle, 2014), but methodological choices guided by the model have led to unintended consequences for the evidence base that interfere with its utility in guiding routine clinical decisions.

A core problem is that the independent variable to be studied and delivered in psychotherapy has come to be defined almost solely at the unit of the treatment manual, with the problem focus at the level of the psychiatric syndrome. The treatment manual codifies clinical procedures and their order into a protocol to be repeated in standardized fashion across therapists and clients by disorder. Manuals that specify protocols for treating depression, insomnia, problem drinking, couple distress, and parenting skills deficits, for example, could be relevant to the case example presented earlier, but each manualized protocol comprises many component strategies. Psychoeducation, self-monitoring, motivation enhancement, problem solving, activation assignments, values clarification, contingency management, shaping, self-management, and so on appear in nearly every manual. Most component strategies are not unique to a single manual but instead are common and duplicated across manuals. Specific protocols may vary in how they emphasize or coordinate these component elements (Chorpita & Daleiden, 2010)—the way procedures are chosen, repeated, or selectively applied, or their delivery format—even if the basic ingredients remain the same. Because researchers and therapists predominantly consider manuals as the unit of analysis, they ignore the fact that various manuals contain mostly the same ingredients. Each manual is treated as a distinct intervention with its own siloed research base (Chorpita, Daleiden, & Weisz, 2005; Rotheram-Borus, Swendeman, & Chorpita, 2012).

Strictly privileging manuals as the unit of intervention and analysis by disorder leads to unintended problems. Any change made to a manualized protocol could be a substantive departure. Even a modification made to better fit clients’ needs or setting constraints may wipe out the relevance of existing evidence. For the researcher, this “ever-expanding list of multi-component manuals designed to treat a dizzying array of topographically defined syndromes and sub-syndromes creates a factorial research problem that is scientifically impossible to mount…[and] makes it increasingly difficult to teach what is known or to focus on what is essential” (Hayes, Luoma, Bond, Masuda, & Lillis, 2006, p. 2). For the practitioner, the choice becomes to either follow manuals to a T regardless of setting or client presentations and preferences, or to accept responsibility for not knowing what outcomes can be expected when tailored treatment deviates from the manual.

Packaging knowledge and science at the unit of a “manual for a disorder” emphasizes differences among manuals even if there are overlapping common components. Researchers are incentivized for innovation, but as reimbursement becomes contingent on delivering evidence-based protocols, practitioners become incentivized to claim they are doing treatments with fidelity whether they are or not. Treatment developers then face pressure to develop quality control methods to protect client access to the bona fide version of the treatment, leading to protective steps, such as proprietary trademarking or therapist certification. Such steps then align the professional identities and allegiances of researchers and practitioners with particular branded protocols rather than with effective components linked to client need.

The rationale for rigid adherence to specific manuals is that the greater the therapist’s adherence and competence in delivering the standardized, validated protocol, the more likely it is that clients will receive the treatment’s active ingredients and thereby obtain the desired outcomes. If this assumption is true, then adherence and competence should be powerful predictors of outcome, and larger packages and protocols should in general show unique, theory-related curative ingredients.

The available research evidence only weakly supports this assumption. With some exceptions, researchers don’t consistently find correlations between adherence or competence and treatment outcome (Branson, Shafran, & Myles, 2015; Webb, DeRubeis, & Barber, 2010). And while there are many successful theory-consistent mediational studies, there are also many large, well-designed studies that have failed to find unique, distinct, theory-related processes of change (Morgenstern & McKay, 2007). A focus on change processes might well be more successful if research concentrated on specific components and procedures, but using large manuals as the unit of analysis interferes with that possibility.

Adopting concepts and methods from pharmacotherapy research and development has produced other problems. The dose-response idea that a dosage of active ingredients produces uniform and linear patterns of client change does not fit the large individual differences in client responsivity observed in psychotherapy research. Clients differ in whether they are in fact absorbing the material and achieving desired changes in cognitions, emotions, and skills and whether these changes in turn lead to desired outcomes. As a result, large individual differences in client response occur even in treatments that have been standardized and with therapists who show high adherence to the treatment manual (Morgenstern & McKay, 2007).

Similarly, therapists aren’t uniform in the same ways that pills are uniform. Nonspecific factors that are common across protocols, such as therapeutic alliance, have been viewed as being “akin to the binding on a pill, i.e., a minimum level of engagement is needed between therapist and patient in order to provide an avenue to transmit the specific curative elements of the approach” (Morgenstern & McKay, 2007, p. 102). Instead, therapists show significant variability rather than homogeneity (Laska, Smith, Wislocki, Minami, & Wampold, 2013), which may impact outcomes in specific ways.

To illustrate, consider work by Bedics, Atkins, Comtois, and Linehan (2012a, 2012b). They studied the relationship between therapeutic alliance and nonsuicidal self-injury in treatment delivered by expert behavioral and nonbehavioral therapists (2012a). Overall ratings of the therapeutic relationship did not predict reduced nonsuicidal self-injury. Instead, reductions were associated with the client’s perception that the therapist blended specific aspects—affirming, controlling, and protecting—of the relationship. In a companion study (2012b), they found that among clients with expert nonbehavioral therapists, higher perceived levels of therapist affirmation were associated with increased nonsuicidal self-injury. They speculate that the affirmations of nonbehavioral therapists might have inadvertently been timed to reinforce nonsuicidal self-injury, whereas behavior therapists contingently provided warmth and autonomy for improvement. These findings illustrate the kinds of interplay between specific and nonspecific factors that may impact outcome. Treatment effects of even carefully standardized treatments aren’t uniform or homogeneous, and research methods that force oversimplified understandings may limit scientific advancement.

Finally, social processes drive the crucial factors related to an EBP’s reach, adoption, implementation, and sustainability at the organizational level (Glasgow, Vogt, & Boles, 1999). Historically, the stages of the psychotherapy-as-technology model move sequentially from efficacy trials to effectiveness evaluations, and only then to dissemination and implementation research. As a result, the research on crucial factors that influence external validity, clinical utility, and the intervention’s reach, adoption, implementation, and sustainability in routine settings is conducted far too late in the development process (Glasgow et al., 1999). Little evidence is available to guide decision makers who face setting constraints about what they can and cannot change as they implement an EBP.

The Challenges of Relying on Clinical Judgment

Evidence-based practice, by definition, includes clinical judgment, but gaps in the evidence mean that many clinical decisions are based solely on clinical judgment, with little data to inform them. Unfortunately, clinical judgment has known weaknesses.

Daniel Kahneman’s book Thinking, Fast and Slow (2011) has popularized our understanding of these weaknesses. According to Kahneman’s dual processing theory, we have two modes of processing information: system 1, a fast, associative, low-effort mode that uses heuristic shortcuts to simplify information and reach good-enough solutions, and system 2, a slower rule-based mode that relies on high-effort systematic reasoning.

The fast and frugal system 1 heuristics that help us quickly simplify complex situations leave us prone to a multitude of perception and reasoning biases and errors. Kahneman conceptualizes the two systems as hierarchical and discrete, and he posits that the more rational, conscious system 2 can constrain the irrational, unconscious system 1 to save us from biases and errors. However, experimental data show that these systems are integrated, not discrete or hierarchical, with both prone to “motivated reasoning” (Kunda, 1990; Kahan, 2012, 2013a). If quick, impressionistic thinking doesn’t yield the answer we expect or want, we are prone to use our slower reasoning skills to fend off disconfirming evidence and seek data that fit our motivations rather than to reconsider our position (Kahan, 2013b).

In some professions, the work environment itself can correct these problems with judgment because work routines calibrate the unconscious processes of system 1 and train them to select suspected patterns for the attention of system 2’s deliberate analysis. Kahneman and Klein (2009) give the example of experienced fire commanders and nurses in neonatal intensive care units who, over years of observing, studying, and debriefing, tacitly learn to detect cues that indicate subtle and complex patterns related to outcomes, such as signs that a building will collapse or an infant will develop an infection. The cues in their work environments signal the probable relationships among causes and outcomes of behavior (valid cues). In such high-validity or “kind” environments, there are stable relationships between objectively identifiable cues and subsequent events, or between cues and the outcomes of possible actions. Standard methods, clear feedback, and direct consequences for error make it possible to tacitly learn the rules of these environments. Hunches based on invalid cues are likely to be detected and assessed for error. Pattern recognition improves. According to Kahneman and Klein (2009), we can develop excellent, expert decision-making abilities, but only when two conditions are met:

  1. The environment itself is characterized by stable relationships between objectively identifiable cues and subsequent events or between cues and the outcomes of possible actions (i.e., a high-validity environment).
  2. There are opportunities to learn the rules of the environment.

In contrast, the environments in which most psychotherapy is practiced are low-validity or “wicked” environments that make tacit learning difficult (Hogarth, 2001). Cues are dynamic rather than static, predictability of outcomes is poor, and feedback is delayed, sparse, and ambiguous. Psychotherapy practice environments lack standard methods, clear feedback, and direct consequences, and therefore provide few opportunities to learn the rules about the relation between clinical judgment, interventions, and outcomes. As a result, the tacit learning and development of intuitive expertise are blocked, which is a recipe for overconfidence (Kahneman & Klein, 2009). Within such low-validity environments, clinical judgment performs more poorly than linear algorithms based on statistical analysis. Even though often wrong, algorithms maintain above-chance accuracy by detecting and using weakly valid cues consistently, which accounts for much of an algorithm’s advantage over people (Karelaia & Hogarth, 2008). Without structured routines, heuristic biases outside of our awareness function like an automatic spotlight, unconsciously simplifying complex situations. Perception, attention, and problem solving are caught by a subset of the elements right in front of us. In particular, without the right conditions we are likely to fall prey to motivated reasoning and the predictable biases described by Heath and Heath (2013).
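The consistency advantage of simple algorithms can be made concrete with a minimal sketch (not from the chapter): a unit-weighted linear rule that combines weakly valid cues the same way every time, in the spirit of the work Karelaia and Hogarth (2008) review. The cue names and the weighting scheme are hypothetical illustrations, not a validated instrument.

```python
def linear_risk_score(cues):
    """Combine weakly valid binary cues with fixed unit weights.

    Unlike unaided judgment, the rule never reweights cues based on
    mood, salience, or a compelling narrative; its consistency is the
    source of its above-chance accuracy.
    """
    # Hypothetical binary cues (1 = present, 0 = absent).
    weights = {
        "missed_last_session": 1,
        "symptom_score_rising": 1,
        "recent_major_stressor": 1,
        "social_support_low": 1,
    }
    return sum(weights[name] * cues.get(name, 0) for name in weights)

# The same inputs always yield the same score, whereas a clinician's
# impression of the "same" client can vary from day to day.
score = linear_risk_score({"missed_last_session": 1,
                           "symptom_score_rising": 1})
```

The point is not the particular cues but the invariance: every case is scored by the identical rule, which is what allows weak signals to accumulate into above-chance accuracy.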

Disciplined Improvisation: Create Kind Environments with Heuristic Frameworks

What may be needed is to create the kind environments Kahneman and Klein (2009) and Hogarth (2001) describe: improved conditions in routine practice settings that support learning the relationship between clinical judgment, interventions, and outcomes. By doing so, practitioners can engage in disciplined improvisation as applied scientists, thereby improving the probability of good client outcomes. This requires practitioners to have not only functional scientific literacy but also structured routines that correct for the most common problems with clinical judgment. “Functional scientific literacy” means specialized knowledge related to probability and chance; the tools to think scientifically, and the propensity to do so; the tendency to exhaustively examine possibilities; the tendency to avoid my-side thinking; knowledge of some rules of formal and informal reasoning; and good argument-evaluation skills (Stanovich, West, & Toplak, 2011). This “mindware” is typically acquired only haphazardly in professional training.

The rest of this chapter details a short set of structured routines the practitioner can use to correct for the most common problems with clinical judgment and thereby better calibrate the decision-making process and make it possible to do meaningful EBP. In general, each proposed routine helps to generate valid cues in order to detect and learn about stable relationships between objectively identifiable cues and subsequent events, or between cues and the outcomes of possible actions.

Many of the routines involve using a heuristic in a deliberate, structured work routine. Instead of an unconscious spotlight, the heuristic works like a manually controlled spotlight (Heath & Heath, 2013) or a checklist that improves performance (Gawande, 2010). Heuristics, when used deliberately, offer general strategies about how to find an answer or produce a solution in a reasonable time frame that is “good enough” for solving the problem at hand. They help the practitioner find the sweet spot of optimality, completeness, accuracy, precision, and execution time. The following list of routine practices, easily done in a typical workflow, suggests ways to standardize methods and obtain clear feedback that increase the opportunities to learn the rules about the relation between clinical judgment, interventions, and outcomes.

Standardize Key Work Routines

Consider these three steps to standardize key work routines in order to transform a wicked environment into a kinder one that is disciplined enough to help you better detect valid cues and maximize your ability to learn from them.

1. Use Progress Monitoring and Other Assessment Methods

Monitoring progress—regularly collecting data on the client’s functioning, quality of life, and change regarding problems and symptoms—is the most important step in creating an environment with valid cues that make learning possible. Whether this step is called progress monitoring, client-reported outcomes, measurement-based care, or practice-based evidence, it has been demonstrated that tracking client change prevents dropout and treatment failure, reduces treatment length, and improves outcomes (e.g., Carlier et al., 2012; Goodman, McKay, & DePhilippis, 2013).

Where possible, use measures with standardized norms. When idiographic assessment is needed (i.e., comparing people with themselves), consider tools such as goal attainment scaling (Kiresuk, Smith, & Cardillo, 2014) or a “top problems” approach, in which clients identify the top three problems that matter to them and rate the severity of the problems on a scale of 0 to 10 weekly (Weisz et al., 2011). Further, consider standardizing any idiographic functional assessment used. Such standard assessment heuristics (if target problem is X, then use assessment method Y) may increase the speed and consistency with which problems are defined, providing a counter to the limitations of clinical judgment.
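A “top problems” routine of the kind Weisz et al. (2011) describe needs very little infrastructure: three client-named problems, each rated 0 to 10 weekly. The sketch below (class and method names are hypothetical) shows one way such standardized idiographic tracking could be kept, so that change is read from recorded data rather than from memory.

```python
from collections import defaultdict

class TopProblems:
    """Weekly 0-10 severity ratings for a client's top three problems."""

    def __init__(self, problems):
        # The client, not the therapist, names exactly three problems.
        assert len(problems) == 3
        self.problems = problems
        self.ratings = defaultdict(list)  # problem -> weekly severities

    def rate_week(self, week_ratings):
        """Record one week of ratings, e.g. {"low mood": 8, ...}."""
        for problem in self.problems:
            severity = week_ratings[problem]
            assert 0 <= severity <= 10
            self.ratings[problem].append(severity)

    def change(self, problem):
        """Severity change from the first to the most recent rating."""
        history = self.ratings[problem]
        return history[-1] - history[0]
```

Because the same three problems are rated on the same scale every week, week-to-week differences become the kind of stable, objectively identifiable cue the chapter argues practice environments usually lack.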

In particular, adopt heuristic rules about how to use progress-monitoring data to guide decisions in which bias is likely to be highest. For example, consider a routine such as requiring a change in the treatment plan every ten to twelve weeks if the client has not had at least a 50 percent improvement in symptoms using a validated measure (Unützer & Park, 2012).
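The Unützer and Park (2012) rule above can be written as an explicit if-then check, so that the decision to change the plan is triggered by the data rather than by impression. This is an illustrative sketch, not a clinical algorithm; the function name, the ten-week review point, and the assumption that lower scores are better are choices made here for the example.

```python
def plan_change_due(baseline_score, current_score, weeks_in_treatment,
                    review_week=10, required_improvement=0.50):
    """Return True when the heuristic says to change the treatment plan.

    Implements the rule: after roughly ten to twelve weeks, require a
    change in the plan unless the client has improved at least 50
    percent on a validated symptom measure (lower score = better).
    """
    if weeks_in_treatment < review_week:
        return False  # too early to apply the rule
    improvement = (baseline_score - current_score) / baseline_score
    return improvement < required_improvement
```

For example, a client whose depression score fell from 20 to 14 by week twelve has improved 30 percent, so the rule flags the plan for revision; a fall from 20 to 8 (60 percent) does not.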

More generally, routinely obtain high-quality standardized data to inform decisions. Consider creating invariant routines using evidence-based assessment methods, such as broad symptom rating scales, to identify presenting problems and maintaining factors; followed by more in-depth, specific rating scales; and then standardized clinical interviews (see Christon, McLeod, & Jensen-Doss, 2015, for more on evidence-based assessment). The key is to build routines that stay more or less stable and standardized to reduce method variability and thereby allow for the detection of valid signals identifying relationships between clinical judgment, interventions, and client outcome.

2. Consider Existing EBPs for the Client’s Top Problem First

Whenever possible, begin with a standardized treatment protocol for the most important problem. Beginning with a standard protocol offers many advantages. First, treating the most important problem may resolve others. Second, a standardized protocol gives you a benchmark against which to evaluate outcomes. Finally, following an evidence-based protocol allows you to limit your own inconsistency and my-side bias.

Again, although the evidence for protocols isn’t strong enough to treat them as algorithms (step-by-step instructions that predictably and reliably yield the correct answer every time), protocols do offer heuristics that usefully simplify complex situations. Therapy protocols can be thought of as means-ends analyses. Means-ends analysis is a heuristic in which the ends are defined, and means to those ends are identified. If no workable means can be found, then the problem is broken into a hierarchy of subproblems, which may in turn be further broken into smaller subproblems until means are found to solve the problem.
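Means-ends analysis has a natural recursive form: look for a workable means to the stated end; if none exists, break the end into subproblems and solve those. The sketch below uses hypothetical lookup tables mapping ends to known means or to subgoals; a real protocol encodes this knowledge in prose rather than dictionaries.

```python
def means_ends(goal, known_means, decomposition):
    """Return a flat plan (list of means) that addresses `goal`.

    known_means: goal -> intervention that directly addresses it.
    decomposition: goal -> subgoals to try when no direct means exists.
    A goal with neither a means nor a decomposition raises KeyError.
    """
    if goal in known_means:
        return [known_means[goal]]
    plan = []
    for subgoal in decomposition[goal]:
        plan.extend(means_ends(subgoal, known_means, decomposition))
    return plan

# Hypothetical example: no single means treats "depression with
# insomnia", so the problem is decomposed into treatable subproblems.
plan = means_ends(
    "depression with insomnia",
    known_means={"depression": "behavioral activation",
                 "insomnia": "CBT-I"},
    decomposition={"depression with insomnia": ["depression", "insomnia"]},
)
# plan == ["behavioral activation", "CBT-I"]
```

The recursion mirrors the chapter's description: decompose until means are found, and stop as soon as a workable means exists at any level.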

The structured if-then guidelines that protocols provide help simplify complex clinical situations into a series of systematic prompts to think or act. Some protocols specify what problems the therapist should analyze and how to analyze them, and they provide further heuristics on how to combine component treatment strategies based on the nature and severity of a client’s problems. In these ways, structuring clinical intervention with a protocol can help you detect valid cues and create a structured environment to promote learning.

Another useful standard routine is to systematically consider alternative, relevant treatment protocols as part of shared decision-making and consent-to-treatment conversations with clients. The more a practitioner clearly and deliberately considers alternative courses of action (Heath & Heath, 2013) and creates structured if-then tests, the more such feedback loops can help the practitioner detect whether the expected outcome happened (or didn’t) and the more learnable the environment becomes. The PICO acronym is a way to frame a clinical question for a literature search that works well for shared decision making. P stands for “patient,” “problem,” or “population”; I for “intervention”; C for “comparison,” “control,” or “comparator”; and O for “outcomes” (Huang, Lin, & Demner-Fushman, 2006).
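The PICO frame can be treated as a small record whose fields force the clinical question to be fully specified before any search or shared-decision conversation begins. The class and its one-line rendering are illustrative, assuming the standard PICO fields described above.

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """Frame a clinical question for a literature search."""
    patient: str       # patient, problem, or population
    intervention: str
    comparison: str    # comparison, control, or comparator
    outcome: str

    def as_query(self):
        """A readable one-line version for notes or a search log."""
        return (f"In {self.patient}, does {self.intervention}, "
                f"compared with {self.comparison}, "
                f"improve {self.outcome}?")

# Hypothetical question for the running case example.
q = PICOQuestion(
    patient="adult outpatient with depression and insomnia",
    intervention="behavioral activation plus CBT-I",
    comparison="behavioral activation alone",
    outcome="depression severity and sleep quality",
)
```

Leaving any field blank makes the gap visible, which is exactly the kind of prompt to deliberate (system 2) reasoning the chapter recommends.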

Figure 1. Visual diagram conceptualizing the relationship among client problems

For example, figure 1 returns to the earlier client example and shows the visual diagram the client and therapist made to capture the relationship among the client’s problems. The client was most troubled by low mood, low energy, fatigue, difficulty concentrating, and feelings of intense guilt and hopelessness, scoring in the severe range on the depression scale of the Depression Anxiety Stress Scales (Lovibond & Lovibond, 1995). In her view, her children’s behavior problems, and the conflicts she and her husband had over parenting, made each problem worse and greatly impacted her mood, and sometimes her sleep. She turned to alcohol to escape painful emotions. Using PICO, the therapist can explain treatment options and likely outcomes for each of these problems (see table 1 for details).

Table 1. Modular component treatment plan

#1 Depression
Intervention: Behavioral activation (BA): 50–60% recover (Dimidjian et al., 2006).
Comparison and outcome: Try BA for 8 to 10 sessions, then reevaluate and consider an alternative treatment if there is less than 50% change in depression on the Depression Anxiety Stress Scales. Other options to consider:
  - Natural recovery
  - Antidepressant medication (ADM): ~1/3 respond, ~1/3 partially respond; relapse rate is high when discontinuing
  - Combined ADM and psychotherapy: ~53% report symptom reduction
  - Interpersonal therapy and other active treatments: ~50% symptom reduction
  - Behavioral couples therapy (Jacobson et al., 1991): 87% recover from depression; couples’ distress is also reduced

#2 Problem drinking
Intervention: Brief intervention for problem drinking, delivered as one of the first activation assignments of BA (O’Donnell et al., 2014).
Comparison and outcome: Reduces amount and frequency for many; less studied with women. Consider self-help or CBT if the brief intervention doesn’t produce the desired change on the Alcohol Use Disorders Identification Test (AUDIT).

#3 Insomnia
Intervention: CBT for insomnia (CBT-I), with a sleep log as one of the first activation assignments of BA.
Comparison and outcome: Prefer CBT-I over medications; effectively improving insomnia may reduce other problems, especially depression.

#3 Parenting for child behavior problems
Intervention: Self-help: review The Incredible Years: A Trouble-Shooting Guide for Parents of Children Aged 2–8 (Webster-Stratton, 2006) as an activation assignment.
Comparison and outcome: If self-help doesn’t achieve enough gains, consider an evidence-based parent-training program.

#3 Couples conflict
Intervention: Devise activation assignments to strengthen conflict resolution and marital satisfaction.
Comparison and outcome: If individual changes fail to produce sufficient desired changes, consider couples counseling.

3. Use Explicit Case Formulation for Hypothesis Testing

When a standard treatment isn’t available or doesn’t yield desired results, practitioners use case formulation to tailor interventions, based on the assumption that a tailored intervention will outperform the imperfect fit of standardized protocols for the individual. Unfortunately, case formulation has a meager evidence base. In a thorough and fair-minded review, Kuyken concludes, “There is no compelling evidence that [cognitive behavioral therapy] CBT formulation enhances therapy processes or outcomes” (p. 31).

While there is a lack of strong evidence to suggest that tailored interventions based on case formulations are superior, when used systematically case formulation can serve as a disciplined method to apply the scientific method to clinical work (Persons, 2008). When the therapist must go beyond existing protocols, purposefully specifying dependent and independent variables, combined with progress monitoring, can create conditions for the therapist to learn the stable relationships between judgment, interventions, and outcome; and this method can counter problems with bias and unconsciously applied heuristics. Persons (2008) and Padesky, Kuyken, and Dudley (2011) have articulated systematic approaches to case formulation. At a minimum, the heuristic to apply with case formulation is to specify the treatment targets (dependent variables) and robust change processes (independent variables).

Use a Treatment Target Hierarchy Informed by Science

A treatment target hierarchy provides if-then guidelines that prescribe what to treat when. The target hierarchy constrains therapist variability and thereby makes it more likely that the most essential problems are addressed first, as a checklist does in an emergency room (Gawande, 2010). For example, Linehan (1999) has argued for organizing treatment targets into stages of treatment based on the severity of disorders. In pretreatment, her model directs the therapist to target maximizing initial motivation and commitment to treatment, thereby increasing engagement, and research (Norcross, 2002) supports this common factor. When behavioral dyscontrol is predominant, the therapist is to prioritize target behaviors in a commonsense way by their severity: life-threatening behaviors first, followed by therapy-interfering behavior, quality-of-life-interfering behavior, and improvement of skills.

Defined stages with target hierarchies provide a process to organize the allocation of session time, aiding the therapist’s ability to think consistently and coherently; sort the relevant from irrelevant; and manage cognitive load. As discussed earlier, these types of checklists or decision-support tools are exactly what humans need in order to detect and respond consistently to valid cues. Treatment target hierarchies may be particularly helpful or needed when a client has multiple disorders and multiple crises that make it difficult to intervene consistently.
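Linehan’s (1999) hierarchy can be applied mechanically: rank the session’s candidate targets by their class in the hierarchy so that the most severe class is always addressed first, regardless of which crisis is most salient that day. The target labels below come from the text; the data structure and function are illustrative.

```python
# Rank in the hierarchy (lower = treated first), per Linehan (1999).
TARGET_HIERARCHY = {
    "life-threatening behavior": 0,
    "therapy-interfering behavior": 1,
    "quality-of-life-interfering behavior": 2,
    "skills improvement": 3,
}

def order_session_targets(targets):
    """Sort this session's candidate targets by hierarchy rank.

    Works like a checklist: the ordering never depends on which
    problem happens to feel most urgent in the moment.
    """
    return sorted(targets, key=lambda target: TARGET_HIERARCHY[target])

agenda = order_session_targets([
    "skills improvement",
    "therapy-interfering behavior",
    "life-threatening behavior",
])
# agenda[0] == "life-threatening behavior"
```

Encoding the hierarchy as fixed ranks is what makes it function as a decision-support tool: session time is allocated by rule, not by salience.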

Using a treatment target hierarchy may also have direct treatment effects, because the specific content targeted produces client change. For example, it appears that directly targeting suicidal behavior as a problem in itself (rather than seeing it as a sign or symptom that will resolve when the underlying disorder is treated) is associated with better outcomes (Comtois & Linehan, 2006). Treatment target hierarchies provide a practice-friendly way to consolidate scientific knowledge.

A target hierarchy can be constructed from disorder-specific processes or transdiagnostic processes drawn from psychopathology or treatment research. For example, in adapting disorder-specific targets to treat substance abuse, McMain, Sayrs, Dimeff, and Linehan (2007) didn’t target stopping the use of illegal drugs and the abuse of prescribed drugs alone; they also targeted the physical and psychological discomfort associated with withdrawal and the urges to use, because withdrawal symptoms, urge intensity from the previous day, duration of urge, and urge intensity upon awakening predict relapse.

Additionally or alternatively, targets can be transdiagnostic (i.e., fundamental processes that contribute to or maintain disorders across what current diagnostic nomenclature labels as distinct conditions). Mansell, Harvey, Watkins, and Shafran (2009) categorize four views on transdiagnostic processes:

  1. Universal multiple processes maintain all or the majority of psychological disorders. For example, processes include problematic self-focused attention, explicit memory bias, interpretational biases, and safety behaviors (e.g., Harvey, Watkins, Mansell, & Shafran, 2004).
  2. A range of cognitive and behavioral processes maintain a limited range of disorders, but one that is wider than traditional disorder-specific models. For example, researchers propose that common processes of maladaptive cognitive appraisals, poor emotion regulation, emotional avoidance, and emotionally driven behavior are related to anxiety and depression (Barlow, Allen, & Choate, 2004), or that clinical perfectionism, core low self-esteem, mood intolerance, and interpersonal difficulties are related to eating disorders (Fairburn, Cooper, & Shafran, 2003).
  3. Symptom or psychological phenomena themselves, rather than diagnostic categories or labels, should be targeted. For example, rather than thinking of bipolar disorder and schizophrenia as distinct entities, Reininghaus, Priebe, and Bentall (2013) argue that the data show not only a superordinate psychosis syndrome, but also five independent symptom dimensions: positive symptoms (hallucinations and delusions), negative symptoms (social withdrawal and the inability to experience pleasure), cognitive disorganization, depression, and mania. These dimensions can be treated as targets.
  4. A universal, single process is largely responsible for the maintenance of psychological distress across all or the majority of psychological disorders. For example, Watkins (2008) proposes the importance of repetitive thinking: the process of thinking attentively, repetitively, or frequently about oneself or one’s world. Hayes and colleagues (2006, p. 6) propose the importance of psychological inflexibility: the way “language and cognition interact with direct contingencies to produce an inability to persist or change behavior in the service of long-term valued ends.”

Link Targets to Robust Change Processes

Finally, when disciplined improvisation is needed because a client’s problems don’t match well with an established protocol, or the client has failed to respond to one, try modular components of evidence-based protocols. Chorpita and colleagues (e.g., Chorpita & Daleiden, 2010; Chorpita et al., 2005) have led the effort to create a standardized lexicon of interventions so that a discrete therapy technique or strategy, rather than the treatment manual, can serve as the independent variable and unit of analysis. In the chapters in section 3 of this book, and in the works of others (e.g., Roth & Pilling, 2008), components of evidence-based protocols are packaged into self-contained modules that contain all the knowledge and competencies needed to deliver a particular intervention.

Such modular approaches may prove to be more scientifically useful and practice oriented than relying on manuals as the unit of analysis. They remove duplication due to overspecification and could offer a way to reliably aggregate findings across studies and distill prescriptive heuristics (Chorpita & Daleiden, 2010). Rotheram-Borus and colleagues (2012) have suggested that reengineering evidence-based therapeutic and preventive-intervention programs based on their most robust features will make it simpler and less expensive to meet the needs of the majority of people, making effective help more accessible, scalable, replicable, and sustainable.

Few prescriptive heuristics are available to guide the matching of component interventions to targets. Further, because available data have yet to demonstrate the unequivocal superiority of either the common factors model or the psychotherapy-as-technology model, perhaps the best path for practitioners is to be informed by both.

According to the common factors model, five ingredients produce change. The practitioner should (1) create an emotionally charged bond between the therapist and the client; (2) provide a confiding, healing setting in which therapy can take place; (3) offer a psychologically derived and culturally embedded explanation for emotional distress; (4) ensure that this explanation is adaptive (i.e., provides viable and believable options for overcoming specific difficulties) and accepted by the client; and (5) engage in a set of procedures or rituals that lead the client to enact something positive, helpful, or adaptive (Laska et al., 2013). From this common factors viewpoint, any therapy that contains all five of these ingredients will be efficacious for most disorders.

From a cognitive behavioral perspective, general means-ends problem-solving strategies offer guidance about how to select component elements for treatment targets. First, assess whether the absence of effective behavior is due to a capability deficit (i.e., the client doesn’t know how to do the needed behavior) and, if so, then use skills training procedures. If the client does have the skills but emotions, contingencies, or cognitive processes and content interfere with the ability to behave skillfully, then use the procedures and principles from exposure, contingency management, and cognitive modification to remove the hindrances to skillful behavior. Pull disorder-specific procedures and principles from relevant protocols as needed.
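The means-ends heuristic above is itself an if-then procedure: first test for a capability deficit, and only then match the interfering process to a change procedure. A minimal sketch, with the labels and mapping chosen here for illustration only:

```python
# Hedged sketch of the means-ends selection heuristic described in the text.
# The mapping is a paraphrase for illustration, not a clinical algorithm.

PROCEDURE_FOR_HINDRANCE = {
    "emotions": "exposure",
    "contingencies": "contingency management",
    "cognitions": "cognitive modification",
}

def select_procedure(has_skill, hindrance=None):
    """First ask whether a capability deficit explains the ineffective
    behavior; otherwise match the interfering process to a procedure."""
    if not has_skill:
        return "skills training"
    # Default to further assessment when the hindrance is unclear.
    return PROCEDURE_FOR_HINDRANCE.get(hindrance, "further functional assessment")

print(select_procedure(False))             # prints "skills training"
print(select_procedure(True, "emotions"))  # prints "exposure"
```

The design choice mirrors the text: capability is assessed before motivation-side or interference-side explanations, so skills training is never skipped in favor of removing hindrances the client cannot yet act on.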

Table 1 uses PICO to illustrate how a modular component treatment plan might look. Behavioral activation (BA) serves as the basic template and starting point. BA is based on the premise that depression results from a lack of reinforcement. Consequently, you can treat multiple targets, such as problematic drinking, insomnia, parenting strategies, and the marital relationship, through the robust common procedure of activation assignments to reduce avoidance (which interferes with reinforcing contingencies) and improve mastery and satisfaction (to improve reinforcement). You can use disorder-specific principles and strategies drawn from specific evidence-based protocols (e.g., for insomnia, problem drinking, or parent training) in a modular fashion to treat specific targets.
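One way to picture a row of such a PICO-organized plan is as a small record per target, with BA as the shared template. The field names, targets, and measures below are assumptions for the sketch (they are not drawn from Table 1 itself):

```python
# Illustrative only: PICO-style rows of a modular treatment plan built on a
# behavioral activation (BA) template. All entries are hypothetical examples.

from dataclasses import dataclass

@dataclass
class PicoEntry:
    problem: str        # P: patient problem or treatment target
    intervention: str   # I: modular component applied
    comparison: str     # C: alternative considered
    outcome: str        # O: how progress will be monitored

plan = [
    PicoEntry("depressed mood", "BA activation assignments",
              "treatment as usual", "weekly depression self-report"),
    PicoEntry("insomnia", "CBT-I sleep module",
              "BA alone", "sleep-diary total sleep time"),
]

for entry in plan:
    print(entry.problem, "->", entry.intervention)
```

Structuring the plan this way makes the modular logic explicit: the common BA procedure carries the template, while disorder-specific modules slot in per target with their own outcome measures.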

Beyond the Therapy Room: Organizations and Practice-Based Science

Diagnostic categories, with their diagnosis codes, Current Procedural Terminology (CPT) codes for services, and service arms for specific disorders, still organize the world of service delivery and reimbursement. This organization is not adequate to implement the vision discussed in this chapter. Moving into a new era of EBP will require organizational changes that facilitate and support these practices.

Evidence-informed heuristics are emerging to guide these changes, including identifying the key variables that determine and sustain “good enough” implementation (e.g., Damschroder et al., 2009; Proctor et al., 2009) and verifying the utility of modular component models (Chorpita et al., 2015; Weisz et al., 2012). By instituting progress monitoring as part of standard practice, practitioners and organizations may be able to answer for themselves what is necessary to obtain good outcomes within their quality improvement efforts (Steinfeld et al., 2015). As barriers to practice-based research appear to be surmountable (Barkham, Hardy, & Mellor-Clark, 2010; Koerner & Castonguay, 2015), and newer single-case methods make it possible to aggregate data in meaningful ways to draw generalizable conclusions (Barlow, Nock, & Hersen, 2008; Iwakabe & Gazzola, 2009), practice-based research can offer significant contributions to the scientific literature.

Conclusion

The ubiquity of EBP implies that it is a straightforward process. However, significant weaknesses in both the evidence base and clinical judgment suggest that practitioners and organizations must create “kind” environments that facilitate EBP. By implementing standard work routines, including the systematic use of heuristics that integrate the best current science, it becomes possible to train and better calibrate clinical judgment to detect valid cues and to learn the relationships among clinical judgment, interventions, and outcomes. It also becomes possible to answer practice-based questions and to make significant contributions to the wider research literature. Many hands will be needed to advance the goal of science in practice.

References

American Psychological Association Presidential Task Force on Evidence-Based Practice (2006). Evidence-based practice in psychology. American Psychologist, 61(4), 271–285.

Barkham, M., Hardy, G. E., & Mellor-Clark, J. (2010). Improving practice and enhancing evidence. In M. Barkham, G. E. Hardy, & J. Mellor-Clark (Eds.), Developing and delivering practice-based evidence: A guide for the psychological therapies (pp. 3–20). Chichester, UK: Wiley-Blackwell.

Barlow, D. H., Allen, L. B., & Choate, M. L. (2004). Toward a unified treatment for emotional disorders. Behavior Therapy, 35(2), 205–230.

Barlow, D. H., Nock, M. K., & Hersen, M. (2008). Single case experimental designs: Strategies for studying behavior change (3rd ed.). Boston: Pearson Allyn and Bacon.

Bedics, J. D., Atkins, D. C., Comtois, K. A., & Linehan, M. M. (2012a). Treatment differences in the therapeutic relationship and introject during a 2-year randomized controlled trial of dialectical behavior therapy versus nonbehavioral psychotherapy experts for borderline personality disorder. Journal of Consulting and Clinical Psychology, 80(1), 66–77.

Bedics, J. D., Atkins, D. C., Comtois, K. A., & Linehan, M. M. (2012b). Weekly therapist ratings of the therapeutic relationship and patient introject during the course of dialectical behavioral therapy for the treatment of borderline personality disorder. Psychotherapy (Chicago), 49(2), 231–240.

Branson, A., Shafran, R., & Myles, P. (2015). Investigating the relationship between competence and patient outcome with CBT. Behaviour Research and Therapy, 68, 19–26.

Carlier, I. V., Meuldijk, D., van Vliet, I. M., van Fenema, E., van der Wee, N. J., & Zitman, F. G. (2012). Routine outcome monitoring and feedback on physical or mental health status: Evidence and theory. Journal of Evaluation in Clinical Practice, 18(1), 104–110.

Chorpita, B. F., & Daleiden, E. L. (2010). Building evidence-based systems in children’s mental health. In J. R. Weisz & A. E. Kazdin (Eds.), Evidence-based psychotherapies for children and adolescents (2nd ed., pp. 482–499). New York: Guilford Press.

Chorpita, B. F., Daleiden, E. L., & Weisz, J. R. (2005). Modularity in the design and application of therapeutic interventions. Applied and Preventive Psychology, 11(3), 141–156.

Chorpita, B. F., Park, A., Tsai, K., Korathu-Larson, P., Higa-McMillan, C. K., Nakamura, B. J., et al. (2015). Balancing effectiveness with responsiveness: Therapist satisfaction across different treatment designs in the Child STEPs randomized effectiveness trial. Journal of Consulting and Clinical Psychology, 83(4), 709–718.

Christon, L. M., McLeod, B. D., & Jensen-Doss, A. (2015). Evidence-based assessment meets evidence-based treatment: An approach to science-informed case conceptualization. Cognitive and Behavioral Practice, 22(1), 36–48.

Collado, A., Calderón, M., MacPherson, L., & Lejuez, C. (2016). The efficacy of behavioral activation treatment among depressed Spanish-speaking Latinos. Journal of Consulting and Clinical Psychology, 84(7), 651–657.

Comtois, K. A., & Linehan, M. M. (2006). Psychosocial treatments of suicidal behaviors: A practice-friendly review. Journal of Clinical Psychology, 62(2), 161–170.

Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4, 50.

Dimidjian, S., Hollon, S. D., Dobson, K. S., Schmaling, K. B., Kohlenberg, R. J., Addis, M. E., et al. (2006). Randomized trial of behavioral activation, cognitive therapy, and antidepressant medication in the acute treatment of adults with major depression. Journal of Consulting and Clinical Psychology, 74(4), 658–670.

Fairburn, C. G., Cooper, Z., & Shafran, R. (2003). Cognitive behaviour therapy for eating disorders: A “transdiagnostic” theory and treatment. Behaviour Research and Therapy, 41(5), 509–528.

Gawande, A. (2010). The checklist manifesto: How to get things right. New York: Metropolitan Books.

Glasgow, R. E., Vogt, T. M., & Boles, S. M. (1999). Evaluating the public health impact of health promotion interventions: The RE-AIM framework. American Journal of Public Health, 89(9), 1322–1327.

Goodman, J. D., McKay, J. R., & DePhilippis, D. (2013). Progress monitoring in mental health and addiction treatment: A means of improving care. Professional Psychology: Research and Practice, 44(4), 231–246.

Harvey, A. G., Watkins, E., Mansell, W., & Shafran, R. (2004). Cognitive behavioural processes across psychological disorders: A transdiagnostic approach to research and treatment. Oxford: Oxford University Press.

Hayes, S. C., Luoma, J. B., Bond, F. W., Masuda, A., & Lillis, J. (2006). Acceptance and commitment therapy: Model, processes, and outcomes. Behaviour Research and Therapy, 44(1), 1–25.

Heath, C., & Heath, D. (2013). Decisive: How to make better choices in life and work. New York: Random House.

Hogarth, R. M. (2001). Educating intuition. Chicago: University of Chicago Press.

Huang, X., Lin, J., & Demner-Fushman, D. (2006). Evaluation of PICO as a knowledge representation for clinical questions. AMIA Annual Symposium Proceedings Archive, 359–363.

Iwakabe, S., & Gazzola, N. (2009). From single-case studies to practice-based knowledge: Aggregating and synthesizing case studies. Psychotherapy Research, 19(4–5), 601–611.

Jacobson, N. S., Dobson, K., Fruzzetti, A. E., Schmaling, K. B., & Salusky, S. (1991). Marital therapy as a treatment for depression. Journal of Consulting and Clinical Psychology, 59(4), 547–557.

Kahan, D. (2012). Two common (and recent) mistakes about dual process reasoning and cognitive bias. February 3. http://www.culturalcognition.net/blog/2012/2/3/two-common-recent-mistakes-about-dual-process-reasoning-cogn.html.

Kahan, D. M. (2013a). Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making, 8(4), 407–424.

Kahan, D. M. (2013b). “Integrated and reciprocal”: Dual process reasoning and science communication part 2. July 24. http://www.culturalcognition.net/blog/2013/7/24/integrated-reciprocal-dual-process-reasoning-and-science-com.html.

Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.

Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515–526.

Kanter, J. W., Santiago-Rivera, A. L., Santos, M. M., Nagy, G., López, M., Hurtado, G. D., et al. (2015). A randomized hybrid efficacy and effectiveness trial of behavioral activation for Latinos with depression. Behavior Therapy, 46(2), 177–192.

Karelaia, N., & Hogarth, R. M. (2008). Determinants of linear judgment: A meta-analysis of lens model studies. Psychological Bulletin, 134(3), 404–426.

Kiresuk, T. J., Smith, A., & Cardillo, J. E. (2014). Goal attainment scaling: Applications, theory, and measurement. London: Psychology Press.

Koerner, K., & Castonguay, L. G. (2015). Practice-oriented research: What it takes to do collaborative research in private practice. Psychotherapy Research, 25(1), 67–83.

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.

Kuyken, W. (2006). Evidence-based case formulation: Is the emperor clothed? In N. Tarrier & J. Johnson (Eds.), Case formulation in cognitive behaviour therapy: The treatment of challenging and complex cases (pp. 12–35). New York: Routledge.

Laska, K. M., Smith, T. L., Wislocki, A. P., Minami, T., & Wampold, B. E. (2013). Uniformity of evidence-based treatments in practice? Therapist effects in the delivery of cognitive processing therapy for PTSD. Journal of Counseling Psychology, 60(1), 31–41.

Linehan, M. M. (1999). Development, evaluation, and dissemination of effective psychosocial treatments: Levels of disorder, stages of care, and stages of treatment research. In M. D. Glantz & C. R. Hartel (Eds.), Drug abuse: Origins and interventions (pp. 367–394). Washington, DC: American Psychological Association.

Lovibond, P. F., & Lovibond, S. H. (1995). The structure of negative emotional states: Comparison of the Depression Anxiety Stress Scales (DASS) with the Beck Depression and Anxiety Inventories. Behaviour Research and Therapy, 33(3), 335–343.

Manber, R., Edinger, J. D., Gress, J. L., San Pedro-Salcedo, M. G., Kuo, T. F., & Kalista, T. (2008). Cognitive behavioral therapy for insomnia enhances depression outcome in patients with comorbid major depressive disorder and insomnia. Sleep, 31(4), 489–495.

Mansell, W., Harvey, A., Watkins, E., & Shafran, R. (2009). Conceptual foundations of the transdiagnostic approach to CBT. Journal of Cognitive Psychotherapy, 23(1), 6–19.

McMain, S., Sayrs, J. H., Dimeff, L. A., & Linehan, M. M. (2007). Dialectical behavior therapy for individuals with borderline personality disorder and substance dependence. In L. A. Dimeff & K. Koerner (Eds.), Dialectical behavior therapy in clinical practice: Applications across disorders and settings (pp. 145–173). New York: Guilford Press.

Merton, R. K. (1968). The Matthew effect in science. Science, 159, 56–63.

Morgenstern, J., & McKay, J. R. (2007). Rethinking the paradigms that inform behavioral treatment research for substance use disorders. Addiction, 102(9), 1377–1389.

Norcross, J. C. (2002). Psychotherapy relationships that work: Therapist contributions and responsiveness to patients. New York: Oxford University Press.

O’Donnell, A., Anderson, P., Newbury-Birch, D., Schulte, B., Schmidt, C., Reimer, J., et al. (2014). The impact of brief alcohol interventions in primary healthcare: A systematic review of reviews. Alcohol and Alcoholism, 49(1), 66–78.

Onken, L. S., Carroll, K. M., Shoham, V., Cuthbert, B. N., & Riddle, M. (2014). Reenvisioning clinical science: Unifying the discipline to improve the public health. Clinical Psychological Science, 2(1), 22–34.

Padesky, C. A., Kuyken, W., & Dudley, R. (2011). Collaborative case conceptualization rating scale and coding manual. Vol. 5, July 19. Unpublished manual retrieved from http://padesky.com/pdf_padesky/CCCRS_Coding_Manual_v5_web.pdf.

Persons, J. B. (2008). The case formulation approach to cognitive-behavior therapy. New York: Guilford Press.

Proctor, E. K., Landsverk, J., Aarons, G., Chambers, D., Glisson, C., & Mittman, B. (2009). Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research, 36(1), 24–34.

Reininghaus, U., Priebe, S., & Bentall, R. P. (2013). Testing the psychopathology of psychosis: Evidence for a general psychosis dimension. Schizophrenia Bulletin, 39(4), 884–895.

Roth, A. D., & Pilling, S. (2008). Using an evidence-based methodology to identify the competences required to deliver effective cognitive and behavioral therapy for depression and anxiety disorders. Behavioral and Cognitive Psychotherapy, 36(2), 129–147.

Rotheram-Borus, M. J., Swendeman, D., & Chorpita, B. F. (2012). Disruptive innovations for designing and diffusing evidence-based interventions. American Psychologist, 67(6), 463–476.

Rounsaville, B. J., Carroll, K. M., & Onken, L. S. (2001). A stage model of behavioral therapies research: Getting started and moving on from stage 1. Clinical Psychology: Science and Practice, 8(2), 133–142.

Sackett, D. L., Rosenberg, W. M., Gray, J. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn’t. BMJ, 312(7023), 72–73.

Spring, B. (2007a). Steps for evidence-based behavioral practice. http://www.ebbp.org/steps.html.

Spring, B. (2007b). Evidence-based practice in clinical psychology: What it is, why it matters; what you need to know. Journal of Clinical Psychology, 63(7), 611–631.

Stanovich, K. E., West, R. F., & Toplak, M. E. (2011). Individual differences as essential components of heuristics and biases research. In K. Manktelow, D. Over, & S. Elqayam (Eds.), The science of reason: A Festschrift for Jonathan St. B. T. Evans (pp. 355–396). New York: Psychology Press.

Steinfeld, B., Scott, J., Vilander, G., Marx, L., Quirk, M., Lindberg, J., et al. (2015). The role of lean process improvement in implementation of evidence-based practices in behavioral health care. Journal of Behavioral Health Services & Research, 42(4), 504–518.

Stepanski, E. J., & Rybarczyk, B. (2006). Emerging research on the treatment and etiology of secondary or comorbid insomnia. Sleep Medicine Reviews, 10(1), 7–18.

Tucker, J. A., & Roth, D. L. (2006). Extending the evidence hierarchy to enhance evidence‐based practice for substance use disorders. Addiction, 101(7), 918–932.

Unützer, J., & Park, M. (2012). Strategies to improve the management of depression in primary care. Primary Care: Clinics in Office Practice, 39(2), 415–431.

Watkins, E. R. (2008). Constructive and unconstructive repetitive thought. Psychological Bulletin, 134(2), 163–206.

Webb, C. A., DeRubeis, R. J., & Barber, J. P. (2010). Therapist adherence/competence and treatment outcome: A meta-analytic review. Journal of Consulting and Clinical Psychology, 78(2), 200–211.

Webster-Stratton, C. (2006). The incredible years: A trouble-shooting guide for parents of children aged 2–8 (rev. ed.). Seattle: The Incredible Years.

Weisz, J. R., Chorpita, B. F., Frye, A., Ng, M. Y., Lau, N., Bearman, S. K., et al. (2011). Youth top problems: Using idiographic, consumer-guided assessment to identify treatment needs and to track change during psychotherapy. Journal of Consulting and Clinical Psychology, 79(3), 369–380.

Weisz, J. R., Chorpita, B. F., Palinkas, L. A., Schoenwald, S. K., Miranda, J., Bearman, S. K., et al. (2012). Testing standard and modular designs for psychotherapy treating depression, anxiety, and conduct problems in youth: A randomized effectiveness trial. Archives of General Psychiatry, 69(3), 274–282.