AS CHAPTER 1 EMPHASIZED, this is a book about facts. Obviously some facts are clearer than others. And equally obviously there are often disagreements about facts. Sometimes those disagreements arise when there is evidence leading to opposing conclusions, such as the evidence leading to, but occasionally challenging, the conclusion that Thomas Jefferson was the biological father of the six children of Sally Hemings, as discussed in Chapter 2. There is similar disagreement about the conflicting evidence that Leonardo da Vinci was, or perhaps was not, the painter of a painting known as Salvator Mundi.1 But even when all or most of the evidence points in the same direction, there is often, and perhaps surprisingly, still disagreement. And at least on the surface these disagreements appear not to be about values or goals or high principles, but about facts. Plain, hard facts.
Many of these disagreements about facts, and about clear facts as well as unclear ones, are influenced by preferences. In theory, preferences should not matter about hard concrete physical facts. I wish I were taller and thinner and younger, but I am what I am, regardless of my preferences. And although I wish that the days were as long in December as they are in June, what I wish matters not a bit.
Although most of the factual world successfully resists adapting to our preferences, often, and perhaps surprisingly, our perception of that factual world does conform to our preferences. This phenomenon is especially familiar to participants and spectators at sporting events. Tennis balls are in or out, and even the close ones are either slightly in or slightly out. Nevertheless, the typical tennis court argument involves—or at least did involve prior to electronic line calling—the player who hit the shot insisting that the ball was in and his or her opponent insisting with equal vehemence that it was out. And when the question is whether a soccer ball (or the hockey puck) did or did not cross the line to count as a goal, whether a baseball was fair or foul, or whether the football receiver was in bounds or out when he caught the ball, it is tediously familiar that both players and fans will loudly see these entirely factual matters in the way that aligns with their outcome preferences.
And so too with things that matter more than tennis, soccer, hockey, baseball, or football. Elections, for example. Although the 2020 presidential election was noteworthy in part for the admirable willingness of election officials in Georgia, most prominently, and elsewhere to make factual decisions about vote counts that departed from their party affiliations and presidential electoral preferences, the behavior of the Georgia officials was, regrettably, more the exception than the rule.2 In the extremely close election between Frank McCloskey and Richard McIntyre for the US House of Representatives in Indiana’s Eighth Congressional District in 1984, for example, both election officials and members of the House of Representatives made decisions that closely tracked their party affiliations.3 History provides many other examples of the same phenomenon.4 The 2020 election, in more ways than one, was far more exceptional than representative. And we should not forget the millions of people who believe that former president Barack Obama was born in Kenya, which he was not; that former president George W. Bush knew in advance of the September 11, 2001 attacks, which he did not; and that the AIDS epidemic was the product of a government and pharmaceutical industry conspiracy, which it was not.5
Whether it be football, tennis, elections, or anything else, the phenomenon of seeing the factual world in ways that reflect preferences for how that world ought to be, rather than how it actually is, has been amply documented by psychologists for generations, typically under the label “motivated reasoning,” and sometimes with the catchier label “myside bias.” The basic idea—we can think of it as fallaciously deriving is from ought—was first developed in the research of the social psychologist Ziva Kunda.6 Subsequently a broad and deep research program has followed.7 And although most of the primary research has been done by experimental psychologists, more recently we have seen political scientists, academic lawyers, and others adapting (and sometimes relabeling) the basic idea to fit their own domain-specific interests and purposes.8 But throughout the domains of application, the basic idea is that both the receipt of evidence and the evaluation of that evidence are heavily influenced by normative preferences about what it would be good for the evidence to show. And that is so even when the evidence is about a matter of hard fact—where Barack Obama was born, for example—and as to which the evidence is overwhelmingly clear.
Understanding motivated reasoning is crucial for understanding when and how evidence matters in the world, and when and how it does not. The “motivated” part of motivated reasoning is straightforward, even if lamentable, but the “reasoning” part mixes together four different phenomena, which it is important to separate. All four are about the world of evidence, and we can label them motivated production, motivated transmission, motivated retrieval, and motivated processing.
Motivated production is about the way in which evidence is generated—produced—in the first place. The Sherlock-Holmes-with-his-magnifying-glass image notwithstanding, much of the evidence we use is not just sitting there waiting to be found. Rather, the evidence is produced—or created—often by motivated parties with a preference for a particular seemingly evidence-based outcome. This is most obvious in a trial in court, where all of the evidence is supplied by advocates for one side or another. But the phenomenon also exists outside of court. An especially vivid example, documented by Naomi Oreskes and Erik Conway, comes from the story of how the tobacco industry created—and did not simply find and did not simply present—the great bulk of the evidence purporting to demonstrate the safety of smoking.9 By funding research and by funding and hiring researchers, the industry was able to stock (and thus stack) the pool of evidence with a large amount of data, and a large number of studies and conclusions, that supported their preferred outcome. Oreskes and Conway also reveal, not surprisingly, the fossil fuel industry’s role in creating much of the evidentiary foundation for climate-change skepticism and denial.
The phenomenon of creating a skewed pool of evidence is not principally about fabricating evidence for false conclusions. Nor is it about funding bogus studies purporting to demonstrate something like the effectiveness of phrenology or the empirical soundness of astrology. Instead, the phenomenon of motivated production starts from the premise that for many issues there is potentially at least some evidence supporting both sides of a contested conclusion regarding the facts. There is evidence for and against the conclusion that Leonardo himself painted Salvator Mundi, and there is at least some evidence downplaying the harmfulness of cigarettes and the dangers of climate change. In these latter cases, there is vastly more evidence for the opposite conclusion—that cigarettes are very harmful and that climate change is heading toward catastrophe—but the ratio of evidence for one conclusion rather than its negation can be upset if sponsorship, funding, and much else aims at inflating that proportion of the evidentiary universe on some question occupied by the minority position. Things that are not even close can be made to appear close if motivated, resourceful, and resourced actors can make a minority position look far more respectable than it actually is.
A good example of this phenomenon comes from the process of litigation, as documented and theorized by Sheila Jasanoff.10 The typical lawsuit, including lawsuits claiming liability for the manufacture and distribution of harmful products, involves two parties. One is the plaintiff, or the class of plaintiffs. The other is the defendant. And each is given the opportunity to present evidence. It is in the nature of the litigation process, however, that each side is given roughly the same opportunity to present evidence, regardless of the actual state of the evidence or the judge’s view of it. Judges are not permitted to say that the plaintiff will get to present four times as many witnesses as the defendant because of the judge’s belief that the plaintiff’s evidence is four times stronger than the defendant’s. As a result, the structure of litigation is conducive to making an unequally sound array of evidence appear more equally sound than it actually is. The same thing happens with congressional hearings, newspaper coverage, and any other setting in which the design of that setting, or the norms of the domain in which the setting exists, demands and allows evidence to be produced and presented in a proportion that diverges from the intrinsic soundness of the evidence.
On a wide range of issues, therefore, motivated parties—especially motivated parties with financial resources, social cachet, and political power—can populate the field of available evidence by sponsoring, in the broadest sense, the creation of evidence in support of their preferred outcome. In so doing, these motivated parties have engaged in the process of motivated production, a process that influences the makeup of the array of evidence available to anyone seeking evidence on some question.
Not only can evidence be created, it must also be communicated or transmitted to those who might use it. And communication—what is communicated, what is not communicated, and how what is communicated is communicated—is also influenced by those with particular values and goals. We can label this as motivated transmission. Here the motivated parties are not only the usual suspects—tobacco companies with advertising agencies, oil refiners with public relations firms, lifestyle purveyors with their influencers, advocacy organizations with their own publications and press releases, for example—but also the seemingly disinterested media itself. Think again of horses and zebras. Horses in the neighborhood are not newsworthy, but zebras are. And so the statistically unlikely presence of zebras makes news. And whether it be “man bites dog” or “if it bleeds, it leads,” a host of venerable canards of the news business supports the conclusion that the facts of which we are made aware, and the evidence for those facts, are not only often provided by motivated actors, but are filtered and transmitted by other actors with motivations of their own. The field of evidence is mediated by motivated mediators, and the stock of evidence that is “out there” comes to us, and to anyone interested in consulting or following the evidence, in a way that bears the influence of all of those with interests of one form or another.
Often the motivations of motivated transmitters will incline in opposed directions, and the effects may be salutary. The dangers of GMOs may appeal to some of the mainstream press and to various advocacy groups, but the producers of genetically modified products are hardly without resources or power of their own. Evidence about acts of police brutality will be transmitted by victims, their lawyers, and activists, but the police possess their own communicative resources. With respect to at least some topics and controversies, the clash of slants may help the consumers of evidence appreciate and evaluate the actual weight of the evidence. But hardly always.
Here is an uncontroversial fact about evidence: There is a lot of it. And much more with the rise of the internet and social media. This observation might seem as silly as it is true, but it exposes an important way in which motivated reasoning manifests itself. When we are faced with an overwhelming barrage of evidence—as if we were drinking from a fire hose, to cite a common metaphor—we are of necessity forced to select some but not all of even the relevant evidence. The limitations of time, effort, and mental space compel us to be selective, and the well-documented lesson from the motivated reasoning research program is that not only do we often engage in biased perception, but we also just as frequently engage in biased assimilation—selecting the evidence that reinforces our existing beliefs, and our preferences—and ignoring or at least slighting the evidence that goes the other way.11 In the typology I have offered here, we can call this motivated retrieval.
Philosophers of science, especially those of a more skeptical bent, have long recognized a related phenomenon, usually labeled theory-laden observation.12 Because it is impossible simply to observe every fact about the world, we observe that world through the lens of our theories and our explanations. And thus, it is argued, not only how we observe but also what we observe is determined by our background theories or presuppositions. Moreover, and more relevantly here, it is but a short step from theory-laden observation to value-laden observation.13 There are active debates about whether observation is necessarily value-laden.14 But the far more modest claim, and one that is sufficient for our purposes here, is that observation, especially outside of the world of science, is often value-laden even if it is not necessarily so. And this modest claim warns us that the selection of evidence from all of the evidence available is commonly heavily influenced by the values and preferences of the selector and the evaluator.
Sometimes this selectivity manifests itself in people selecting evidence that reinforces their prior beliefs and ignoring evidence that challenges those beliefs. This is the well-known and well-researched phenomenon of confirmation bias, and it is not always a matter of first-order preferences.15 The physician who initially diagnoses a set of symptoms as indicating Lyme disease, to take an example from Chapter 2, might then discover another symptom or other indication that would be inconsistent with Lyme disease. Although we would hope that the physician would be willing to revise the initial diagnosis in light of new evidence, this would be inconsistent, regrettably, with how people often behave. Commonly, although by no means universally, people resist new evidence that challenges their earlier conclusions, while at the same time they welcome or even search for new evidence that confirms what they have already concluded. As a result, the process of receiving new evidence turns out to be systematically skewed in the direction of confirmatory evidence and against refuting evidence. It might be nice if we were able to say, “The evidence says that …,” but it turns out that all too often what the evidence says is partial in two senses: in the sense that it is not all of the evidence, and in the sense that the evidence we see as relevant tilts in a direction that less-partially-selected evidence might not.
A powerful example of motivated retrieval comes from studies focusing on political and ideological polarization. Cass Sunstein, for example, has documented and analyzed the way in which providing information to polarized parties increases polarization.16 People will pick the evidence that supports their own pole of the polarization, and thus the provision of additional evidence often increases rather than decreases polarization as people select—cherry-pick—only the evidence that supports what they had previously believed, and ignore the remainder.17 An even more depressing version of this phenomenon comes from research on the effects of attempting to counteract false beliefs with accurate evidence. It would be nice if such counteracting efforts were effective, and they may well be, but there are also studies indicating that the counteracting evidence, by accentuating the salience of the evidence to be counteracted, actually turns out often to have exactly the opposite of its intended effect.18
Finally, we have motivated processing, which comes closest to the traditional idea of motivated reasoning. Even when confronted with all of the relevant evidence, people will often see that evidence in a way that reinforces their own prior beliefs. Sometimes they will simply reject whatever challenges those beliefs, and sometimes they will spin the data—no matter how unspinnable it might appear to an unbiased observer—in ways that do not force them to reject what they had previously believed or accept conclusions that are inconsistent with their preferred outcomes.
The motivated production of evidence, then filtered through the motivated transmission of evidence, then selectively retrieved in light of the retriever’s motivations and preferences, and then evaluated in light of those preferences, makes for an alarming picture. But we should not let that alarm lead us into the very trap it describes. If we believe in the importance and power of evidence, we can use that belief as a potential antidote to the motivation that leads us to look in a motivated, and thus skewed, way at that evidence. But we also have evidence of the pervasive phenomenon of motivated reasoning. So if, again, we believe in the importance and power of evidence, then we should recognize the evidence of motivated reasoning. And we should not discard our belief in the importance and power of evidence when it is time to examine the evidence of how evidence is actually used by real people making real decisions.
It may be helpful to distinguish two forms of motivated reasoning, even while recognizing that the border between the two is both fuzzy and porous. There is a distinction between motivated reasoning when there is evidence both supporting and refuting some factual proposition, on the one hand, and the motivated rejection of evidence when all or almost all of the evidence points in one direction, on the other. The former occurs when, in more or less good faith, and with more or less of an open mind, we look at the evidence for and against some conclusion and discover that the evidence, even when filtered through the applicable burden of proof, leaves us genuinely uncertain. Under those circumstances what we can call “soft” motivated reasoning kicks in to lead us to choose the option that pleases us rather than the one that does not, even if there is some evidence for both, and even if the evidence is somewhat in equipoise. Soft motivated reasoning, by definition, is at work only when there is what we might think of as decent evidence both supporting and rejecting some conclusion, and it is perhaps too much to imagine that it could ever be otherwise.
By contrast, “hard” motivated reasoning occurs when strong or overwhelming evidence in favor of some proposition is rejected by those who do not like the implications of the evidence. These days the claim that there was widespread fraud in the 2020 presidential election is the most salient example; the claim that global warming is not real runs a close second. In these cases, a hypothetical neutral observer would see all or almost all of the evidence going in one direction. But there are people who, for reasons of outcome preference, simply reject the evidence. We should lament the existence of hard motivated reasoning—and societies, by education or otherwise, should do much more than they now do to diminish its frequency and consequences. But at the end of the day the analysis of evidence is not going to do very much, if anything, for those for whom evidence does not matter in the first instance. We can wish that evidence would persuade people to relinquish their beliefs that the Parkland school shootings were a “false flag” plot by the political left or that no airplane crashed into the Pentagon on September 11, 2001, to take two of the claims of Representative Marjorie Taylor Greene.19 But it is unlikely that more evidence can persuade people who from the outset are simply uninterested in the evidence.
The most memorable character in Voltaire’s 1759 satirical novel Candide is Dr. Pangloss, who almost three centuries later survives as the character who gave us the adjective “Panglossian.” Dr. Pangloss was an eternal optimist, and to be Panglossian now is to see the world and the future through rose-colored glasses. Even more particularly, to be Panglossian in a world of conflicting facts, competing goals, and inconsistent principles is to suppose that all of these conflicts are illusory, and that what seem to be inconsistent facts, goals, and principles can all fit together such that no conflict exists.20 Some people might suppose, for example, that freedom of speech and public order are inconsistent, forcing us to choose between them in particular contexts. Not so for the Panglossian. The threat to public order is not really an exercise of free speech, or the exercise of free speech does not really threaten public order. Either way, the conflict disappears.
Dr. Pangloss is highly relevant to questions of evidence, and in particular to the questions of motivated reasoning we are considering in this chapter. If the evidence appears to point to one conclusion, but that conclusion is in tension with someone’s values, aspirations, or preferences, Panglossianism counsels the person so conflicted to adapt their preferences to fit the evidence, or to adapt the evidence to fit their preferences. Either strategy eliminates the conflict, but we know that it is difficult for people to change their values and their preferences. They can avoid having to do this by adapting the evidence, or at least their perception of the evidence, and then, for them, the conflict disappears.21
This phenomenon has been documented by psychologists who have identified the human desire for cognitive consistency.22 The failure to achieve cognitive consistency—so familiar that the label has penetrated popular consciousness—is cognitive dissonance.23 Although this latter label is often tossed about by those who know a little, but only a little, about psychology, cognitive dissonance is the carefully documented phenomenon by which people tend to avoid having to negotiate incompatible ideas, principles, goals, desires, and even facts. It is no surprise that people have a “need to see the world as structured, consistent, and orderly.”24 And this is not only about how people see the world. It is about how people see much of what exists within and about their lives. If I would like to lose weight and I like ice cream, I can convince myself that ice cream is not all that fattening, or that I am not really that fat. Either way the inconsistency has evaporated, even if it took some revision of my view of the facts to produce that outcome.
In much the same way, motivated reasoning, which has as many causes as it has effects, is, at least in part, a product of this kind of dissonance avoidance. Insofar as one aspect of reasoning is reaching factual conclusions, then reaching factual conclusions that are consistent with our factual preferences is a good way of doing it. Motivated reasoning, we might say, is inconsistent with evidentiary honesty, with following the evidence where it leads. But the existence of motivated reasoning is itself the product of an evidentiary inquiry about human behavior, and if we think that motivated reasoning is something to be avoided, which it is, then we should make sure that we are not engaged in motivated reasoning when we downplay the extent or the consequences of motivated reasoning.
On February 6, 2021, on the eve of the second impeachment trial of by-then-former-president Donald Trump, Brian Schatz, a Democratic senator from Hawaii, observed, “It’s not clear to me that there is any evidence that will change anyone’s mind.”25 In the context of an impeachment trial—which might not ever be very much about evidence, and which is especially not about evidence in a situation in which the decision makers, the members of the Senate, believe they have first-person experiences flowing from having themselves been on the premises during the January 6, 2021, invasion of the Capitol building—Senator Schatz’s observation seems depressingly self-evident. Impeachment trials are not now and never have been, at least at the presidential level, events in which evidence is presented and carefully considered.
Somewhat more disturbing, however, is the fact that Schatz’s statement may not only be about impeachment, and it may not only be about proceedings in the United States Senate. What Schatz said may, regrettably, characterize much of human behavior and human decision making, processes in which making up one’s mind all too often either precedes or precludes consultation of the evidence, or distorts the process of selecting and evaluating that evidence. There is some indication that those who are more politically sophisticated are more likely—not less likely—to engage in politically motivated reasoning and evidence selection, perhaps because they are more attuned to looking for flaws in the arguments or positions with which they disagree, but less willing to search for or recognize the flaws in their own positions, their own arguments, and their own evidence.26
In an important way, this final chapter on motivated reasoning could have been part of every chapter of this book—a qualification of every chapter of this book. Every chapter of this book has been premised on the view that for some people, at some times, and on some issues, evidence matters. This book has been written for those for whom evidence matters, and for when it matters to them. For those for whom evidence does not matter, no amount of evidence, and no amount of the analysis of evidence, is going to make a difference.