9 The Case for the Social Sciences
In the last two chapters, we witnessed several examples of failure to emulate the scientific attitude. Whether because of fraud, denialism, or pseudoscience, many who purport to care about evidence fail to live up to the highest standards of empirical inquiry. One key issue to consider here is motivation. Some of these failures occur because the people involved are not really aspiring to be scientific (perhaps because they care about something like ideology, ego, or money ahead of scientific integrity) and just want a shortcut to scientific glory.
But what about those people who work in fields that do want to become more scientific—and are willing to work hard for it—but just do not fully appreciate the role that the scientific attitude might have in getting them there? In chapter 6, we saw the power of the scientific attitude to transform a previously unscientific field into the science of medicine. Is the same path now open to the social sciences? For years, many have argued that if the social sciences (economics, psychology, sociology, anthropology, history, and political science) could emulate the “scientific method” of the natural sciences, they too could become more scientific. But this simple advice faces several problems.
Challenges for a Science of Human Behavior
There are many different ways of conducting social inquiry. Social psychologists have found it convenient to rely on controlled experiments (and behavioral economists are just beginning to catch up with them), but in other areas of social investigation it is just not possible to run the data twice.1 In sociology, we have case studies. In anthropology, we have fieldwork. And, up until recently, neoclassical economists disdained the idea that reliance on simplifying assumptions undercut the applicability of their theoretical models to human behavior. Of course, in this way, the social sciences are not entirely different from the natural sciences. With Newtonian physics as a model, it is easy to overlook the fact that the study of nature too has its methodological diversity: geologists cannot run controlled experiments, and biologists are often prevented from making precise predictions. Nonetheless, ever since the Logical Positivists (and Popper) claimed that what was special about science was its method, many have felt that the social sciences would be better off if they tried to model themselves on the type of inquiry that goes on in the natural sciences.
This idea has caused some pushback over the years from both social scientists and philosophers of social science, who have held that one cannot hope to do social science in the same way as natural science. The subject matter is just too different. What we want to know about human behavior is often at odds with the desire merely to reduce actions to their causal forces. So if it were true that there is really only one way to do science—one defined by the unique methodology of natural science (if not "scientific method" itself)—it would be easy to see why some might lose hope in the idea that social science could become more scientific.
In earlier work, I spent a good deal of effort trying to identify flaws in those arguments that held there was a fundamental barrier to having a science of human action because of the complexity or openness of its subject matter, the inability to perform controlled social experiments, and the special problems created by subjectivity and free will in social inquiry.2 I continue to believe that my counterarguments are sound, primarily because complexity and openness are also a part of natural scientific inquiry, and the other alleged barriers have less effect on the actual performance of social inquiry than their critics might assume. I also believe that these problems are overblown; if they were truly a barrier to inquiry, they would likely show that much of natural science should not work either. But for the time being, I wish to focus on a more promising path for defending the possibility of a science of human behavior, for I now realize that what has been missing from the social sciences all these years is not proper method, but the right attitude toward empirical evidence.3
Like Popper, I have never believed that there is such a thing as scientific method, but I have held that what makes science special is its methodology.4 Popper, of course, famously held that the social sciences were not falsifiable, so they could not be scientific.5 I have argued against this and held that in every meaningful way there could be methodological parity between natural and social inquiry. That may be, but it overlooks one crucial point. What makes science special—both natural and social—is not just the way that scientists conduct their inquiry, but the attitude that informs their practices.
Too much social research is embarrassingly unrigorous. Not only are the methods sometimes poor; much more damning is the nonempirical attitude that lies behind them. Many allegedly scientific studies on immigration, guns, the death penalty, and other important social topics are infected by their investigators' political or ideological views, so that it is all but expected that some researchers will discover results squarely in line with liberal political beliefs, while others will produce conservative results directly opposed to them. A good example here is the question of whether immigrants "pay their own way" or are a "net drag" on the American economy. If this is truly an empirical question (and I think that it is), then why can I cite five studies showing that immigrants are a net plus, five more showing that they are not, and probably predict which studies came out of which research centers and by whom they were written?6 I am not here accusing anyone of fraud or pseudoscience. These are purportedly rigorous social scientific studies performed by well-respected scholars—it is just that their findings about factual matters flatly contradict one another. This would not be tolerated in physics, so why is it tolerated in sociology? Is it any wonder that politicians in Washington, DC, are so skeptical about basing their policies on social scientific work and instead cherry-pick their favorite studies to support their preferred ideology?
The truth is that such questions are open to empirical study, and it is possible for social science to study them scientifically. There are right and wrong answers to our questions about human behavior. Do humans experience a "backfire effect" when exposed to evidence that contradicts their opinion on an empirical (rather than a normative) question, such as whether there were weapons of mass destruction in Iraq or whether President George W. Bush proposed a complete ban on stem cell research? Is there such a thing as implicit bias, and if so, how can it be measured? Such questions can be, and have been, studied scientifically.7 Although social scientists may continue to disagree (and indeed, this is a healthy sign in ongoing research), their disagreements should focus on the best way to investigate these questions, not on whether the answers produced are politically acceptable. Having the scientific attitude toward evidence is just as necessary in the study of human behavior as it is in the study of nature.
Yet there are numerous problems in contemporary social scientific research:
(1) Too much theory: A number of social scientific studies propose answers that have not been tested against evidence. The classic example here is neoclassical economics, where a number of simplifying assumptions—perfect rationality, perfect information—resulted in beautiful quantitative models that had little to do with actual human behavior.
(2) Lack of experimentation/data: Except for social psychology and the newly emerging field of behavioral economics, much of social science still does not rely on experimentation, even where it is possible. For example, it is sometimes offered as justification for putting sex offenders on a public database that doing so reduces the recidivism rate. This claim must be measured, though, against what the recidivism rate would have been absent the Sex Offender Registry Board (SORB), a counterfactual that is difficult to estimate and has produced varying answers.8 This exacerbates the difficulty in (1), whereby favored theoretical explanations are accepted even when they have not been tested against any experimental evidence.
(3) Fuzzy concepts: In some social scientific studies, the results are questionable because of the use of "proxy" concepts for what one really wishes to measure. Recent examples include measuring "warmth" as a proxy for "trustworthiness" (see details in the next section), which can lead to misleading conclusions.
(4) Ideological infection: This problem is rampant throughout the social sciences, especially on topics that are politically charged. One recent example is the bastardization of empirical work on the deterrence effect of capital punishment or the effectiveness of gun control on mitigating crime. If one knows in advance what one wants to find, one will likely find it.9
(5) Cherry picking: As we've seen, the use of statistics grants researchers multiple "degrees of freedom," and these are the most likely of all to be abused. In studies on immigration, for instance, a great deal of the difference between them results from alternative ways of counting the "costs" incurred by immigration. This is obviously also related to (4) above. If we know our conclusion, we may shop for the data to support it.
(6) Lack of data sharing: As Trivers reports, there are numerous documented cases of researchers failing to share their data in psychological studies, despite a requirement from APA-sponsored journals to do so.10 When data were later analyzed, errors were found most commonly in the direction of the researcher’s hypothesis.
(7) Lack of replication: As previously discussed, psychology is undergoing a reproducibility crisis. One might reasonably argue that the initial finding that nearly two-thirds of psychology studies were irreproducible was overblown (see Gilbert et al. 2016), but it is nonetheless shocking that for most studies replication is never even attempted. This leaves the door open for errors to slip through undetected.
(8) Questionable causation: It is gospel in statistical research that "correlation does not equal causation," yet some social scientific studies continue to highlight provocative results of questionable value. One recent sociological study, for instance, found that matriculating at a selective college was correlated with parental visits to art museums, without explicitly suggesting that both were likely artifacts of parental income.11 (A minimal simulation of this sort of confounding appears immediately after this list.)
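To see how a lurking variable can manufacture just such a correlation, consider a minimal simulation sketch (in Python, with made-up numbers; nothing below comes from the study itself). A single confounder, parental income, drives both museum visits and selective-college matriculation, and the two outcomes end up correlated even though neither causes the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: parental income (standardized units).
income = rng.normal(size=n)

# Income drives BOTH outcomes; museum visits do not cause admission,
# and admission does not cause museum visits.
museum_visits = (income + rng.normal(size=n)) > 0
selective_college = (income + rng.normal(size=n)) > 0

# Yet the two outcomes are substantially correlated anyway.
r = np.corrcoef(museum_visits, selective_college)[0, 1]
print(f"correlation(museum visits, selective college) = {r:.2f}")  # ~0.33

# Conditioning on the confounder shrinks the spurious association.
high = income > 0
r_high = np.corrcoef(museum_visits[high], selective_college[high])[0, 1]
print(f"correlation among high-income families = {r_high:.2f}")  # smaller
```

Any study that reports the raw correlation without conditioning on income (or some proxy for it) invites exactly the misreading described in (8).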
All of these problems can also be found to some degree in natural scientific work. Some of the other problems identified in earlier chapters (p-hacking, problems with peer review) can be present in social scientific work as well. The issue is not so much that these difficulties are unique to social inquiry, as that some of them are especially prevalent there. Even if natural science suffers from them as well, the problems for social science are proportionally greater.
The problem with social science is not that it is failing to follow some prescribed method, or even to embrace certain scientific procedures, but that a number of its practices are not yet instantiated at the level of group practice in a way that demonstrates a discipline-wide commitment to the scientific attitude. Fuzzy concepts or errors from questionable causation may not be such a big problem if one can count on one's colleagues to catch them, but in many instances—in an environment in which data are not shared and replication is not the norm—too many errors slip by. Social science no less than natural science needs to embrace the scientific attitude toward evidence and realize that the only way to settle an empirical dispute is with empirical evidence. The amount of opinion, intuition, untested theory, and ideology in social scientific research should be recognized for the embarrassment it is. Just as we now look back on bare-handed surgery and cringe, may we someday ask "Why didn't someone test that hypothesis?" in social science. Nothing keeps people honest like public scrutiny. We need more data sharing and replication in the social sciences. We need better peer review and real scientific controls. And we need to recognize that it is shameful that until recently much of social science did not even attempt to be experimental. Compared to the old neoclassical model in economics, the new behavioral one is a breath of fresh air. And this is all made possible by embracing the scientific attitude.
If social scientists were more committed at both the individual and group level to reliance on evidence and constructing better procedures for capitalizing on this, the social sciences would be better off. In this, they can follow the same path that was earlier laid out by medicine. Whether one is performing a clinical trial or doing fieldwork, the appropriate attitude to have in social inquiry should be that previously noted by Emile Durkheim: “When [we] penetrate the social world … [we] must be prepared for discoveries which will surprise and disturb [us].”12 We must abandon the belief that just because we are human we already basically understand how human behavior works. Where possible, we need to perform experiments that challenge our preconceptions, so that we can discover how human action actually works, rather than how our mathematical models and high theory tell us it should.
And all of this applies just as much to qualitative work as to quantitative. While it is true that in social science some evidence may be irreducibly qualitative (see Clifford Geertz's work on "thick description" in his book The Interpretation of Cultures [Basic Books, 1973]), one must still be concerned with how to measure it. Indeed, in the case of qualitative work, we must be especially on guard against hubris and bias. Our intuitions about human nature are likely no more profound than eighteenth-century physicians' were about infection.13 The data can and should surprise us. Just because a result "feels right" does not mean it is accurate. Cognitive bias and all the other threats to good scientific work in the study of nature are no less a threat to those who study human behavior. The revolution in social science may be attitudinal rather than methodological, but this does not mean it should not reach the four corners of how we have been doing our inquiry.
For years, many have thought that one could improve social science by making it more “objective.” The Logical Positivists in particular held fast to the fact–value distinction, which said that scientists should be concerned only with their results and not with how these might be used. But they were wrong. While it is true that we should not let our hopes, wishes, beliefs, and “values” color our inquiry into the “facts” about human behavior, this does not mean that values are unimportant. Indeed, as it turns out, our commitment to the scientific attitude is an essential value in conducting scientific inquiry. The key to having a more rigorous social science is not scientific method, but the scientific attitude.
A Way Forward: Emulating Medicine
If we think back to the state of medicine at the time of Semmelweis, the analogies with social science are compelling. Knowledge and procedures were based on folk wisdom, intuition, and custom. Experiments were few. When someone had a theory, it was thought to be enough to consider whether it "made sense," even if there was no empirical evidence in its favor. Indeed, the very idea of attempting to gather evidence to test a theory flew in the face of the belief that medical practitioners already knew what was behind most illnesses. Despite the shocking ignorance and backward practices of medicine throughout most of its history, theories were abundant and ideas were rarely challenged or put to the test. This is what was so revolutionary about Semmelweis. He wanted to know whether his ideas held up by testing them in practice. He understood that knowledge accumulates as incorrect hypotheses are eliminated for lack of fit with the evidence. Yet his approach was wholeheartedly resisted by virtually all of his colleagues.
Medicine at the time did not yet have the scientific attitude. Does social science now? In some cases it does, but the problem is that even where good work is being done, many feel free to ignore it. In a society in which law enforcement continues to rely on eyewitness testimony and nonsequential criminal lineups, despite the alarmingly high rate of false positives with these methods, we have to wonder whether this is just another instance where practice trails theory.14 Public policy on crime, the death penalty, immigration, and gun control is rarely based on actual empirical study. Yet at least part of the problem also seems due to the inconsistent standards that have always dogged social scientific research. This has handicapped the reputation of the social sciences and made it difficult for good work to get noticed. As we have seen, when so many studies fail to be replicated, or draw different conclusions from the same set of facts, it does not instill confidence. Whether this is because of sloppy methodology, ideological infection, or other problems, the result is that even if there are right and wrong answers to many of our questions about human action, most social scientists are not yet in a position to find them. It is not that no work in social science is rigorous; it is that when policy makers (and sometimes even other researchers) are not sure which results are reliable, the status of the entire field is driven down.
Medicine was once held in similarly low repute, but it broke out of its prescientific “dark ages” because of individual breakthroughs that became the standard for group practice and some degree of standardization of what counted as evidence. To date, the social sciences have yet to complete their evidence-based revolution. We can find some examples today of the scientific attitude at work in social inquiry that have enjoyed some success, but there has not yet been a discipline-wide acceptance of the notion that the study of human behavior needs to be based on theories and explanations that are relentlessly tested against what we have learned through experiment and observation. As in prescientific medicine, too much of today’s social science relies on ideology, hunches, and intuition.
In the next section, I will provide an example of what a social science that fully embraced the scientific attitude might look like. Before we get there, however, it is important to consider one remaining issue that many have felt to be an insuperable barrier to the pursuit of a science of human behavior. Some have said that social science is unique because of the inherent problem of humans studying other humans—that our values will inevitably interfere with any “objective” empirical inquiry. This is the problem of subjectivity bias. Yet it is important to remember that in medicine we have an example of a field that has already solved this problem and moved forward as a science.
In its subject matter, medicine is in many ways like social science. We have irreducible values that will inevitably guide our inquiry: we value life over death, health over disease. We cannot even begin to embrace the “disinterested” pose of the scientist who does not care about his or her inquiry beyond finding the right answer. Medical scientists desperately hope that some theories will work because lives hang in the balance. But how do they deal with this? Not by throwing up their hands and admitting defeat, but rather by relying on good scientific practices like randomized double-blind clinical trials, peer review, and disclosure of conflicts of interest. The placebo effect is real, for both patients and their doctors. If we want a medicine to work, we might subtly influence the patient to think that it does. But whom would this serve? When dealing with factual matters, medical researchers realize that influencing their results through their own expectations is nearly as bad as fudging them. So they guard against the hubris of thinking that they already know the answer by instituting methodological safeguards. They protect what they care about by recognizing the danger of bias.
The mere presence of values or caring about what you study does not undercut the possibility of science. We can still learn from experience, even if we are fully invested in hoping that one medicine will work or one theory is true, as long as we do not let this get in the way of good scientific practice. We can still have the scientific attitude, even in the presence of other values that may exist alongside it. Indeed, it is precisely because medical researchers and physicians recognize that they may be biased that they have instituted the sorts of practices that are consonant with the scientific attitude. They do not wish to stop caring about human life; they merely want to do better science so that they can promote health over disease. In fact, if we truly care about human outcomes, it is better to learn from experience, as the history of medicine so clearly demonstrates. It is only when we take steps to preserve our objectivity—instead of pretending that this is not necessary or that it is impossible—that we can do better science.
Like medicine, social science is subjective. And it is also normative. We have a stake not just in knowing how things are but also in using this knowledge to make things the way we think they should be. We study voting behavior in the interest of preserving democratic values. We study the relationship between inflation and unemployment in order to mitigate the next recession. Yet unlike medicine, so far social scientists have not proven to be very effective in finding a way to wall off positive inquiry from normative expectations, which leads to the problem that instead of acquiring objective knowledge we may only be indulging in confirmation bias and wishful thinking. This is the real barrier to a better social science. It is not just that we have ineffective tools or a recalcitrant subject matter; it is that at some level we do not yet have enough respect for our own ignorance to keep ourselves honest by comparing our ideas relentlessly against the data. The challenge in social science is to find a way to preserve our values without letting them interfere with empirical investigation. We need to understand the world before we can change it. In medicine, the answer was controlled experimentation. What might it be in social science?
Examples of Good and Bad Social Science
Even when social scientists do "research," it is often not experimental. This means that a good deal of what passes for social scientific "evidence" is based on extrapolations from surveys and other data sets that may have been gathered by other researchers for other purposes. But this can lead to various methodological problems, such as confusing correlation with causation, the use of fuzzy concepts, and some of the other weaknesses we spoke about earlier in this chapter. It is one thing to say that "bad" social science is all theory and no evidence, infected with ideology, does not rely enough on actual experimentation, is not replicable, and so on; it is another to see this in action.
One example of poorly conducted social scientific research can be found in a 2014 article by Susan Fiske and Cydney Dupree entitled "Gaining Trust as Well as Respect in Communicating to Motivated Audiences about Science Topics," which was published in the Perspectives section of the Proceedings of the National Academy of Sciences.15 In this study, the researchers set out to examine an issue that has great importance for the defense of science: whether the allegedly low trustworthiness of scientists may be undermining their persuasiveness on factual questions such as climate change. Does it come as a surprise that scientists are seen as untrustworthy? Fiske and Dupree purport to have empirical evidence for this.
In their study, the researchers first conducted an online poll of American adults to ask them to list typical American jobs. The researchers then chose the most commonly mentioned forty-two jobs, which included scientists, researchers, professors, and teachers.16 In the next step, they polled a new sample to ask about the “warmth” versus “competence” of practitioners of these professions. Here it was found that scientists rated highly on expertise (competence) but relatively low on warmth (trustworthiness). What does warmth have to do with trustworthiness? Their hypothesis was that trustworthiness is positively correlated with warmth and friendliness. In short, if someone is judged to be “on my side” then that person is more likely to be trusted. But whereas there is empirical work to show that if someone is judged to be “like us” we are more likely to trust that person,17 it is a great leap to then start using “warmth” and “trustworthiness” as interchangeable proxies for one another.
First, one should pay attention to the leap from saying (1) “if X is on my side, then X is more trustworthy” to saying (2) “if X is not on my side, then X is less trustworthy.” By elementary logic, we understand that statement (2) is not implied by statement (1), nor vice versa. Indeed, the leap from (1) to (2) is the classic logical error of denying the antecedent. This means that even if there were empirical evidence in support of the truth of statement (1), the truth of statement (2) is still in question. Nowhere in Fiske and Dupree’s article do they cite any evidence in support of statement (2), yet the biconditional link between “being on my side” and “being trustworthy” is the crux of their conclusion that it is methodologically sound to use “warmth” as a proxy to measure “trustworthiness.”18 Isn’t it conceivable that scientists could be judged as not warm yet nonetheless trustworthy? Indeed, wouldn’t it have been more direct if the researchers had simply asked their subjects to rate the trustworthiness of various professions? One wonders what the result might have been. For whatever reason, however, the researchers chose not to take this route and instead skip blithely back and forth between measurements of warmth and conclusions about trust throughout their article.
[Scientists] earn respect but not trust. Being seen as competent but cold might not seem problematic until one recalls that communicator credibility requires not just status and expertise (competence) but also trustworthiness (warmth). … Even if scientists are respected as competent, they may not be trusted as warm.19
This is a classic example of the use of fuzzy concepts in social scientific research, where dissimilar concepts are treated as interchangeable, presumably because one of them is easier to measure than the other. In this case, I am not convinced of that, because “trust” is hardly an esoteric concept that would be unreportable by research subjects, but we nonetheless find in this article a conclusion that scientists have a “trust” problem rather than a “warmth” problem, based on zero direct measurement of the concept of trust itself.20
This is unfortunate, because the researchers' own study would seem to give reason to doubt their own conclusions. In a follow-up, Fiske and Dupree report that as a final step they singled out climate scientists for further review, and here polled a fresh sample of subjects with a slightly different methodology for the measurement of trust. Instead of allegedly measuring "trust," they now sought to measure "distrust" through the use of a seven-item scale that included things like perceptions of "motive to lie with statistics, complicate a simple story, show superiority, gain research money, pursue a liberal agenda, provoke the public, and hurt big corporations."21 The researchers were surprised to find that climate scientists were judged to be more trustworthy than scientists in general (measured against their previous poll). What might be the reason for this? They offer the hypothesis that the scale was different (which raises the question of why they chose to use a different scale), but also float the idea that climate scientists perhaps had a more "constructive approach to the public, balancing expertise (competence) with trustworthiness (warmth), together facilitating communicator credibility."22 I find this to be a questionable conclusion, for in the final part of the study there was no measurement at all of the "warmth" of climate scientists, yet the researchers once again feel comfortable drawing parallels between trustworthiness and warmth.23
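Incidentally, the logical gap between statement (1) ("if X is on my side, then X is more trustworthy") and statement (2) ("if X is not on my side, then X is less trustworthy") can be checked mechanically. Below is a minimal sketch in Python (the propositional encoding is mine, purely for illustration): it enumerates every truth assignment and finds the one case where (1) holds but (2) fails, which is exactly why denying the antecedent is invalid.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

# P = "X is on my side"; Q = "X is trustworthy"
for p, q in product([True, False], repeat=2):
    s1 = implies(p, q)          # statement (1)
    s2 = implies(not p, not q)  # statement (2)
    if s1 and not s2:
        print(f"counterexample: P={p}, Q={q} -> (1) true, (2) false")
# prints: counterexample: P=False, Q=True -> (1) true, (2) false
```

The counterexample (P false, Q true) is precisely the possibility the researchers needed to rule out: someone not seen as "on my side" who is judged trustworthy nonetheless.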
By way of contrast, I will now explore an example of good social scientific work that is based firmly in the scientific attitude, uses empirical evidence to challenge an intuitive theoretical hypothesis, and employs experimental methods to measure human motivation directly through human action. In Sheena Iyengar’s work on the paradox of choice, we face a classic social scientific dilemma. How can something as amorphous as human motivation be measured through empirical evidence? According to neoclassical economics, we measure consumer desire directly through marketplace behavior. People will buy what they want, and the price is a reflection of how much the good is valued. To work out the mathematical details, however, a few “simplifying assumptions” are required. First, we assume that our preferences are rational. If I like cherry pie more than apple, and apple more than blueberry, it is assumed that I like cherry more than blueberry.24 Second, we assume that consumers have perfect information about prices. Although this is widely known to be untrue in individual cases, it is a core assumption of neoclassical economics, for it is needed to explain how it is that the market as a whole performs the magical task of ordering preferences through prices.25 Although it is acknowledged that actual consumers may make “mistakes” in the marketplace (for instance, they did not know that cherry pie was on sale at a nearby market), the model purports to work because if they had known this, they would have changed their behavior. Finally, the neoclassical model assumes that “more is better.” This is not to say that there is no such thing as diminishing marginal utility—that last bite of cherry pie probably does not taste as good as the first one—but it is to say that for consumers it is better to have more choices in the marketplace, for this is how one’s preferences can be maximized.
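The first of these assumptions, transitivity of preferences, is easy to state precisely. Here is a minimal sketch (in Python; the pie data and function name are mine, for illustration only) that checks whether a set of pairwise preferences obeys it:

```python
def is_transitive(prefers: set[tuple[str, str]]) -> bool:
    """True if whenever a is preferred to b and b to c, a is also preferred to c."""
    return all(
        (a, c) in prefers
        for (a, b1) in prefers
        for (b2, c) in prefers
        if b1 == b2 and a != c
    )

# The "rational" consumer of the neoclassical model:
print(is_transitive({("cherry", "apple"), ("apple", "blueberry"),
                     ("cherry", "blueberry")}))  # True

# Real people sometimes cycle, violating the assumption:
print(is_transitive({("cherry", "apple"), ("apple", "blueberry"),
                     ("blueberry", "cherry")}))  # False
```

The neoclassical model simply assumes the second case away; whether actual choosers ever cycle in this way is an empirical question.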
In Sheena Iyengar's work, she sought to test the last of these assumptions (that more choice is better) directly through experiment. The stakes were high, for if she could show that this simplifying assumption was wrong, then, together with Herbert Simon's earlier work undermining "perfect information," the neoclassical model might be in jeopardy. Iyengar and her colleague Mark Lepper set up a controlled consumer choice experiment in a grocery store where shoppers were offered the chance to taste different kinds of jam. In the control condition, shoppers were offered twenty-four different choices. In the experimental condition, this was decreased to six options. To ensure that different shoppers were present for the two conditions, the displays were rotated every two hours, and other scientific controls were put in place. Iyengar and Lepper sought to measure two things: (1) how many different flavors of jam the shoppers chose to taste and (2) how much total jam they actually bought when they checked out of the store. To measure the latter, everyone who stopped by to taste was given a coded coupon, so that the experimenters could track whether the number of jams in the display affected later purchasing behavior. And did it ever. Even though the initial display of twenty-four jams attracted slightly more customer interest, the rate of later purchases was quite low when measured against those who had visited the booth with only six jams. Although shoppers at each display sampled roughly the same number of jams (thus removing tasting itself as a causal variable to explain the difference), the shoppers who had visited the display with twenty-four jams used their coupons only 3 percent of the time, whereas those who visited the display with only six jams used theirs 30 percent of the time.
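How decisive is that gap? On counts merely consistent with the reported rates (the figures below are illustrative placeholders, not the study's published data), a standard contingency-table test leaves little room for doubt. A minimal sketch:

```python
from scipy.stats import chi2_contingency

# Illustrative counts chosen only to match the reported 3% vs. 30%
# coupon-redemption rates; NOT the study's published data.
table = [
    [4, 141],   # 24-jam display: [redeemed, did not redeem]
    [31, 73],   # 6-jam display:  [redeemed, did not redeem]
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # p falls far below any conventional threshold
```

Whatever the true counts were, a roughly tenfold difference in redemption rates between conditions is not plausibly an artifact of sampling noise.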
What might account for this? In their analysis, Iyengar and Lepper speculated that the shoppers might have been overwhelmed in the first condition.26 Even when they tasted a few jams, this was such a small percentage of the total display that they perhaps felt they could not be sure they had chosen the best one, so they chose not to buy any at all. In the second condition, however, shoppers might have been better able to rationalize making a choice based on a proportionally larger sampling. As it turned out, people wanted fewer choices. Although they might not have realized it, their own behavior revealed a surprising fact about human motivation.27
Although this may sound like a trivial experiment, the implications are far-reaching. One of the most important direct applications of Iyengar and Lepper's finding was to the problem of undersaving in 401(k) plans, where new employees are customarily overwhelmed by the number of options for investing their money and so choose to put off the decision, which effectively means choosing not to invest any money at all. In Respecting Truth, I have explored a number of other implications of this research, ranging from automatic enrollment in retirement plans to the introduction of "target date" retirement funds.28 Not only is this good social science, but its positive impact on human lives has been considerable.
For present purposes, the point is this. Even in a situation where we may feel most in touch with our subject matter—human preference and desire—we can be wrong about what influences our behavior. If you ask people whether they want more or fewer choices, most will say they want more. But their actual behavior belies this. The results of experimental evidence in the study of human action can surprise us. Even concepts as seemingly qualitative as desire, motivation, and human choice can be measured by experimentation rather than mere intuition, theory, or verbal report.
Here again we are reminded of Semmelweis. How do we know before we have conducted an experiment what is true? Our intuitions may feel solid, but experiment shows that they can fail us. And this is as true in social science as it is in medicine. Having the facts about human behavior can be just as useful in public policy as in the diagnosis and treatment of human disease. Thus the scientific attitude is to be recommended just as heartily in social science as it is in any empirical subject. If we care about evidence and are willing to change our minds about a theory based on evidence, what better example might we have before us than the success of Iyengar and Lepper’s experiment? Just as the elegance of Pasteur’s experimental model allowed him to overthrow the outdated idea of spontaneous generation, could economics now move forward owing to recognition of the impact of cognitive bias and irrationality on human choice?
And perhaps this same approach might work throughout the social sciences. All of the recent work on cognitive bias, for instance, might help us to develop a more effective approach to science education and the correction of public misperceptions about climate change. If the researchers cited by Fiske and Dupree as the foundation for their work are right (which has nothing to do with the question of any purported connection between warmth and trustworthiness), then attitude is as much a part of making up our minds as evidence.
First, scientists may misunderstand the sources of lay beliefs. People are no idiots. The public’s issue with science is not necessarily ignorance. The public increasingly knows more than before about climate change’s causes. … Potential divides between scientists and the public are not merely about sheer knowledge in any simple way.
The second, often-neglected factor is the other side of attitudes. Attitudes are evaluations that include both cognition (beliefs) and affect (feelings, emotions). Acting on attitudes involves both cognitive capacity and motivation. Attitudes show an intrinsic pressure for consistency between cognition and affect, so for most attitudes, both are relevant. When attitudes do tilt toward emphasizing either cognition or affect, persuasion is more effective when it matches the type of attitude. In the domain of climate change, for example, affect and values together motivate climate cognition. If public attitudes have two sides—belief and affect—what is their role in scientific communication?29
If this is true, what breakthroughs might be possible once we gain better experimental evidence for how the human mind really works? Actual human beings do not have perfect information; neither are they perfectly rational. We know that our reason is buffeted by built-in cognitive biases, which allow a sea of emotions, misperceptions, and desires to cloud our reasoning. If we seek to do a better job of convincing people to accept the scientific consensus on a topic like climate change, this provides a valuable incentive for social scientists to get their own house in order. Once they have found a way to make their own discipline more scientific, perhaps they can play a more active role in helping us to defend the enterprise of science as a whole.