Chapter 9
Of Experts and Expertise

ON OCTOBER 14, 2020, then-judge and now-justice Amy Coney Barrett was asked at her confirmation hearings by then-senator and now-vice president Kamala Harris whether she believed that climate change was happening. In the grand tradition of Supreme Court nominees for at least the past thirty years, Justice Barrett hedged, fudged, and dodged, responding that she would not express a view on such a “politically controversial” matter as climate change.1 And although Barrett’s characterization of climate change as politically controversial was totally accurate, she was nevertheless widely criticized for implying that climate change was scientifically controversial, even if that was not what she actually said.2

Although some of the people who criticized Barrett for what they thought she had said, or what they thought she had implied—or for not saying what they wished she had said—really do know quite a bit about the science of climate change, most of Barrett’s critics did not. Nor do most of the rest of us when we believe, correctly, that climate change is both real and of potentially catastrophic proportions. Instead we take the word of scientists. And not just this or that scientist, but the consensus of scientists who work on such matters. They are the experts, and for most of us the evidence of climate change comes neither from our own perceptions nor from our own research, but instead from what we believe the scientists have concluded. What the scientists—the experts—tell us is our evidence for what we believe. The scientists rely on evidence, but our evidence consists of what the scientists have said.

The reliance on expertise is a variation on the themes that pervade the use of testimony as evidence generally, which was the focus of Chapters 5, 6, 7, and 8. When people rely on the scientific consensus in believing that climate change is real, substantially human-caused, and leading to a catastrophe of epic proportions, they are relying on the testimony of scientists. The scientists, we believe, or at least most people believe, are the experts. And their conclusions are based on evidence, or so we hope and assume. But for the rest of us, our evidence is what the scientists have said. We rely on the testimony of the scientists. And relying on the testimony of experts, or on the collective testimony of an expert community, is different from relying on someone who claims to have firsthand knowledge. Indeed, it is different enough that it deserves to be in its own category, as it is in law, as it is in this book, and as it is elsewhere.

It is no coincidence that Justice Barrett was asked about climate change. The issue is not only politically charged, but also presents a particularly good example of an issue in which relying on expert opinion is, roughly, essential. True, there are experts in auto mechanics, furniture making, and physical fitness, but these are all domains in which many people have a bit of lay knowledge, and in which they therefore often believe (sometimes mistakenly) that their lay knowledge is sufficient, and in which they consequently believe (again, sometimes or even often mistakenly) that they have enough knowledge to be able to distinguish the genuine experts from the poseurs.3

Climate change—like rocket science and brain surgery, to take the two examples of highly technical and complex knowledge whose inaccessibility to the untrained is a staple of popular culture—is different. Putting aside those people who are ignorantly convinced that climate change is happening because it was warm last week, or the people who are equally ignorantly sure that climate change is not real because it was cold yesterday, most of the rest of us are forced to rely on expert opinion. But then we must figure out who the experts are, and here we run into problems. How are we, as nonexperts, to determine who the experts are in areas about which we know nothing?4 Isn’t it necessary to be an expert oneself to know who the experts are? And to know whether what they say can be relied on?5

There are several rejoinders to this worry about nonexpert evaluation of expertise, which appears to be necessary when nonexperts treat expert conclusions as evidence. First, nonexperts often have the ability to identify and evaluate the rationality of what experts conclude, even if the nonexperts do not understand the underlying methods. When so-called experts offer conclusions and the reasons for those conclusions that are internally contradictory or rely on implausible initial premises, nonexpert assessment can reject the expert conclusions even if the assessors are not themselves aware of the expert methods that are allegedly being used. You do not have to be an astronomer to know that the moon is not made out of green cheese, and if someone purporting to be an astronomer says that it is, then non-astronomers have good reason to reject what is advertised as an expert conclusion. Getting back to earth, literally, when an expert experimental psychologist claims to have proved the existence of a paranormal ability to see the future, those with ordinary (nonexpert) knowledge have reason to doubt the soundness of the expert conclusions even if they are not themselves experts.6 At least sometimes, one need not be an expert to know when the experts have gone off the rails, and, conversely, when they have not.

The more important rejoinder to the worry that nonexperts cannot evaluate expertise about which they themselves have no expertise is that nonexperts can still identify and rely on the external trappings of expertise even if the nonexperts cannot identify and evaluate the expertise itself. These external trappings might include things like Nobel Prizes, tenured professorships at Caltech and MIT, grants from the National Science Foundation, and fellowships in widely recognized honorary professional associations such as the American Association for the Advancement of Science, the National Academy of Sciences, and the Royal Society. When we rely on the credentials of experts to establish their expertise, we are relying on what we as nonexperts know about the credentialing practices of various institutions. To be sure, lay knowledge of such credentials and credentialing practices might itself be less informed than the knowledge that insiders have about those credentialing practices. Still, such credentialing knowledge is likely to be more accessible to external observers than are the expert practices for which the credentials are evidence.

Using such externally observable indicators of expertise is hardly perfect. Consider phrenology. The so-called science of phrenology was more or less invented by the Austrian physician Franz Joseph Gall at the close of the eighteenth century, flourished in the early and middle nineteenth century, and did not die out until the beginning of the twentieth. The basic idea behind phrenology was that it was possible to determine a person’s psychological attributes and behavioral proclivities by examining the exterior terrain of their skull. People with hills or valleys in certain places would likely be aggressive or passive, smart or stupid, selfish or altruistic, and so on. And back when phrenology flourished, it had most of the external trappings of any other academic or scientific discipline. It had professional associations, professional degrees, professional peer-reviewed journals such as the American Phrenological Journal, conferences at which papers were presented, widely used textbooks, manuals of best practices, endorsements by prominent intellectuals such as Harvard president Josiah Quincy, and much else.7 But phrenology was still, to use the term recently reincarnated by President Biden, malarkey. None of the cranial markers of psychological makeup or behavioral tendencies actually mark or predict anything at all, as we now know.8 And so all of the external indicators of useful expertise, indicators that frequently are reliable, failed miserably in the case of phrenology.

Much the same can be said about astrology today. The British-based Astrological Association publishes the Astrological Journal and the Astrology and Medicine Newsletter. Other astrology journals, most of which resemble serious academic journals in their formatting, referencing, footnoting, and much else, include Above and Below: Journal of Astrological Studies, the Geocosmic Journal, and ISAR International Astrology, the last published by the International Society for Astrological Research. Various astrology organizations hold conferences, offer courses, and provide credentials. But unlike phrenology, astrology still attracts and retains a vast number of believers.9 And this level of belief persists in the face of numerous serious academic studies of astrology’s basic premises about the relationship between astrological sign and personality, behavior, and predictions of the future—studies that have confirmed that astrology’s basic premises are false: knowing the position of the planets and stars at the moment of someone’s birth tells us nothing at all about that person’s psychological makeup or behavioral attributes.10 Other than as nonscientific amusement, astrology has been repeatedly shown to fit into the malarkey category, notwithstanding its external trappings of academic legitimacy.

Phrenology and astrology stand as warnings against taking the external indicators of an area of expertise as conclusive. These external indicators might be some evidence of genuine knowledge, but they are only evidence, and in these cases the evidence pointed to a conclusion that was false. And when evidence going in one direction is dwarfed by better evidence for the opposite conclusion, it is a mistake to move from the weaker evidence to a conclusion that only the weaker evidence might support. The fundamental caution provided by phrenology and astrology is that externally visible credentials and related indicia of evidential soundness might be misleading, and that a field's self-created mechanisms of validation might still lead us astray.

If we are wise to be cautious about those external indicators of genuine and valuable expertise that are accessible to nonexperts, then we are back to the original problem. We started with the problem of nonexpert assessment of the evidentiary value of expert conclusions and opinions, but if that inquiry only leads us to nonexpert evaluation of credentials and related markers, we have not made much progress. Why should we take the validating practices of the National Academy of Sciences more seriously than those of the Astrological Association or the International Society for Astrological Research? We should, of course, but why? Why should we take a degree in theoretical physics from MIT as a better indicator of true expertise than a degree from the kind of for-profit educational institution that used to advertise on matchbooks back when people used matchbooks? More precisely, why should we take the allegedly validating indicators seriously if we don’t actually know very much about astrology or about the scientific disciplines represented in the National Academy of Sciences? The examples of phrenology and astrology caution us against relying too heavily, or at least solely, on internal criteria of the soundness of expert opinion, in the sense of members of an expert community self-validating the expertise of that community.

At this point the recent history of American law on the subject of expert opinion has something to teach us. As described in Chapter 7, William Moulton Marston’s crude lie detector was the central character in what for many years was the American judicial system’s reliance on solely internal criteria of expertise. For reasons that are slightly beside the point here, experts are allowed to say things in court that nonexpert witnesses cannot. Experts can offer opinions about hypothetical examples, but nonexperts must testify only about things as to which they have personal knowledge. Experts can offer opinions based on the accumulated knowledge in their discipline in ways that nonexperts cannot. And experts can offer opinions and conclusions regarding matters where nonexperts would be restricted to hard facts.11

Whether this bifurcation between expert and lay testimony is a good idea is not our concern here.12 But the bifurcation does make it necessary for judges to determine who is an expert and who is not, and to assess which forms of knowledge involve expertise and which do not. When a court was faced with the question in 1923 whether Marston’s lie detector would qualify for the greater leeway granted to expert testimony, the court said that it would not, basing its conclusion on the fact that neither Marston’s device nor the science on which it was based had been “generally accepted” within the relevant scientific community.13 In other words, the test for expertise was internal, and if (and only if) some expert community validated some method or approach could it then count as legitimate expertise.

Eventually most courts, and the US Supreme Court, were confronted with what I have characterized as the phrenology problem. Or perhaps we should call it the malarkey problem. Certain communities of specialists have their own internal criteria, but there is no guarantee that satisfying those internal criteria would actually produce useful evidence or useful knowledge. A good example came in 1996, when a New York University physicist named Alan Sokal sent to an academic journal in cultural studies a fabricated, content-free collection of meaningless jargon and fashionable references purporting to establish the nonobjectivity of the existence of gravity, and the journal proceeded to accept it for publication.14 But the article was even worse than malarkey. It wasn’t just wrong; it was nonsense. The basic point of Sokal’s hoax was to expose the emptiness of a particular journal and, by implication, an entire field. The larger lesson is that even if fields can be trusted to say who or what satisfies the field’s own internal standards and who or what does not, that does not tell us whether the field or its standards say anything that is externally true or valuable. Phrenologists can tell you who is an expert phrenologist, but that doesn’t tell the rest of us whether to listen to expert phrenologists about disease, personality, behavior, or anything else.

Recognizing this kind of problem, the Supreme Court, which has authority over evidentiary rules in federal but not state courts, ruled in 1993 that the internal criteria of some field might be somewhat relevant in determining expertise, but that external criteria of reliability must be applied as well.15 Even if some phrenologist had won some mythical Phrenologist of the Year award five years running, that phrenologist could qualify to testify as an expert in the federal courts only if it could be demonstrated that phrenology actually had the ability to produce reliable evidence.

And this brings us back to climate change and global warming. To return to the issue as framed in Chapter 1, there are two evidentiary issues at work here. One is about the evidence that scientists consider in concluding that there is climate change, that it is caused by humans, and that human action might mitigate its consequences. And the other is about the evidence that politicians, non-science-trained policymakers, and citizens use in determining that there is global warming. And although the former is far beyond the scope of this book, the latter is of crucial importance. If the evidence that politicians, policymakers, and citizens have for the existence of global warming consists largely of the testimony (including the conclusions) of scientists, then how should those in the groups composed of nonscientists—the politicians, policymakers, and citizens—evaluate and weigh that testimony?

One answer to this question is the one supplied by the Supreme Court—expert testimony must meet external standards of reliability. But what the Supreme Court said about the use of expert evidence in the legal system contains a lesson that is not restricted to expert evidence in courts of law. Outside of the courtroom, and in inquiry generally, there are certain standards of soundness that transcend particular fields. That is what the Supreme Court appeared to have in mind in talking about “reliability.” Insofar as a field, whether nuclear physics or phrenology, makes causal or predictive claims, those claims can be tested by externally accessible means. Those external means include the basic principles of evidence and inquiry, as well as fundamental principles of statistical inference. There might be other such overarching principles of rational inquiry as well, but the fundamental idea is that there are methods and criteria, not exclusive to any particular field or discipline, that allow us to evaluate the soundness of entire fields without getting trapped in the internal phrenology problem. Perhaps the true skeptic might obsess about what makes the basic principles of inquiry, evidentiary inference, statistics, and mathematics sound, but as long as we can leave those worries to philosophers, the rest of us can rest on the assurance that there are ways to evaluate the soundness of entire fields, and thus of the experts within them, that do not require us to rely solely on the very experts whose reliability is precisely the matter at issue.

Thus, when we rely on climate scientists to tell us about the causes and potential consequences of climate change and global warming, we are not relying only on climate scientists to tell us that climate science is reliable. We would no more leave it at that than we would rely on Saudis or Texans to assure us about the importance of fossil fuels. But climate science is itself based on the learning from other fields, and the value of those other fields has been tested against the basic principles of scientific inquiry and scientific validity. Insofar as climate science rests on physics, geology, and chemistry, along with the foundational principles of science in general, we have enough evidence of the soundness of those fields to have at least some confidence in the fields, such as climate science, that they have spawned.

In addition, we can often, even if not always, have confidence in a field if it has survived under circumstances in which there are incentives to attack its methods and its conclusions. One reason phrenology was eventually exposed as worthless was that physicians challenged it and phrenology did not survive those challenges. So too with skepticism about climate science. Oil companies, airlines, and automobile manufacturers have for many years had an obvious interest in the predictions about global warming being false.16 Those predictions have not yet been shown to be false—despite the fact that there has been no lack of trying—and this in itself gives us good reason to take what the scientists, the experts, have said as good evidence for their conclusions.

To recapitulate, the question for us here is not why scientists have concluded that climate change is real, that it is caused by certain activities of human beings, and that the rate of change, especially of warming, can be slowed in certain ways. That topic has been the subject of a huge and burgeoning scientific literature.17 Even those aspects of that literature that dissent from some aspects of the consensus, or the mainstream, acknowledge the central claims, even as they quarrel about issues at the edges. For us, therefore, the question is not what produced those central claims, but, instead, why we as nonscientists ought to treat those central claims as evidence for the nature and causes of climate change. The lesson of this section is that the answer is “Because the scientists have said so”—and this lesson, with its complications, pervades the entire subject of expertise as evidence.

On Watching, Perhaps Too Closely, What We Eat

The bottle of apple juice I purchased this morning contained a large label announcing that the juice contained “No GMOs.” The apple juice also did not contain a vast number of other things, such as arsenic, strychnine, and bat guano, but the juice company did not see any reason to announce that fact. It announced the absence of GMOs—genetically modified organisms—because the company perceived, correctly, that many people object to food products containing such substances, or that are in some way the product of GMO technology.

What makes this issue relevant, especially when examined in conjunction with the issue of climate change, is that GMO technology, like climate change, is also the subject of a scientific consensus. But unlike with climate change, about which we are told we should have significant worry, here it turns out that the scientific consensus, at least in the United States, tells us not to worry. So what is going on here? If the consensus of science and scientists on the dangers of climate change is sufficient to justify major public policies and substantial public agreement, why is a similar consensus about the non-dangers of GMOs seemingly less influential in molding public policy, corporate behavior, and public opinion?

I want to spend a bit of time examining the GMO controversy, but one bit of preliminary brush-clearing is important. Some people object to GMO technology because it interferes in some way with the natural development of natural products. Given that for at least a few thousand years nature has given us disease, earthquakes, tornados, hurricanes, and floods, among other natural disasters and catastrophes, it is hardly clear that deferring to what nature without human intervention provides is necessarily a wise strategy. Perhaps Katharine Hepburn, playing Rose Sayer in The African Queen, was right in observing that “nature is what we were put in this world to rise above.” But whatever moral or religious truth there is in the “let nature alone” view, it is one I enthusiastically put aside here.18

In the context of this book, the more interesting claim by GMO opponents is that GMOs should be avoided, condemned, or banned because they present a non-negligible possibility of physical harm to humans, animals, and the environment. The issue is partly about the precautionary principle discussed in Chapter 3, a principle that cautions inaction—conservatism—in the face of almost any uncertainty.19 But deferring questions about the precautionary principle for the next few paragraphs, the point is that many people believe, for nontheological reasons, that the risks created by GMOs are, even if far from certain, quite substantial.

What makes this particular topic of special relevance here is that the consensus of American scientists is to the contrary. Although the consensus of scientists is that climate change is real, human-caused, and potentially catastrophic, the consensus of scientists—sometimes even the same ones—is that the dangers of GMOs are somewhere between nonexistent and exaggerated. And what makes the issue especially interesting is that the political valence of the controversy tends to be the reverse in the two cases. To oversimplify, most of the skepticism about climate change, its causes, and its dangers comes from the political right. But much of the skepticism about GMOs comes from those who would consider themselves left of center politically, and who are in fact left of center on a wide range of other issues. Given that both cases involve trusting the expert testimony of scientists, can this difference be explained?

One hypothesis is that herd mentality is at work. Following the crowd. Sometimes for good epistemic reasons, people follow crowds because they believe that the crowds usually get it right. Recall from Chapter 5 the discussion of collective intelligence and the alleged wisdom of crowds. But often people follow crowds because they desire to be associated with a certain group of people. We wanted to hang out with the cool kids in high school because they were the cool kids. Period. But here there is also the important evidentiary issue highlighted in the previous section. What do experts know, how do we as nonexperts know who the experts are, and how, why, and when should we take what the experts say as evidence for what the experts are saying? As we will see in Chapters 10 and 11, this is not just about the physical sciences, but for the moment let us stick to physical (or natural) science, perhaps the most obvious area of expertise there is, and the area in which the expertise is often least accessible to the nonexpert. Perhaps, whether rightly or wrongly, I can reach my own conclusions about art, literature, and wine, but I simply have no view whatsoever about the scientific processes or reasoning that leads to the conclusion that GMOs are safe, or, for that matter, harmful. And the issue is confounded even more by the fact that GMO skepticism is considerably greater in Europe, even among scientists, than it is in the United States.20 Because we would not expect the physical phenomena involved in GMOs to vary with geography, the fact that American and European views on this issue are so different, even among scientists, suggests that the issues producing the disagreement are at least partly political and sociological rather than scientific.21

An important issue here—which is perhaps related to questions about the precautionary principle, and perhaps related to the difference between American and European views on the precautionary principle—is the question of what we should make of an absence of evidence. Assume that the consensus of American scientists is correct that there is little evidence (“no evidence” would be too strong) that GMOs are harmful to human health or to the environment. Strictly as a matter of language and logic, the absence of evidence of harm does not entail evidence of the absence of harm. This is a general principle, not restricted to GMOs. But even once we recognize that the absence of evidence is not the same as evidence of absence, and that the absence of evidence does not logically or rationally entail evidence for absence, a further question remains. Can the absence of evidence nevertheless, inductively and not deductively, be some evidence of absence? And here things are not so clear. GMO skeptics, or GMO opponents, might revert to the precautionary principle, believing that in the absence of evidence, especially about things we put into our mouths, we should not assume safety. Better safe than sorry. But the precautionary principle is a principle not of science but of policy. If the science tells us that GMOs are, on the best current knowledge, harmless, then a policy that would restrict them anyway on the basis of the precautionary principle is a policy that cannot be said to be following the science.22

Still, the absence of evidence can be evidence of absence when it follows actual efforts to prove presence. If no one had addressed the issue of GMO safety at all, we might conclude only that there was no evidence of GMO harmfulness. But given that we have now seen at least several decades of unsuccessful attempts to establish that GMOs are harmful, and given that there are financial and political incentives in support of (and, of course, also in opposition to) those efforts, then the existence of non-prevailing opposition is itself some evidence for the conclusion that GMOs are not harmful. If I have never tried to do a hundred pushups, then perhaps I have no evidence that I cannot do a hundred pushups. But if I have regularly tried to do so, and if I have regularly failed, then I have evidence that this is something that I cannot do. The same holds, now, for GMOs, whose harmfulness has yet to be established despite vigorous efforts to do so.
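
For readers who want that inductive point stated more formally, here is a minimal Bayesian sketch (added for illustration, and not anything the argument above depends on). Let H be the hypothesis that GMOs are harmful, and let E be the event that sustained testing turns up evidence of harm. If evidence of harm is more likely to surface when harm exists than when it does not, then failing to find such evidence must lower the probability of harm:

\[
P(E \mid H) > P(E \mid \neg H)
\quad\Longrightarrow\quad
P(H \mid \neg E) \;=\; \frac{P(\neg E \mid H)\,P(H)}{P(\neg E)} \;<\; P(H).
\]

Had no one ever looked, P(E | H) and P(E | ¬H) would be nearly equal and the absence of evidence would tell us almost nothing; it is the vigorous, well-funded, and so far unsuccessful search that converts absence of evidence into some evidence of absence.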

The issue of GMOs, therefore, is of a piece with the issue of climate change, the opposite political and social valence of the two issues notwithstanding. If we adopt the principle that what the consensus of scientists say is good evidence for what they are saying, then we have good evidence for the dangers of climate change and similarly good evidence that GMOs are not dangerous. Conversely, those who remain skeptical about the harmlessness of GMOs in the face of relatively authoritative conclusions of scientists and scientific groups about that harmlessness should have to explain why they accept the relatively authoritative conclusions of many of the same scientists and many of the same groups on the issue of climate change. The idea of authority is content-independent in the sense that relying on authority means taking the source of a conclusion as at least a reason for accepting it. In the language of countless exasperated parents whose attempts at reasoning with their children have failed, “because I said so” captures well what is at the heart of the idea of authority and thus of deference to that authority. That so many people reject with respect to GMOs the very sources of authority that they accept with respect to climate change suggests that they are relying not on authority at all, and thus not on the idea of expertise at all, but instead on something best explained by sociologists and political scientists.

Vaccination, Then and Now

As I write this, one of the biggest topics in American public policy is vaccination. Most of the controversies surrounding the topic relate to the supply and administration of Covid-19 vaccines, including controversies over who should be given priority in being vaccinated and how the world’s limited supplies of vaccines should be apportioned—controversies that will persist as long as supplies and the capacity to administer them remain limited. But one issue is how to deal with the fact that many people are unwilling to be vaccinated. Some of that resistance comes from some portions of the African American community, for whom the (long-standing and not Covid-19-specific) resistance is based on ugly episodes in the past where African Americans were used as experimental subjects, typically without choice, as in some of the prison experiments, and often without disclosure of the risks even when participation was allegedly voluntary.23

The resistance of some minority communities in the United States to vaccination is a crucial policy issue, but of immediate relevance to this part of this book is the resistance of many others, including many white evangelicals, not only to Covid-19 vaccination, but also to vaccination generally, a resistance that long precedes Covid-19. Vaccination skepticism, after all, has been around for a long time.24 The most relevant form of pre-Covid-19 vaccine skepticism is the belief that vaccination is a risk factor for autism. And here the politics are murkier. It appears that climate-change skepticism tilts to the right, and that GMO skepticism tilts to the left, but vaccination skepticism seems to know no ideology. Believers in a causal relationship between vaccination and autism, or between vaccination and other untoward ailments and conditions, exist throughout the political spectrum. Yet although vaccination resistance spans the full range of vaccines, much of the pre-Covid-19 resistance was focused on the MMR (measles, mumps, rubella) vaccine and the claim that it was a substantial risk factor for autism.

Vaccination skeptics may come in all political and ideological varieties, but here the scientific evidence is even clearer. Perhaps fueled by the size and persistence of the belief that there is a connection between vaccinations and autism, and also by the fact that non-vaccination has potentially disastrous community-wide consequences, there has been serious scientific testing of the hypothesized connection, and once again the conclusion has been that no such connection exists.25 And here, to reprise an issue mentioned earlier, it is not that there is no evidence to support the connection. It is that there is evidence—strong evidence—to support the conclusion that there is no connection. Perhaps there is more evidence for the connection between vaccination and autism than there is for the existence of the mysterious “Q,” who some people believe is ridding the federal government of evil forces, or than there is for flying saucers coming to earth. Still, there is now overwhelming evidence that there is no connection between vaccinations and autism—evidence that has come from cutting-edge scientific studies filtered through peer review and the other mechanisms of scientific validity.

Again, most of us have no way of directly knowing whether there is a connection between vaccinations and autism. Drugs do cause side effects, and on the face of it the claim that autism could be one of the side effects of a particular substance being introduced into the body in a particular way is not as preposterous as the claim that the Democratic Party has been infiltrated by a cadre of Satanist pedophiles. Still, we do have evidence of the absence of any connection between vaccination and autism, and that evidence, for most of us, comes from the testimony of scientists, either individually or collectively. That is not only good evidence, but it is also of a piece with much of the evidence on which we base our daily lives. If we can trust the testimony of scientists that climate change is real and that GMOs are not harmful, then for the same reason we can trust the scientists that vaccination is not a cause of autism, despite the insistence of a large number of nonscientists.

On the Idea of a Consensus

It would be a mistake to say that there are no dissenters to the claims that climate change poses significant dangers or the claims that GMOs and vaccination present no dangers. There are even dissenters within the relevant scientific communities. Nonetheless, we who are outside the relevant expert communities are asked, even in the face of the disagreement of some experts, to rely on the evidentiary weight of there being a consensus of experts, the occasional dissenter notwithstanding.

What exactly is a consensus? If there were strict boundaries to the relevant domains of expertise, we could treat those domains as if they were legislatures, and simply take a vote, with the majority, or two-thirds, or three-quarters, being the threshold for being able to say that the consensus of scientists is that global warming is real, just as the consensus of the Electoral College was that Joe Biden is the president, and just as the Virginia General Assembly decided, by consensus, to abolish the death penalty.

Expert communities, however, are not like this. One reason is that the judgment of expert communities does not make something so. It is only evidence of something. And this is unlike the determinations of the Electoral College or the Virginia General Assembly, whose very actions produce an outcome, just as two people saying “I will” before a duly empowered official produce a marriage. In contrast, the conclusions of experts are evidence, and those conclusions are rarely the formal judgments of an organized body whose rules determine when the body will issue an evidentiary pronouncement and when it will not. Rather, we often confront the slippery idea of a consensus. And the idea of consensus is slippery not only because the idea does not contain numerical thresholds, but also because the idea of a consensus incorporates some equally slippery sense that some of the constituent voices are to count for more than others. One-person-one-vote may be a (sometimes observed) principle of democracy and of constitutional law, but it is not necessarily the principle of what is to count as a consensus. If 50 professors of medicine at the country’s leading medical schools, in conjunction with 30 physicians who are officers of the country’s major medical societies, all agree that vaccination is not a cause of autism, then even if there are a total of 80 physicians in the United States who say just the opposite, we can (and should) still say that the medical consensus is that no causal relationship exists.

Implicit in the foregoing conclusion is the idea that a consensus, even if partially numerical, is even more substantially sociological. We cannot know what counts as a consensus within some profession or expert community unless we know something about the sociology of that community: Who counts and who does not. Which forms of publication are to be taken most seriously and which less so. Which methods are accepted and which are rejected. And like any sociological inquiry, the sociological account of what is to constitute a consensus records the pathologies of the community as well as its strengths. When we say that the consensus of some community is such and such, we implicitly accept that community’s own inclusions and exclusions, its own hierarchies, and its own forms of discrimination, some of which are good and some of which are not. Accordingly, saying that the consensus of science supports the conclusion that climate change is real, and that the consensus of medicine supports the conclusion that there is no connection between vaccinations and autism, brings us back to the problem that dominates this chapter: Nonexpert reliance on expert conclusions is as irrevocably problematic as it is irrevocably necessary.

The Limits of Science and the Limits of Expertise

The philosopher Nathan Ballantyne has given us a very nice phrase—epistemic trespassing.26 The idea is that although there are people who know things that others do not, and although it is both necessary and desirable that we rely on such superior knowledge as evidence, members of various knowledge communities have a tendency—one we should guard against—to claim a knowledge and an expertise that go beyond the grounds for their knowledge and expertise.

The phenomenon is familiar. Public letters published as newspaper advertisements frequently tout the number of Nobel Prize winners who agree with the letter’s conclusions, even if those conclusions do not relate to the work for which the signers won their prizes. In Ballantyne’s language, these Nobel Prize winners are trespassers, and we should beware of them. But although epistemic trespassing appears problematic, and appears to involve unwarranted claims of expertise, we might still take some qualifications and credentials as proxies for more remote forms of expertise, and thus treat the testimony of epistemic trespassers as worthy of evidential weight. Nobel Prize winners in the natural sciences are probably more intelligent than the average person, and for this reason alone we might treat what they say about areas outside their expertise as evidence. We also might suppose that natural-science Nobel Prize winners are more knowledgeable about science in general than the average person, and so what a Nobel Prize winner in chemistry thinks about some proposition in physics might be entitled to greater evidentiary weight than what some equally accomplished poet or sculptor thinks about the same proposition. Less obviously, we might assume that scientists are more concerned with the accuracy of empirical observation than most people, and so we might treat the observational testimony of a Nobel Prize winner in the natural sciences as being especially likely to be accurate and thus worthy of evidential respect.

The more serious problem, one that is not addressed by Ballantyne, comes from the frequency with which experts, most commonly scientists, engage in the particular form of epistemic trespassing that we might characterize as policy hegemony—the assumption that an expert in the empirical or even theoretical aspects of some domain occupies a privileged position on what public policy ought to be in light of that domain’s empirical, factual, or descriptive findings.

Let us start with a controversial and widely known example. In July 1945, Leo Szilard, one of the physicists most responsible for the development of the atomic bomb, drafted a petition to President Harry Truman. The petition was signed by many of the other scientists involved in the bomb project, and it urged Truman not to use the bomb that they had helped to develop against Japan except under certain narrow conditions.27 Largely because of the resistance to the petition by senior members of the military and Secretary of War Henry Stimson’s staff, the petition was never delivered to Truman or Stimson. But there is no indication that Truman would have heeded the requests in the petition even had he seen it, and the bombings of Hiroshima and then Nagasaki were inconsistent with what Szilard and his co-petitioners desired.28

The basis for the argument in the petition was the scientists’ awareness of the enormous destructive capacity of the bomb. But although there is no reason to believe that President Truman had any more idea of what makes an atomic bomb work than I do, it seems clear from the historical record that at the moment when he authorized the dropping of the bombs on Hiroshima and Nagasaki, Truman knew the extent of the bombs’ destructive capacities. This being so, the question turns to what the scientists knew that Truman did not. Obviously, there were empirical and predictive issues, still debated, about how many lives would have been lost had there been a land invasion instead of an atomic bomb attack. There were also empirical and predictive issues about what would have been necessary to induce the Japanese to surrender prior to Hiroshima, and then after Hiroshima but prior to Nagasaki. Moreover, there were and remain immense moral issues, still debated, about how a president in time of war should value the relative importance of American and enemy lives, and so too with military and civilian lives. Issues also arise about the extent to which dropping the bomb would have encouraged further nuclear proliferation, and about what the consequences of that proliferation would have been. But with the exception of the way in which the last question may require knowledge of what it would have taken for other countries to build a bomb, none of these questions fell within the domain of expertise of those who had enabled the building of the bomb. Szilard’s aims were morally admirable, but there was no reason Truman should have listened to Szilard and his co-signers more than to anyone else with equally morally admirable goals.

Plainly there are cases in which an expert in some area has, because of that expertise, reason to learn about things in the vicinity of that expertise that others might not know. But that was not the case with the bomb. In making the decision about the costs and benefits, to put it crassly, of dropping the bomb, Truman had as much evidence as the scientists. We can question, therefore, whether in petitioning Truman the scientists were relying on any relevant expertise they had that Truman did not. No doubt their involvement gave them a feeling of personal responsibility, and in that sense writing the petition was totally understandable, both personally and morally. But none of that goes to the question whether Truman, had he seen the petition, should have taken what the scientists said about actually using the bomb as coming from the vantage point of comparative expertise, and it is not at all clear that it did.

Much the same can be said these days about climate change. Climate scientists can tell us, and should tell us, about what the world will or might look like in 2050 or 2100 if we do not cut back on fossil fuels, for example. But on the question whether a given amount of welfare (or utility, or pleasure, or even financial) sacrifice now is worth guarding against what might happen in thirty or seventy years, again it is hardly clear that climate scientists have any comparative expertise. The issues are undoubtedly vital, but once the scientists have used their expertise to tell us what is happening now and what can happen in the future, the question of what to do is no longer a question about which scientists have the kind of expertise that should lead others to treat their policy prescriptions as evidence of what ought to be done.

The issue of this kind of policy trespassing or policy hegemony is particularly salient in the area of Covid-19. Despite the urgings of many scientists—including the most prominent and respected scientist on the issue, Dr. Anthony Fauci—to “follow” the science, whether and how to follow the science is not wholly or ultimately a scientific question. Scientists can and should tell us what will happen if we do or do not take certain measures, but whether to take those measures, especially if they have costs in the largest sense, is not a scientific question. Just as the ability of traffic analysts to tell us how many lives would be saved if the speed limits on interstate highways were reduced to fifty miles per hour does not tell us whether to take that step, epidemiology does not tell us how to balance some amount of increased epidemiological risk with some other amount of loss of personal liberty or loss of economic activity.

None of this should be taken as suggesting that it is ever right to go against the science. Nor as suggesting that policy goals should lead us to ignore or distort or contradict the science. If that is all that following the science means, then there is no reason to doubt that that is exactly what we should do. But if following the science means following the scientists in their assessment of the correct balance between health and economics, or health and liberty, then there is reason to hesitate. These are monumental policy decisions, of course, but they are not policy decisions with respect to which those who supply the appropriate first words about the empirical questions are the ones who ought to have the last word about what we should do.