
There’s good news and bad news. Good arguments are valuable, but bad arguments can be devastating, as we saw in Colin Powell’s testimony to the United Nations. In less extreme cases, bad arguments can mislead us into wasting money on superfluous insurance or unreliable used cars, believing fairy tales and delusions, and adopting destructive government programmes as well as failing to adopt constructive government programmes. These dangers make it crucial to recognize and avoid bad arguments.

Bad arguments can obviously be intentional or unintentional. Sometimes speakers present arguments that they see as good, even though their arguments are really bad. These are mistakes. In other cases, speakers know that their arguments are bad, but they use them anyway to fool others. These are tricks. The argument can be equally bad in either case. The only difference lies in the arguer’s awareness and intention. It is important to detect fallacies in both cases.

The idiosyncrasies and variety of bad arguments preclude any complete survey, but many bad arguments do fall into general patterns called fallacies. We already saw several common fallacies, including affirming the consequent and denying the antecedent in deductive arguments, plus hasty generalization and overlooking conflicting reference classes in inductive arguments. Of course, false premises can make any argument bad regardless of its form.

This chapter will introduce several more kinds of fallacies that often lead people astray. I will focus on three general groups of fallacies that are especially common.

What do you mean?

Our definition of arguments revealed not only the purpose and form of arguments but also their material: arguments are made of language. Both premises and conclusions are propositions expressed by declarative sentences in some language. It should come as no surprise, then, that arguments fall apart when language breaks down, just as bridges fall apart when there are cracks in the material out of which they are made.

Language can crack in many ways, but here the two most common and important defects are vagueness and ambiguity. Vagueness occurs when words or sentences are not precise enough for the context. In a scavenger hunt, instructions to find something tall are too vague if players do not know whether they can win by producing a person somewhat above average height. In contrast, ambiguity occurs when a word has two distinct meanings, and it is not clear which meaning the speaker intends. If I promise to meet you next to the bank, then I had better tell you whether I mean the commercial bank or the river bank. A single word can sometimes be both vague and ambiguous, such as when it matters where exactly the river bank ends.

DOUBLE ENTENDRE

Ambiguity is rampant in newspaper headlines. One of my favourite examples is ‘Mrs Gandhi Stoned in Rally in India’.1 Yes, a newspaper actually printed that headline. It can mean either that the crowd threw stones at Mrs Gandhi or that she took drugs that intoxicated her. You had to read the article to find out. Another favourite is ‘Police Kill Man With Axe’. Here the issue is not that a single word like ‘stoned’ changes meaning, but instead that the man might be ‘with axe’ or the police might be ‘with axe’. When grammar or syntax creates ambiguity like this, it is called amphiboly. Either kind of ambiguity can produce amusement not only in headlines but also in jokes, such as ‘I wondered why the Frisbee was getting bigger, and then it hit me.’2

Such ambiguity can ruin arguments. Imagine someone arguing, ‘My neighbour had a friend for dinner. Anyone who has a friend for dinner is a cannibal. Cannibals should be punished. Therefore, my neighbour should be punished.’ This argument is fallacious, but why? Its first premise seems to mean that my neighbour invited a friend over to his house to eat dinner. In contrast, its second premise refers to people who eat friends for dinner. These premises use different meanings of the phrase ‘had a friend for dinner’. And if the whole argument sticks with the same meaning in both premises, then one of the premises comes out clearly false. The first premise is not true (I hope) if it means that my neighbour ate a friend for dinner. The second premise is not true if it refers to people who have friends over to their houses for dinner. Thus the argument fails on either interpretation. This fallacy is called equivocation.

A more serious example is the widespread argument that homosexuality is unnatural, so it must be immoral. This argument clearly depends on the suppressed premise that what is unnatural is immoral. Adding that extra premise, the argument looks like this: (1) Homosexuality is unnatural. (2) Everything unnatural is immoral. Therefore, (3) homosexuality is immoral.

The force of this argument depends on the word ‘unnatural’. What does ‘unnatural’ mean here? It might mean that homosexuals violate laws of nature, but that cannot be correct. Homosexuality is not a miracle, so premise (1) must be false in this sense of ‘unnatural’. Instead, premise (1) might mean that homosexuality is abnormal or an exception to generalities in nature. This premise is true, simply because homosexuality is statistically uncommon. But now, is premise (2) true? What is immoral about being statistically uncommon? It is also uncommon to play the sitar or to remain celibate, but sitar-playing and celibacy are not immoral. On a third interpretation, premise (1) might mean that homosexuality is artificial rather than a product of nature alone, as in food with ‘all natural’ ingredients. But again, what is wrong with that? Some artificial ingredients taste good and are good for you. So premise (2) again comes out false on this interpretation.

These critics of homosexuality might mean something more sophisticated, such as contrary to evolved purposes. This interpretation is more charitable and plausible. Their idea might be that it is dangerous to go against evolution, such as when someone tries to hammer a nail with his head, since our heads did not evolve to pound nails. This principle, plus the added premises that the evolved purpose of sex organs is to produce children and that homosexuals use their sex organs for purposes other than to produce children, might seem to support the conclusion that homosexuality is dangerous or immoral.

How can homosexuals and their allies respond to this argument? First, they can deny that the only evolutionary purpose of sex organs is to produce children. We also evolved in such a way that sex can bring pleasure and express love in heterosexuals as well as homosexuals. There is nothing unnatural about those other purposes. Sex can serve many evolutionary purposes. Second, defenders of homosexuality can deny that it is always dangerous or immoral to use bodily organs apart from their evolved purposes. Our ears did not evolve to hold jewellery, but that does not make it immoral to wear earrings. By the same token, the claim that homosexuals do not use their sex organs for their evolved purposes also would not show anything immoral about homosexuality.

Finally, the argument might use ‘unnatural’ to mean something like ‘contrary to God’s plan, intention, or design for nature’. The main problem with this move is to show why defenders of homosexuality should accept premise (1), which now claims that homosexuality is contrary to God’s plan or design. This premise assumes that God exists, that God has a relevant plan and that homosexuality violates that plan. Many critics of homosexuality accept those assumptions, but their opponents do not. Thus it is not clear how this argument is supposed to have any force against anyone who did not already agree with its conclusion.

Overall, then, this argument that homosexuality is immoral because it is unnatural suffers from a central ambiguity. It commits the fallacy of equivocation. This criticism does not end the discussion. Defenders of the argument can still try to respond by delivering a different meaning of ‘unnatural’ that makes its premises true and justified. Alternatively, opponents of homosexuality could shift to a different argument. But they need to do something. The burden is on them. They cannot rely on this simple argument in its present form if it equivocates.

This example illustrates a pattern of questions that we should ask every time we suspect a fallacy of equivocation. First, ask which word seems to change meaning. Then ask which different meanings that word could have. Then specify one of those meanings at each point where that word occurs in the argument. Then ask whether the premises come out true and provide enough reason for the conclusion under that interpretation. If one of these interpretations yields a strong argument, that one meaning is enough for the argument to work. But if none of these interpretations yields a strong argument, then the argument commits the fallacy of equivocation, unless you simply failed to find the meaning that saves the argument.

SLIP SLIDING AWAY

The second way for language to lack clarity is vagueness. Vagueness is explored in a massive literature in philosophy,3 which discusses such pressing issues as how many grains it takes to make a heap of sand. Vagueness also raises practical issues every day.

My friends often show up late. Don’t yours? Suppose Maria agreed to meet you around noon for lunch, and she arrives at one second after noon. That is still around noon, isn’t it? What if she arrives two seconds after noon? That’s still around noon, right? Three seconds? Four seconds? You would not accuse her of being late if she arrived thirty seconds after noon, would you? Moreover, one more second cannot make a difference to whether or not she is late. It would be implausible to claim that fifty-nine seconds after noon is not late, but sixty seconds after noon is late. Now we have a paradox: Maria is not late if she arrives one second after noon. One second more cannot make her late if she was not late already. These premises together imply that she cannot ever be late even if she arrives a full hour after noon, since an hour is just a series of one second after another. The problem is that this conclusion is clearly false, because she is definitely late if she arrives an hour after noon.
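For readers who like to see the paradox laid bare, here is a minimal sketch (my own illustration, not from the text) that models the two premises directly: premise 1 says one second after noon is not late, and premise 2 says one more second can never turn ‘not late’ into ‘late’. Followed mechanically, the premises entail the absurd conclusion that no arrival time is ever late.

```python
def is_late(seconds_after_noon):
    """Model the sorites reasoning about 'around noon'.

    Premise 1: one second after noon is not late.
    Premise 2: adding one more second cannot make her late
               if she was not late already.
    """
    late = False  # premise 1: the first second after noon is not late
    for _ in range(seconds_after_noon):
        late = late or False  # premise 2: +1 second never flips 'not late' to 'late'
    return late

# The premises force the conclusion that even a full hour is 'not late':
print(is_late(3600))  # prints False, yet an hour after noon is clearly late
```

The code makes the structure of the paradox visible: the false conclusion follows validly from the premises, so at least one premise (here, the tolerance premise in the loop) must be rejected or qualified.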

This paradox arises partly because we started with the vague term ‘around noon’. There would be no (or less) paradox if Maria agreed to meet you before noon. But that is the point. Vagueness leads to paradox, and we cannot avoid using vague terms in our everyday speech, so how can we avoid paradox? We can’t.

Does this paradox matter? It does if we want to understand vagueness theoretically. It also matters practically if Maria is so late that we need to decide whether to complain or leave or order lunch without her. At what time do such actions become justified? I recall sitting for many minutes wondering about this issue.

No matter how long we wait, we definitely should not reach some conclusions. Several philosophers argue in effect that nobody is ever really late, because there is no precise time at which someone becomes late (at least when they promise to arrive around noon). Some also conclude that there is no real difference between being on time and being late. This kind of reasoning is a conceptual slippery-slope argument. It makes punctuality unavoidable, because you cannot ever really be late.

A different kind of slippery slope focuses not on concepts but instead on causal effects. A causal slippery-slope argument claims that an otherwise innocuous action will probably lead you down a slippery slope that ends in disaster, so you should not do that first action. If Maria arrives one minute late, and nobody complains, then her minor tardiness might make her more likely to arrive two minutes late the next time, and then three minutes late, and then four minutes late, and so on. Slippery slopes like this lead to bad habits.

How do we deal with these problems? We draw lines. If Maria starts to show up too late, then we might tell Maria, ‘If you are not there by 12:15, then I will leave.’ We also have to carry out this threat, but there’s nothing wrong with that, if Maria was warned. It might seem problematic to be so arbitrary. However, although it is arbitrary to pick 12:15 instead of 12:14 or 12:16, we still do have reasons to draw some line (how else are we going to get Maria to stop showing up later and later?), and we also have reasons to locate our line within a certain area (after 12:01 and before 1:00). Our reasons for drawing a line between limits solve the practical problem of slippery-slope arguments, even if they leave many philosophical issues up in the air.

Tardy friends are annoying, but other slippery-slope arguments raise much more serious issues, such as torture. Torture is immoral in almost all cases, but the guarding term ‘almost’ is crucial. There is no justification for useless torture, as at Abu Ghraib, but some ethicists defend torture when it is likely to avoid extreme harm, as in ticking time-bomb cases. Imagine that the police capture an admitted terrorist who has planted a time bomb that will kill many people soon if not defused. The police can stop the slaughter if and only if the terrorist tells them where the bomb is, but he refuses to talk. There is some chance that he will reveal the bomb’s location if they inflict enough pain on him, such as by waterboarding.

Such cases are controversial, but the point here is just that common arguments on both sides depend on vagueness and slippery slopes. One continuum is the number of people who would be harmed if the bomb went off. There is no precise number needed to justify torture. Another continuum is probability. Torture usually produces false information, but still has some chance of success. It is impossible to say precisely how high the probability of gaining accurate information needs to be in order to justify torture to save a certain number of lives. A third continuum is the amount of suffering caused by torture. Waterboarding for a minute is one thing, but it can go on for hours. And what about beating, burning and electrocuting? Are they also allowed? How much? How long? Again, it is impossible to say precisely how much pain is permitted for a specific increase in the chances of saving a specific number of lives.

These continuums enable conceptual slippery-slope arguments. Here’s one: Police would not be justified in inflicting extreme pain only in order to reduce the chances of a terrorist stink bomb by 0.00001 per cent. A tiny increase in the amount of harm prevented or in the probability of success or a tiny decrease in the amount of pain inflicted cannot change unjustified torture into justified torture. The same goes for the next tiny increment and so on. Therefore, no torture – indeed, no infliction of any pain during interrogation – is ever justified.

This argument is reversible. Police would be justified in making a suspect sit in an uncomfortable chair for a minute in order to reduce the chances by 10 per cent of a nuclear explosion that will kill millions. A tiny decrease in the number of people saved or in the probability of success or a tiny increase in the amount of pain cannot change justified interrogation into unjustified torture. The same goes for the next tiny increment and so on down the slippery slope. Therefore, no torture is ever unjustified.

When an argument runs equally smoothly in either direction, it fails in both directions, because it cannot give any reason why one conclusion is better than its opposite. The general lesson is that we all need to test our own arguments by asking whether opponents can give similar arguments on the other side. If so, that symmetry is a strong indication that our own argument is inadequate as it stands.

That lesson still does not tell us how to stop sliding down the slippery slope. One potential solution is definition. The US government at one point declared that interrogation is not torture unless it causes pain equivalent to organ failure.4 That definition was supposed to allow interrogators to waterboard suspects for a long time without engaging in torture. However, opponents could simply define torture more broadly. They might say, for example, that police torture whenever they intentionally cause any physical pain. Then even a few seconds of waterboarding counts as torture, but so does requiring suspects to stand (or sit in an uncomfortable chair) for an hour if that is intended to make them more compliant. Thus, as before, opponents can make the same move in opposite directions.

Nonetheless, definitions do provide some glimmer of hope. It is not enough for such definitions to capture common usage, as in a dictionary. Common usage is too vague to resolve this issue. Instead, definitions of torture aim at a practical or moral goal. They try to (and should) group together all cases that are similar in moral respects. As a result, opponents can discuss which definition achieves this goal. That debate will be complex and controversial, but at least we know what needs to be done in order to make progress on this issue: we need to determine which definition leads to the most defensible laws and policies.

What about the causal slippery slope? Here the two sides are not as symmetrical. If we start waterboarding a little bit, this first step onto the slippery slope seems likely to break down psychological, institutional and legal barriers to torture, which will lead to waterboarding for longer periods of time in more situations with less harm to avoid and less chance of success. That causal slippery slope could eventually lead to widespread unjustified torture. In the other direction, if we reduce extreme torture a little bit, it seems much less likely that this minor mercy will make police give up interrogation entirely. The strong motives for interrogation will probably stop that causal slippery slope from leading to disaster. Thus, the causal slippery-slope argument against torture cannot be dismissed as symmetrical in the same way as the conceptual slippery-slope argument for the same conclusion.

As always, I am not endorsing this argument or its conclusion. Indeed, classifying it as a causal slippery slope instead of a conceptual slippery slope reveals places where opponents can object. This argument depends on a controversial prediction: a little bit of waterboarding will eventually cause a lot of waterboarding. That premise might be accurate, but it is not obvious, especially because institutions can adopt rules that limit the degree and amount of torture that is allowed. If we want to avoid extreme torture, two options might work. One is to forbid all torture. Another is to enforce rules that limit torture. Of course, opponents of all torture will deny that such limits can be enforced effectively, but they need to argue for that claim. In reply, defenders of limited torture need to show how institutions really could restrict torture effectively. It is not clear how to establish either of these conflicting premises, but our analysis of these arguments as causal slippery slopes has made progress by locating and clarifying the crucial issue.

Whether or not you accept the argument against torture, it reveals what we need to do in order to assess any slippery-slope argument. First determine whether the slippery slope is conceptual or causal. If it is conceptual, ask whether the slope is equally slippery in the opposite direction and whether the problem can be solved by a definition that is justified by its practical or theoretical benefits. If the slippery slope is causal, ask whether setting foot on the slope really will lead to disaster. Asking and answering these questions can help us determine which slippery slopes we really do need to avoid.

Can I trust you?

Our second group of fallacies raises questions about when premises are relevant to the conclusion. It is surprising how often arguments jump from premises about one topic to a conclusion about a different topic.

Blatant examples occur when people fail to answer the question that was asked. This scam saturates political debates and undermines understanding. We all need to learn to spot it and stop it. We need to notice when people fail to answer questions and then call them out publicly.

Here we will focus on more subtle instances of irrelevance. Specifically, many arguments present premises about a person as reasons for a conclusion about some proposition or belief. These arguments can be positive or negative. One might argue, ‘He’s a bad person, so what he says is false.’ Alternatively, one might argue, ‘He’s a good person, so what he says is true.’ The former is described as an ad hominem argument, whereas the latter is an appeal to authority. The difference lies in whether the argument invites me to distrust or to trust the person.

ATTACKING PEOPLE

Here is a classic example of the negative pattern:

It’s an interesting question: Why do so many political protesters tend to be, to put it mildly, physically ugly? … [I]t is simply a visual fact that the students and non-students marching in these picket lines with hand-lettered placards are mostly quite unattractive human beings … They are either too fat or too thin, they tend to be strangely proportioned … But if nature failed to give most of these people much to work with, they themselves have not improved matters much. Ill-fitting blue jeans seem to be the uniform. Sloppy shirts. Hair looks unkempt, unwashed. They wear a variety of stupid-looking shoes. Yuck …5

This writer is clearly trying to get readers to distrust and dismiss the protesters because of their appearance.

It is hard to imagine that anyone would be misled by such a blatant fallacy, but sometimes it does work by associating the target with negative feelings such as disgust, contempt or fear. These negative emotions can produce distrust, even when the features that trigger the negative emotions are irrelevant to the topic at hand. This trick has been used to exclude the views of dissident groups throughout history. It might also lie behind laws (throughout much of the United States) that deprive ex-felons of a right to vote, even on issues which they know and care a lot about, such as criminal policy. And it infects criminal trials when juries distrust a rape victim’s allegations because she previously had voluntary sex more often than they think proper.

Ad hominem arguments vary in flavour. The most flagrant fallacy occurs when someone argues, ‘She has a bad feature, so what she says must be false.’ A less blatant form occurs when reliability is doubted, as in ‘She has a bad feature, so you cannot trust what she says.’ The crucial difference between these two variations is that the former concludes that a claim is false, whereas the latter leaves us not knowing what to believe. A third version denies someone’s right to speak at all: ‘She has a bad feature, so she has no right to speak on this topic.’ This conclusion again does not tell us what to believe, because it leaves open the question of whether her views would be true and reliable if she did speak. Often, as in the quotation above, it is not clear which of these points is being made, even though the point lies somewhere in this general area.

Each kind of ad hominem fallacy is able to mislead partly because other arguments of the same kind do provide reasons for their conclusions. Spectators do not have the right to speak during parliamentary debates, no matter how reliable they would be if they did speak. You really should not trust someone who failed physics but takes a strong stand on a controversy in physics. And sometimes the features of people even give reasons to believe that what they say is false, such as when the owner of a cheap-clothing shop tells you that his products are made of the finest silk.

Despite this possibility, ad hominem arguments are fallacious often enough that they should be inspected with great suspicion. You should always take great care before reaching a conclusion about a belief from negative premises about the believer.

Unfortunately, people are rarely this careful. As we saw in Part One, conservatives often reject their opponents’ views by calling their opponents liberal, just as liberals often dismiss their opponents’ views by calling their opponents conservative. Such classifications commit ad hominem fallacies insofar as they use premises about the person being liberal or conservative to reach conclusions about particular claims by those people. Liberals are right sometimes, and so are conservatives, so it is very dubious to argue that any belief is true or false just because the believer is liberal or conservative.

The mistake is different when someone calls their opponents stupid or crazy. These are attributes of the person, so this argument is still an ad hominem. Nonetheless, it is legitimate to distrust the views of people who really are stupid or crazy, at least when their views are idiosyncratic. The main problem here is that the premises are usually false, because the person being attacked is not really stupid or crazy.

A general tendency to be fooled by these fallacies feeds the political polarization that impedes cooperation and social progress. When we dismiss opponents on the basis of what they are, we cut ourselves off from any hope of understanding them or learning from them. That is one reason why we need to be careful to avoid this kind of fallacy.

In general, whenever you encounter any ad hominem argument that moves from premises about a person’s negative features to a conclusion about that person’s claim, you should critically evaluate whether the premises are true and also whether the negative feature really is relevant to the truth of the claim, to the reliability of the person, or to the right of this person to speak on this issue. Asking these questions will help you reduce both personal errors and social polarization.

QUESTIONING AUTHORITY

The positive pattern of arguing from people to positions is at least as common as the negative pattern. The tendency to trust people whom we like or admire has been described as the halo effect (after angels with halos), and the tendency to distrust people whom we dislike has been called a horn effect (after devils with horns). We are subject to both effects: halos and horns. We trust our allies as much as we distrust our opponents. Indeed, we often trust our allies too much.

When people trust an authority, they argue from premises about that authority to a conclusion about what that authority said. I might argue, ‘My friend told me that our neighbour is having an affair, so our neighbour is having an affair.’ This argument is only as strong as my friend is reliable on issues like this. Similarly, I might argue, ‘This website or news channel told me that our President is having an affair, so our President is having an affair.’ This argument is only as strong as this website or news channel is reliable on issues like this. If a friend or news channel is not reliable on issues like this, then sources like these do not deserve our trust on this issue. But if they are reliable, then they do deserve at least some trust, even if they disagree with us.

How can we tell whether a source of information is reliable on a particular issue? There is no foolproof test, but a good start is to ask a simple series of questions.

The first question that we always need to ask is simple: ‘Did the arguer cite the authority correctly?’ The news article that we reconstructed in Chapter 8 quoted Robert Jauncey and paraphrased an Asian Development Bank (ADB) report. We should have asked, ‘Did Jauncey really say these precise words? Did the ADB really report what the article claims?’ It is surprising how often people misquote authorities either intentionally or by mistake. Even when authorities are quoted accurately, their words are sometimes pulled out of context in ways that distort the meaning. Jauncey was quoted as saying, ‘There has been a rapid rise of urban villages in recent years due to increased poverty and the negative impacts of climate change.’ Now imagine that his next sentence was, ‘Fortunately, these trends are slowing and even reversing, so we do not need to worry about urban villages in coming years.’ If he had said this – he didn’t, but if he had – then the quotation in the article would have been extremely misleading, even though he did say exactly what it reported that he said. Thus, whenever you encounter an appeal to authority, you should ask not only whether the appeal accurately reported the authority’s words but also whether the appeal correctly represented the authority’s meaning.

The second question to ask about appeals to authority is more complex: ‘Can the cited authority be trusted to tell the truth?’ Whereas the first question was about words and meanings, this second question is about motives. If the authority had some incentive to lie, or if the authority has a tendency to report its findings loosely or in misleading ways, then it cannot be trusted even when it is quoted correctly. For example, if Jauncey were trying to raise money for a charity that employs him, so that he would benefit personally if he could convince you to donate money to help solve problems of urban villages, then you would have reason to wonder whether he was exaggerating the problem for his own purposes. His self-interest then gives grounds for mistrust, since it could lead him to report a falsehood even when he knows the truth.

What should we do if an authority cannot be trusted because of self-interest or whatever? One approach is to check independent authorities. If different authorities do not depend on each other and have no motivation to promote the same view, but they still agree, then the best explanation of why they agree is usually that their belief is accurate – so we have reason to trust them. To justify trust, seek confirmation.

The third question is even trickier: ‘Is the cited authority in fact an authority in the appropriate area?’ It takes a lot of work to become an authority in even one area, so few people are able to achieve authority in a wide range of areas. People who know a lot about history usually do not know as much about mathematics, and vice versa. Real masters of all trades are extremely rare. Nonetheless, even when their expertise is limited to a specific topic, authorities often think that they know more about other topics than they actually do. Success in one area breeds overconfidence in others.

The most obvious cases occur when athletes endorse cars or other commercial products that have nothing to do with the sports in which they are experts. Sports heroes as well as actors, business leaders and military heroes also often endorse political candidates, even when there is little or no basis for assuming that these experts in their own fields know more than anyone else about political candidates or policies.

A similar problem arises in law. Psychiatrists and clinical psychologists are trained in diagnosis and treatment of mental illnesses, but lawyers sometimes ask them to predict the likelihood of future crimes by defendants. Are they authorities in this area? No, according to their own professional organization: ‘It does appear from reading the research that the validity of psychological predictions of dangerous behaviour, at least in the sentencing and release situation we are considering, is extremely poor, so poor that one could oppose their use on the strictly empirical grounds that psychologists are not professionally competent to make such judgments.’6 In short, authorities on psychiatric diagnosis and treatment are not authorities on the prediction of criminal behaviour. As a result, to appeal to their authority as a basis for legal decisions is fallacious. This fallacy can be uncovered and avoided by asking whether the cited authorities are authorities in the right area.

Fourth, we should ask, ‘Is there agreement among appropriate experts on this issue?’ Of course, there cannot be agreement among appropriate experts if there are no appropriate experts. Some issues cannot be settled by expert opinion. No group of experts now can settle whether there is life on Mars. They need more evidence than we have at present. No group of experts could ever settle which kind of fish tastes best. That is not the right kind of issue to settle conclusively. We can identify such gaps in expertise by asking whether this is the kind of question that can now be settled by expert consensus.

If so, we can next ask whether experts have reached agreement. Of course, unanimity is not required. There will always be a few dissenters, but the evidence can still be strong when almost all experts agree. Doctors have reached a consensus that smoking tobacco causes cancer. Of course, the experts have evidence for this claim, but few non-experts know any or many details of the studies that convinced the experts that smoking tobacco causes cancer. That is why we need to rely on expert authorities. When non-experts argue, ‘Doctors agree that smoking causes cancer, so that’s good enough for me to believe that it does’, it would not make much sense to insist that they tell us how doctors reached that consensus. It is enough for non-experts to know that experts did reach a consensus.

In some cases, the appropriate kind of expert is simply a witness. The experts on whether a government official communicated with a foreign spy include witnesses who saw them meet or heard them talk. To get agreement between experts, then, is simply to have one witness confirm what the other said. As long as their shared story is not denied by other reliable sources, such confirmation can reduce the chance of error and justify belief. That is why most good news reporters wait to deliver stories only after they are confirmed by multiple independent sources.

A fifth question is about the motives of the person who appeals to an authority: ‘Why is an appeal to authority being made at all?’ When a claim is obvious, we can simply assert it and maybe also call it obvious. Then we do not need to add an appeal to any authority. It would be pointless to argue, ‘Most mathematicians agree that 2 + 2 = 4, so it must be true.’ Thus when someone does appeal to an authority, they usually make that appeal because they know that their claim is not obvious, at least to non-experts. Their appeal signals that they know their audience could reasonably raise questions, so they cite the authority in order to head off those questions. The best response, then, is to ask the very questions that they are hoping to avoid.

To see how these five questions work together, let’s apply the series to science. Many people assume that science does not depend on any authority. In their view, religion and law depend on authorities, but science works purely by observation and experimentation. That is incorrect. Almost every scientific paper cites many authorities who have previously settled other issues so that this paper can build on those predecessors to address a new issue. Sir Isaac Newton, one of the greatest scientists of all time, said that he stood on the shoulders of giants, and he meant previous authorities.

What justifies scientists in trusting other scientists as authorities? After all, scientists are human, so they are fallible like the rest of us. The difference is that individual scientists work within larger groups and institutions that are structured to foster reliability. One virtue of science that is conducive to reliability is the insistence on replication by independent scientists or laboratories. Independent attempts at replication are unlikely to succeed when results are distorted by personal motives and mistakes. Another feature of science that breeds reliability is competition. When one scientist reports a new finding, other scientists have strong incentives to refute it. With so many smart people trying so hard to find mistakes, only the best theories survive. We have reason to trust any view that survives such a process.7 Of course, many scientific theories have been overturned, and most scientific theories today will probably be overturned in the future. Nonetheless, we can still have reason to trust the best theories and data that we have now.

One important recent example is the Intergovernmental Panel on Climate Change (IPCC), which includes hundreds of top climate scientists from around the world.8 This large and diverse group has worked long and hard to reach consensus about many, though far from all, aspects of climate change. Suppose that someone appeals to the IPCC as an authority to argue that human activities that emit greenhouse gases are causing at least some climate change. Is this appeal to authority a strong argument? To assess it, we need to ask our questions.

First, did the arguer cite the authority correctly? Some environmentalists fail to cite qualifications in the IPCC reports. This omission might distort their arguments, so we need to check carefully. Still, many passages in their reports do show that the IPCC really does conclude that human emissions are causing some climate change.

Second, can the cited authority be trusted to tell the truth? This question asks whether the scientists in the IPCC have motives to exaggerate the extent of climate change. If so, we have some reason to distrust them. In fact, the members of the IPCC have incentives to uncover mistakes, because their reputations will suffer if they mess up. It would be too far-fetched to imagine a conspiracy among so many disparate scientists.

Third, is the cited authority in fact an authority in the appropriate area? Here we need to check the credentials and areas of expertise of the members of the IPCC. We find that they were chosen because their expertise was relevant.

Fourth, is there agreement among the appropriate experts on this issue? The IPCC does not agree on every issue, and a few dissenters remain outside the mainstream. Nonetheless, the goal of bringing together so many diverse experts in the IPCC is to determine which claims they do agree on and then to get them to sign their joint report on the points of agreement.

Fifth, why is an appeal to authority being made at all? Because the future and causes of climate change are unclear without extensive research and also because proposals to reduce climate change are likely to impose serious costs on many people. This issue matters, so we need to be careful.

After asking these questions, an accurate appeal to the authority of the IPCC ends up looking very good, so we do have strong reasons to believe that climate change is being increased by human activities that emit greenhouse gases. This assessment does not mean that there are no problems in the IPCC. Nothing is perfect. The point is only that this institution is self-correcting, like science as a whole. The IPCC still might be wrong, and future evidence might undermine its claims. That is a risk with all inductive arguments. But inductive arguments can be strong without certainty, so the IPCC reports can give us strong reason to believe that at least some climate change results from human activities.

Nonetheless, this scientific conclusion by itself cannot solve the policy issues regarding what to do about climate change or global warming. The IPCC is often cited as an authority not only on the future and causes of climate change but also on what the government should do about it. To assess this different appeal to authority, we should focus on the question, ‘Is the cited authority in fact an authority in the appropriate area?’ A negative answer is suggested because climate scientists are experts on science rather than on government policy. A climate scientist who knows that reducing greenhouse gas emissions will slow global warming still might not have the expertise to know whether or how much carbon taxes or the cap-and-trade system will succeed in reducing greenhouse gas emissions, whether or how much these policies will slow economic growth, and whether these policies are politically feasible or would violate standing laws. To settle those separate issues, we need experts from outside of science. Thus our questions can illuminate not only the strengths but also the limits of science.

These questions are not foolproof, of course. Opponents will often give very different answers when they ask whether there is agreement among experts and whether a certain source is an authority in the appropriate area and can be trusted to tell the truth. These continuing controversies show that we should not merely ask these questions by ourselves. We should ask other people to ask these questions. We should also not simply ask allies who agree with us. Instead, we should ask our opponents. And we should ask them not only who is an authority to be trusted, but also why they trust those authorities. We need to ask for reasons to back up any appeal to authority, at least in controversial areas. This example shows again why we need to learn to ask the right questions, including questions about reasons.

Have we gone anywhere yet?

The third kind of fallacy makes no progress beyond its premises. More technically, an argument begs the question when its premises need to be justified but cannot be justified without assuming or depending on its conclusion. This meaning is not far from common parlance, such as ‘My blood sugar levels are very high, which begs the question of why I am eating cake.’ Here ‘begs the question’ means ‘raises the question’. Similarly, an argument begs the question when it raises the question of why we should believe its premises if we doubt its conclusion.

Here’s a common example: ‘The death penalty is immoral, because it is always wrong to kill.’ The death penalty by definition involves killing, so this argument is valid in our technical sense. It is not possible for its premise to be true when its conclusion is false, because the death penalty must be immoral if all forms of killing are immoral. Despite its validity, this argument fails to justify anything, because there is no way to justify its premise that killing is always wrong without already assuming its conclusion that killing is wrong in the particular case of the death penalty. The death penalty might be the one exception that shows why not all killing is wrong, because what is really wrong is killing innocent people. Defenders of the argument need to justify its premise without assuming its conclusion, but they have not done that yet in the simple argument as stated, and it is hard to see how they would justify its premise independently of its conclusion.9 In this way, the argument assumes its conclusion from the start, so it gets nowhere.

The same fallacy can be committed on the other side by arguing like this: ‘The death penalty is moral, because we should repay a life for a life.’ Again, the premise that we should repay a life for a life already assumes that the death penalty is moral, since the death penalty for murder is repaying a life for a life. Thus this argument cannot justify its conclusion, because its premise needs to be justified and cannot be justified without already assuming its conclusion.

Here’s another infamous example: ‘The Bible says that God exists. The Bible is the word of God (as it says in II Timothy 3:16). God would not speak words that are not true. Therefore, God truly exists.’ The premise that the Bible is the word of God begs the question in two ways. First, a being cannot speak any word without existing, so this premise already assumes the conclusion that God exists. Second, II Timothy 3:16 is part of the Bible, so it also begs the question to cite that verse as evidence that the Bible is the word of God. What argument gives us reason to believe what the Bible says about itself?

The same kind of fallacy is committed by some opponents of religion when they argue like this: ‘This evolutionary biologist says that the theory of evolution is true. Evolutionary biologists would not say anything untrue about evolution. Therefore, the theory of evolution is true.’ The second premise begs the question because it assumes the conclusion that the theory of evolution is true. If the theory of evolution were not true, then evolutionary biologists would say something untrue about evolution (contrary to premise 2) when they say that the theory of evolution is true (as reported in premise 1). As a result, this simple appeal to evolutionary biologists cannot justify its conclusion any more than the preceding religious appeal to the Bible. Scientists need independent justification for their theories just as much as theologians do. The crucial question is who has such justification.
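The biblical appeal and the evolutionary appeal share a single schematic form, which can be sketched as follows (the symbols and labels here are my own shorthand, not anything in the original arguments):

```latex
% Shared form of both circular appeals to a source S:
%   P1: S says that p.
%   P2: Nothing S says (about this topic) is untrue.
%   C:  Therefore, p.
\[
\begin{array}{ll}
\text{P1:} & \mathrm{Says}(S, p) \\
\text{P2:} & \forall q\,\bigl(\mathrm{Says}(S, q) \rightarrow q\bigr) \\
\hline
\text{C:}  & p
\end{array}
\]
% The circularity: if C were false, then by P1 the source S would have
% said something untrue, so P2 would be false too. Anyone who doubts C
% must therefore doubt P2, which means P2 cannot be justified without
% already assuming C.
```

The schema makes it easy to see why swapping in a different source, whether the Bible or an evolutionary biologist, does nothing to remove the circularity; what is needed is independent support for P2.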

As always, this criticism of the argument does not imply that the conclusion in any of these pairs of arguments is either true or false. The point, instead, is simply that the issue cannot be resolved with arguments like these, because they beg the question. Some other argument is needed. Whether a better argument is possible will be controversial, but it is significant progress to recognize which arguments fail.

Is that all?

Have we covered all of the fallacies that people ever commit? Of course not. There are plenty more. Some fall into patterns like those we discussed. Genetic fallacies, appeals to ignorance and tu quoque (or appeal to hypocrisy) resemble ad hominem arguments. Appeals to emotion, to personal experience, to tradition and to popular opinion resemble appeals to authority. False dichotomy sometimes resembles begging the question. These other arguments can be understood by comparing them to the fallacies that they resemble. Still other fallacies form new patterns, such as the gambler’s fallacy, fallacies of composition and division, false cause and so on. Some books and websites list hundreds of fallacies.10 We will not do that here. Long lists are boring.

So-called fallacies on standard lists are not always fallacious. We saw that slippery-slope arguments and appeals to authority sometimes provide strong reasons. This potential makes it misleading to refer to the general type of argument simply as a fallacy.

The same point applies to appeals to emotion, which are often seen as fallacious and opposed to reason. When someone describes the anguish and weariness of refugees as well as their empathy for refugees and revulsion at the ways they are treated, these emotions can provide good reasons to help refugees, because the emotions point to suffering and injustice. These emotions show nothing if they are irrational, but normal emotions can sometimes be reliable guides, much like authorities. We can decide when to trust emotions by asking questions much like those we asked about appeals to authority. Why am I feeling this emotion now? Are my emotions distorted by self-interest or irrelevant motives? Do other people feel this same emotion in similar situations? Does this emotion reliably react to relevant facts in the world (such as suffering and injustice)? We need to be careful when we appeal to emotions, just as we need to be careful when we appeal to authorities, but some appeals to emotion are not fallacious.

More generally, we should not be too quick to accuse opponents of fallacies. They do not commit an ad hominem fallacy every time they criticize a person. They do not commit a slippery-slope fallacy every time they use a word that is slightly imprecise (like all words). They do not commit a fallacy of appealing to tradition every time they point out that their views align with tradition. When accusations of fallacy become a knee-jerk reaction without thought, they cease to be illuminating and become annoying and polarizing. Such name-calling is not much better than simply announcing, ‘I disagree.’

Instead of abusing opponents with names of fallacies, we need to look carefully and charitably at each argument. In particular, we should always ask whether what appears to be a fallacy can be fixed simply by adding a suppressed premise. For example, suppose someone argues that a government employee did not reveal classified information on her private server, because we cannot find any specific email on that server that revealed anything classified. Or suppose someone argues that a political candidate did not collude with the enemy, because we cannot prove that he did. In both cases, critics could retort, ‘Appeal to ignorance! That’s a fallacy!’ That label will not help anyone understand the issues. It would be much more constructive to ask whether the argument assumes a suppressed premise. It does: ‘If he or she had done it, we would know (or at least have the kind of evidence that we lack).’ That suppressed premise is true in some cases: if my son had wrecked my car last night, I would probably see dents in my car. But that same suppressed premise is false in other cases: if my son had come home late, I would have known (even though I was sound asleep). In every case of appeal to ignorance, then, we need to ask whether the suppressed premise is true: if an email did reveal classified information, would we find it? If the candidate did collude, would we know it? In order to get beyond name-calling and figure out how strong an argument really is, we need to reconstruct the argument as charitably as possible and then ask how strong it is in its best form.

Of course, some arguments will still end up fallacious. We should not be too quick to accuse, but we should also not be too slow to point out fallacies and weaknesses in arguments. Moreover, we need to be able to find and explain flaws in arguments even when we do not have a name for those flaws. The next chapter will teach that skill.