4. BUILDING YOUR OWN BULLSHITOMETER

As we’ve said before, there’s one person more likely than anyone else to make you believe in bogus information, and it’s someone you see every time you look in a mirror. So, when you set out to build a Bullshitometer—a bag of mental tools to help you detect and counter even the most plausible-seeming bullshit—it’s a good idea to remember who you’re going to be using it against:

Not just everyone else, but YOU.

Of course, as soon as you start using your Bullshitometer you’re going to find that sometimes you change your mind on one issue or another. A lot of people think that changing your mind about something is a sign of weakness, or even a lack of integrity. They bray, “Inconsistency!” and use smear-words like “flip-flopping.”

One of the most important periods in human history was from the late seventeenth century to the late eighteenth century. It’s called the Enlightenment. People in Europe began to re-examine many of the ideas that for centuries they’d taken for granted and discovered that, more often than not, those ideas were false. We’ve been living in a sort of ideas revolution ever since, and each day we benefit from that. Had it not been for the sea change in human thought that occurred during the Enlightenment, human life would still be nasty, brutish, and short.

We owe a deep debt of gratitude to those people who changed their minds.

Or, as bullshitters would say, who flip-flopped.

Scientists change their minds about things all the time as new information comes along. Only once something has become established as a major theory (see page 67) will they start to regard it as likely true, and even then they’ll constantly be investigating its details, with the consensus shifting backward and forward as they get closer and closer (they hope) to the truth.

That’s the way intelligent people behave. It’s a logical way of going about things, and it very obviously works.

It’s not only scientists who operate this way. Successful companies are generally the ones that are ready to adapt the nature of their business in response to change. “If it ain’t bust, don’t fix it” is a great general rule that a lot of companies wisely subscribe to, but they need to be constantly checking to make sure it really ain’t bust.

Bullshitters, however, regard “inconsistency” as a failing. This attitude has come to prevail particularly in politics. It can be a death sentence for a politician’s career should they change their mind about an issue—should they, in fact, do what smart people do all the time.

But then politics is, let’s recall, an arena in which George H. W. Bush could score major political points by claiming he didn’t like broccoli.

THE SCIENTIFIC METHOD—THE BASICS

So we’ve established that it’s useful that scientists are constantly revising their opinions. Let’s look in a bit more detail at the way scientists approach their quest for the truth.

The Scientific Method is a way of going about the acquisition of knowledge and of understanding the world around us. Despite its name, the approach can often be applied in areas outside the sciences.

The basic form of the Scientific Method dates back to the sixteenth/seventeenth-century English politician and scientist Francis Bacon.1 To understand a subject, his recommendation was to gather as much information as possible and then try to work out some general principles. Next, test those general principles to see if they offer an explanation. Once you’ve established several sets of general principles, you can put them together as the basis for your next line of inquiry. This approach is now called induction.

In practice, induction was useful but hopelessly cumbersome, and almost no one after the seventeenth century actually used it. It was important, though, in that Bacon had recognized the best way to acquire knowledge was via a formal, rigorous approach rather than, for example, believing your hunches or jumping to conclusions.

In the nineteenth century, people formulated a different model, one that’s far closer to what we use today. This was hypothetico-deduction. In this model, scientists go through the following steps:

•  Study the relevant phenomena—gather evidence.

•  Think of a way in which those phenomena might be explained—form a hypothesis.

•  Imagine some as-yet-unobserved consequences of the hypothesis—make predictions based on the hypothesis.

•  Devise experiments to find out if these predictions come true.

If the predictions are confirmed, the scientists reckon the hypothesis could be correct. For example:

•  Studying the night sky, you notice that the stars move across it, so you realize that either the heavens are circling the earth or the earth is spinning—evidence.

•  You hypothesize that the earth is spinning—hypothesis.

•  After doing some heavy thinking, you predict that, if it’s the earth that’s spinning, liquids should form a spiral as they flow into relatively narrow apertures—prediction.

•  Next time you use the toilet, you test this prediction—and voilà!—experiment.

The twentieth-century philosopher Karl Popper realized that the hypothetico-deductive method was incomplete because, strictly speaking, that final step didn’t mean the hypothesis had actually been confirmed, just that it hadn’t been proven false. A complete hypothesis should therefore include not just tests that might confirm it, but also tests that might falsify it. In practice this can often be difficult to achieve, so usually scientists are content if many different tests seem to confirm the hypothesis and, even better, if independent lines of inquiry all converge on the same conclusion.
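
To see Popper's point in miniature, here's a toy sketch in Python (the hypothesis and the data are invented, of course; this illustrates the logic rather than anything scientists literally run). The classic example is the claim "all swans are white," which survived centuries of confirming observations in Europe, right up until black swans were found in Australia:

```python
# Toy illustration of Popper's point: no number of confirming
# observations proves a universal hypothesis, but a single
# counterexample falsifies it.

def test_hypothesis(observations, predicate):
    """Check the claim 'every observation satisfies predicate'."""
    confirmations = 0
    for obs in observations:
        if predicate(obs):
            confirmations += 1  # consistent with the hypothesis
        else:
            return f"FALSIFIED after {confirmations} confirmations: {obs!r}"
    # Every observation so far agrees -- but that's all we can say.
    return f"Not yet falsified ({confirmations} confirmations); still not proven."

# "All swans are white."
european_swans = ["white"] * 1000
print(test_hypothesis(european_swans, lambda color: color == "white"))
# -> Not yet falsified (1000 confirmations); still not proven.

print(test_hypothesis(european_swans + ["black"], lambda color: color == "white"))
# -> FALSIFIED after 1000 confirmations: 'black'
```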

If a hypothesis stands up to all the tests and falsification attempts hurled at it over a good period of time, it achieves the status of a theory. Unlike math, science doesn’t deal in absolute proofs, because theories are always subject to change or at least modification if better evidence comes along, but a theory that has attracted the support of a large majority of the scientists working in the relevant field1 is about as near as you can get, in scientific terms, to a proven fact.

This is a much more powerful meaning of the word “theory” than the one we use in general conversation—“I have a theory that Megan Fox is going to win an Oscar.” The fact that many people are unfamiliar with the difference between the two meanings is often exploited by bullshitters. How often have you heard Creationists say that the theory of evolution by natural selection is “just a theory”? Matters aren’t helped by the fact that scientists themselves often use the word “theory” when really they mean “hypothesis” (String Theory, for example, is actually just a hypothesis so far)—and often, too, they use it in the colloquial sense, just like anyone else.

It’s worth looking also at what we mean by the word “prediction” in the description above. One of the most dramatic scientific predictions of all time was made by the English astronomer Edmond Halley in 1705 when he said the comet that had appeared in 1682 would reappear in 1758. He based this prediction on noticing a pattern among historical records of comets: there seemed to be a bright one every 76 years. He realized that it must be the same comet reappearing, and, thanks to the theory of gravitation worked out by his friend Isaac Newton, could state with confidence that the object was in orbit around the sun.
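
For the record, the arithmetic behind the prediction is about as simple as science gets. The dates of the two earlier apparitions (1531 and 1607) aren't given above; they're the ones Halley is generally recorded as having compared:

\[
1607 - 1531 = 76 \qquad\text{and}\qquad 1682 - 1607 = 75,
\]

so the period is roughly 76 years, and the predicted next return is

\[
1682 + 76 = 1758.
\]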

It’s easy to think that all scientific predictions should be like that: of something that’s going to happen in the future. In a sense they are, but the event that’s going to happen in the future can be the discovery of something that happened in the past. When Charles Darwin was working on his theory of evolution he admitted it might be difficult to find transitional fossils—fossils of organisms that lie midway between established species (see page 60). But he predicted that, if his theory was correct, it was possible some would be found.

In fact, he was overcautious. Lots of transitional fossils have been discovered since Darwin’s time. Look up the entry “List of transitional fossils” in Wikipedia and you’ll see how long the list is—and, as its authors acknowledge, more could be added!1 The most famous is probably Archaeopteryx, a creature that can be seen as midway between the winged dinosaurs and modern birds. Another famous one is Tiktaalik, discovered as recently as 2004 and representing the transition from fish to amphibian.

The discovery of all those transitional fossils is a triumphant fulfillment of Darwin’s prediction. And it’s one of the reasons his theory retains its status as a theory—in other words, as a pillar of human knowledge.

In practice, people often don’t follow the steps of the Scientific Method precisely: a step might be missed, or a couple of steps might swap in order. But that’s the basic plan.

If we use the Scientific Method in our thinking, we have a powerful analytical tool that’ll serve us well when we’re confronted by a wide range of problems. Just on its own the Scientific Method is a very good Bullshitometer.

THE SCIENTIFIC PROCESS—THE OTHER STUFF

Science used to be advanced by the efforts of individuals—individuals like Isaac Newton and Marie Curie. But more recently, scientific research has usually been carried out by teams, and the important moment has been not so much the announcement of a team’s results as the acceptance of those results by the relevant scientific community at large. This way of advancing by consensus may not be so dramatic as individual wild-eyed scientists shouting “Eureka!”—it’s not the course Hollywood would have chosen, for sure—but it’s very effective.

Very occasionally there’ll be a major change in the consensus view. An example came in the late 1950s and early 1960s, when the discovery of seafloor spreading led to the development of the theory of plate tectonics. Before then, the consensus among geophysicists was that the idea of continents moving around, championed by Alfred Wegener and others since the beginning of that century, was impossible. Afterward, the consensus was that the continents could and did “drift.” Such a relatively rapid change in the consensus view has been called a paradigm shift.

In the practice of professional science today, there are a few other stages that are often described as part of the Scientific Method. In reality, though, they’re part of what can be called the scientific process. You should be wary of any “great, new scientific breakthrough” that hasn’t gone through the following stages, or something like them.

•  Publication in a scientific journal. There are many thousands of scientific journals in the world, and a surprising number are disreputable—they’re corrupt or phony. Whenever you see that something has been published in a scientific journal and it looks iffy to you, check the journal’s entry in Wikipedia to see if it’s for real.1

•  The properly accredited journals use the system of peer review. After a paper is submitted, it’s sent to other experts in the field (peers) to make sure it’s original, useful, and error-free. Only if those experts “okay it” do the journal’s editors accept it for publication. This can be a slow and boring process, but it does make it far less likely that erroneous information is published.2

•  The published papers don’t just include what the team concluded from the experiments or research they did. The scientists also make their raw data available so that readers can check it for possible error, not only in the data itself but also in the way it was analyzed.

•  Finally, details are given so readers can replicate the experiments—repeat them to see if they get the same results. If several other teams confirm the results, then they can be accepted as valid.3

Occasionally, errors slip past the peer reviewers, other experimenters find they can’t replicate the results, or a reader notices that a paper (or part of a paper) has been plagiarized from elsewhere or that there is actual fraud involved. For these and other reasons, a journal may retract a paper that it has published: It warns all readers that the paper is suspect, removes it from online publication, and takes whatever other measures it can to make sure the false information isn’t further disseminated. Retraction can happen fairly quickly, but sometimes it takes years before a journal retracts a dubious piece of work. It took until 2004, for example, for the medical journal The Lancet to partially retract the 1998 paper it had published by Andrew Wakefield and others claiming that the MMR vaccine caused autism (see page 164); the journal didn’t fully retract the paper until 2010.

A major problem with the modern scientific process is that negative results tend not to get published. If you’ve discovered a way to extract usable energy from drinking water, you obviously want to tell the world your results, and the scientific journals will likely be tripping over themselves with offers to publish them. If, by contrast, you do extensive research trying to extract cheap energy from water but fail in that quest, chances are you won’t bother writing up your results; even if you do, it’s likely no one will want to publish them. After all, who’s interested in experiments that merely confirm what everyone assumed was the case? Yet those results can often be useful in themselves. If your paper recounted all the unsuccessful methods you’d tried, it could save later researchers a lot of time and effort.
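
How damaging can this be? Here's a minimal simulation in Python (the numbers are entirely made up) in which there is no real effect at all, yet a literature that publishes only "statistically significant" results ends up reporting nothing but impressive-looking effects:

```python
# Toy simulation of publication bias: the true effect is ZERO, but if
# only "significant" results get published, the published record looks
# as though a real effect exists.
import random
import statistics

random.seed(42)

published = []
for study in range(1000):
    # Each "study" measures a quantity whose true value is 0.
    sample = [random.gauss(0.0, 1.0) for _ in range(20)]
    mean = statistics.mean(sample)
    stderr = statistics.stdev(sample) / len(sample) ** 0.5
    if abs(mean / stderr) > 2:  # crude filter: roughly p < 0.05
        published.append(mean)

print(f"{len(published)} of 1000 studies got published")
print(f"average size of published effects: {statistics.mean(abs(m) for m in published):.2f}")
# Roughly 50 studies report effects of about 0.5 in size; the ~950
# "nothing happened" studies that would correct the picture stay in
# the file drawer.
```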

Another problem with the scientific process was summed up by Albert Einstein:

Because the academic career puts a young person in a sort of compulsory situation to produce scientific papers in impressive quantity, a temptation to superficiality arises that only strong characters are able to resist.

Modern academics are under strong pressure to produce lots of published papers to bring renown to the institution where they work (so the reasoning goes). Often their jobs can depend on how many papers they publish, whether or not those papers are useful. It’s a case of “publish or perish.”

The result is that, as many scientists complain, far too many papers are published that don’t have anything new or important to say. This obviously damages the scientific enterprise as a whole, because it’s quite possible for genuinely significant work to go unnoticed in the blizzard of trivia. Aside from anything else, it means too often scientists are writing superfluous papers and working to get them published when they could be better employed doing actual science.

EVALUATING SOURCES

In the previous chapter, we talked about the form of the argument from authority in which the bullshitter cites dozens of irrelevant books and papers as “authorities.” The reason the trick often fools people is that it looks very much like the perfectly respectable practice of citing relevant sources in support of a viewpoint.

If you go to the hospital for a blood test using a note from your bank manager as an authorization, the nurses will laugh at you. If you go with a prescription from your doctor as authorization, out come the syringes. Both bank manager and doctor are authorities, but only the doctor is relevant in this instance. Likewise, if a TV talking head tells you not to worry about climate change while the assembled climate scientists of the IPCC1 tell you it’s a major emergency, it’s easy enough to work out which is the more relevant authority. Often, though, evaluating your authorities—sources—can be a bit more difficult.

If you’re checking something in the sciences, the best place to start is often Wikipedia. If you’re trying to evaluate something that was published in a science journal, Wikipedia offers—as we saw (page 72)—a quick way of checking if the journal is respectable or bogus.

Otherwise, though, you should always be a bit suspicious of using Wikipedia articles as reliable sources of information. They’re often surprisingly good, and for straight data they’re as accurate as any you’ll find—if you want to know the number of home runs Babe Ruth hit, look no further than his Wikipedia entry—but often they contain errors, and sometimes glaring ones. (The Wikipedia editors can’t be correcting everywhere at once!) Here’s the key: The real value of Wikipedia as a tool in evaluating sources lies less in the articles, and more in the sources that the articles cite.

Once you’ve established that the source you’re checking actually exists (sometimes bullshitters just invent their sources, knowing that hardly anyone will trouble to check) and that the journal the paper appeared in is reputable, the next thing to do is find the paper itself. Almost all reputable scholarly journals now post an abstract of every paper they publish online for free.1 The abstract is a summary; essentially, the researchers state the question they set out to answer, the experiment(s) they did, and the conclusions they came to. Abstracts are often dry and jargon-ridden, but with a bit of effort most of them can be deciphered.

The point of reading the abstract is to check that the paper actually says what the suspected bullshitter claims it does. You’d be surprised how often it doesn’t.

Of course, many times the sources you want to evaluate aren’t obscure scholarly papers. Often information comes from newspapers, online news outlets, and TV news. Take a look at the outlet’s history, its biases, and what other kinds of reporting it does; this will help you figure out whether to treat it as a trusted source.

Always remember that you yourself are a source that needs to be evaluated. Whenever you judge information, you do so through a filter that’s made up of your own preconceived ideas—a whole lifetime’s worth of them. You can’t hope to escape from those biases entirely, but you can at least learn to recognize them. If you continue to hear things that bolster one or more of your biases, it’s worth standing back for a moment to make sure you aren’t being fed a line of bullshit.

And remember, too, that a whole stack of unreliable sources never adds up to a single reliable one.2 You can find a million people in cafes who’ll tell you that AIDS only affects gay men but, if Scientific American says otherwise (which it does), it doesn’t take a whole lot of critical thinking to work out who to believe.

CONFIRMATION BIAS AND MOTIVATED REASONING

These are two closely related traps into which it’s far too easy to fall—and all of us do sometimes fall into one or the other.

The term confirmation bias refers to the tendency we have to notice things that reinforce our beliefs and fail to notice things that would undermine them. Even more alarmingly, if you set out looking for evidence that your beliefs are true, you may sooner or later start finding it everywhere you look!

To take an example that’s close to home, there have almost certainly been times when you’ve disliked someone enough that, the way you saw it, there was literally nothing they could do right. They smiled and said hello to you in the street? They were laughing at you. They offered to help you with your homework? They were being condescending. A few months later you probably looked back on your former self and realized how biased you’d been. But, at the time, all the evidence seemed to indicate that the person deserved your dislike.

Where this kind of interpersonal confirmation bias becomes harmful on a massive scale is in its contribution to racial and other ethnic tension. People of one ethnic group (let’s call them the Rounds) get it into their heads that people of another (the Squares) are always violent, or bullying, or dishonest, or sly—pick whatever unsavory human characteristic you want. From that point on, every time a Round sees a couple of Squares fighting, that’s confirmation of the violent nature of all Squares.

But hang on a minute! What about all the other Squares, the ones who aren’t fighting?

The confirmation bias kicks in again. The Squares who aren’t fighting aren’t doing enough to stop the ones who are! That just goes to show that they’re really supporting the fighters, which is yet another piece of evidence that Squares are fundamentally violent by nature . . .
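
If you want to see how this bias manufactures its own "evidence," here's a toy model of the Rounds and the Squares in Python (the fight rates and the noticing rates are invented). Both groups behave identically; the only difference is what the observer notices:

```python
# Toy model of the Rounds/Squares story. Both groups fight at exactly
# the same rate, but a biased observer notices every Square fight and
# only one Round fight in five.
import random

random.seed(1)

FIGHT_RATE = 0.05                       # identical for both groups
NOTICE = {"Square": 1.0, "Round": 0.2}  # the observer's bias

tally = {"Square": 0, "Round": 0}
for _ in range(10_000):                 # 10,000 encounters with each group
    for group in ("Square", "Round"):
        fighting = random.random() < FIGHT_RATE
        if fighting and random.random() < NOTICE[group]:
            tally[group] += 1           # only noticed fights are remembered

print(tally)  # e.g. {'Square': 497, 'Round': 98}
# The remembered "evidence" says Squares fight five times as often,
# even though, by construction, the two groups behave identically.
```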

Motivated reasoning is very similar—and equally hard to recognize when you’re doing it. Imagine you’re confronted by a problem of some kind—say, what is the moon made of? Rather than looking at all the evidence, you decide the answer first—it’s made of blue cheese—and go out in search of evidence to support your case, ignoring anything that might suggest you’re wrong.

A good example is the frenzy that erupts periodically over the issue of voter fraud. We’re told that voter fraud—people voting twice, for example—is a serious problem. And it’s easy enough to find evidence, if you choose to look only at sources that will support the contention. Sometimes, for instance, there are people who are registered to vote in more than one state. What better indication could there be of a dishonest voting intent? The trouble with this argument is that, as any polling official will tell you, when people move from state A to state B, it can often take two or three electoral cycles before their information is scrubbed from the register in state A. There’s no intent to defraud; it’s just that the bureaucracy can be slow to catch up with the reality. Similarly, people can remain on the register for years after they’ve died.

In fact, voter fraud is not so much a serious problem as it is a virtually nonexistent one. In 2012, when Pennsylvania was pursuing a voter ID policy that could have disenfranchised tens of thousands of voters on the pretext that voter fraud was rampant, the state was forced to admit in court that it could find no examples at all of the crime having been committed!

How do you avoid confirmation bias and motivated reasoning? The trick itself isn’t hard—what’s difficult is remembering to use it. Whenever you come to a conclusion, pause for a moment to ask yourself whether you simply jumped to that conclusion or actually surveyed all the relevant evidence to get there, checking that evidence as you went along.

Remember, it’s easier to get the crossword right if you’ve read the clues.

PATTERNS

One of the things our brains are very good at is recognizing patterns—in fact, it’s a fundamental characteristic of human intelligence. Our distant ancestors recognized that the seasons followed a pattern through the year, and that so did the rising and setting of the stars in the night sky. Putting the two patterns together, our ancestors were able to tell the best time to plant their crops, or to hunt for mammoths. Thousands of years later, that same ability to recognize patterns is what makes some people better at chess than others. Music, math, art, sports—there are patterns in almost all human activities, and our competence in detecting them quickly can be a tremendous advantage in problem-solving. Sometimes we do it unconsciously, in which case we call it intuition, or a hunch (or, very misleadingly, instinct).

Unfortunately, it’s far too easy to see patterns that aren’t in fact there. How often do those hunches of ours actually pay off? If you’re like me, surprisingly infrequently. It took me years to learn how lousy my hunch success rate was, because of course I tended only to remember the ones that were successful—the more successful they were, the more I was likely to remember them.
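
It's worth asking how often a "pattern" will turn up in pure noise. Here's a minimal sketch in Python (the streak length and the number of flips are arbitrary choices) that estimates how often 100 fair coin flips contain a run of at least five heads:

```python
# Estimate the chance that 100 fair coin flips contain a streak of
# at least 5 heads -- a "pattern" produced by pure randomness.
import random

random.seed(7)

def has_run(flips, length=5):
    run = 0
    for flip in flips:
        run = run + 1 if flip == "H" else 0
        if run >= length:
            return True
    return False

trials = 100_000
hits = sum(has_run([random.choice("HT") for _ in range(100)]) for _ in range(trials))
print(f"{hits / trials:.0%} of sequences contain a 5-heads streak")
# Prints roughly 80%: a streak that looks meaningful shows up in about
# four out of five completely random sequences.
```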

Whenever you feel a hunch coming on, pause and ask yourself if you have any rational cause to act on it.1

Bullshitters often rely on the fact that we tend to trust our intuition far more than we should. They present themselves in a guise that they think most people will intuitively trust—the kind-eyed, genial evangelist, the bluff, seemingly authoritative TV commentator, the politician who’s all heart (yeah, right)—and let our assumptions do the rest. We see aspects of the bullshitter that fit a pattern we’ve created in our minds, whether it’s truth or baloney—“he’s got presidential hair”—and, if we’re not careful, we act accordingly.

LABELING AND STEREOTYPING

Because of pattern recognition, we tend to classify things—we put them in mental boxes to make it easier for us to think about them. For the most part this is an invaluable tool. Dogs come in all shapes and sizes, from Chihuahuas to Great Danes, but the category “dog” isn’t a random or useless one. If someone tells you they’ve got a new dog, you immediately know what they mean, even if the details need to be filled in.

We categorize stuff all the time. Since the earliest days we’ve sorted the plant and animal kingdoms into mental categories based on their characteristics—carnivores, herbivores, poisonous, smelly, et cetera—and it was a great leap forward for the biological sciences when, in the eighteenth century, the Swedish botanist Carl von Linné (Carolus Linnaeus) introduced some rigor to taxonomy, as this scheme of classification is called. Without the recognition that there are different species, and that the relationships between them can be deduced, Darwin would have had considerable difficulty working out his theory of evolution by natural selection, if indeed it would have been possible at all.

Not all of our classifications are wise, though. The trouble arises when we rely on categorization as a substitute for thinking. Demagogues exploit the trick of persuading us to make false classifications of our fellow human beings. Most famously, the Nazis persuaded the German people that various subsets of the population (Jews and Serbs, for instance) were not to be classified as fully human, and could thus be exterminated without qualm. This technique of dehumanization has been used to inspire genocides throughout history and all over the world: It was responsible, for example, for the near-annihilation of the Native Americans and the Australian Aborigines.

Here’s a more everyday example: It’s all too easy for liberals to dismiss someone as a “fascist” when really they’re just a conservative. And, on the right, some critics of President Obama have been known to denounce him as socialist, Marxist, fascist, atheist, and Islamist all at the same time, even though these labels are mutually incompatible and not one of them is in fact demonstrably true.1

Soon after he was first elected, President George W. Bush cracked that “If this were a dictatorship, it would be a heck of a lot easier . . . just so long as I’m the dictator.” It was a joke. Most of us have made similar jokes: “If I ruled the world . . .” Yet many liberal commentators pretended to take it at face value, and there were plenty among their audience who assumed Bush really did want to be a dictator. It was easier to stick the label “wannabe dictator” on him than to dissect his actual politics.

Labels and stereotypes take critical thinking out of the equation, which isn’t good for anyone. Just as with hunches, whenever you make a quick assumption about someone you should ask yourself whether it’s based on rational fact or on a stereotype.

YOUR VERY OWN BULLSHITOMETER: A QUICK CHECKLIST

You read something in a book, you hear something on the radio, a TV commentator makes a statement, your irritating little brother tells you what he thinks . . . and your antenna starts to quiver because all of a sudden you detect

BULLSHIT!

Here’s a handy checklist of the steps you can take when you think there’s bullshit in the air . . .

•  Ignore the flim-flam and focus on the substance. Page 9.

•  Think about whether the authorities that someone’s quoting really are authorities. Page 56.

•  Check the context of the quotes that people present to you. Taken in the context of an entire speech or article, they could mean something quite different from what you thought. Page 48.

•  Similarly, watch out for the straw man tactic. Page 52.

•  Don’t take data at face value if you’re suspicious of the person presenting it to you. See if you can find the source of the raw data. Page 72.

•  It’s what people say or think that’s important, not what they look like or what country they come from. Avoid labeling or stereotyping people. Page 80.

•  The plural of “anecdote” isn’t “evidence”! Page 58.

•  If someone keeps shifting his ground (or “moving the goalposts”) when he’s trying to persuade you, be suspicious. Page 60.

•  Be alert for false balance in news coverage. The balance point between rational and batshit crazy is . . . batshit crazy. Page 62.

•  Just because you can’t immediately explain something doesn’t mean you have to believe the first “explanation” someone offers you. Take your time and do some more research. Page 63.

•  Hunches are not your friends. If you find yourself jumping to conclusions, they’re probably the wrong ones. Page 67.

•  That impressive list of authorities someone produces in supposed support of what they’re saying? Try to find time to fact-check at least a few of them. Page 56.

•  If all the evidence you find seems to confirm your beliefs, pause for a moment to make sure you’re being objective. Page 77.

•  The word “theory” has several uses. If someone tries to tell you that a piece of established science—like evolution—is “just a theory,” either they’re ignorant or they’re trying to bullshit you. Page 68.

•  Wherever it makes sense to do so, apply the Scientific Method or some variant of it. It’s probably the single most powerful component of your Bullshitometer. Page 66.


_______________

1. Others formulated the idea earlier, notably the tenth/eleventh-century Arab scientist Ibn al-Haytham (Alhazen), but Bacon’s formulation was the most influential.

1. The “in the relevant field” part is important: It’s irrelevant whether dentists agree or disagree with an astronomy theory, for example.

1. Creationists and others often sniff that you shouldn’t look to Wikipedia for information—as if some Answers in Genesis tract might be more reliable! In fact, for this sort of quick information Wikipedia is usually excellent. I’d double-check the spellings of the species names before using them in a term paper, though . . .

1. http://en.wikipedia.org/wiki/Reliability_of_Wikipedia

2. Some of the phony journals claim to be peer-reviewed too. For example, the creationist organization Answers in Genesis publishes Answers Research Journal, supposedly peer-reviewed. Trouble is, the people doing the reviewing are as batshit as the people writing the articles.

3. A problem with the process is that other scientists tend to be busy doing their own research, and therefore only interested in attempting to replicate someone else’s experiments if the results are fundamentally important. Results of minor concern far too often go unchecked. It would make a significant contribution to scientific progress, and thus to the GDP, if governments could establish institutions dedicated to attempting replications of published results. But then of course all the usual suspects would scream about the “waste” of taxpayers’ money.

1. Intergovernmental Panel on Climate Change—see page 195.

1. Sometimes you can find the whole paper, but don’t count on it. And sometimes authors or journals will put a “pre-publication” copy of the paper online.

2. Well, to two reliable ones. It’s always a good idea to cross-check even the best-accredited authorities!

1. Of course, this can be difficult to do in some contexts. If you have a hunch your tennis opponent is going to send the next ball down the line or cross-court, probably better to go with it.

1. If in doubt, go look up “Marxism” in any decent dictionary.