CONCLUSION

Toward a Truthier Future

Early in 2018, I stood in the ruins of the Mayan city of Tulum and watched a small, adorable mammal gleefully eating the flesh out of a coconut. The animal in question was a coati, also known as a Brazilian aardvark—a relative of the raccoon, but cuter, and with less of an air of witchcraft about it. I was delighted to spot one of them, because the Brazilian aardvark is an animal that tells us an awful lot about truth, and how bad we are at it.

You see, there’s one thing that’s particularly interesting about the coati (also known as the Brazilian aardvark). The interesting thing is this: it isn’t actually known as the Brazilian aardvark at all. Or, at least, it wasn’t until 2008, when things went weird.

That’s when Dylan Breves, a student from New York, went on holiday to Brazil, saw some coatis and thought—very wrongly—that they were aardvarks. Not wanting to be embarrassed by his woeful lack of mammal knowledge, he jokingly made a minor edit to the Wikipedia page for the coati, inserting the claim that (you’ve guessed it) they were also known as the Brazilian aardvark.

As far as we can tell, before that exact moment—11:36 p.m. Brasília time, on July 11, 2008—nobody had ever used the phrase “Brazilian aardvark.” It hadn’t been written on the internet, it had appeared in no scholarly articles and it had never been printed in a book.173

Now, normally, a little light Wikipedia vandalism like that would be quickly caught and removed by the site’s ever-vigilant army of volunteer editors. But for whatever reason, despite the fact that aardvarks do not live in South America and literally no one had ever written the phrase “Brazilian aardvark” before Dylan, this one slipped through the net.

And then, because it was on the internet and people trust Wikipedia, it wasn’t long before people started calling the coati a “Brazilian aardvark” for real.

As the New Yorker’s Eric Randall reported in 2014, by that date newspapers like the Daily Mail, the Telegraph and the Independent had all picked it up and run it uncritically.174 The BBC had also used it.175 “Brazilian aardvark on the loose in Marlow,” shouted the headline of one local paper in Buckinghamshire when a coati escaped from a private collection. “So that’s what an aardvark looks like,” ran the headline in another local paper in Worcester, above a picture of a coati not looking like an aardvark.176 You can find photos of coatis captioned as Brazilian aardvarks on the websites of Time and National Geographic, while Scientific American even went as far as flipping the traditional name order in an article on conservation, calling it a “Brazilian aardvark, also known locally as coati.”177 There now appears to be at least one serious scientific paper from a group of actual Brazilian zoologists that uses the name,178 and this completely made-up phrase has been used in books from at least two of the world’s leading academic publishers. One is from University of Chicago Press (“The coati, also known as the hog-nosed coon, the snookum bear or the Brazilian aardvark”179); the other, from Cambridge University Press, rather wonderfully repeats the mistake in a passage about the great eighteenth-century naturalist, Buffon, criticizing other naturalists for repeating mistakes by copying from other naturalists. “The multiplication of errors was one of the most common features of eighteenth-century natural history.”180 Indeed.

All of this raises the question: Is it even wrong anymore? Is the coati in fact now actually also known as the Brazilian aardvark? Did a dumb joke manage to change an animal’s name, just because, if something is on Wikipedia, then it will spread out into the world until it becomes sort of true?

The answer, as is often the case, is, “Er, maybe.” The Wikipedia page for coatis no longer features the claim that they’re also known as the Brazilian aardvark, on the grounds that there’s not enough evidence of it being in widespread use. And, since 2014, when the New Yorker article came out and the claim got deleted, references to it in the wild do seem to have slowed down somewhat (there was one mention in the Guardian in 2017, but that might have been an in-joke181). But there’s no doubt the Brazilian aardvark is out there now, in the wild, and that, if all of us just agree to start calling coatis by entirely the wrong name, then, damn it, that’s what they’ll be called.


A Brazilian aardvark, pictured enjoying a snack in Tulum, Mexico.

This might sound like it’s intended as a cheap joke at the expense of Wikipedia, which it really isn’t—although, in fairness, this is far from the only incident of its type involving the site. There’s the regrettable case of the inventor of the modern hair iron, in which a correct reference—Madam C. J. Walker, a pioneering African-American entrepreneur—was replaced in August 2006 with “Erica Feldman (the poopface).”182 Wikipedia admins quickly noticed the vandalism...and removed only the words “the poopface,” leaving Erica Feldman, whoever the hell she is, with the credit. The problem was fixed long ago on Wikipedia, and yet, if you google “Erica Feldman hair straightener” today, you’ll still find a vast number of websites that will cheerfully tell you about Ms. Feldman’s contributions to African-American hair care.

Oh, and there was also that time the report of the Leveson Inquiry (Lord Justice Leveson’s examination of “the culture, practices and ethics of the UK press”) named a twenty-five-year-old Californian student named Brett Straub as one of the founders of the Independent newspaper, because one of Brett’s friends had added his name to Wikipedia as a prank.183 To say the UK press enjoyed that one would be understating things a little.

In fact, Wikipedia even has a list of times this has happened, under the title “citogenesis”—a term coined by xkcd cartoonist Randall Munroe—which includes gems such as “the first commercial cardboard box was produced in England in 1817 by Sir Malcolm Thornhill” (now replicated all over the internet) and an entirely invented disease called “Glucojasinogen,” which has subsequently appeared in multiple scientific papers.184

Readers with long memories may recall that, near the beginning of chapter 2, I wrote, “I promise that I’m not going to make a habit of cut-and-pasting from Wikipedia in this book.”185 I can only apologize. I lied. Deal with it.

But the thing is, in all of these cases, the problem isn’t so much Wikipedia as it is people blindly copying from a single source and assuming it’s correct (and further people taking that new source as evidence that the first source was correct, and so on). As we’ve seen time and time again in this book, this kind of circular reporting isn’t something that’s limited to the internet age; bullshit feedback loops have been with us since the invention of print, and probably long before. The fact that the naturalist Buffon was complaining about exactly the same thing in the late 1700s should probably tip us off that our issue here possibly isn’t Jimmy Wales’s excellent invention.

It’s quite easy for us to blame Wikipedia (or Twitter, or telephones, or the printing press) for a long-standing systemic problem in the ways we gather and distribute knowledge, because blaming new things is easy and fun. But it does miss the point rather. That’s something that was shown up in a cheeky experiment that an Irish student named Shane Fitzgerald conducted in 2009, when news broke that the French composer Maurice Jarre had died. Realizing that the world’s journalists would be heading to Jarre’s Wikipedia page, Fitzgerald fabricated a too-good-to-not-use quote from the maestro—“When I die there will be a final waltz playing in my head that only I can hear”—and quickly added it to his page. This particular bit of vandalism was caught and deleted rapidly, but in the brief window of its existence the quote still made it into many of the world’s leading newspapers. And, unlike Wikipedia, none of them caught and deleted it; that only started to happen a month later, when Fitzgerald wrote to them to tell them what he’d done. By this test, Wikipedia was actually considerably more reliable than the world’s press.

If anything, Wikipedia—and the internet generally—just lets us lift the lid on the kind of mistakes we’ve been making for a very long time. Anybody with a data connection can go and see for themselves, down to the minute, the exact moment when the false idea that coatis are called Brazilian aardvarks entered the world. For the pre-internet age, tracking something like that down was the stuff entire PhDs were made of.

This is a real problem with history—there’s a lot we don’t know, and there’s also a lot that we think we know that we might not actually know, except unfortunately we just don’t know what we don’t actually know. Take, just for example, the story of the incredible coincidence that led to the First World War starting. The assassination of Archduke Franz Ferdinand by Gavrilo Princip in Sarajevo on June 28, 1914, all came down to the fact that Princip just happened to stop and buy a sandwich from Moritz Schiller’s delicatessen—a sandwich that he was eating when he saw the Archduke’s limousine (which had diverted from its planned route) drive past. He seized his opportunity, and the rest is...well, history. If Princip hadn’t felt peckish at that exact moment, or maybe if he’d decided that he wanted something different for lunch, then he would never have been in the position to fire the fateful shot, and perhaps the continent would not have descended into war.

It’s a great story about how the tiniest of details can have huge outcomes. It’s also completely untrue.

The source of the tale appears to have been a BBC documentary from 2003—although, according to the journalist Mike Dash, who tracked down the origins of the sandwich story, the documentary’s director can’t remember where the detail came from—and it spread like wildfire. It now appears all over the internet, and was even included in a book by the respected BBC journalist John Simpson, which was titled, er, Unreliable Sources.186

This isn’t a new phenomenon. If you’re a fan of financial bubbles, you may have been surprised that, in listing other financial bubbles a few chapters ago, I didn’t mention the “tulip mania” of 1637. This was possibly the most famous financial bubble of all time, in which the price of tulips in the Netherlands soared before collapsing, leaving many tulip speculators ruined. It’s been a staple of discussions about the human tendency for foolishness ever since it appeared in Charles Mackay’s classic 1841 book, Extraordinary Popular Delusions and the Madness of Crowds (from which I shamelessly poached the title of chapter 8, and also basically the idea for this book). Unfortunately, it also seems to have been, if not completely false, at least wildly overstated; Mackay got his information from a pamphlet put out by opponents of financial speculation, and, in reality, nobody was ruined by the rise and fall in the price of tulips.

The problem of things we think we know turning out to rest on shaky foundations isn’t limited to history, either. Right now, science is going through a “replication crisis”—where we’re discovering that an awful lot of bits of knowledge that we thought were well-founded are actually possibly entirely illusory. This all comes down to one of the foundational bits of the “scientific method” (note to sociologists of science: yes, I know there’s no such thing as a singular scientific method—let me live). That’s the fact that scientific experiments are set up to let anybody else replicate them—that’s why schoolchildren are drilled to write up their attempts to prove Newton right with the classic form of Aims, Methods, Results, Conclusion.

The trouble is, a lot of the time, nobody’s actually bothered to replicate the major experiments. That’s partly due to the incentive structures in science: nobody gets the big grants or the prestigious university posts on the basis of copying what someone already did before. If you want to get ahead in academia, you need to produce new, original work that expands our knowledge. Which regrettably means that nobody bothered double-checking a lot of what we thought was our existing knowledge.

This is particularly acute in the field of psychology, where some recent large-scale efforts to replicate a bunch of highly cited, widely referenced studies have come back with the disturbing conclusion that around 50 percent of them may not actually replicate—they might just have been chance findings all along. What’s even more interesting is that, deep down, it seems like experts in the field have an inkling of which results are dodgy. The experimenters set up a betting market for a large group of experts not connected to the studies, where they could place wagers on which experiments they thought would replicate and which wouldn’t. The betting market proved uncannily accurate, which is perhaps good news for fans of the human desire to make a quick buck, but less good for the system of peer review.

Oh, and if anybody’s going, “But that’s just psychology—it’s not even a real science, anyway,” then, fun news: there’s a replication crisis in physics too. Stick that in your pipe and smoke it, Einstein. (For what it’s worth, it’s now believed that around 20 percent of Einstein’s published papers contain mistakes of some kind. A lot of the time, he seems to have somehow come to the right conclusion despite the fact that he was working off incorrect assumptions. That’s geniuses for you, I guess.)

So where does all this leave us? Is truth in crisis? Are we doomed to live out our lives in a fog of misinformation? Deep down, are we all little more than coatis, hopping around the ruins of an ancient civilization, with tourists pointing at us and going, “Look, Doris, it’s a Brazilian aardvark”?

I think not. For sure, yes, we all swim in a sea of half-truths and sort-of lies, because the world is dumb and complicated, nobody knows exactly what’s going on, and that’s just the way our brains are made. But that isn’t a crisis. That’s just how things have always been.

The quote that began this book, from the reckless Arctic explorer Vilhjalmur Stefansson—“The most striking contradiction of our civilization is the fundamental reverence for truth which we profess and the thoroughgoing disregard for it which we practice”—might sound like it’s from a work that is going to bemoan our failure to live up to the standards of truth. But, actually, he takes the opposite tack, suggesting that maybe we shouldn’t be so surprised by the fact that truth is a bit thin on the ground. “It is a bit naive of the philosophers to diagnose from the mere scarcity of truth that the world is sick with an incurable malady,” he writes. “Is it not just possible that they cannot cure us for the basic reason that we are not ill?”187

I think this is the first thing we need to do if we want to move the needle back from untruth and toward truth: we need to not freak out. We have to appreciate that bullshit will always be with us, and the best we can ever hope to do is to keep it in check.

But there are also some practical things I think we can do—both as a society, and for ourselves.

We need to counter the effort barrier, and the way to do that is to...well, put in a bit more effort. That means being willing to pay for people to actually check things (I mean, I’m a fact-checker, of course I’m going to say that), but it also means that all the different groups in our society whose job is roughly in the truth field need to get a lot better at working together. Academics need to learn to talk to journalists, journalists need to learn to talk to academics, and ideally, if they could not do this only via the medium of press release, that would be great.

But we can all also help to counter the effort barrier ourselves, simply by putting in a tiny bit of effort the next time we’re tempted to share something outrageous on the internet. Just a few seconds. Check the source. Google it. Ponder whether it seems too good to be true.

Speaking of which, we also need to check ourselves—any of us, no matter how committed we think we are to the truth, can easily fall into the ego trap and find ourselves liking the lie. In fact, the more honest we think we are, the less likely we might be to be on the alert for these kinds of biases. So, when you’re pausing to check the source of something, also ask yourself if it’s playing to your personal biases, and whether you’re approaching it as skeptically as you might. And we can reflect this up into wider society—everybody makes mistakes, and we need to get better at celebrating those of us who are open to admitting them. Yes, ideally politicians wouldn’t say wrong things in the first place, but, hey—at least let’s give them a little bit of credit when they correct themselves.

We’re also going to need to fill the information vacuums that exist. That’s an ongoing process, of course, carried out around the world every day by millions and millions of people, working in a wide variety of fields, who strive to increase the sum of knowledge by a fraction. But we can still do more: too much information that does exist is locked away, hiding in a database or in an unreleased report or behind a paywall. We have to step up our efforts to make more of that good information available widely, because, without it, bad information will just flow right back in to fill the void. It’s not enough for us just to pull up weeds in the information garden—we need to plant flowers, as well.

And we need to believe it will work, and that it matters. Giving up and deciding that nobody cares about truth just because the candidate you preferred lost an election is, shall we say, a little premature. Believing that the internet is just a giant bullshit engine and there’s nothing anybody can do to tame it is almost as bad. As this book’s shown, this is very far from the first time in history that we’ve had these worries. Uncontrolled rumors, panics over new communications technology, horror at false news and fears of information overload—they’ve all been around for centuries. We got through it then, and we can get through it now, just as long as we don’t throw our hands up and go all, “LOL—nothing matters.” The greatest worry about the idea of “fake news” isn’t actually that people believe false news—it’s that they stop believing real news.

And we need to celebrate the times when we get it right, because sometimes we really do make big steps forward. And sometimes that happens in the most unlikely places—like, for example, a back garden in Paris.

All the same questions that we’ve pondered here—how to disentangle small, unexciting truths from a glittering skein of thrilling nonsense—confronted the upright citizens of that city in the 1780s, when our old friend and quack Dr. Anton Mesmer rolled into town. As we mentioned in chapter 7, King Louis XVI was not hugely pleased that Marie Antoinette was letting Mesmer work his hypnotic charms on her. And so he assembled a sort of Empiricism Avengers to test Mesmer’s theories. The group included some of the finest minds in Paris at the time, such as the father of modern chemistry, Antoine Lavoisier, and the renowned doctor Joseph-Ignace Guillotin (who, a few years later, would lend his name to a device Louis XVI would ultimately become very familiar with).

In their pursuit of truth, the members of the commission did something that, as far as we know, nobody had ever done before in scientific history. They conducted the world’s first ever placebo-controlled, blinded medical trials. In the back garden of the lead author’s house, the commissioners invented a fairly hefty chunk of the scientific method, as—pioneering the concept of a blinded experiment in a very literal way—they led a literally blindfolded subject around and got them to hug supposedly “magnetized” trees, until the subject eventually fainted. Through this and other controlled experiments, they conclusively proved that Mesmer’s theories were bunk.

You might think that, when they came to write up their findings, they’d have been tempted to brag about this triumph of truth over bullshit. But, instead, they struck quite a different tone: they were almost celebratory about Mesmer’s wrongness, finding it far more fascinating than the mundane truth.

“Perhaps the history of the errors of mankind, all things considered, is more valuable and interesting than that of their discoveries,” the report’s lead author wrote. Echoing Montaigne’s observation from centuries earlier, he continued, “Truth is uniform and narrow; it constantly exists, and does not seem to require so much an active energy, as a passive aptitude of soul in order to encounter it. But error is endlessly diversified; it has no reality, but is the pure and simple creation of the mind that invents it. In this field, the soul has room enough to expand herself, to display all her boundless faculties, and all her beautiful and interesting extravagancies and absurdities.”188

This book has covered just a tiny fraction of that “history of the errors of mankind.” You could write a hundred other versions of the book with no overlap.

Hopefully we have managed to follow in the footsteps of the author of that foundational piece of debunking, poised as he was in the very human state of being caught between the push and pull of fact and fiction—the pioneering truth-seeker who, nonetheless, seems to have been strangely captivated by the inexhaustible, soul-expanding possibilities of untruth. Because that’s what we need to do if we’re going to become more truthful—we need to study more deeply the vast and bountiful fields of wrongness, to know better what it is we’re doing wrong before we try to do it right. Basically, we need to become scholars of bullshit.

Oh, what was the name of that author, the one in whose back garden that pioneering piece of truth-seeking was carried out?

It was, of course, Benjamin Franklin.

