The first thing to do when evaluating a claim by some authority is to ask who or what established their authority. If the authority comes from having been a witness to some event, how credible a witness are they?
Venerable authorities can certainly be wrong. The U.S. government was mistaken about the existence of weapons of mass destruction (WMDs) in Iraq in the early 2000s, and, in a less politically fraught case, scientists thought for many years that humans had twenty-four pairs of chromosomes instead of twenty-three. Looking at what the acknowledged authorities say is not the last step in evaluating claims, but it is a good early step.
Experts talk in two different ways, and it is vital that you know how to tell these apart. In the first way, they review facts and evidence, synthesizing them and forming a conclusion based on the evidence. Along the way, they share with you what the evidence is, why it’s relevant, and how it helped them to form their conclusion. This is the way science is supposed to be, the way court trials proceed, and the way the best business decisions, medical diagnoses, and military strategies are made.
The second way experts talk is to just share their opinions. They are human. Like the rest of us, they can be given to stories, to spinning loose threads of their own introspections, what-ifs, and untested ideas. There’s nothing wrong with this—some good, testable ideas come from this sort of associative thinking—but it should not be confused with a logical, evidence-based argument. Books and articles for popular audiences by pundits and scientists often contain this kind of rampant speculation, and we buy them because we are impressed by the writer’s expertise and rhetorical talent. But properly done, the writer should also lift the veil of authority, let you look behind the curtain, and see at least some of the evidence for yourself.
The term expert is normally reserved for people who have undertaken special training, devoted a large amount of time to developing their expertise (e.g., MDs, airline pilots, musicians, or athletes), and whose abilities or knowledge are considered high relative to others’. As such, expertise is a social judgment—we’re comparing one person’s skill to the skill level of other people in the world. Expertise is relative. Einstein was an expert on physics sixty years ago; he would probably not be considered one if he were still alive today and hadn’t added to his knowledge base what Stephen Hawking and so many other physicists now know. Expertise also falls along a continuum. Although John Young is one of only twelve people to have walked on the moon, it would probably not be accurate to say that Captain Young is an expert on moonwalking, although he knows more about it than almost anyone else in the world.
Individuals with similar training and levels of expertise will not necessarily agree with one another, and even if they do, these experts are not always right. Many thousands of expert financial analysts make predictions about stock prices that are completely wrong, and some small number of novices turn out to be right. Every British record company famously rejected the Beatles’ demo tape, and a young producer with no expertise in popular music, George Martin, signed them to EMI. Xerox PARC, the inventors of the graphical interface computer, didn’t see any future for personal computers; Steve Jobs, who had no business experience at all, thought they were wrong. The success of newcomers in these domains is generally understood to be because stock prices and popular taste are highly unpredictable and chaotic. Stuff happens. So it’s not that experts are never wrong, it’s just that, statistically, they’re more likely to be right.
Many inventors and innovators were told “it will never work” by experts, the Wright brothers being the example par excellence. The Wright brothers were high school dropouts with no formal training in aeronautics or physics, while many experts with such training declared that heavier-than-air flight would never be possible. The Wrights were self-taught, and their perseverance made them de facto experts themselves when they built a functional heavier-than-air airplane and proved the other experts wrong. Michael Lewis’s baseball story Moneyball shows how someone can beat the experts by rejecting conventional wisdom and applying logic and statistical analysis to an old problem: Oakland A’s general manager Billy Beane built a competitive team by using player performance metrics that other teams undervalued, bringing his team to the playoffs two years in a row and substantially increasing the team’s worth.
Experts are often licensed, or hold advanced degrees, or are recognized by other authorities. A Toyota factory-certified mechanic can be considered an expert on Toyotas. The independent or self-taught mechanic down the street may have just as much expertise, and may well be better and cheaper. It’s just that the odds aren’t as good, and it can be difficult to figure that out for yourself. It’s just averages: The average licensed Toyota mechanic is going to know more about fixing your Toyota than the average independent. Of course, there are exceptions and you have to bring your own logic to bear on this. I knew a Mercedes mechanic who worked for a Mercedes dealership for twenty-five years and was among their most celebrated and top-rated mechanics. He wanted to shorten his commute and be his own boss so he opened up his own shop. His thirty-five years of experience (by the time I knew him) gave him more expertise than many of the dealer’s younger mechanics. Or another case: An independent may specialize in certain repairs that the dealer rarely performs, such as transmission overhaul or reupholstering. You’re better off having your differential rebuilt by an independent who does five of those a month than a dealer who probably only did it once in vocational school. It’s like the saying about surgeons: If you need one, you want the doctor who has performed the same operation you’re going to get two hundred times, not once or twice, no matter how well those couple of operations went.
In science, technology, and medicine, experts’ work appears in peer-reviewed journals (more on those in a moment) or on patents. They may have been recognized with awards such as a Nobel Prize, an Order of the British Empire, or a National Medal of Science. In business, experts may have had experience such as running or starting a company, or amassing a fortune (Warren Buffett, Bill Gates). Of course, there are smaller distinctions as well—salesperson of the month, auto mechanic of the year, community “best of” awards (e.g., best Mexican restaurant, best roofing contractor).
In the arts and humanities, experts may hold university positions or their expertise may be acknowledged by those with university or governmental positions, or by expert panels. These expert panels are typically formed by soliciting advice from previous winners and well-placed scouts—this is how the Nobel and the MacArthur “genius” award nomination and selection panels are constituted.
If people in the arts and humanities have won a prize, such as the Nobel, Pulitzer, Kennedy Center Honors, Polaris Music Prize, Juno, National Book Award, Newbery, or Man Booker Prize, we conclude they are among the experts at their craft. Peer awards are especially useful in judging expertise. ASCAP, an association whose membership is limited to professional songwriters, composers, and music publishers, presents awards voted on by its members; the award is meaningful because those who bestow it constitute a panel of peer experts. The Grammys and the Academy Awards are similarly voted on by peers within the music and film industry, respectively.
You might be thinking, “Wait a minute. There are always elements of politics and personal taste in such awards. My favorite actor/singer/writer/dancer has never won an award, and I’ll bet I could find thousands of people who think she’s as good as this year’s award winner.” But that’s a different matter. The award system is generally biased toward ensuring that every winner is deserving, which is not the same as saying that every deserving person is a winner. (Recall the discussion of asymmetries earlier.) Those who are recognized by bona fide, respectable awards have usually risen to a level of expertise. (Again, there are exceptions, such as the awarding of a Grammy in 1990, which was later retracted, to lip-syncers Milli Vanilli; or the awarding of a Pulitzer Prize to Washington Post reporter Janet Cooke, which was withdrawn two days later when it was discovered that the winning story was fraudulent. Novelist Gabriel García Márquez quipped that Cooke should’ve been awarded the Nobel Prize for literature.) When an expert has been found guilty of fraud, does it negate their expertise? Perhaps. It certainly impacts their credibility—now that you know they’ve lied once, you should be on guard that they may lie again.
Dr. Roy Meadow, the pediatrician who testified in the case of the alleged baby killer Sally Clark, had no expertise in medical statistics or epidemiology. He was in the medical profession, and the prosecutor who put him on the stand undoubtedly hoped that jurors would assume he had this expertise. William Shockley was awarded a Nobel Prize in physics as one of three inventors of the transistor. Later in life, he promoted strongly racist views that took hold, probably because people assumed that if he was smart enough to win a Nobel, he must know things that others don’t. Gordon Shaw, who “discovered” the now widely discredited Mozart effect, was a physicist who lacked training in behavioral science; people probably figured, as they did with Shockley, “He’s a physicist—he must be really smart.” But intelligence and experience tend to be domain-specific, contrary to the popular belief that intelligence is a single, unified quantity. The best Toyota mechanic in the world may not be able to diagnose what’s wrong with your VW, and the best tax attorney may not be able to give the best advice for a breach-of-contract suit. A physicist is probably not the best person to ask about social science.
There’s a special place in our hearts (but hopefully not our rational minds) for actors who use their character’s image to hawk products. As believable as Sam Waterston was as the trustworthy, ethical district attorney Jack McCoy in Law & Order, as an actor he has no special insight into banking and investments, although his commercials for TD Ameritrade were compelling. A generation earlier, Robert Young, who was much loved on TV’s Marcus Welby, M.D., did commercials for Sanka. Actors Chris Robinson (General Hospital) and Peter Bergman (All My Children) hawked Vicks Formula 44; due to FTC regulations (the so-called white coat rule) the actors had to speak a disclaimer that became a widely known catchphrase: “I’m not a doctor, but I play one on TV.” Apparently, gullible viewers mistook the actors’ authority in a television drama for authority in the real world of medicine.
Some publications are more likely to consult true experts than others, and there exists a hierarchy of information sources: some are simply more consistently reliable than others. In academia, peer-reviewed articles are generally more accurate than books, and books by major publishers are generally more accurate than self-published books (because major publishers are more likely to review and edit the material and have a greater financial incentive to do so). Award-winning newspapers such as the New York Times, the Washington Post, and the Wall Street Journal earned their reputations by being consistently accurate in their coverage of news. They strive to obtain independent verification of any news story. If one government official tells them something, they get corroboration from another. If a scientist makes a claim, they contact other scientists who don’t have any stake in the finding to hear independent opinions. They do make mistakes; even Times reporters have been found guilty of fabrications, and the “newspaper of record” prints errata every day. Some people, including Noam Chomsky, have argued that the Times is a vessel of propaganda, reporting news about the U.S. government without a proper amount of skepticism. But again, as with auto mechanics, it’s a matter of averages—the great majority of what you read in the New York Times is likelier to be true than what you read in, for example, the New York Post.
Reputable sources want to be certain of facts before publishing them. Many sources have emerged on the Web that do not hold to the same standards, and in some cases, they can break news stories and do so accurately before the more traditional and cautious media do. Many of us learned of Michael Jackson’s death from TMZ.com before the traditional media reported it. TMZ was willing to run the story based on less evidence than were the Los Angeles Times or NBC. In that particular case, TMZ turned out to be right, but you can’t count on this sort of reporting.
A number of celebrity death reports that circulated on Twitter were found to be false. In 2015 alone, these included Carlos Santana, James Earl Jones, Charles Manson, and Jackie Chan. A 2011 fake tweet caused a sell-off of shares for the company Audience, Inc., during which its stock lost 25 percent. Twitter itself saw its shares climb 8 percent—temporarily—after false rumors of a takeover were tweeted, based on a bogus website made to look a great deal like Bloomberg.com’s. As the Wall Street Journal reported, “The use of false rumors and news reports to manipulate stocks is a centuries-old ruse. The difference today is that the sheer ubiquity and amount of information that courses through markets makes it difficult for traders operating at high speeds to avoid a well-crafted hoax.” And it happens to the best of us: Jonathan Capehart, a veteran reporter who shared in a team Pulitzer Prize in 1999, wrote a story for the Washington Post based on a tweet by a nonexistent congressman from a nonexistent district.
As with graphs and statistics, we don’t want to blindly believe everything we encounter from a good source, nor do we want to automatically reject everything from a questionable source. You shouldn’t trust everything you read in the New York Times, or reject everything you read on TMZ. Where something appears goes to the credibility of the claim. And, as in a court trial, you don’t want to rely on a single witness, you want corroborating evidence.
The suffix of a URL (its top-level domain) indicates the type of organization behind the site. It pays to familiarize yourself with the domains used in your country, because some of them carry registration restrictions, and that can help you establish a site’s credibility for a given topic. In the United States, for example, .edu is reserved for nonprofit educational institutions like Stanford.edu (Stanford University); .gov is reserved for official government agencies like CDC.gov (the Centers for Disease Control); and .mil for U.S. military organizations, like army.mil. The most famous is probably .com, which is used for commercial enterprises like GeneralMotors.com. Others, such as .net, .nyc, and .management, carry no restrictions at all. Caveat emptor: BestElectricalService.nyc might actually be in New Jersey (and its employees might not even be licensed to work in New York); AlphaAndOmegaConsulting.management may not know the first or the last thing about management.
Knowing the domain can also help to identify any potential bias. You’re more likely to find a neutral report from an educational or nonprofit study (found on a .edu, .gov, or .org site) than on a commercial site, although such sites may also host student blogs and unsupported opinions. And educational and nonprofits are not without bias: They may present information in a way that maximizes donations or public support for their mission. Pfizer.com may be biased in their discussions about drugs made by competing companies, such as GlaxoSmithKline, and Glaxo of course may be biased toward their own products.
Note that you don’t always want neutrality. When searching for the owner’s manual for your refrigerator, you probably want to visit the (partisan) manufacturer’s website (e.g., Frigidaire.com) rather than a site that could be redistributing an outdated or erroneous version of the manual. A .gov site may be biased toward government interests, but it can give you the most accurate information on laws, tax codes, census figures, or how to register your car. CDC.gov and NIH.gov probably have more accurate information about most medical issues than a .com site, because they have no financial interest.
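Checking a site’s top-level domain can be partly automated. The sketch below uses only Python’s standard library; it is a rough heuristic (handling multi-part suffixes like .co.uk correctly requires the Public Suffix List), and the set of restricted domains covers only the U.S. examples discussed above:

```python
from urllib.parse import urlparse

def top_level_domain(url: str) -> str:
    """Return the last dot-separated label of a URL's hostname.

    A rough heuristic: real suffix rules (e.g., .co.uk) need the
    Public Suffix List, but this covers simple cases like .gov/.edu.
    """
    host = urlparse(url).hostname or ""
    return host.rsplit(".", 1)[-1].lower() if "." in host else ""

# U.S. domains whose registries impose eligibility restrictions
RESTRICTED = {"gov", "edu", "mil"}

def likely_institutional(url: str) -> bool:
    """True if the URL's top-level domain is a restricted U.S. domain."""
    return top_level_domain(url) in RESTRICTED

print(likely_institutional("https://www.cdc.gov/flu"))           # True
print(likely_institutional("https://bestelectricalservice.nyc"))  # False
```

A restricted suffix raises the odds that the site is what it claims to be; it does not, of course, settle the question of bias.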
Could the website be operating under a name meant to deceive you? The Vitamin E Producers Association might create a website called NutritionAndYou.info, just to make you think that their claims are unbiased. The president of the grocery chain Whole Foods was caught masquerading as a customer on the Web, touting the quality of his company’s groceries. Many rating sites, including Yelp and Amazon, have found their ratings ballot boxes stuffed by friends and family of the people and products being rated. People are not always who they appear to be on the Web. Just because a website is named U.S. Government Health Service, that doesn’t mean it is run by the government; a site named Independent Laboratories isn’t necessarily independent—it could well be operated by an automobile manufacturer that wants to make its cars look good in not-so-independent tests.
In the 2014 congressional race for Florida’s thirteenth district, the local GOP offices created a website with the name of their Democratic opponent, Alex Sink, to trick people into thinking they were giving money to her; in reality, the money went to her opponent, David Jolly. The site, contribute.sinkforcongress2014.com, used Sink’s color scheme and featured a smiling photo of her, very similar to the photo on her own site.
Illustration of the website for Democratic Congressional candidate Alex Sink
Illustration of the GOP website used to solicit money for Alex Sink’s Republican opponent, David Jolly
The GOP’s site does say that the money will be used to defeat Sink, so it’s not outright fraud, but let’s face it—most people don’t take the time to read such things carefully. The most eye-catching parts of the trick site are the large photo of Sink, and the headline Alex Sink | Congress, which strongly implies that the site is for Alex Sink, not against her. Not to be outdone, Democrats responded with the same trick, creating the site www.JollyForCongress.com to collect money meant for Sink’s rival.
Dentec Safety Specialists and Degil Safety Products are competing companies with similar services and products. Dentec markets its products at DentecSafety.com, and Degil at DegilSafety.com. Degil, however, also registered DentecSafety.ca to redirect Canadian customers looking for its competitor to its own site. A court ruled that Degil had to pay Dentec $10,000 and abandon DentecSafety.ca.
An online vendor operated the website GetCanadaDrugs.com. A court found the site name to be “deceptively misdescriptive.” Major points included that the pharmaceutical products did not all originate in Canada, and that only around 5 percent of the website’s customers were Canadian. The domain name has now ceased to exist.
Knowing the domain name is helpful but hardly a foolproof verification system. MartinLutherKing.org sounds like a site that would provide information about the great orator and civil rights leader. Because it is a .org site, you might conclude that there is no ulterior motive of profit. The site proclaims that it offers “a true historical examination” of Martin Luther King. Wait a minute. Most people don’t begin an utterance by saying, “What I am about to tell you is true.” The BBC doesn’t begin every news item saying, “This is true.” Truth is the default position and we assume others are being truthful with us. An old joke goes, “How do you know that someone is lying to you? Because they begin with the phrase to be perfectly honest.” Honest people don’t need to preface their remarks this way.
What MartinLutherKing.org contains is a shameful assortment of distortions, anti-Semitic rants, and out-of-context quotes. Who runs the site? Stormfront, a white-supremacy, neo-Nazi hate group. What better way to hide a racist agenda than by promising “the truth” about a great civil rights leader?
Are there biases that could affect the way a person or organization structures and presents the information? Does this person or organization have a conflict of interest? A claim about the health value of almonds made by the Almond Growers’ Association is not as credible as one made by an independent testing laboratory.
When judging an expert, keep in mind that experts can be biased without even realizing it. For the same tumor, a surgical oncologist may advise surgery, while a radiation oncologist advises radiation and a medical oncologist advises chemotherapy. A psychiatrist may recommend drugs for depression while a psychologist recommends talk therapy. As the old saying goes, if you have a hammer, everything looks like a nail. Who’s right? You might have to look at the statistics yourself. Or find a neutral party who has assessed the various possibilities. This is what meta-analyses accomplish in science and medicine. (Or at least they’re supposed to.) A meta-analysis is a research technique whereby the results of dozens or hundreds of studies from different labs are analyzed together to determine the weight of evidence supporting a particular claim. It’s the reason companies bring in an auditor to look at their accounting records or a financial analyst to decide what a company they seek to buy is really worth. Insiders at the company to be acquired certainly are expert in their own company’s financial situation, but they are clearly biased. And not always in the direction you’d think. They may inflate the value of the company if they want to sell, or deflate it if they are worried about a hostile takeover.
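The weighting logic behind a meta-analysis can be sketched in a few lines. This is a minimal fixed-effect (inverse-variance) pooling example with made-up study numbers, not a substitute for the random-effects models and bias corrections real meta-analyses employ:

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Inverse-variance-weighted (fixed-effect) pooled estimate.

    Each study's effect is weighted by 1/SE^2, so more precise
    studies count more; returns the pooled effect and its SE.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies of the same treatment effect
effects = [0.30, 0.10, 0.25]   # estimated effect sizes
ses     = [0.10, 0.05, 0.20]   # their standard errors

est, se = fixed_effect_meta(effects, ses)
print(f"pooled effect = {est:.3f} +/- {se:.3f}")
```

Notice that the pooled estimate lands nearest the most precise study (the one with the smallest standard error), which is exactly the point: no single lab’s result, however confident its authors, dominates the weight of evidence.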
A special Google search allows you to see who else links to a web page you land on. Type “link:” followed by the website URL, and Google will return all the sites that link to it. (For example, link:breastcancer.org shows you the two hundred sites that have links to it.) Why might you want to do this? If a consumer protection agency, Better Business Bureau, or other watchdog organization links to a site, you might want to know whether they’re praising or condemning it. The page could be the exhibit in a lawsuit. Or it could be linked by an authoritative source, such as the American Cancer Society, as a valuable resource.
Alexa.com tells you about the demographics of site visitors—what country they are from, their educational background, and what sites people visited immediately before visiting the site in question. This information can give you a better picture of who is using the site and a sense of their motivations. A site with drug information that is visited by doctors is probably a more trusted source than one that isn’t. Reviews about a local business from people who are from your town are probably more relevant to you than reviews by people who are out of state.
In peer-reviewed publications, scholars who are at arm’s length from one another evaluate a new experiment, report, theory, or claim. They must be expert in the domain they’re evaluating. The method is far from foolproof, and peer-reviewed findings are sometimes overturned, or papers retracted. Peer review is not the only system to rely on, but it provides a good foundation in helping us to draw our own conclusions, and like democracy, it’s the best such system we have. If something appears in Nature, the Lancet, or Cell, for example, you can be sure it went through rigorous peer review. As when trying to decide whether to trust a tabloid or a serious news organization, the odds are better that a paper published in a peer-reviewed journal is correct.
In a scientific or scholarly article, the report should include footnotes or other citations to peer-reviewed academic literature. Claims should be justified, and facts should be documented through citations to respected sources. Ten years ago, it was relatively easy to know whether a journal was reputable, but the lines have become blurred with the proliferation of open-access journals that will print anything for a fee, in a parallel world of pseudo-academia. Reference librarians can help you distinguish the two. Journals that appear on indexes such as PubMed (maintained by the U.S. National Library of Medicine) are selected for their quality; articles turned up by a regular Web search are not. Scholar.Google.com is more restrictive than Google or other search engines, limiting search results to scholarly and academic papers, although it does not vet the journals, and many pseudo-academic papers are included. It does do a good job of weeding out things that don’t even resemble scholarly research, but that’s a double-edged sword: It can make it more difficult to know what to believe, because so many of the results appear to be valid. Jeffrey Beall, a research librarian at the University of Colorado Denver, has developed a blacklist of what he calls predatory open-access journals (which often charge high fees to authors). His list has grown from twenty publishers four years ago to more than three hundred today. Other sites exist that help you to vet research papers, such as the Social Science Research Network (ssrn.com).
On the Web, there is no central authority to prevent people from making claims that are untrue, no way to shut down an offending site other than going through the costly procedure of obtaining a court injunction.
Off the Web, the lay of the land can be easier to see. Textbooks and encyclopedias undergo careful peer review for accuracy (although that content is sometimes changed under political pressure by school boards and legislatures). Articles at major newspapers in democratic countries are rigorously sourced compared to the untrustworthy government-controlled newspapers of Iran or North Korea, for example. If a drug manufacturer makes a claim, the FDA in the United States (Health Canada in Canada, or similar agencies in other countries) had to certify it. If an ad appears on television, the FTC will investigate claims that it is untrue or misleading (in Canada this is done by the ASC, Advertising Standards Canada; in the U.K. by the ASA, the Advertising Standards Authority; Europe uses a self-regulation organization called the EASA, European Advertising Standards Alliance; many other countries have equivalent mechanisms).
The lying weasels who make fraudulent claims can face punishment, but often the punishment is meager and doesn’t serve as much of a deterrent. Energy-drink company Red Bull paid more than $13 million in 2014 to settle a class-action lawsuit for misleading consumers with promises of increased physical and mental performance. In 2015, Target agreed to pay $3.9 million to settle claims that the prices it charged in-store were higher than those it advertised, and that it misrepresented the weights of products. Grocery retailer Whole Foods was similarly charged in 2015 with misrepresenting the weight of its prepackaged food items. Kellogg’s paid $4 million to settle a lawsuit over misleading ads that claimed its Frosted Mini-Wheats were “clinically shown to improve kids’ attentiveness by 11 percent.” While these amounts might sound like a lot to us, to Red Bull ($7.7 billion in revenue for 2014), Kellogg’s ($14.6 billion), and Target ($72.6 billion) these fines are little more than a rounding error in their accounting.
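The “rounding error” point is easy to verify with the figures quoted above. A quick calculation, with settlements and annual revenues in millions of dollars:

```python
# Settlement vs. annual revenue, both in millions of USD
# (figures as quoted in the text)
cases = {
    "Red Bull":  (13.0,  7_700.0),
    "Kellogg's": (4.0,  14_600.0),
    "Target":    (3.9,  72_600.0),
}

for company, (fine, revenue) in cases.items():
    pct = 100.0 * fine / revenue
    print(f"{company}: fine is {pct:.4f}% of annual revenue")
```

Even the largest of these settlements amounts to well under two-tenths of one percent of a single year’s revenue.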
Unlike books, newspapers, and conventional sources, Web pages seldom carry a date; graphs, charts, and tables don’t always reveal the time period they apply to. You can’t assume that the “Sales Earnings Year to Date” you read on a Web page today actually covers today in the “To Date,” or even that it applies to this year.
Because Web pages are relatively cheap and easy to create, people often abandon them when they’re done with them, move on to other projects, or just don’t feel like updating them anymore. They become the online equivalent of an abandoned storefront with a lighted neon sign saying “open” when, in fact, the store is closed.
For the various reasons already mentioned—fraud, incompetence, measurement error, interpretation errors—findings and claims become discredited. Individuals who were found guilty in properly conducted trials become exonerated. Vehicle airbags that underwent multiple inspections get recalled. Pundits change their minds. Merely looking at the newness of a site is not enough to ensure that it hasn’t been discredited. New sites pop up almost weekly claiming things that have been thoroughly debunked. There are many websites dedicated to exposing urban myths, such as Snopes.com, or to collating retractions, such as RetractionWatch.com.
During the fall of 2015 leading up to the 2016 U.S. presidential elections, a number of people referred to fact-checking websites to verify the claims made by politicians. Politicians have been lying at least since Quintus Cicero advised his brother Marcus to do so in 64 B.C.E. What we have that Cicero didn’t is real-time verification. This doesn’t mean that all the verifications are accurate or unbiased, dear reader—you still need to make sure that the verifiers don’t have a bias for or against a particular candidate or party.
Politifact.com, a Pulitzer Prize-winning site operated by the Tampa Bay Times, monitors and fact-checks speeches, public appearances, and interviews by political figures, using a six-point meter to rate statements as True, Mostly True, Half True, Mostly False, False, and—at the extreme end of false—Pants on Fire, for statements that are not only inaccurate but ridiculous (from the children’s playground taunt “Liar, liar, pants on fire”). The Washington Post also runs a fact-checking site with ratings from one to four Pinocchios, and awards the prized Geppetto Checkmark for statements and claims that “contain the truth, the whole truth, and nothing but the truth.”
As just one example, presidential candidate Donald Trump spoke at a rally on November 21, 2015, in Birmingham, Alabama. To support his position that he would create a Muslim registry in the United States to combat the threat of terrorism from within the country, he recounted watching “thousands and thousands” of Muslims in Jersey City cheering as the World Trade Center came tumbling down on 9/11/2001. ABC News reporter George Stephanopoulos confronted Trump the following day on camera, noting that the Jersey City police denied this happened. Trump responded that he saw it on television, with his own eyes, and that it was very well covered. Politifact and the Washington Post checked all records of television broadcasts and news reports for the three months following the attacks and found no evidence to support Trump’s claim. In fact, Paterson, New Jersey, Muslims had placed a banner on the city’s main street that read “The Muslim Community Does Not Support Terrorism.” Politifact summarized its findings, writing that Trump’s recollection “flies in the face of all evidence we could find. We rate this statement Pants on Fire.” The Washington Post gave it their Four-Pinocchio rating.
During the same campaign, Hillary Clinton claimed “all of my grandparents” were immigrants. According to Politifact (and based on U.S. census records), only one grandparent was born abroad; three of her four grandparents were born in the United States.
One way to fool people into thinking that you’re really knowledgeable is to find knowledgeable-sounding things on other people’s Web pages and post them to your own. While you’re at it, why not add your own controversial opinions, which will now be enrobed in the scholarship of someone else, and increase hits to your site? If you’ve got a certain ideological ax to grind, you can do a hatchet job by editing someone else’s carefully supported argument to promote the position opposite of theirs. The burden is on all of us to make sure that we’re reading the original, unadulterated information, not someone’s mash-up of it.
Unscrupulous hucksters count on the fact that most people don’t bother reading footnotes or tracking down citations. This makes it really easy to lie. Maybe you’d like your website to convince people that your skin cream has been shown to reverse the aging process by ten years. So you write an article and pepper it with footnotes that lead to Web pages that are completely irrelevant to the argument. This will fool a lot of people, because most of them won’t actually follow up. Those who do may go no further than seeing that the URL you point to is a relevant site, such as a peer-reviewed journal on aging or on dermatology, even though the article cited says nothing about your product.
Even more diabolically, the citation may actually be peripherally related, but not relevant. You might claim that your skin cream contains Vitamin X and that Vitamin X has been shown to improve skin health and quality. So far, so good. But how? Are the studies of Vitamin X reporting on people who spread it on their skin or people who took it orally? And at what dosage? Does your skin product even have an adequate amount of Vitamin X?
You may read on CDC.gov that the incidence of a particular disease is 1 in 10,000 people. But then you stumble on an article at NIH.gov that says the same disease has a prevalence of 1 in 1,000. Is there a misplaced comma here, a typo? Aren’t incidence and prevalence the same thing? Actually, they’re not. The incidence of a disease is the number of new cases (incidents) that will be reported in a given period of time, for example, in a year. The prevalence is the number of existing cases—the total number of people who have the disease. (And sometimes, people who are afraid of numbers make the at-a-glance error that 1 in 1,000 is less than 1 in 10,000, focusing on that large number with all the zeros instead of the word in.)
Take multiple sclerosis (MS), a demyelinating disease of the brain and spinal cord. About 10,400 new cases are diagnosed each year in the United States, giving an incidence of 10,400/322,000,000, or about 3.2 cases per 100,000 people: a 0.0032 percent chance of contracting it in a given year. Compare that to the total number of people in the United States who already have it, about 400,000, giving a prevalence of 400,000/322,000,000, or about 124 cases per 100,000: a 0.12 percent chance that a randomly chosen American has it right now.
In addition to incidence and prevalence, a third statistic, mortality, is often quoted—the number of people who die from a disease, typically within a particular period of time. For coronary heart disease, 1.1 million new cases are diagnosed each year, 15.5 million Americans currently have it, and 375,295 die from it each year. The probability of being diagnosed with heart disease this year is 0.3 percent, about a hundred times more likely than getting MS; the probability of having it right now is nearly 5 percent, and the probability of dying from it in any given year is 0.1 percent. The probability of dying from it at some point in your life is 20 percent. Of course, as we saw in Part One, all of this applies to the aggregate of all Americans. If we know more about a particular person, such as their family history of heart disease, whether or not they smoke, their weight and age, we can make more refined estimates, using conditional probabilities.
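The arithmetic behind these three statistics is the same in every case: divide a case count by the population. Using the case counts and the 322 million population figure quoted above, a quick Python sketch recomputes them:

```python
# Recomputing the disease statistics from the text. The case counts and the
# U.S. population of roughly 322 million are the figures quoted above.
US_POPULATION = 322_000_000

def rate_per_100k(cases: int, population: int = US_POPULATION) -> float:
    """Convert a raw case count into cases per 100,000 people."""
    return cases / population * 100_000

# Multiple sclerosis: incidence counts new cases in a year;
# prevalence counts everyone who currently has the disease.
ms_incidence = rate_per_100k(10_400)        # ~3.2 per 100,000 (0.0032%)
ms_prevalence = rate_per_100k(400_000)      # ~124 per 100,000 (0.12%)

# Coronary heart disease: incidence, prevalence, and mortality.
chd_incidence = rate_per_100k(1_100_000)    # ~342 per 100,000 (~0.3%)
chd_prevalence = rate_per_100k(15_500_000)  # ~4,814 per 100,000 (~5%)
chd_mortality = rate_per_100k(375_295)      # ~117 per 100,000 (~0.1%)

print(f"MS:  incidence {ms_incidence:.1f}, prevalence {ms_prevalence:.1f} per 100,000")
print(f"CHD: incidence {chd_incidence:.0f}, prevalence {chd_prevalence:.0f}, "
      f"mortality {chd_mortality:.0f} per 100,000")
```

Dividing one rate by another gives the comparisons in the text: heart disease incidence over MS incidence comes out near 100, the “about a hundred times more likely” figure.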
The incidence rate for a disease can be high while the prevalence and mortality rates are relatively low. The common cold is an example—many millions of people will catch a cold during the year (high incidence), but in almost every case it clears up quickly, and so the prevalence—the number of people who have it at any given time—can be low. Other diseases are relatively rare, chronic, and easily managed, so the incidence can be low (not many new cases in a year) but the prevalence high (cases accumulate as people continue to live with the disease) and the mortality rate low.
When evaluating evidence, people often ignore the numbers and axis labels, as we’ve seen, but they often ignore the verbal descriptors too. Recall the maps of the United States showing “Crude Birth Rate” in Part One. Did you wonder what a “crude birth rate” is? You could imagine that a birth rate might be adjusted by several factors, such as whether the birth is live or not, whether the child survives beyond some period of time, and so on. You might think that because the dictionary definition of “crude” is something in a natural or raw state, not yet processed or refined (think crude oil), it must mean the raw, unadulterated, unadjusted number. But that’s not quite right. Statisticians use crude birth rate to mean the number of live births per 1,000 people per year: stillbirths are excluded, and “crude” signals only that the rate has not been refined further, for example by the age or sex structure of the population. In trying to decide whether to open a diaper business, you want the crude birth rate, not the total birth rate (because the total birth rate includes babies who didn’t survive birth).
By the way, a related statistic, the crude death rate, counts deaths at any age, again per 1,000 people per year. Subtract it from the crude birth rate and you get a statistic that public policy makers are (and Thomas Malthus was) very interested in: the rate of natural increase (RNI) of a population.
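Since both rates are conventionally expressed per 1,000 people per year, the RNI is just a subtraction. As a minimal sketch (the birth, death, and population counts below are hypothetical, not census data):

```python
def rate_of_natural_increase(births: int, deaths: int, population: int) -> float:
    """RNI per 1,000 people per year: crude birth rate minus crude death rate."""
    crude_birth_rate = births / population * 1_000   # live births per 1,000
    crude_death_rate = deaths / population * 1_000   # deaths (any age) per 1,000
    return crude_birth_rate - crude_death_rate

# Hypothetical country: 50,000 live births and 30,000 deaths this year
# in a population of 2 million.
rni = rate_of_natural_increase(50_000, 30_000, 2_000_000)
print(f"RNI: {rni:.1f} per 1,000 per year")  # 25.0 - 15.0 = 10.0
```

A positive RNI means the population is growing from births and deaths alone, before counting migration.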