Chapter 7

Of Lies and Liars

ONE OF THE MANY PECULIARITIES of the law of evidence is what is called the “excited utterance” exception to the rule against hearsay. The details of the rule need not concern us here, but the basic premise of the rule is that what people say under circumstances of sudden and great excitement—high stress—is especially reliable, and therefore should not be excluded by the rule against hearsay.1

On the face of it, the excited utterance exception seems psychologically naive.2 We have long known that what people say when excited may be vulnerable to the lapses of memory and failures of perception that excitement and stress can cause. If anything, it seems as if excited utterances ought to be treated not as especially reliable, but just the opposite—as especially unreliable.

But that is only half the story. And only the modern half. Back when the excited utterance exception developed, courts were less aware of the various ways in which what people perceived, remembered, and described might be inaccurate. Perception was understood as the primary way in which people gained knowledge, and the possibilities of inaccurate perception, mistaken recollection, and confused recounting were rarely acknowledged. But although the courts and people in general were less attuned to the possibility of honest but erroneous perception, and often oblivious to the risks of honestly mistaken recall and innocently misspoken reports, they were keenly aware of the possibility of intentional fabrication. Lying. Just as the oath developed to try to keep people from lying, the excited utterance exception developed at a time when people were less worried than they should have been about mistakes. But they were very worried, and not less than they should have been, about lies and liars. And so the excited utterance exception was based on the largely accurate view that lying requires advance thought and planning. What people unthinkingly blurt out on the spur of the moment, especially under conditions of high anxiety, is at least what they honestly believe at the time, or so it was thought. The excited utterance exception thus stands as a reminder that the legal system, relying as it does so much on courtroom testimony about events that neither the judge nor the jury have themselves observed, is especially concerned about lying.

Lying is a worry not only in court. Concern about lying has existed as long as there has been lying. The Ten Commandments would hardly have commanded people not to “bear false witness” had false witness not, even then, been perceived as a serious problem. And although the Ten Commandments tried mightily to get people to stop lying, the practice persists. Husbands lie to their wives. Children lie to their parents. Parents lie to their children. Merchants lie to their customers. Criminals lie to the police. The police lie to suspects. Politicians lie to their constituents. Students submitting late papers lie to their professors. Lying, it seems, is everywhere.

Volumes have been written about lying.3 Sometimes the focus is on why lying is wrong. At other times the concern is with the exceptions—the conditions under which lying might not be wrong, as in the traditional example of lying to the prospective murderer about the whereabouts of his intended victim. And then there are so-called white lies, in which we lie to avoid hurting someone’s feelings, and social lies, which are designed to soften the sharp edges of refusals and rebuffs. There are also lies that are the harmless or even necessary part of some practice, such as bluffing in poker, or deceiving an opponent on the football field or the enemy in time of war. And the lies that we call “fish stories” are so common and so commonly discounted that they might not even qualify as lies at all.

Our interest here is neither with the moral rightness or wrongness of lying nor with the moral justifications for the alleged exceptions to the traditional strictures against lying. Rather, the immediate issue is how lying affects the reliability of testimony as evidence. If what people say can be evidence of the truth of what they have said—and that is what the idea of testimony is all about—then the value of that testimony is dependent on the truth of what is said. Lies undercut that value, and thus undercut the worth of testimony as evidence. It would be good, as a matter of evidence and not only as a matter of morality, to be able to tell when people are lying, and thus be able to dismiss or discount the testimony of the liar in reaching our factual conclusions.

Not surprisingly, the law has long wrestled with this problem. Given that most of the evidence in a trial consists of testimony, we can easily understand why the law remains especially concerned with that testimony being reliable. And one way testimony might not be reliable is if the witness is lying. Initially, we note that witnesses might lie—and not just be mistaken—if they have an interest in one outcome rather than another. Defendants charged with crimes were long prohibited from testifying in their own defense because of the perception—hardly unfounded—that most people would rather lie than be hanged or imprisoned. And so it was thought that lying by the defendant, even under oath, was so predictable, and therefore a defendant’s testimony so predictably unreliable, that it was better not to permit that testimony at all.4 And the same unease about the effect of self-interest on veracity was applied to civil lawsuits as well, where again the parties were traditionally prohibited from testifying, on the assumption that the pull of self-interest would typically override any perceived obligations to the truth.5 These prohibitions on defendant and party testimony were eliminated in English law over the course of the nineteenth century and in most other common-law jurisdictions at about the same time, but the worry still persists that criminal defendants will lie to save their skins or their liberty, that defendants in civil lawsuits will lie to save their money, and that plaintiffs in those same civil lawsuits will lie for reasons of retribution or financial gain. The list of reasons for lying is long, and even now it is both permissible and common to cross-examine or otherwise “impeach” a witness in order to expose the possibility that the witness has an interest in the outcome and is therefore more likely to be lying.

In attempting to guard against lies and lying, the legal system has long relied, at least in part, on the oath that witnesses are required to take before testifying, an oath to tell the truth, the whole truth, and nothing but the truth. As we examined in Chapter 6, the oath, whether formally in court or less formally in many other contexts, has long been a part of numerous testimonial practices. But oaths have only limited value, both in court and out. With the decline in serious belief in an afterlife and in a God insistent on punishing fabricators, with formal sanctions for perjury so rare, and with statements such as “I swear to God” becoming little more than verbal tics, the question remains of how to guard against lying in a world in which verbal testimony is such an important part of the evidence we use throughout our lives. Although the maxim that the truth hurts is about people’s reluctance to face up to difficult facts about themselves, the truth hurts in the broader sense that people often have strong incentives to avoid telling the truth when that truth will be to their personal, professional, social, financial, or other disadvantage. And because the incentives to lie are often great, the incentives to find ways to identify liars and lying have also been great. The better we can identify liars, the more we can rely on testimony.

What Is a Lie?

It was not a good four years for the word “lie.” Back when Immanuel Kant was condemning lying in the eighteenth century, and back when Sissela Bok and others were analyzing lying in the twentieth, most people had a pretty good idea of what a lie was. A lie was a false statement used intentionally by the liar to induce the hearer into having a false belief. As recently as 2014, Seana Shiffrin offered a nuanced definition of lying that still contains the basic elements of intended falsity with the aim of deceiving the listener.6

Thanks in large part to the Trump administration’s casual regard for the truth, much of the public understanding of “lie” has been transformed.7 The transformation was the work not of the Trump administration, which was understandably reluctant to use the word “lie” to describe its own behavior, but of the mainstream press as it struggled with how to describe patent falsehoods emanating from what used to be thought of as reliable official sources. Indeed, from the very beginning of the Trump era, members of the press, as well as commentators on the press, engaged in public debates about whether clear falsity alone should be described as a lie. Slate generally permitted such an expansive use of the word “lie,” and the New York Times (eventually) permitted it on its opinion pages. By contrast, National Public Radio and the Wall Street Journal decreed that the word “lie” should be reserved for those falsehoods that were plainly intentional, and not merely negligent, even if grossly so, and not merely the product of self-deception, no matter how troubling that self-deception might be.8

Although never expressed in exactly these terms, those who encouraged or at least tolerated the expansion of the word “lie” to include a plain falsehood even without evidence of intent appeared to rely on the evidentiary inferences of a blatant falsehood—the very blatancy of the falsehood being taken as evidence that anyone who said something so obviously false must have known of the falsity and therefore knowingly said something false with intent to deceive. If I claim to be the Easter Bunny, to have been awarded the Medal of Honor, or to have run a mile in under four minutes, the obvious falsity of those assertions would seem to count as sufficient evidence of my knowledge of their falsity to justify the label of “lie” even under a traditional definition requiring intent.9 Similarly, we might infer from the patent implausibility of the claim that “diet slippers” could produce weight loss that those who sold them knew of the falsity of their claims.10 Indeed, more or less the very question of whether obvious falsity could be evidence of intentional falsity arose in the wake of then-president Trump’s now-notorious telephone call to Georgia secretary of state Brad Raffensperger on January 2, 2021, in which Trump encouraged the latter to “find” sufficient votes to change the outcome of the election, at least in Georgia.11 Public discussion ensued over whether the president had thereby committed election fraud under federal law by “knowingly” attempting to influence the outcome of an election.12 One view was that because the president believed, however unrealistically, that he had actually won, he could not have knowingly and intentionally (or “willfully,” as the statute puts it) attempted to change the outcome. But those who maintained the opposing view argued that because no one, not even Donald Trump, could genuinely believe that he had won, he was attempting to produce a result contrary to what he knew was reality, and had thus violated the law.

Stepping back from this particular event, we can see that those who insist on calling an obvious falsehood a lie, even without explicit evidence that the person knew their statement was false, are plausibly understood not as seeking to change the traditional meaning of the word by removing the requirement of intentionality. Instead they are relying on the inference that saying something patently false, and widely understood to be patently false, would itself be evidence of knowing—and not merely negligent, or even reckless—falsity. All the same, it is plain that contemporary journalistic usage is heading in the direction of a willingness to label as a lie anything that is a clear falsehood, even without further evidence that the person accused of having lied knew that what they said was false at the time they said it.

Regardless of the outcome of this ongoing linguistic debate about just what a lie is, the requirement of intent to deceive—intentional falsity—in order for some statement to count as a lie is not only consistent with long-standing usage, but also compatible with most attempts to identify lies. When someone asserts something for which there is no evidence other than their assertion—“the dog ate my homework”—it is useful to know whether they actually believed what they said. Perhaps we should disregard the statement as evidence even if made sincerely, or perhaps not, but if even the person who made the statement doesn’t believe it, then neither should we. In other words, although modern usage is becoming increasingly compatible with the view that knowing that one’s statement is false is not a necessary condition for calling that statement a lie, it is clear that knowing that one’s statement is false is a sufficient condition. And if we are worried about people who lie in court, who lie to public officials, who lie on college applications, who lie to health care providers about their eligibility for Covid-19 vaccination, and much more, we should be concerned with trying to root out those whose falsities are intentional. This will not eliminate all falsity, but it will at least eliminate some. As a result, we have witnessed the long-standing efforts, to which we will turn presently, to search for ways of identifying lies under the traditional understanding of what counts as a lie.

Paltering

Traditional definitions of lying have included not only intentionality but also literal falsity. It turns out, however, that getting someone to believe something that is not true often does not require such literal falsity. Suppose a colleague who knows that I am an amateur furniture maker comes into my office and admires my store-bought desk. And then suppose I respond by saying “Thank you.” My colleague infers from this that I have made the desk. By saying nothing to correct it, I have encouraged this inference, even though it is false. And if I accurately and publicly observe that another colleague is sober today, I have misleadingly suggested that there are other days on which he is not. Or, to return to the student submitting the late paper, if the student who has yet to start on the paper tells me about the close relative who has died, and if the relative has in fact died, the student, in making an accurate statement, nonetheless wants me to believe, inaccurately, that the death was causally responsible for the lateness, even if it was not.

There is a nice but obscure word for this practice of attempting to deceive without saying anything that is literally false—paltering.13 And once we understand the possibility of paltering and recognize its widespread occurrence, we can appreciate the way in which the traditional definition of lying is potentially too narrow when we are concerned with the conditions of social interaction and social trust. For those purposes we have every reason to worry as much about paltering as we do about flat-out lying.

If we are concerned somewhat more narrowly about evidence, however, and even more narrowly about testimony, it is appropriate to focus more precisely on statements that make explicitly factually false assertions. Narrowing the focus in this way may leave paltering and other forms of non-factually-false deception untouched, but the narrow focus allows us to concentrate on the mechanisms that have been used traditionally, and that might be used now or in the future, to determine whether factual statements are accurate or whether instead they are false. When some statement is to be taken as evidence for what it asserts, and especially when there is little or no other evidence leading to that conclusion, we have strong motivations for trying to determine whether that statement—that testimony—is true or false. In legal proceedings, in public policy, and in everyday life we frequently need to determine whether what some statement—the testimony of the testifier—states as a fact actually is a fact. And here, it turns out, there is a long, illuminating history.

Lie Detection—Then and Now, Good and Bad

Among the most noteworthy characteristics of the comic book (and then motion picture) character Wonder Woman is her ability to detect or forestall lying by others. It is not clear whether her Magic Lasso, forged from the Magic Girdle of Aphrodite, was originally intended by her creator to be an implement to secure veracity or instead only to induce submission, but as the character developed over the years, it was Wonder Woman’s ability as a lie detector that endured.

Wonder Woman may or may not be interesting in her own right.14 But what is particularly noteworthy is that her creator, William Moulton Marston, who had trained in the Harvard University Department of Psychology, was also the inventor, in the 1920s, of one of the early polygraphs—lie-detection machines. And that itself is of particular interest because the judicial decision that rejected the courtroom use of Marston’s polygraph—Frye v. United States—has had a lasting impact in two different ways.15 One was in establishing what was for a long time, and what still is in some states, the test for determining whether scientific expert testimony would be admissible in legal proceedings.16 And the other was in launching a century of official skepticism about lie-detecting technology and expertise, a skepticism that, at least for courts, persists even as the technology has improved dramatically.

Marston’s polygraph was not the first. That honor apparently belongs either to Cesare Lombroso, a prominent criminologist who in late nineteenth-century Italy invented a device that purported to use measurements of blood pressure to identify lies, or to James MacKenzie, a Scottish cardiologist who created a similar device at about the same time premised on the same basic theory.17 The theory is that telling a lie is more stressful (or requires more mental exertion in other ways) than telling the truth, and that the heightened stress is reflected in higher blood pressure. Subsequent advances, including Marston’s, and including an even more sophisticated polygraph devised by John A. Larson in 1921, improved on Lombroso and MacKenzie by including respiration rate as an additional indicator of knowing deception. And post-Larson polygraphs, especially the one invented by Leonard Keeler in the 1930s that is the principal precursor of the modern polygraph, have added galvanic skin response and heart rate.18 Even with the improvements, however, the basic principle throughout has remained the same—that there are physiological markers of deception, and that the physiological markers of stress level are chief among the physiological markers of deception. Stress, in other words, is evidence of deception, and this too-crude observation is at least the starting point for most of the far more sophisticated physiological approaches to lie detection.

The physiological markers of deception, including more contemporary approaches to be discussed presently, are to be distinguished from behavioral markers. Most people, including most jurors listening to witness testimony in court, believe that certain behaviors are reliable indicators of lying. One of those behaviors involves eye contact, the common belief being that liars will avoid looking directly at the questioner.19 Similarly, liars are generally believed to speak less confidently than truth-tellers, to fidget and display other overt signs of nervousness, and to look down rather than up even apart from the question of eye contact. And there are others as well. But most of these beliefs are false.20 Or, to put it more precisely, the behavioral cues that most people, including most police officers, believe are indicators of intentional deception are nothing of the sort.21 Belief in the soundness of these unsound behavioral indicators of lying leads ordinary people to be quite poor at distinguishing liars from truth-tellers. Indeed, most of the studies on interpersonal lie detection reveal that even people who are consciously aware of the indications they are watching for, and even people seemingly trained to identify liars, are scarcely better than random.22 Courts have traditionally rejected lie-detection technology for use at trials, and most courts still do (although some states, such as New Mexico, tend to allow it).23 And their support for this policy tends to be the view that “the jury is the lie-detector in the courtroom.”24 But this view is inconsistent with the fact that jurors, like people in general, are simply not very good at lie detection.

Here, as elsewhere, one of the most important questions to ask about any evidentiary conclusion, especially conclusions purporting to cast doubt on some piece of evidence or some method of obtaining evidence, is “Compared to what?” The question to be asked about any form of lie detection, therefore, is not whether the method is perfect, and not whether it is highly accurate, but whether the method is better than lie detection through the use of all of the folk wisdom, urban legends, uninformed amateur psychology, and countless other varieties of conventional but mistaken approaches that people have traditionally used to evaluate the credibility of testimony, both in court and out. Although the reliability tests on polygraphs vary widely in their results, even the most cautious or skeptical conclusions put the reliability of the modern polygraph at 70 percent or better, both for the identification of true statements and the identification of false ones (the two not necessarily being the same), with the more common conclusions being that polygraphs tend to be 80 to 85 percent accurate in identifying both true and false statements.25 The 2002 National Research Council report concluded that the traditional lie detector, competently administered, could identify deception at a rate “well above chance” but “well below perfection,” and without “extremely high accuracy”; that conclusion captures what most of the research found then, and still finds now. But even that level of accuracy dwarfs the accuracy of the nontechnological alternatives used by ordinary people, including the ordinary people who sit on juries.
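The force of the “Compared to what?” question can be made concrete with a little arithmetic. The sketch below is purely illustrative, not drawn from any particular study: it applies Bayes’ theorem to contrast a hypothetically 80-percent-accurate polygraph with a near-chance (here, 55-percent-accurate) lay observer, using a made-up `posterior_lie` helper and assuming, for simplicity, equal accuracy on lies and truths.

```python
# Illustrative Bayes calculation: how much a "deception" verdict should
# shift our belief when it comes from an 80%-accurate polygraph versus a
# roughly-chance (55%-accurate) human observer. All numbers are
# hypothetical, chosen only to mirror the ranges discussed in the text.

def posterior_lie(prior_lie, sensitivity, specificity):
    """P(lie | detector says 'deception'), by Bayes' theorem.

    sensitivity: P(detector flags deception | speaker is lying)
    specificity: P(detector flags truthful | speaker is truthful)
    """
    p_flag_given_lie = sensitivity
    p_flag_given_truth = 1 - specificity
    numerator = prior_lie * p_flag_given_lie
    denominator = numerator + (1 - prior_lie) * p_flag_given_truth
    return numerator / denominator

prior = 0.5  # no idea in advance whether the witness is lying

polygraph = posterior_lie(prior, sensitivity=0.80, specificity=0.80)
layperson = posterior_lie(prior, sensitivity=0.55, specificity=0.55)

print(f"Posterior after polygraph 'deception' verdict: {polygraph:.2f}")  # 0.80
print(f"Posterior after lay 'deception' judgment:      {layperson:.2f}")  # 0.55
```

At a 50-50 prior the posterior simply equals the detector’s accuracy, and the gap between the two methods persists at other priors: an imperfect machine still moves the needle far more than a near-chance observer, which is the point of asking “Compared to what?”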

Various modern techniques for lie-detection are different from the traditional polygraph, but they do not reject the basic principle that deception tends to produce measurable physiological indicators. It is not clear how much better, if at all, any of these techniques are than the traditional polygraph, but it seems clear that they are no worse.26 One of these techniques, periorbital thermography, with accuracy rates around 75 percent, measures the temperature around the eyes, and is based on the premise that the rate of blood flow around the eyes is especially sensitive to stress and thus correlates with deception.27 Near-infrared spectroscopy, with similar or slightly greater accuracy, assesses the optical properties of brain tissue, properties that again have been shown to vary with stress level.28 And an electroencephalography-based technique, sometimes referred to as “brain fingerprinting,” measures the electrical activity of the brain, in particular the P300 brain wave, in response to various stimuli, with the theory being that the level of that activity is a measure of consciousness of guilt, and with consciousness of guilt being a measure of deception.29

All of these techniques are still in use and are still being developed, as is the traditional polygraph in its best form. Much of the contemporary attention to lie detection, however, has focused on the use of functional magnetic resonance imaging—fMRI, “brain scans”—to detect deception.30 Even here the basic idea is the same. Common notions notwithstanding, brains do not light up when engaged in certain tasks, and fMRI scans do not take pictures of brains. Instead, an fMRI scan measures and displays a physiological response to various activities. For lie detection purposes, fMRI is used to measure the extent to which certain regions of the brain recruit higher levels of oxygenated hemoglobin when the possessor of that brain is being deceptive than they do when that person is telling the truth.

Research seeking to advance these technologies continues apace, with the largest single track of the current research being done by neuroscientists using fMRI approaches. And although much of that research is being done in the service of pure science and knowledge for its own sake, at least some of the interest is fueled by the wide range of practical uses for such technology, and not only in the courtroom or other parts of the criminal justice system. A no longer extant company called No Lie MRI, Inc., for example, recognized that prosecutors and defense attorneys might not be the only ones interested in identifying liars—the rest of us might be interested as well, especially when we distrust our spouses, our business collaborators, or the people who are trying to sell us houses and cars.

But we are getting ahead of things. The issue before us starts with the proposition that testimony can be evidence, but that its value as evidence increases as our confidence increases that the testifier is not trying to deceive us, whether by literal lying, by paltering, or in some other way. We thus seek a way of assessing the value of an act of testimony as evidence by determining the likelihood that the testifier is lying. When Marston’s polygraph (crude by modern-day standards) was rejected as courtroom evidence, the rationale for its rejection was that the methods had not been generally accepted by any relevant scientific or professional community. As noted above, this test of general acceptance has been replaced in all federal and most state courts by a focus on reliability and accuracy rather than acceptance, a question to which we will return in Chapter 9. But despite the change in the nature of the test, the traditional judicial skepticism continues. There are exceptions. As noted above, New Mexico now generally allows polygraph evidence subject to constraints of relevance and avoidance of prejudice, and a number of federal courts have been open to accepting it in particular cases. But these are exceptions, and courts persist in ruling that evidence based on lie-detection technology is inadmissible, even as the degree of reliability increases.

In the popular press, and also in much of the scientific literature, especially the neuroscience literature, there has been widespread skepticism about the use of any of these techniques, even the best of them, for courtroom use.31 That skepticism seems based on two related factors, each of which deserves closer scrutiny. One is the worry that the current level of reliability is nowhere near high enough to justify using it to convict people of crimes and deprive people of their liberty. And of course this is right. Even the most optimistic conclusions about the best of the modern lie-detection techniques rarely have a level of reliability above 90 percent. It is clear, therefore, that the use of lie detection by itself, even assuming that the use could somehow circumvent the constraints of the Fifth Amendment’s bar on compulsory self-incrimination, would be insufficient to justify a criminal conviction. But if lie detection is not good enough alone to prove guilt beyond a reasonable doubt—which it is not—is it good enough to show, on behalf of a defendant, that a reasonable doubt exists?32 Suppose that some eyewitness testimony, whose hardly certain reliability we will explore in Chapter 8, places the defendant at the scene of the armed robbery, but the defendant claims that the eyewitness was mistaken and that he, the defendant, was two hundred miles away and in a different state at the time of the crime. In that context, it is hardly obvious, to put it mildly, that we should deprive that defendant of the opportunity to support his alibi defense with the result of a polygraph or fMRI examination showing that he was 85 percent likely to have been telling the truth, or even with a result suggesting that the witness against him might have been lying. In other words, what is plainly insufficient to support a conviction under the beyond a reasonable doubt standard might nevertheless be sufficient to defeat a conviction precisely because of that standard.

A second source of skepticism is the worry that jurors and maybe even judges will take lie-detection evidence as being more reliable than it actually is. Jurors, it is said, will see an fMRI scan, which they erroneously believe to be a picture of a brain, and take this as absolute proof of lying or truth-telling, which of course it is not. Interestingly, however, research on exactly this question by neuroscientists Martha Farah and Cayce Hook shows this worry to be unwarranted. In the face of claims that brain scan images have a “seductive allure” for laypeople, Farah and Hook experimentally demonstrate that there is nothing about a brain scan that makes it inordinately influential.33 All sorts of evidence might, of course, be overvalued, but Farah and Hook show that overvaluation of fMRI evidence is no more likely than overvaluation of any other type of evidence.

At this point it is important to bear in mind that, even in courtroom contexts, judges and jurors prohibited from knowing lie-detection results are not going to exit the jury box and go home. And they are not going to refuse to decide. Neither of these is a permitted option, even though something like that—simply not offering or publishing a conclusion—is an option for the scientist whose experiments neither confirm nor disconfirm a hypothesis. Unlike scientists, however, judges and juries must reach a decision at a particular time. And if in making that decision they are unable to use the results of lie-detection science or technology, they are going to evaluate credibility in the same way that lay people always have—by relying on the widely, but not wisely, accepted indicators of deception that dominate lay decision making, popular culture, and television dramas, but whose empirical basis is far more fragile than the empirical basis of a wide array of lie-detection techniques. Here, as is so often the case, “Compared to what?” is the right but too-rarely asked question.

Leaving the Courtroom

That lie-detection technology turns out to be better than the courts think it is, and better than some of popular journalism thinks it is, explains why it is so widespread outside of the legal system. Governments use it to screen job applicants, especially for law enforcement, intelligence, and national security positions, primarily but not only by evaluating the accuracy of the representations on employment applications and in interviews. Government security and intelligence agencies use it not only to evaluate existing and potential employees, but also to assess the accuracy of the information they receive. Insurance companies use it to determine the veracity of claims and the claims records of their insureds. And although a federal law called the Employee Polygraph Protection Act of 1988 prohibits polygraph screening of employees and applicants by private employers, it contains exceptions for the pharmaceutical and security industries.34 Less commonly, public figures sometimes use polygraph results to attempt to rebut claims that they have engaged in some variety of misconduct, as Virginia lieutenant governor and then-gubernatorial candidate Justin Fairfax did in seeking to challenge the two accusations of sexual assault made against him.35 And sometimes those who make such accusations use polygraph results to buttress their accusations when they are called into question, as Christine Blasey Ford did when her claims that then-nominee and now Supreme Court Justice Brett Kavanaugh had sexually assaulted her when the two were teenagers were called into question.36

Indeed, given the accuracy level of most forms of lie detection, it is perhaps surprising that there is not more use of it by those whose public claims have been directly challenged. Some of this reluctance might be a spillover from widespread knowledge that lie detection is not generally usable in court. And some might be a function of public skepticism flowing from awareness that one can “beat” the lie detector with proper training, especially if one gets to pick one’s own technology and technician. And some might result from the fear felt by people engaged in public disputes about the factual truth of their statements that exposed deception might be fatal to their public claims. Better, perhaps, to rely on confident assertions of truth than on imperfect technological endorsements of that truth. That said, however, it is hardly irrational to wonder whether someone who is unwilling to use the best of modern lie-detection technology to bolster their public claims might have reason to be afraid of what that technology might reveal.

When the lie-detection potential of fMRI began to become known, the potential for such use outside the courtroom was not lost on some entrepreneurs. One such company, No Lie MRI, no longer exists, and another, Cephos, barely does, both companies having relied heavily and mistakenly on the likelihood that their methods would eventually be accepted for courtroom use, especially in noncriminal cases involving matters such as business disputes and child custody. But others have taken their place, one being the Utah-based company Converus, which uses a device it calls EyeDetect to measure pupil size and other aspects of the eye, and which the company claims to be able to detect deception with 86 percent accuracy.37 The Arizona company Discern Science International, with its origins at the University of Arizona, employs a device it has named Avatar to analyze a collection of facial microexpressions, and claims that its device also approaches 90 percent accuracy in identifying those whose answers to a digital customs agent are not truthful.38 Others have joined the fray, and even more undoubtedly will, recognizing that the interest in ferreting out lies and liars is as old as lying itself, and that the demand for detecting both is unlikely to decrease.

Two Larger Lessons

Buried in the previous sections are two larger lessons that are not only about lies and lying, and not only about testimony. And it is worthwhile, if only for purposes of emphasis, to repeat both of them. One is the recurrent question “Compared to what?” Evidence is the path we travel in determining whether some statement about a fact is true or testing some factual hypothesis. We do not start with evidence. Instead we start with something we want to know, with a question about the likely truth of some factual assertion or the soundness of some hypothesis of interest to us. To make this point, the great philosopher of science Karl Popper once began a lecture by instructing his audience simply to “observe.” Puzzled by the instruction, the members of the audience eventually understood that Popper wanted them to grasp that unguided observation, even if not technically impossible, is generally pointless. And that is why we start with some hypothesis or topic or statement of fact that interests us and not with unsorted and unfocused evidence.

If we have a hypothesis we want to evaluate or a question to which we seek—and often require—an answer, we start with a need for evidence. And what is important about evidence being need-based is that the need allows and sometimes forces us to recognize not only that evidence comes in degrees, but also that sometimes imperfect evidence is the best we can do. Weak evidence is better than evidence-free guessing, and slight evidence is better than superstition. Of course, better evidence is better than worse evidence, and it is often useful to require the best evidence we can get. Nevertheless, worse evidence, at least in this sense, often is better than nothing.

The second lesson, which follows from the first, is that whether some form of evidence is good enough depends on what follows from there being sufficient (or insufficient) evidence. That was one of the important lessons from our discussion of the burden of proof in Chapter 3. It is also the lesson that emerges from the difference between lie-detection technology being good enough to put people in prison, which it plainly is not, and lie-detection technology being good enough to keep people out of prison, which it very well might be. Indeed, even more with respect to policymaking than with the truth or falsity of a particular factual hypothesis, evaluating evidence in light of its potential use and in light of the consequences of its sufficiency is vital. Evidence that is not strong enough to justify a restriction of individual liberty might be strong enough to justify a government warning, and evidence that is not strong enough to justify banning an otherwise legal product might be strong enough to justify an individual consumer’s refusal to purchase it. Not only as we evaluate lies and lying, and not only as we evaluate testimony in general, a pervasive question about evidence, whether in individual bits or in the aggregate, is not whether it is good, but whether it is good enough.