RHETORICAL FRAUDS AND SOPHISTICAL PLOYS
Ten Classic Tricks
PLATO REMARKED that arguments, like men, are often pretenders.1 Shoddy reasoning can deceive us just as people do. The classical Greeks made a special point of studying the shoddy reasoning that emerged from the simple democracies of their time; they wanted to know how large numbers of fellow citizens were being bamboozled and hoodwinked. Likewise, in modern times, the great stimulus to the study of poor reasoning has been the growth of representative democracy and the escalating force of public opinion. From the late eighteenth century onward, public opinion has increasingly influenced lawmaking. The result has been greater attention to the ways in which public opinion can be manipulated. Vast majorities in many modern states now expect their opinions about law to count for something, and, even if this expectation is sometimes illusory, the illusion is nevertheless encouraged. Public sentiment now has great power, and many individuals and organized groups have responded to this situation by trying to deceive large masses of ordinary people. Propagandists and demagogues have become serious dangers in the modern world. The rise of the modern mass media has also strengthened the sway of public opinion in politics, and the effect on logicians has been to make them ever more alert to the dangers of public sophistry.
The rise of democracy and the modern study of sophistry have gone hand in hand, and the reason is that many intelligent observers over the last two centuries have come to fear that, without some such study, democratic institutions can easily run off the rails. Working against the background of the Athenian Assembly, Aristotle produced the first known list of rhetorical frauds (his Sophistical Refutations);2 all the same, the study of sophistry in modern times actually began in eighteenth-century England when a thoroughly unrepresentative British Parliament presumed to dictate the affairs of an increasingly commercial empire. Parliament represented an elite, but the growth of British trade, backed by the British navy, weakened this elite by strengthening the hand of the middle classes.
At the same time, demands for political reform also gained momentum because of the continuing success of Newtonian science, and this affected Parliament too. As a result of Newton’s work, more people, especially intellectuals, came to believe in the power of reason as opposed to tradition—an outlook that defined what we now call the Enlightenment. The political consequence was to subject the laws and customs of the age to ever more criticism. The overall effect was enormous pressure for political change, and the reaction of those who opposed this change was to fall back increasingly on bogus arguments to divert public attention. The Old Guard defended its privileges with rhetorical ploys and slippery propositions.
The battle for Parliamentary reform played out over many decades, but in the end the period’s more radical reformers began to analyze public sophistry in the hope of counteracting it—being convinced that, if the public’s eyes could finally be opened, the world might be made a better place. The leader of this approach was the English philosopher Jeremy Bentham, who would eventually become the inspiration for a new generation of Parliamentary reformers, the “philosophical radicals.” And Bentham’s attempt to analyze sophistry has been imitated ever since.
THE BATTLE FOR PARLIAMENTARY REFORM
Britain’s Parliament in the eighteenth century was highly aristocratic. The House of Lords consisted of hereditary peers (nobles, all male, whose political privilege depended solely on birth). The House of Commons was elected from a small fraction of Britain’s male population; for the most part, only the wealthy (and only men) could vote for a member of the Commons at all. In addition, the Lords were, in those days, a coequal branch of the legislature, so their consent was necessary for any new laws.3 Parliament had already evolved a great deal since its origins in the thirteenth century, but the effect during the eighteenth century was still rule by an upper crust. There were conflicting factions within this upper crust, of course, but the great majority of citizens were shut out. Nevertheless, there was also constant agitation in the press for political reform, agitation that derived ultimately from the growing power of the middle classes. And many members of Parliament wanted that agitation suppressed.
Parliament responded to agitation and criticism by trying to keep its debates secret. It reserved the right to imprison anyone who revealed them. Parliament had long insisted on this secrecy, but in 1736 a periodical called The Gentleman’s Magazine began giving accounts of those debates anyway, after its daring publisher, Edward Cave, managed to gain access to the gallery of the Commons, along with several friends. Cave and his friends took notes on the debates surreptitiously and then repaired to a nearby tavern to work out together their version of what had been said. Next, they had a stylist rewrite their account so as to give it a lofty tone, and afterward they published the result in the magazine.
When the Commons threatened to prosecute Cave for these accounts, his magazine related the remarks of real members under fictitious names and called its reports “Debates in the Senate of Lilliput” (the government of one of the imaginary islands in Jonathan Swift’s Gulliver’s Travels). Even the literary critic Samuel Johnson contributed to this effort, and in later years several newspapers (The Morning Chronicle and The London Evening Post) joined in, giving the members’ real names instead.
Finally, in 1771, the Commons ordered the arrest of several printers for their reporting, but the Lord Mayor of London, Brass Crosby, refused to recognize the arrest warrants. The Commons then retaliated by having the Lord Mayor imprisoned in the Tower of London, but public opinion reacted in his favor, and he was released. In the end, public opposition had become so great that the Commons backed down; too many ordinary citizens had come to think that, as Parliament was determining the public’s business, so the public had a right to know about it. From the following year onward, the Commons made no further attempt to suppress the reporting of its deliberations. The modern study of sophistry grew out of this struggle, but behind this collision between Parliament and the citizenry was the growth of British trade, which had increased the wealth and numbers of Britain’s commercial classes.
As an island nation, Britain had a natural advantage over other European states in the development of sea power; states on the Continent had the additional burden of defending against overland invasion. The naval powers of Portugal and Spain had already been eclipsed in the seventeenth century by the Netherlands and France, but by the eighteenth century, the seas belonged increasingly to the British. Dutch power succumbed to a combination of British naval pressure (Britain’s Navigation Acts effectively banned Dutch shipping from British ports) and a series of threats from land armies on the Continent. As for France, the British gradually whittled down French naval strength through five periods of war that spanned the whole of the eighteenth century and culminated in Britain’s victory over a combined fleet of French and Spanish warships at the Battle of Trafalgar in 1805.4
As British commerce grew, sustained by overseas markets, so also did the numbers and influence of the commercial classes; London became the busiest port in the world, and its merchants wanted an ever greater say in how British officials managed their affairs. In addition, even the aristocracy increasingly came to support Britain’s commercial interests, in part through the enforcement of strict laws of primogeniture. This, too, strengthened the hand of the middle classes.
Primogeniture generally required that the landed estates of hereditary aristocrats be passed down intact, upon the death of the owner, to the owner’s eldest son, and this caused many aristocratic families to seek support for their other children through marriage alliances with prosperous merchants and to establish their younger sons in middle-class professions like lawyering. The result was increasing cooperation, thanks to family ties, between the middle classes and the nobility, and this increased cooperation also tended to make the laws friendlier to investment—as long as the agricultural interests of the nobility were preserved.5 (In France, by contrast, many aristocratic estates had been unprofitably subdivided, and the nobles were forbidden to engage in trade. The French nobility fell back on ever greater taxes imposed on the middle classes and the peasantry, with the result that bitter class antagonisms finally erupted in the French Revolution.)
As the middle classes gained increasing social power in Britain, so they studied with ever greater zeal the political power of Parliament. And one of the sharpest and most determined of Parliament’s critics was the son of a lawyer with much influence in the press: Jeremy Bentham.
JEREMY BENTHAM AND THE LEGACY OF THE ENLIGHTENMENT
Born in London in 1748, Bentham followed Parliamentary debates closely through much of his life, and he was also the person who popularized the famous “principle of utility,” the thesis that the whole difference between right and wrong is strictly a matter of producing the “greatest happiness for the greatest number.” Bentham’s principle now defines the philosophy of “utilitarianism.” His principle has always been controversial (different interpreters still construe it in different ways), but Bentham’s chief complaint against the laws of his time was that they favored not the greatest number, but the few. He believed British government was essentially a conspiracy of the elite at the expense of everyone else, and he devoted nearly all his rhetorical energies to changing it. He proposed many specific new laws to make Britain more democratic and more prosperous (wider voting rights, the secret ballot, limitations on hereditary privilege, prison reform, free trade, public sanitation, and the humane treatment of animals), but as a student of argumentation, he also made a catalogue of the verbal swindles and subterfuges by which clever speakers in his day had sought to turn back needed reforms and make foolish laws look just.
Bentham and his fellow reformers didn’t invent their ideas out of nothing, of course. Their efforts were actually part of a larger intellectual movement in Europe through much of the eighteenth century, a movement that had sought to apply the methods of reason and science to the solution of social problems—the Enlightenment.
Bentham’s efforts were, in many ways, a natural consequence of the Enlightenment. Earlier, in the first half of the seventeenth century, Galileo and Kepler had acted almost as if they were political subversives, carefully measuring any utterance lest they be arrested. With the explanatory success of Isaac Newton, however, large numbers of literate people had come to see science in a new way, and a further effect of this change was to see politics in a new way too. Just as Newton’s analysis had seemed to replace superstition with reason, so had many people begun to seek similar changes in politics. Laws and customs, they argued, often had no justification except the dead weight of tradition. But in that case, they asked, shouldn’t society be rationally reformed just as physics had been reformed? Since logic and evidence had given rise to a true physical science, they supposed, there might soon be a true political and social science, also supported by logic and evidence. This was Bentham’s inspiration.
Shortly after Newton’s death in 1727, the English poet Alexander Pope captured the reaction of many of Europe’s leading intellectuals in lines that he intended as an epitaph for Newton’s grave in Westminster Abbey:
Nature and Nature’s laws lay hid in night:
God said, Let Newton be! and all was light.
This Newtonian synthesis (combining Galileo’s terrestrial laws with Kepler’s celestial ones) eventually became the rational inspiration for many eighteenth- and nineteenth-century reformers who wanted to make politics and society as reasonable as physics.
Many important reforms were propelled by this impulse. For example, official prosecutions for witchcraft died out, ending the so-called witch craze, as more judges came to believe that action at a distance through spells and incantations was physically impossible. More generally, the disposition to analyze and challenge became increasingly fashionable, not only in literate sections of European society, but also beyond Europe. (For example, it motivated the great Bengali reformer Raja Rammohan Roy, whose campaign for social reform in India in the early decades of the nineteenth century helped outlaw suttee, the burning of widows. Roy advocated religious tolerance, helped to found India’s first indigenous-language newspapers, published many essays seeking a common ground among the world’s different religions and ethical teachings, and established new schools for children.)
This new, critical approach that challenged tradition was especially attractive to literate members of the middle classes, but it also rubbed off on a progressive portion of the European nobility and even on an occasional despot. In Eastern Europe, the “enlightened despots,” rulers like Frederick II of Prussia and Catherine the Great of Russia, used the Enlightenment’s egalitarian ideals to weaken their traditional rivals, the lesser, hereditary nobles. (Many of their “enlightened” policies were actually self-serving grabs for power; these rulers faced no real insurgent middle classes in their own domains, and so the true effect of their reforms was to centralize their own authority.)
The real centers of the Enlightenment, however, were London and Paris, and in Paris the movement had already found expression, even while Bentham was still a boy, in the “philosophes,” the French social thinkers who sought sweeping political reforms and who, in many instances, joined with Denis Diderot in producing the Encyclopedia, the greatest publishing venture of the century. (The Encyclopedia’s purpose, in Diderot’s words, was to “gather knowledge now scattered over the earth” and “make it accessible to future generations.”6 The Encyclopedia began to appear in installments in 1751, and it took another fourteen years to complete, with articles stressing science, technology, and a critical analysis of contemporary society.) These were the assumptions and attitudes that Bentham had already absorbed when he finally turned his analytical powers against a series of dead traditions in his own country. He attacked the structure of an unreformed Parliament, and his aim was to usher in a more democratic future.
BENTHAM’S BOOK OF FALLACIES
Sophistry has many applications, of course, not just to the making of laws. Nevertheless, the role of public opinion in the battle over Parliament soon made the question of sophistry more sensitive, and this is what gave Bentham his audience. In earlier times, after the decline of the ancient Greek democracies, politics had excluded the great mass of the population; peasants and serfs had never been consulted in politics and had never expected to be. But all this changed as England became commercial. Periodicals and newspapers multiplied, the press soon functioned as a “fourth estate” (a politically powerful social stratum, in addition to the clergy, the nobles, and the commons), and reformers became bold. Freedom of dissent had already been expanding in British society for more than a century—religious and political dissenters were already common by the mid-1600s—and Britain’s security from land invasion had also made it harder for the British government to arrest dissidents on the pretext that they threatened national security, as often happened on the Continent.7 The upshot was that Bentham soon found himself trying to influence the battle for public opinion in a new way: cataloging false and fraudulent reasonings. Like Aristotle, he composed a list, but his list was much longer than Aristotle’s, and it derived from his firsthand observation.
So far as Bentham’s personal life was concerned, he was startlingly eccentric. He rarely finished the books he started, and so his masterwork on sophistry, his Book of Fallacies, had to be pulled together from loose papers by several admirers: Pierre Étienne-Louis Dumont, Peregrine Bingham, Francis Place, and John Stuart Mill. (Working from Bentham’s manuscripts and with Bentham’s permission, Dumont began publishing French versions of Bentham’s works, heavily edited, in 1802, and in 1816 he produced a work that contained much of The Book of Fallacies. One result was that Bentham was initially far better known to French readers than to English ones. In 1824, an English version of The Book of Fallacies appeared, mainly the work of Bingham, but with help from Place and Mill.)
More eccentric still was Bentham’s decision to be converted after his death into a mummy. As his death approached in 1832 (the same year the great Reform Bill made Parliamentary representation more democratic),8 Bentham asked himself how he could still be “useful.” The question of usefulness had always appealed to his utilitarian instincts, but he finally came to the view that his dead body could be quite useful in at least two ways: first, by serving as a cadaver to be dissected for the advancement of medical science (his body was dissected after his demise before a group of invited guests, many of them former friends), and, second, by providing a continuing inspiration to future generations of rational reformers if embalmed and exhibited publicly. His corpse was subsequently exhibited at University College, London, where it still resides, dressed in Bentham’s original clothing and sitting in a chair in a large wooden box. (The head on display is made of wax; the embalming of the real head proved unsuccessful.) Bentham’s last destination is surely bizarre, but his example reinforces a point much stressed by his friend and intellectual heir John Stuart Mill in On Liberty: eccentric people are sometimes our most creative citizens.9
To return to sophistry: Sophistry is the art of deluding an audience with arguments that one knows to be illogical or misleading (a practice we also considered, in chapter 2, among the classical Greeks). And each of its frauds can be called an individual “sophistry” or “sophism.” On the other hand, when an illogical argument is used innocently, without the speaker’s being aware that it is, in fact, illogical, then logicians call it a fallacy, a term that implies nothing about the speaker’s motives. Thus each of these logical mistakes can be a premeditated sophistry or merely an innocent fallacy, depending on the speaker’s intentions. But how many sophistries are there, and what are their basic forms?
As it turns out, in some respects, there are more sophistries than we can count, but the fortunate thing is that there are only a few classic frauds; the rest are usually just variations on the old themes. Contested issues come and go, but the greatest sophistries are practically eternal. And the best defense against them is simply to become familiar with them. A trick that at first seems baffling will become obvious once it is studied, and in time such devices will start to seem less like tragedy, more like farce.
We have compiled a list of additional fallacies and sophistries, in alphabetical order, in our Appendix, but for the present we offer ten classic frauds. Several come from Bentham (who liked to give fallacies unusual names of his own invention), and one comes from the twentieth-century American journalist Richard Rovere. Others come from writers who are now unknown to history.
1. The straw man. A speaker or writer attacks a mere caricature of his opponent’s view rather than the view itself, just as a timid fighter, rather than facing a real opponent, attacks a man of straw. (“Do you support the Republicans and fascism?” “Are you for the Democrats and godless Communism?” Instead of running against the Republicans or Democrats as they really are, you run against the Nazis or the Bolsheviks.) The idea is to attribute to your opponent a silly proposition and thereby cast him in the role of defending the indefensible. A common method is to attack “those who say . . .” without naming the opponents directly. “I’ve never endorsed the theory of those who say . . .” The theory then turns out to be something so stupid that no one would say it.
Another common method is to exploit verbal ambiguities. Take the expression “left-liberal.” In ordinary speech a “leftist” is further left than a “liberal”; thus the advocate calls his opponent a left-liberal when he wants the accuracy of the liberal label and the alarming connotations of the leftist one. He runs together two different groups. The same device can be used by many different factions; “right-wing conservative” is the same trick used by the other side.
Again, the terms “right-wing” and “left-wing” have a similar effect when applied to people slightly to the right or left of center; such language suggests that those near the center have something in common with those at the extremes. The terms “extreme” and “extremist,” though often accurate, can also be used inaccurately to erect a straw man.
The straw man is usually an exaggeration, but it can also be an amalgam of disparate notions held together by no common feature other than the speaker’s antipathy. Hitler’s “international Jewish conspiracy” was something of this sort. A variety of different ideas, political trends, or even artworks are run together and treated as parts or instances of some great Evil Thing. The expression “late capitalism” sometimes plays this role for speakers on the left—late capitalism turns out to include almost every bad thing under the sun—and so do some uses of the term “globalization.” (“Late” is tacked on to “capitalism” to suggest that the speaker somehow understands the timetable according to which one economic system will suddenly give way to another. To the extent that this understanding is bogus, the expression also constitutes pretentious diction, which we discuss later.)
For speakers on the right, a sometime favorite is “secular humanism” or “secular progressivism”; such phrases can have a precise meaning, but in many contexts they denote a broad array of different targets: communism, socialism, liberalism, Darwinism, hedonism, atheism, and fluoridated water. The idea is to use a word that means seven different things and show that your opponent advocates one of these things; in the minds of the audience, he now advocates all seven. You give the audience a bogeyman.
2. The vague generality. The disputant defends a specific policy by lecturing the audience on the need for a large, general object. Bentham illustrated this fallacy by describing a man who defends a bad law by pleading for “law” in general (as in, “Gentlemen, we must have law, we must have law!”).10 The particular law he defends turns out to be a monstrosity.
Similarly, a speaker defending an increase in military spending calls for “security” in general while his opponent pleads for “peace” in general. (“Give peace a chance.”) Again, someone defending the morality of a specific government policy lectures the audience on the need for Morality, for Government, and for Policy. The tactic consists in deflecting attention to a general abstraction when the real issue is a specific plan. In addition, the vague generality can be used negatively, as an object to be campaigned against: “To vote for us is to vote against war, poverty, and racism.”
3. The multiple untruth. The disputant states a series of falsehoods, or many aspects of the same falsehood, in such quick succession that the opponent can’t refute them without confusing the audience. The tactic was named by Richard Rovere, who wrote for the New Yorker magazine, to describe a device used by U.S. Senator Joseph McCarthy. Rovere explains the tactic this way:
The multiple untruth need not be a particularly large untruth but can instead be a long series of loosely related untruths, or a single untruth with many facets. In either case, the whole is composed of so many parts that anyone wishing to set the record straight will discover that it is utterly impossible to keep all the elements of the falsehood in mind at the same time. Anyone making the attempt may seize upon a few selected statements and show them to be false, but doing this may leave the impression that only the statements selected are false and that the rest are true. An even greater advantage of the multiple untruth is that statements shown to be false can be repeated over and over again with impunity because no one will remember which statements have been disproved and which haven’t.11
In effect, McCarthy would hurl a series of falsehoods or many aspects of the same falsehood in such quick succession that his opponents would only confuse the audience by trying to refute them. He would bombard the U.S. Senate with many accusations at once. The public was often confused by both sides in these disputes, but the sheer volume of the charges suggested that they had to contain some truth. Throw enough mud, and some of it sticks. (McCarthy turned out to be a master of many sophistries, but his folly lay in never really seeing how easily his methods could be exploded by a skilled opponent; McCarthy finally met that opponent in the person of Joseph Welch, a trial lawyer of great experience who effectively destroyed McCarthy’s career on national television during the so-called Army-McCarthy hearings of 1954.)
4. The ad hominem. The disputant offers an assertion about the advocate of an idea as a reason for rejecting the idea itself. (The expression “ad hominem,” from Latin, means “to the person.”) The ad hominem takes several forms, some rather subtle, but the most obvious is direct insult (“Communist!” “Fascist!” “Terrorist!” “Reactionary!”). The speaker implies that the opposing idea must be bad merely because it is defended by a bad person. A subtler form is the so-called circumstantial ad hominem, which discounts the opponents’ views because of their backgrounds or their other opinions. (“This idea is typical of a foreigner.” “The chief advocate of the new tax bill hasn’t paid taxes in years.” “Opponents of abortion plead the case of the unborn, but they do nothing for the children of the poor.” “Defenders of abortion plead the case of the poor, but they do nothing for the unborn.” “The architects of the plan are just people who want to feel morally superior to the rest of us.”) The aim throughout is to talk about the opponents instead of the specific issue. The shrewdest ad hominem is perhaps the oblique version, which consists not in dwelling on the opponent’s character but in praising one’s own. (“As a person deeply committed to social justice, I believe . . .”; “As a truly religious person, I suggest . . .”) The idea is to wear your heart on your sleeve. Tell an audience you are pure and have an opponent, and many in the audience will suppose the opponent must be impure. In all its forms, the ad hominem consists in offering an assertion about the champion of an idea as a reason for rejecting the idea itself. (In slanted news coverage, the ad hominem often consists not in covering issues but in dwelling on the personal foibles of those who speak about issues. During an unpopular war, for example, the main story becomes not the war itself but the opinions and idiosyncrasies of those who defend or attack it.)
It is not hard to see why the ad hominem works. It works because it resembles a perfectly rational attempt to assess expert testimony. In matters requiring special experience or training, we often defer to experts. When we go to a doctor, we usually take the doctor’s word for what ails us; most of us lack the medical knowledge needed to supply our own diagnosis. Yet our trust in the doctor’s word depends crucially on our trust in the doctor’s character. We might reject a diagnosis if we learned that the doctor was somehow incompetent or dishonest. As a result, the soundness of the doctor’s character is logically relevant to the soundness of the doctor’s testimony, and the appeal of the ad hominem comes from confusing this situation with an attempt at analysis.
The two situations are different. When we rely on another person’s testimony, that person’s character is relevant. But when we assess that person’s argumentation, the person’s character is irrelevant—a distinction that is easier to see, perhaps, in a courtroom. When a witness gives testimony in court, all parties to the controversy are usually permitted, through their lawyers, to examine the witness’s personal character as part of determining whether the testimony is credible. Only if the witness is honest and competent is the testimony reliable, and consequently any trait of character that reflects poorly on this witness’s honesty or competence becomes fair game. On the other hand, when a lawyer then tries to infer what follows from the testimony, no one is normally permitted to examine the lawyer’s character, because the lawyer isn’t testifying to what he or she saw or what he or she knows; the lawyer is only acting as an advocate—a mouthpiece for a line of reasoning. As a result, the lawyer’s character is irrelevant.12
Then why does the ad hominem occur at all? Because in ordinary life, we often find ourselves straddling these two situations. When we hear a person analyze an issue that we only partly understand, we can often follow the person’s reasoning for a while, but then we begin to drift. We think to ourselves, This person obviously knows the subject—the remarks are smart and informed—but now this person is going over my head. Perhaps, therefore, I should simply take his word for it. Ah, but in that case, I must first ask myself, Is this person really as smart as he seems, and can I trust him?
From this moment forward, we are no longer treating the speaker as advocate. Instead, we have begun to see the speaker as expert witness, and, as with all witnesses, the speaker’s believability now depends on the speaker’s discernment and integrity. In many instances, this shift in approach is the only practical course, but on other occasions it is merely a symptom of laziness. What the ad hominem does, then, is encourage the audience to be lazy. The ad hominem encourages the audience to forgo the hard work of analyzing an argument that is still within its reach and to judge the question on personalities alone. In consequence, the device will be most effective in just those cases where an audience is most inclined to be lazy—and least inclined to be diligent.
(Bentham discusses the ad hominem under the heading of “vituperative personalities” and characterizes it in the following way: “In bringing forward or supporting the measure in question, the person accused has a bad design; therefore the measure is bad. He is a person of bad character; therefore the measure is bad. He is actuated by a bad motive; therefore the measure is bad; he has fallen into inconsistencies; therefore the measure is bad. He is on a footing of intimacy with this or that person, who is a man of dangerous principles and designs, or has been seen more or less frequently in his company . . . therefore the measure is bad.” Bentham regarded all such methods of inference as foolish. He adds, “In proportion to the degree of efficiency with which a man suffers these instruments of deception to operate upon his mind, he enables bad men to exercise over him a sort of power the thought of which ought to cover him with shame.”)13
5. The sham distinction or distinction without a difference. The speaker favors A and opposes B without giving any idea how A and B are different. (“Money should buy influence, but it shouldn’t buy votes.” “We allow free discussion but not antistate agitation.” Bentham gives this example: “I believe in the liberty of the press, but not the licentiousness of the press.”)14 As a result, the speaker is able to turn aside what would otherwise be telling objections by pretending that the objections apply to something else.
6. The red herring. The disputant uses an irrelevant issue to distract attention from the real one. The device gets its name from the supposed practice of drawing a bag of smoked fish across a trail to distract tracking dogs.15 (“I believe in our platform, I believe in our candidate, but most important I believe in God, in freedom, and in the greatness of America.”) The disputant appeals to the noble or the sublime; the audience supposes that the disputant’s specific plan must be sublime too. (Another example: “Yes, our company dumps toxic chemicals into the drinking water; still, liberty and free enterprise are precious rights, and think what will happen if we surrender these freedoms to an all-powerful state!”)
7. The genetic fallacy. The speaker attacks an idea because of its origins (its “genesis”) or because it was embraced in the past by people who are now repudiated. (“My opponent in this election wants to nationalize the railroads; Lenin had the same idea.” “Our adversaries would enlarge the military; that was Hitler’s plan.”) The genetic fallacy is closely related to the ad hominem but consists in attacking something other than the idea’s current advocates. It suggests that the idea is historically associated with the wrong people. The guilty association can also be established by labeling. (“This is merely a bourgeois idea,” “a Russian idea,” “a French idea,” “a European idea,” “a liberal idea,” “a reactionary idea,” “a socialist idea,” “a Western idea.”) Where the idea comes from is invoked as a reason for rejecting it.
8. Superfluous displays of learning. The disputant invokes irrelevant statistics, unnecessary references to great names or events, unneeded mathematical symbols, esoteric terminology, or foreign words and phrases in place of their ordinary English equivalents. The aim is to mystify. This, too, was a favorite tactic of McCarthy, who was particularly fond of irrelevant documents. Rovere remarks, “Photostats and carbon copies and well-kept newspaper clippings have, I think, an authority of their own for most people; we assume that no one would go to the bother of assembling them if they didn’t prove something.”16 College students use this tactic when they engage in “data dumping”—loading an essay with irrelevant information so as to make it seem impressive. Professors use the same trick when they pad an academic article with special symbols, scientific findings, or historical details they know to be irrelevant to the subject.
9. The ad populum or bandwagon. The speaker argues for a proposition on the grounds that most people already believe it. (“The best proof of our position is that the vast majority of Americans agree with us.”) “Ad populum” means “to the people.” The ad populum is no fallacy when the people who already believe are somehow better informed than the audience to be persuaded. (“Nine out of ten doctors recommend . . .”) In that case, the argument is much like an appeal to the weight of expert testimony. But it is indeed a fallacy when the audience has just as much information as the believers. (Consider: if the audience still needs more information to decide, then so do those who already believe, since they have the same information, and in that case the opinion of the believers proves nothing. But if the audience doesn’t need more information, then the opinion of the believers is superfluous anyway; the audience can already decide for itself. The ad populum assumes that the audience is too lazy to think the matter through.)
In contemporary politics, the ad populum is sometimes advanced by establishing a “message of the day,” which is an opinion to be repeated simultaneously by all operatives of a political organization. As a result, many people will come to believe the opinion merely because they hear it widely repeated, quite apart from whether there is any evidence for it. (The tactic can be justified if it calls attention to an idea that might otherwise be drowned out in a cacophony of distractions. But the effect is different when the audience is already familiar with the idea because the idea is already controversial. In 2004, the documentary film Outfoxed revealed that two senior executives of the Fox News Channel, Roger Ailes and John Moody, had been crafting daily memos for circulation to all the network’s producers, anchors, and commentators, often telling them what collective opinions they should express and defend on the controversial issues of each particular day. In many cases, the resulting method of persuasion was ad populum. Earlier, Ailes had worked in the political campaigns of Ronald Reagan, among other candidates, where a “message of the day” had become basic strategy.)17
In addition, to believe in majority rule is not, in itself, to embrace the ad populum fallacy. A majority may exercise authority without necessarily being right, and an enlightened democracy can still encourage its citizens to base their decisions on good reasons. The ad populum, by contrast, asks them to base their decisions on the opinions of others—without asking whether those opinions rest on good reasons.
10. The neologism. A neologism is a “new word,” meaning a word that has never been heard before, or, alternatively, an old word used in a new way. Nearly every major innovation brings a need for new words, and thus many neologisms are perfectly reasonable. But new words can also disguise a speaker’s confusion or lack of new ideas. A speaker devoid of original thought is often tempted to concoct an original vocabulary.
There are many neologisms today, and many are useful additions to modern language, but one of our personal favorites comes from the French thinker Jacques Derrida in a talk that he gave in 1966 to an assembly of professors in Baltimore. During this talk, delivered in French, Derrida made this puzzling remark: “The Einsteinian constant is not a constant, is not a center. It is the very concept of variability—it is, finally, the concept of the game.”18 Exactly what this means, we don’t know, but, to interpret it, we fall back on a bit of commentary supplied by the physicist Steven Weinberg, who is a Nobel laureate. Weinberg writes,
When . . . I first encountered this paragraph, I was bothered not so much by the obscurity of Derrida’s terms “center” and “game.” I was willing to suppose that these were terms of art, defined elsewhere by Derrida. What bothered me was his phrase “the Einsteinian constant,” which I had never met in my work as a physicist. True, there is something called Newton’s constant which appears in Einstein’s theory of gravitation, and I would not object if Derrida wanted to call it “the Einsteinian constant,” but this constant is just a number (0.00000006673 in conventional units), and I did not see how it could be the “center” of anything, much less the concept of a game.
So I turned for enlightenment to the talk by Derrida from which [this paragraph was taken]. In it, Derrida explains the word “center” as follows: “Nevertheless . . . structure—or rather, the structurality of structure—although it has always been involved, has always been neutralized or reduced, and this by a process of giving it a center or referring it to a point of presence, a fixed origin.” This was not much help.
Lest the reader think that I am quoting out of context, or perhaps just being obtuse, I will point out that, in the discussion following Derrida’s lecture, the first question was by Jean Hyppolite, professor at the Collège de France, who, after having sat through Derrida’s talk, had to ask Derrida to explain what he meant by a “center.” . . . It was Hyppolite who introduced “the Einsteinian constant” into the discussion, but while poor Hyppolite was willing to admit that he did not understand what Derrida meant by a center, Derrida just started talking about the Einsteinian constant, without letting on that (as seems evident) he had no idea of what Hyppolite was talking about.19
The danger of esoteric terminology is not only that it fools the audience but also that it makes us fool ourselves by persuading us that we are being profound when, in fact, we are saying little. This point was much stressed by the English writer George Orwell, especially in his essay “Politics and the English Language.” Orwell remarks of such phrasings, “They will construct your sentences for you—even think your thoughts for you, to a certain extent—and at need they will perform the important service of partially concealing your meaning even from yourself.” He adds, “If you simplify your English, you are freed from the worst follies of orthodoxy. You cannot speak any of the necessary dialects, and when you make a stupid remark its stupidity will be obvious, even to yourself.”20 (Derrida’s critics sometimes complain that his writings are “meaningless” in the sense of being unintelligible, but on the whole this criticism is mistaken. If his writings were literally unintelligible, they would have no impact at all. Instead, the problem is not an utter lack of meaning but an ambiguity of meaning, which is a different thing.)
On the whole, when obscure language misleads, it does so by describing familiar things in unfamiliar terms. The ideas are commonplace, but the language used to express them is uncommon. As a result, the audience supposes that the ideas must be uncommon too. When the vocabulary is unusual, the result is pretentious diction. (Neologisms are, thus, a common form of pretentious diction.) On the other hand, when the speaker uses familiar words but strings them together so as to imply an obscure analogy, the result is a vague metaphor. Derrida’s remarks, as quoted by Weinberg, include several vague metaphors, especially “center,” “game,” “point of presence,” and “fixed origin.”
In effect, Derrida begins with what is actually a commonplace: language is often ambiguous. We see this, for example, in poetry, where a common aim is to evoke multiple meanings. We also see ambiguity in daily life; the same words can mean different things to different people, and even the same people use words differently on different occasions. From these facts, it seems obvious that words get their meanings from social conventions that are largely arbitrary. In addition, many words are not only ambiguous but vague.
These points are hardly deep, but what Derrida does is give a series of novel phrasings for them. Thus, instead of saying that words are often vague or ambiguous—and ambiguous in different ways—he tells us they lack a center, a point of presence, and a fixed origin; and he says that using them is a game in that both languages and games have rules. He also adds that, between words and their meanings, there is always a certain “slippage.”
Yet all these utterances are really just new ways of describing the old phenomena of ambiguity and vagueness. We are getting old wine in new bottles. That Derrida’s terminology is itself ambiguous, applying equally to different sorts of ambiguity and vagueness, also helps to make it seem more profound; an expression like “slippage” now applies not to one sort of ambiguity but to several, and so the unwary listener seems to be entering a secret world of startling conundrums.21
Many fallacies like this are ways of advancing a sham insight, meaning a commonplace that is made to look penetrating. (We discuss sham insights somewhat more in our appendix.) Bentham’s primary concern, however, was always with sophistry in politics; he feared sinister interests would throw dust in the eyes of the public.
For Bentham, detecting sophistry wasn’t simply a matter of clear thinking; it was a basic part of responsible citizenship. He thought every citizen of the modern world had a fundamental duty to become acquainted with the common forms of rhetorical fraud and to learn the most effective ways of exposing them. Where logic is abused, the people will be abused. But where the people are vigilant, they will also need to be logical. For Bentham, the battle against sophistry was never ending. And in today’s world, with new problems, new questions, new media, and new forms of rhetorical misrepresentation, his intellectual heirs have continued the struggle ever since.