7. Going viral

Malware of the mind

In the discussion so far, I have been helping myself to what is sometimes called an “epidemiological” approach to human culture.1 Epidemiology in the strict sense is the study of public health or of diseases in populations. What is crucial about this perspective is that it does not look at the specific etiology of disease, the sort of thing that doctors are interested in: how the patient caught it, what the symptoms are, how it can be cured. Instead, it adopts a broadly statistical perspective, looking at how diseases come and go in populations. From this perspective, things like diet, sanitation, international travel, commuting time, social inequality, and hundreds of other unexpected factors may all turn out to have an important impact on public health that no one would have thought of just by looking at individual cases.

Thinking epidemiologically about culture obviously involves a set of analogies to biological phenomena (such as the comparison to viruses in the previous chapter, or talk about “memes” as units of cultural reproduction—equivalent to “genes” in the biological realm). Some people find this off-putting. With the development of the internet, however, many of these analogies have become irresistible. It is common now to talk about the latest internet “meme,” or of a video having “gone viral.” The origin of these expressions is quite revealing. The initial extension of the virus metaphor predated the internet, and was used to describe the way in which malicious programs could hijack a computer. The key feature of a biological virus is that it has no way of reproducing without a host. It contains bits of DNA or RNA, which it uses to reprogram the enzymes in a host cell, tricking it into producing copies of the virus. A computer virus works in very much the same way. Early Apple Macintosh computers, for example, were quite vulnerable to viruses, because they were among the first mass-market machines to use a graphical user interface. When you put a disk in the drive, the computer had to search for and run a small program found on the disk, in order to “mount” it to the desktop. It was easy to rewrite that program, inserting some malicious code that would be loaded into the computer’s memory and would cause the machine to copy that same code into the boot sector of any subsequent disk inserted into the machine. The virus metaphor was impossible to resist, since the mechanism of reproduction was almost identical: anyone who stuck their disk into the drive of an infected machine would get infected, and anyone who then stuck that infected disk into another machine would infect it in turn, along with every disk it subsequently touched.
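
For readers who want the mechanism spelled out, here is a minimal toy simulation of that disk-borne propagation loop. It is purely illustrative: the numbers of machines, disks, and rounds are invented assumptions, and the code models only the logic described above, namely that an infected machine writes the virus onto every disk it sees, while an infected disk hijacks every clean machine it is inserted into.

```python
# Toy simulation of disk-borne virus propagation (illustrative only;
# all quantities are invented assumptions, not figures from the text).
import random

random.seed(0)

NUM_MACHINES = 50
NUM_DISKS = 200
ROUNDS = 30

machine_infected = [False] * NUM_MACHINES
disk_infected = [False] * NUM_DISKS
machine_infected[0] = True  # a single infected machine to start

for _ in range(ROUNDS):
    for disk in range(NUM_DISKS):
        machine = random.randrange(NUM_MACHINES)  # this disk gets inserted into a random machine
        if machine_infected[machine]:
            disk_infected[disk] = True            # the infected machine writes the virus onto the disk
        elif disk_infected[disk]:
            machine_infected[machine] = True      # the infected disk hijacks the clean machine

print(sum(machine_infected), "of", NUM_MACHINES, "machines infected")
print(sum(disk_infected), "of", NUM_DISKS, "disks infected")
```

Starting from a single infected machine, nearly the whole population of disks and machines ends up infected within a few dozen rounds, which is part of what made the metaphor so hard to resist.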

Viruses were fairly common even before the internet created new possibilities for transmission. The major contribution made by the internet was the realization that bits of nuisance code could get transmitted not just by infecting the machine, but also by infecting the user. One of the earliest examples of this was the “$250 cookie recipe” email. Back when the internet was young, long before there was such a thing as spam, people nevertheless found themselves receiving multiple versions of nuisance email from friends and family. One of the most common was a story supposedly from a person who had been tricked into purchasing a cookie recipe from the department store Neiman Marcus for $250, and so, in order to avenge himself, was sharing the recipe with as many people as possible. The email began, “This is a true story … Please forward it to everyone that you can … You will have to read it to believe it…,” which is typically not an instruction that anyone would feel obliged to obey, except that what followed was a not entirely unbelievable story about why you should forward the email to as many people as possible.

What was noteworthy about the email was that the mechanism through which it was propagated was almost identical to that of a computer virus. Except that now, instead of a bit of malicious code hijacking the computer and tricking it into reproducing the code, this was a malicious email hijacking the brain of the user and tricking him into reproducing the email. The mechanisms were so similar that it was impossible to resist the analogy: so people began to talk about “viruses of the mind” or of “wetware” (rather than “software”) viruses. This lent force to the epidemiological perspective. For example, people began to realize that the prevalence of a belief, or a “meme,” in the population might not be due to the intrinsic credibility of the belief, but simply to the power of its associated reproduction mechanism. Even if only 5 percent of recipients of the cookie-recipe email believed the story, so long as each believer sent it to twenty or more people, the email could continue to circulate (although eventually the population would develop immunity—those who had been infected once and transmitted it to others would be unlikely to do so again2).
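
The arithmetic behind that 5 percent figure is worth making explicit: each recipient produces, on average, 0.05 × 20 = 1 new recipient, which is just enough to keep the email in circulation. The sketch below simulates this dynamic; the population size, the initial group of recipients, and the fifty-generation cap are invented assumptions, not figures from the text.

```python
# Sketch of the chain-email dynamic: 5 percent of recipients believe the story
# and each believer forwards it to 20 people; anyone who has already received
# it is treated as "immune" and will not forward it again.
# Population size and starting group are invented for illustration.
import random

random.seed(1)

POPULATION = 100_000
BELIEF_RATE = 0.05
FORWARDS = 20

ever_received = {random.randrange(POPULATION) for _ in range(100)}  # initial recipients
current = set(ever_received)

for generation in range(1, 51):
    believers = [person for person in current if random.random() < BELIEF_RATE]
    next_wave = set()
    for _ in believers:
        next_wave |= {random.randrange(POPULATION) for _ in range(FORWARDS)}
    current = next_wave - ever_received   # immunity: prior recipients don't count again
    ever_received |= current
    print(f"generation {generation}: {len(current)} new recipients")
    if not current:
        break
```

With an effective reproduction number of roughly one, the email neither explodes nor dies out right away; it drifts along at about the same level until immunity (or bad luck for the believers) finally extinguishes it.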

Since that time, the contagion metaphor has become ubiquitous, so that we talk about things going “viral” on the internet, or of a “viral” marketing campaign. The term here refers simply to the propagation mechanism—if one person sends a link to two friends, who both send the link to two more friends, and so on, completely obscure videos or web pages can suddenly attract the interest of millions of people, in a completely decentralized way. Calling these memes is fair enough, but the term viral is sometimes a bit of a misnomer, since it is central to the operation of a virus that it be able to trick its host into reproducing it. In the case of internet videos, there is often no trickery involved. What’s striking about genuine mental viruses is that—like the cookie-recipe email—they are false, yet you can’t get rid of them even with concerted effort. (In fact, there’s probably someone out there right now reading that very email, thinking, “Wow, I can’t believe they charged him $250 for a cookie recipe! I’m going to send this to everyone I know.”)

As far as mental viruses are concerned, there is nothing particularly special about the internet. The kind of infections you get there can be found in any human culture—it’s just that on the internet, the propagation speed is so high that one can see dramatic effects in a very short time. But cultural memes have the same dynamic. In a sense, culture is just a big, slow version of the internet. “Urban legends,” for instance, are a lot like chain-letter emails—if they can convince people to retell them, then they can persist forever. And some of them can be quite noxious. Consider, for instance, the Protocols of the Elders of Zion, an anti-Semitic tract first published in 1903, which alleges a vast Jewish conspiracy to achieve world domination. Although the original Russian text was exposed as a forgery in the early 1920s, the book has proven almost impossible to get rid of. The primary reason is that if the secret conspiracy alleged in the document were real, then propagating the belief that the document was a forgery would not only be in the interests of the “council of Elders,” but also well within its power. So for those who want to believe in it, the document itself provides a plausible explanation for why most other people do not.

This makes the book a perfect instance of those mental viruses belonging to a subgenre known as “conspiracy theories.” As we have seen, these all live within the sheltered ecosystem provided by confirmation bias in human judgment. They withstand rational scrutiny because people fail to “think the negative” and so ignore the need to rule out other hypotheses. Furthermore, given that the human pattern detector is calibrated to go off too easily—think of all the patterns people saw in the random song sequence of their iPod shuffle—it’s easy to see how these theories could be attractive. Taking in the scope of world events and the trajectory of human development, it is possible that history is just one damn thing after another. Or it could be the unfolding of some complex plan, orchestrated by a mysterious puppet master. Our natural inclination is to think that it is the latter and to ignore completely the need to rule out the former.

Another very common type of mental virus is superstition, particularly of the magical variety. Among Pascal Boyer’s studies, some of the most interesting involve his efforts to figure out why some fanciful beliefs capture people’s imagination and therefore tend to get reproduced despite their implausibility. The Uduk of the Sudan, for example, believe that certain ebony trees listen to and remember the conversations that people hold in their shade.3 This is not particularly believable, yet there is something irresistible about it. (In fact, now that I’ve mentioned it, you will probably never forget it.) Boyer points out that when we are given information about the world, we tend to slot things into a relatively small number of categories, which then provide us with a large number of assumptions about how objects of that type will behave. For instance, when told that a walrus is an animal, even young children will immediately assign to it all of the typical characteristics of an animal (that it must breathe, that it can have babies, or that if you cut it in half it will die). Boyer observed that popular superstitions tend to involve single-point deviations from the standard set of expectations. He tested the hypothesis by putting together a collection of fanciful beliefs, some of them real, some of them invented. Some violated expectations in only one dimension, whereas others violated multiple expectations. He then told these stories to experimental subjects and brought them back a few days later to see what they remembered. The mundane beliefs were easily forgotten, as were those that violated too many expectations. But the ones that involved a single-point deviation from expectations seemed to exercise a peculiar fascination and were more easily remembered.4

One can see this structure in many popular superstitions. For instance, ghosts and spirits are taken to be immaterial beings, which gives them the ability to do things like pass through walls. And yet they seldom sink through the floor. Indeed, they are subject to all the usual laws of the universe save one (for instance, they are obviously affected by gravity, otherwise they would be left behind by the earth as it hurtles through space at incredible speed). They are also taken to have all the properties of a standard subject—for example, they remember things that they have seen, they get angry and sad, and so forth. This is, in Boyer’s view, what makes these ideas attractive. When a story violates expectations, it attracts our interest, partly just for its novelty value. But if it violates too many expectations, we feel that we don’t understand the rules, and so we cannot make any inferences or predictions. It’s when a story conforms to all our standard expectations except one that we become genuinely intrigued.

From this, it is not difficult to see how superstition evolves into magical beliefs that are, in turn, taken seriously. Getting remembered is the first step on the road to being believed. We all suffer to varying degrees from a problem called source amnesia, where we have trouble remembering how we came to know something. To take just one example, almost totally at random, I happen to know that the Polish city of Gdansk was once called Danzig. How I came to know that, I haven’t the faintest idea. And yet if someone were to say to me “I’m planning a trip to Danzig,” I would automatically say, “You probably shouldn’t call it that when you get there,” despite not having any way of sourcing the information. So if I were living in the Sudan and someone said to me, “I’m going for a picnic under that ebony tree,” I might automatically say, “Watch out—some of those ebony trees like to eavesdrop on conversations,” without really knowing how I know this. As a matter of fact, I don’t really know this, but that doesn’t matter so much, because if the person I’m speaking to is a bit credulous, he will now take it to be true, based on my authority.

This may be a bit simplistic. In fact, the central mechanism that turns stories into beliefs is repetition. Because our memory retrieval system is based on association, and because repetition builds associations, repetition makes certain ideas spring to mind more readily than others. And because we are used to forgetting exactly how we came to know something or where we learned it, we will often treat a belief that springs to mind as true simply because it springs to mind so easily. The basic set of relationships is very well established in experimental psychology: “The more times one is exposed to a particular statement, the more one is likely to believe the statement to be true. This relationship between repetition and perceived truth is mediated by familiarity; repetition increases familiarity and, in turn, familiarity is used as a heuristic for determining the truth of a statement.”5

This may seem innocuous, but it is fraught with consequence. The core mechanism was well understood, implicitly, long before it was demonstrated experimentally. Consider, for example, the dead-on accuracy of Joseph Goebbels’s reflections on propaganda. Goebbels said that “propaganda must … always be essentially simple and repetitive. In the long run basic results in influencing public opinion will be achieved only by the man who is able to reduce problems to the simplest terms and who has the courage to keep forever repeating them in this simplified form, despite the objections of the intellectuals.”6

Indeed, one might even define an intellectual as “a person who thinks that beliefs should be assessed on their merits.” What the epidemiological perspective reveals is that a belief that is deficient with respect to its epistemic merits may nevertheless attract widespread allegiance if it has other, reproductive merits. Such a belief need not be rational, so long as it is contagious. Thus if the person who holds the belief feels obliged to “spread the word,” or if the idea tends to get stuck in one’s head like a pop song, or if the idea just “feels right” because it coheres with an intuition, or if people can’t see the problem with it because of confirmation bias, then that idea can enjoy widespread popularity, despite its failure to withstand rational scrutiny.7

These are problems that can be found within any culture. There is, however, reason to think that cultural contact and exchange may be resulting in some of the most virulent ideas being transmitted back and forth—like diseases and addictive substances—and beginning to pool within the major global civilizations. These are the superbugs of human culture. One can find the most striking evidence in the development of the “new age” spiritual movement in the United States. Perhaps the most striking feature of this movement—and the part that is most confounding to traditional intellectuals—is that absolutely no attempt is made to unify any of the belief systems that the movement draws upon. Eclecticism is seen as a virtue, so no one really tries to produce a coherent cosmology or worldview. It is a religion without a theology. Walk into any health food store, pick up the local natural living magazine, and you can find ads for crystal therapy, Reiki massage, therapeutic touch, chi energy realignment, aura reading, and so on, all sitting next to one another on the page, with no apparent tension. No one seems to worry about the fact that the people offering to unblock your chi and the ones trying to cleanse your chakras can’t both be right about what’s causing your breast cancer.

The Wikipedia page on “New Age” actually captures the flavor of the movement quite nicely:

The New Age movement includes elements of older spiritual and religious traditions ranging from atheism and monotheism through classical pantheism, naturalistic pantheism, pandeism and panentheism to polytheism combined with science and Gaia philosophy; particularly archaeoastronomy, astronomy, ecology, environmentalism, the Gaia hypothesis, psychology and physics. New Age practices and philosophies sometimes draw inspiration from major world religions: Buddhism, Taoism, Chinese folk religion, Christianity, Hinduism, Islam, Judaism, Sikhism; with strong influences from East Asian religions, Gnosticism, Neopaganism, New Thought, Spiritualism, Theosophy, Universalism and Western esotericism.8

Again, it seems almost pointless to observe that there is inconsistency here. (How does one combine “elements” of atheism with monotheism and polytheism, without at some point declaring a winner?) The better approach is simply to regard it as a long list of mental viruses, including some of the most contagious and powerful in human history—malware of the mind.

It is certainly no accident that this sort of eclecticism occurs in a movement that privileges intuitive over rational thinking styles. There is a general lesson here for everyone. It has to do with the unusual circumstances in which we find ourselves, living in modern, globalized cultures. In the same way that we are called upon to exercise greater self-control, in order to avoid falling victim to the extraordinary range of addictive substances that are available, we are also called upon to exercise greater rational control, in order to avoid providing an ecological niche for one of these viral thought systems. This is true even for those who adhere to some of the views itemized in the list above. (It is important to realize that, as Boyer points out, even people who are religious are committed to the view that the overwhelming majority of religious beliefs are false, simply because the truth of one religion implies the falsity of all the others. Thus an epidemiological view suggests itself as the most plausible explanation for the phenomenon of religious belief even for those who are religious. The only difference between the atheist and the theist is that the latter subscribes to some special story that explains why her own beliefs are exempt from the general principle that all religious beliefs are false.)

One can find an analogous phenomenon on the internet, where particularly virulent bits of malware, like the Blaster worm, can be found in circulation despite being over a decade old. Indeed, it is widely reported that a computer running an unpatched version of Microsoft Windows XP, if connected to the internet, will become infected before the user has a chance to download and install the first security patch.9 These bits of malware fade away only when operating systems change and eliminate the ecosystem in which they once thrived. Unfortunately for human beings, our operating system never changes. We walk around with the same old bugs, vulnerabilities, and security holes, without even being able to implement a patch—all we can do is monitor the traffic and intervene when we see something that is too obviously suspicious. Seen from this perspective, is it so implausible to see analogies between Falun Gong, or the Mormon Church, and a botnet? And is it not reasonable to worry that our mental environment is becoming more and more contaminated all the time?

There is one issue that I have not yet said anything about but that must loom large in any discussion of irrationality in our culture. It is the commercial motive. The person who had the brilliant idea of putting mayonnaise on a hamburger or a giant cap on a bottle of laundry detergent or a resonator under the hood of a car was not just trying to make the burger taste better, the detergent easier to pour, or the car more satisfying to drive. He was also trying to make money. This is not an accident. The environments that are the most hostile from the standpoint of rationality are those that are the most commercial. If you had to give a child just one piece of advice upon entering a shopping mall, a good suggestion might be “Remember that everything—and by that I mean everything—is a trick to take your money.” This is, of course, almost the exact opposite of what a natural environment is like, and so it is not unreasonable to expect that it will degrade our cognitive performance.

By now we’re all so used to this complaint that we seldom stop to wonder why it must be so. Why does the marketplace seem to be biased against rationality? There is, in fact, a straightforward answer to this, although it is a bit esoteric. It comes from an area of probability theory concerned with what are known as “Dutch book” arguments.10 One of the big problems that probability theorists have is that, unlike the principles of arithmetic, which most people find quite obvious, the basic principles of probability can be rather unintuitive. Furthermore, some of the logical consequences of these principles produce results that are very surprising, and thus positively counterintuitive.11 This is not really the fault of the principles, but rather of our intuitions. The bread and butter of the “heuristics and biases” research project was to show that our intuitions about probability and uncertainty are seriously flawed.

In any case, because people find probability theory unintuitive, probability theorists wind up dealing with a lot of skeptics who say, “Why should I reason this way?” or “If this is what rationality requires, then why should I be rational?” This can be a tough question to respond to. After all, you can’t make someone be rational. The solution that probability theorists cooked up was to show that if you violated any of their fundamental principles, it would be possible for a clever bookie to persuade you to make a bet that you would be guaranteed to lose money on, regardless of how things turned out.12 The details of the bet would be somewhat more complicated than “heads I win, tails you lose,” but the principle would be the same. This is what’s called a “Dutch book” (based upon some vague apprehension that the Dutch had pioneered the art of constructing such “books”). Later, as a generalization of this idea, it was shown that if you violated one of the key principles of rational decision theory, you could be turned into a “money pump,” allowing an unscrupulous trader to take arbitrarily large amounts of your money.13 Since this argument is easier to follow, I’ll focus on it.
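
To make the idea concrete, consider an invented example (not one from the text): suppose you assign a probability of 0.6 to rain tomorrow and also 0.6 to no rain, violating the rule that the probabilities of an event and its complement must sum to 1. At your own stated odds, a fair ticket on an event costs its probability and pays $1 if the event occurs, so a bookie simply sells you both tickets. A few lines of arithmetic confirm the guaranteed loss:

```python
# Invented Dutch book illustration: incoherent credences of 0.6 for "rain"
# and 0.6 for "no rain". A ticket priced at your stated probability pays $1
# if its event occurs; the bookie sells you one ticket on each outcome.
for rain in (True, False):
    payout = (1 if rain else 0) + (0 if rain else 1)  # exactly one ticket pays out
    cost = 0.6 + 0.6                                   # you paid $1.20 for the pair
    print(f"rain={rain}: receive ${payout:.2f}, paid ${cost:.2f}, net ${payout - cost:+.2f}")
```

Whatever the weather does, you collect $1 after paying $1.20, for a certain loss of 20 cents.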

Suppose that you prefer apples to oranges, oranges to bananas, and bananas to apples. Taken as a whole, your preferences violate an important consistency constraint called transitivity. You prefer A to B and B to C, but you do not prefer A to C. This actually happens to us all the time, and it’s not the end of the world. Our brains are not logic machines, so inconsistency doesn’t make smoke come out of our ears. It does, however, make us vulnerable to exploitation. If you are standing around holding a banana, a person could come up to you and offer to trade you the banana for an orange plus a small service charge. Once you get the orange, she could then offer to trade you that for an apple, again with a small service charge. But of course, once you have the apple, she can then trade you that for the banana you started out with … plus a small service charge. You have now become a money pump. Unless you change your preferences, this can go on forever, and the service charges will keep adding up. So you might as well just hand over all your money as soon as she walks up to you, and not even bother with the fruit trades.
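
Here is a minimal sketch of that fruit-trading cycle, using exactly the intransitive preferences described above; the 25-cent service charge, the starting wealth, and the number of rounds are invented for illustration.

```python
# Money pump under intransitive preferences: apples over oranges, oranges
# over bananas, bananas over apples. Fee, wealth, and round count are invented.
prefers = {("apple", "orange"), ("orange", "banana"), ("banana", "apple")}

def will_trade(offered, held):
    """You accept any trade, plus a small fee, for something you prefer to what you hold."""
    return (offered, held) in prefers

holding = "banana"
wealth = 10.00
FEE = 0.25
cycle = ["orange", "apple", "banana"]   # the trader simply walks you around the cycle

for round_number in range(1, 13):
    offered = cycle[(round_number - 1) % 3]
    if will_trade(offered, holding):
        holding, wealth = offered, wealth - FEE
    print(f"round {round_number}: holding {holding}, wealth ${wealth:.2f}")
```

Every three rounds you are back to holding the banana you started with, 75 cents poorer, and nothing stops the cycle except a change in the preferences themselves.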

If hearing the word “service charge” makes this all sound rather familiar, you may suspect that this is actually the business model of many corporations. And you wouldn’t be too far off. A lot of the “financial innovation” in the past few decades, at both the retail and the wholesale level, has involved finding new ways to take advantage of human irrationality. This is all motivated by a fundamental asymmetry between rationality and irrationality, which is that the rational can take advantage of the irrational in a way that the irrational cannot take advantage of the rational.14 If you’re not rational, then you are exploitable, in the technical sense of the term used by probability theorists (namely, that a rational person can come along and get you to give her all of your money).15 This is a particularly devastating vulnerability, because it allows other people to take you not just for a little, but for everything you’ve got. Being rational is of course no guarantee that you won’t get suckered (for example, you may still be ignorant, and so enter into trades that are not in your interest). But it does offer a defense against the most extreme form of exploitation, which is that of being turned into a money pump.

Many people manage to maintain something like a consistent set of consumer preferences, and so are relatively immune to exploitation. Yet even when they succeed in maintaining static consistency, they are often dynamically inconsistent. This means that their preferences change over time in ways that make them exploitable. This is all because of the warp in the way that we evaluate the future. One could see this quite clearly in the type of financial products that were being sold to consumers, which were the underlying cause of the subprime mortgage crisis in 2008. Subprime mortgages were the leading edge of a relatively new set of essentially despicable business models developed in the late twentieth century, which gave rise to what is known as the “poverty industry.”16 There was a time when businesses—particularly banks—were loath to do business with poor people, simply because poor people were unreliable and didn’t have much money. The thing about poor people, however, is that they also tend to have a bigger warp in the way that they evaluate the future. That’s partly why they’re poor. The consequence, however, is that even though they have less, they can also be much easier to “pump” for money. This is something that drug dealers have known for a long time; eventually mainstream businesses came to realize it as well, and clamored for a piece of the action.

Consider, for example, credit cards. Intuition says that lending money to poor people, who may have no job or are always in danger of losing their jobs, is going to be less attractive than lending it to affluent consumers, who have stable jobs and tend to spend a lot more. Yet in certain cases the opposite is true. That’s because credit card companies don’t make their money from people who pay them back. Although they make some money off the transaction fees they charge, the big money is made when consumers maintain a negative balance on their card and make monthly interest payments. From the credit card company’s perspective, prudent, affluent consumers may actually be bad customers.

The same principle is what explains the rise of the “subprime” mortgage market, which precipitated the financial crisis of 2008. As with any loan, the big money is made from people who never pay the principal but just keep making interest payments. The whole idea of subprime mortgages was to lend money to people who did not qualify for “prime” loans because of an impaired credit history, financial insecurity, or just straight-up poverty. The initial thought was that if you made enough of these loans to enough people, the law of large numbers would guarantee you fairly stable returns, simply because no more than a small fraction of them would default in any given year. Once investors began accepting this argument and started wanting to purchase the right to the income stream generated by repayment of these loans, so-called mortgage originators began to employ increasingly clever and aggressive tactics in order to get people to take out the loans.
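
The “law of large numbers” reasoning can be seen in a few lines of simulation. The 4 percent annual default probability and the pool sizes are invented assumptions; the point is simply that, so long as defaults are independent, the realized default rate of a large pool barely budges from year to year.

```python
# Independent defaults at a fixed 4 percent probability (an invented figure):
# the larger the pool, the more stable the realized year-to-year default rate.
import random

random.seed(2)
DEFAULT_PROB = 0.04

for pool_size in (100, 10_000, 1_000_000):
    yearly_rates = []
    for year in range(5):
        defaults = sum(random.random() < DEFAULT_PROB for _ in range(pool_size))
        yearly_rates.append(defaults / pool_size)
    print(pool_size, [f"{rate:.2%}" for rate in yearly_rates])
```

A pool of a hundred loans might see its default rate swing between roughly 1 percent and 8 percent from year to year; a pool of a million stays pinned very close to 4 percent.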

Lenders had already developed the practice of making interest-only mortgage loans, where the monthly payments covered only the interest charges, making no payment against the principal. These were initially popular with affluent consumers who worked in businesses where they received very large annual bonuses (such as bankers and stockbrokers). They would often want to keep their monthly expenses low but would make large payments against the principal once a year. In theory, however, there was nothing that said you had to pay off the loan at all; you could just keep making interest payments forever. So this was a perfect product to sell to poor consumers. By keeping the monthly payments low—just enough to cover the interest—you could sell loans to people who otherwise would be unable to afford them. You just had to trust that it wouldn’t bother them too much when, after making payments for years, they would be no closer to owning the house than they were on day one. And if they stopped making payments, all you had to do was repossess the house and find someone else to take it on.

With these interest-only loans, however, the consumer still had to make the interest payments. The first big innovation in the subprime industry was the adjustable-rate mortgage (or ARM). Rather than charging the person a uniform rate over the term of the mortgage, the ARM would start out with a lower introductory, or “teaser,” rate—say, 2 percent instead of 5 percent. But the rate wasn’t actually lower. The consumer was typically still being charged the 5 percent; she just wasn’t being forced to pay the full interest charge. The unpaid amount was being added to the principal, and so the principal on the loan got bigger over time. Initially some effort was made to hide this from the consumer, but over time it was discovered that there was no need. With consumers being sufficiently short-sighted, the total amount paid over the life of the loan mattered very little—the only thing consumers paid attention to was the immediate monthly payment. This led to a huge proliferation of products, including a variety of baroque schemes whereby consumers could purchase “points” that would lower their initial interest rate, with the cost of these points being added to the principal. Almost all of these involved an extremely disadvantageous trade-off between present and future obligations.
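
The arithmetic of the teaser rate is easy to work through. Here is a minimal sketch using the 2 percent and 5 percent figures from the text, together with an invented $200,000 loan and a five-year horizon: the borrower makes modest payments while the amount owed quietly grows.

```python
# Negative amortization under a teaser rate: the borrower is charged 5 percent
# but pays only 2 percent, and the shortfall is added to the principal.
# The $200,000 starting principal and five-year horizon are invented.
principal = 200_000.00
CHARGED_RATE = 0.05   # the rate the borrower is actually charged
PAID_RATE = 0.02      # the introductory "teaser" rate the borrower actually pays

for year in range(1, 6):
    interest_charged = principal * CHARGED_RATE
    interest_paid = principal * PAID_RATE
    principal += interest_charged - interest_paid   # unpaid interest is capitalized
    print(f"year {year}: paid ${interest_paid:,.0f}, now owe ${principal:,.0f}")
```

After five years the borrower has paid roughly $21,000 in interest and owes about $232,000, which is more than she borrowed in the first place.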

These products were defensible on the grounds that a sophisticated, risk-tolerant investor who calculated that the value of a property would rise dramatically in the short term might have some reason to take out a loan of this sort. (Sure, the size of the loan would increase over time, but if the value of the property was increasing at a faster rate, then that wouldn’t matter.) The people that the products were sold to, however, were not sophisticated investors, but simply short-sighted consumers who either didn’t do the math on what they would end up owing or didn’t care. From the lender’s perspective, the objective was to give people loans that they would never pay off—to pump as much money from them as possible, then foreclose and take the property. (This was combined with a set of initiatives aimed at traditional borrowers, designed to deter them from paying off their mortgages, such as easy refinancing terms or even turning the mortgage loan into a line of credit, essentially transforming their home equity into a bank account that they could withdraw from at any time.)

Whatever the theoretical justification for these products, in practice they served as little more than increasingly clever ways of trying to money-pump borrowers. There was a time when people used to put money in the bank in order to help themselves resist temptation. Over time the role of banks changed, so that they began to serve as a source of temptation, rather than as a bulwark against it. Instead of helping us to become more rational, they began competing to find new ways to undermine our ability to make sound financial decisions.

You can usually tell how old an advertisement is just by reading the copy, in much the same way that archaeologists can date artifacts by noting the geological stratum in which they are found. Most nineteenth-century advertising is extremely discursive, in more ways than one. In a typical print advertisement, not only is most of the space occupied by text, but the text itself is argumentative. It tries to explain, in rational terms, the superiority of the product. Consider, for example, the following advertisement for Fell’s Coffee, from 1866:

The Universal Practice of mixing Chicory and other adulteratives with Coffee, has very much damaged in public estimation, what ought to be the most delicious of Beverages. So effectually have the public been drugged with such mixtures that the true properties have been lost sight of, and many prefer a black and thick infusion to a drink rich in spirit and aroma. General as is the use of Coffee, it is little known that in condensing the vapors extracted from the berry in roasting, a liquor is obtained of the most nauseous taste and of a scent the most unbearable. Under such circumstances it is evidently important that all the gases and fluids extracted by roasting should be carried off as quickly as possible in order to prevent their returning again to the Coffee, which is the case in the confined cylinder. This object is admirably accomplished by the new and patent “Conical Coffee Roaster” as used by Fell & Co., Victoria, in which the berry is directly exposed to the radiated heat, and the vapor extracted carried off instantaneously.17

This ad copy is like a small portrait of an earlier age of innocence. By now we are so familiar with the rules of commercial speech that anyone can spot the problems with it. Most obviously, it contains an unflattering portrayal of the very product that it is trying to sell. It violates the basic rule that you never repeat a criticism when responding to it. Who knew that the process of coffee roasting generated a byproduct that is “nauseous” and has an “unbearable” smell? Of course, the ad goes on to explain how Fell & Co.’s technique disposes of this byproduct more effectively than does that of the competition. The assumption being made, however, is that the force of this argument will triumph over the unappetizing associations produced through the use of the words “nauseous” and “unbearable” to describe the “gases and fluids” emitted by coffee beans. This is a sales pitch aimed at the head, not the heart.

During the 1920s and ’30s, the introduction of images began to have a noticeable impact on advertising—most obviously, the amount of text shrank as pictures became increasingly dominant. It is also noticeable that a lot more attention was paid to tone and to the rhythm of the words being used. And yet the text remained essentially argumentative, typically consisting of a list of reasons why you should buy the product. Reading an ad was not all that different from listening to someone trying to convince you to buy the product. Consider the following coffee ad, from 1932:

Drink It and Cheer, Drink It and Sleep

Drink what? Sanka Coffee.

Why will you cheer? Because Sanka Coffee is so delicious. Yes, so downright delicious that if you’re not absolutely satisfied, we’ll return your money.

Why will you sleep? Because 97% of the caffein has been removed from Sanka Coffee. And if coffee keeps you awake—or causes nervousness or indigestion—remember: it’s the caffein in coffee that does it!18

One can see some obvious improvements here in the fluidity of the prose. And yet problems remain. The ad contains what looks like, from a modern perspective, too many arguments. (Do you drink it to cheer? Or do you drink it to sleep?) And strangely, it continues the curious practice of drawing attention to problems with the product (it causes “nervousness or indigestion”) before attempting to rebut the charges. The reader is being given a lot of credit. Next time she is offered coffee, she may immediately think “indigestion,” but she is supposed to go on to think, “Oh yes, but that is caused by caffein, and Sanka has no caffein.” In order to get to the positive thought about the product, the consumer must consciously suppress the negative thought that springs to mind unbidden.

After the Second World War, advertisers became significantly more sophisticated. Most importantly, they realized that, in the typical run of cases, the seller could not count on having people’s full attention, and that this had important implications for the way that a product should be presented. The Fell & Co. coffee ad is clearly written on the assumption that the prospective consumer will be sitting down, reading through the copy carefully, following the argument, making note of the specific claims being made. In the early twentieth century, advertising copy was still being written as though it were for an audience who could be expected to be paying attention. This all began to change in the 1940s.

Things changed in part because of an increase in the sheer volume of advertising. Consumers became not only less likely to believe what they read, but also more likely to skip past ads or skim through them very quickly. Furthermore, advertisers began to realize that fewer consumers were starting out by reading ads, then going and buying the product based on the information they had received. The development of self-service stores, such as modern supermarkets, created a situation where instead of having to ask someone behind the counter for a specific good, consumers found themselves confronted with a shelf full of unfamiliar products, free to pick their own. In this situation, the most important thing is what they are able to remember. Even just recognizing a product name, in the absence of any specific information, can have a powerful (positive) impact on purchasing decisions.

The immediate postwar era is often described as the golden age of the “unique selling proposition.” This was largely a consequence of diminished expectations with respect to the consumer. Advertisers realized that most people looking at an ad are not going to read through a big long argument, much less remember it. In fact, you’d be lucky if you could get them to remember just one thing about the product. And so ad agencies began to ask their customers, “If you could tell people just one thing about your product, what would that one thing be?” Companies were encouraged to figure out the unique quality that distinguished their product from its competitors, and to focus all of their energies on that.

It was during this period, in the 1950s, that the coffee industry came up with its most powerful marketing coup: “Give yourself a ‘coffee-break’! There’s a welcome lift in every cup!”19 The term “coffee-break” appears between quotation marks because, at the time, it was still relatively unfamiliar. It has, of course, since passed into everyday language, and remains the most effective marketing concept for the product. The suggestion is that drinking coffee is something you do to reward yourself. After working hard, you take a break, and during that break, in order to recharge, you get yourself a cup of coffee. The associations are not just with pleasure and relaxation, but also with accomplishment. No one has ever been able to beat this, which is why the same concept (coffee as a time-out or escape from the pressures of everyday life) is still central to the marketing campaigns of most major coffee-shop chains, including Starbucks. (Supermarket coffee, on the other hand, is usually consumed at breakfast, and so has to be marketed differently.)

The unique selling proposition is still dated, however, by the fact that it persists in giving the consumer a reason to purchase the product. The shift to brand marketing in the late twentieth century was based on the discovery that it is not necessary to appeal to the consumer’s rationality at all in order for an ad to be effective. You actually don’t need to give people a reason to buy your product. Brands are about trust, and trust can be cultivated through entirely emotional and intuitive appeals. Thus, increasingly, advertising seeks to bypass the consumer’s rationality completely. The most obvious evidence of this is the steady but inexorable decline in the amount of language in advertisements. Language is the vehicle of rational thought, so if you want to bypass reason, cut out the language and stick to pictures. This is why so many ads today feature no text at all, just an image and the company name. Starbucks has even gone so far as to remove the company name from its cups, leaving only its trademark mermaid image on them. Other firms have moved in the same direction, replacing their company names with acronyms (KFC for Kentucky Fried Chicken, H&M for Hennes & Mauritz, RBC for Royal Bank of Canada, etc.).

To say that advertising seeks to bypass people’s rational faculties is not to say that people are being brainwashed or programmed to buy things that they don’t really want. When advertisers first started using sophisticated psychological techniques, there was a lot of hysteria about “subliminal” advertising, mass hypnosis, the “Manchurian consumer,” and so forth.20 In part this stemmed from an overly credulous attitude toward the boasts of the advertising agencies, who naturally claimed great powers for themselves. The fact that advertising seeks to bypass rationality does not mean that consumers lose their capacity for rational decision making when they make a purchase. It just means that the overwhelming majority of advertising aims to manipulate, rather than to convince. For the most part it just latches onto existing desires that people have—for money, sex, love, attention, status, affirmation, control, and so on—and then pushes us in the direction of thinking that a particular product will help us to satisfy those desires.21 Often it does so simply by building an association, or even just getting our attention—something that is increasingly difficult to do in the current media environment.

The development of advertising over the course of the twentieth century was, of course, an evolutionary process. It was not the brainchild of some cabal of evil geniuses on Madison Avenue. The kind of advertising that we see around us is there because it is effective. The techniques that work have been discovered one at a time, bit by bit. Because of that, there’s not much that can be done about them. No amount of hand-wringing or social criticism is going to change the character of advertising or undermine its effectiveness.22 (The one thing that has been proven beyond a shadow of doubt is that writing books complaining about the nefarious tactics of advertisers doesn’t do a bit of good. Ever since the 1950s, each decade has spawned a new set of books expressing shock and outrage over the insidious new forms of mind control we are being subjected to, which each new generation treats as a revelation. And yet the world keeps turning. Comparing Vance Packard’s The Hidden Persuaders to Naomi Klein’s No Logo to Martin Lindstrom’s Brandwashed, it should be apparent that there is little new under the sun.)

Unfortunately, the standard apologetic for the dominant trends is not much comfort either. Starting with the response to Packard, apologists for the advertising industry have claimed that salvation lies in the new generation. As far back as the 1960s, one can find people arguing that the kids these days are so media savvy that they are immune to the old tricks being played on them by advertisers. Complaining about advertising, the apologists suggest, is something that only old fuddy-duddies do.23 The kids are, like, way beyond that.

This argument has a tiny bit of truth to it, but it ignores the bigger picture. If people have become more “savvy” these days—although a better word might be “cynical”—this is because they are more likely to treat with suspicion, and therefore to override, the intuitive response that they may have to a product. This is because they know they are being manipulated, by everything from the packaging color and the brand name to the music playing in the store and the ambient lighting. And yet it is important to realize that there is no way of reprogramming the unconscious to discount this or that factor. You cannot just say to your brain, “Ignore the beautiful woman, she doesn’t come with the car” and expect it to comply so that in the future, you can rest assured that whatever warm feelings you have about a particular brand of car are based entirely on fuel economy and performance. You have to keep exercising the rational override, every single time.

This is why, no matter how media savvy the kids may be these days, with their Facebook updates and Twitter feeds, they still click on links with pictures of “hot chicks” or “LOLcats” like trained seals. It’s as though nothing ever changes. And there’s a reason for that. It’s because nothing ever changes. Or at least not with the adaptive unconscious.

As a result, no matter how savvy we all are, it still takes a conscious effort each and every time we have to override some deleterious response or second-guess some intuitive judgment or “gut feeling” we have. The sheer amount of cognitive effort required to navigate a modern environment without being suckered has increased dramatically. There is reason to think that this grinds us down over time. They say that the average North American is exposed to over 5,000 advertising messages a day. While each message may have its own little trick, the net effect is little short of an all-out assault on reason. It would be unsurprising to discover that this environment resulted in a general degradation of cognitive performance.

In the Mike Judge film Idiocracy, Luke Wilson and Maya Rudolph play two average citizens who are enlisted in a cryogenics experiment being conducted by the U.S. Army. Shortly after they are put to sleep, the experiment is discontinued, then promptly forgotten about. The two of them are awakened five hundred years later, in an America that is still recognizable and yet radically transformed. Most significantly, everyone has become an utter and complete idiot. Wilson’s first encounter is with an irate citizen sitting in his living room watching a show called Ow, My Balls on The Violence Channel, eating congealed fat, which he scoops out with his hand from a large tub, while slurping soda from the built-in dispenser in his La-Z-Boy-style chair. He, like everyone else in this world, turns out to be somewhat difficult to communicate with, since by this time “the English language had deteriorated into a hybrid of hillbilly, Valley Girl, inner-city slang and various grunts.” Every time Wilson tries to talk, people just look at him strangely, call him a “fag,” tell him to shut up, or threaten to hit him. (The people in this world exhibit one of the traits that has fascinated Judge for years—dating back to his best-known creation, Beavis and Butt-head—namely, impenetrable stupidity. They are too stupid to know that they are stupid.)

What made people uncomfortable about the film—and what led its distributor and producer, 20th Century Fox, essentially to bury it—was that its image of an intellectually degraded world was, in each case, a recognizable extension of tendencies that are already present in contemporary society. Fox may also have been worried about the reaction of sponsors, because of the broad parody of commercialism throughout the film. One memorable sequence was shot inside a giant Costco store, so large that it had its own internal light rail system to service the thousands of aisles. Every article of clothing or piece of furniture in the movie is branded. The fast food chain Carl’s Jr. has taken over the entire food supply with its new slogan “Fuck You, I’m Eating.” This is actually not that hard to imagine: the company’s current slogan is “Don’t Bother Me, I’m Eating,” and the chain has provoked consumer ire through its extraordinarily vulgar commercials, including the decision to use the song “Baby Got Back” to promote meals aimed at children. In fact, most of the time the film seems like it’s set fifty years in the future, not five hundred.

One of the reasons for the longer time frame, however, is the mechanism that the film posits as an explanation for the degradation. It is none other than the familiar bogeyman of social Darwinists everywhere, namely, overbreeding among the inferior classes. Early on, the movie shows a montage featuring an anxious, overachieving, educated, intelligent couple putting off having children until it is too late. Meanwhile, the couple living in the trailer park down the road have long since become grandparents. The idea is simple: dumb people have more kids, so the population as a whole is getting dumber.

The crass commercialism in the future society is taken to be a product of this general decline in intelligence. In reality, however, the more likely chain of causation would run the other direction. In the real world, at the same time that there has been an obvious coarsening of popular culture, there has also been an increase in general intelligence. In the United States, average IQ scores have increased by approximately 3 points per decade, for a total gain in average IQ of just under 22 points between 1932 and 2002.24 Improvements in nutrition, along with environmental regulation (in particular, dramatic reductions in exposure to lead), are credited with a several-point increase in average intelligence in the American population. So what’s making people dumb cannot be a change in biology. It is a change in the culture that is driving the transformation. In other words, it isn’t stupidity that causes commercialism, but rather commercialism that causes stupidity. The film gets the explanation exactly backward.

If the argument of this chapter is correct, then the reverse adaptation that is driving these changes is able to produce not just a coarsening of the culture, but an actual decline in cognitive performance. While none of us may feel particularly dumb or irrational, looking at the world we live in, it’s hard to resist the suggestion that we may be. As Robert Cialdini puts it, “More and more frequently, we will find ourselves in the position of the lower animals—with a mental apparatus that is unequipped to deal thoroughly with the intricacy and richness of the outside environment.” The irony is that, “unlike the animals, whose cognitive powers have always been relatively deficient, we have created our own deficiency by constructing a radically more complex world.”25

This is right, except again it is important to recognize that the deficiencies we experience are not just the result of the world becoming more “complex”—as though it were all an inevitable byproduct of technological and social progress. The correct word is hostile. Our environment has changed so that the correlations we relied upon in the environment of evolutionary adaptation no longer obtain. This makes it so that our hardwired problem-solving routines, rather than producing answers that are approximately correct, will increasingly provide answers that are exactly wrong. Furthermore, a number of our social institutions encourage this tendency. Doing something to counteract this trend is a political problem of the first degree. Unfortunately, many of our political institutions, far from providing anything like a solution, have become a significant part of the problem.