Incongruity
Absurdism, Mystery, and Puzzle
Incongruity often generates a desire to comprehend. In spite of what you might think of your fellow human beings on a day-to-day basis, we are the most intelligent and intellectually curious species around. The main reason we’ve come to dominate the planet as we have is that we’ve outsmarted the other species we compete with. Primates in general are curious, but humans take curiosity to the limit.
During our evolutionary history, once we occupied our niche, our intellectual capacities and our brain size exploded. Many studies have found that bigger brains tend to be smarter brains, with brain size accounting for about 16 percent of the variance in intelligence among humans.1 Theories abound for the specifics of why the brain got so big so fast, including selection pressure for social competition, sexual selection, gut reduction, hunting, and niche construction. But all agree that our brains got bigger because they made us, in some way or other, smarter.
In fact, one of the reasons childbirth is so painful and dangerous for human beings (more than for any other primate2) is that there has been an evolutionary tension between a head that wants to get bigger and a pelvis that wants to stay small. If women’s pelvises were any bigger, they’d have trouble running.3 As a result, the infant head can barely fit through the birth canal. Infants’ heads at birth are about as big as they can be, unless we’re going to become a completely C-section species. We evolved to be able to do more and more things, and this evolution made our brains bigger and bigger. We are bipedal probably because being upright gave our ancestors a better view.4 Walking on two legs made the pelvis small and gave us better use of our hands. This happened about 250,000 years ago. The brain already consumes about a quarter of cardiac output, so energy limitations might be at work as well.
Recently (in the past 20,000 years or so), human brains have been getting smaller, having lost as much volume as a tennis ball. We’re not sure why. One theory holds that our brains have gotten more efficient. Another holds that it’s part of our self-domestication: as our bodies got smaller, we required smaller brains to operate them. (I will return to the topic of human self-domestication in chapter 6.) A related theory holds that with civilization we just don’t need to be as smart to stay alive. However, given that measured intelligence worldwide tends to rise every year, I have my doubts about this theory. In the last 200 years, human brain sizes have been growing again, probably due mostly to better nutrition.
One evolutionary strategy for dealing with the narrow birth canal is that we’re born with our skulls in pieces, the better to fit through. This is why babies have that soft spot on their heads. The bones of the skull fuse together later. The bones in the mother’s pelvis shift around too. But the tension over head size remained, and evolution needed yet another workaround. How can brains get smarter without getting bigger?
Here is the strategy that evolution ended up using: rather than being hard-coded with how to act in every situation the world might throw at us, we evolved a general-purpose learning system. That is, rather than being born knowing nearly everything we need to know (as “precocial” species are), humans and other apes are relatively “altricial,” born knowing very little but with an amazing ability to learn what we need as we experience the world.
Many birds, in contrast, are quite precocial. Some start hopping around and looking for food within seconds of hatching. Humans, in stark contrast, are the most altricial species known, and are helpless for a very long time after birth. As evolutionary scholars Peter Richerson and Rob Boyd put it, “We are the largest brained, slowest developing member of the largest brained, slowest developing mammalian order.”5 We have to learn to survive, and our parents have to take care of us in the meantime. Traditionally, this intense parental care lasted until puberty. Nowadays, it seems to last until the offspring are leaving graduate school at age thirty-three. Or maybe that was just me.
Paradoxically, our utter inability to take care of ourselves as babies is a key to our success as a species. Rather than being made perfect for our environments right out of the gate, we can adapt to almost anything the world presents us with. This has allowed human beings to live in an astonishing variety of natural environments, from the icy areas of the northernmost parts of North America, to the deserts of Africa, to the rain forests of South America.
Not only has this ability allowed us to thrive in many natural environments, but it also allows us to live in different cultures. We adapt to the world we’re born into, and part of that world was created by changes other people have made to it. This allows for a positive feedback loop: we are smart, we make changes to our environment (writing books, building cities, etc.), our children learn that environment as normal, and eventually they make changes to it, physically and culturally, themselves.
Humans are more altricial than other species, for sure, but it might be that some individual people are more altricial than others. These differences in altriciality might account for differences in intellectual ability. It turns out that smart kids younger than eight years of age have an unusually thin cerebral cortex!6 But by late childhood, they have a thicker than usual cerebral cortex.7 Perhaps these kids are smarter because they are more altricial than their peers—they are more competent later because they are less competent earlier. They’re built to learn. If all this is true, I would predict that gifted children are actually less competent in the world before age eight, because, being less precocial, they start with less but learn more.
If we’re not born with built-in routines for interacting with the world, then we must learn them. A ramification of this is that we have evolved a great desire to learn about our environment.
As psychologist Alison Gopnik puts it, if seeing pattern is the “aha” phase, the experience of incongruity is the “hmm . . .” phase.8 To make an analogy with eating, the desire for understanding, often caused by incongruity, is like hunger, and the experience of understanding, which is a kind of pattern perception, is like the pleasure of actually eating.
There is an entire experimental paradigm in psychology based on the desire for new information: the “looking paradigm.” It’s used to find out what babies understand. Babies are shown impossible situations (demonstrated in puppet shows), and when they see something they find interesting (usually something new, unusual, or impossible), they look at it longer (just as you would).
When children play with blocks, they are learning how the physical world works through exploration, using basic hypothesis testing. Knocking down the pile of blocks is just as important for learning as building it up in the first place. This desire to learn never goes away. As a result, we have a drive to figure things out, to make sense of chaos, and we get a rush of pleasure when we do. When we experience things that violate our expectations, we respond with increased attention to and thinking about the inconsistent information. The more unexpected an event is, the stronger our emotional reaction.
Along and in the Ottawa River, near where I live, people stack rocks on top of one another. The most interesting are the ones that seem the most improbable—large stones on top of small ones, or stones balancing with their smaller sides down. Locals are most intrigued by the ones that make them wonder how the stones manage to stay up. The childhood fascination with blocks—making towers and knocking them down—manifests itself in adult sculpture.
Beauty and interestingness are different things. We find something interesting when we feel there is more to discover about it. We are curious. As we learn, and find patterns, we might find the stimulus more beautiful but less interesting. Any stimulus, as we experience it more, provokes a weaker and weaker neural response. This adaptation happens with the senses as well as with emotions. For a work to stay interesting after multiple exposures, the artist must find a way to counteract this adaptation. Great works of art are, experts say, endlessly fascinating.
Incongruity is the flip side of our desire to find patterns. Too little order is confusing; too much order is boring. The sweet spot is where tantalizing contradictions are visible, but the stimulus gives us an inkling of a hidden order that can be figured out. The notion that there’s a hidden order draws people in. Repetition is related to complexity—an increase in repetition means a decrease in complexity. We like to look at things we can make sense of, but we also like to be challenged.9
We want this sweet spot not only over space, as in the visual arts, but over time as well. If you experience too many complicated stimuli, you will start to find them tiring and prefer things to be simpler. When a stimulus has an incongruity and a resolution to it (either in the stimulus itself or created in the audience’s mind), it takes advantage of our love of incongruities as well as patterns. The mind is always trying to minimize surprise and confusion. It does this by seeking out incongruity and making sense of it—turning incongruities into patterns, as described in the last chapter. Overexposure to patterns such as symmetry and balance can lower arousal through habituation.
There’s a sweet spot.
One study by education scholar Hy Day presented people with visual images that varied in asymmetry, number of elements, and so on. The simpler patterns were initially viewed as more pleasing, which is to be expected, but over time their pleasingness rapidly diminished. The moderately complex images got more pleasing (presumably as the participants perceived patterns in them) and then at some point started to decline in pleasingness.10 This is what one would expect from the tension between pattern and incongruity. People see something moderately complex, and they are driven to find patterns in it. As they habituate to those patterns, they lose interest. The most complex images were the least popular, because people could find no patterns in them at all.
Computer games, too, are the most fun when they consistently challenge the player enough, but not too much, and gradually increase the difficulty of the challenges. Halo 3, a popular Xbox 360 game, was specifically designed so that each obstacle was tough enough to be interesting and challenging, but not so hard that people got frustrated. Microsoft’s human-computer interaction laboratories helped tune the game to hit that mark.11 The designers were going for the sweet spot between ease and difficulty, understanding and confusion, pattern and incongruity.
A study by Peter Hekkert of people’s taste for product designs found that people most preferred designs that looked quite conventional—with one unusual feature. The completely conventional designs, as well as the truly avant-garde designs, were preferred less.12 In landscape art, as well, people prefer moderately complex landscapes. Impenetrable jungles and simple plains are less attractive.
However, these findings need to be tempered with some knowledge of how people tend to behave in psychology experiments in general. It is known that when people are asked to rank a bunch of things, they tend to prefer things in the middle of whatever range they are presented with. That is, people tend to like things in the middle, no matter where that middle happens to be. This is the “central tendency effect.” It means that we might not be able to predict the relationship between the complexity of a particular thing and how interesting or pleasing it’s going to be, because that relationship depends so much on the context in which the thing is presented. Also, there is some counterevidence to the sweet spot theory. One study of aesthetics, by psychologist Flip Phillips, found that people preferred very simple or very complicated imagery to imagery in the middle of the complexity scale.13 In a different study, led by psychologist Colin Martindale, people preferred more complex paintings.14 Both of these studies serve as counterevidence to consider, though it is interesting that their results also differ from each other.
The artistry in music performance is not in the composition of the piece, but in the manner in which it is performed. The musician can choose, for example, to play in a more or less expressive manner. Psychologist Daniel Levitin created versions of a Chopin piece that varied in expressiveness. The least expressive versions played exactly on beat, and they sounded dull. But if there was too much expression, the piece became chaotic.15
Even in fantasy and science fiction, good storytellers tend to stick to the relatively, but not overly, familiar. Filmmaker James Cameron reflected this aesthetic when describing his film Avatar:
If you’re outlandish all the time, you’ve got no place to hang your hat. People have to feel connections to things that they recognize, even down to the design of the Na’vi. There’s no plausible justification—unless you go to some really arcane explanation—for the Na’vi to look that human. It’s just that science fiction is not made for a galactic audience. It’s made by human beings for human beings.16
Indeed, even science fiction needs to be a bit conservative about the future. Famed science fiction writer William Gibson said,
If one had gone to talk to a publisher in 1977 with a scenario for a science-fiction novel that was in effect the scenario for the year 2007, nobody would buy anything like it. It’s too complex, with too many huge sci-fi tropes: global warming; the lethal, sexually transmitted immune-system disease; the United States, attacked by crazy terrorists, invading the wrong country. Any one of these would have been more than adequate for a science-fiction novel. But if you suggested doing them all as an imaginary future, they’d not only show you the door, they’d probably call security.17
In film, there is a style of editing called continuity editing that is designed to make the film easily comprehensible. One of its rules is called “match on action.” It involves placing a cut while a character is performing some physical action, such as sitting down. Seeing the beginning of the action in one shot and its completion in the next helps make the cut appear continuous.
Sergei Eisenstein, an early Soviet filmmaker, did not like continuity editing, because he thought audiences should have to do some intellectual work to understand a film.18 Sometimes we’re in the mood for a book, or perhaps a film, that is challenging and interesting, and other times we just want pop culture that requires no effort on our part. The sweet spot moves depending on mood.
Personality traits probably affect our taste in incongruity as well. One of the “big five” personality traits—those that social psychologists believe generally explain personality—is called “openness to experience.” In a study by psychologists Gregory Feist and Tara Brady, people who scored high on this trait preferred artworks with more weirdness, dissonance, and incongruity.19 I expect that factors such as mental exhaustion and stress contribute to what kind of complexity we want. One finding by psychologist Piotr Winkielman that supports this notion is that when people are feeling insecure or sad, they prefer familiar images.20 It takes more mental work to process challenging stimuli. Sometimes you’re in the mood for Bruckheimer, sometimes you’re in the mood for Eisenstein.21
There is a fascinating tension between order and incongruity. Personal discoveries of order and meaning are more compelling than order that is obvious from the outset. It has been suggested that the discovery of an object or pattern is more pleasing when it takes some effort. For order to be discovered, it must not be obvious at first, which means there is an initial impression of either neutrality or incongruity. The discovery makes the stimulus feel deep and meaningful. The perceiver feels proud to have found hidden depths.
As sweet as this sweet spot is, people (usually) do not skip to the last chapter of a mystery novel. Rather, we like to put ourselves in states of curiosity with the expectation that the curiosity will be satisfied. Although people will often choose to have everything explained, studies show that people are happier, and their pleasure lasts longer, if there is still some uncertainty left over. Sometimes the pleasure of the resolution is too tempting, even though the mystery might ultimately make us happier. This is the pleasure paradox.
Anthropologist Pascal Boyer’s theory that supernatural agents are minimally counterintuitive ideas is another expression of this sweet spot. Intuitive ideas are easy on the mind and benefit from ease of processing, but they are not particularly memorable or noteworthy. One bit of counterintuitiveness makes the idea of a supernatural being just mentally challenging enough to capture our interest.
The exploration of uncertainty and the resolution is something that can unfold over time. This is obvious with music and narrative, but it happens with painting and sculpture too. One cannot focus on every aspect of a painting all at once. As we experience a work of art, we see new patterns. Great works of art afford new insights with repeated exposure and study. The art stays the same; we’re the ones who change.
Some arts are immediately pleasing to anyone, but for many kinds of art we have to earn appreciation—as familiarity grows, we learn more patterns. As we have seen above, connoisseurs’ ability to see more patterns makes simpler works more boring to them, but they have a greater ability to appreciate complex works. They can see patterns that laypeople cannot.
Turning from art to activities: flow, a concept pioneered by Mihály Csíkszentmihályi, is a feeling one gets while engaging in certain activities, characterized by absorption in the activity, forgetting the self, and positive emotions. Some get it surfing, some drawing, some bartending. What makes his theory relevant to the sweet spot I’m describing is that reaching a flow state requires some challenge, but not too much.
Returning to the computer gambling addictions discussed in chapter 2, people have described using gambling machines in terms frighteningly similar to flow. One addict says, “It’s like being in the eye of a storm, is how I’d describe it. Your vision is clear on the machine in front of you but the whole world is spinning around you, and you can’t really hear anything. You aren’t really there—you’re with the machine and that’s all you’re with.”22
Recall that we can view high dopamine levels as heightening a person’s sensitivity to patterns. People with too much dopamine, such as schizophrenics, find meaning in meaningless stimuli. Explanations, works of art, and ideas in general can vary in how incoherent they are. It stands to reason that people with more dopamine will have their sweet spot pushed further toward the nonsensical than people with less. This theory predicts that people with high dopamine will prefer absurd art (e.g., surrealism, absurdist theatre, ghazals) more than people with normal dopamine levels, and that people with low dopamine will prefer (relative to others) art with more literal, obvious meaning (e.g., landscapes, portraiture, most novels). Similarly with ideas: low dopamine should predict a preference for clear explanation, and high dopamine a preference for obscure explanation. I call this the “dopamine incongruity hypothesis.” It has yet to be tested.
* * *
Play can be understood as a response to incongruity. It is a fuzzy concept in English, describing everything from a child playing house to tournament poker games to hockey games. I look at play as any kind of interactive entertainment. Passive entertainment, such as watching television, is a kind of noninteractive play. One essential element of play is that it involves dealing with some kind of make-believe world. That world might be cards and the rules of poker, or a baseball game, or a make-believe space station kids imagine themselves to be in.
Human beings are not the only animals who play. Birds and many mammals play, and learning seems to be play’s evolutionary function. Predators play by stalking, wrestling with, and pouncing on each other. Hunted animals play by leaping and running around. Human play can be looked at in the same framework. Sports prepare us for physical feats we might need for survival or mate attraction. More intellectual games such as chess and poker stretch our mental abilities.
At the time of writing, 90 percent of children in the Western world play computer games. Although computer games are often thought of as a waste of time, studies show that people who play them can respond faster to things (for example, identifying whether a string of letters is a word or not) without sacrificing accuracy, and are more creative. It’s even good for their vision. Players of computer role-playing games are better at planning and strategic thinking. Some neuroscientists believe these games teach people to learn in general. Surprisingly, the best kind of game for learning seems to be violent first-person shooters. Playing violent military games such as Medal of Honor is better for you than playing puzzle games like Tetris or word games, as measured by a variety of tests of visual abilities. Violent military games can even improve reasoning about scientific material.23
Arts of all kinds inhabit the sweet spot between pattern and incongruity. Incongruity can take many forms, from formal visual aspects such as asymmetry, to more subtle mysteries in visual arts.
When we are looking at real places, as well as at images that depict places, real or otherwise, we have an urge to explore unknown territory. We like to see scenes that look like they would reward us with more information if we were to explore them. Interior designers, architects, and planners have hit upon this wisdom in their design of art galleries, museums, and parks. When we walk by a doorway and can see that there is only a single room beyond, we are less likely to enter than if that room contains a door or a wall that might be hiding another passage. This is because we think we’ve seen everything there is to see and lose interest. So gallery owners often put in a wall that hides the back wall and part of a side wall. This makes people curious to see what is on the other side. Likewise, in museums, dead-end rooms are more likely to get a mere glance from the doorway. This kind of wandering is called “exploratory movement,” and encouraging it is probably why modern museums are set up as a collection of connected rooms, rather than as rooms that branch off of a central hallway, as many office buildings and homes are. Park design takes advantage of this too. A trail that goes straight is less interesting than one that winds around. If you can’t see what’s ahead, you want to explore. Similarly, residential districts, where beauty is more important than transportation efficiency, are made of winding roads. They invite exploration and are more attractive.
These examples are all from designed real-world experiences. But the same effects occur with static paintings and sculpture. Paintings that feature hidden places to explore are particularly compelling.
Our desire to figure things out attracts us to contradictions and impossible objects depicted in art. The work of M. C. Escher is perhaps the clearest example of this. Many of his illustrations explicitly depict impossible scenes. Just as babies spend more time looking at impossible situations in the laboratory, we love to gaze at Escher’s prints.
Some paintings are easy on the eyes. They’re just plain beautiful. Others are difficult. But there are three different ways that paintings can be difficult: they can be perceptually complex, horrific, or surreal.
Some paintings are just complex, visually, and they take time or effort to understand. Cubism and various forms of abstraction, in addition to paintings that are quite realistic but involve a great deal of detail, fall into this class of difficult paintings. Horrific paintings have disturbing subject matter, but need not be difficult to make sense of. Like horrific films and books, they are compelling because we are attuned to attend to what we fear. Surreal paintings might not be disturbing, but they present a world unlike our own. The paintings of René Magritte are often peaceful but profoundly weird. The mind struggles to make sense of them. They are compelling for different reasons. Surreal paintings draw us in because the incongruity makes us want to solve the puzzle or contradiction they present. Certain of Salvador Dalí’s paintings are both horrific and surreal, packing a compellingness one-two punch.
Incongruity can also be found at the level of color. Artists often decide to use a particular palette, the set of colors used in a work of art. Most of these colors look like they belong together, but a few others will appear as outsiders. For example, interior designers and decorators often pick an “accent color.” Such colors are said to maintain the audience’s interest.
A similar effect can be found in music.
In general, music gets much of its emotional impact by delaying and manipulating gratification based on the listener’s expectations. This is easy to see in electronic dance music, a form of popular music that includes subgenres such as techno, trance, and jungle. Typically, a track will begin with a simple element, such as a drumbeat, repeating bass line, or melody. As the track progresses, other elements are gradually introduced. Your mind is fed new things at a certain pace: just as the music becomes predictable, it changes.
Pop songs have a similar structure, but rather than new elements being introduced after some number of bars, the repetition occurs at the level of verses and choruses. It is common for a song to have two verses, a chorus, a verse, a chorus, a bridge, a verse, and then a few choruses. We get used to the structure of the verses, and then the chorus arrives, providing us with something fresh. As the whole verse-chorus structure begins to get familiar, a bridge arrives. The bridge fits, musically, but is often substantially different from the rest of the song. Then we are satisfied by the return to the verse-chorus structure for the end of the song, which we take pleasure in recognizing.
Repetition is important for music at the level of particular recordings. In fact, the best predictor of how much people will like a song is how many times they’ve heard it. Marketing researcher Mario Pandelaere found that when we hear two versions of a song, we tend to like the one we heard first—the one we are already familiar with.24
This appears to happen with musical genres as well. Before we are familiar with a particular genre, it all sounds the same. Usually bad. Once we have some familiarity with it, we learn to appreciate the differences between pieces. We get pleasure from recognizing bits, and we might learn to like the genre. As with any new, unfamiliar form of art, getting used to a new musical genre often requires some exposure so that we can learn what to listen for.
Familiarity might even affect what sound formats we like. In a six-year study of his students, music scholar Jonathan Berger found that every year more and more students preferred music encoded as MP3s as opposed to higher-quality sound files!25 As MP3s become more common, the particular distortion that MP3s have becomes normal, expected, and preferred.
Sound, and therefore music, is made of patterns of vibration in the air. Most music is composed of sounds that have a regular vibration. Many other sounds, such as that of a cough or a tree falling, have an irregular, chaotic pattern. Regular vibrations have a constant frequency, which is interpreted as a musical note.
With musical training, notes are heard more often—even when a more irregular pattern is played. In fact, untrained listeners can be more sensitive to small differences in pitch than trained musicians! This surprising finding is due to the mind’s tendency to autocorrect. Just as you might not notice if someone says “kitar” rather than the proper “guitar,” people who are familiar with music hear a note that is close to (but not quite) middle C as middle C. This autocorrection is called “categorical perception,” which is our tendency to perceive things in terms of the categories we’re used to. Interestingly, Indian music uses notes that are closer together than those in Western music. As a result, when people more familiar with Indian music hear vibrato (that wavering sound opera singers use when holding a single note), they don’t hear it as one note at all, but as a wavering that can be interpreted as agitation.26
Western musical notes are organized into keys, such as C minor, which are sets of notes that sound right together. Western musical composition tends to focus on the notes in the chosen key. Music that remains in one key is said to create a sense of stability and calm. Notes from outside the key can feel intrusive, but can be interesting in the same way that an accent color can be. Music that uses all the notes more or less equally (ignoring key) sounds dissonant, abrasive, and less coherent. Such music is also harder to remember. This is not to say that this kind of music can’t be compelling—it can be employed to connote dark forces and atmosphere.
Many people think that music in minor keys sounds tense or sad, relative to music in major keys. This has been found to be true experimentally, in both Western and south Indian music. Even sad speech tends to produce notes that fall along minor scales, according to work by neuroscientist Daniel Bowling.27 The saddest music has minor modes, low pitch, slow tempo, and dissonance. Although there appears to be some cross-cultural support for this idea, musician David Byrne reports in his book How Music Works that prior to the Renaissance in Europe there was no connection between minor keys and sadness, and that much Spanish music uses minor keys for happy songs.28
* * *
Music videos are a fascinating new art form because, until Internet videos became available, they were the only way to get avant-garde film pieces to the masses. Music video directors could do crazy things given their short time scale and lack of narrative constraints. Long avant-garde films exist (The Cremaster Cycle films of Matthew Barney come to mind), but such works are a difficult sell to the masses. As I would predict, a study by Georgia Tech psychologist Fredda Blanchard-Fields showed that watching music videos (as compared to dramatic television shows) encourages people to think more critically and draw more conclusions about what the videos mean, presumably because of their weirdness.29 When people see incongruity, they try to figure things out.
I believe that one of the reasons we find dance so watchable is that it violates the normal biological motion we’re used to seeing. Most of the time, this means it is more beautiful than normal motion, tending toward the smooth and fluent.30 But some dancing is deliberately awkward. Examples include butoh and popping.
Butoh is a dance form that typically involves grotesque movements, sometimes evoking derangement or sickness. Popping is a hip-hop dance form based on quickly contracting and relaxing muscles to make them jerk unnaturally. “The robot” is a popular popping dance. In both butoh and popping, the unnatural movements of the dancers evoke interest in the audience, among other ways, by violating our expectations of how bodies normally behave. Musician Brian Eno described this movement as “somadelic,” like a psychedelic but for the body.31
This is in contrast with ballet, which, if not exactly natural looking, appears (to the Western eye, anyway) to be an idealized, beautiful style of movement. Butoh and popping are compelling because the dancers look like people with something awry; ballet dancers appear more graceful and beautiful than normal, an exaggeration of Western ideals of physical beauty. Ballet is, in some sense, a caricature of beauty.
Ballet dancers and Jessica Rabbit both work, in part, because of the peak shift principle. Peak shift is a concept that comes from animal learning: if an animal, such as a rat, is trained to react to a stimulus, it will react even more strongly to an exaggerated version of that stimulus, even if it has never encountered the exaggerated version before.
To put it in human terms, if a person has good associations with big balloons, he or she will get a great reaction from encountering huge balloons. Jessica Rabbit is like the big balloons, and not just in the obvious way. In general, this class of exaggerated stimuli is known as “supernormal stimuli.” If men like a small waist, full lips, and wide hips, then an artificial stimulus featuring a waist-to-hip ratio exaggerated beyond anything they’ve ever seen will strike them as very attractive. Like Jessica Rabbit, many statues of women from ancient times have exaggerated buttocks and breasts. Some have called them fertility statues; I prefer to call them Pleistocene pornography. The peak shift principle has been suggested to be a major force in artistic appreciation in general.
Peak shift effects can also work over evolutionary timescales. Suppose a creature lives in tall grasses, and the taller individuals can see over the grass to look out for predators. The others in the species might evolve to find tallness sexually attractive, because choosing tall mates results in taller offspring. This evolved preference would put selection pressure on the species to grow even taller, perhaps far beyond the height of the grasses. Sexual selection might even generate creatures so tall that the tallness is overkill in terms of its original purpose (spotting predators) but still adaptive in terms of sexual selection.
Ballet, I conjecture, looks so beautiful (to Western eyes) because it is a peak shift from what we consider graceful motion in everyday life.
Another interesting aspect of dance is that even when it incorporates movements that might be familiar, it does so in an unusual context, often without any other physical objects. Movement in dance can set our minds reeling, seeking a goal-based explanation for what we are seeing without the contextual cues that would make one possible.32
* * *
Can our love for incongruities help explain why we like certain foods? Although taste in food varies from culture to culture, there are some constants, to wit: protein, fat, sweetness, and saltiness. These tastes signal the presence of nutrients that were difficult to come by in our ancestors’ environments. However, many tastes are acquired. Who really enjoyed his or her first taste of oysters, hot peppers, raw onions, or even the ubiquitous cup of coffee?
Some cultures delight in cultivating tastes for things that are considered disgusting by the uninitiated. The Japanese eat the puffer fish, and some diners request that enough of the neurotoxin be left in so that the lips and tongue are numbed. Every year people die from eating this. Sixteenth-century Europeans developed a taste for meat that was just this side of rotting, based on what they thought the peoples of ancient Rome and Greece ate.
Why would cultures make these choices? Science has some partial answers. Spicy foods, such as those found in Thailand, Mexico, and India, release endorphins when consumed. We literally get a pleasure boost when we feel the pain of eating spicy foods, and psychologist Brock Bastian and his team found in a study that feeling pain actually alleviates feelings of guilt.33 There are also biological reasons to eat spicy foods—spices prevent spoilage. This is why spicy foods tend to appear in warmer climates, according to a survey by biologists Jennifer Billing and Paul Sherman.34
As for why people might cultivate a taste for almost-rotting meat, I don’t think anyone knows for sure, but psychologist Paul Rozin has suggested that we experience “benign masochism” over the mastery of mind over body when we eat painful foods such as chili peppers.35
* * *
Just as we might get bored with a predictable song or story, we also will get bored with a predictable sporting event. Even though we might root for a particular team, and very much want that team to win, we prefer a close game to one where the team we’re rooting for completely demolishes the other. Why is this? Close games are more interesting because we don’t know what is going to happen. This fact suggests the counterintuitive idea that we want our favorite teams to win, but only by a little.
* * *
Let’s use the notion of incongruity to explain popular quotations. Suppose you were to hear someone say that it’s important to believe in some ideology, and it doesn’t really matter which one, because if you don’t, you’ll be more likely to be swindled. Upon hearing this, you might reflect on whether or not this is true and perhaps ask why someone would believe it. Lacking some evidence, or reasoning, or at least some anecdote, you’d probably be unlikely to believe it or have a drive to repeat it to other people.
Compare this to hearing someone say “If you don’t stand for something, you’ll fall for anything,” which more or less says the same thing, but sounds so much better. Why? There is pattern, in that something sounds a bit like anything, and there is also the contrast between stand and fall.
When I gave my first TEDx talk, I monitored the tweeting that people did, and the thing I said that people tweeted the most was my statement: “It takes a firm understanding of reality to make compelling fantasy.” This statement has a hint of contrast, an apparent contradiction in it. People love this. You’ll notice that a great many famous quotations contain ideas that have at least a superficial contradiction (e.g., “art is the lie that tells the truth”).
Take the common phrase used in educational studies and minimalist art, “less is more.” On the surface, of course less isn’t more. It’s less. What it really means is “less is better.” We can all intuitively feel that “less is better” is less sticky, less riveting, than “less is more.” The apparent contradiction draws us in.
Churches love to put sayings on their signs outside. I like to look at these and reflect on what makes them compelling. Often it involves incongruity or pattern: “Do not wait for the hearse to take you to church” almost rhymes, and “Forbidden fruit creates many jams” is incongruity in the form of a pun.
As we will see, incongruity is the essence of all humor.
* * *
Although most of us take laughter and humor for granted, they are mysterious things. Most contemporary theories of laughter and humor (the study of laughter goes by the amusing name of gelotology) involve surprise at a perception of incongruity. The incongruity theory of humor holds that we find things funny because they juxtapose ideas normally thought of in different contexts. The incongruity leaves a mystery to be figured out.
The relief theory of laughter and humor holds that laughter originally was a signal that something thought to be dangerous had turned out not to be. In other words, a false alarm, or a benign violation. The sound made by our ancestors, this precursor to laughter, was contagious in the sense that an individual hearing it was likely to imitate it.
This idea, that humor occurs when there is a feeling of danger along with an assurance of safety, explains the occurrence of laughter at many things that are patently unfunny, such as the experience of riding a roller coaster. A friend recounted a story to me of how he and his buddy were mugged at gunpoint. After the mugger left, the friends looked at each other and burst into laughter.
The common use of profanity in stand-up comedy likely supports this idea as well. Profanity is, by definition, socially unacceptable, and is usually used by someone who is angry or hurt. Its use in stand-up comedy is effective because it signals danger but is delivered in a safe setting, so it makes us laugh. Language use requires the newer parts of our brains, but profanity engages our older brain, particularly the parts that control emotion and movement. Hearing profanity has a powerful impact on the listener, and a study by linguist Jean-Marc Dewaele found that profane words were remembered about four times better than other words.36 Can you believe that shit?
It would be interesting to determine whether the same parts of the brain are used to perceive both humor and profanity. Many people believe that the use of profanity in stand-up is a cheap shortcut, and they have a point: it makes a joke evoke a laugh the same way a naked woman in a photo makes you look. It’s not clever; it’s just a trick that nearly always works. Many comics create very funny routines without profanity. On the other hand, swear words touch us in ways that other words cannot.
Tickling, which is not funny but generates laughter anyhow, is also explained by the false alarm theory. Tickling tends to work only on physically vulnerable parts of the body. You’re being touched in a place that can easily be hurt, but it’s not hurting.
Often a joke will require its audience to bring in knowledge it has about the world to get the humor, but telling the audience this information just prior to the joke kind of spoils it. The incongruity explanation endorsed here predicts this: giving the audience the information needed to get the joke reduces the surprise necessary for the joke to be funny.
The false alarm is one kind of incongruity (between danger and safety). Cognitive scientist Bruce Katz has a neural theory of how this happens.37 A joke or story has a setup, which makes you expect a certain context or outcome. This prediction becomes active in your brain. When the incongruous information appears (e.g., the punch line), a new, unexpected context or outcome is activated. For a brief time, both the predicted and the perceived are active at the same time. According to Katz, this simultaneous activation results in pleasure and a perception of something being funny.
It could be that the incongruities perceived in funny situations prime the brain for detecting associations between distant concepts. Experimental participants who heard a stand-up routine performed better on word-association puzzles right afterward than those who didn’t.
Jokes have become such a common phenomenon that they themselves can be the subject of jokes, such as the classic “Why did the chicken cross the road?” The answer, “To get to the other side,” is funny because it’s not clever, violating our expectations of what a joke should be.
* * *
Sometimes people like things because they are confusing and hard to understand. To explain this, I created the concept of idea effort justification.
The harder you have to work for something, in general, the more you value it. This effect is called effort justification and is used to explain, in part, why fraternity hazing works so well.38 The pledge appreciates fraternity membership so much in part because he had to work so hard to get it. This probably is also related to the process of getting and valuing a PhD, but that hits a little too close to home and will not be discussed further.
I have extended the theory of effort justification to the realm of ideas: the harder you have to work to come up with an idea, or to understand something, the more you will appreciate it. Meaning is more valuable to a person if it is attained through mental effort. This is idea effort justification. It predicts that an idea (e.g., a belief, interpretation, or meaning) inferred or otherwise discovered through effort is valued more by a person than the same idea simply presented to that person. Idea effort justification happens for five reasons.39
First, we get a rush of pleasure when we perceive that we have figured something out. If there is no difficulty in comprehending something, the feeling of accomplishment is lessened. But that “aha” moment of understanding something difficult associates pleasure with the discovered idea.
Second, it reduces cognitive dissonance, a classic finding in psychology. If you’ve worked hard for something, you look for (and possibly invent) a good reason you did so, in order to make your life make more sense. When we put in effort, or endure something difficult for little gain, we experience dissonance, which is uncomfortable, so our mind takes steps to resolve it, perhaps by convincing ourselves that we didn’t really work hard after all, or that we enjoyed the process, or that what we got out of it was well worth it. How would this work for ideas? When something is difficult to understand, and we put in some effort to get meaning out of it, our minds might endow that meaning with more value in order to resolve the dissonance created by the effort it took to get the meaning in the first place.
Third, we like ideas that we perceive to be ours. Psychotherapy uses a principle like this in “nondirective therapy,” in which the therapist doesn’t just come out and tell the client what is wrong (even though it might be perfectly clear to the therapist), but rather tries to get the client to discover it for himself or herself. The idea is that clients are more likely to accept something they concluded themselves than something told to them explicitly by the therapist.
You might have been in a situation in which a group comes up with an idea in a meeting, but later several of the people at the meeting believe that it was their idea. You also might have let people think your good idea came from them. People want good ideas to be theirs.
Fourth, when you try hard, you find meaning for yourself and, when you do, I conjecture, you pick the meaning that’s most precious to you—the one that resonates most deeply with what you believe: the meaning that’s compelling. We tend to choose the meaning we already agree with—evidence suggests that ambiguous information gets interpreted in favorable ways.40
There is a fifth reason why self-generated interpretations are viewed so favorably: they are simply more easily brought to mind. In the previous chapter, I described how ease of mental processing makes us believe and like things more. When we figure something out, not only do we remember what we’ve figured out, but we also remember all the reasoning and justifications that went into coming up with it. When one reads a difficult text, one will speculate on what the author had in mind and what he meant by this or that word. All these inferences are stored in memory, linked to the final interpretations. Ideas in memory that are well-connected to other ideas (in this case, justifications) are more easily remembered. The fact that information created by one’s own mind is better remembered than information simply read is known as the “generation effect.”41
When writing is clear, one does not have to work hard to understand it. But if one slaves over a difficult article, whatever meanings one manages to scry from it will seem more valuable, if only to justify for oneself why one spent so much effort trying to figure it out in the first place.
Another reason is that when we find meaning ourselves, we find the meanings that we already believe to be true. Some texts, such as postmodernist works, poetry, and many religious scriptures, are written to encourage different interpretations in different people. Different interpretations will result whenever meaning is ambiguous and readers have a favorable attitude toward the author—which is most of the time. People tend to seek out information they believe they will agree with (this is the congruence bias). In these cases, they give the text the benefit of the doubt; they assume it means something correct, and so they come up with interpretations they already find plausible. Of course they like the text—it’s telling them what they already believe! Confirmation bias kicks in (the tendency to better remember and pay attention to information that supports what we already believe), and they find the text compelling.
If meaning is even more satisfying and rewarding when you have had to work hard to find it, then the same meaning might be less compelling if it was directly understood from a clear sentence as opposed to heavily interpreted from an obscure one.
The fact that easily remembered ideas are more likely to be believed works against clearly written text! When reading something clear, one does not need to come up with justifications for it and, as a result, it will be (all else being equal) remembered less well and believed less.
Okay, but why would these effects occur for postmodernism and some writers in the humanities but not for the sciences? In science, the quality of your results is much more objective. The data, in some sense, speak for themselves in a way they do not in the arts and humanities. You can be a pretty lousy writer in science but still get published if your findings are important enough. Not so for literary criticism. If there is little or no data to speak of, on what should someone’s work be judged? Complexity of the writing steps up to the plate as a criterion.
One might counter that scientific text is just as dense and impenetrable as anything in the humanities. One only needs to open a biology or physics journal to see text that is nearly impossible for a nonexpert to understand. What’s the difference? Couldn’t postmodern text similarly be simply a matter of jargon-ridden prose that is intended for experts in the field?
Not so fast. Scientific text can certainly be jargon laden. Here’s an example from a biology abstract: “RPA stimulates BLM helicase activity as well as the double Holliday junction dissolution activity of the BLM-topoisomerase III complex.”42 I have no idea what this means. The key difference is that scientific writing aims for the expert reader to arrive at a single interpretation. This is not so with postmodern writing, which is written to be interpreted in multiple ways—even by experts in the field.
Has the analysis of the works of Immanuel Kant been done to death? A philosopher I spoke to chuckled at the idea. It seems that Kant, famous for being abstruse, provides a bottomless well of opportunity for interpretation. This is striking to me because great works of art are described the same way—the longer experts look at certain paintings, the more they reveal. This suggests that obscure scholarly writing is (or is like) an art form in itself, more akin to poetry than to scientific writing. To understand it, we need to understand how idea effort justification is part of what makes poetry compelling.
I never much liked poetry. When I came to Carleton University, there was a Monday night writer’s circle. We would read our stuff aloud, think about it, and offer feedback. I was writing short stories, but most of the participants were poets. So I ended up thinking really hard about these particular poems.
A magical thing happened. I loved many of them! It was not until I put in the hard work of trying to find meaning in poetry that I finally understood what was special about it. Even now, some of my favorite poems of all time are written by Carleton undergraduates, because those are the only ones I’ve taken the time to find meaning in.
This idea can be tested. If poetry is more appreciated when heavily interpreted, then reading it on paper, and having lots of time with it, should produce more appreciation than hearing the same poem read aloud a single time (because it’s hard to review a poem that you only hear once). I would also predict that if people like a poem read aloud, they are likely appreciating more surface-level features—word choice, rhyme, and literal meaning.
You can find a parallel to this effect in the artificial intelligence (AI) of art, a field that tries to make computer programs that create art. A general finding in the field is that the more work the audience has to put into the artwork to appreciate it, the easier it is for an AI to make stuff that people think is good.43 For example, computer programs can make some interesting haiku, but are as yet incapable of making engaging novels, or indeed, novels that even make any sense at all.
Poetry can be thought of as the opposite of clear writing. Some poetic traditions explicitly and deliberately obfuscate language. Studies show that readers of the same piece of poetry can differ widely in the interpretations they offer, and they often base those interpretations on associations between words and personal experiences.44 But unlike in analytic philosophy, in poetry that’s the whole point. We appreciate poetry in part because of idea effort justification.
What is striking is how much appreciation of poetry is like interpretation of religious texts and myth.
In many religions, as we grow up we listen to sermons and engage in rituals over and over for years. As Harvey Whitehouse says, “the knowledge that one has endured for years the burden of routinized activity and strict discipline elicits a marked reluctance to ‘give up’ lightly or to tolerate the waywardness of others.”45 We have a drive to continue to believe in something to justify the time and effort we’ve already put into it—that’s effort justification.46
But idea effort justification can explain some of why we find religions compelling, too.
Some have argued that religious texts should be interpreted metaphorically, rather than literally. As mythologist Joseph Campbell puts it, it’s a mistake to read myths (religious or otherwise) as prose rather than as poetry.47
Metaphors can carry enormous meaning. In literature, Macbeth describes life as a “brief candle.” Metaphors allow us to communicate a great deal with just a few words.
A similar view can be taken of religious scripture. There is a literal meaning and a metaphorical one, and for some the truth is one and for some the other. Should heaven be interpreted as an actual location or as a state of being that can be reached during our lifetime? Interpreting text literally, as fundamentalists try to do, requires less reading-into than a metaphorical approach.48 As such, idea effort justification should have a stronger effect on metaphorical readings than on literal ones.
Recall that people remember self-generated ideas better. When a person perceives an idea, that idea is associated with other things in memory and the perceptual context in which it’s perceived. These associations allow retrieval of the memory in the future. For example, if a green dog tells you ice cream is poisonous, you might be better able to recall the idea that ice cream is poisonous when you think of a green dog. The more links one has to an idea, the easier it is to recall, because there are more ways for one to retrieve it. One way to get lots of links to an idea is through elaboration. Thinking about the idea, how it relates to your life and other ideas, etc., creates many links. It is probable that when generating ideas on one’s own, there are more links to the idea than if the idea were simply presented.
Self-generated ideas are better remembered. But why should this increase belief or value in those ideas? For this, we return to the availability heuristic, which I discussed in chapter 3, on patterns: the more easily an idea is brought to memory, the more probable and common it is assumed to be. Thus, an idea that is better remembered is (unconsciously) perceived to be more probably true.
Religions can benefit from incongruity in ways other than through idea effort justification. Philosopher David Hume, in the chapter “Of Miracles” in Philosophical Essays Concerning Human Understanding, writes that we have a tendency to believe in miracles because of their surprising nature. Incredible things generate agreeable feelings of surprise and wonder, and the association of those feelings with the miraculous explanation makes us more likely to believe it. It’s the surprising event, the incongruity, that makes us look for some supernatural meaning.
If a creed’s belief system contains statements that are falsifiable, there’s a chance that they’ll get, well, falsified. For example, suppose a religion maintains that there are gods living on the clouds. I mean literally on the clouds. It follows from this belief that if you flew up there you’d find them in a visible, physical form. But once people actually start flying in planes and fail to find those gods, that belief, and by association the religion, loses credibility.
New religions are being created all the time. Perhaps two or three religions are created every day.49 A tiny number of religious ideas stick around, getting passed on to others, while most others fade away. And what do we find whenever we have diversity, heritability, and differential reproduction? We get evolution by selection. The evolution of species is only the most famous example of it.
Not only do ideas evolve by being passed from one person to another, but they evolve (that is, slowly change over time) in people’s individual minds. Different parts of your mind and brain are generating ideas, and some are remembered, some are forgotten, some are compelling. These ideas change and compete with other ideas. There is a marketplace of ideas inside your own head. Sometimes you like an idea enough to repeat it to someone else.
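To make the selection logic concrete, here is a minimal sketch, in Python, of evolution by selection among ideas. It is only an illustration I am adding, not something drawn from the research cited here, and the population size, mutation rate, and single “compellingness” trait are made-up parameters. Ideas that are more compelling are more likely to be retold (differential reproduction), retellings copy an idea imperfectly (diversity), and copies resemble their originals (heritability).

    import random

    # Toy sketch (illustrative only): a pool of "ideas", each reduced to one
    # heritable trait, its compellingness, which sets how likely it is to be retold.
    POPULATION = 200   # ideas circulating at any one time (made-up number)
    GENERATIONS = 50   # rounds of retelling
    MUTATION = 0.05    # size of copying errors

    def next_generation(ideas):
        # Differential reproduction: sample ideas in proportion to compellingness.
        retold = random.choices(ideas, weights=ideas, k=POPULATION)
        # Heritability with diversity: copies resemble originals, plus small errors.
        return [max(0.01, idea + random.gauss(0, MUTATION)) for idea in retold]

    ideas = [random.uniform(0.1, 1.0) for _ in range(POPULATION)]
    for generation in range(GENERATIONS):
        ideas = next_generation(ideas)
        if generation % 10 == 0:
            print(f"generation {generation:3d}: mean compellingness = {sum(ideas) / len(ideas):.2f}")

Run it and no single idea changes much in any one retelling, but the pool as a whole drifts toward whatever gets repeated, which is all that evolution by selection requires.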
We can look at religions that are successful and begin to understand what properties make them so. From this perspective, it is clear that if a religion maintains beliefs that cannot in principle be found to be false, then they won’t ever be disproven. To return to our example, a version of the same religion that holds that the gods in the clouds are made of some immaterial substance that cannot be directly perceived will have a better chance of surviving natural human skepticism (and eventual exploration of the sky) than the original, literal form of the belief. A still better version would hold that the statement is metaphorical: the gods aren’t really in the clouds at all. The moral is this: religious beliefs benefit from being undisprovable, as philosopher Daniel Dennett points out.50 That is, unfalsifiable and metaphorical religions have a survival advantage over falsifiable and literal ones. Indeed, much religious text is metaphorical, allowing multiple meanings for statements such as “God is my shepherd.” Evidence from the psychology laboratory of Himanshu Mishra suggests that vague information gets interpreted in a favorable way.51 And if you interpret something favorably, you’re more likely to believe it—because you already believed it to begin with.
From a scientific perspective, falsifiability is a good thing for a theory. If a theory is falsifiable, it means that there is some kind of evidence that could, in principle, be found that would show the theory to be false. When one looks for this falsifying evidence and fails to find it, that failure lends support to the theory.
Because science is so good at testing theories and because technology keeps giving us better and better ways to observe our world, it seems likely that as technology develops, the religious beliefs that we cannot ever find to be false will have an increasing survival advantage. That is, the more science and technology we have, the more important it is for a religion to have beliefs that are unfalsifiable. I call this the “increasing religious unfalsifiability hypothesis”: over time, religions will, more and more, have beliefs that are unfalsifiable.
Imagine that someone is starving to death on a raft on the ocean. She prays for help, perhaps to a god or perhaps to the spirits of her ancestors. Which response seems more likely? (A) the supernatural agent makes a hamburger, root beer, and French fries appear, or (B) the supernatural agent makes a fish jump from the water into the boat.
Even an atheist will agree that A sounds kind of silly and B sounds more likely. I will coin the term minor miracle for a supernatural event that could be perceived by a nonbeliever to be a natural event. In contrast, a major miracle is one that clearly seems to be breaking the natural order of things. The hamburger and fries appearing out of thin air on a boat in the middle of an ocean would be a major miracle.
Most people’s personal experiences with miracles are of the minor variety. Many people believe in major miracles, but they know of them, for the most part, secondhand: they read about them in scripture or some other book, or heard about them from someone else. This makes them much less credible, because of people’s tendency to create tall tales.
Let’s look at the phenomenon of UFO sightings. It’s easy for someone to say they saw a light in the sky that they didn’t understand. It’s a better story to say they saw a spaceship. With nobody to say they didn’t see it, why not make the story a little better? There’s some evidence that people do this: the rate of reported UFO sightings has gone down since cameras became standard in phones.52 Why? If these UFOs were actually spaceships, we would expect the rate of reported sightings to have stayed steady while the photographic evidence increased. But that’s not what happened. People who report seeing a flying saucer can now legitimately be asked, “Well, why didn’t you snap a picture of it with your phone?” My interpretation is that most historical UFO sightings were embellished.
Let’s get back to miracles. Many people with the same intuition about the likelihood of minor and major miracles also believe in an omnipotent god. That is, a god who can do anything. Why would a supernatural agent disguise boons and curses as natural-looking events? Put another way, why would an omnipotent god choose to help someone with a minor miracle rather than a major one? A religion will have trouble surviving with too many claims of major miracles. The consequence, in terms of which religions last and which fade away, is that religions that claim only minor miracles have a better chance of lasting.
Jesse Bering recounts the behavior of a church group that interprets natural disasters as signs that God disapproves of our society’s increasing acceptance of gays:
Members of the Topeka, Kansas-based Westboro Baptist Church, a faith community notorious for its antigay rhetoric and religious extremism (they run a charming little website called GodHatesFags), see signs of God’s homophobic wrath in just about every catastrophe known to man. To them, the natural world is constantly chattering and abuzz with antigay slogans.53
If an omnipotent god wanted us to know of his disapproval unambiguously, then wouldn’t he use a major miracle to communicate it?54 This is where some religions latch onto the idea of faith as a virtue—that is, belief without reason or evidence. Because there is no indisputable evidence of the existence of souls, spirits, or gods, the “evidence” has to be the interpretation of minor miracles. It is sometimes said that God uses minor miracles to test our faith. Religions have evolved, over time, to make some kind of sense in a world that would look much the same if they were not true.
Religious miracles and magical belief systems describe a world that would look, to the casual observer, to be nonmagical. This allows people to believe in scientific and supernatural causes simultaneously: people can understand that a virus causes a particular sickness, or that termites caused a house to collapse, but at the same time seek an explanation for why it happened to this particular person at this particular time. In many cases, science’s answer is that there isn’t any “reason” at all. This can be unsatisfying, and religion enters to fill the void.
Another powerful way for an idea to be unfalsifiable is for it to be simply incomprehensible. Take, for example, the Holy Trinity of Christianity, a doctrine that holds that three persons are one being. This strange idea has received a good deal of theological attention, in part, I would imagine, because it contradicts common sense. But even if we are to make some kind of sense of it, it’s hard to imagine how science could test it.
Religious people deal with disconfirming evidence in a different way: because interpretation of religion is often metaphorical, such evidence triggers reinterpretation, not rejection. The religion is assumed to be true, so other beliefs have to make way so that the religious beliefs can remain so.
Confusing statements are compelling for other reasons: there is an inherent beauty in mystery, and any meaning you get out of it is hard won and therefore more valuable to you. The term mystery is even a technical term in the Catholic Church for something divine that cannot be explained. In the religious context, I believe that what is working so well is the generation of the feeling of awe, which is triggered by experiences with two features: vastness and an inability to fit the experience within the existing structures we have in the mind. In some male initiation rituals, paradox is built right in. Anthropologist Michael Houseman found that in one ritual, boys are instructed to wash in mud puddles. If they refuse, they are beaten for not doing as they are told; if they comply, they are beaten because they get dirty.55
Clarity of explanation is compelling too, for different reasons, and religious clergy can be pulled in two directions: cultivating mystery on the one hand and clarifying themes on the other.56 But in general, analytical thinking is the enemy of religion. In support of this, a study by psychologists Will Gervais and Ara Norenzayan found that people who are more analytical tend to be less religious, and that even making people think more analytically makes them less religious.57
* * *
Pascal Boyer has a fascinating theory of why public religious rituals work the way they do.58 Some rituals mark a change in social status: becoming an adult, getting married, graduating from a university, and so on. However, many of the changes that these rituals mark happen, in reality, rather gradually. The function of social ritual is to specify a precise moment, even if that moment is somewhat arbitrary, at which society treats the change as having occurred. This is why it’s important that coming-of-age and wedding rituals are public.
So what does this have to do with religion? It turns out that there is no clear line to be drawn between religious rituals and nonreligious ones. People can find them meaningful with or without the existence of supernatural agents, and some rituals involve supernatural agents only peripherally.
Because people tend to treat the subjects differently after these rituals, it seems as though the ritual actually caused the change, rather than just marking a time to acknowledge it. After the wedding, everything was different. But how can this be? How can a bar mitzvah actually turn a boy into a man? That’s the incongruity, and the need to resolve it is where gods and spirits come into the picture: cultures create explanations involving supernatural agents to make sense of what might otherwise appear to be a common-sense-defying event.
Intuition provides answers, not explanations. Religion fills in the blanks.
* * *
Riveting incongruities come in three types: absurdity, mystery, and puzzle.
Magic shows are a great example of absurdity at work. Once I saw a live Penn and Teller performance. For most of the show I was amazed at what was happening on the stage. I, like most of the audience, had no idea how they did what they were doing. Like babies watching impossible puppet shows, we reveled in the incongruities on the stage in front of us. At one point, however, they performed a complicated trick that took over five minutes to execute. It was impressive and I don’t know how they did it, but I have a pretty good idea: at one point, Teller picked up a piece of paper that was supposedly in a sealed glass container a moment before. It would not have been difficult to use basic sleight of hand to switch that paper with another. I believe the whole five-minute trick hinged on this one sleight of hand that took less than two seconds. So what appeared to be huge and complicated was actually done by the same sleight-of-hand techniques that your uncle uses when he pulls a coin out of your ear. What’s important about this example is that the solution to the incongruity is much less interesting than the incongruity itself. Although conspiracy theorists might not want to believe it, sometimes big, complicated events have simple explanations. We find both magicians and self-proclaimed psychics fun to watch—the difference between them is that the magician doesn’t try to convince the audience that he or she is actually capable of using real magic. The magician might talk that way, but it is a convention that is understood by the audience. While our old brains are fascinated with the incongruities we see, our new brains know that it’s all a delightful trick.
If magicians are like fantasy fiction writers, then psychics are like those who write “nonfiction” describing fictional events.
Psychics try to entertain your whole brain, old and new, by making you really believe it. They use some general guidelines to make the tricks more believable. For example, feats of telekinesis must be small: if you levitate yourself and do flips, the audience will assume you’re using wires, but if you concentrate and move a paperclip a few inches, it strikes people as more believable. So-called psychics also make their shows more compelling by telling the audience that they have merely better harnessed a power that lies latent in all of us, appealing to our sense of hope. Since there is no psychic ability (as many, many years of research have shown), they are frauds who appeal to compellingness at the expense of real understanding. Some might actually believe they are psychic, doing tricks such as “cold reading” without knowing it. They deserve less blame, perhaps, but their actions are just as destructive, in that they perpetuate belief in the paranormal.
Incongruous phenomena like psychic shows intrigue us because they make our minds search for meaning. The mindset this puts us into makes learning more likely to happen.59 One is less of a passive observer when engaging with the absurd.
Likewise, many fantasy books, particularly those written for children, such as Charlie and the Chocolate Factory, the Oz books, and the Alice in Wonderland books, feature absurdity of a kind novelist Anthony G. Francis terms “inexplicable wonderfulness.” The reader of these stories does not believe that the author has an internally consistent model of the narrative’s world that could be figured out. As a result, the inexplicable wonderfulness is appreciated aesthetically and not for the possible solution to the incongruities that might be found with enough thought and discussion.
Many stories from the horror genre make use of absurdity with a feature I call “inexplicable awfulness.” For example, in the Clive Barker novel The Damnation Game, a character appears for a short scene to escort the protagonist. She has no lips. This creepy image is never explained. Nor does the reader feel the need for an explanation—perhaps the mystery is intrinsically riveting and makes the image more frightening.
There is a danger, though, that if the incongruity is too great, the audience will not have enough hope that it can be resolved. That is, if the audience feels there is no hidden pattern, no solution to be found, that feeling will reduce the compellingness of the narrative for some audiences. Even in the no-lips example above, as intriguing as it is, I would predict that if the image came with some information that could be interpreted as a clue, it would be even more compelling.
Other things, such as mystery stories, many jokes, and scientific explanations, are compelling because they present an incongruity and then resolve it. In a mystery novel, the revealed solution, in concert with the proposed mystery, is more interesting than the mystery alone. I will refer to this class of things, the resolved incongruities, as mysteries. This clever tactic takes advantage of our love of incongruities (at the beginning of the narrative) and our love of patterns and figuring things out (at the end). We are delighted twice.
Scientific communication works as a mystery, and the resolutions to scientific puzzles are interesting in their own right. When a resolution fails to be compelling, it leaves the audience feeling that the theory has drained the beauty from the incongruous phenomenon, which had its own absurdist appeal. Is the knowledge that the hormone oxytocin causes attachment an insult to the bond someone feels with his or her spouse of forty years? It unweaves the rainbow, as Keats said.
The final class in the taxonomy consists of incongruities that can be figured out but whose solutions are not presented explicitly. I call these resolvable incongruities simply puzzles.
Riddles are framed as puzzles: the riddler poses one and invites the audience to figure out the answer. If the audience figures it out, that is a success, and the riddle is appreciated as a puzzle. If the audience fails and gives up, the riddler reveals the solution, and the riddle is appreciated as a mystery instead. Unlike riddles, jokes are essentially mysteries, and the polite listener will not guess the punch line.
Because of idea effort justification, audiences feel a swell of pride and happiness at finding the solution to a puzzle all by themselves. It’s been found that people understand characters’ mental states in literature even better when those mental states are not made explicit.60 This suggests that puzzles have the potential to be the most rewarding kind of incongruity of all. With puzzles, the audience gets to appreciate so many things: the initial incongruity, the pleasure of knowing the solution, the pride of having discovered it themselves, and an increased value of the found solution due to idea effort justification.
The Star Wars universe is an example of a set of artworks that work together to generate puzzles for the audience. It is one of the most fleshed-out fictional worlds ever created.61 It has canonicity, a term coined by T. S. Blakeney to refer to an imagined world that maintains internal consistency throughout its various stories and related artworks. Online discussion boards for Star Wars feature endless discussions, with participants trying to figure out, for example, why some Jedi vanish when they die and others don’t.
My beloved enjoyed a puzzle experience while fulfilling her prenuptial agreement to watch all six of the Star Wars films with me (not in the same night; I have a heart). At one point while watching the prequels, it dawned on her that the Clone Wars were orchestrated by the same man who was leading both sides. Nobody in the films ever comes out and says this, but it’s something a viewer figures out. For her, it was a jaw-dropping realization. When something dawns on you, that’s the satisfaction of the puzzle.
All riveting incongruities attract us by piquing our curiosity. We see that something needs resolution, and we are drawn to it by the subtle promise of a solution we might discover or that might be revealed. The word mystery should be used when the solution is difficult to arrive at and interesting in its own right. Puzzle should be used when the author is confident that the audience will be able to figure it out, because idea effort justification will make that solution better than if it were handed to them, as it is in a mystery. Absurdity should be used to encourage meaning-making in the audience, or when you have some reason to believe that the audience members are secure, bored, or have high dopamine levels.
To reflect briefly on the very book you’re reading, it is mostly mystery: I present problems to pique interest and then present solutions I hope the reader will find elegant and satisfying. It is also peppered with absurdism, because there are still things in the world that are not yet understood.