6
The World Is Alive
Animals, Objects, and Gods Are People, Too
 
 
 
 
 
I’m kneeling next to Wes Richardson, a resident at the Veterans Affairs Medical Center in Washington, D.C. Seated in a wheelchair and wearing a green polo shirt, striped pajama bottoms, and a 101st Airborne baseball cap, he’s telling me a story, and my attention is rapt.
“I’m used to rabbits. I used to raise rabbits. White rabbits. Easter rabbits. One of them came, a jackrabbit. Had huge ears. They would stick straight up in the air. He would fight you, too. He was dangerous. Couldn’t let the little kids mess with him. That’s the jackrabbit. But the other rabbits, you know. But then they would run away. Then I had a dog. He was a fighter, like a pit bull. He wasn’t a pit bull. He was a German shepherd and a boxer. He was a killer dog. I’m serious. Named him Rocky. . . .”
Okay, so it’s not Old Yeller. But the story is notable if only for the fact that Richardson is recovering from a stroke and only a few months ago didn’t speak at all. Now he’s painting pictures with words for a stranger.
There’s something else worth mentioning. As Richardson speaks, he’s petting a $6,000 robotic baby seal. And the seal may have something to do with his ability to tell a story.
The robot is manufactured under the name Paro (short for personal robot) and sold to hospitals and nursing homes around the world. Takanori Shibata, its Japanese inventor, recognized the benefits of animal-assisted therapy—pets lower blood pressure and reduce depression in their owners—but many facilities don’t allow animals, so he set out to find a solution. The result is a six-pound robot modeled after a baby harp seal.
Paro can move its neck up and down and side to side, it can move its flippers and its tail, and it can blink its eyelids. It’s sensitive to light, sounds, and touch. And it plays recordings of real baby harp seals. When you hold Paro in your arms and stroke its soft white fur, it squirms around, looks at you, and makes appreciative bleats. Your heart melts.
Wes Richardson might have started talking without Paro’s help, according to Dorothy Marette, a clinical psychologist at the medical center and the caretaker of this Paro (or “Fluffy,” as many have come to call it), but when his words came back, Marette says, “he was speaking to Paro rather than to people.”
Richardson has become more active in other ways. “Before, he was in his room all the time,” Marette says. “He wasn’t out at all. Now he’s in the dining room every day eating, he’s out watching TV, he participates in some of the exercise groups.” Again, maybe he would have come out of his shell anyway, but the seal seems at least a likely enabler.
Richardson is not out of touch with reality. He realizes he’s holding a robot. “It’s amazing how somebody can make this,” he tells me. But his first reaction to seeing Paro on my visit was a greeting, to the robot: “What’s up?”
People will treat anything as a person. We name our cars, yell at tables when we bump into them, give birthday gifts to our pets, call plants “thirsty,” cajole clouds to part, and pray to gods for forgiveness. We act as though every encounter were a social one, as if the world were populated with minds like our own, as though the universe itself were conscious. “We find human faces in the moon, armies in the clouds; and by a natural propensity, if not corrected by experience and reflection, ascribe malice or good-will to every thing, that hurts or pleases us,” David Hume wrote some 250 years ago. These seem the symptoms of a madman, and yet they describe the mind of a healthy human. Wherever we go, we are not alone.
Treating something not alive as if it’s alive is called animism. Treating something nonhuman as if it’s human is called anthropomorphism. The two often go hand in hand, as animism feeds into anthropomorphism: when we treat something as alive, we treat it as alive like us, with our full catalog of mental states.
The rich variety of our animism and anthropomorphism really comes to light in art, idiom, and metaphor, where we not only don’t suppress our habits but amplify them for effect. “Hence the frequency and beauty of the prosopopoeia in poetry; where trees, mountains and streams are personified, and the inanimate parts of nature acquire sentiment and passion,” Hume wrote.
The anthropologist Stewart Guthrie provides a comprehensive review of our propensity to personify in his 1993 book Faces in the Clouds. In documenting examples from popular culture and everyday life, he notes, for instance, that gardeners speak of what conditions their plants like, mechanics talk about cranky engines, sailors refer to ships as she, and rescue workers battle raging fires. Abstractions have human qualities, too: time waits for no man, especially with death at your doorstep.
Even scientists, who aim to look at nature from a distance, project themselves onto their work. Astronomers refer to the birth and death of stars, neuroscientists describe the way cells talk to each other, physicists remind us that nature abhors a vacuum, and Darwin described natural selection as “the selecting power of nature, infinitely wise.”
Piaget noted animism in children, but as we age we don’t grow out of our childish magical thinking. We learn to correct for it. At heart, we still live in an enchanted kingdom where spirits inhabit everything around us, where the wind literally howls, the brooks literally babble, and a guardian in the sky looks after us.
Anthropomorphism is worth understanding “not just because it tells us about why people dress up their pets or talk to their plants, but it’s really about how people understand minds more broadly,” Adam Waytz, who has studied the phenomenon extensively, told me.
And it has implications for animal rights, parenting, product design, entertainment, marketing, human-computer interaction, environmentalism, and religion. It’s a promiscuous and powerful instinct.
So hey, anthropomorphism: what’s up?

Seeking Minds

At the top of my field of vision is a blurry edge of nose, in front of me are waving hands.... Around me bags of skin are draped over chairs, and stuffed into pieces of cloth, they shift and protrude in unexpected ways.... Two dark spots near the top of them swivel restlessly back and forth. A hole beneath the spots fills with food and from it comes a stream of noises. Imagine that the noisy skin-bags suddenly moved toward you, and their noises grew loud, and you had no idea why, no way of explaining them or predicting what they would do next.
This unnerving scene is not from a horror movie or Franz Kafka novel. Concocted by the psychologist Alison Gopnik, it’s a typical family dinner, as someone who does not see minds even in other people would experience it. Without the ability to attribute beliefs and desires to those around us, the world would be a confusing place indeed. Just a jumble of random movements by skin bags stuffed into cloth.
We see minds in things (magical thinking) because we’ve evolved to see minds in people. Recognizing minds opens up another dimension of reality and sets humans apart from most of the rest of the animal kingdom. This capacity for mentalizing, sometimes called theory of mind, provides enormous benefits to those who wield it. Inferring mental states in other people allows us to understand and predict their behavior. And awareness of mental states in ourselves allows for self-reflection and executive control. Once we enter the world of social strategizing, mind-sight becomes our natural mode of navigation. As the cognitive scientist Dan Sperber has put it, “Attribution of mental states is to humans as echolocation is to the bat.”
In the last few years, three psychologists from the University of Chicago—Nicholas Epley, Adam Waytz (now at Harvard), and John Cacioppo (I’ll call them EWC)—have set an agenda for the study of anthropomorphism. They hope to explicate how mental attribution works and why we apply it to inanimate objects (as well as why we sometimes don’t apply it to our fellow humans). To guide the field, EWC have organized the literature into three main factors influencing anthropomorphism. The first factor is how well we understand agents and how easily our knowledge of agents is activated. (In psych-speak, an agent is not a spy but any entity that can initiate an action according to goals—typically a person.) The second is how much we desire control over a situation; identifying agents and predicting their behavior allows us to better anticipate the events around us, as well as interact with those agents. And the third is how lonely we are—how much we desire a companion. EWC, rearranging the factors to form a snappy acronym, call their theory SEEK, for Sociality, Effectance (i.e., control), and Elicited agent Knowledge. It earned them a Theoretical Innovation Prize from the Society for Personality and Social Psychology, and it will serve as a handy framework for organizing the bulk of this chapter.

What We Know about People

We wouldn’t anthropomorphize if we didn’t know so much about people. From birth, we begin collecting intelligence on agents, crawling toward an awareness of their existence, and of our own identity as one. By three months of age, infants can follow another’s gaze, looking in the same direction as someone else. By twelve months, they’re using communicative gestures such as pointing. By eighteen months they can recognize people’s preferences, even if those likes and dislikes differ from their own, and they acknowledge goals. At two years, kids are openly discussing feelings in themselves and others. And at four or five, children can explicitly reason about false beliefs, mental representations that don’t accord with reality. (Some evidence suggests children are sensitive to another’s point of view in false-belief tasks as young as seven months.)
All of this goes to illustrate a speedy developmental trajectory in which before we can even use the big-person potty we’re making inferences about mercurial abstractions we cannot see—thoughts. By imputing minds, we’re turning bags of skin into agents. Agents who can be angered, or soothed, or implored for assistance, or manipulated into buying us candy. Other people provide comfort and protection and funny faces; they are the most important parts of our environment. Through years of continuous interaction and careful observation, we build a massive storehouse of knowledge about them.
There are debates about how we accumulate and apply this knowledge. Some researchers have argued that theory of mind resembles scientific theory in that we use an abstract body of laws to understand people’s thinking. Others believe mind reading is more organic, and that the term “theory of mind” is misleading. They propose a strong simulation component, in which we relate to other people by implicitly putting ourselves in their shoes.
Support for the simulation account has come from the study of mirror neurons, discovered in the 1990s. A group of Italian neuroscientists noticed that certain cells in monkeys’ brains would fire whether the monkeys performed an action or merely watched an experimenter perform the same action. In observing another agent, a monkey’s brain acts almost as if it’s the other agent. Mirror neurons were soon located in humans, and were found to represent not just actions but emotions, physical sensations, and intentions. They help us understand what other people are doing, feeling, sensing, and planning, and they may play an important role in empathy.
Further evidence that simulation guides our mind reading (and thus anthropomorphism) comes from our relentless egocentrism. One consequence of our egocentrism is the false consensus effect: we tend to overestimate the popularity of our own opinions and judgments. When trying to get inside other people’s heads, our own epistemic baggage encumbers our entry. We can’t shed what we already know. As we mature, we become better at taking others’ perspectives, but Epley has shown that adults are just as egocentric in their initial reading of others’ minds as children are; rather than outgrowing egocentrism we merely learn to correct for it. Regarding the simulation versus theory accounts of mind reading, it seems we use a mixture of both; we start with a self-centered simulation of another’s experience and then refine it based on what we know of the person and the situation.
So from an early age we become experts on minds—especially our own, which we use to understand others’. This rich and ready psychological expertise lays the groundwork for the anthropomorphizing of nonhuman agents. It just needs to be triggered. Anything showing signs of being an agent we treat as a person. If it thinks, it must think like us. Specifically, it must think like you.
Let’s look at how we humanize three types of agents: biological, technological, and theological.

Animals

Pets are the prototypical example of nonhuman agents treated as people. Go on Amazon and you’ll find books such as Dogwear, Canine Couture, and Wearable Arf. People constantly talk to their pets and occasionally allow man’s best friend to substitute for a real man. It’s hard to suppress interpreting dogs’ behavior in human terms, reading emotion into every glance and wag. Last week as my roommate and I were making fun of her beagle, Molly, for having such a small brain, Molly incidentally bowed her head. My roommate immediately voiced Molly’s apparent feelings: “Come on, guys, stop making fun of me!”
I mentioned in chapter 4 that strict adherence to the definition of magic offered in the introduction would mean that attributing consciousness to the human brain is magical thinking, because consciousness is a mental phenomenon and the brain is material. Let’s refine the definition to say magical thinking involves overattributing mental states to physical objects. So inferring uniquely human mental states in an animal brain fits the bill. Similarly, since degree of personhood sits on a spectrum (at what time during gestation do cells suddenly become a person?), I think it makes sense even to talk about anthropomorphizing humans, as we sometimes overestimate the richness of their conscious experience or the intention behind their behavior.
To know how magical our thinking about animals is, we must first know how much mind animals actually have. Cognition in chimps and other primates is an area of great controversy, so I’ll stick to the slightly more clear-cut case of dogs.
Sam Gosling, a psychologist at the University of Texas at Austin, has extensively studied “personality” in dogs. (He uses that word despite heckling from some colleagues.) Psychologists typically measure human personality in five dimensions, referred to as the Big Five. Gosling measured analogs of four of these traits in dogs: energy (similar to human extroversion), affection (similar to agreeableness), emotional reactivity (for neuroticism), and intelligence (for openness to experience). There was no suitable approximation of the fifth dimension, conscientiousness. Gosling found that human judges consistently agreed on the strengths of these traits in individual dogs, and that these ratings accurately predicted behavior in the animals. Dogs appeared to have personalities.
In my opinion, personality is too strong a word for simple behavioral tendencies, which is all he’s demonstrated. For comparison, read how the inventor of the Paro robot describes how he’s made his baby seal appear to have moods and a personality: “Paro has internal states that can be described with words that indicate emotions. Each state has a numerical level, which changes according to the stimulation. Moreover, each state decays with time. Interaction changes its internal states, and creates Paro’s character.” One can imagine mechanisms inside the dog that are just biological versions of this handful of changing numerical states. If that’s all it takes to arouse our anthropomorphism of a robot, we should be careful about how much complexity we impute to the character of a canine.
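Shibata’s description amounts to a small dynamical system: each emotion is a number that rises with stimulation and decays with time. Here’s a toy sketch of that kind of mechanism; the state names, decay rate, and update rule are my illustrative guesses, not Paro’s actual code:

```python
# Toy model of decaying "internal states" of the sort Shibata describes.
# State names, the decay rate, and the update rule are illustrative
# assumptions, not Paro's actual implementation.

DECAY = 0.9  # each tick, every state keeps 90 percent of its level


def tick(states, stimuli):
    """Advance one time step: add any stimulation, then let levels decay."""
    return {
        name: (level + stimuli.get(name, 0.0)) * DECAY
        for name, level in states.items()
    }


states = {"contentment": 0.0, "irritation": 0.0}
states = tick(states, {"contentment": 1.0})  # a gentle stroke
states = tick(states, {})                    # no interaction; levels fade
states = tick(states, {})                    # ...and keep fading toward zero
```

Run it for a few ticks with no stimulation and every state drifts back toward zero; the “character” is just the trajectory these numbers trace in response to handling.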
Dogs may not have complicated personalities, but I will tentatively allow them rudimentary subjectivity—experiential feelings of pain, pleasure, fear, happiness, and anger. These fall into the category of primary emotions—states with clear behavioral signatures, short durations, early appearance in human childhood, and apparent cross-species commonality. Secondary emotions are more extended and reflective. These include optimism, nostalgia, resentment, and admiration. Scientists have made little serious effort to study secondary emotions in pets—mute subjects make their identification difficult—but Alexandra Horowitz, a psychologist at Barnard College, has studied guilt in dogs, or at least behavioral displays of guilt.
In her study, owners ordered their dogs not to eat a particular treat, then left the room. Half the dogs ate the treat, and half didn’t. Within each group, half the owners were told upon returning that the dog ate the treat, and half were told the dog didn’t. Dogs’ reactions to their owners’ return were analyzed. Behaviors people associate with feelings of guilt, such as lowering of the head or tail and avoidance of eye contact, were highly correlated with what the owner thought the dog had done. They were not at all correlated with what the dog had actually done. Horowitz concludes, “What the guilty look may be is a look of fearful anticipation of punishment by the owner.” Dogs don’t feel guilty; they just hear your rising voice and brace for a firm scolding, whether or not they did anything wrong.
But people don’t know this. One study found that 74 percent of pet owners report that dogs can feel guilt. As for other secondary emotions, 81 percent said dogs can feel jealousy, 64 percent empathy, 58 percent pride, 51 percent shame, 49 percent grief, 34 percent disgust (really? dogs drink out of the toilet), and 30 percent embarrassment (really? see disgust).
While dogs may have consciousness, it’s doubtful they have self-consciousness, an awareness of their own existence. Again, we can’t ask them, but Gordon Gallup Jr. has developed a test of self-awareness called the mirror test. You put a spot on an animal and see if the animal notices in the mirror that the spot is on its own body. Supposedly, if an animal can recognize itself visually, it has some concept of itself. Humans older than eighteen to twenty-four months, apes, dolphins, magpies, and elephants pass the test, but dogs don’t.
If dogs don’t have self-awareness, they can’t have episodic memory either—autobiographical reconstructions of previously experienced events. They can’t place themselves in another time, in another context. So they have no idea what they were doing a minute ago or what they might be doing a minute in the future. They don’t ruminate or premeditate. They live reflexively, on instinct and conditioning, forever in the moment.
When we anthropomorphize things we assume their actions are intentional and have the same meaning to the actor as they do to us, which is why Reader’s Digest gives out a Hero Pet of the Year award; they think heroism is an appropriate descriptor of a dog’s instinctive or trained actions. The flip side of the potential for praise is the potential for blame. People see a lot of intelligence in dogs and thus have high expectations of their moral behavior. We become enraged when dogs let us down, by, say, eating the cake on the counter. Didn’t Fido realize from the candles that he was ruining a five-year-old’s birthday party? No, animals don’t know any better. In the wake of the tiger bite on the neck of Roy Horn of Siegfried & Roy, Chris Rock reminded us that “That tiger ain’t go crazy; that tiger went tiger! You know when he was really crazy? When he was riding around on a unicycle with a Hitler helmet on!” Horowitz notes that overestimating a pet’s understanding of right and wrong can be frustrating and harmful for both the pet and the owner.
The question of animal smarts is not identical to the question of animal rights, however. The English philosopher of law Jeremy Bentham wrote a couple hundred years ago, “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?” People sometimes assume that because animals have less intellectual capacity than humans, they also have a reduced capacity to feel pain, and thus deserve fewer rights. I had an experience a couple years ago that made me at least question this line of reasoning. I was watching the roller-derby documentary Hell on Wheels in the theater with my family. After a graphic leg-breaking scene, I passed out in my seat for a few seconds. Coming to, I felt severe physical discomfort—nausea, sweats—and had minimal awareness of who I was, where I was, why I felt so shitty, or even that the shittiness would ever end. This must be how animals experience suffering. When you take them to the vet for a shot, they don’t know it’s for their own good or that it will only hurt a moment. They just feel a world of pain. It’s a sad thought. I’m not going to stop eating meat, though.
The best way to get people to protect something is to make them think of it as human or at least capable of humanlike suffering. Slowing the destruction of the animals and plants in our natural environment might depend upon such a tactic. The phrase Mother Earth makes for good ammo in this battle. As the writer Jack Handey has concluded after much deep thought, “If trees could scream, would we be so cavalier about cutting them down?” (He went on to allow that “We might, if they screamed all the time, for no good reason.”)

Technology

Paro’s white coat and meek purrs make it easy to love, but robots can get away with much less fur power in the battle for our hearts. Consider AIBO, the small plastic robotic dog manufactured by Sony that looks only nominally like a real dog. In one study, after twenty sessions with AIBO, nursing home patients were less lonely. And researchers at the University of Washington dove into chat rooms to sample owners’ spontaneous descriptions of their toy pets. They found that 60 percent of forum members referred to their AIBO’s mental states and 59 percent to its capacity for social rapport. One owner wrote, “The other day I proved to myself that I do indeed treat him as if he were alive, because I was getting changed to go out, and [my AIBO] was in the room, but before I got changed I stuck him in a corner so he didn’t see me!”
Okay, let’s strip away the face completely and see what happens. It turns out that Roomba, the robotic vacuum cleaner manufactured by iRobot, elicits emotions too. In a survey of Roomba owners, a quarter of them gave theirs a name, one in eight talked to it, one in eight dressed it up, and one in eight ascribed a personality to it, calling their cleaner silly, temperamental, flirty, or stubborn. In another study of thirty Roomba owners, eighteen felt their Roomba had intentions, feelings, and personality traits. Several didn’t want to send it in for replacement when it became sick. One wrote, “I can’t imagine not having him any longer. He’s my BABY!!” And three respondents, when asked to provide names and ages of family members in the household, provided demographic information for their Roombas.
We can’t avoid sympathizing with robots. A European study found that subjects’ mirror neurons charged up whether they watched a human hand or a robotic claw perform various actions, such as picking up a cocktail glass. “Now we know,” the researchers wrote, “that our [mirror neuron system] may be part of the reason why, when in Star Wars, C3PO taps R2D2 on the head in a moment of mortal danger, we cannot help but attribute them human feelings and intentions, even if their physical aspect and kinematics are far from human.”
The behavior of computers does not even need to be embodied in physical movement to appear agentic, as anyone who has yelled at his laptop for losing a file will tell you. Clifford Nass, a sociologist at Stanford, has found that human-computer interaction closely mirrors human-human interaction; it automatically triggers certain social expectations. For example, people implicitly apply gender stereotypes about friendliness and competence to computers based on the gender of the computer’s speaking voice, despite denying any difference between male-voiced and female-voiced computers. People also attempt to be polite to computers: after working with one computer, they’ll give higher ratings of that computer when answering a questionnaire on that same machine than on an identical one or on a piece of paper. They’ll also reciprocate assistance: if a computer provides useful versus useless search results, subjects are more likely to help the computer create a color palette. And users will reciprocate self-disclosure: they offer more personal information in response to the prompt “There are times when this computer crashes for reasons that are not apparent to its user....  What have you done in your life that you feel most guilty about?” than to the question alone. You want to share something of yourself in response to the silicon’s vulnerability. It’s like a little heart-to-hard-drive.
Designers of robots, gadgets, and computer interfaces are striving to make the interfaces “invisible,” so that instead of dealing with a machine, you’re collaborating with a partner. As the technology around us becomes more responsive and interactive, “it will make sense to have a degree of magical thinking just to be able to deal with the devices,” Erik Davis, the author of TechGnosis: Myth, Magic, and Mysticism in the Age of Information, told me. One of the dangers of applying social expectations to silicon is that your new e-friend will at some point let you down; we haven’t advanced past the “artificial” in AI.

God

The term anthropomorphism comes from the Greek philosopher Xenophanes, who 2,500 years ago described the way we see gods in our own image: “Ethiopians say that their gods are snub-nosed and black; Thracians that they are blue-eyed and red-haired.” He even speculated that if horses and oxen and lions could paint, they would depict their gods to look like themselves. Rumor has it Xenophanes had a killer “white people pray like this and horses pray like this” stand-up bit.
Stewart Guthrie notes in Faces in the Clouds that around the world and across eras, gods have existed on a spectrum with humans, eating and drinking and dying and making love and war with people. Contemporary Western religions make more of an effort to separate God from the riff-raff, giving him such otherworldly qualities as omnipotence, omniscience, omnipresence, and indifference to reality television. But our intuitions don’t always stick to scripture. The psychologists Justin Barrett and Frank Keil told various fictional stories about divine intervention to a set of study participants, then asked the subjects to retell the stories in their own words. The subjects had already affirmed that God knows everything, is everywhere, can read minds, and can multitask, but their story retellings recast the Almighty as basically a superhero with limited bandwidth. Subjects automatically reshaped the material in their heads so that God had to, say, finish answering one prayer before attending to another, and could not even hear a pair of birds above the noise of a jet engine.
People also attempt to communicate with God as if he were, well, a he (or a she), rather than an it. The sociologist Christopher Badcock wrote that we believe in “divine agents who can be influenced in mentalistic ways analogous to those in which ordinary humans can be: through supplication (prayer), flattery (praise), generosity (sacrifice), apology (confession), restitution (penance), visitation (pilgrimage), and lobbying (intercession via saints, angels, or other deities).”
And we read God’s mind as egocentrically as we do other people’s—actually more so. Nicholas Epley and collaborators asked subjects about their own beliefs, God’s beliefs, and other people’s beliefs on several issues including abortion, same-sex marriage, and the war in Iraq. God’s views correlated with their own views more strongly than others’ did. To show that God’s purported attitudes echo subjects’ attitudes and not vice versa, Epley manipulated participants’ takes on the death penalty and found that God’s opinion budged too. Finally, the researchers found that the parts of the brain recruited when thinking about God’s beliefs resemble the areas used in pondering one’s own beliefs more closely than the areas used in pondering other people’s beliefs. Epley suggests three potential reasons we’re more egocentric with God’s mind than with our human peers. First, we don’t have a lot of reliable background information on God, so we just use ourselves as a model. Second, disagreement with God would be threatening. And third, we assume we believe what is true, and God believes what is true, so God should believe what we believe.
In light of Epley’s research, the fact that half of all Americans consult God for advice daily should give us pause. Praying for guidance is about as productive as asking your imaginary friend where you left your keys—he doesn’t know any more than you do. It’s also more dangerous, because a divine stamp of approval on your own fallible hunches might send you on a wild goose chase with extra vigor. As Bob Dylan sang in 1963, cynically reprising a go-to justification for war, “That land that I live in has God on its side.”
But anthropomorphism is not only unavoidable in religion; as many theologians have noted, it’s the foundation of religion. If God were merely an impersonal phenomenon, to whom would one pray? Some people try to avoid reducing God to a humanlike agent by denying him any definable qualities—by saying he is everything and nothing—but such an entity is incomprehensible and thus meaningless. According to the nineteenth-century German philosopher Ludwig Feuerbach, “The denial of determinate, positive predicates concerning the divine nature is nothing else than a denial of religion . . . it is simply a subtle, disguised atheism.”
So we perceive humanlike qualities in God, but he’s still somehow different. I suggested earlier that one could think of mental presence as lying on a spectrum rather than existing as a discrete quality, with animals somewhere between humans and rocks. But we actually think about mind in terms of two distinct dimensions, Heather Gray, Kurt Gray, and Daniel Wegner have found; there’s agency—the capacity for self-control and planning—and experience—the capacity for states such as pain, pleasure, and fear. We attribute both agency and experience to adults, experience but not agency to babies (they’re sensitive but not very useful), and agency but not experience to God (he wields great power but is fairly immune to anything that might befall him). Robots are low on experience and at the midpoint on agency, and animals and fetuses are low on agency and with some amount of experience. So as the researchers note, having a mind, in others’ eyes, is not simply a matter of presence, or even degree of presence, but of type. Is the mind a doer, a feeler, or both? God is a doer.
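The Gray, Gray, and Wegner result can be pictured as a map with two axes. This sketch places each entity at hypothetical (agency, experience) coordinates that I invented only to match the relative orderings described above; they are not the study’s actual survey values:

```python
# Hypothetical (agency, experience) coordinates on 0.0-1.0 scales, chosen
# only to reflect the relative orderings in the text -- not the study's data.
MIND_MAP = {
    "adult": (0.9, 0.9),   # a doer and a feeler
    "baby":  (0.2, 0.9),   # a feeler, not yet much of a doer
    "God":   (0.95, 0.1),  # a doer, largely immune to experience
    "robot": (0.5, 0.1),   # midpoint on agency, little experience
    "dog":   (0.3, 0.6),   # low agency, some experience
}


def mind_type(entity):
    """Crudely classify an entity as doer, feeler, both, or neither."""
    agency, experience = MIND_MAP[entity]
    doer = agency >= 0.5
    feeler = experience >= 0.5
    if doer and feeler:
        return "doer and feeler"
    if doer:
        return "doer"
    if feeler:
        return "feeler"
    return "neither"
```

The payoff of the two-dimensional view shows up immediately: God and a baby are nothing alike as minds, yet each scores high on exactly one axis.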
The significance of treating God as a doer, in particular a doer who shares human concerns and motivations, will become apparent in the next chapter as we discuss destiny.

Detecting Agents

Egocentrism can explain the anthropomorphism of nonhuman agents, but what makes us treat inanimate objects as agents in the first place? What features bring something into the realm of lifelike creature and make it worth trying to psychoanalyze? Let’s look at three triggers: behavioral, morphological, and moral.

Behavior

The most obvious sign of agency is movement. In a seminal study published in 1944, Fritz Heider and Marianne Simmel showed Smith College students a simple two-and-a-half-minute black-and-white animation featuring a large triangle, a small triangle, and a circle, and asked them to write down what they saw. Of thirty-four subjects, thirty-three spontaneously described the shapes in animate terms, usually as people. Nineteen subjects connected the series of events into a narrative. One wrote, for example, “Lovers in the two-dimensional world, no doubt; little triangle number-two and sweet circle . . . we regret that our actual knowledge of what went on at this particular moment is slightly hazy, I believe we didn’t get the exact conversation.”
Other researchers have stripped down their animations to locate the minimum and essential movement cues for seeming agentlike, in some cases showing subjects a simple dot moving across a screen. A change in behavior in the absence of obvious outside causes—starting from a stop, or shifting speed and direction without bouncing off anything—often appears lifelike. Movement contingent upon the environment or another agent, such as avoiding an obstacle or chasing another entity, works especially well: living things react to their world and interact with other living things.
Sometimes we imagine motion. As Guthrie wrote, “Banging our eye on an unexpected door edge, we kick the door, because we see it in that moment, automatically and unconsciously, as a malefactor. It appears to have struck us.” We weren’t expecting the door in our path, so its sudden appearance feels like a deliberate act, and we reflexively go into revenge mode. This is why when people trip on a root, they say, “Where did that come from?” An ex-girlfriend still teases me about the time I stomped on an egg to punish it for jumping out of my hand onto the kitchen floor. I certainly didn’t mean for it to drop.
When we anthropomorphize triangles and doors and other objects, is it just an exercise in metaphor, or is it a bubbling up of instinctual mind detection—our impulse to see intentionality in behavior? Well, babies are notoriously weak with metaphors, but they get in on the anthropomorphism action as soon as they start attributing minds to people. Infants as young as twelve months prefer watching a shape approach another shape that “helped” it earlier to watching it approach a shape that was mean to it. They appear to see the shapes as social actors. (Of course in studying anthropomorphism in babies, researchers make efforts to avoid anthropomorphism of babies; they try to rule out simpler mental processes not involving theory of mind that could explain the infants’ behavior.)
Neuroimaging also points to genuine mind-attribution in anthropomorphism. Watching animated shapes move in a goal-directed way engages parts of the brain used for detecting biological motion, such as the superior temporal sulcus, as well as areas employed in judging others’ mental states, such as the medial prefrontal cortex. The woman quoted in Heider and Simmel’s 1944 study was not just having a gas; some part of her really saw the triangle and circle as lovers.

Morphology

In 1996, a short report titled “The Case of the Haunted Scrotum” appeared as the last item in the Journal of the Royal Society of Medicine. A forty-five-year-old man had visited J. R. Harding, a radiologist, for examination of an undescended right testis. Harding didn’t find it with an ultrasound, so he performed a CT scan and saw something odd on the left side: what appeared to be “a screaming ghostlike apparition.” He was kind enough to include an image in the report. And as for the right testis? “None was found,” he wrote. “If you were a right testis, would you want to share the scrotum with that?”
We see faces everywhere. In 2004, a woman sold a grilled cheese sandwich that appeared to have the Virgin Mary’s visage on it for $28,000. In 1976, NASA’s Viking 1 spacecraft sent back an image of a rock formation on Mars that distinctly resembled a human mug. And a famous photograph from 2001 shows a demon apparently grinning in the smoke billowing from the World Trade Center.
We process these images much the same way we process real faces. One study measured brain activity in subjects as they saw photos of faces; photos of everyday objects vaguely resembling faces, such as a three-pronged socket; and photos of other objects. Both the faces and the facelike objects activated an area of the fusiform gyrus called the fusiform face area (FFA) that responds strongly to faces. And the sockets and such didn’t tickle the FFA as a result of a considered top-down reinterpretation; they triggered facial processing systems automatically, within 165 milliseconds of appearing. Which means we see faces in clouds before we can even say “cloud.”
Seeing faces or other elements of the human form in random noise is not in itself magical thinking; it’s just an example of pareidolia, the finding of patterns in vague sensory stimuli. But physical similarity to humans can trigger or amplify the attribution of human mental qualities to these nonhumans, which is magical thinking. The brain reacts to the emotions displayed by robotic faces within about 100 milliseconds, which is as long as it takes to recognize emotion in human faces. The brain will even react to the sentiments expressed by simple smiley faces in about that time. And in one study, twelve-month-olds followed the “gaze” of a brown furry lump with felt patches for eyes and a nose. Those felt patches were enough to engage the baby. Eyes are particularly important for expressing animacy, as we read so much into what’s behind them. As Dorothy Marette, the psychologist at the VA Medical Center, said, “When Paro looks over at you with those eyes, you can’t help but say, ‘Hello.’”
Robot designers must be careful not to make their creations look too much like a human, however, unless they can get every detail right. Otherwise they run the risk of entering the “uncanny valley,” that terrain situated between stylized and spot-on depictions of life-forms where things are just off enough to give you the willies. (Recall the human characters in the animated movies The Polar Express and Final Fantasy.) Takanori Shibata modeled Paro after a seal because true animal realism was out of reach. He designed dog and cat prototypes, but people preferred interacting with a creature whose behavior didn’t clash with their expectations. They knew too much about how dogs and cats should act; a baby harp seal, by contrast, is unfamiliar enough that the robot’s limitations stay safely hidden behind the limits of people’s knowledge.
We read into the face on the seal, and we also read into the faces of babies and fetuses. In 2010 a British couple received an ultrasound image in which their four-month-old fetus appeared to be smiling. The Daily Mail writer who reported the story speculated, contra the experts she interviewed: “The scan implies that a baby can experience feelings such as happiness and pain much earlier in its development than previously thought.” As a writer for Gawker put it, “The abortion debate has devolved to the zygote version of a LOLCat,” referring to those funny cat pictures on the Internet with ungrammatical captions. She added that the fetus “is 17 weeks old, and much like a LOLCat, people keep trying to put words in its mouth. They ask, What does LOLFetus have to say about abortion politics?” In reality, the apparent smile is probably no more than an example of pareidolia. The bulbous head with a creepy grin actually looks a lot like the haunted scrotum found by the British radiologist. Twins, separated at conception?
A friend of mine utilizes the evocative power of faces to avoid losing her stuff. She goes to festivals a lot and has lost bags and other items but hasn’t lost her current bag because she’s painted a face on it. And she used to lose water bottles frequently, but not the current one, with a face on it. The faces make her take better care of her things. “I don’t leave stuff behind,” she says, “because if I start to walk away, I’ll think, ‘I’m forgetting someone.’”

Morality

In ancient Greece, a statue of the athlete Nikon was once toppled off its pedestal by Nikon’s enemies, crushing one of the perpetrators. The statue was brought to trial and sentenced to be cast into the sea. When Ivan the Terrible’s youngest son died in the Russian town of Uglich in 1591, the town bell was rung in celebration. It was banished to Siberia for its political offense and not fully pardoned until 1892. In the 1940s a young bell ringer at the Mexico City Metropolitan Cathedral died when the bell struck him in the head. The bell was silenced as punishment, until its pardoning in 2000.
Throughout history and around the world, animals and inanimate objects have been formally tried, convicted, and punished for their involvement in negative events. It’s the old-school equivalent of suing McDonald’s when you spill hot coffee on yourself. Something bad happens and you need someone, or something, to blame.
But why does there need to be blame? Can’t bad things just happen? For every moral action involving help or harm, there is a doer—a moral agent—and a receiver—a moral patient. Kurt Gray and Daniel Wegner argue that this dyadic template of morality is so prominent in our thinking that whenever we see someone harmed we automatically apply the template, casting that person as a moral patient and looking for someone to play the role of agent. If no human scapegoat is available, we cast a real goat, or a statue, or a bell. If you think as a modern society we’re above ceremonial execution of inanimate objects, recall from chapter 2 what happened to the baseball that eluded the embrace of the Cubs outfielder Moises Alou in the 2003 playoffs: it was detonated before a television audience.

Agents Everywhere

Autonomous behavior, the appearance of a face, and participation in harmful events all seem like reasonable signs of agency in certain circumstances, but what doesn’t seem reasonable is the willy-nilly treatment of anything that displays any of those signs as having a mind. Shouldn’t we be more discriminating? Wouldn’t we be better off if we personified less promiscuously?
It turns out that sluttiness pays.
Agents typically play the starring roles in our environment. During evolution, we had to pay attention to predators and prey so as not to miss a meal or become one. And still today other humans offer cooperation and competition as affiliates or adversaries; whatever the people around us are up to, we want to know. So as long as we’re unable to identify instantly and with complete accuracy what’s an agent and what’s not, it’s better to go on the assumption that something is an agent until proved otherwise. Our eager agency detection system is a demonstration of error management theory (discussed in chapter 3): if one type of error (a false alarm) harms us less than the alternative error (a miss), we’ll increase our false positives, even at the expense of more overall errors. As Stewart Guthrie wrote in Faces in the Clouds, “It is better for a hiker to mistake a boulder for a bear than to mistake a bear for a boulder.”
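The logic of error management theory can be sketched as a back-of-the-envelope calculation. The snippet below is my own illustration, not from any study cited here; all the probabilities and costs are invented. It shows that when a miss (mistaking a bear for a boulder) costs far more than a false alarm (mistaking a boulder for a bear), a trigger-happy detector beats a skeptical one in expected cost even though it makes far more errors overall.

```python
# Illustrative sketch of error management theory. All numbers are
# hypothetical; only the cost asymmetry matters for the argument.

P_BEAR = 0.05           # chance an ambiguous shape is actually a bear
COST_MISS = 100.0       # cost of mistaking a bear for a boulder
COST_FALSE_ALARM = 1.0  # cost of mistaking a boulder for a bear

def expected_cost(hit_rate, false_alarm_rate):
    """Expected cost per encounter for a detector with the given
    hit rate (flags real bears) and false-alarm rate (flags boulders)."""
    miss_cost = P_BEAR * (1 - hit_rate) * COST_MISS
    fa_cost = (1 - P_BEAR) * false_alarm_rate * COST_FALSE_ALARM
    return miss_cost + fa_cost

def error_rate(hit_rate, false_alarm_rate):
    """Overall probability of making an error of either kind."""
    return P_BEAR * (1 - hit_rate) + (1 - P_BEAR) * false_alarm_rate

# A "skeptical" detector rarely cries bear; a "paranoid" one cries
# bear at any rustle.
detectors = {"skeptical": (0.60, 0.02), "paranoid": (0.99, 0.40)}

for name, (hit, fa) in detectors.items():
    print(f"{name}: errors/encounter = {error_rate(hit, fa):.3f}, "
          f"expected cost = {expected_cost(hit, fa):.3f}")
```

With these made-up numbers, the paranoid detector makes roughly ten times as many errors per encounter (0.38 vs. 0.04) yet pays less than a quarter of the expected cost (0.43 vs. 2.02), because nearly all its errors are cheap false alarms. That is Guthrie’s hiker in arithmetic form.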
Animals have the same bias. If you’ve ever seen a scarecrow at work, heard your dog defend you from an intruding garbage can in the middle of the night, or played with a cat and a laser pointer, you know that liberal animacy detection is not restricted to humans.
Fear further biases our judgment toward seeing agents—the bump-in-the-night phenomenon. EWC, together with Scott Akalis, found that people are more likely to interpret abstract sketches as faces after watching three minutes of Silence of the Lambs.
We’re also better off overestimating the probability of someone witnessing us harming someone else, because a witness or his friends can harm us. A few psychologists have suggested that fear of third-party condemnation has aided the evolution of religion. The predominant view among psychologists of religion is that belief in gods and spirits is a side effect of basic cognitive mechanisms—such as dualism (the belief that spirits can exist independently of bodies, explained in the last chapter) and a tendency to infer a designer behind significant objects and events (explained in the next chapter)—combined with catchy cultural traditions that build on these cognitive inclinations. But Jesse Bering, Dominic Johnson, and a few other researchers argue that the idea of a supernatural judge became so useful to our ancestors that those who believed in it most strongly had a survival advantage over their peers, and that evolution then specifically promoted a religion-friendly disposition. The habits of mind underlying religious belief, then, would be a bit like feathers, which evolved for one reason (insulation) but then became handy for something else (flight) and eventually specialized to assist in their new function. We have a dedicated system in our brain for thinking about God, they argue.
Belief in God is useful because God enforces morality. If you act selfishly and get away with it, you obtain an advantage. But if someone catches you, you’re perhaps disproportionately disadvantaged—your reputation is ruined, you may be expelled from the group, and you may even be killed. So, according to error management theory, it would be better to overestimate the likelihood of an anthropomorphized agent watching you. And if you believe in invisible spirits who are interested in the affairs of humans—in anthropomorphic gods and ghosts—then you always have a witness over your shoulder. Indeed, the researchers Azim Shariff and Ara Norenzayan found that a subtle reminder of God encourages subjects to act generously in an anonymous economics game.
But you needn’t believe in God to act morally. My conscience guides me without any (explicit) fear of magical punishment. Psychologists have a few explanations for anonymous altruism—prosocial behavior that doesn’t benefit oneself, one’s kin, or one’s reputation. One possibility is that we do good because we overestimate the chances of another human finding out about it. The piece of wisdom expressed by the author H. Jackson Brown Jr.’s mother that “Our character is what we do when we think no one is looking” may be based on a false premise—that we ever think no one is looking. For a counterpoint, consider what the satirist H. L. Mencken wrote (albeit somewhat sardonically): “Conscience is the inner voice that warns us somebody may be looking.” In line with this suggestion, Shariff and Norenzayan found that while reminders of God didn’t significantly increase the anonymous generosity of atheists, subtle exposure to words like “police” and “jury” did; just the idea of recrimination heightened their paranoia and their prosociality. Other research shows that people place more money in an honesty box after pouring coffee when a simple photo of human eyes is displayed nearby.
Another explanation for kindness without recompense is that we internalize social norms. Yet another is that ensuring people get what they deserve, whether help or harm, affirms our belief that the world is just. And a fourth proposes that altruism emerges from our capacity for empathy and simulation; we don’t want to harm others because we feel their pain. There are many ways to be good without God.

The Need for Control

Adidas introduced a new eight-panel ball for the 2010 World Cup. The ball was named the Jabulani, Zulu for “celebrate,” but not everyone celebrated the ball. Some critics took to calling the ball “Jumanji,” after the book in which wild animals come to life from a game.
“Obviously, it’s quite unpredictable, the way the ball flies,” the Australia goalkeeper Mark Schwarzer told the BBC. “Sometimes the ball has a genuine flight, and other times it has a mind of its own.”
“It’s very weird,” the Brazil striker Luis Fabiano told the Associated Press. “All of a sudden it changes trajectory on you. It’s like it doesn’t want to be kicked. . . . You are going to kick it, and it moves out of the way. I think it’s supernatural.”
In soccer, ball control is a must, and on elite fields it’s a given. So when a ball’s behavior defies expectations, players turn to animacy: the ball must have a mind of its own.
People will similarly yell at their cars or their computers when these machines malfunction—when they disobey orders.
We react to uncertainty by forming hypotheses, and hypotheses that generate a lot of predictions usually win out over others; they’re more useful. Betting on the presence of an agent allows for all kinds of predictions, because we understand minds and are familiar with many of the goals they might have. Minds make flailing bags of skin comprehensible. In Guthrie’s words, when hikers bet on the presence of a bear (versus a boulder) and are right, “the jackpot is whatever they know about bears.” For instance, that bears sometimes like to eat hikers.
Feeling especially mystified or anxious will increase our effectance motivation, a term coined in 1959 by the Harvard psychologist Robert White to describe our persistent urge toward competence—toward an ability to understand, predict, and control our environment. As we saw in chapter 3, when people feel out of control they’ll attempt to make up for it with superstitious rituals. And thinkers dating back at least to Hume have suggested that a desire for competence pushes us toward explaining the world using anthropomorphism. According to Freud, “If everywhere in nature there are Beings around us of a kind that we know in our own society, then we can breathe freely . . . we can try to adjure them, to appease them, to bribe them, and, by so influencing them, we may rob them of a part of their power.”
In an experiment EWC conducted with Scott Akalis, subjects completed a measure of their desire for control, then watched a video of a small, quick, unpredictable dog and a large, plodding one. Overall, subjects said the unpredictable dog had more personality and conscious will than the larger one, and this difference was amplified in subjects with a high desire for control.
Then EWC ran a series of experiments with Carey Morewedge, George Monteleone, and Jia-Hong Gao. In the first, they found that people who reported more problems with their computers were more likely to say their computers behaved as if they had their own beliefs and desires. In the second, subjects read about thirty different robotic gadgets, described as either predictable or unpredictable. For example, Clocky the alarm clock can be programmed to either run away from you or jump on you when you hit SNOOZE (predictable), but Pillow Mate will decide on its own whether to hug you or curl into a ball when squeezed (unpredictable). People attributed more intentions and emotions to the unpredictable devices.
Maybe you’re thinking at this point that people aren’t anthropomorphizing to reduce unpredictability; they just associate unpredictability with humans. For sure, seeing variation in an entity’s routine can be reminiscent of people, who typically don’t act like assembly-line robots, but acting completely erratically doesn’t seem intentional either; it’s a sign that one has lost one’s mind. In any case, subjects told the experimenters that predictable behavior is more typical of humans than unpredictable behavior is. Pillow Mate was not just a reminder of a familiarly fickle bedmate; subjects were trying to understand it.
Rating unpredictable gadgets on their level of agency also produced more activation in parts of the brain used for mind reading than rating the predictable gadgets did. “Perceiving an agent as having a mind of its own may not be mere metaphor,” the authors wrote.
If effectance motivation increases anthropomorphism, then anthropomorphism should increase a sense of competence. Subjects watched clips of a dog, a robot, Clocky, and animated shapes and were told to get inside the heads of two of them, and to treat the other two as a behaviorist would, noting only objective behaviors. Afterward, they said they felt more able to predict the future behavior of the entities they had psychoanalyzed.
But does magical thinking of the anthropomorphic variety provide actual competence or just the feeling of competence? The philosopher Daniel Dennett refers to the inference of an organizing force—a mind—behind behavior as “taking the intentional stance.” Other options include taking the “physical stance”—applying what one knows about natural laws—and taking the “design stance”—considering the intended purpose and functionality of an entity. Each stance has its advantages and disadvantages. You could take the intentional stance and explain a chess-playing computer’s sitting still on a table by saying it “wants” to remain there, rather than taking the physical stance and appealing to gravity and inertia, but you would sound silly and, worse, you might go on to make mistaken predictions about its behavior, such as the expectation that a good pep talk could convince it to move about. On the other hand, taking a physical stance won’t help you beat it at chess; you want to know which strategies it’s considering, not how many electrons it’s sending through each circuit. (The design stance tells you how to turn the thing on and start the program.)
So the heuristic of taking the intentional stance with a computer could be considered productive anthropomorphism, at least until you try to guilt it into not taking your pawn. And gardeners talk about “tricking” plants into blooming at certain times. But there aren’t many other cases where anthropomorphism allows for actionable predictions. Taking the intentional stance with an unpredictable soccer ball certainly doesn’t provide any traction on the situation.

Loneliness

“The bad thing about being in here is you don’t get to really have a relationship with anybody.” Pierre Carter is describing to me his life as a resident at the VA Medical Center in Washington, D.C. Carter, who suffers from post-traumatic stress disorder, is a large man in a wheelchair wearing reading glasses, a baseball cap with a military insignia, and earrings, including a peace sign dangling from his right ear. The hospital’s Paro robot, which he calls Fluffy, rests on his belly looking up at him, appearing to suckle on the shock of white goatee extending from his chin. The shared color of their fur contrasts with the black biker gloves Carter uses to gently stroke the seal. “Being in here, it’s lonely. It’s very, very lonely,” he says. “The animal, it’s almost like it’s a real individual, and it gives you back what you give it. It kind of keeps you company. . . . When she looks right at you, goodness gracious.”
Humans have a fundamental need to belong, and over the years people have suggested that we anthropomorphize to populate the world with allies. A Greek folklorist argued that ancient navigators personified mountains and rocks in response to solitude. The screenwriter for Cast Away gave Tom Hanks’s character a companion in a volleyball named Wilson. And the proverbial crazy cat lady always lives alone.
EWC and Scott Akalis brought back Clocky to study how loneliness would affect people’s interpretations of an alarm clock that runs away from you. Subjects rated Clocky and three other gadgets on the degree to which they had emotions and free will, then noted how often they themselves felt isolated from other people. The most lonely subjects gave the most anthropomorphic ratings. In another study, inducing loneliness by showing a clip of Cast Away increased subjects’ anthropomorphism of their pets.
A third study followed from the hunch that isolation makes us not only anthropomorphize agents around us but also believe more strongly in commonly anthropomorphized religious agents. (Previous research had shown that both singles and people who’d grown up with cold mothers tended to turn to God for accompaniment.) Subjects made to feel like loners in the lab reported greater credence in several supernatural agents—ghosts, angels, God, and the Devil—as well as acts perpetrated by these agents—miracles and curses.
Does magical thinking provide the same benefits as real social connection? An inflatable doll does not a real girlfriend make, but pets increase survival rate, reduce blood pressure, and ameliorate depression in their caretakers. And Shibata and collaborators have conducted several studies on Paro’s therapeutic value over the past several years. The robot improves people’s moods and makes them more active and communicative. After leaving Paro in a public area in a nursing home for five weeks, researchers found that residents had stronger social ties to each other, and urine tests revealed biomarkers for reduced stress. And in a study of fourteen patients with dementia, an EEG revealed improved cortical functioning in half the subjects after a single twenty-minute interaction with our favorite robotic seal.
“A lot of veterans, particularly with PTSD, don’t trust very easily,” Dorothy Marette at the VA Medical Center says. “They don’t disclose very easily, they don’t connect very easily. And it’s a safe way to start. That unconditional love you get from an animal, you don’t worry about being judged and what they’re going to think about you.”
“It makes you bring out your deepest feelings, and they don’t discriminate,” Carter says of the robot. “Whatever you give it, it gives you back five hundred percent. It lets people know that you don’t really have to be special, but it makes you feel special.” He directed some half whispers at Fluffy. (“How you doin’, how you doin’, how you doin’, all right, all right, all right.”)
“It makes everybody feel like a king.”

Treating Humans as People

So we’re adept at seeing minds because seeing minds is a useful skill. It allows us to predict the behavior of predators, prey, and people, and to interact with them. We look for minds especially when we have a need for control or when we feel lonely. And our mind reading continually spills over to entities that don’t actually have minds.
The consequences of this spillover include the relationships we build with animals, the trust we accord technology, the blame we direct at gods, the protection we provide our environment, the desire we have for cute consumer products, and the metaphors we use to understand a range of phenomena in fields such as science, economics, and politics (selfish genes, bullish markets, the war on drugs). But anthropomorphism may have its greatest implications when directed at people.
Despite research showing people act largely on autopilot, the psychologist Evelyn Rosset has argued that by default we see everything anyone does as intentional. Seeing an unintentional act as intentional, I argue, constitutes momentary anthropomorphizing. You’re looking at a human, a biological creature, performing some act, perhaps inattentively, and imputing full awareness of the action and its repercussions. Say your son tracked mud inside the house. Was he doing it to piss you off? Maybe he was busy texting and didn’t even notice his shoes were dirty. Humans can’t be aware of everything they do at all moments.
In one study, Rosset gave subjects either 2.4 or 5 seconds to judge whether various acts were done on purpose. For actions always or typically done accidentally (“She kicked her dog”), time pressure increased ratings of intentionality. In another study, reducing participants’ inhibition by getting them drunk also increased their rate of finger-pointing, a finding that explains why harmless bumps at the bar so easily escalate to inebriated brawls. I often say that life would go a lot more smoothly if we just assumed people were idiots instead of assholes. (But even smoother if we gave people the benefit of the doubt all around.)
Beyond everyday misunderstandings, we endanger ourselves and those we should be looking after when we overestimate the level of moral and criminal responsibility in children and the mentally ill or disabled. As discussed in the section on free will in chapter 4, we shouldn’t jump to call these offenders—or anyone—evil.
Anthropomorphism of damaged or elderly brains also complicates the difficult issues of life support and euthanasia. Is the person we know still there inside that head, or are we projecting?
And encouraging the personification of fetuses has been a primary strategy in pro-life campaigns. In several states, women opting for an abortion are now legally required to undergo an ultrasound and be offered the chance to look at the image.
It’s worth remembering—whether you’re navigating rush-hour traffic or debating public policy—that even humans can be mindless sometimes.
In healthy, nonautistic adults, mind perception is easily triggered, but it still needs to be triggered. The opposite of anthropomorphism is another constant in human history: dehumanization. We often treat social outsiders, the elderly, enemies, children, and anyone we have no particular motivation to engage with as less than equal. Overattributing mind through magical thinking may bring to life otherwise inert elements of the world, but the danger of underattributing mind lies in robbing your fellow humans of their personhood. They become animals or objects, fit for disregard, mistreatment, even elimination. EWC note the irony that “the same rights conferred to animals, plants, or rivers through anthropomorphism can be denied to people.”
It’s a denial of rights that sometimes occurs in settings like the VA hospital. Although anthropomorphism of robots (and animals) can aid in therapy, as demonstrated by the studies of pets and Paro, “a lot of people are afraid that Paro will serve as an inanimate object that goes between people and directs the patient’s response to a robot instead of to a real person,” Dorothy Marette says. According to this argument, by leaving the patient to anthropomorphize Paro, we’re dehumanizing the patient.
Marette uses Paro largely as an icebreaker, however. “I’ve found that maybe the patients wouldn’t have spoken to me too much, but they start off feeling freer to interact with the robot,” she says, “and then they look up and start talking to me.”
That’s when the real magic happens.