THE DANGER OF INADVERTENTLY PRAISING ZYGOMATIC ARCHES

ROBERT SAPOLSKY

Neuroscientist, Stanford University; author, Monkeyluv: And Other Essays on Our Lives as Animals

I don’t think there is free will. That conclusion first hit me in some sort of primordial ooze of insight when I was about thirteen years old, and it has only become stronger since then. What worries me is that although I think this without hesitation, there are times when it’s simply too hard to feel as though there’s no free will—to believe it, to act accordingly. What really worries me is that it’s so hard for virtually anyone to truly act as if there’s no free will—and that this can have some pretty bad consequences.

If you’re a neuroscientist, you might be able to think there’s free will if you spend your time solely thinking about, say, the kinetics of one enzyme in the brain, or the structure of an ion channel, or how some molecule is transported down an axon. But if you devote your time to thinking about what the brain, hormones, genes, evolution, childhood, fetal environment, and so on have to do with behavior, as I do, it seems simply impossible to think there is free will.

The evidence is broad and varied. Raising the levels of testosterone in someone makes him more likely to interpret an emotionally ambiguous face as a threatening one (and perhaps act accordingly). A mutation in a particular gene increases the odds that its carrier will be sexually disinhibited in middle age. Spending fetal life in a particularly stressful prenatal environment increases the likelihood of overeating as an adult. Transiently inactivating a region of the frontal cortex will render someone more cold-hearted and utilitarian when making decisions in an economics game. Being a psychiatrically healthy first-order relative of a schizophrenic increases the odds of believing in “metamagical” things like UFOs, extrasensory perception, or literalist interpretations of the Bible. Having a normal variant of the gene for the vasopressin receptor makes a guy more likely to have stable romantic relationships. The list goes on and on (and just to make a point that should be obvious from this paragraph but which can’t be emphasized too frequently, lack of free will doesn’t remotely equal anything about genetic determinism).

The free-will concept requires us to subscribe to the idea that despite the swirl of biological yuck and squishy brain parts filled with genes and hormones and neurotransmitters, there’s an underground bunker in a secluded corner of the brain—a command center containing a little homunculus who chooses your behavior. In that view, the homunculus might be made of nanochips, or ancient, dusty vacuum tubes, or old crinkly parchment, or stalactites of your mother’s admonishing voice, or streaks of brimstone, or rivets made out of gumption. In that view, whatever the homunculus is made of, it ain’t made of something biological. But there is no homunculus, and no free will.

This is the only conclusion I can reach. Still, it’s hard to believe it, to feel it. I’m willing to admit that I have acted egregiously at times as a result of that limitation. My wife and I get together for brunch with a friend who serves fruit salad. We exclaim, “Wow, the pineapple is delicious!” “They’re out of season,” our host smugly responds, “but I lucked out and was able to find a couple of good ones.” And in response to this, my wife’s face and mine communicate awestruck worship—you really know how to pick fruit, you are a better person than we are. We are praising the host for this display of free will, for the choice made at the split in the road that is Pineapple Choosing. But we’re wrong. Genes have something to do with our host’s olfactory receptors, which help him to detect ripeness. Maybe he comes from a people whose deep and ancient cultural values include learning how to feel up a pineapple to tell whether or not it’s good. The sheer luck of the socioeconomic trajectory of our host’s life has provided him with the resources to prowl an overpriced organic market that plays Peruvian folk Muzak.

It’s hard to feel as though there’s no free will—to not fall for the falsehood that whereas there’s a biological substrate of potentials and constraints, there’s a homunculus-like separation in what the person has done with that substrate. (“Well, it’s not her fault if nature has given her a face that isn’t the loveliest, but after all, whose brain is it that chose to get that hideous nose ring?”)

This issue transcends mere talk of nose rings and pineapples. As a father, I am immersed in the community of neurotic parents frantically trying to point our children in the direction of the most perfect adulthoods imaginable. When considering our kids’ schooling, we cite a body of wonderful research by Carol Dweck, a colleague of mine. To wildly summarize and simplify it: Take a child who has just done something laudable academically and, indeed, laud her—“Wow, that’s great, you must be so smart!” Alternatively, in the same circumstance, praise her instead with “Wow, that’s great, you must have worked so hard!” The latter is a better route for improving academic performance: Don’t praise the child’s intrinsic intellectual gifts, praise the effort and discipline she chose to put into the task.

Well, what’s wrong with that? Nothing, if the research simply produces a value-free prescription. But it is wrong if you are patting the homunculus on the head, concluding that a child who has achieved something through effort is a better, more praiseworthy producer than a child running on plain raw smarts. That’s because free will falls by the wayside even when considering self-discipline, executive function, emotional regulation, and gratification postponement. For example, damage to the frontal cortex, the brain region most intimately involved in those functions, produces someone who knows the difference between right and wrong yet still can’t control his behavior, even his murderous behavior. Different versions of a subtype of dopamine receptor influence how risk-taking and sensation-seeking a person is. Someone infected with the protozoan parasite Toxoplasma gondii is likely to become subtly more impulsive. There’s a class of stress hormones that can atrophy neurons in the frontal cortex; by early elementary school, a child raised amid the duress of poverty tends to lag behind in the frontal cortex’s maturation.

Maybe we can come to fully realize that when we say, “What beautiful cheekbones you have,” we’re congratulating the person based on the unstated belief that she chose the shape of her zygomatic arches. It’s not that big a problem if we can’t achieve that mindset. But it is a big problem if, when addressing, say, a six-year-old whose frontocortical development has been hammered by early life stress, we mistake his crummy impulse control for lack of some moral virtue. The same goes for any other realm of the foibles and failures, even the monstrosities, of human behavior. This is extremely relevant for the criminal justice system. And to anyone who says that it’s dehumanizing to claim that criminal behavior is the end product of a broken biological machine, the answer must be that it’s a hell of a lot better than damning the behavior as the end product of a rotten soul. Likewise, it’s inadvisable to think in terms of praise, good character, or good choice when looking at the end products of lucky, salutary biology.

But it’s difficult to believe there’s no free will when so many of the threads of causality are not yet known—or are as intellectually inaccessible as having to weigh the behavioral consequences of everything from the selective pressures of hominid evolution to what someone had for breakfast. This difficulty is something we should all worry about.