11

Protect and Defend:

Behavior and Disease

After the experience of the last few years, it would be hard to overestimate the impact of infectious disease. The COVID-19 pandemic changed virtually everything about our lives, from how we worked and played to the overuse of words like “unprecedented” and the phrase “we are all in this together.” Plagues have always been with us, of course, and the most recent pandemic will not be the last. Humans have been dealing with disease since before we became modern Homo sapiens, and we have many tools to protect ourselves. Our immune systems evolved to fight disease, and the triumph of vaccine development is nothing short of breathtaking. But for millennia, the first thing we have done to combat illness is to change our behavior. Indeed, for the first months of even this most recent pandemic, humble changes in behavior were all we had: washing our hands, keeping our distance, putting on a mask.

Although animals do not use hand sanitizer or wear masks, they too are subject to diseases, and those diseases can alter their behavior in two major ways, one that helps the infected individual and one that harms it. The helpful kind of behavior change is similar to what humans do when they are sick: animals fumigate their homes, perform triage on wounded community members, and eat special foods to help them heal. The harmful kind is instigated not by the sick individual, but by the organism that has infected it. In this chapter I discuss both, and how behavior can help animals and humans cope with disease.

First, a few words about terminology. I am confining myself to infectious diseases, which include more serious threats like COVID-19 and bubonic plague as well as relatively minor ones like the common cold and ringworm. Infectious diseases are those that are caused by other living things, such as bacteria. That distinguishes them from illnesses like hypertension or diabetes, in which the body has a defect that was not caused by another creature. The distinction is important from an evolutionary perspective because infectious diseases are a threat in a way that the others are not: they can evolve back at you. Diabetes doesn’t evolve to circumvent insulin injections, but the bacteria that cause plague or tuberculosis will evolve to become more or less deadly depending on the circumstances. Any microbe that happens to have the ability to resist a particular defense will have an advantage and will out-reproduce its less resistant kin. That in turn imposes selection on the host to evolve an even more effective response. This is, of course, natural selection at its finest. It is also why we are struggling with antibiotic resistance. The back-and-forth has been described as an evolutionary arms race, in which neither party will ever be able to declare victory. Because selection for behaviors that counter the pathogen’s effects is ongoing, and new defenses are continually required, anti-disease behaviors are one of the best places to look for the way that behavior evolves.

Another point about terminology is really a way to broaden our perspective on the causes of disease. From the point of view of an animal or person that gets infected, it doesn’t much matter whether the infection is caused by viruses, bacteria, or tapeworms. Although the latter are usually considered parasites, rather than a disease, the distinction does not mean much. The treatment differs, of course, between viruses and tapeworms, but it also differs among various kinds of microbial infections. Worms can be welcomed into the fold, so to speak, along with their smaller disease-causing cousins.

Whose Side Are You On, Anyway?

As I noted at the start of this chapter, behavior that evolves in response to disease can actually harm the infected individual. Being sick is generally bad for the one infected and good for the organism doing the infecting, of course, so when a diseased animal does something that makes its condition worse, an explanation is needed.

That explanation lies in what is termed parasite manipulation of host behavior. Ants, for example, when infected by a particular fungus, climb to the top of a blade of grass or a leaf and clamp onto it. Although uninfected ants would never dream of such a mountaineering feat, scaling the grass blade fosters the growth of the fungus and allows it to release its spores in the best possible place. Fish infected with worms will swim close to the surface of the water with their shiny white bellies exposed, making them easy prey for birds, the final host for the worm. And a lovely insect called a jewel wasp injects venom into a cockroach, rendering her prey docile enough to lead by its antenna to her nest, where she lays an egg on it. The young wasp then has a catatonic and perfectly preserved food source to munch on while it grows. In this case, the wasp is more akin to the flies that parasitize my Hawaiian crickets than to a disease, but the manipulation is just the same.

Perhaps the best-known example of parasite manipulation is when mice lose their fear of cats. This happens when the mice are infected with a one-celled parasite called Toxoplasma. The parasite occurs in many kinds of mammals all over the world, including humans, but it reproduces only in members of the cat family. In people, toxoplasmosis, the disease that results from an infection, can cause miscarriages, which is why pregnant women are urged to avoid cleaning the litter box.

Cats acquire Toxoplasma by eating infected mice or other prey. As you might imagine, anything the parasite can do to increase the likelihood of its rodent host being eaten by a cat would evolve under natural selection. How to make a mouse more vulnerable to being eaten by a cat? Simple: make the mouse less afraid of cats. And indeed, that is exactly what the parasite seems to do. Experiments dating back to the 1990s show1 that Toxoplasma-infected mice and other rodents are actually attracted to areas scented with cat urine, whereas their uninfected counterparts retreat from the odor. The fearlessness would, of course, make the infected mice more vulnerable; hence, the parasite manipulates its rodent host and thereby increases its chances of getting into the final host, where it can reproduce.

The idea that parasites manipulate host behavior to achieve their own ends has become widely accepted, and stories about such behavior often appear in the media under headlines containing words like “zombie,” “hijack,” and “fatal attraction.” As with many such stories, however, the truth is a bit more complicated.

Two recent studies have questioned the conventional wisdom about the way that manipulation happens. The first was a study2 examining the brains, genes, and behavior of mice that were infected with Toxoplasma. As in previous work, the experiment allowed mice to go into areas scented with cat odors—in this case bobcat—as well as areas scented with odors of other animals, including foxes and guinea pigs. Infected mice did explore the cat-scented places more than the uninfected mice did, but, somewhat to the researchers’ surprise, the infected mice also went into the other areas more often than their healthier counterparts, even though guinea pigs are not going to get Toxoplasma from a mouse under any circumstances. The investigators found that infected mice had generally lower anxiety levels and were more exploratory overall, which they said explained the altered behavior without any cat-specific manipulation on the parasite’s part. Coverage of the work suggested that the old story had been debunked.

I am not so sure. After all, infected mice are still more likely to be eaten. Just because they might waste their time getting up close and personal with a guinea pig, that doesn’t mean their decreased anxiety levels won’t get them eaten by a cat at some point. Or, as parasitologist Laura Knoll at the University of Wisconsin-Madison put it in a commentary3 on the work, Toxoplasma “clearly manipulates the crap out of the host.” A generalized loss of anxiety due to changes in the brain is a much more plausible mechanism than one that requires the parasite to change the mouse in such a way that it is less fearful of a cat, but only a cat. Once again, complex behaviors can evolve using relatively simple mechanisms, and they do not require any diabolical intent on the part of the one-celled Toxoplasma.

That diabolical intent, along with all the sensationalized language about zombies and mind control, was the subject of the other recent critique4 of parasite manipulation. Jeff Doherty, a scientist at the University of Otago in New Zealand, thinks that scientists like me as well as the general public are too quick to embrace what he terms “inherently vague and misleading” metaphors about parasites. Whether it is due to the influence of sci-fi and horror films like Alien or simply out of a desire for a good story, he says we are succumbing to quick and easy explanations that are often inaccurate.

Is he just being a spoilsport, even a killjoy, in an age when people may not always want to read about science without a bit of added punch? Maybe. Many of my colleagues think it’s fine to personify animals, plants, and even one-celled organisms as a way of encouraging the public to embrace nature and find otherwise peculiar or—let’s face it—dull creatures interesting. I shouldn’t be casting the first stone here, either, having done my share of using human terms to describe animals. At the same time, such personification can lead to misconceptions about the way behavior evolves. If we emphasize the idea of a puppet master when we talk about a single-celled parasite, it’s hard not to visualize an animal behaving the exact same way a person would.

It seems to me that it’s even more marvelous to think that such behavioral manipulation can happen without intent. Indeed, that was demonstrated in a recent study5 of the ants that crawl up onto a leaf and die when infected with a fungus. A group of researchers from Pennsylvania State University used highly sophisticated imaging techniques and machine-learning methods to visualize the fungus inside the body of an infected ant. Somewhat to their surprise, the fungal cells never even enter the ant’s brain. Instead, they surround its muscles and manufacture compounds that directly influence the ant’s movements. No mind control is necessary if the mind, and the brain, aren’t involved. And it means we don’t need to wonder how the fungus manages to act like the world’s best air traffic controller.

Social Undistancing

We humans think that we are above such manipulation, and probably assume that in any battle of wills between ourselves and a parasite, outside of Hollywood movies, we would emerge triumphant. But the fact that these parasites don’t need any special mental abilities to alter the behavior of an ant or cockroach, and that the hosts’ higher cognitive powers aren’t involved either, means that manipulating humans is perfectly within the realm of possibility. As with all behaviors, individuals who act in a certain way just need to be better at surviving and reproducing than the ones who do not act that way, and they need some way of passing on their abilities.

So, remembering the example of a fungus that does better by making its ant host climb up a grass stem, what human behavior might benefit a pathogen that infects humans? Pathogens that are spread by contact between infected individuals do better by increasing the likelihood of that contact, and in highly social creatures like humans, making humans even more social should do the trick. You cannot, of course, measure the predilection for party-going in people after injecting them with a disease, but biologists from Colorado State University and Binghamton University hit on a clever way to test the idea6 by taking advantage of the next best thing: vaccinations.

Vaccines stimulate the immune system, which at least on a short time scale imitates the effect of a real disease. So the scientists asked whether getting the flu vaccine, as many people do when the season is upon us, would alter the social behavior of those who received it.7 They noted that if such a response existed, it could favor either the virus—because sick people would spread a disease more if they contacted more potential hosts—or the infected individual, perhaps because they would solicit more care from their companions. They recruited people getting flu shots at a campus clinic, telling them they were participating in a study about illness and social behavior but giving no particulars. At three points in time (before the shot, forty-eight hours afterward, and four weeks later), the subjects were asked to reconstruct their activities during the previous two days. Even after controlling for variables like the day of the week the shot was administered, the occurrence of holidays, other special events, and more, the results were striking: in the forty-eight hours after their immunization, people were much more social than they had been beforehand, although none of the subjects reported a sense that their behavior had changed. The boost disappeared at the four-week measure.

Whether this was the doing of the virus or the host can’t be disentangled from the study, but it certainly suggests that we can consider ways that the evolution of behavior affects how we treat disease and what we can expect from various public health interventions. The paper was published before the COVID-19 pandemic, and it is tempting to wonder whether infection itself makes social distancing psychologically even harder. Along these lines, a 2020 paper8 by scientist Patricia Lopes musing on the effects of the pandemic noted, “By adopting social distancing as part of our battle against a novel infectious disease, we are fighting against some of what it means to be human: to live socially.”

That viruses can evolve and potentially change their hosts’ behavior, even when those hosts are people, shouldn’t come as a surprise. Since virtually all animals have anti-disease behavior, a virus benefits by counteracting those behaviors and making its host behave in ways that spread the virus. In early 2021, as Europe was experiencing a resurgence of COVID-19, Mayor Francesco Vassallo of Bollate, Italy, a town near Milan, lamented,9 “This is the demonstration that the virus has a sort of intelligence, even if it is a single-cell organism. We can put up all the barriers in the world and imagine that they work, but in the end, it adapts and penetrates them.”

We all sympathize with his despair. The virus doesn’t need intelligence to overcome at least some of our barriers. Still, we shouldn’t be overly pessimistic. The truth is more hopeful; it may not be upbeat exactly, but it’s not the doom and gloom Vassallo predicts. A virus can’t understand evolution. But the good news is that we can.

A Bitter Pith to Swallow

Parasites and disease do not, of course, always have the upper hand. We do many things to either stave off or treat disease, and some of those things involve changes in behavior. Even early humans used medicine and treated injuries such as fractures, so it seems plausible that our closest relatives, at least, might do the same. And indeed, in the 1980s and 1990s, several researchers, including Richard Wrangham, noted some peculiar behaviors in the chimpanzees they were studying in Africa.10 Chimpanzees eat a variety of plant parts, including fruits and leaves, but some individuals were selecting young shoots of a plant called Vernonia amygdalina, which grows in many parts of tropical Africa, and stripping the stems of their bark. The chimpanzees then chewed on the bitter pith and juice, consuming sections anywhere from two inches long to a foot or more. The individuals that did this often seemed to be sick, exhibiting diarrhea, weight loss, and a lack of energy. Wrangham and others, most notably anthropologist Michael Huffman, examined the parasitic worm eggs in the feces of the animals and found that use of the plant was associated with a drop in the number of eggs. Interestingly, the chimpanzees avoid the leaves and bark of the plant, which contain extremely high levels of the chemical thought to act as an anti-worming medication, levels that would be toxic if consumed in any quantity. Chewing bitter pith has now been seen on many occasions, with young chimps occasionally tasting the plant material their mothers had chewed.

A similar behavior with a similarly descriptive name, leaf-swallowing, has also been seen in chimpanzees.11 Here, the animals take a leaf from an Aspilia plant, fold it up, and swallow it whole, without chewing, so that the entire undigested leaf is found in the chimp’s droppings. An individual chimpanzee can swallow up to fifty-six of the leaves at a time. Again, the leaves are not eaten as food, and they are swallowed by animals that appear to have gastrointestinal symptoms associated with worms. Aspilia leaves are quite hairy, and scientists believe that the hairs help to scrape the worms from the gut, allowing them to be expelled.

More recently, similar leaf-swallowing behavior was observed in Asian gibbons,12 the so-called lesser apes, and both leaf-swallowing and bitter-pith chewing have been seen in bonobos and gorillas as well. Other apes, including orangutans,13 as well as monkeys in South America, have been seen rubbing fruit or chewed leaves on themselves, which is interpreted as a way to rid their bodies of external parasites like ticks or fleas, or perhaps to repel mosquitoes (cats may perform a similar activity, which I’ll discuss later).

How do the apes know which plants to choose, or when to administer them? Huffman and others see these behaviors as a kind of medical backup plan that evolved for times when physiological means of avoiding disease, such as the immune system, are insufficient, and as something that requires a distinct ability and probably a sophisticated level of cognition, given its occurrence in apes. From that point of view, self-medication in animals becomes a singular, even extraordinary, phenomenon. But it seems to me that separating such behavior from physiology, and assuming it requires some extra-special powers, causes the same problem I’ve noted all along: behavior and physical characteristics both evolve in the same way, and evolution has no reason to use one set of mechanisms first and then try another. Animals have many ways of dealing with infection: some operate at the cellular level, some involve tissues and organs, and yet others involve behavior. This does not mean the behavior isn’t fascinating, or that investigating its origin isn’t worthwhile. But it might mean that it’s worth taking a step back to see where else in the animal kingdom such self-medication occurs.

Food, Medicine, Fumigation, and Goats

Since the discovery of leaf-swallowing and bitter-pith chewing, many other species, a good number of them non-primates, have been found to use substances that help kill or control parasites. They do so mostly by changing their diets. This has required something of a rethinking of what qualifies as self-medication, and a broadening of its criteria, since the behavior appears to be deep-seated in animal evolution. If only chimpanzees and people self-medicated, it would suggest that such behavior is highly specialized and sophisticated, possibly requiring advanced cognitive ability. But if, as it turns out, the behavior is widespread and arose many times in evolution, then we have to look more deeply into its origin.

That broader perspective means we need to define self-medication more precisely. Scientists have more or less agreed on four criteria.14 First, the animal has to ingest or apply something it ordinarily wouldn’t, whether that is more of a usually shunned food item or, as with the chimpanzees, a novel plant. Second, the behavior has to be prompted by the animal becoming ill, something that can be extremely hard to demonstrate without doing a careful experiment, and which means that some instances of supposed self-medication by elephants, tigers, and dogs are merely anecdotal for the present. Third, the behavior has to benefit either the animal itself or its kin. Finally, self-medication has to have some cost; perhaps it involves eating a substance that is also toxic, even though it cures the animal of a disease. If this last criterion is not met, then one would expect all individuals, sick or not, to consume or apply the medicine, since nothing would be lost by doing so. The use of medicine by animals, as in humans, involves balancing the benefit of treatment against the risk of the medication itself.

Instead of selecting a plant that is not ordinarily considered edible, some animals will eat more of one plant species than another, or will change their environment to make it less hospitable to disease. Some of the best evidence for self-medication in animals other than primates15 comes from what might seem like an unlikely source: goats and their fellow cud-chewing relatives, including sheep. Goats, of course, are known for supposedly eating anything, from tin cans to laundry off the line, but they turn out to be remarkably sensitive foragers. If free-ranging goats are infected with roundworms, they will increase their consumption of a shrub containing a chemical that fights the worms. Interestingly, only some breeds of goats will refine their diets in this way, which suggests an interaction between the genes of a particular breed and its likelihood of becoming parasitized. Goats are particularly sensitive to tannins in their feed. Tannins are a group of plant chemicals that give tea, red wine, and a host of other foods their astringent feel, and they are used, as the name suggests, in tanning leather. Their bitter taste helps the plant by deterring herbivores, at least some of the time, but tannins can also help rid an herbivore’s body of worms (though this is not a suggestion to use tea, much less wine, as a human anti-worming drug).

It is much easier to do controlled experiments on sheep and goats than on apes, and a number of studies16 have shown that the animals adjust their diets depending on the nutritional content of the food as well as on their health. Lambs that had been infected with intestinal worms, for instance, ate more of a food containing tannins than their unparasitized counterparts did, but the difference in diets faded after the first twelve days of the study, which corresponded to the time when the worm levels decreased. Ruminants also seem able to detect which foods are better for them; if sheep are given an internal infusion of a highly nutritious liquid close to the time they are offered a low-nutrient feed, they will eat more of the food they would otherwise have shunned. Whether this means that animals left to their own devices learn to eat medicinal foods because they associate them with feeling better afterward is unclear, but that would be a potential pathway for the acquisition of self-medication. One possibility is that animals that are sick become more open to eating new foods in general, something that again has been demonstrated in an experiment with lambs.

Food, in other words, is their medicine, in keeping with an adage thought to have arisen with Hippocrates. Rather than consuming a new plant, animals simply eat a new combination of foods. The ruminants also illustrate the maxim that the dose makes the poison. In virtually all cases, the animals do not switch their diets completely over to the tannin-containing plants. Instead, they just recalibrate how much of a given plant they consume. Sheep will even eat polyethylene glycol, a chemical used in many human medications, when they are also consuming foods with a high tannin content—the polyethylene glycol neutralizes some of the less desirable effects of the tannins.17 It isn’t always clear how self-medication develops in an individual. Mother sheep seem to teach their lambs which plants are best to eat, but the experiments that would be necessary to demonstrate the exact contributions of genetic predilections and early life experience to the behavior have not been done.

Though the animals show a remarkable kind of “nutritional wisdom,” as some of the scientists studying them put it, these results do not mean that we could all cure our own diseases if left to choose the right foods, or that tannins should become the new superfood, even for sheep and goats. It does suggest, however, that by allowing grazing animals to consume a wider variety of plants, we might be able to reduce the number of drugs they are now given to control parasites and diseases, drugs that can contribute to antibiotic resistance and pollution.

And speaking of pollution, or at least of human-produced substances that seem to litter our environment, an unusual form of preventing disease has been discovered in urban birds. In addition to being snug places to raise the chicks, many birds’ nests are plagued by lice, fleas, mites, and other parasites. These pests are more than a nuisance, since they suck blood from the vulnerable featherless young birds and can lead to slower growth or even death. Such pests are a particular problem in species that use their nests over and over, either within a season or even from one year to the next.

The birds can’t physically remove the intruders, but some species do the next best thing: they place aromatic leaves among the twigs, grass, and other materials used to build the nest. The plants act as a natural fumigant, reducing the number of fleas and other external parasites. In an interesting twist18 to the story, Constantino Macías Garcia at the National Autonomous University of Mexico and his colleagues have documented House Finches, common birds that have extended their range to most parts of North America, weaving fibers from cigarette butts into their nests. The butts contain nicotine, which is often used as an insecticide, and the scientists wanted to see just how the birds used the novel material and whether it was effective. They selected thirty-two nests and, a day after the eggs hatched, took out the lining the parents had already added, replacing it with felt to ensure that any parasites already present were gone. Then they added live ticks to ten of the nests, dead ticks to another ten, and left the remaining nests parasite-free.

The parent finches (whom one can imagine questioning their own sanity: “I could swear this isn’t the way the nest looked before”) were much more likely to add fibers from cigarettes to the nest if it had ticks, and they added much more of the nicotine-containing material to the nests with live ticks than to those with dead ones. The amount of fibers added was also greater if the nest had originally contained more fibers, suggesting that the finches can gauge the parasite levels inside their own nests. The use of tobacco, however, carries a cost, even for birds; both the nestlings and the parents from the nicotine-enriched nests showed signs of DNA damage that was not seen in the unexposed birds.

Note that what looks like anti-parasite behavior may not be. Most of us are familiar with cats responding to catnip by licking and chewing it and rubbing their faces in the leaves. Some cats seem to enjoy the substance, while others are indifferent to it. A group of Japanese researchers wondered whether the behavior was recreational or had some other function,19 and noted that catnip, along with its relative silver vine, can also act as a mosquito repellent. The scientists suggested that the behavior evolved to keep cats from being bitten, since bites can transmit disease and the distraction of buzzing mosquitoes might make hunting less effective. The media loved the idea of cats using catnip as a medication, and articles on how to adapt this for human use suddenly sprang up online.

I am not completely ruling out cats using catnip to deter biting insects, but I remain a bit skeptical. No one has demonstrated that cats in the wild use the plants successfully, nor that mosquitoes are a major problem for cats. Unlike dogs, domestic cats do not get mosquito-borne diseases in large enough numbers for concern, and the presence of a substantial fraction of nonresponders (seven out of twenty-five of the cats in the study20 were unaffected by catnip or silver vine) also argues against it being important for cats. Maybe we just have to accept that cats enjoy a nice recreational substance now and again.

The Not So Very Hungry Caterpillar

Neither ruminants nor songbirds win any awards for their reasoning skills, which calls into question the notion, assumed by some of those studying the practice in apes, that warding off illness requires a high level of cognition. And as with many other complex behaviors, some of the best examples of self-medication come from animals with very tiny brains, namely insects. It would be easy to say either that insects have instinctive self-medication behavior while primates and other vertebrates must learn to treat their ailments, or that both have some kind of innate wisdom about what’s good for their bodies, but that would be wrong on all counts. Like all behaviors, including complex ones like treating one’s own diseases, self-medication in primates and insects alike requires input from both genes and environment.

Admittedly, watching a caterpillar determinedly munching its way through a pile of leaves takes some of the romance out of animal self-medication. As with the goats, however, changing their diet is one of the first ways that insects defend themselves against disease. The familiar monarch butterfly is well known for being toxic to birds that try to eat it; the butterflies get their noxious taste from chemicals in the milkweeds that the caterpillars feed on. Those chemicals, called cardenolides, can help control parasites that cause disease in the caterpillars, but they are also toxic to the caterpillars themselves. Different kinds of milkweed have different levels of cardenolides, and mother butterflies that are infected with a pathogen lay their eggs on milkweed with more cardenolides than healthy mothers do. In so doing they confer greater protection on their young. The defense against disease comes at a price, however, with lower survival of the toxin-feeding caterpillars. Presumably this is a price worth paying in circumstances where diseases are rampant. Similar consumption of plants with high levels of pathogen-deterring chemicals has been documented in woolly bear caterpillars as well as bumblebees, which will preferentially consume nectar that contains alkaloids, another set of chemicals that can fight infection.

It’s particularly interesting that the monarch butterflies cannot cure themselves of an infection, but can only help their offspring grow up more resistant to attack. Such medication of kin, rather than of oneself, has now been demonstrated in a number of insect species.21 Perhaps unsurprisingly, social insects such as ants and bees are excellent medicine providers for their colonies, with both kinds of insects collecting plant resins and using them to line their nests. Like a botanical version of the sprays and wipes we use in our homes to kill germs, resins have natural antimicrobial properties. The insects use them enthusiastically, with some ants gathering up to twenty kilograms (forty-four pounds) of the sticky stuff and stashing it in their nest.

Honeybees have long been known to use resins collected from a variety of plants to coat the inside of their hives. A study22 by my friend and colleague at the University of Minnesota, Marla Spivak, showed that bees infected with chalkbrood, a fungal disease, collected more resin than uninfected control bees, and that the resin protected the hive from further infection. The bees do not eat the resin, making it a somewhat different kind of medical treatment than the leaves consumed by the apes or the dietary changes in goats. Marla advocates23 for the term social-medication, as opposed to self-medication, since the practice is not limited to the user of the substance in question. She suggests that because the resins are sticky and difficult to handle, bringing them back to the nests is a costly behavior that nonetheless pays off. That stickiness might also have contributed to an inadvertent problem for modern beekeeping; Marla believes that because the resin makes opening manufactured hives difficult, beekeepers have selected for bees that do not use it as much, which in turn leaves the colonies vulnerable to disease.

My favorite recent example24 of medical treatment in the insect world comes from a rather obscure ant species in Africa called Megaponera analis that makes its living hunting termites and bringing them back to the ant colony for food. Hunting termites is dangerous business, at least for an ant, because the prey defend themselves with their formidable jaws, and an ant may get injured, losing one or more of her legs (as an aside, all the ants that go out on the raids, like all the ants you see at your picnic, are female workers). Individual ants are often considered to be the disposable fast-fashion items of the insect world, cheaply generated and quickly consumed, meaning there is little reason to repair or cherish them.

These ants, though, live in relatively small colonies of several hundred, compared to the hundreds of thousands in other species. This means an ant that is wounded on a raid is often still worth saving, and a lightly injured ant will take up a hunched position with her legs drawn up, which makes her easier to carry. Other ants then perform a kind of battlefield triage, determining which ants are worth saving and leaving behind those with fatal wounds, such as the loss of more than two legs. After taking the injured ant back to the colony, the other workers will groom and lick the area around the wound, removing the dirt and spreading natural antimicrobials from their salivary glands. Note that dirt removal is a rather specialized activity. Because ants, like all insects, have hard external skeletons that completely cover their bodies, dirt usually poses no threat to them; when the skeleton is punctured, however, disease-carrying bacteria become much more of a threat. The grooming reduced the likelihood of ants dying of their wounds by a whopping 80 percent. The four- and five-legged survivors quickly learn to cope with their disability. Interestingly, about a third of the ants on raiding parties have lost at least one leg. To me, this triage assessment followed by transport and finally treatment is far more sophisticated than a chimpanzee folding up a leaf when it has a stomachache.

Similar medical behavior, though without the field medics, has been observed in many other insect species, including flies, other kinds of ants and bees, and moths. How do these tiny creatures with their limited nervous systems perform such complicated acts? As with the other behaviors I’ve discussed in this book, there is no need to invoke a mysterious wisdom that comes with listening to the body or paying attention to nature’s pharmacy. Instead, it’s about applying those same rules of thumb that my crickets use when paying attention to how much song is present as they grow up.

When it comes to self- or social medication, behavior that responds to a particular cue in the environment, say the presence of a fungus in your nest if you are a bee, will evolve so long as the individual doing it survives and reproduces better than individuals that don’t. Anti-disease behavior is no different from other behaviors in this regard, and it evolves in stages, just like the spider-mimicking tail of the vipers I discussed in chapter 2. Imagine a primordial caterpillar that ate very slightly more of a food containing disease-fighting chemicals when it was infected, but not when it was healthy. Why would it do that? It doesn’t matter—maybe the leaf smelled different from the rest. What matters is that doing so resulted in marginally better survival, and that the predilection for eating the food was at least partly heritable. Remember that evolution can move in small increments over nearly unimaginable lengths of time. The caterpillar doesn’t need to know what it is doing any more than the snake has to determine exactly how to make that spidery tail appear.

Neanderthal Teeth and Spicy Food

If the use of medicine by animals is so widespread, when in our evolutionary history did humans adopt the practice? And could we have learned about treating our own illnesses from animals? Several of the researchers working on ape self-medication speculate that humans could well have observed sick animals recovering after eating particular plants, and then emulated them. I am a bit skeptical about this possibility, given the number of hard hours of observation that have been necessary to document the practice in the primates to begin with. It’s not likely that early humans had the leisure time to do that much natural history work, but it can’t be ruled out. Perhaps more intriguing is the observation25 that elephant keepers in Thailand use many plants to treat their charges, some of which are also used by elephants in the wild, and 55 percent of which appear in human medicine as well. Elephants have been part of human culture in Asia for at least four thousand years, and their behaviors may well have been carefully monitored.

Whether it arose from imitating other animals or not, the idea that early humans, or humanlike ancestors, were employing plants to heal their illnesses seems plausible. Even today, more than 80 percent of people in developing countries use plants for virtually all their medical requirements, and the natural pharmacopeia of plants with active compounds is extraordinarily large. Documenting ancestral use of medicine is difficult, of course, since plant material usually doesn’t fossilize well and other evidence is scanty.

An unusual source of information about our evolutionary ancestors’ and relatives’ use of medicine comes from a rather humble place: dental calculus. Also called tartar, it is the hardened stuff that accumulates on the teeth and, for modern humans, gets removed by the dental hygienist at our regular cleanings. Early hominins accumulated dental calculus too, and the hardened material traps anything that goes into the mouth. Archaeologists have analyzed26 DNA from the dental calculus of Neanderthals from several sites in Europe and found traces of chamomile (Matricaria chamomilla) and yarrow (Achillea millefolium), neither of which is a food plant and both of which may be used in herbal medicine to treat a variety of maladies. A penicillin-containing fungus was also found in the teeth of a Neanderthal, giving rise to the speculation that antibiotics might even have been employed in prehistoric times.

Early humans, as well as Neanderthals, may also have helped each other when they were ill or wounded. Skeletons showing signs of recovery from infections or severe injuries including broken bones have been recovered from sites in several parts of the world. Given that we are social creatures, it stands to reason that our ancestors would have helped each other when necessary. This does not mean, again, that we—or any other animal—have a mysterious inner instinct that shows us which plants are poisonous and which can heal us. It is tempting to think that we simply inherited the knowledge found in chimpanzees and other apes and then embellished upon it as our species evolved. Indeed, anthropologist William McGrew suggested27 that we can use chimpanzees as a blurry model of our earlier selves, saying, “Anything a chimpanzee can do today could also have been done by the Last Common Ancestor six to seven million years ago.” He admits that this is a simplifying assumption, but it also falls into the fallacy of the scala naturae: chimpanzees did not stop evolving when they and we split apart from that ancestor, and their self-medication behavior, as well as many other aspects of their lives, could certainly have changed dramatically over such a long stretch of time. We could both use medicine because of convergent evolution rather than a shared inheritance.

Have humans evolved to use food—not specific dishes, but their customary diets—as medicine? One of the most basic ways in which cultures around the world differ is in their cuisines. Scandinavian food is not the same as Italian, and both use different ingredients with different manners of preparation than are commonly found in Thailand or India. Perhaps most notably, cuisines differ in the amount and kind of spices that they use, and it’s common knowledge that tropical countries tend to have spicier food. In the late 1990s, behavioral biologist Paul Sherman28 wondered if this co-occurrence of hot dishes and hot climates might have arisen because the spices help prevent parasites and foodborne illnesses. In other words, cuisines evolved to be spicy where those spices did the most good. If so, then people who used spices where the risk of getting sick from food was high would be more likely to survive and reproduce than people who ate a blander diet. Again, it doesn’t matter why people think they use the spices, just that they do so.

It is certainly true that many spices have antibacterial properties, and meat is particularly likely to spoil in places where refrigeration is or historically was hard to come by. Sherman and his colleagues reasoned that meat dishes therefore should be more highly spiced than vegetable ones. They tabulated the use of spices in recipes for both meat- and vegetable-based dishes from thirty-two countries, and then looked for any relationship between climate and the spiciness of the cuisines. As they had predicted,29 hotter places used more spices, and were particularly likely to use more chilies, garlic, and onion, as well as anise, cinnamon, coriander, cumin, ginger, lemongrass, turmeric, basil, bay leaf, cardamom, celery, cloves, green peppers, mint, nutmeg, saffron, and oregano. Not all of these are spices by a strict definition, but the first ten items, all of which are spices or at least similar in their flavoring use, are especially good at inhibiting bacterial growth.

This seems like conclusive evidence for cultural evolution in humans, with societies gravitating toward foods that also help them avoid illnesses. But more modern work has found a problem with Sherman’s analyses, one that is only solved by recent methods for analyzing the kind of data Sherman used. Lindell Bromham, a professor at the Australian National University in Canberra, pointed out30 that the correlations among cultures make it difficult to draw conclusions just by looking at relationships between a given country, its climate, and its cuisine. It is in part a classic confusion of correlation and causation that can arise in evolutionary studies.

Before delving into the spices, consider a simpler, hypothetical example. Say you want to see whether birds that wake up before dawn are more likely to eat worms than birds that rise later in the day. You imagine that selection has acted on each of the species, favoring the predawn behavior because it makes hunting worms more effective. You survey ten bird species and find that, indeed, the three species that sleep late eat other foods, while the seven species that are up early mainly consume worms. Conclusion drawn.

The problem, however, is that you didn’t control for another variable that led to your results: the evolutionary relationships among your ten species. It turns out that of the seven worm-eating species, five shared a common ancestor relatively recently. They all inherited their predawn wakefulness from that ancestor, and so they really don’t provide five independent tests of your idea, but only one. You would need to find four more bird species that did not share a recent common ancestor to test the theory.

Similar issues arise in any large-scale comparison seeking to explain why traits occur together, and the spices-parasite link is no exception. As Bromham put it,31 “cuisines in South East Asia are all more similar to each other than any of them is to Scandinavian food, and they are also all in a much warmer climate. Treating all of those cuisines as separate data points will make it look like there is a link between climate and cooking, even if there isn’t.” The food in Thailand is like the food in Malaysia in part because the populations of both countries learned from each other as their cuisines were developing. Twenty years ago, when Sherman did his analysis, techniques for statistically controlling for this problem weren’t available, but Bromham and her colleagues were able to use more modern methods to reexamine recipes from around the world and see if they supported the earlier conclusions.

If you permit me the pun, spoiler alert: they did not. Bromham used a set of 33,750 recipes with 93 different spices from 70 cuisines on six continents. Any way you slice it, climate isn’t related to a cuisine’s spiciness. The risk of foodborne illness is correlated with spice use, as you would expect, but so are other diseases and conditions, including fatal car accidents, and people in hotter countries tend to have lower life expectancies overall. But this, as Bromham says, “doesn’t mean that spicy food shortens your life span (or makes you crash your car).”32 Other intervening and as-yet unexplained factors connect the two. Since the publication of her work, Bromham has received emails from all over the world proposing reasons why people have spicy diets, but so far the jury is still out. Many parts of a culture are associated with its food, including its wealth. The moral of the story is that one should be careful about extrapolating evolutionarily derived behaviors, whether the spiciness of food or the picking of a mate, from a comparison of human cultures.

The Subtler Side of Disease

The flashy effects of disease, like fungus-directed ants or fearless mice, get all the attention because of their novelty. But diseases exact a toll on many aspects of a human’s or an animal’s life, often subtly, but with extremely important results. To illustrate, let’s consider risk management behavior in gerbils. I realize that the phrase “risk management behavior” calls to mind studies of the way people use seatbelts or ride motorcycles, and is not usually associated with rodents. But a 2020 study33 of Allenby’s gerbil, a species that lives in the deserts in and around the Middle East, showed that the animals respond to a bacterial infection by changing their behavior in ways that could have far-reaching effects on their ecosystem.

The bacterium is a variety of Mycoplasma, a group of microorganisms that can cause a range of diseases, including a form of pneumonia. This variety is common among seed-eating gerbils and seems to cause deficiencies in some of their essential nutrients, though it does not outright kill the rodents. The scientists suggested34 that the gerbils may respond to infection by changing their behavior in two different ways. First, they might compensate for the deficiency by foraging more, which exposes them to predation by owls and other night-hunting animals. Alternatively, they could simply become lethargic, which has a number of consequences, some of which are not straightforward. Lethargic gerbils are less able to escape predators because they cannot run away as quickly. That in turn means that they should spend more time looking out for predators, to give themselves more lead time. But spending time looking over one’s shoulder, so to speak, takes away from time the animal can spend searching for seeds, further decreasing its nutritional health. Balancing the risk of being eaten against the need for food thus requires sophisticated risk management.

The researchers, who were based in Israel, the United States, and Brazil, set up large enclosures in the desert habitat of the gerbils and infected half of the animals with the bacterium. They placed seeds in trays that were either close to hiding places in bushes or out in the open, and released the tagged animals to observe their behavior. Finally, they allowed a Barn Owl into the enclosures to act as the predator, feeding the bird first to ensure it didn’t simply scarf up all the gerbils in a single go. It turned out that the sick gerbils spent more time foraging, which exposed them more to the owl. They also seemed to get fewer seeds per unit of time spent searching for food, perhaps because, like a human suffering from the flu, they were simply less able to concentrate on the search. Interestingly, a chronic, long-standing infection had worse effects on the gerbils than an acute, recent one. The upshot is that although the bacterium itself didn’t kill the animals, or even keep them from performing apparently normal behaviors, it had a big effect on which gerbils were successful and, in time, which ones reproduced.

It’s All in Your Head

Finally, from the You Can’t Make This Stuff Up files, we have a story about sea slugs and an extreme reaction to disease. Sea slugs are the rather more glamorous cousins of the shell-less mollusks you find in your garden. Often beautifully colored, they move sinuously through the water in oceans around the world. Two species, called sacoglossan sea slugs, were recently found35 to have an extraordinary ability: they can decapitate themselves, and then grow a completely new body, including the heart and digestive organs, from the head alone. The detached body does not return the favor by growing a new head; instead it moves around in presumed bewilderment for several days to months before it expires, a scene that surely should be incorporated into a horror film at the earliest opportunity. The heads can start eating algae, their usual food, within hours. The algae, which live inside the cells of the slug, can photosynthesize to provide the slug with energy, apparently enabling it to shortcut the usual need for nutrients. Numerous questions about this phenomenon remain; in a masterpiece of understatement, Sayaka Mitoh and Yoichi Yusa, the scientists who did the study, said, “The reason why the head can survive without the heart and other important organs is unclear.”

Why are the slugs relevant to this chapter about behavior and disease? Many animals can regenerate body parts, such as a lizard and its tail or a salamander and a limb. They do so either as a way to avoid predators (a hungry bird ends up with a thrashing tail in its mouth while the rest of the animal escapes) or to recover from injury. The slugs, however, seem to engage in their voluntary decapitation because of—you guessed it—parasites. Sea slugs get a parasite that lives in their bodies, but it does not occupy the head. The parasite makes it difficult, if not impossible, for the slugs to reproduce, and when the head grows a new body, the new body is parasite-free, allowing the slug to go on to a more fulfilling future. The behavior seems a tad on the extreme side, but it goes to show that fighting disease can get you into some unexpected situations. Rudyard Kipling wrote approvingly:

If you can keep your head when all about you

Are losing theirs and blaming it on you,

. . .

Yours is the Earth and everything that’s in it

For the slugs, at least, it’s not so much about keeping your head as it is about relinquishing your body, though probably that is not what Kipling had in mind.

It is fitting, I think, to conclude this book with the humble sea slug, an animal that few people have heard of and fewer still would find appealing. Yet the intricate nature of its response to infection illustrates both the complexity that exists in seemingly simple creatures and the futility of creating dichotomies among animals in terms of behavior or intelligence. Even a slug is capable of feats that humans cannot achieve. Those dichotomies are false whether we put humans into one category and other animals into another, or whether we allow apes, crows, and octopuses into the club and exclude everything else. The dichotomy is equally false if we try to classify our behavior as learned and theirs as instinctive. Instead, understanding how behavior evolves allows us to celebrate the entanglement of both.