Up the airy mountain
Down the rushy glen,
We daren’t go a-hunting,
For fear of little men ...
William Allingham, The Fairies, c. 1870
These days, of course, we still believe in invisible things that make us ill, but now we call them germs. There are other, almost invisible things that make us ill, invertebrates that feed on us and spread disease, and some that harm us with their stings, but germs do it without any equipment at all. The poisoning game is one even the tiniest can play on equal terms, so even if we are too clever to fear little men anymore, we should fear the tiniest organisms for their toxins.
We assume hygiene to be a recent invention, but consider this description of food poisoning, taken from the Prologue to the “Cook’s Tale” in Chaucer’s Canterbury Tales. It is given here once in the original, and once in a later version that largely retains the feel of Chaucer’s language while making more sense for modern readers. The cook asks if the other pilgrims will give him a hearing:
Oure Hoost answerde and seide, ‘I graunte it thee
Now telle on, Roger, looke that it be good;
For many a pastee hastow laten blood,
And many a Jakke of Dovere hastow soold,
That hath been twies hoot and twies coold
Of many a pilgrym hastow Cristes curs,
For of thy percely yet they fare the wors,
That they han eten with thy stubbel goos;
For in thy shoppe is many a flye loos.’
Our Host answer’d and said; ‘I grant it thee
Roger, tell on; and look that it be good,
For many a pasty hast thou letten blood,
And many a Jack of Dover hast thou sold,
That had been twice hot and twice cold
Of many a pilgrim hast thou Christe’s curse,
For of thy parsley yet they fare the worse,
That they have eaten in thy stubble goose:
For in thy shop doth many a fly go loose.’
Geoffrey Chaucer, Canterbury Tales, c. 1387
Two things stand out in the Host’s comments: the risk of reheating cold food that may already be tainted, and the part flies play in spreading illness of some unspecified kind. The cook, says the Host, was in the habit of serving reheated Jack of Dover (taken to be warmed-up pie) and of “letting the blood” (draining the gravy) from pasties; his parsley made the pilgrims ill, and flies were loose in his shop. Duncan Gow’s children made him a “parsley sandwich” that was in fact hemlock, but that is probably not Chaucer’s meaning here.
The world of science and medicine has two opposing views of disease, the environmental and the microbial. Malaria, for example, was first thought to be caused by the miasma of swamps and later by tiny life forms, injected into the bloodstream by mosquitoes. The evidence for swamp miasmas was that draining the swamps reduced the incidence of malaria, but we now recognize that draining the swamps reduced the number of mosquitoes. Long before the emergence of the germ theory in the 1860s—at least as far back as the eighteenth century and maybe even longer—a few people had at least an inkling that there were tiny things causing some sorts of illness, maybe not Allingham’s “little men” but something too small to be seen by the naked eye:
[I cannot believe that people] . . . talk of infection being carried on by the air only, by carrying with it vast numbers of insects and invisible creatures, who enter into the body with the breath, or even at the pores with the air, and there generate or emit most acute poisons, or poisonous ovae or eggs, which mingle themselves with the blood, and so infect the body.
Daniel Defoe, Journal of the Plague Year, 1722
Defoe is best known to us as the author of Robinson Crusoe. Here he was probably relying on notes kept by his uncle, Henry Foe, who actually lived through the London epidemic, since Defoe had to have a source for his vivid descriptions of life in the time of plague. So in the 1660s or, at the very latest, around 1720, somebody must have been gossiping in pub or coffee shop about microbes, sufficiently often for one of the Defoe family to pick up on it.
The people who might be tempted to try Botox are the same people who will spread their kitchens and bathrooms with hideous poisons, designed to kill every germ, because germs are seen as an even more deadly form of poison, while disinfectants are good, and harmless. The ideas of “poison” and “germ” are intertwined in history, and certainly in this tale, first published by Oliver Wendell Holmes in the 1840s:
And if this were compared with the effects of a very minute dose of morphia on the whole system, or the sudden and fatal impression of a single drop of prussic acid, or, with what comes still nearer, the poisonous influence of an atmosphere impregnated with invisible malaria, we should find in each of these examples an evidence of the degree to which nature, in some few instances, concentrates powerful qualities in minute or subtile forms of matter.
Oliver Wendell Holmes, Medical Essays, 1842
In the same vein, Charles Darwin (who may or may not have treated himself with arsenic when he was young) wrote in The Voyage of the Beagle that “at this period the air appears to become quite poisonous; both natives and foreigners often being affected with violent fevers.” On the question of Darwin’s treatment, some people claim that there is no proof of it, but he was certainly treated with calomel, and there was this letter written in 1831:
Ask my father if he thinks there would be any objection to my taking arsenic for a little time, as my hands are not quite well, and I have always observed that if I once get them well, and change my manner of living about the same time, they will generally remain well. What is the dose?
Charles Darwin, letter to Miss Susan Darwin, September 6, 1831
When we look at the large number of bacteria that make us ill or kill us by producing assorted toxins—in diseases ranging from diphtheria and whooping cough to food poisoning, cholera, tetanus, and more—it is quite legitimate to think and speak of the bacteria as poisons, at least indirectly. In a few cases, the toxins they generate are quite capable of causing the symptoms of the disease even in the absence of the bacteria, so it is no wonder that people who were so inclined were willing to hang on to the poison model a little longer. Old ideas, old paradigms, always take a while to die out completely.
In this case, the decisive period was, roughly, the decade of the 1860s. The medical world was beginning to take the germ theory seriously, and to look ever more closely at the chemical poisons that might be used against those other poisons, now revealed as microbes, to make people healthy again. The process of change was still incomplete in 1875, when Alfred Taylor made it clear that, for him, rabies was still a poison, not a microbe: “arsenic is a poison whether it enters the blood through the lungs, the skin, or the stomach and bowels: but such poisons as those of the viper, of rabies, and of glanders, appear to greatly affect the body only through a wound in the skin.” (The horse—and, occasionally, human—disease glanders is caused by a bacterium now known as Burkholderia mallei, once classified as a Pseudomonas, but in 1875 this bacterium had yet to be seen.) In time, new culturing methods, oil immersion lenses using toxic oil of savin, and new artificial dyes that stained bacteria by binding tenaciously to them would change all that. Later still, these dyes would be used to attach poison molecules to the bacteria, but that is another story.
Perhaps the best recorded example of the way thinking developed over the decade can be seen in the reports of the Royal Commission that investigated the outbreak of “cattle plague” that began in the summer of 1865—these days, we would call it rinderpest. It seemed a novel event, because the last outbreak of rinderpest in Britain had been over a century earlier.
The 1865 plague showed up first at the Islington cattle market in June, but within a month it had reached East Anglia, Shropshire, and Scotland. Transport has always been a major factor in the spread of epidemics: road transport in Africa carried HIV away from its starting point; the bubonic plague outbreak on the Pacific rim around 1900 owed its spread to steam trains in China and to steamships that carried it to places like the east coast of Australia and the west coast of America; and, in more recent times, we have seen SARS and West Nile disease spreading with help from jet aircraft. In 1865, a combination of free trade and railway cattle trucks allowed the cattle plague to spread far and fast, especially as nobody knew how it was spreading.
By September, 1,702 farms and 13,000 cattle were affected. By January 1866, 120,000 cattle were affected, and the situation was getting worse. The authorities imposed a weak ban on moving cattle, but there was no effective enforcement of the ban, so cattle were still moved and the disease still spread. People might have been gaining a clearer understanding of infection, but its causes were still a mystery. The London Times, whose editorials were more influential than any broadsheet or its editor today could dream of, was opposed to the notion of contagion, which meant people tended to favor miasmatic theories of plague poisons spreading far and wide.
That would all change, of course, when Robert Lowe, a member of the Royal Commission, became an editorial writer at the Times in October 1865. The tide of opinion began to turn, but by then experience, too, had forced a greater acceptance of the contagious nature of the disease, and, in February 1866, when the commission’s second report came out, there was general agreement that only slaughter could stamp out the plague. In the end, half a million head of cattle died.
As science confronted the plague, there was a great deal to consider, and all the best minds were set to the problem—Lyon Playfair, William Crookes, J. B. Sanderson (after whom J. B. S. Haldane was later named), and a pupil of Justus Liebig named Angus Smith.
Despite the germ theory starting to gain some prominence, time and again we find references in the commission’s report to the poison causing the disease: telling phrases such as “the blood contains the poison of the disease, so that serum obtained from it will give the disease by inoculation,” and, “the poison contained in a minute portion of the mucous discharge . . . multiplies when it is injected into an animal, and so causes it to sicken in turn.” On the other hand, they also spoke of disinfection, albeit in a qualified way:
Disinfection, in the sense in which the word is used here, implies the destruction of an animal poison in whatever way it is accomplished. To find a perfect disinfectant for the Cattle Plague poison would be to stop the disease at once.
Third Report of the Commissioners into the Cattle Plague, May 1866
In 1854, Angus Smith and one Alexander McDougall patented McDougall’s Powder for use as a sewage deodorant. McDougall manufactured and sold the powder, which was mainly carbolic acid, a substance Taylor describes as “a crystalline product of the distillation of coal-tar. When pure, it melts at 102°F. It has a characteristic, and not unpleasant odour.” He adds that this poison could be as swift as prussic acid in its action, bringing death in as little as 20 minutes, and usually within four hours. Mostly used for suicide, he notes, it leaves white staining around the mouth and brown stains called eschars on the skin where the poison trickles from the mouth. In 1864 McDougall used it successfully to kill parasites of cattle on a sewage farm. When Joseph Lister heard of this, he came up with the idea of using the spray in antiseptic surgery, becoming the first of the modern Great Poisoners of medicine.
By 1867, an exultant Lister was able to write:
Since the antiseptic treatment has been brought into full operation, and wounds and abscesses no longer poison the atmosphere with putrid exhalations, my wards, though in other respects under precisely the same circumstances as before, have completely changed their character.
Joseph Lister, British Medical Journal, 1867
In January 1866, William Crookes suggested to Smith that they patent the use of carbolic and cresylic acid as a disinfectant. They probably would have been unsuccessful (in part because the mixture had already been described in Crookes’s own journal, Chemical News). In any event, Smith declined. He was, after all, one of the commissioners on the Cattle Plague Commission, and so might have been accused of having a conflict of interest. They agreed, though, on the efficacy of this approach, and they urged others to use disinfectants.
Crookes and Smith seem to have thought that the disinfectants worked by dissolving the poison rather than killing the microbes, but, whatever their thinking, Crookes knew that he was on the right trail at last. As early as December 1865, he had advised a friend, a Mrs. Carmichael of Thirsk, to use carbolic acid as a disinfectant. Not content with this, he went to her farm to make sure the procedures he had recommended were carried out. All 25 of Mrs. Carmichael’s cattle were saved, while the surrounding farms suffered heavy losses. Smith and Crookes set out their method:
Wash the woodwork of the (cow)sheds everywhere with boiling water, containing in each gallon a wineglassful of carbolic acid. Then lime-wash the walls and roofs of the shed with good, freshly-burnt lime, adding to each pailful of whitewash one pint of carbolic acid. Cleanse the floors thoroughly with hot water, and then sprinkle freely with undiluted carbolic acid. Lastly, close all the doors and openings, and burn sulphur in the shed, taking care that neither men nor animals remain in the shed while the burning is going on.
Angus Smith & William Crookes, Recommendations for Disinfection, 1866
Crookes’s ownership of Chemical News came in handy, as he was able to write up his procedures and results—and accord himself just a bit of self-praise—free of any unpleasant peer review requirements. He wrote a report that he serialized in the journal, saying the sheds were easily contaminated by a virus (a word that he used in the older sense of poison) like that of smallpox. He also carried out an experiment, of sorts, in which he collected air from sheds housing dying cattle and passed it through cotton wool. He treated one half of the cotton wool with carbolic acid vapors for half an hour, and then inserted the two pieces into two calves. The calf that received the untreated cotton wool died, the other lived—hardly a definitive result, but it was at least indicative. The official report of the Royal Commission carried the most weight, however:
A large number of substances which can be used in many other cases as disinfectants must be put aside. . . . Compounds of iron, zinc, lead, manganese, arsenic, sodium, lime, or charcoal powder, and many other substances, want the volatile disinfecting power; iodine, bromine, nitrous acid and some other bodies are too dear, or are entirely volatile, or are injurious to the cattle.
On full consideration, it appears that the choice must lie between chlorine, ozone, sulphur, and the tar acids (carbolic and cresylic). Two of these bodies, viz., chlorine, in the shape of chloride of lime, and the tar acids, have the great advantage of being both liquid and aeriform; they can be at once added to discharges, and constantly diffused in the air.
Third Report of the Commissioners into the Cattle Plague, May 1866
The report adds that there is evidence that chlorine, ozone, sulfurous acid, and the tar acids “all actually do destroy the Cattle Plague poison,” but, in the long term, the greatest benefits were seen when people began to think in terms of poisoning the microbes causing disease.
It would be a while, though, before living germs would be fully accepted as the main cause of disease. Indeed, even now, there are people who assert loudly that all disease is caused by toxins. They are generally followers of an excellent medical man, Sir (William) Arbuthnot Lane (1856–1943), who, in his earlier days, introduced the use of metal plates, rather than wires, to join fractures. He also pioneered a treatment for cleft palate, but in 1925 he went off the rails, at least from the viewpoint of mainstream medicine. All disease, he averred, was caused by toxins created in the bowel.
The result of this was that during the late 1920s, Lane’s toxin theory of disease led to surgical removal of large parts of the intestine to deal with vague complaints such as headaches, backaches, and depression. Colonic irrigation—a throwback to old-fashioned purging—first became fashionable among the wealthy and hypochondriacal. If all those bacteria in the intestine were producing toxins, then, without a home, they would be stymied. Off with the bowel, Lane cried!
Well, who can say? Attitudes change, and while there is no evidence for it, perhaps there is something in what Lane said. Once it would have been heresy to suggest that bacteria caused ulcers, which were obviously caused by some form of “poison” secreted in the intestines—now we know Helicobacter pylori causes them. Ideas change as we get more knowledge. Just recently, a group of researchers in Melbourne and London suggested that a possible cause of what they dub “the diabetes epidemic” may be a toxin produced by Streptomyces bacteria consumed in foods, such as potatoes, taken from the soil—making potatoes even more risky than their glycemic index and solanine levels might suggest—and the infection may extend to other root crops as well:
Streptomyces species are ubiquitously present in soil and some can infest tuberous vegetables such as potatoes and sugar beet. Hence dietary exposure to a Streptomyces toxin could possibly cause repetitive pancreatic islet-cell damage, and so be diabetogenic in humans genetically susceptible to autoimmune insulitis.
Zimmet, Alberti & Shaw, Nature, 2001
At this stage, this remains just a plausible and interesting hypothesis, one of many arising from the radical advances in both medicine and biology in the twentieth century. Several generations of biologists and medical researchers have been inspired by Paul de Kruif’s Microbe Hunters, first published in 1926, but even as they read it, they were entering an era where the true heroes of medical science would be the poison hunters.
Tasting the dorsum of frogs in the field has proved to be a sensitive test for the presence of pharmacologically active compounds in skin secretions. The presence of such compounds confers bitter, burning, or otherwise unpleasant taste. However, the possibility that it may confer posthumous authorship on papers such as this is a strong negative factor in its use.
Neuwirth, Daly, Myers & Tice, 1979
Not all searches after new poisons have been quite so risk-laden, not all applications of poison have been with malignant intent, and many of the most unpleasant poisons have been quite stealthy—and in many cases, microbial. We can live with that, though, largely because other parts of the biosphere will usually provide an antidote to those poisons.
Some medical workers now argue that one supposed deficiency disease, kwashiorkor, is, in fact, caused by a mycotoxin. Stomach ulcers are now blamed on a bacterium. Of course, the bacterial infection is treated with a poison, an antibiotic, but it remains to be seen how many other conditions are caused by toxins produced by tiny life forms.
Some forms of heart disease may be caused by viral infections of the cardiac muscle, and some cases of hardening of the arteries may be caused by plaque-forming bacteria, though there is probably no toxin involved in this case. Cancers may be caused by toxins released by viruses, but some cancers may be treated by sending in modified viruses to target and kill the cancerous cells.
Cells that go wrong and start multiplying out of control normally recognize that they are doing the wrong thing and destroy themselves. We call this apoptosis, a poetic name drawn from the Greek for “falling leaves,” and recent research has shown just how vital this process is. It defends us from cancers, abnormalities that are normally snuffed out by special suicide bags that take out rogue cells. A cancer only gets away when the body’s ability to poison a suspect cell is lost.
Just as a hive of bees or a nest of ants is genetically programmed to dispose of a single sick member for the benefit of the rest of the individuals with the same genes, so our cells will destroy the cells that go bad, assuming the affected cells do not destroy themselves first. And because other cells with the same genes do better as a result, evolution reinforces this behavior at every step along the way.
Certain types of cancer, about 60 percent of all malignant tumors, arise because a gene known as p53 has somehow been knocked out or disabled. When it is operating properly, p53 stops cells from slipping into uncontrolled cell growth, a common feature in cancer. The American researcher Frank McCormick designed a modified adenovirus (adenoviruses are common cold viruses), known as ONYX-015, to take advantage of this missing gene.
An ordinary adenovirus contains a gene called E1b that disables the p53 mechanism, allowing it to attack a healthy cell. An adenovirus without E1b could not invade normal, healthy cells because it would not be able to disable the healthy p53 gene. On the other hand, tumor cells that lack the p53 gene should be an easy target for the virus. In other words, a disabled adenovirus would be a magic bullet, a smart bomb, an agent that can tell friend from foe, and wipe out the foe. The virus would enter the tumor cells and, with no p53 gene to inhibit it, would replicate continuously, ultimately causing cell death.
This, at least, was the theory behind McCormick’s work, but laboratory studies showed that ONYX-015 replicates in tumor cells in which p53 remains intact. So if that part can go wrong, what else might the virus be doing? Is it a cause for worry? Apparently not. McCormick drew on work reported in Nature in 1998 to offer a possible explanation of what was happening. The explanation involves a second gene, p14ARF, which appears to be mutated and defective in some tumor cells—it seems the damage to p14ARF may indirectly disable the p53 function.
McCormick reported that when normal cells are infected with ONYX-015, the p53 protein is produced, and the virus is shut down. In tumor cells that are missing p14ARF, despite their p53 genes being intact, this p53 production does not occur, leaving the cells open to adenovirus attack. Their strength, the missing gene that stops the cancerous cells from poisoning themselves, can be their downfall. (More recently, this work encountered some setbacks and has now been discontinued, but the facts remain the same.)
There is a somewhat older medical tradition of magic bullets that began with German immunologist Paul Ehrlich, who entertained a novel idea. The nineteenth century had seen the science of microscopy advance by leaps and bounds, as better lenses, including oil immersion lenses using juniper oil, combined with new and marvelous stains that locked on to specific chemicals found in some cells but not others. Now thin sections of tissue could be examined, and different cells could be distinguished. Ehrlich’s notion was to find a stain that locked on to microbes but not to any cell in the human body, a stain that might then carry a lethal dose of poison to just a single cell or cell type.
His 606th trial was on arsphenamine, better known today as salvarsan, C12H12As2N2O2·2HCl·2H2O, or dioxy-diamino-arsenobenzol dihydrochloride. This became the famous magic bullet for syphilis. It was less than perfect, and it had a reputation for killing the patient, but then syphilis itself resulted in a horrid death from madness, often preceded by blindness. Mercury, in the form of intramuscular injections of mercuric succinamide, was a common treatment (and gave rise to J. Earle Moore’s aphorism, “Two minutes with Venus, two years with mercury”).
These days, however, salvarsan has been superseded by more benign poisons, the antibiotics prescribed for all sorts of other conditions, which are equally effective on the spirochetes of syphilis. No doubt, given time, the spirochetes will develop resistance to the common antibiotics, and we will be back where we started.
The magic poison bullet model still works, though, and one of the more effective modern variations has been very small doses of ricin, or at least the A chain of the toxin, the part that does the damage when it gets inside a cell. Of course, without the B chain that opens the door to let it in, the A chain can do nothing, but when attached to an antibody, ricin A chains can be released into cultures of human cells latently infected with viruses such as HIV-1. Other similar structures have been used on a number of cancers, and researchers are now working on modified forms of the A chain that should be even safer.
So are there still new poisons out there to be discovered, or only old poisons to be rediscovered? Most probably there are both. One of nature’s Great Poisoners is ergot of rye, a resting stage in the life cycle of the fungus Claviceps purpurea. The ergot is more formally called the sclerotium, a formation that hibernates in cold climates, ready to start a new generation the following year. Ergot replaces the seeds of rye, producing a purple lump that looks to the French like a cockspur, or ergot. The ergot looks quite unlike the true grain, but it was so common people thought it was part of the rye plant, until the 1850s when the true nature of the ergot was understood.
Ergot is not something we think about each day, yet it is not all that rare and unusual. I could go into my street and find it two houses away. There is nothing to fear from it, even though it is deadly, or potentially so. There are about 35 species of Claviceps, most occurring on grasses, like my neighborhood species. All of them form a sclerotium and produce the same types of mycotoxins, as the poisonous alkaloids of fungi are usually called, but most of them never get near the human food chain. These fungi produce four groups of alkaloids: lysergic acid and the lysergic acid amides (think LSD), and two related compounds, ergotamine and ergocristine. There is no LSD as such in ergot, but any one of the mycotoxins can trigger hallucinations as part of what is termed “convulsive ergotism.” There is another form, “gangrenous ergotism,” where the blood supply to fingers, toes, and limbs is cut off, leading to death of the flesh and loss of the affected part, or to infection and gangrene that can prove fatal.
Mycotoxins are high on the list of accidental poisons. A few of the mycotoxins, penicillin among them, are helpful antibiotics, diabolically toxic to bacteria and just mildly unpleasant to us, but some are quite deadly to humans, though it may not always be immediately apparent when a victim is being poisoned.
For example, liver cancer is an important public health problem in developing countries, where it may be ten times as common as in developed nations. As many as 76 percent of recorded cases of liver cancer occur in Asia. This might be caused by genetics, by lifestyle, or by something in the environment. Exposure to the hepatitis B or C viruses, or to fungal aflatoxins, or both, seems to be the prime suspect. One obvious question is why the liver is such a common target. The answer is that the majority of the liver’s cells, called hepatocytes, are constantly taking on foreign molecules, and often losing the battle.
Aflatoxins are metabolic products of some species of Aspergillus (mainly A. flavus and A. parasiticus) and are among the most potent liver carcinogens known. So who is most at risk? It depends where you live: the legal limit for aflatoxin in food is 4 parts per billion (ppb) in France and the Netherlands, 15–20 ppb in Canada and the United States, and 30 ppb in India. A lot depends on how rigorously these limits are policed, how much of the food eaten is close to the limit, and on exposure to hepatitis B, because that virus can increase thirtyfold the risk of liver cancer arising from food contaminated with aflatoxin.
While health authorities look at ways of reducing aflatoxin exposure, the evidence would seem to suggest that it would do much more good to vaccinate the developing world’s population against hepatitis B. These toxins can be found in the developed world in peanut butter made from untreated “organic” peanuts. Such contaminated crops are hard to sell in the developed world, however, so they are usually sold in the developing country of origin, or find their way into famine relief.
One ergot product, ergonovine, may produce abortions, whether by mischance or deliberately, and some of ergot’s other toxins cross over into breast milk, affecting infants, who are particularly vulnerable. University of Maryland historian Mary Matossian, among others, has made a strong case for ergot as a major shaper of European history, an effect that she argues is largely hidden by other events.
The problem with interpreting old records is that not only does ergot produce over twenty different alkaloids, but different strains of ergot produce different amounts of individual mycotoxins, so Russian ergot was less likely to affect women’s fertility, even though the Russian strains had a higher overall level of alkaloids. And if the wind-blown Claviceps can vary across a geographical region like Europe, it can most certainly vary over time.
Of course, depending on the strain and the way it is treated, ergot can also be useful. In German, it is called Mutterkorn or “mother’s grain,” recalling the way German midwives used it in the sixteenth century to help women in labor. The dosage had to be measured extremely carefully. Just the right amount of the purple grain would hasten contractions; a little more and ergot was an efficient abortifacient; a very little more and the woman suffered gangrene and convulsions.
So, given that ergot is variable in the poisons it produces, a number of scientists have looked at patterns of disease, and wondered if some of the things we interpret as arising from a modern “disease” might not be a result of ergotism or some other mycotoxin, acting either directly on the victims and killing them, or indirectly by suppressing their immune system, so that death was caused by otherwise serious but not deadly diseases.
The evidence for this has to come from patterns of mortality—looking at who died, where, and when. Mary Matossian argues, for example, that British “ague,” commonly understood to be malaria, broke out at the wrong time of year for mosquitoes, and it is more likely to have been caused by toxins from ergot-infected rye. This is a reasonable hypothesis, but the disease might also have been caused by some other mycotoxin growing on fruit varieties that have since been discarded or a strain of fungus that no longer exists.
Other grains have their own fungi; wheat, for example, has Fusarium, but its toxins do not pass through mother’s milk like the ergot alkaloids. It is possible that the high incidence of infant mortality throughout the ages could have something to do with ergot, either as direct poisoning from breast milk or as a result of immature immune systems being suppressed and rendered vulnerable. These days, the highest death rates from infectious disease occur in dry summers, but before about 1750, wet summers were the deadliest. One way to explain this is to blame the deaths on ergotism or other fungal toxins, but whatever the reason, there seems to have been a Great Poisoner about—it is even possible there never was such a disease as the Black Death.
Most epidemiologists assume that the Black Death was bubonic plague, but in 2002 anthropologist James Wood and his colleagues argued that the data do not fit. They claim that one reliable record, English priests’ monthly mortality rates during the epidemic, shows a forty-five-fold greater risk of death than during normal times, a level of mortality far higher than the rate usually associated with bubonic plague. We might see those death rates when a previously uninfected population first encounters a new disease, but the figures appear too extreme for the disease to have been bubonic plague.
Modern bubonic plague typically needs to reach a high frequency in the rat population before it spills over into the human community via the flea vector. Historically, epidemics of bubonic plague are associated with enormous die-offs of rats, but there are no reports of dead rats in the streets in the fourteenth-century epidemic. The pattern is not that of more recent epidemics when the cause has been confirmed as bubonic plague.
It was the single symptom of lymphatic swelling that led nineteenth-century bacteriologists to identify the fourteenth-century epidemic as bubonic plague, but the symptoms of the Black Death also included high fevers, fetid breath, coughing, vomiting of blood, and foul body odor, as well as red bruising or hemorrhaging of skin and swollen lymph nodes. Many of these symptoms may indicate bubonic plague, but they can appear in many other diseases, so diagnosis based on just one symptom is, at best, unwise.
Clues can also be found in the pattern of where a disease strikes. Cold, dry areas, unsuited to ergot—places such as Iceland, northern Norway and Sweden, Finland, and large areas of Russia and the Balkans—escaped the plague entirely. Wood believes that, rather than being spread by animal and insect vectors, the Black Death was transmitted through person-to-person contact, like measles and smallpox. The geographic pattern of the disease seems to bear this out, since it spread rapidly along roadways and navigable rivers and was not slowed down by the kinds of geographical barrier that would restrict the movement of rodents.
According to Wood, we can only trace modern bubonic plague reliably back to the late eighteenth or early nineteenth century. Who knows when it first emerged, or how it appeared? While the bacillus may have mutated over time, it may also have had some outside assistance. Overall, the original “Black Death” seems to have carried off a third of the population, but the mortality pattern was very patchy, so perhaps there was some sort of immunosuppression caused by mycotoxins in the most common of foods—rye, or black, bread.
Significant support for the involvement of a mycotoxin in plagues comes from correlating rainfall patterns with outbreaks of plague: dry weather halts the plague. In the four or five years before the great London plague of 1665–66, Samuel Pepys makes constant reference to plague incidents in the city: it was around, but not severe. Then 1665 and 1666 were very wet—and plague erupted.
So was ergotism a problem at the time of the Black Death? The winter of 1340 was very severe in Leicestershire, with heavy snowfalls and rain. The following summer saw people suffering fits, pains, and an overwhelming desire to bark like a dog. In passing, this was presumably the origin of “barking mad,” though it raises the question: why did people not feel an urge to meow like cats, sing like linnets (or swans), or chirrup like sparrows? Again, in 1355, there was an epidemic of “madness” in England, with people hiding in thickets and woods from demons. And with the fear of demons came fears of witchcraft.
Witches and witch trials seem to have a lot to do with ergotism. The Salem witch trials appear to have involved ergot, and trials in Scotland in the sixteenth century were centered on rye-growing areas, suggesting an ergot connection. Throughout the 1560s, as Europe’s climate grew colder and damper, witch trials increased in number. In England at this time, the Essex marshland was being reclaimed and the new ground was commonly sown with rye, the only grain that could thrive in the sour soil.
The real trap for “witches” was that they were people who knew their plants, who had wort-cunning, and so could cure some conditions. Unfortunately, their superstitious neighbors assumed those with the power to cure disease must also have the power to cause it, and this brought about witch burnings. Today, we are more advanced, so we allow the modern-day occupants of the witch role to insure themselves, and then we fasten onto them like leeches in courts of law. (Biologically, a parasite that allows the host to live to be preyed on another day is seen as more advanced than a pathogen that kills the host outright.)
Ergot may even have been responsible for the division of the old Holy Roman Empire into the two regions that would become France and Germany. We know “holy fire” was active in the Rhine valley in AD 857, reducing populations, checking fertility, and causing social disruption. By AD 900, the Norsemen were in France and pressuring the Holy Roman Empire to the extent that Charles III was forced to abdicate, and the empire split into eastern and western halves. The Norsemen had little use for rye, unlike the Franks.
Ergot may have been responsible for other world events. In July 1789, just a few days after the fall of the Bastille and just as the rye crop was harvested, la grande peur, “the Great Fear,” started, running from July 20 to August 6. It took the form of rumors that brigands were on the way to seize the peasants’ crops. No brigands materialized, but the fear was so strong that the National Constituent Assembly met at Versailles and voted to abolish the Ancien Régime, the social and economic order that kept the ruling classes on top of the heap.
Matossian has found references to the rye crop in July being “prodigiously affected by ergot,” and there is clear evidence of illness caused by bad bread. While it would probably be excessive to argue that the French Revolution was caused by ergot, it would appear the direction of the revolution may have been partly ergot-driven.
If that seems too extreme a claim, consider this: in 1722, Tsar Peter the Great had assembled a huge army, ready to sweep into Turkey. The Turks were unusually ill-prepared for war, because a tulip-growing craze was diverting the whole of Istanbul. The Russians numbered 20,000 at Astrakhan, a force large enough to drive Turkish forces out of the Ukraine, which would then be free to join Russia, and perhaps even large enough to take Istanbul.
There was just one catch: rye hay and grain were brought in to feed the army and its horses, but in August, the ergot in the grain struck. Horses went down with the blind staggers and, soon after, men began to fall victim to Saint Anthony’s fire. More than 20,000 soldiers and civilians died in and around Astrakhan that autumn, and Russia’s best chance to push back the Turks and gain a warm-water port from which to open up foreign trade perished with them.
To most people, ergotism is a disease of the past. It might have caused dancing madness in medieval Germany or made people barking mad in England at the same time, but it held no terrors in the modern twentieth century. After all, ergot had been tamed. Indeed, one of its toxins, ergonovine, had been selected, purified, and added to the pharmacopoeia, with small doses being used to reduce bleeding in childbirth. Some of the others were being looked at, and one that would shape a generation, LSD, would soon be hailed as a powerful tool in psychiatry. It was quite clear to all that Saint Anthony’s fire had been quenched forever.
Then, one day in 1951, everything changed. On August 12, a small Provençal town, Pont-Saint-Esprit, was struck by a strange disease that at first seemed to be appendicitis, but the symptoms were not quite right. Two patients seen by Dr. Jean Vieu both had low body temperatures and cold fingertips, were babbling, and had hallucinations. When another patient turned up with the same symptoms the next day, Vieu consulted with two colleagues. Between them, they had 20 patients with the same symptoms. Many of them were also exhibiting violent behavior, apparently because of the hallucinations.
Straitjackets were rushed to the town to restrain the victims of this sickness, and terror spread as people learned of a demented eleven-year-old boy trying to strangle his own mother. People began to whisper of mass poisoning by the authorities. It was some time before anyone realized that ergot lay behind the outbreak, because it was 130 years since the last case; generations of farmers had known how to treat ergot. First, the rye seed is immersed in a 30 percent potassium chloride solution, so the ergots float away, leaving the seed behind; then the field is deep-ploughed to bury any spores; and a different crop is planted the next year, breaking the cycle.
The one thing you don’t do is sell the seed, untreated, for human consumption. Sadly, a farmer, a miller, and a baker had conspired to get rid of diseased rye by hiding it in wheat bread. As a result, 200 were made ill, 32 became insane, and four died, in part because of greed and perhaps in part because nobody really knew how serious the effects were.
It is worth speculating that global warming will change crop rotations and growth patterns, and will also change the ability of toxic fungi to flourish where they are now unknown. The effects of this change may be quite unexpected, but, as we have seen, we are prepared to poison our world, so long as there is a profit to be made. At this stage we cannot say exactly how global warming will affect us in the future, but affect us it will, and we can reliably expect the unexpected.
We would seem to be running the microbes a close second as the greatest of the Great Poisoners, and while they do little to make life easier for us, we seem to be doing plenty to make life easier for them.
Are we coming to the end of the poisons era, entering a time when we use genetics instead of poisons to cure our ills? I suspect that we may move to some new, hitherto undreamt of, classes of poisons, but no, we won’t really move away from poisons as such: not until our microbial enemies stop using poisons, and that has been going on for a very long time.