The main preoccupation of the human species has been, and probably always will be, other people. Reproducing more of them, feeding them, controlling them, organising them, making them love and adore you – these are the things that all of our human achievements grasp at. The previous chapters have covered some of the ways in which our preoccupations with people have come back to haunt us in our recent past – essentially, the increasing density of humans on the planet, engaged in types of living that leave an ever-larger and -heavier footprint on the earth. The revolutionary torpor of the Neolithic, trudging for thousands of years across continents only to be reinvented somewhere thousands of miles away and begin again, left a legacy of people. Everywhere. As the people pile up, so do the places that hold them: villages become towns, towns become the very first cities. Across the planet, the concentration of humans led to a concentration of resources, often in the hands of a few.
We’ve looked in Chapter 5 at the physical stigmata of being left with too little in an increasingly unequal world. The last few chapters discussed our violent methods of recourse for the problems we have with other people, whether it’s in order to control them, to take their things or to inflict a more purely personal violence. In the next few chapters we will start to look at the final – and fatal – consequences of living in a densely populated urban world. Cities have a fairly bad reputation when it comes to coming up with ways to kill off their inhabitants, and one of the most difficult cases to argue in their favour is that of infectious disease. These chapters deal with the delicate balancing act our species has to perform – we want to be close to the centre of the action, but at the same time there are all these other people there, and some of them look like they’re definitely sniffling. Cities tip the balance between the number of people potentially around to be infected and the viability of the infecting organism. Cities attract people, and people carry disease. But what if it’s in cities that all those diseases circulate, mutate and adapt … to us? No virus wants to die alone; what if we were to make the argument that cities give infectious diseases a chance to temper their malevolence, to become endemic conditions that do not kill the majority of their hosts? In this chapter we will look at the very peculiar case of the Mycobacterium family of infections, which offers a strange insight into the way cities have changed our experience of disease.
It’s hard to imagine an animal species that doesn’t come with its own particular pairing of infectious diseases – in recent years there has been considerable media coverage of swine flu, avian flu, badger-borne tuberculosis, mad cow disease and a number of others that alarm us with their potential to jump across to our species. We’ve already dealt with the diseases that our domesticated animals have given us in Chapter 3, but there are always other vectors for infection to be considered. I have indelible and slightly salt-crusted memories of my very first ‘away’ field school experience on the Channel Islands off the coast of California. After a thrilling ride on a coast guard boat that included racing dolphins, choking down sea spray à la Kate Winslet in Titanic and generally feeling smug about not getting seasick, we arrived on the island of Santa Rosa with our camping supplies for the weekend. The course coordinator, the highly respected archaeologist1 Jeanne Arnold, had strict instructions. Most of them concerned not messing up the fascinating archaeology of the Chumash people who had occupied these islands, but the warning that stuck with me was the injunction to wash the tin cans our food2 came in before opening them. The island mice, which scurried incontinently along the tops of the storage racks in the research station kitchen, were carriers of hantavirus. In my 20 years in California, I had never realised that a virulent killer disease was lurking just a few miles off the coast; the relatively rare hantaviruses can cause either respiratory distress or haemorrhagic fever, making them essentially a hipster version of Ebola. Mouse pee can kill you, prairie dogs are a reservoir for bubonic plague and armadillos carry leprosy; these are the sad facts of adorable mammal life.
For many researchers, the human story of infectious disease is a tale of two halves: an Edenic situation before the rise of sedentary farming life, and a quagmire of poxes and plagues ever after. This tipping point is known as the epidemiological transition, and traces of the increased impact of infectious diseases on our species really fluoresce in the archaeological record after this point. A new source of information, however, is challenging some of this orthodoxy: ancient DNA (aDNA) has started to give us a picture of the effect of infectious disease far further back in our evolutionary history than our trowels can find. This is less surprising than it may seem – only a handful of infections linger long enough in the human body to cause visible changes in the skeleton. Most have the decency to kill you quick, before your bones have a chance to react.
New advances in studying aDNA have shown us that Neanderthals and Denisovans, evolutionary (kissing) cousins who contribute a small portion of the genetic material to the modern humans whose recent ancestry lies outside of Africa, may have experienced their own epidemiological transition. According to a recent review by Charlotte Houldcroft and Simon Underdown, some of these cousins’ adaptations to European diseases (like the encephalitis carried by local ticks) may have trickled into the modern human genome in Europe, while a similar introgression may have occurred in people who eventually settled in Papua New Guinea, affecting genes associated with response to dengue, influenza and measles. We’ve traded infectious bacteria with a host of other hominins; there is now evidence to suggest we modern humans actually gave Helicobacter pylori, a largely asymptomatic infection that can occasionally be involved with more unfortunate gastrointestinal problems, to Neanderthals. It may be that further genetic work will uncover other transmissions across ‘species’ lines; perhaps even infections that the rather shallow Neanderthal gene pool was unable to survive. No wonder we gave them ulcers.
If we have always lived (or died) with infectious disease, why does it matter in the story of how we have built our modern urban world? Some researchers have suggested that infectious disease was actually what kept us penned up in Africa for so much of our evolutionary history. Archaeologists Ofer Bar-Yosef and Anna Belfer-Cohen argue that disease must have been a major factor in keeping early numbers down, just as it is in chimpanzees today, and it wasn’t until we escaped the all-you-can-infect buffet of local animal-borne diseases in Africa that we could get our numbers up. This may not be the most parsimonious explanation of hominin geographic expansion, but it does present an interesting idea: that diseases can determine our species’ ability to expand into new territories and new ways of living. And of course, the converse is certainly true – as our species pushed into new landscapes and new lifestyles, we created new opportunities for disease. The epidemiological transition coincides with the big lifestyle changes of the Neolithic because we introduced new disease risks and upped the likelihood of old ones. We started storing grain, which attracted a wonderland of commensal pests. Those pests are the very same suite of animals that show up in, say, Disney’s Cinderella, but instead of bringing cartoon laughter and song to oppressed workers, rodents and birds brought bacteria in their poo, fleas in their fur and species-jumping viruses to crowded, dirty human habitations. Not only did we have new animals to trade infections with, but we spent a lot more time with our old ones: domesticating them in-house meant a much better chance of picking up their diseases. Even that hardy pastime of tilling the soil carried its own risks: parasitic worms and worse lurked in the dirt, just waiting to make the jump to human hosts.
Dirt, animals and people. These are ideal conditions for breeding disease. But this chapter is going to look specifically at the case of infectious disease caused by the Mycobacterium family of bacteria, which are very much diseases of people, by people and for people, though we’re not averse to spreading the bacteria around a bit in either animals or dirt – more on that in a moment. We will start with absolutely no one’s3 favourite condition: leprosy.
To be a leper is to be untouchable: to die horrifically disfigured, shuffling off the mortal coil with your rotting bits leaving a trail of putrid wreckage behind you. There may or may not be indistinct and garbled moaning. The leper crab-walks through the popular imagination like a sort of Frankenstein’s monster of every anxiety dream you’ve ever had: teeth falling out, fingers flopping off and probably some sort of medieval jeering in the background just for effect. They sainted Mother Teresa because she worked with lepers, they lionised Che Guevara for his unstinting commitment to delivering medical care to those most wretched souls,4 and you wouldn’t touch one with a bargepole. ‘Be thou dead unto the world, but alive unto God’ (mundo mortuus sis, sed Deo vivas) – these are not the words you want to hear from your doctor. And yet. Popular imagination has much to recommend it, but does it know what it’s talking about when it comes to leprosy? Not so much, but that, it seems, is more than enough to structure the moral and spiritual treatment of leprosy through its long history in our species.
What actually happens to a body infected with leprosy? If that body is relatively healthy, or at least possessed of a decent immune response, not much, for quite some time. Leprosy acts more aggressively in people with lowered immune status; in the modern world, cross-infections might cause this, but in the ancient world we might suspect the twin devils of poverty and misfortune. The disease takes two main forms, both caused by Mycobacterium leprae; which one you get depends largely on the strength of your immune response. The most aggressive changes to the skeleton are seen in what is called lepromatous leprosy: sufferers develop all of the symptoms mentioned below. Tuberculoid leprosy is more of a disease affecting the skin, and while still a problem in the modern world, it’s very difficult to detect archaeologically. Leprosy affects the nervous system, systematically destroying peripheral nerves in a process one medical text described as ‘formication’, which basically translates as the feeling of ants crawling on your skin. Nerve damage is not a fantastic thing for your extremities: it means loss of sensation and paralysis, which result in extra stumbles and extra cuts to the hands and feet. These can go unnoticed and untreated, especially in the farther-away feet, which is why it’s actually secondary infections that are responsible for the rather horrifying tissue necrosis we associate with rotting lepers. The bones of the extremities show characteristic changes associated with infection response: loss of bone, changes to the articulations of the joints through inflammation and the body’s attempt to mount a rear-guard action, and formation of new bone and new articulations as the body tries to adjust to a newly paralysed limb.
The infective bacteria themselves seem to hover in large quantities around the nose and mouth, swelling the soft tissue and causing weeping sores that lead to a diagnostic series of changes to the face: the very foremost part of the nasal bone is shrunk away, the gums around the front teeth are eroded along with the underlying maxillary bone, and the rest of the small bones of the nasal passages show signs of response to inflammation. These changes are the characteristic suite of ‘facies leprosa’, first described from archaeological remains by Danish researcher Vilhelm Møller-Christensen; as a thank you, the syndrome is named after him.5 The damage to the nasal region is nothing compared to the damage a leper’s sneeze can do – one estimate has 20,000 leprosy bacilli in a single blow of the nose.
The history of leprosy as an infectious disease goes back at least to the beginning of medical writing. The Sushruta Samhita, an Indian medical text from sometime in the first millennium BC,6 mentions a disease that may well be leprosy, as do texts from China; historical accounts of what seems to be leprosy appear in Greece not long after. Bioarchaeological evidence of early cases comes from Dakhleh Oasis in modern-day Egypt during the Bronze Age. Archaeologically, it seems that leprosy was yet another gift brought to the Americas by Europeans; there are no observed New World cases with the characteristic changes described by Møller-Christensen prior to the fifteenth-century contact period.
A great deal of what we know about the historical experience of leprosy comes from its prevalence in medieval Europe, particularly in northern Europe, where the number of cases and archaeological finds coincide to give the most comprehensive perspective on leprosy in the past. We will talk here specifically about the European case, because leprosy in medieval Europe becomes no leprosy7 in post-medieval Europe, and that is something to think about.
Before the reader decides never to leave the house, there are a few common misconceptions that I should clear up. It’s actually very difficult to get leprosy. It takes fairly constant exposure to the bacteria in the form of exhalation droplets; you can’t even get it from skin-to-skin contact. Exposure also doesn’t guarantee infection – it helps to be malnourished or otherwise immunocompromised. Even then, you might get the tuberculoid form of the disease, which, while not exactly nice, restricts its actions to soft tissue. If you do manage to pick up a full-blown lepromatous leprosy infection, congratulations! But you are still more than likely going to die of something else first. Even if the disease does take hold, it can take years or even decades to experience the full swathe of skeletal deformities, so archaeologists finding any lepers at all is, quite frankly, miraculous.
Thank god then for leprosaria. Most of what we know about the skeletal effects of leprosy in the past comes from the concentrated remains from medieval hospitals that, among other functions, served as refuges for lepers; they also provided a place of burial for those so afflicted. Despite the limited contagion of the disease, it was sufficiently horrifying that sequestration of those afflicted seems to have been a key feature of the response to leprosy. While isolated cases can be found filed among the wider population, the pronounced skeletal changes associated with leprosy are mostly identified in the archaeological contexts of leprosaria. The nomenclature for these institutions comes from the same maudlin humour that pronounced the leper dead to the world, associating the afflicted with the biblical Lazarus. Maudlin is, as any Oxbridge scholar can tell you, a form of ‘Magdalene’,8 and therefore relates to the Saint Mary of the same designation; the association of Maudlin Hospitals with leprosaria is likewise due to the biblical story. Leprosaria, Lazar Houses or lazarettos functioned in medieval Europe to contain the frightful contagion of leprosy, and we can actually see quite a lot of the medieval sense of disease in the foundation, funding and final abandonment of these institutions. The perception of leprosy as a punishment by God is easy enough to comprehend, but it seems to have taken on the connotation of sexual sin to some extent as well – perhaps another reason that so many leper hospitals were named for the Magdalen.9 With lepers being perhaps the most visibly wretched of God’s creatures in the medieval world, it’s no wonder that they attracted considerable charitable donation. Founding a leprosarium did great things for your spiritual sanctity.
Between about AD 1100 and 1350, some 300 leprosaria were founded in England. This seems to be the pinnacle of a much longer run of endemic Mycobacterium leprae infection on the island, which ran concurrent with similar epidemics in Scandinavia. Throughout this period leprosy was a largely rural disease, as Charlotte Roberts and Keith Manchester point out in their 1989 synthesis: close-living family members were likely to spread the disease among themselves, so that it might cluster in a family or village, but it never seems to have exploded to epidemic proportions once it hit the city gates. Leper hospitals, as charitable institutions that needed to garner support, were located more conveniently near major population centres, but the destitute they took in were not necessarily urban dwellers. Historians are also increasingly finding that society’s terror of leprosy was not quite what we imagine it to be. Carole Rawcliffe, who wrote the actual book on medieval leprosy in Britain, pointed out in a lecture at Gresham College in 2012 that the unmarried, healthy female inmates of one of London’s leper hospitals were sworn in as religious sisters, though
… they were not very obedient and they were always being told off for being a bit lippy, and there were evidently quite a few instances of what the shocked inspectors termed ‘carnal copulation’. Since the canons were given to fraternising with the sisters over a drink, this is perhaps hardly surprising.
It does not sound as though the canons of the Church, men of great spiritual and worldly knowledge, were overly concerned about leprosy.
If leprosy is mostly a rural disease, what place does it have in a book about cities? The answer lies in those hundreds of hospitals. Three hundred leper hospitals is a considerable number for such a small island, and it suggests that both leprosy and the chance to demonstrate Christian charity were very popular. There is no way of calculating how many of the inmates of these hospitals had clinical leprosy, as the historic practice of differential diagnosis could be more a matter of religious sensibility than science. But it seems fairly clear that people knew when you did have lepromatous leprosy, or we wouldn’t find such concentrations of the skeletal evidence of the disease in the cemeteries of lazarettos. What we have to question is why, after the Black Death upended the lives of urban and rural dwellers (leper and non-leper alike) in 1348, there were no more leprosaria. What happened to leprosy?
One argument says that leprosy might not always have meant leprosy – while some inmates demonstrably had Mycobacterium leprae infections, others may not have. Many have argued that any sort of skin disease might count, and we certainly know that skin diseases were of major concern to the people of Britain: from Edward the Confessor right up to James II, monarchs were obliged to cure the ills of their people through the direct laying on of hands. This ‘royal touch’ was of course a direct sign of God acting through the monarch, and the tradition persevered for at least 600 years in England and France. Cleverly, these monarchical manhandlings were judged most effective against a disease known as the King’s Evil, which many modern researchers have identified as scrofula: another form of Mycobacterium infection that affects the lymph nodes, particularly around the neck, so it might be readily visible, but rather handily is quite likely to go away on its own. Queen Anne was the last British royal to indulge the custom, laying hands on the writer Samuel Johnson, then a small child, to cure him of his scrofula on 30 March 1712.10 The death of the tradition, which seems to have attenuated steadily from the fifteenth century until finally coming to an end in the early eighteenth, follows the trajectory of decline in the leprosaria, but for two things. First, the seventeenth century saw a clamour for more royal touching, not less; and second, the Mycobacterium that causes scrofula is not Mycobacterium leprae but Mycobacterium tuberculosis.
Tuberculosis is a disease that many people associate largely with wan Romantic poets and Victorian slums; this would be a mistake. In 2014 the World Health Organization estimated that there were almost 10 million new cases of tuberculosis, and over 1 million deaths. Tuberculosis, like leprosy, is still very much at large in the world, and what’s more, we’re running out of ways to cure it. Drug-resistant TB has been a fact of life for nearly 20 years, spurred on by the resurgence of tuberculosis infections in high-risk populations (the immunocompromised, intravenous drug users), and it’s getting worse. Drug-resistant TB has arisen partly through our cavalier attitude towards taking complete courses of medication,11 but probably most significantly through our even less excusable cavalier attitude towards providing full treatment for those most at risk. Extensively drug-resistant TB (XDR TB) is one of the first signs modern medicine has had that the golden age of antibiotics may not last forever; the Centers for Disease Control and Prevention in America put the odds of curing XDR TB as low as 30 per cent. Still, the number of cases in the general population is low, and they lack the tragic romance of the last great flourish of tuberculosis infections in the eighteenth and nineteenth centuries, because they affect children in Africa and not Emily Brontë.
Tuberculosis, or the disease that we think of as tuberculosis, is caused by infection with Mycobacterium tuberculosis or, in very rare cases among people who spend a lot of time with certain animals, Mycobacterium bovis. Droplets containing the bacteria exhaled into the air spread the infection, and once inhaled, bacteria move into the lungs of a new host. In healthy individuals, they can even stay there, essentially walled off from the rest of the body by a high-functioning immune system. Given a chance, however, the bacteria can spread in three different ways: into the central nervous system, causing tubercular meningitis; into the lymph system, causing scrofula; or into the bone. It’s this last path that provides the skeletal evidence of the disease we can identify in the archaeological record. As the TB infection spreads through the body it prefers certain types of bones, though any port will do in a storm; however, there are a few characteristic lesions that can be used to narrow down identification in archaeological skeletons.
Tuberculosis is a disease that takes away bone, rather than putting bone down. Combined with the characteristic locations of tubercular changes in the skeleton, this makes it one of only a few disease conditions bioarchaeologists can identify in the past. One key clue is angular kyphosis: the technical term for what happens when the little round stacks of vertebrae that keep you upright change shape. The erosive lesions of tuberculosis infection can break down the bodies of the individual vertebrae so that, instead of Camembert roundels, they look like pointed wedges of Brie. This pitches the remainder of the spine forwards, creating a distinct angulation. This characteristic deformity is identified as ‘Pott’s disease’, after the surgeon who described it in 1779. Tuberculosis can also leave faint traces on the bones of the chest, particularly the inner surfaces of the ribs and sternum – anywhere there is nice marrow for the bacteria to proliferate in. Just as in the metabolic diseases discussed in Chapter 5, the proportion of marrow within bone changes as the body grows, so children frequently show evidence of tubercular changes to the hands and feet that are far less common in adults. Infection can also cause abscesses to form, and these force whatever bones are nearby to respond, leaving erosive divots. Degeneration of the joints, particularly the hip, can also be observed; however, it’s not always possible to say whether an eaten-away hip joint and associated smashed-up femur are the result of tuberculosis or some other sort of disease.
The archaeological evidence for the antiquity of tuberculosis currently extends far back into the human past, though perhaps not quite as far as some have claimed: the frequently cited case of a 500,000-year-old Homo erectus adolescent does not seem to show the pattern of bony changes discussed above. There are more securely identified changes in skeletons from the Neolithic period onwards in Europe and the Middle East, as well as historical accounts of the disease that are fairly unmistakable from the Bronze Age onwards in Asia and Europe. Both classical Indian and Chinese medical texts mention tuberculosis and its symptoms in relatively unambiguous form; this extends back to the Huang Ti Nei Ching texts from China, which have a semi-mythical origin date in the third millennium BC, but are likely younger. The skeletal evidence of tuberculosis in Asia is rather more frustrating, but it has been identified at least as far back as the collapse of the Harappan civilisation in the second millennium BC. Many of the early identifications of tuberculosis came from forensic investigations of archaeological soft tissue, which of course comes largely from one source: mummies.
In Egypt there is a long tradition of encountering tuberculosis; in 1825 the first ‘scientific’ autopsy of a mummy was carried out on Irtyersenu, a woman who was buried in the cemetery at Thebes sometime around 600 BC. The colonial consumers of Egypt’s mummy export empire in the eighteenth and nineteenth centuries were very keen on this sort of forensic investigation; mummy ‘unwrappings’ were cultural events where you could buy a ticket and watch a celebrity Egyptologist denude a 5,000-year-old body.12 For fun. Almost two centuries after Irtyersenu’s autopsy, genetic tests identified Mycobacterium tuberculosis in her remains, which explained the pockets of fat found in her body as indicative of wasting rather than the corpulence assumed by the original team.
The difficulty with identifying tuberculosis is that not all cases develop the skeletal lesions that bioarchaeologists look for; additionally, idiosyncratic immune responses might lead to the development of more or less unidentifiable cases. Suspected cases of tuberculosis have been reported from a variety of archaeological contexts, rendered all the more contentious by the long-running argument over whether TB was an ‘Old World’ disease brought to the New World by Europeans in the fifteenth and sixteenth centuries, like leprosy. We can now decisively put this theory to rest: in the mid-1990s, DNA analysis confirmed the presence of tuberculosis in 1,000-year-old bones from Peru, and further evidence has continued to roll in since. In the Old World, the first known case of tuberculosis comes from right at the cusp of the Neolithic transition, at the submerged coastal site of Atlit-Yam in modern-day Israel. The 9,000-year-old bones of a woman and a year-old child were found with skeletal lesions that suggested TB infection; the child had reactive expansion of finger bones and lesions on the skull, while the evidence from the woman was not diagnostic. Molecular archaeology, however, stepped in to confirm that both skeletons contained Mycobacterium tuberculosis DNA.
As in so much infectious disease research, genetic analysis of the pathogen offers an increasingly nuanced vision of the antiquity of disease in humans. Recent estimates suggest that Mycobacterium tuberculosis came out of Africa with modern humans some 40,000 years ago. After 10,000 years or so it split into two clades, one affecting only humans and the other affecting humans and other mammals. Unlike the diseases that we have blamed on cows in the past, in the case of tuberculosis it’s all on us: we appear to have given rise to the strains of Mycobacterium that affect mammals. The estimations of genetic timing provide a fascinating insight into our co-evolution with a killer. The strains of tuberculosis affecting humans in East Africa, India and Asia last shared a common ancestor about 14,000 years ago – the approximate kick-off time of the Neolithic Revolution on those continents. The specific Beijing strain of modern tuberculosis infection has its origins in a genetic split from other branches around 6,000 years ago, during the Neolithic period of little farming villages around the Yangtze River. The Latin American strains in turn last split from their common ancestor around 7,000 years ago, suggesting that there is no reason to doubt the presence of tuberculosis in the New World prior to European contact. In fact, Mycobacterium DNA that seems to hover closer to the older branches of the lineage, Mycobacterium africanum and Mycobacterium tuberculosis, has been confirmed in a 17,000-year-old bison from North America, leading those researchers to suggest it as a potential origin for human TB infection rather than the other way round. However, the majority of the evidence emerging from the genetic profiling of the Mycobacterium complex of bacteria is that the human-transmitted types are older than the ones we can pick up from animals; Mycobacterium bovis, the tuberculosis we unleashed on unsuspecting cows, seems to branch off around the time of the really intense domestication of the species, about 5,000 years ago. Once you start looking at the molecular clocks, it’s very easy to see our fingerprints all over this disease.
Having established the antiquity of tuberculosis, it’s possible to trace it through history to the point where it becomes endemic, and then, suddenly, violently epidemic. In the burgeoning urban centres of the world, a triad of preconditions for epidemic infectious disease was being perfected: (1) gross poverty and associated malnutrition, which lower immune competency and create a large pool of potential hosts in whom infection can reach florid expression; (2) dense populations, which percolate and spread infection; and finally (3) connections to the wider world. The Roman cities of the very first centuries AD seem to have coincided with an uptick in the skeletal evidence of tuberculosis – perhaps not an unexpected finding given the well-networked and urban nature of the Roman Empire. Interestingly, however, the majority of finds are from the rather peripheral island of Britain. It’s not clear whether this reflects a genuine preponderance of tuberculosis, perhaps relating to the potential for the disease to expand into a naive British population, or merely the propensity of the British to excavate Roman skeletons and search them for signs of TB: most probably the former, as TB has been found in Iron Age skeletons from the UK as well as from a variety of sites just on the edges of the Roman world, both geographically and chronologically. The age of empires is a good one for tuberculosis: the great urban centres at their hearts make excellent incubators for TB, and their vast networks of trade and exchange pump the disease into new territories.
The network of cities in medieval Europe was a burgeoning disease incubator, as it found out to its great sorrow in 1348. Leprosy staggers off stage in the latter part of the medieval period, and is rare indeed by the early modern period of industrial and proto-industrial cities. Tuberculosis, on the other hand, while obviously present during the medieval period from archaeological and historic evidence, waits until the Enlightenment to really kick off. This seeming tag-team effort by Mycobacterium species to dominate the European field initially suggested to many researchers that what we see in the decline of leprosy and the rise of TB is in fact evidence of growing cross-immunity. It was noted very early on that many patients with leprosy actually die of TB; conversely, exposure to TB seemed to confer some sort of immunity against leprosy. Could the spread of tuberculosis around Europe, driven by the exponential growth in urban populations and the trade networks that connected them, actually have inoculated the population against the scourge of leprosy?
Recent analysis of archaeological skeletons seems to say no. A team led by microbiologist Helen Donoghue sampled archaeological remains with characteristic skeletal changes attributed to either leprosy or tuberculosis infection from a range of sites spanning fourth-century AD Egypt to sixteenth-century AD Hungary. Their goal was to obtain genetic signatures from any Mycobacterium present to see if the cross-immunity theory could be supported. Instead, what they found was that, out of 32 samples, 10 had traces of both Mycobacterium leprae and Mycobacterium tuberculosis. For the team to recover both bacteria from the bone samples, both Mycobacterium infections would have to have been actively coursing around the body, leading the team to conclude that cross-immunity was unlikely. Instead, they argued, co-infection is a much more likely scenario, and one unfortunate consequence of having leprosy is a propensity to acquire and die of other infections.
The complicated interplay of tuberculosis infection and the decline of leprosy might then be more succinctly ascribed to the triumph of a high-mortality infection in a new ecological niche: the urban population. While tuberculosis was a lingering disease, with years required before the characteristic lesions appeared in the skeleton, it certainly didn’t hang on as long as leprosy did, and its spread was far more rapid. It seems that at some point, perhaps around the fourteenth century, the transmission of infectious diseases ramped up to the point where the more virulent disease outcompeted its rivals. Epidemic infections became endemic ones, naturalised citizens of cities thanks to the contributing factors of population growth, mobility and integrated networks for the transmission of people, goods and animals, all driven by the engines of urban growth.
It may be that how we think about disease in the past is not entirely unrelated to how we think about disease in the present. Rawcliffe suggests that much of the history of leprosy in the medieval period was written at a time when historians themselves occupied a world of plague and newfound pestilence. Empire had opened up the farthest-flung reaches of the world to the Europeans, and while there may not have been dragons, there were certainly diseases long thought dead. She points to the flurry of excitement over the scientific identification of the Mycobacterium leprae bacterium by Hansen in 1873 as evidence that historians were looking at the past through a pandemic lens. I would add that the 1870s was a time when the punishing regime of the dense urban megalopolis that London had grown into started to become apparent to those who governed it. In a city full of killer epidemics, it’s easy to see how a terror from the past might be magnified. Whether leprosy was an intolerable burden of isolation, madness, disfigurement and disability, or whether this burden could be borne with the help of some ecumenical wine and company, we may not ever entirely know.
What we do know is that, in a newly densely populated urban world, the course of our health was forever altered. We have changed the environmental reservoir for the diseases that have trailed us for millennia, and we continue to do so today. Part of that is a function of the increased connection between our population centres, the roads, the sea lanes, the paths forged into new and ever more exotic disease habitats. Ebola thrives because it comes on four wheels and not two shaking legs; coronavirus spreads globally because the Hajj can only be made to one place but the faithful must come from any continent they find themselves on. The metal ages of copper, bronze and iron undeniably had their long-distance social and economic connections, often very impressive ones, but it’s in a globally connected world that we see the true consequences of dense urban living.
Humans have undergone a series of ‘epidemiological transitions’, where the nature and type of diseases most prevalent in our species shift. The most commonly discussed is what many would argue is the first ‘true’ transition, occurring in the Neolithic when the concentration of humans and animals in settled locations allowed for the transmission of infectious diseases that would otherwise sputter and die. Tuberculosis is perhaps the clearest example of a disease for which we see no molecular or archaeological evidence before the Neolithic, but which bears all the hallmarks of a density-dependent disease. After appearing on the scene 9,000 years ago, it skulks through the bioarchaeological record before once again making a nuisance of itself in the urbanised world of imperial Rome. Endemic if not epidemic, it knocks around for centuries before the huge reorganisation of human populations into increasingly connected, densely populated cities creates the circumstances for crisis mortality – plague.
1 And a general inspiration for young students unsure of what they want out of life, who spent university break times sulking in their cars listening to punk albums and chain smoking until she encouraged them out into the field – a graciousness I will never forget. I owe a huge debt of gratitude to Professor Arnold, whose name, as you may notice, appears quite a few times in these pages.
2 I maintain to this day, in the face of considerable opposition, that SpaghettiOs are in fact food.
3 I exclude bioarchaeologists, of course; we love a bit of leprosy.
4 Fine, OK, also because of the sexy communist revolutionary stuff and quality hat-wearing.
5 And you thought doctors competing to have fractures named after them was bad.
6 They also provide the earliest mentions of other diseases, e.g. scurvy and smallpox. Unfortunately, each ancient symptom has been attributed to just about every modern condition, so it’s difficult to know exactly which disease any of the texts is referring to.
7 Well, very limited.
8 I spent ages mispronouncing the name of the Oxford college Magdalen, pronounced ‘maudlin’. You’ll excuse me for finding archaic English ridiculous.
9 There seems to be some elision of Marys going on in Christian theology; the sister of Lazarus and the former prostitute aren’t necessarily the same Mary.
10 Either it didn’t work, or it didn’t work fast enough – Johnson had a subsequent operation and remained scarred for life.
11 Stopping a course of antibiotics ‘because I feel better now’ is akin to handing infectious disease a loaded gun; you just might not realise it because the gun is pointed at a malnourished child in a country you can’t pronounce. Don’t.
12 Either academic lectures were a lot more fun in the past or life before binge-watching was intolerably dull.