Just think of it. A hundred years ago there were no bacilli, no ptomaine poisoning, no diphtheria, and no appendicitis. Rabies was but little known, and these we owe to medical science. Even such things as psoriasis and parotitis and trypanosomiasis, which are now household names, were known only to the few, and were quite beyond the reach of the great mass of the people.
STEPHEN B. LEACOCK, Literary Lapses (1910)
MICRO-ORGANISMS
A PATCHWORK OF IDEAS AND INSTITUTIONS, theory and practice, craft and science, involving divided and vying professional factions, medicine has a generally muddled history, infinitely less clear-cut than, say, theoretical physics. But the latter part of the nineteenth century brought one of medicine’s few true revolutions: bacteriology. Seemingly resolving age-old controversies over pathogenesis, a new and immensely powerful aetiological doctrine rapidly established itself – one that its apostles prized as the master-key to disease, even to life itself. Moreover, most unusually for medicine, the new disease theories led directly and rapidly to genuinely effective preventive measures and remedies, saving lives on a dramatic scale.
The general thinking behind bacteriology (that disease is due to tiny invasive beings) was far from new; theories of contagion, long proposed for maladies like smallpox and syphilis, maintained that disease entities were passed from the infected party to others; in the case of the pox, sexual intercourse offered the obvious transmission mode. Developing some hints in Galen, Girolamo Fracastoro had written in 1546 of disease seeds (seminaria contagiosa) carried by the wind or communicated by contact with infected objects (fomites); and the microscope confirmed the reality of wriggling, squirming ‘animalcules’. Yet what grounds did anyone have for thinking that such ‘little animals’ caused disease?
Similar problems attended the putrefaction problem. What made substances go bad, decompose and stink? Why did grubs and mites appear on decaying meat and fruit? Did decay produce the insects (by spontaneous generation) or insects the decay?
By showing that maggots did not appear on meat protected against flies, Francesco Redi (1626–98) believed he had discredited the theory of spontaneous generation; in De la génération des vers dans le corps de l’homme (1699) [On the Generation of Worms in the Human Body], Nicholas Andry also argued that the seeds ‘entered the body from without’. But, as so often, there were counter-findings. In 1748 John Needham (1713–81) reopened the question: he boiled a meat infusion, corked it, reheated it and, on cooling, identified ‘animalcules’ in the broth which, he concluded, had appeared spontaneously. Convinced Needham had failed to protect his infusion from the air, Lazzaro Spallanzani maintained that broth, if boiled and hermetically sealed, would keep indefinitely without generating life. With no agreement as to where these ‘little animals’ came from, their alleged role in disease causation was a mare’s nest.
The crucial issues raised were what such ‘demonstrations’ actually demonstrated (experiments are always open to multiple explanations), and whose experiments should be trusted. There were also metaphysical puzzles. For some, the very idea of ‘spontaneous generation’ smelt of scandal. It contravened the doctrine that God alone could create life and mocked the natural order, opening the door for whimsy and weirdness in the generation of ‘monsters’. Nature, reason taught, was constant and lawlike; hence, was not spontaneous generation as preposterous as centaurs or six-headed cows? Yet certain philosophes of materialist leanings, like Diderot, had a soft spot for spontaneous generation precisely for that reason, since it rendered God the Creator otiose while proving that Mother Nature was fertile, creating novel forms as she went along. Spontaneous generation therefore remained a bone of contention, both experimentally and philosophically. The debate had immediate implications for disease aetiology.
Belief in specificity gained ground in nineteenth-century medicine, thanks to the rise of patho-anatomy. So might not specific animalcules (parasites and bacteria) be responsible for particular diseases? There was no clear-cut evidence for this, partly because of the common assumption that all bacteria were much of a muchness. But that began to be challenged. In 1835, Agostino Bassi (1773–1856), an estate manager from Piedmont who was faced with the devastating silkworm disease, muscardine, argued that the fungus found on dead silkworms contained the cause of the disease; by inoculating healthy silkworms with it he could induce the sickness. Bassi’s conclusions inspired Johann Schoenlein of Zürich (1793–1864) to investigate ringworm. He, too, found a fungus in ringworm pustules, concluding in 1839 that it was the cause of the condition.
Putrescence was also being hotly debated, thanks to Liebig’s fermentation theories and to cell pathology. Working in Müller’s Berlin laboratory, Schwann maintained that yeast cells caused fermentations and showed that heat would destroy the ‘infusoria’ responsible for putrefaction. Persuaded by Bassi, Jacob Henle (1809–85) claimed in Pathologische Untersuchungen (1840) [Pathological Investigations] that infectious diseases were caused by a living agent, probably of a vegetable nature, which acted as a parasite on entering the body: ‘the substance of contagion is not only organic but living, and endowed with a life of its own, which has a parasitic relation to the sick body.’ Broadly anticipating Koch’s postulates, he theorized criteria for testing the pathogenic role of parasites: constant presence of the parasite in the sick, its isolation from foreign admixtures, and reproduction of the specific disease in other animals through the transmission of an isolated parasite.
Splicing together various strands of evidence – clinical, veterinary, epidemiological and zoological – Henle challenged spontaneous generation and miasmatism. A political liberal who, like Virchow, supported the revolutions of 1848, he regarded his findings as a ray of hope, presaging an end to the despairing therapeutic nihilism bedevilling Paris and Vienna; once the causal organisms were found, cures would follow. But his conclusions were slighted as speculative. Liebig’s reading of fermentation and putrefaction as chemical not biological processes carried greater weight and chimed with dominant miasmatism and environmentalism. From 1857, however, the controversy was transformed by Louis Pasteur (1822–95).
PASTEUR
Born in the Jura, the son of a tanner who was a veteran of Napoleon’s grande armée, Pasteur, like all ambitious French lads, went to Paris, getting his education at the École Normale Supérieure. Chemistry was his first love, and from the beginning he displayed the dazzling dexterity that became his trademark, selecting big problems which galvanized his energies, and becoming confident all problems could be solved in the laboratory.
Chemistry led him to biology, through experimentation on tartrates. It was known that tartaric acid (a waste product of wine-making) and racemic acid had identical chemical compositions but different physical properties. The crystals of tartrate compounds were asymmetric; solutions could be produced that rotated polarized light both to the left and the right. Pasteur concluded that such molecular asymmetry fundamentally distinguished living from inanimate things. Thereafter it was the properties of the living that fascinated him. Moving from crystals to life, he began to probe the meanings of micro-organisms, thereby laying the foundations for his abiding ‘vitalism’: a commitment to the irreducible divide between merely chemical and truly living phenomena. Thereafter his mission was to reveal the workings of biology.
Appointment in 1854 to a university chair at the manufacturing centre of Lille led him to study fermentation: the souring of milk, the alcoholic fermentation of wine and beer, the forming of vinegar. Liebig had stated that fermentation was a chemical process, regarding ferments as unstable chemical products. Pasteur promoted the notion of their specificity: fermentation, he held, was the result of the action of particular living micro-organisms. His new inquiries, continued after his return to Paris to take up a chair, centred on the souring of milk (lactic acid) and on the fermentation of sugar into alcohol; by 1860 he had established the biological (rather than chemical) character of fermentation, showing it required such micro-organisms as brewer’s yeast. These organisms could in some cases even live without oxygen, in an atmosphere of carbon dioxide; they were thus ‘anaerobic’.
This research programme, probing the specific actions of micro-organisms, blossomed into some of Pasteur’s most spectacular demonstrations, designed to refute Félix Pouchet’s claim to have established spontaneous generation by means of critical experiments. The biologist Pouchet (1800–72) was himself no mean experimenter, and his doctrine of spontaneous generation, set out in his Hétérogénie (1859) [Heterogenesis], chimed with the reductionist scientific naturalism championed by anticlericals attempting to free science from the supernatural. On grounds both scientific and spiritual, Pasteur, however, discounted the possibility of life arising out of mere matter: apparent proofs, such as Pouchet’s, merely betrayed shoddy lab techniques, and he devised ingenious counter-experiments to prove the essential role of micro-organisms.
As everyone knew, broth in a flask would go ‘bad’ and organisms would appear. Were these, as Pouchet claimed, spontaneously generated? Convinced they came from living agents in the atmosphere, Pasteur devised an elegant sequence of experiments. He passed air through a plug of gun-cotton inserted into a glass tube open to the atmosphere outside his laboratory. The gun-cotton was then dissolved and microscopic organisms identical to those present in fermenting liquids were found in the sediment. Evidently the air contained the relevant organisms. Further experiments showed air could be introduced without infecting a sterile infusion, if the air had previously been sufficiently heated. Thus the organisms present in air were alive and could produce putrefaction, but heating killed them. He next showed that an infusion could be sterilized and left indefinitely open to the air provided the flask’s neck had a convexity pointing upwards: the air could pass up over this swan-neck but the organisms were impeded by gravity. Finally, he showed that micro-organisms were not uniformly distributed in the air. Taking numerous sealed flasks containing a sterile infusion, he broke and resealed the neck of each at a range of different altitudes; unlike in Paris, in calm, high mountain air very few of the flasks showed growth.
No fact is theory-free and incontrovertible. All the same, Pasteur’s experiments were exceedingly impressive and persuasive, and when, in characteristically French manner, the lab war between Pouchet and Pasteur was officially adjudicated by the Académie des Sciences, the ruling came down decisively on Pasteur’s side. (It surely helped that he was a Parisian establishment figure who could play upon conservative and Catholic anxieties that Pouchet’s spontaneous generation was the creed of materialists and anticlericals.)
Developing a sense of duty and destiny, Pasteur marched majestically on to tackle the murky relations between micro-organisms, putrefaction and disease, showing that particular ferments were living forms. Continuing his work for the wine industry, he proved that the micro-organism Mycoderma aceti was responsible for souring wine and that heating it to 55°C eliminated the problem. Later, he applied the same principle to beer and milk: ‘pasteurization’ marked a major step towards the purifying of foods. Since it had been argued by Henle and others that fermentation, putrefaction and infection were related, it required no drastic leap for Pasteur to conclude that disease was a vital process, once he was sure the air was teeming with germs. The first disease he attributed to a living organism was pébrine, which was devastating the French silkworm industry. He showed it was caused by a communicable living organism (a protozoan), and laid bare its life cycle from moth through egg to chrysalis.
On 19 February 1878 before the French Academy of Medicine, Pasteur argued the case for the germ theory of infection. Later that year, in a joint paper with Jules Joubert (1834–1910) and Charles Chamberland (1851–1908), he spelt out his conviction that micro-organisms were responsible for disease, putrefaction and fermentation; that only particular organisms could produce specific conditions; and that once those organisms were known, prevention would be possible by developing vaccines.
In 1879 he put these ideas to the test in investigations of chicken cholera and anthrax, two diseases extremely destructive to French agriculture. He infected healthy birds with ‘stale’ cholera-causing microbes, two weeks or more old, and was intrigued to discover that no serious disease followed. Next he injected these same birds, and some others, with a new culture. Whereas the additional birds fell ill, those previously injected remained healthy. Here was the way to protect chickens against cholera – he had succeeded in immunizing the chickens with the weak, old bacteria culture, which afforded protection when he later gave them fresh, strong samples. Pasteur’s hunch had paid off, but it was he who said that chance favours the prepared mind.
He then applied the same principle to anthrax, a highly contagious condition commonly affecting cattle and sometimes humans. In humans its pulmonary form often afflicted woolsorters, and the disease was conventionally attributed to rural miasmas. Livestock losses were immense, and anthrax was particularly ruinous because it continued to develop in fields from which infected animals had long been excluded.
Fortunately, the groundwork had already been laid. Franz Aloys Pollender (1800–79) and Casimir Joseph Davaine (1812–82) had observed microscopic bacilli in the blood of cattle which had died of anthrax. Robert Koch (1843–1910), soon to emerge as Pasteur’s titanic rival, had also been investigating the disease. Koch had studied medicine at Göttingen under Wöhler and Henle; after serving as a surgeon in the Franco-Prussian War, he took a post as district medical officer (Kreisphysikus) in Wollstein, a small town in Posen (modern Poland), avidly pursuing microbiological researches in his backyard laboratory. Anthrax was severe in Posen. Koch found that under certain conditions the rod-shaped anthrax bacilli (Bacillus anthracis) formed exceedingly heat-resistant spores (small encysted bodies) in the blood. Neither putrefaction nor heat killed them, and they could later develop into bacilli. The persistence of the disease in fields was thus explained: the spores survived in the soil. Koch’s early laboratory work was technically adroit and systematic – the virtues which earned his later fame.
Using Koch’s anthrax bacillus, Pasteur experimented with different time periods to find the way to attenuate its effect, and finally succeeded in producing a vaccine. He then staged a characteristically stunning public demonstration. On 5 May 1881 at Pouilly-le-Fort near Melun, he injected 24 sheep, 1 goat and 6 cows with living attenuated vaccine, leaving a similar number of animals uninjected. He gave the test animals a further and stronger injection on 17 May, and then on 31 May all the animals received a virulent anthrax culture. By 2 June, the control sheep and the goat were all dead and the cattle ill, but the vaccinated animals were fine. The experiment was a striking success, suggesting the possibility of preparing vaccines against diseases by attenuating the infective agent. Such demonstrations gave the germ theory a boost, though Pasteur was concerned less with basic microbiological theory than with concrete investigations, solving problems and contributing to prevention and cures.
Aided by Chamberland and Pierre Emile Roux (1853–1933), in 1880 Pasteur moved on to rabies, a disease dreaded since antiquity because its hydrophobic symptoms were so gruesome and death inescapable. His attempt to find the causative microbe was to no avail – not surprisingly, since the virus can be seen only with an electron microscope. Undeterred, he began his search for a vaccine by injecting rabies-riddled spinal cord tissue into rabbits’ brains. When rabbit after rabbit had been injected with the same virus, a consistent incubation period of about six days was produced. The virus acting in this way was called a virus fixe. He then took the spinal cords of rabbits killed by this fixed virus and dried them; a cord dried for two weeks became almost non-virulent. In 1884 he made a series of 14 graduated vaccines and set up an experiment with 42 dogs: 23 received 14 injections each, one injection a day, starting with the weakest vaccine and ending with the strongest; the remaining 19 dogs were the controls, receiving no injections. At the end of two weeks, all the dogs were exposed to the rabies virus. None of the 23 immunized got the disease, whereas 13 of the control dogs did. A way had been found to give dogs immunity to rabies; Pasteur later showed that, because the incubation period was lengthy, vaccination worked even if the dogs had been infected for some time.
The moment of truth came on 6 July 1885, when Joseph Meister was brought to his doorstep. Two days before, this nine-year-old boy had been bitten fifteen times by a dog thought to be rabid, and a doctor had told the boy’s mother to try Pasteur. He took the risk: he ordered a fourteen-day series of increasingly virulent (and painful) injections, and the boy stayed well. So did a second case treated three months later, a fourteen-year-old shepherd lad, Jean-Baptiste Jupille, from Pasteur’s home-district of the Jura, who had been severely bitten as he tried to protect other children from a rabid dog.
These dramatic human interest events, expertly handled by Pasteur who had a flair for publicity and a way of presenting his experiments as more successful and conclusive than they were, captured the world’s imagination and vindicated the role of experimental biology. Over the next fifteen months, the vaccine was given to well over two thousand people, and his rabies procedure became standard, with about 20,000 people worldwide being treated during the next decade. Though Pasteur won lavish praise, criticism was levelled as well, on the grounds that he was injecting perhaps perfectly healthy people (not all those bitten by rabid animals develop rabies) with what might prove an unsafe virus. His confidence was posthumously vindicated in 1915, however, when a ten-year study revealed that, of 6000 people bitten by rabid animals, only 0.6 per cent of those vaccinated had died, compared with 16 per cent of the rest.
On a wave of national enthusiasm created by rabies immunization, the Institut Pasteur was set up in 1888, and donations flooded in; appropriately Joseph Meister became the gatekeeper. When Pasteur died seven years later, he was buried in his Institute, consecrated as a shrine to medical science.
KOCH
Pasteur was a wizard, both within the lab and beyond, but bacteriology’s consolidation into a scientific discipline was due mainly to Robert Koch (1843–1910) and his team and pupils, whose painstaking microscopic work definitively established the germ concept of disease and systematically developed its potential.
By formalizing the procedures for identifying micro-organisms with particular diseases, and by his insistence upon pure cultures, Koch elevated bacteriology into a regular science, rather as Liebig had normalized organic chemistry and Müller and Ludwig, physiology. Koch’s paper on the aetiology of infectious diseases (1879) – a testament to his method and orderliness – launched upon the daunting task of discriminating among bacteria, connecting micro-organisms with particular effects, and settling the old question of whether bacteria were the cause of infection or simply background noise. It also offered an early formulation of what came to be known as Koch’s Postulates. Formalized in 1882, these stated that to prove an organism was the cause of any disease, it was necessary to demonstrate
1 That the organism was discoverable in every instance of the disease;
2 That, extracted from the body, the germ could be produced in a pure culture, maintainable over several microbial generations;
3 That the disease could be reproduced in experimental animals through a pure culture removed by numerous generations from the organisms initially isolated;
4 That the organism could be retrieved from the inoculated animal and cultured anew.
These conditions could mostly be fulfilled, though some pathogenic entities, notably viruses, had to be accepted without meeting them all. The thinking behind these rigorous postulates, and their applicability, boosted the dogma of specific aetiology – the idea that a disease has a specific causative agent, with the implication that once this agent has been isolated, it will be possible to control the disease.
In isolating specific bacterial strains, artificial cultivation in liquid media had served Pasteur perfectly well. As superior microscopic techniques revealed the distortions these produced, Koch looked for solid culturing media, beginning by growing bacteria colonies on a potato slice and later solidifying the standard broth by adding gelatin. This liquefied at body temperature, but that problem was solved by using agar-agar, an extract of Japanese seaweed, to solidify the culture medium on a special dish devised by Richard Julius Petri (1852–1921).
Koch scored his first great triumph on 24 March 1882, in revealing before the Berlin Physiological Society the bacillus causing tuberculosis, Mycobacterium tuberculosis, and thus at last settling the vexed question of its aetiology. In the following year, with another cholera pandemic heading Europe’s way, he was sent to Egypt to investigate, arriving hard on the heels of a French team headed by Pasteur’s colleague, Roux. The latter used the classic Pasteurian method, which was to reproduce the disease in animals and then look for the organism; but the method failed, because cholera affects only humans. Working directly on cholera victims, Koch isolated and identified Vibrio cholerae (the comma bacillus) in Alexandria in 1883 and more convincingly the next year in India, showing the bacillus lived in the human intestine and was communicated mainly by polluted water – thus vindicating fully the work of John Snow. He then went on to Calcutta, where he confirmed his findings, and in February 1884 reported his success to the German government, amid tremendous jubilation: first tuberculosis, then cholera!*
Koch became burdened with success, his research declined, and to offset that he turned oracle. The methods he had pioneered proved their worth, however, leading to the rapid discovery, largely by his own pupils, of the micro-organisms responsible for diphtheria, typhoid, pneumonia, gonorrhoea, cerebrospinal meningitis, undulant fever, leprosy, plague, tetanus, syphilis, whooping cough and various other streptococcal and staphylococcal infections.
Pasteur’s dramatic success with the anthrax and rabies vaccines had fuelled expectations of instant therapeutic breakthroughs. All that was needed, it seemed, was to isolate the relevant micro-organism in the laboratory, and an appropriate vaccine would follow as the night the day. In the event, success proved mixed and often completely elusive. Two early developments provided, respectively, a dazzling victory and a dramatic setback.
The triumph was diphtheria, a disease spread through droplet infection and producing fever, sore throat and a hard cough. A leathery membrane forms on the tonsils and palate, blocking the airways and often causing death. Especially in great cities, diphtheria assumed pandemic proportions after 1850. The death rate was high, and at times diphtheria was the principal cause of death among children. In New York in the 1870s, over 2000 children a year were dying of it.
In his Des inflammations spéciales du tissu muqueux (1826) [Special Inflammations of the Mucous Tissue], Pierre-Fidèle Bretonneau (1778–1862), an early advocate of germ theory, had distinguished it as a specific disease, coining the word diphtérie from the Greek for leather (diphthera), alluding to the choking tissue produced in the throat. In 1883 Theodor Albrecht Edwin Klebs (1834–1913), a pupil of Virchow, isolated and described its specific organism, the diphtheria bacillus (Corynebacterium diphtheriae), a rod-shaped bacterium. Friedrich Loeffler, one of Koch’s assistants, then succeeded in cultivating it. (He also discovered the rod-shaped bacillus in healthy children, one of the observations that led to the concept of the carrier.)
Once the cause was known, the bacillus’s action in the human system had to be established. Between 1888 and 1890, brilliant laboratory investigations in France by Roux and Alexandre Yersin (1863–1943), and in Germany by Karl Fraenkel (1861–1901), Emil Behring (1854–1917) and his Japanese colleague Shibasaburo Kitasato (1852–1931) resolved the problems. Roux and Yersin showed that the diphtheria bacillus, when inhaled, lodged in the throat or windpipe and there produced a toxin which passed into the blood-stream. This permitted definitive diagnosis.
In December 1890 Fraenkel showed that attenuated cultures of diphtheria bacilli, injected into guinea pigs, produced immunity. Working with Kitasato in Koch’s Institute, Behring announced that the blood or serum of an animal rendered immune to diphtheria through the injection of the relevant toxin could be used to treat another animal exposed to the disease. Immune animals could be prepared by challenging them with gradually increasing doses of either bacillus or toxin.
Such a diphtheria antitoxin (a toxin-resisting substance) was first used on a child in a Berlin clinic on 25 December 1891. This dramatic Christmas rescue, outpasteuring Pasteur (how the modern media would have loved it!), proved a success. Serum production began, and its introduction in 1894 into Berlin hospitals brought an instant plunge in diphtheria mortality. Meanwhile in Paris, Roux and Yersin made large-scale serum production possible by using horses as sources of antitoxin. The French serum was introduced into England by Joseph Lister; diphtheria antitoxin came into general use about 1895, and within ten years the mortality rate had dropped to less than half (the epidemic was in any case spontaneously waning).
Especially once the Hungarian Béla Schick (1877–1967) developed the test bearing his name to identify the presence of immunity, large-scale immunization programmes were undertaken. In New York, the death rate had peaked at 785 per 100,000 in 1894; by 1920, it had dipped to under 100. By 1940, with 60 per cent of pre-school children immunized, diphtheria deaths had become a thing of the past.
The campaign brought a famous victory and, because, like rabies, it also involved children, it provided further superb publicity for the new bacteriology. New scientific possibilities had been opened up since – in contrast to Pasteur’s live vaccines – it had now been shown that the cell-free serum of immunized animals could kill virulent bacteria, and protection could be transferred via serum from animal to animal. (This suggested that it was not simply the bacterial cell itself that caused disease, but a toxin it yielded.) On this putatively safer basis serum therapy was launched, with the production of antitoxins not just for diphtheria but also for tetanus, plague, cholera and snake bites. Yet serum therapy encountered problems of its own, for antitoxin production was impossible to control, and supplies varied in strength and purity. Occasional deaths of patients receiving antitoxin proved shocking, and serum sickness (fever, rash and joint pains) was a common side-effect. Apart from such practical troubles, profound questions were surfacing about the nature of the body’s reactions to micro-organisms and chemicals.
If diphtheria was the dramatic therapeutic success, the dispiriting failure was tuberculosis, potentially the gold medal for the new science. Consumption had become the single largest cause of adult deaths in the West. Thanks to the Paris school, cases could reliably be clinically diagnosed. Laennec and Bayle had unified the disease, and in 1839 J. L. Schoenlein, professor of medicine at Zürich, named the whole complex ‘tuberculosis’, since the tubercle seemed to be its anatomical root.
But its cause remained obscure and hotly disputed – was it hereditary, constitutional, environmental, or contagious? The received wisdom was that an ‘innate susceptibility’ or a ‘diathesis’ was to blame. Despite an army of ‘cures’, ranging from blistering to living in cowsheds to inhale the breath of cattle, and the new faith in the sanatorium on the magic mountain, tuberculosis seemed a good justification for therapeutic nihilism: ‘I know the colour of that blood! It is arterial blood. I cannot be deceived in that colour. That drop of blood is my death warrant. I must die,’ cried John Keats, on first coughing up blood – and how right he was. Some survived for a long time, and some recovered spontaneously; but no realist thought medicine cured the disease.
The idea that tuberculosis was communicable, though mainly rejected, had its advocates. William Budd (1811–80), best known for his work on typhoid fever, argued for contagiousness on the basis of epidemiological studies, and the French physician Jean Antoine Villemin (1827–92) attempted to confirm this by inoculating rabbits and guinea pigs with sufferers’ blood, sputum, and secretions – work paralleled in Germany by Virchow’s pupil Julius Cohnheim (1839–84). Villemin also argued for cross-contagiousness between humans and cattle, but his work had little immediate impact; attempts to repeat his rabbit experiments were inconclusive, and many mysteries remained. Rebutting Laennec, Virchow maintained that pulmonary and miliary tuberculosis were quite different diseases, though here he perhaps betrayed a chauvinism that killed three birds with one stone: denigrating both Paris and Pasteur, and voicing his perennial scepticism towards bacteriology.
Koch made the dramatic breakthrough. Having cultured a specific microbe apparently associated with tuberculosis, in 1882 he provided solid evidence from animal experiments, conformable with his ‘postulates’, that the tubercle bacillus was the specific cause of the disease. Then, after years of travelling and official duties connected with his prestigious Institute for the Study of Infectious Diseases, he began to work in the laboratory again, with great intensity and secrecy, perhaps feeling the need to eclipse Pasteur with one great therapeutic coup. In August 1890 all was revealed in a speech before the Tenth International Congress of Medicine in Berlin: Koch had found a substance which arrested the growth of the tubercle bacillus in the test-tube and in living bodies, referring to his agent, which he called ‘tuberculin’, as a ‘remedy’ and thus leading the world to believe he had a TB cure.
Dazzling publicity followed, and Koch was fêted. Before tuberculin’s efficacy and safety had been evaluated, the Kaiser personally conferred upon him the medal of the Grand Cross of the Red Eagle, and he received the freedom of the city of Berlin. Despite Germany’s law prohibiting ‘secret medicines’, Koch avoided disclosing the nature of tuberculin. Sent to Berlin to report for the press, Arthur Conan Doyle (1859–1930) paid a call on Koch’s son-in-law and found his office knee-deep in letters begging for the miraculous remedy; the whole business was like Lourdes.
Within a year thousands had received tuberculin treatment, without system or controls. It seemed to help some patients in the first stages of lupus (tuberculosis of the skin), but experience quickly showed that tuberculin was useless or even dangerous for patients with pulmonary tuberculosis. The fiasco brought a violent backlash, with denunciations of Koch and his secret remedy. A study prepared for the German government found little evidence to justify the claims made for tuberculin. Koch was rumoured to have sold his ‘secret’ to a drug company for a million marks, to help finance his divorce and remarriage.
In a paper published in January 1891, Koch at last revealed the nature of his remedy: tuberculin was nothing but a glycerine extract of tubercle bacilli. He was accused of divulging the great secret only when it had become obvious that tuberculin was financially worthless. He disappeared to Egypt with his young bride, leaving his underlings to cope with the débâcle.
To the end of his life, he continued to express the hope that an improved form of tuberculin would serve as an immunizing agent or cure. He was mistaken, though it did prove to have a use – not as a cure but as a diagnostic aid in the detection of early, presymptomatic tuberculosis. In the heroic tradition of the time, Koch had tested tuberculin on himself: his strong reaction indicated that, like most of his contemporaries, he had not escaped a ‘touch of tuberculosis’; and what he had stumbled upon was the complex immunological phenomenon now called delayed-type hypersensitivity. The tuberculin test was put into service, and microbiology laboratories were able to help the physician monitor the patient’s status by analysing throat cultures or sputum samples.
Koch made a further blunder: he wielded his authority to scotch Villemin’s case that bovine and human tuberculosis were very similar. Human tuberculosis, Koch insisted, could not be transmitted to cattle, nor could bovine tuberculosis be communicated to humans. In this he was wrong, and only when his mistake was undone could it be recognized that transmission of tuberculosis from cattle to humans was a serious problem. This led to measures to purify milk through pasteurization and tuberculin tests. His latest biographer has concluded that Koch ‘ended his career as an imperious and authoritarian father figure whose influence on bacteriology and medicine was so strong as to be downright dangerous’.
Despite the tuberculin débâcle, the search continued for ways of immunizing against tuberculosis. Attempts to protect individuals by injecting them with tubercle bacilli, killed or treated, had no success until a new method was developed by Albert Calmette (1863–1933), of the Pasteur Institute, and his collaborator Jean Marie Guérin (1872–1961). From 1906 they used living bacilli from a bovine strain of the tubercle bacillus so attenuated as to have lost their disease-producing properties while retaining their protective reaction. The vaccine was given the name BCG (bacille Calmette-Guérin); it was first used for inoculating calves and then, from 1924, after delays caused by the First World War, was extended to humans. By 1928 it had been successfully given to 116,000 French children, though its efficacy remained controversial. With medicine thoroughly tainted with nationalism, Germany declined to approve BCG, as did the USA; in Britain its uptake was dilatory, but it was used successfully in Scandinavia, where it markedly reduced the death rate. After the Second World War, the BCG vaccine was central to a huge Danish Red Cross vaccination programme in war-devastated Europe.
The great infectious diseases were targeted by the new bacteriology with mixed success: discovery of the infective agent by no means always led to effective therapies. Nevertheless, in the twenty-one golden years between 1879 and 1900 the micro-organisms responsible for major diseases were being discovered at the phenomenal rate of one a year. Typhoid was one.
By 1837 the distinction between typhoid fever and typhus fever had been established, and the typhoid micro-organism was isolated in 1884 by Koch’s pupil, Georg Gaffky (1850–1918). Immunization against typhoid was introduced by Almroth Wright (1861–1947) in 1897, but its efficacy was disputed by the statistician Karl Pearson (1857–1936) and only a fraction of the British troops received it during the Boer War; in South Africa 13,000 men were lost to typhoid as against 8000 battle deaths. Controversy raged until a special anti-typhoid commission reported favourably in 1913; the army then adopted a policy of vaccinating all soldiers sent abroad. The results were dramatic: whereas in the Boer War typhoid incidence was around 10 per cent with a mortality of 14.6 per 1000, in the Great War incidence was down to 2 per cent, with a minuscule death rate. Because of the presence of paratyphoid fever on the eastern fronts, killed cultures of paratyphoid bacilli A and B were added to the vaccine, so that it became known as T.A.B.
Another success, proved in the First World War, came with tetanus. This extremely dangerous disease (the death-rate is above 40 per cent) is caused by tetanospasmin, a toxin secreted by the bacterium Clostridium tetani, which lives in the soil. The bacillus enters the body through agricultural cuts and battlefield wounds, and the toxin travels along nerve fibres towards the spinal cord. Sweating and headaches are followed by increasingly severe muscular spasms in the head and neck (lockjaw). Though the disease had been known since Hippocrates, nothing could be done about it until the bacteriological era. The tetanus bacillus was discovered, like so many others, in the 1880s. Arthur Nicolaier (1862–1942) produced it in mice by inoculating them with garden earth; Kitasato grew it in a pure culture in Koch’s laboratory in 1889, leading to the production of antitoxin. (He also found that it grew when deprived of oxygen – an early addition to the group of anaerobic bacteria whose existence Pasteur had discovered in 1861.) Tetanus became a serious problem at the outset of the 1914–18 war, when the bacillus entered the body through gaping shell wounds. From 1915 practically every wounded soldier received antitoxin, and tetanus was dramatically reduced.
Some progress was also made with plague. The bacillus was discovered independently by Kitasato and Yersin during the Hong Kong epidemic in 1894. The Swiss-born Yersin had studied in Paris, becoming Roux’s assistant and publishing papers with him on diphtheria before leaving to satisfy his wanderlust in the Far East. He returned to bacteriology, but in the colonial context, going to Hong Kong to investigate the plague epidemic spreading from China.
In June 1894, more or less simultaneously with Kitasato, Yersin isolated the plague bacillus now known as Yersinia pestis, reproducing the disease experimentally in healthy rats and transmitting it from rat to rat. ‘The plague’, he wrote dryly, ‘is contagious and inoculable. The rat probably is the principal vector and one of the most promising prophylactic measures would be extermination of rats.’ It had long been observed that outbreaks of a deadly disease among vermin preceded outbreaks of plague in humans; these epizootics which preceded epidemics finally became recognized as being due to the plague bacillus, conveyed via the Xenopsylla cheopis flea.
Exploiting this discovery, however, posed further problems. Bacteriologically, plague differs from diphtheria in that the organisms, instead of remaining localized, multiply rapidly throughout the body. The filtrate of a culture of plague bacilli was not very toxic and so conferred no immunity. The first vaccine made from killed cultures of plague bacilli came from the Russian, Waldemar Haffkine (1860–1930), an ex-pupil of Pasteur working in British government service in India, one of the world’s plague centres. Its success was rather limited, and nothing helped much before antibiotics.
A significant breakthrough in understanding and management also followed with undulant fever, a disease involving fever with muscle and joint pains. Many of the British sick and wounded in the Crimean War, shipped to Malta to recover, had contracted this condition. In 1887 Major David Bruce (1855–1931) isolated the causative organism of ‘Malta fever’. The organism was of the spherical or coccus type, and was called Micrococcus melitensis (from Melita, Latin for Malta). Goats were found to be highly susceptible, excreting the organism in their milk, and a ban on drinking goats’ milk produced a dramatic fall in the disease. Ten years later the Dane Bernhard Bang (1848–1932) independently described a very small bacillus found to cause contagious abortion in cattle. This Bacillus abortus also caused an obscure and persistent condition in humans; named undulant fever, it was common in the Mediterranean. In 1918 it was concluded that Bruce’s Micrococcus melitensis and Bang’s Bacillus abortus were closely related. A new genus name, Brucella, was coined in Bruce’s honour, and the diseases caused by these organisms became known as brucellosis – another triumph for British terminological imperialism. The health gains following this discovery were limited; the British garrison on Malta was protected from contaminated milk, but no efforts were made to reduce the incidence of brucellosis among the local population.
Though by any criteria bacteriology had a dazzling string of successes to its credit, certain diseases proved refractory. One was scarlet fever, a dreadful killer of infants throughout the nineteenth century. Streptococci were first isolated from the blood of scarlet fever patients by Edward Klein (1844–1925) in 1887, but he was unable to reproduce the disease in animals. And while streptococci could be recovered from the throats of scarlet-fever patients, the next steps – showing, following Koch’s postulates, that the bacterium was the true cause of the disease and then producing a vaccine – were stymied. The streptococcus was found to be pathogenic for various laboratory animals, but on injection it hardly ever produced typical scarlet fever.
In 1924, George (1881–1967) and Gladys Dick (1881–1963), at the University of Chicago, identified haemolytic streptococcus as the causal agent and succeeded in infecting volunteers after swabbing their throats with a culture obtained from scarlet fever patients; they also established a test for immunity (the Dick test). But, as with many other communicable diseases, what brought its decline was not a therapeutic breakthrough but a healthier environment and improving patient resistance.
DEBATES OVER IMMUNITY
Whereas Pasteur developed attenuated live vaccines, German researchers pioneered serum therapy. They turned their attention from cellular to so-called ‘humoral’ immunity once it was shown that animals could be made immune to the toxins produced by diphtheria and tetanus bacilli, thanks to injections of immune serum. Their view that a bactericidal property resided in the serum was opposed by Élie [Ilya Ilyich] Metchnikoff (1845–1916), the Russian pathologist appointed in 1887 as sub-director of the Pasteur Institute, who developed a counter-theory. This dispute became the scientific expression of Franco-German and Russo-Japanese rivalries.
How did the body develop immunity to protect itself against organisms? Recognition had been growing from the mid nineteenth century that normal blood could destroy bacteria, but little was understood of how that happened: Pasteur preferred vaccines to theories. In 1884 Metchnikoff observed a phenomenon which suggested a cellular theory of immunity and resistance. He saw amoeba-like cells in water fleas and other lower organisms ‘ingesting’ foreign substances like fungi. These cells, he concluded, might be similar to the pus cells in the inflammatory response of higher organisms. Microscopic observations on animals infected with various micro-organisms, including the anthrax bacillus, revealed white blood cells attacking and appearing to digest these disease germs, ‘fighting infection’ like soldiers. Pasteur gave Metchnikoff’s ideas his nod, while Koch and most German bacteriologists demurred; Koch even suggested that white blood cells might be more like a fifth column through which germs spread into the organism. Metchnikoff’s cellular immunity theories became connected with the French school, and chemical theories with the German view that germ wars were waged less by the blood cells than by the serum.
Metchnikoff styled the cells which ingested micro-organisms ‘phagocytes’ (from the Greek phagein, to eat, and kutos, cell). Macrophages was his name for the large mononuclear cells of the blood and tissues which ingested foreign particles; microphages were the leucocytes of the blood, active in ingesting micro-organisms. In what became the cellular (phagocytic) theory of immunity, he showed that one special kind of white cell, the granulocyte, ate bacteria, and also that the body’s supply of such cells multiplied when infection struck. His views constitute perhaps the first model of immune response.
The alternative serum or humoral theory viewed infections as caused by bacilli-produced toxins; filtrates of these, containing no organisms, caused disease when injected into animals, the bacillus producing its effects through exotoxins in the filtrate. But the serum of treated animals equally acquired the property of neutralizing toxin: Behring and Kitasato called this property ‘antitoxic’.
By 1890 scientists had thus identified both a cellular and a serum system. Koch’s tuberculin work pointed to a third – a group of smaller, light-staining white cells different from Metchnikoff’s larger, dark-staining white granulocytes. These became known as lymphocytes. The body thus appeared to have an immune system made up of various elements which worked by combining forces. This possibility was strengthened in 1895 when two Belgian biologists, Joseph Denys and Joseph Leclef, modified Metchnikoff’s views. The Russian held that the leucocytes from an animal immunized against a certain organism actively engulfed that organism (phagocytosis). Working with streptococci, they showed that, if the leucocytes from a treated animal were placed in immune serum, the resultant phagocytosis was exceptionally active.
These ideas were developed further by Almroth Wright, director of the Institute of Pathology at St Mary’s Hospital in London, a larger-than-life figure caricatured on stage as Sir Colenso Ridgeon in George Bernard Shaw’s The Doctor’s Dilemma. Wright held that the action of both normal and immune serum was due to the presence of certain substances which promoted phagocytosis. Likening these to a sauce making the bacteria more tasty for the leucocytes, he called them opsonins (Greek, opsonein, to prepare food), these being antibodies facilitating phagocytosis. The level of opsonic activity could be seen as a measure of a patient’s defences against bacterial infection – hence the slogan Shaw put into Ridgeon’s mouth: ‘Stimulate the phagocytes!’ The Englishman’s work on opsonins appeared to marry the chemical (German) and cellular (French) theories of immunity, though his limitless faith in immunization (‘the physician of the future will be an immunizer’ he predicted) proved unjustified.
All such antigen-antibody reactions (as they were later called) had certain features in common, protecting the individual against bacterial poisons. But it was found that comparable reactions could occur which were harmful rather than preservative. With diphtheria antitoxin treatment, some patients developed serum sickness (drowsiness, sweating and rashes), something first studied in Vienna by Clemens von Pirquet (1874–1929), and his assistant Béla Schick. Examining reactions to substances such as pollen, von Pirquet decided they were due to antigen-antibody reactions and coined the term ‘allergy’ to indicate the hypersensitive state producing abnormal reactions to certain foreign substances. Allergic reactions had been known since the Greeks; John Bostock (1773–1846) had coined the term ‘summer catarrh’ (hay fever), and John Elliotson (1791–1868) identified pollen as the agent; but the cause of such reactions had remained mysterious.
Bacteriological investigations of resistance and immunity also brought to light the baffling question of the carrier. Experience showed that the diphtheria bacillus sometimes persisted in the throats of convalescent patients; in 1900 a case was reported of a healthy individual passing typhoid bacillus in his urine, and some persons convalescing from enteric fever still excreted the organism, forming a worrying source of further infection. It was soon realized that some carriers could excrete it for many years, the most notorious being the Irish-born ‘Typhoid Mary’ who, though well herself, infected many people in New York with enteric fever between 1900 and 1907. The mechanisms of immunity were evidently more complicated than anyone had surmised, and early military images of gunning down ‘invading’ micro-organic pathogens obviously needed refinement.
CHEMOTHERAPY
Chemical theories of response were systematized by Paul Ehrlich (1854–1915), from 1899 director of the Royal Prussian Institute for Experimental Therapy in Frankfurt-am-Main. A truly seminal thinker, Ehrlich had a personal interest in these matters, since he had discovered tubercle bacilli in his sputum, had tried Koch’s tuberculin therapy and had spent a year in Egypt convalescing. He drove immunity investigations one stage further by developing chemotherapy, pinning his faith on the creation of artificial antibodies.
Treatment by natural drugs, above all herbs, goes back to the dawn of medicine; experience showed that certain substances had therapeutic properties. Paracelsus had proclaimed specific remedies for specific diseases, and Sydenham had hoped that one day every disease would have its own remedy, on the model of the Peruvian bark for malaria. From time to time new medications had been hit upon, as with the Revd Edmund Stone’s discovery of willow bark, which was the first stage on the road to aspirin.*
As shown in Chapter 11, the study of materia medica developed during the nineteenth century into laboratory-based pharmacology. Meanwhile drugs research and manufacturing became inseparably linked. The booming chemical industry developed pharmaceutical divisions, often as a sideline of the thriving dyestuffs business. In Britain W. H. Perkin (1838–1907) isolated mauve (aniline purple) from coal tar in 1856, but it was German entrepreneurs who excelled in exploiting dyes and organic chemistry.
Drug production became industrialized, with many of the companies appearing that later dominated the field. In 1858 E. R. Squibb opened a laboratory to supply medicines to the US army. Benefiting from the Civil War, his firm expanded rapidly, producing pure ether and chloroform, and using steam power for pulverizing drugs. The Eli Lilly Company was founded in Indianapolis in 1876; Merck and Company, a branch of a leading German chemical firm, opened in the United States in 1891; Parke, Davis & Company, formed in 1867, established one of the earliest research institutes in 1902.
Technological advances helped the drugs firms. Mass-production of sugar-coated pills started in France, being refined in 1866 by William R. Warner, a Philadelphia manufacturer who also began production of small pills (parvules). The gelatin capsule was developed, being brought into general use about 1875 by Parke, Davis. Capsules not only made medicine easier to swallow, they ensured a precise dose. Mechanization also made the tablet possible. A tablet-compression machine was introduced in England by William Brockedon in 1843 and in the USA by Jacob Denton in 1864.
Henry Wellcome (1853–1936) was born in Wisconsin, the son of a travelling Second Adventist minister. Inspired by a doctor uncle, he went into pharmacy, sweated as a travelling salesman (peddling pills, not salvation), and hitched up in his mid twenties with Silas Burroughs (1846–1895), who had the capital Wellcome lacked. Burroughs was the first American to bring medicines to Britain in mass-produced, machine-made tablets. Setting up in Holborn, Burroughs, Wellcome and Co. procured the British patent for the process, inventing ‘Tabloid’ as their trade-mark (the term’s application to newspapers came much later).
Developing its research side, the pharmaceutical industry joined hands with academic pharmacology, whose institutional development followed the familiar German path. Institutes, notably those at Dorpat and Bonn, produced research schools employing chemists and physiologists. By 1900, pharmaceutical manufacturers were turning discoveries made in university laboratories to profit. Such cooperation between science and commerce was not always plain-sailing: industrial patenting and profit-seeking potentially clashed with the ideals of open scientific inquiry. When John Jacob Abel (1857–1938) and some academic colleagues established the American Society for Pharmacology and Experimental Therapeutics (1908), they excluded anyone in the permanent employ of a drug firm.
Wellcome ran into similar problems in Britain when he sought registration for animal experimentation at his Wellcome Physiological Research Laboratories, set up in 1894. Although he maintained his laboratories were independent of his drug firm, they were financed out of company profits and in practice linked with the manufacturing side. With the backing of key members of the British medical establishment, however, he obtained the necessary Home Office authority for animal experiments, and other British pharmaceutical firms followed, as animals were used to raise antitoxins and test products.
The symbiosis between science and industry was closest in Germany: Ehrlich’s Frankfurt Institute research laboratories had ties with the Hoechst and Farbwerke Cassella companies. In his quest for chemical cures, Ehrlich thus had a long tradition of pharmaceutical developments and microbiological investigations to draw on. His vision lay squarely within the framework of the new bacteriology, taking the idea of natural antibodies and transferring it to synthetic drugs. The idea had already been present in his doctoral thesis, which held that specific chemicals could interact with particular tissues, cells or microbial agents. Systematically exploring the range of dyes manufactured by the German chemical industry – dyes were evidently promising because, as histological staining made clear, their action was specific, staining some tissues and not others – Ehrlich was intrigued by the molecular (stereochemical) aspects of physiological and pharmacological events. Above all, he believed chemical structures were crucial to the actions of biologically active compounds, and that they could not affect a cell without being attached to it: corpora non agunt nisi fixata (substances do not act unless they become fixed) was one of his adages. A ‘receptor’ was a structure that received a dye. If there were dye receptors, why not drug receptors? Ehrlich began looking for substances fixed by microbes but not by the human host.
His first contributions to immunity theory came in the 1890s. Pondering how tetanus antitoxin actually worked, he advanced a series of significant hypotheses. Each molecule of toxin combined with a particular, invariant amount of antitoxin; the toxin-antitoxin connection involved groups of atoms fitting together like a key in a lock; tetanus toxin became bound to the cells of the central nervous system, attaching itself to the chemical ‘side-chains’ on the cell protoplasm, thereby blocking their physiological function. This blockage led the cell to produce fresh side-chains to compensate for what was blocked. These were the antibodies produced by toxin action.
Ehrlich’s side-chain, or chemical affinity, theory was based on the assumption that the union of toxin and antitoxin was chemical in nature, involving agents specifically toxic for particular bacteria, which would have no effect on the host. An antibody in the blood, produced in response to a certain micro-organism, was specific for that organism and highly effective in killing it, but harmless to the host. Antibodies (nature’s remedies) were magic bullets which flew straight to their mark and injured nothing else. The challenge was thus to find chemical equivalents tailormade for a particular organism and non-injurious to its host. Chemotherapy would be the discovery of synthetic chemical substances acting specifically on disease-producing micro-organisms.
Guided by this model of antigen-antibody reactions, Ehrlich set out to find agents specifically bound to and toxic for particular bacteria. In 1891, with quinine’s action in mind, he treated malaria with methylene blue, one of the aniline dyes – the first instance of Ehrlichian chemotherapy; the results, he thought, were promising. The next targets for his new chemotherapy were the trypanosomes, the causative agents of sleeping sickness. For these he tried a drug called atoxyl and similar arsenical compounds; atoxyl was quite effective, but caused neurological damage and blindness as side-effects.
Next he turned to syphilis. That disease had seemingly become more virulent again in the nineteenth century; certainly it was a disease of the famous, including Baudelaire and Nietzsche, the myth being popular among the avant garde that it contributed to genius, providing drive and restless energy. Many writers were positively exultant at getting poxed (or were good at putting a brave face on it). ‘For five weeks I have been taking mercury and potassium iodide and I feel very well on it,’ boasted Guy de Maupassant in 1877:
My hair is beginning to grow again and the hair on my arse is sprouting. I’ve got the pox! At last! Not the contemptible clap . . . no-no – the great pox, the one Francis I died of. The majestic pox . . . and I’m proud of it, by thunder. I don’t have to worry about catching it any more, and I screw the street whores and trollops, and afterwards I say to them, ‘I’ve got the pox’.
The natural history of syphilis had been clarified. In 1837 Philippe Ricord (1800–1889) established the specificity of syphilis and gonorrhoea through a series of experimental inoculations from syphilitic chancres. He also differentiated primary, secondary and late syphilis, the three stages of infection. In 1879 the German bacteriologist Albert Neisser (1855–1916) identified the gonococcus causing gonorrhoea, and in 1905 the micro-organism causing syphilis was discovered by Fritz Schaudinn (1871–1906) and Erich Hoffmann (1868–1959); found in chancres, this spiralling threadlike single-celled organism was named the Spirochaeta pallida (since renamed the Treponema pallidum). Diagnostic screening was made possible in 1906 when August von Wassermann (1866–1925) developed a specific blood test. Despite these substantial advances in knowledge, no therapeutic advances had been made upon the wretched mercury, in use since the sixteenth century. Arsenical compounds such as atoxyl were mildly effective but injurious.
Seeking a chemical cure, by 1907 Ehrlich had synthesized and tested over 600 arsenical compounds. He took out a patent on Number 606, but went no further. In 1909 the Japanese bacteriologist Sahachiro Hata (1873–1938) began work as his assistant and retested the whole series of synthetic preparations for their action on the Treponema. It became clear that 606 was very active. After two physicians had volunteered as guinea-pigs, Ehrlich’s collaborators began intramuscular injections of 606 on some of their most hopeless patients, and were surprised at the improvements engendered by a single injection. By September 1910 about 10,000 syphilitics had been treated with Preparation 606, by then named Salvarsan. It transformed syphilis treatment, especially once it was used in the modified form of Neo-Salvarsan (1914), now called neoarsphenamine. This represented a considerable advance, but it was toxic and still required many painful injections into the bloodstream over a long period before a cure was complete – the ‘magic bullet’ didn’t cure syphilis ‘like magic’.
Once Salvarsan was discovered, would not other chemical magic bullets follow rapidly? Though plausible, that hope proved wrong. Many compounds, including some new synthetic dyes, were tried against the common bacterial diseases (the cocci and bacilli), but without success. Chemotherapy came to seem, after all, an impossible dream. Well into the twentieth century, for most infections there were no effective therapies; ancient and useless remedies like emetics were still prescribed; as late as the 1920s, the Yale physiologist H. W. Haggard (1891–1959) confessed that medicine could ‘do little to repair damage from diseases’. The only effective chemotherapeutic substances, as distinct from painkilling drugs like morphia, were mercury, Salvarsan and its variants, antimony (for schistosomiasis) and quinine. Quinine’s action was still little understood; it was thought to have a selective affinity for malaria parasites in the blood, but in laboratory experiments it was hardly active in killing the malaria parasite. This suggested that the action was not a direct destruction of the parasites but a change produced in the body tissues inhibiting further parasite development. The situation changed, however, in 1935, when Gerhard Domagk (1895–1964) published his experiments with Prontosil.
Searching, like Ehrlich, for chemical remedies, Domagk devoted his early years to testing the therapeutic potential of metal-based compounds – gold, tin, antimony and arsenic. None worked: their antibacterial actions were too weak or their toxic side-effects too strong. In 1927 he was appointed research director of I. G. Farbenindustrie, the chemical company which had absorbed such familiar names as Bayer and Hoechst. Since his firm’s main products were azo dyes used for colouring textiles, he decided, like Ehrlich, to see whether they had any negative effect on streptococci, organisms that produce infections including erysipelas, tonsillitis, scarlet fever and rheumatism. In 1932 he found that one azo compound, Prontosil red, a brilliant red dye, cured mice injected with a lethal dose of haemolytic streptococci. Domagk successfully treated his own daughter with it for a streptococcal infection.
Scientists at the Pasteur Institute in Paris obtained Prontosil samples for investigation. Synthesizing the drug, they verified Domagk’s results, and found it worked when the compound split into two parts within the body, and that one of the two parts, later called sulphanilamide, was largely responsible for Prontosil’s ‘bacteriostatic’ action – that is, it did not kill bacteria but prevented them from multiplying in the host, thus allowing the host’s immune system to destroy them.
Domagk went into production with his new drug. As it could not be patented (Prontosil was basically sulphanilamide, which had been synthesized back in 1907), it became readily available. At Queen Charlotte’s Maternity Hospital in London, Leonard Colebrook (1883–1967) used it to treat puerperal fever and found it was a ‘miracle drug’, slashing mortality from 20 to 4.7 per cent – and at last realizing Semmelweis’s dream.
Though effective against streptococci, Prontosil was little use against pneumococcal infections, and scientists began to look for comparable drugs. In 1938, a British team, led by A. J. Ewins (1882–1958) of May and Baker, developed M&B 693 (later called sulphapyridine), which worked well against pneumococci and was even better than sulphanilamide against streptococci. M&B achieved fame when it saved the life of Winston Churchill, seriously ill with pneumonia at a critical stage of the Second World War.
All these compounds were bacteriostatic, affecting bacterial metabolism and preventing the organisms from multiplying in the host, thereby permitting natural body defences to succeed against the invader. As well as puerperal fever, the new drugs checked the pathogens in erysipelas, mastoiditis, meningitis, and some urinary diseases, including gonorrhoea: sulphanilamide could dispose of a case of gonorrhoea in just five days. Domagk was awarded the Nobel Prize in 1939, but Hitler disapproved of such things and had Domagk detained by the Gestapo to prevent his going to receive it (he received it in 1947).
These new ‘sulpha drugs’ began to be prescribed in vast quantities: by 1941, 1700 tons were given to ten million Americans. However, deaths were reported, and strains of sulpha-resistant streptococci appeared. Controls over pharmaceuticals were then minimal, and experience showed that the sulphonamides had their dangers and could also become ineffectual. They nevertheless represented a major step towards the control of bacterial diseases, and their development spurred research into other anti-microbial agents.
ANTIBIOTICS AND THE DRUGS REVOLUTION
Pasteurian bacteriology opened up the vision of biological (as distinct from chemical) agents being deployed to destroy bacteria. But what sort of biological agents might prove effective? Folklore suggested that fungi might be antibacterial: popular medicine widely recommended mould for treating wounds or cuts. But the first clear observation of antibacterial action was made in 1877 by Pasteur: while anthrax bacilli rapidly multiplied in sterile urine, the addition of ‘common bacteria’ halted their development. In 1885, the Italian Arnaldo Cantani (1837–93) painted the throat of a tubercular child with bacterial strains and reported that the bacteria in his mixture displaced tubercle bacilli while reducing fever. He stated the principle of bacterial antagonism: one infective pathogen would drive out another, a notion chiming with popular Darwinian notions of the struggle for existence.
The condition in which ‘one creature destroys the life of another to preserve his own’ was called ‘antibiosis’ by Paul Vuillemin (1861–1932). He termed the killer or active agent the ‘antibiote’. In due course the word antibiotic (meaning destructive of life) was brought in by Selman Waksman (1888–1973). The first antibiotic to be described was penicillin, a natural by-product from moulds of the genus Penicillium. It was brought to light through the work of Alexander Fleming (1881–1955), a Scottish bacteriologist at St Mary’s Hospital, London.
During the First World War, Fleming had been working on wounds and resistance to infection, demonstrating that the harsh chemical antiseptics used to cleanse wounds damaged natural defences and failed to destroy the bacteria responsible for infection. He was therefore receptive to the phenomenon of lysis, then under investigation. Exploring staphylococci in 1915, Frederick Twort (1877–1950) noticed that in some cultures the microbial colonies tended to disappear. He filtered some of these, and found a few drops poured over a staphylococcus culture produced degeneration. In 1917, working with cultures obtained from dysenteric patients, Felix d’Hérelle (1873–1949) found that the diluted filtrate produced lysis (dissolving) of the organisms in a broth-culture of the dysentery bacillus. He called the lytic agent the bacteriophage (or simply phage, meaning eater). Such experiments tended to suggest that lytic agents, generally found in the intestinal tract, were most active against one particular bacterial species or related types, having no effect on others.
Aware of these developments, Fleming was receptive to the first of his discoveries, made in November 1921, when he identified lysozyme, an enzyme present in tears and mucous fluids. This arose from accidental contamination of a culture of nasal mucus by a previously undescribed organism; it happened to be uniquely sensitive to the lytic action of the enzyme in the mucus, and Fleming observed its colonies being dissolved. Lysozyme did not kill harmful bacteria, but it was clearly part of the body’s defence system. Sceptical about chemotherapy – once infection entered the body, he believed, it was the body which would have to contain it – Fleming regarded lysozyme in a different light, belonging as it did to that class of substances which bodies themselves produced against outside intrusions.
Fleming’s identification of penicillin came six years after the lysozyme discovery, in August 1928. He had been working on staphylococci, the pathogens responsible for boils, carbuncles, abscesses, pneumonia and septicaemia. Returning from holiday, he found that a mould which had appeared on a staphylococcus culture left in a petri dish in his St Mary’s lab seemed to have destroyed the staphylococcus colonies. In a paper published in 1929 he identified the mould as Penicillium rubrum (actually it was Penicillium notatum). While penicillin strongly affected Gram-positive* bacteria such as staphylococci, streptococci, pneumococci and the diphtheria bacillus, as well as the gonococci and meningococci, it had no toxic effect on healthy tissues and did not impede leucocytic (white cell) defence functions. This weighed heavily with Fleming in view of his general opinions on wound treatment; penicillin appeared not just strong but safe. Yet it had no effect on most Gram-negative bacteria, including those responsible for cholera and bubonic plague; it was hard to produce and very unstable, and thus did not seem clinically promising. Fleming did nothing, and the scientific community paid little heed.
Ten years later, however, a team of young Oxford scientists, led by the Australian Howard Florey (1898–1968), head of the Dunn School of Pathology, and including the ebullient biochemist Ernst Chain (1906–79), a refugee from Nazi Germany, launched a research project on microbial antagonisms. Combing the scientific literature for antibacterial substances, Chain found Fleming’s report, and the team began to grow P. notatum, soon encountering the difficulties involved in isolating the active ingredient from the liquid the mould produced – only one part in two million was pure penicillin. Another biochemist in the team, Norman Heatley (b. 1911), devised improved production techniques. They continued purifying the drug and began testing. On 25 May 1940 they injected eight mice with lethal doses of streptococci, and four were then given penicillin. By next morning, all had died except the four treated mice.
Florey seized upon the drug’s potential; his department went into production, using, in best Heath-Robinson manner, milk churns, lemonade bottles, bedpans and a bath tub until they thought they had enough to try it on a patient – a policeman near death from staphylococcal septicaemia following a scratch while pruning his roses. There was, in fact, so little available that his urine was collected to recycle as much of the drug as possible. By the fourth day, he had improved remarkably, but then the penicillin ran out and he died.
Recognizing that his laboratory could not produce enough, Florey approached British pharmaceutical companies, but they were too busy supplying wartime needs; so in July 1941 he went to the United States, enlisting aid at the Northern Regional Research Laboratory in Peoria, Illinois. There Heatley, working with Andrew J. Moyer (1899–1959), increased the penicillin yield thirty-four-fold (they made it in beer vats), and three American pharmaceutical companies went into production.
By 1943, British drug companies too had begun to mass-produce penicillin and, in May, Florey travelled to North Africa to perform tests on war wounds. The success was extraordinary. By D-Day in June 1944, enough was available to allow unlimited treatment of allied servicemen. In 1945, Fleming, Florey and Chain shared the Nobel Prize – Heatley received nothing. He was made to wait until 1990 for his reward: an honorary MD from Oxford University.
Penicillin proved highly effective against most types of pus-forming cocci, and against the pneumococcus, gonococcus, meningococcus and diphtheria bacillus, the bacilli of anthrax and tetanus, and the spirochaete of syphilis. Before penicillin, the pneumonia fatality rate was around 30 per cent; it dropped to around 6 per cent, and pneumonia, once the old man’s friend, ceased to be a major source of death.
Research continued on the antagonism between fungi and moulds and harmful bacteria, but with sporadic success. In 1927 René Dubos (1901–81) had gone to the Rockefeller Institute Hospital in New York to conduct research on antibacterial agents in the soil. In 1939, with Rollin Hotchkiss, he isolated a crystalline antibiotic, tyrothricin, from a culture medium of the soil organism Bacillus brevis. Tyrothricin proved active against a range of important bacteria but too toxic for the treatment of infection in humans. These observations, however, were suggestive, and they gave a major impetus to the development of more effective antibiotics.
In 1940 Selman Waksman (1888–1973), a Russian who had migrated to the United States and become a distinguished soil microbiologist, isolated an antibiotic called actinomycin. Though impressively lethal to bacteria, it proved so toxic that it was not tried clinically; however, it convinced Waksman that he was on the right trail. In 1944 he discovered another soil actinomycete, to which the name Streptomyces griseus was later given. From this he isolated the antibiotic streptomycin, which proved active against the tubercle bacillus and relatively low in toxicity. Use of streptomycin rapidly led, however, to resistant strains, and it was found more effective when used in combination with para-aminosalicylic acid (PAS).
In 1950 testing began on a third anti-tuberculous agent, developed by Squibb and Hoffmann-La Roche in the United States. This was isonicotinic acid hydrazide, or isoniazid. Like streptomycin, it was prone to resistance, but the shortcomings of these anti-tuberculosis drugs were minimized after 1953 by combination into a single long-term chemotherapy. Tuberculosis had been steadily declining over the previous century; antibiotics delivered the final blow.
The long anticipated therapeutic revolution had eventually arrived. A flow of new drugs of many kinds followed from the 1950s, including the first effective psychopharmacological substances. Some proved extremely valuable, others marginal, and a few positively dangerous. One of the most successful, or at least the most adaptable to many purposes, has been cortisone, isolated at the Mayo Clinic in the 1930s and put to use with spectacular success after the war, initially for rheumatoid arthritis and other inflammatory conditions. ‘If the word “miraculous” may ever be used in referring to the effects of a remedy,’ claimed Lord Horder (1871–1955), ‘it could surely be excused here.’ Arthritis sufferers, long bedridden, were able to get up and walk. Yet it had strong side-effects: ugly skin disorders, heart disease and stomach ulcers sometimes occurred; patients became obese and highly susceptible to certain infections. Clearly, hormonal treatments could disturb the body’s homeostatic balance.
Drugs finally began to appear against viral conditions. For centuries the term ‘virus’ (from the Latin for ‘slime’ or ‘poisonous juice’) had signified a poison produced by living beings and causing infectious disease. But viruses understood as specific entities emerged as great enigmas out of bacteriological experimentation. Isolation of them became much easier from 1884, when Chamberland made a filter with pores small enough to hold back bacteria but large enough to allow viruses to pass through.
In 1886 Adolf Eduard Mayer (1843–1942) discovered that tobacco mosaic disease could be transmitted to healthy plants by inoculating them with extracts of sap from the leaves of diseased plants. Mayer filtered the sap and demonstrated that the filtrate was still infectious. In 1897 Martinus Willem Beijerinck (1851–1931), seeking the micro-organism responsible for tobacco mosaic disease, discovered that the disease was apparently transmitted by a fluid after it had passed through a ‘bacteria-tight’ filter. Concluding that the toxin was in the form of an infectious fluid, he introduced the term ‘filterable virus’ to refer to a cell-free filtrate as a cause of disease. Although few bacteriologists gave much credence to his notion of life in a fluid form, the discovery of the filterable virus attracted considerable attention. In 1901 James Carroll (1854–1907) reported that a filterable virus caused yellow fever in humans, shifting the study from botany to virology, and freeing biology from the dogma of the cell.
Viral diseases were successively identified, among them poliomyelitis, first clinically described at the end of the eighteenth century; Simon Flexner later succeeded in producing paralysis in monkeys with virus derived from infected nasal secretions. Vaccines for viral diseases followed, a key figure being the American John Enders (1897–1985). Growing viruses in animal tissues with Thomas H. Weller (b. 1915) and Frederick C. Robbins (b. 1916) at the Children’s Hospital in Boston, by March 1948 Enders had grown mumps viruses in chicken-broth cultures, and by 1949 polio virus on human tissue. Enders next turned his attention to a measles vaccine, tested in 1960 and licensed in 1963. By 1974, it was judged to have saved 2400 lives in the US alone.
While vaccines had success, drug treatments against viruses proved difficult to develop, since viruses are intracellular parasites, intimately bound up with the chemistry of the host cell. Only since the 1970s has progress been made, first with acyclovir, potent against herpes zoster (shingles), cold sores, and other herpes infections. In cells infected with the herpes virus, acyclovir is converted to a metabolic blocking agent, thereby largely overcoming the old and plaguing problem of toxicity to the host. Other viruses have been less amenable; influenza viruses continue to be a hazard, since they mutate rapidly.
Up to the 1960s new drugs could be launched without strict safety requirements. As laws became more stringent, requiring lengthy and exacting testing, the pace of innovation slowed. That may in some measure explain why the late twentieth century brought no new drugs whose impact could compare with the sulpha drugs or penicillin. Yet in the wider perspective the twentieth-century transformation appears impressive: effective vaccines were developed against smallpox, measles, mumps, typhoid fever, rubella (German measles), diphtheria, tetanus, yellow fever, pertussis (whooping cough), and poliomyelitis, and successful drugs against many bacterial conditions, some viral infections, and numerous metabolic disorders.
‘I will lift up mine eyes unto the pills’, sang the journalist Malcolm Muggeridge in 1962, doubtless tongue in cheek. ‘Almost everyone takes them, from the humble aspirin to the multi-coloured, king-sized three deckers, which put you to sleep, wake you up, stimulate and soothe you all in one. It is an age of pills.’ He was right. Whereas before 1900 the physicians’ pharmacy was largely a magazine of blank cartridges, many effective drugs have since been introduced: antibiotics, antihypertensives, anti-arrhythmics, anti-emetics, anti-depressants and anti-convulsants; steroids against arthritis, bronchodilators, diuretics, healers of stomach and duodenal ulcers, endocrine regulators and replacements, drugs against parkinsonism and cytotoxic drugs against cancers.
Disasters happened too. Introduced as a safe sleeping tablet, thalidomide was withdrawn in 1961 after causing horrendous foetal defects in over 5000 babies. Other tragedies and scandals came to light only later. For instance, beginning in the 1940s, the synthetic oestrogen diethylstilboestrol (DES) was given to women to prevent miscarriage and subsequently to prevent pregnancy. Some early studies showed that it was ineffective and, moreover, caused foetal abnormalities in animals, but these findings were ignored. Even after 1971, when it was discovered that DES caused a rare form of vaginal cancer in ‘DES daughters’ as well as other reproductive problems, it continued to be prescribed in the United States as a ‘morning-after’ pill. It was also used as a growth stimulant in livestock and, though it was known to be carcinogenic from the 1960s, the influential US agricultural lobby stood behind DES.
In the century from Pasteur to penicillin one of the ancient dreams of medicine came true. Reliable knowledge was finally attained of what caused major sicknesses, on the basis of which both preventions and cures were developed. In the general euphoria created by the microbe hunters and their champions, some of the wider conditions of life contained within the evolutionary struggle were easily disregarded, the prospects of killing off diseases being too precious to ignore. In retrospect, far from the bacteriological and antibiotic paradigms then adopted becoming the basis for the progress of all future medicine, the period between Pasteur and Fleming may one day be nostalgically recalled as an anomalous, if fortunate, exception to medicine’s sisyphean strife.
* Koch’s germ theory of cholera was disputed by the Munich hygienist Max von Pettenkofer (1818–1901), who upheld a version of the miasmatic theory and denied the bacillus was the vera causa of cholera. He got Koch to send him his cholera vibrios and put them to the test:
Herr Doctor Pettenkofer presents his compliments to Herr Doctor Professor Koch and thanks him for the flask containing the so-called cholera vibrios, which he was kind enough to send. Herr Doctor Pettenkofer has now drunk the entire contents and is happy to be able to inform Herr Doctor Professor Koch that he remains in his usual good health.
Pettenkofer must have been fortunate enough to possess the high stomach acidity which sometimes neutralizes the vibrios.
* That road was long. In 1826 two Italians found that willow bark’s active ingredient was salicin, and three years later a French chemist obtained it in pure form. Meanwhile the Swiss pharmacist Johann S. F. Pagenstecher began extracting a substance from meadowsweet (Spiraea ulmaria, a pain reliever well-known to folk medicine), which led to the German chemist Karl Jacob Löwig (1803–90) obtaining the acid later known as salicylic acid. Its molecular structure was ascertained in 1853 by Karl Friedrich Gerhardt (1816–56), a Montpellier chemistry professor, who tried to eliminate its severe side-effect: the painful irritation of the stomach-lining. In time Felix Hoffmann (1868–1946) came up with acetylsalicylic acid, found to be not only a painkiller but anti-inflammatory and anti-pyretic. In 1899, a new name was invented for the drug: aspirin. The following year, the German Bayer drug company took out patents on it and it became their best-selling product, indeed the most popular drug of all time; in the United States, over 10,000 tons of aspirin are used annually.
* The bacteriologist H. C. J. Gram (1853–1938) devised a method of differentiating sorts of micro-organisms by means of a stain.
TROPICAL MEDICINE, WORLD DISEASES
DISEASES OF WARM CLIMATES
IN THE MODERN EPIC OF HEALTH, a hero’s part has often been assigned to tropical medicine, the branch of the microbiological revolution bearing fruit in the Third World: intrepid doctors going off to the steaming jungles and overcoming some of the most lethal diseases besetting mankind.
‘Tropical diseases’, however, do not constitute a single natural class of pathogens; they amount to the medley of maladies that came to be dealt with by ‘tropical medicine’, a discipline which took shape towards 1900 and comprised a multiplicity of skills, interests and personnel. It fed off striking developments within medicine, above all bacteriology, but the discipline came into being primarily because of the needs of imperial powers competing to enlarge their stakes in Africa, Asia, the Americas and Oceania – for better medical protection for their nationals and control over the peoples and environments they were mastering.
Contemporaries frankly recognized, indeed gloried in, the intimate relationship between colonization and medicine. Without new medical skills, how could the imperial mission have been realized? Tropical medicine, some argued, was the true or even sole justification of empire:
Take up the White Man’s burden
The savage wars of peace –
Fill full the mouth of famine
And bid the sickness cease.
urged Rudyard Kipling. For Cecil Rhodes, empire meant civilization, and tropical medicine was high among its crowning glories. Joseph Chamberlain (1836–1914), who became Britain’s colonial secretary in 1895, viewed disease control as integral to imperialism. Hubert Lyautey (1854–1934), one of the architects of the French colonial medical service, declared ‘La seule excuse de la colonisation, c’est la médecine’ (‘the only excuse for colonization is medicine’).
The problems of the tropics were nothing new to European nations. Long before germ theory, Spanish and Portuguese, Dutch, French and British officers and sea captains had tried to create sanitary encampments for their troops, developing a rule-of-thumb military hygiene based upon the miasmatic view that diseases came out of the earth. During the nineteenth century the British applied their public health notions to the administration of India: as well as threats from the climate and the physical environment, the Raj was moved by the dangers to health supposedly posed by Indians and their strange ways. Smallpox vaccination was introduced, but there was little success in dealing with cholera. Anti-plague activities involved the medical surveillance of populations at risk, the isolation of sufferers in hospitals, the rapid disposal of corpses and the destruction of personal property, all of which prompted Indian resistance to what were condemned as draconian measures.
The diseases beneath the umbrella of what in time became organized, taught and practised as tropical medicine formed a miscellany. They included long-familiar conditions like malaria. ‘Marsh fever’ or the ‘ague’ was still endemic in low-lying, swampy and estuarial parts of Europe as well as the wider world. From early modern times, fen drainage and capitalist agriculture brought a retreat of the disease from northwest Europe, but it remained severe around the Mediterranean littoral. Colonists encountered it in many parts of Africa and Asia, notably India; ‘fever’ was also something settlers spread, wherever destruction of forests by slash-and-burn clearance techniques, irrigation schemes and other environmental disturbances produced the myriad pools and sheets of standing water which formed mosquitoes’ breeding grounds.
Diseases deemed ‘tropical’ also included conditions like bubonic plague, which had decimated Europe before retreating, but still occasionally flared out beyond Africa and Asia. Plague disappeared from Europe (final devastation: Marseilles, 1720) and declined in the Ottoman empire (the last great Egyptian outbreaks occurred around 1850), but still remained devastating in south-east Asia. The ‘third plague wave’ originated in China between 1856 and 1866 and took its severest toll in east Asia and north Africa. Late in the century it spread northwards to Hong Kong and Manchuria, and west to Bombay, killing a million Indians in 1903, while also invading Java, Asia Minor, South Africa and even the Pacific shores of the Americas. San Francisco was struck, and plague became permanently established among wild rodent populations in California, to this day producing occasional human fatalities.
Many parts of Asia were plagued throughout the twentieth century. There were grave pneumonic plague outbreaks in 1910–11 and 1920–21 in northern Manchuria, and India remained a victim, suffering over twelve million deaths in the first half of the twentieth century. As recently as 1994 a serious outbreak of what was possibly pneumonic plague occurred around Delhi.
Overseas trade and colonial expansion exposed Europeans to deadly diseases unknown or little known in Europe. Amongst these, one that struck repeatedly and ruthlessly was yellow fever, an explosive disorder, causing haemorrhaging and intense jaundice, followed by coma and often death, first experienced along the west coast of Africa, and then carried by European traders and conquerors to the New World, together with their slave cargoes. By 1700, it was infecting ports in the Caribbean and Latin America; it spread along the Atlantic coasts of the New World as far north as Boston, later also becoming a regular visitor to Colombia, Peru and Ecuador. Between 1800 and 1850 it devastated the port cities of the southern USA, visiting Savannah with fifteen epidemics, Charleston with twenty-two and New Orleans, the necropolis of the American south, with at least thirty-three. As late as the 1870s it was still rippling up the Mississippi from New Orleans to Memphis.
Early control measures relied on quarantines. Infected ships were isolated, flying a flag called the yellow jack, which became a term for the disease itself. Quarantining assumed that the fever was contagious, but a growing body of miasmatist medical opinion, encouraged by commercial lobbies fretting at the trade standstills quarantines brought, claimed the true source of the infection was filth; heat acting on rotting animal and vegetable matter caused putrid exhalations.
Unlike cholera, yellow jack never seriously threatened Europe, though once in a while it found its way to ports like Bordeaux, Nantes or even Swansea, bringing deaths and panic. But in the colonies it caused extensive fatalities among settlers; in that respect it differed from other conditions exposed by colonization which were viewed primarily as ‘native diseases’. Disorders like sleeping sickness rarely afflicted whites but impeded colonization, as well as being affronts to medicine and civilization.
In many villages and clearings where the white man took his wares and weapons, he encountered ghastly unknown conditions: kala-azar, with its leprosy-like symptoms, in India and Africa, ‘big belly’ (bilharzia) in the Nile Valley, sleeping sickness on the savannas. Though this was little recognized at the time, such diseases were not features waiting to be discovered, like the source of the Nile; they had often been aggravated or even created by imperialism. Bringing war, the flight of peoples, clearings, settlements, encampments, roads and railways and other ecological disruptions, and the reduction of native populations to wage-labour or to marginal lands, colonization spread disease. ‘When the Europeans came,’ reflected the medical missionary Albert Schweitzer (1875–1965),
the natives who served them as boats’ crews, or as carriers in their caravans, moved with them from one district to another, and if any of them had the sleeping sickness they took it to fresh places. In the early days it was unknown on the Ogowe, and it was introduced about thirty years ago by carriers from Loango.
Colonial powers, however, would see disease in one light only: as an evil, an enemy and a challenge to be conquered in the name of progress. ‘Malarial fever . . . haunts more especially the fertile, well-watered and luxuriant tracts,’ explained Ronald Ross:
There it strikes down not only the indigenous barbaric population but, with still greater certainty, the pioneers of civilization – the planter, the trader, the missionary and the soldier. It is therefore the principal and gigantic ally of Barbarism. . . . It has withheld an entire continent from humanity – the immense and fertile tracts of Africa.
Fortunately it was possible to point to progress being made, partly thanks to quinine: its success, particularly as used by Dr William Baikie (1825–64) on the 1854 Niger expedition, has been viewed as the prime reason why Africa ceased to be the white man’s grave. In 1874, for instance, 2500 quinine-dosed British troops were marched from the Atlantic to the far reaches of the Asante empire in west Africa without serious loss of life; armed with quinine, the French began settling in Algeria in large numbers.
The efficacy of quinine and the later war on mosquitoes gave colonists fresh opportunities to swarm into the Gold Coast, Nigeria and other parts of west Africa and seize fertile agricultural lands, introduce new livestock and crops, build roads and railways, drive natives into mines, and introduce all the disruptions to traditional lifestyles that cash economies brought. The ecological transformation and social proletarianization created by Ross’s ‘pioneers of civilization’ triggered massive epidemics, in particular sleeping sickness, while the planting of coffee, cocoa, rubber and other cash-crop monocultures led to decline in the nutritional status and general well-being of natives in Africa, Asia, America and the Pacific.
Moreover, as with the earlier conquistadores, colonists imported new diseases to which natives had no resistance. Tuberculosis proved especially severe, being introduced by whites and then spread by African labourers to sub-Saharan Africa. Before the advent of Europeans, the inhabitants of Pacific islands had suffered from their own diseases, from filariasis and tropical skin afflictions, but these horticulturalists, isolated for thousands of years, were easy game for foreign infections. The Hawaiian islands had remained ‘undiscovered’ until Captain Cook’s landing in 1778; his crew introduced syphilis. Within a century, venereal diseases, together with smallpox and other epidemics, reportedly reduced the indigenous population by 90 per cent. When measles first struck Fiji and Samoa in the 1870s, it slaughtered 20–30 per cent of the natives.
A similar headlong decline set in among aborigines after English settlement of Australia began in 1788. Smallpox erupted almost immediately and destroyed half those who had contact with Port Jackson (Sydney). ‘Wherever the European has trod, death seems to pursue the aboriginal,’ mused the sensitive young Charles Darwin (1809–82) in his Beagle journal. Shortly afterwards the Tasmanian aborigines became totally extinct.
Initially at least, colonial medicine was aimed at preserving colonists. In due course, medical missionaries were also sent out, supplementing their religious brethren, to save the bodies as well as the souls of the natives. They were the acceptable face of colonialism. ‘The foreign doctor is persona grata,’ one missionary noted in 1899. However dedicated, they proved a mixed blessing. Their medical work was sometimes subordinated to evangelizing while, set up in the name of health, leper colonies broke up families, forcibly removed sufferers and imposed alien western values and living patterns; and, as has happened more recently with AIDS, hospitals and mission posts inadvertently served as centres for disease communication no less than control. In any case, what magic bullets did nineteenth-century western medicine have for sleeping sickness or bilharzia?
TROPICAL MEDICINE
The hazards facing white men abroad – trading, fighting, preaching, planting – had long attracted medical attention. The fevers endemic to warm climates were rationalized by humoral theory. Sailors and settlers were advised to avoid ‘hot’ food and strong liquor and to drink ‘cooling’ fluids. Europeans arriving in the Caribbean or East Indies were known to be at high risk from local diseases, and experience counselled a gradual acclimatization process. Being costly commodities, slaves too were ‘seasoned’.
Supplanting this traditional ‘medicine of warm climates’, a distinctive tropical medicine arose in the last third of the nineteenth century. The timing was no accident, for it was then that imperial rivalries climaxed, as Britain, France and other European nations scrambled for Africa and other bits of the globe that were still there for the taking. Millions of Europeans were crossing the oceans to aid in the work of conquest, conversion, and civilization – more Britons migrated to the empire in the decade before the First World War than were killed in the war itself. Among them were doctors.
The great epidemiological breakthrough came with Patrick Manson’s recognition that the diseases of warm climates could involve parasites and vectors. A bank manager’s son from Aberdeen, where he studied medicine, Manson (1844–1922) moved to the Far East in 1866, spending a dozen years in the Chinese Imperial Maritime Customs Service in Formosa (Taiwan) and Amoy (now Hsaimen). There he encountered elephantiasis, the chronic disfiguring disease which, through lymph-flow blockage, produces grotesque swelling of the limbs and genitalia. He determined it was caused by a nematode worm, Filaria, and in 1877 traced the role played by bites from the gnat Culex fatigans in spreading filarial parasites to the human bloodstream: the first time an insect had been shown to be part of the natural history cycle of a disease. The membranes in the gnat stomachs broke down, releasing the worms that tunnelled into the thoracic muscles, where they metamorphosed in preparation for their entry into humans. This discovery that insects acted as hosts to a disease attracted little immediate notice, but was to have profound consequences once it became accepted that mosquitoes and other insects were the vectors of other deadly conditions.
Six years after helping to found a medical school in Hong Kong, Manson returned to London in 1889, successfully specializing in diseases contracted overseas and becoming physician to the Seamen’s Hospital on the Thames, which was to serve as the clinical facility for the School of Tropical Medicine he established in 1899. An avid natural historian of disease, Manson stamped his model of tropical medicine on the specialism he shaped, partly through his Tropical Diseases: A Manual of the Diseases of Warm Climates (1898), a textbook which popularized the new parasitology.
Building on bacteriology, the new tropical medicine then proceeded to lay bare, alongside the by then well-known bacilli, the pathogenic role of other classes of micro-organisms: in dysentery, an amoeba; in schistosomiasis, a class of worms, the trematodes; in sleeping sickness, a trypanosome, a protozoan; and in malaria, another protozoan, the Plasmodium. The infection chains were typically more complicated than with the familiar European water- or air-borne diseases.
MALARIA
The world’s most serious endemic disease was malaria, with its hot and cold fever, commonly leading to death. Perception of its gravity was sharpened when the French entrepreneur Ferdinand de Lesseps’ bid to add the Panama Canal to the Suez came to grief because of the fever – in the 1880s over 5000 workmen died on the project. The first breakthrough came in 1880, when Alphonse Laveran (1845–1922), a French army surgeon working in Algeria, observed the malaria plasmodium in its first stage of sexual reproduction. This squared with Manson’s finding that filarial worms developed in mosquito stomachs, but how was malaria transmitted?
Suspecting the mosquito to be both the malaria host and vector, Manson shared his thoughts in 1894 with a young Indian-born British army surgeon, Ronald Ross (1857–1932), who, though preferring poetry to medicine, had studied at St Bartholomew’s Hospital and entered the Indian Medical Service, becoming involved from 1892 in the malaria problem. Inspired by Manson’s hunch that it was transmitted through mosquitoes, Ross returned to India, determined to prove the hypothesis.
His brief was clear: he would find patients suffering from malaria, allow mosquitoes to feed on their blood, kill the insects and dissect them to see what became of Laveran’s organism. ‘The Frenchies and Italians will pooh-pooh it, then adopt it, and then claim it as their own,’ Manson warned his disciple.
Many problems became clarified in due course: the complex interaction between the Plasmodium’s life cycle and the disease; the existence of several forms of the organism capable of causing the different types of malaria (tertian, quartan, etc.); and the recognition that not all mosquito species acted as vectors.
Confirming Laveran’s work, Ross discovered the Plasmodium in the stomachs of Anopheles mosquitoes which had bitten malaria sufferers. The great breakthrough was made in his laboratory in Secunderabad on ‘Mosquito Day’, 20 August 1897:
At about 1 p.m. I determined to sacrifice the seventh Anopheles. . . of the batch fed on the 16th, Mosquito 38, although my eyesight was already fatigued. Only one more of the batch remained.
The dissection was excellent, and I went carefully through the tissues, now so familiar to me, searching every micron with the same passion and care as one would search some vast ruined palace for a little hidden treasure. Nothing. No, these new mosquitoes also were going to be a failure: there was something wrong with the theory. But the stomach tissues still remained to be examined – lying there, empty and flaccid, before me on the glass slide, a great white expanse of cells like a great courtyard of flagstones, each of which must be scrutinized – half an hour’s labour at least. I was tired, and what was the use? I must have examined the stomachs of a thousand mosquitoes by this time. But the Angel of Fate fortunately laid his hand on my head; and I had scarcely commenced the search again when I saw a clear and almost perfectly circular outline before me of about 12 microns in diameter. The outline was much too sharp, the cell too small to be an ordinary stomach-cell of a mosquito. I looked a little further. Here was another, and another exactly similar cell.
The afternoon was very hot and overcast; and I remember opening the diaphragm of the sub-stage condenser of the microscope to admit more light and then changing the focus. In each of these cells there was a cluster of small granules, black as jet and exactly like the black pigment granules of the Plasmodium crescents. As with that pigment, the granules numbered about twelve to sixteen in each cell and became blacker and more visible when more light was admitted through the diaphragm. I laughed, and shouted for the Hospital Assistant – he was away having his siesta. ‘No, no’, I said; ‘Dame Nature, you are a sorceress, but you don’t trick me so easily. The malarial pigment cannot get into the walls of the mosquito’s stomach; the flagella have no pigment; you are playing another trick upon me!’ I counted twelve of the cells, all of the same size and appearance and all containing exactly the same granules. Then I made rough drawings of nine of the cells on page 107 of my notebook, scribbled my notes, sealed my specimen, went home to tea (about 3 p.m.), and slept solidly for an hour. . . . When I awoke with mind refreshed my first thought was: Eureka! the problem is solved!
Ross had found in the stomach wall of an Anopheles mosquito the oocysts (eggs) which are the intermediate stage of the Plasmodium life cycle. Assured of the parasite’s presence, he was then able to outline its history within the mosquito: first, as a zygote in the stomach; then as an oocyst in the stomach wall; and finally as a mature sporozoite that reaches first the proboscis and then a human host when the insect bites and squirts its saliva prior to sucking blood. Ross thus demonstrated the mosquito’s role in malaria transmission, working out, through experiments with birds, the detailed relationship between the Plasmodium life cycle and the disease. Still the poet, he celebrated his triumph in doggerel:
I know this little thing
A myriad men will save
O Death, where is thy sting?
Thy Victory, O Grave?
Meanwhile, working with his fellow Italian, Amico Bignami (1862–1929), Giovanni Grassi (1854–1925) had independently linked human malaria to the Anopheles mosquito and showed how the insect becomes infected through feeding off the blood of a person with the Plasmodium parasite in the bloodstream. Deeply fascinated by parasites, in 1891 Grassi discovered the malaria parasite of birds, Proteosoma praecox, which looked very similar to Plasmodium vivax, and inoculated malaria parasites from one bird to another. Broadening his study, he became convinced that whenever malaria occurred, mosquitoes were present, but not all mosquito-infested areas were malarial. Hence particular species had to be responsible. In August 1898 he discovered that the female of Anopheles was the carrier of malaria; in an experiment, a healthy volunteer bitten by Anopheles developed a malarial fever, and the next year he proved that Anopheles became infected on biting a diseased person.
When Ross claimed priority over Grassi, a sharp and chauvinistic controversy followed (the Englishman denounced the ‘Italian pirates’). In 1902 Ross was awarded the Nobel Prize, leaving the Italians outraged and Grassi embittered. Certainly Ross reached the conclusion before Grassi that mosquitoes transmitted malaria, but Grassi first identified Anopheles as the agent of transmission and elucidated the complete sequence of steps in the life cycle of the parasite. The Nobel Committee made amends to Laveran in 1907 by awarding him the prize for the discovery of the Plasmodium parasite, but Grassi, who by 1898 had worked out the entire Plasmodium life cycle, was left empty-handed.
This parasitological model opened up an astonishing new vision of disease aetiology. While it yielded no cures, it afforded a prospect of malaria control, through eradicating mosquitoes. Measures attempted included using copper sulphate, spreading kerosene on ponds to prevent hatching, screening windows and sleeping under nets. Control programmes were launched. With the support of the Rockefeller Foundation, the US Public Health Service began an anti-malaria assault in the south, starting in Arkansas. The results were amazing: malaria incidence dropped 50 per cent in three years at a cost of under a dollar per capita, Paris green dust (copper acetoarsenite) proving a cheap and effective method for killing mosquito larvae. By 1927 the disease had essentially been eliminated from American towns.
At the end of the Great War, Italy still had some two million cases a year. An American-style anti-mosquito campaign was established, and between 1924 and 1929 a marked reduction in infections was achieved in a test area. During the 1930s operations were extended; within ten years the worst malarial regions in Italy were under control and, in the previously uninhabitable Pontine marshes, 200,000 acres of new farmland had been brought under cultivation.
Success encouraged similar ventures in Africa, Asia, and Brazil. The introduction of DDT (dichloro-diphenyl-trichloroethane) made control procedures more effective. DDT had been synthesized in 1874 from chlorine, alcohol and sulphuric acid; its insecticidal powers were discovered in the mid 1930s. The fact that it remained active for weeks obviated repeated respraying. By 1945, though malaria was still annually infecting 300 million people globally, eradication was being touted as a practical possibility; in 1957 the World Health Organization judged its conquest an attainable goal, and the US Congress voted large sums for a worldwide campaign to eradicate malaria through spraying with insecticide and dosing with chloroquine. The plan was to wipe malaria out by 1963.
Hopes were dashed, however. Mosquitoes quickly became DDT-resistant, and the insecticide was found to enter the food chain, creating grave health and environmental dangers, including bizarre genetic mutants. The Plasmodium also became resistant to drugs, including quinine and chloroquine. The tide turned adversely; in 1963 Congress cut off funds, and that encouraged the disease to return with renewed strength. Sri Lanka had one million cases in 1955, hardly any in 1964, but half a million again by 1969. Malaria in India climbed from its 1961 low point of under 100,000 cases to 350,000 in 1969, and 2.5 million by 1974. By 1977, incidence in India was thought to have soared above six million cases. In Brazil malaria cases have similarly shot up, partly as a result of deforestation, logging and open-cast mining. In sub-Saharan Africa, malaria is now annually responsible for the death of nearly a million children. Globally, as wonderdrugs produced superbugs, there were three times as many cases of malaria in the 1990s as there had been in 1961.
YELLOW FEVER
Early investigations of malaria shed light on the yellow fever problem. In 1807 John Crawford of Baltimore (1746–1813) had suggested links between mosquitoes and yellow fever; Josiah Nott (1804–73) of Mobile, Alabama, and Louis Beauperthuy (1807–71), a native of Guadeloupe, suggested around 1850 that the mosquito was a possible vector, though without firm evidence. In 1881 Carlos Finlay (1833–1915), a Havana physician of Anglo-French descent, repeated the suggestion.
Finlay’s attention was drawn to the possibility of mosquito transmission by noticing that in the haemorrhages of yellow fever red blood globules were discharged unbroken. He hypothesized that this was akin to smallpox and vaccination in general, the implication being that if one wanted to transfer yellow fever from a sufferer to a healthy person, the inoculable material had to be extracted from within the former’s blood vessels and introduced into those of the latter. That was something that could certainly be achieved by mosquitoes. Finlay then conjectured a chain of events: a yellow fever patient whose capillaries the mosquito could bite; survival of the mosquito until it could bite another person; those bitten by the same mosquito contracting the disease. In 1881 he singled out the Aedes aegypti mosquito as the agent – an early ascription of disease transmission to insects.
In the mid 1880s Finlay studied the natural history of the mosquito. The stinging mosquitoes were fecundated females, he concluded, which would lay eggs within a few days on suitable water. Finding that the blood of the sick person, when transferred to the mosquito, underwent some modifications, he speculated that the microbes multiplied in the mosquito’s mouth, though he had not yet arrived at his later views suspecting the salivary glands. Between 1881 and 1898 he conducted over a hundred experiments, inoculating with yellow fever, but these did not convince his colleagues. Meanwhile in 1897 Giuseppe Sanarelli, an Italian bacteriologist working in Uruguay, announced he had identified a bacillus (Bacillus icteroides) in yellow fever patients which might be the causal agent.
Following appalling disease mortality in the 1898 Spanish-American War and a yellow fever outbreak among American troops based in Cuba, a US Army Yellow Fever Commission was appointed in 1900, headed by Walter Reed (1851–1902) from Johns Hopkins University and James Carroll (1854–1907) of the US Army Medical Corps. Their reasoning pointed to a vector. Ross’s recent identification of the mosquito’s role in malaria and the cumulative evidence for insect involvement in other diseases, including sleeping sickness, led the commission to conduct a trial of Finlay’s hypothesis.
Because no animals were then known to suffer from yellow fever, the researchers became their own experimental subjects. Using Aedes aegypti mosquitoes raised by Finlay, three researchers attempted the experiment. Jesse Lazear (1866–1900) allowed some mosquitoes to bite yellow fever patients and then his own arm. Nothing happened. Carroll repeated the exercise and within four days fell severely sick, but, because he could have picked up the disease elsewhere in Havana, this proved nothing. Then a soldier who had had no contact with yellow fever volunteered to be bitten by the same mosquito. He went down with a mild attack: the first clear proof that mosquitoes spread the disease. (Lazear was meanwhile bitten accidentally while working in a yellow fever ward, developed fever and died.)
The researchers set up a properly controlled experiment. Soldier volunteers were divided into two groups. The first lived among the clothing and bedding of yellow fever victims to see if it contained anything contagious; the second were put in isolation and then bitten by mosquitoes infected by yellow fever patients. Not one of the former sample contracted yellow fever; 80 per cent of those bitten by the infected mosquitoes fell sick (all survived).
Once the mechanism was understood – yellow fever must follow the bite of an infected mosquito – the commission was in a position to conclude that it was caused by an unknown microscopic agent, indeed one which passed through a filter that would retain the smallest known bacteria – the first time a filterable virus was implicated as the cause of a human disease.
The findings of the Reed Commission led to a programme in Cuba, launched in 1901 by William C. Gorgas (1854–1920), an American military doctor from Mobile, Alabama, to control yellow fever by destroying mosquitoes. Every barrel in Havana where water might collect was targeted. Kerosene was spread on ponds; wells and tanks were screened, and yellow fever patients isolated. The infective chain was disrupted; within three months the disease had vanished from Havana.
Success spurred the American authorities to tackle yellow fever and malaria mosquitoes in Panama. A few years earlier the French had abandoned construction of the Panama Canal primarily because of fever: in October 1884 alone, no fewer than 654 workers had died of yellow fever. In 1904 the Americans took over the operation, with Gorgas in charge of the medical department. Despite scepticism from the Canal Zone’s governor, General G. W. Davis (‘spending a dollar on sanitation’, he snapped, ‘is as good as throwing it into the Bay’), Gorgas was given the go-ahead for a sanitation plan. Pumped water was installed and all domestic storage receptacles withdrawn; mosquito traps were set; undergrowth scorched and insects hunted down. This too proved a success; by September 1906 the last yellow fever victim had died in the Canal Zone; it became a quarter where Europeans could work! When the canal was finally opened in 1913, the zone was said to be twice as healthy as the United States, a dramatic vindication of the new power of medical science to aid civilization. Subsequent programmes were carried out in Guatemala, Peru, Honduras, El Salvador, Nicaragua, Mexico and Brazil – though in some parts, notably Brazil, mosquitoes recovered much of the ground formerly ceded, and the yellow fever threat returned.
Meanwhile, findings in Africa forced a rethink. In 1925, the Rockefeller Foundation West African Yellow Fever Commission discovered a new jungle variety of yellow fever, established that monkeys as well as people were susceptible, and decided that the virus could shuttle between men and monkeys. There being no feasible way to eradicate arboreal mosquitoes, immunization became the only possible protection in the jungles. Fortunately by 1937 an effective vaccine had become available, and vaccination programmes were launched; but yellow fever remains a severe problem wherever breeding grounds for the Aedes mosquito are provided by road ruts, old tyres, and anything else that serves as a water trap.
SLEEPING SICKNESS
Many other diseases were to be elucidated through the parasitological model, leading to prospects of control if not cure. Amongst these was dengue, a haemorrhagic fever, prevalent in the Caribbean and the hot zones of the East. Classic dengue involves a fever that comes on rapidly after an initial mosquito bite, striking children in particular. The temperature rises to 104°F, a severe headache develops, with prostration and excruciating joint pains (‘breakbone fever’ was its popular name). Though an insect vector was suspected, the epidemiology long remained a mystery.
In Beirut in 1905, under the direction of T. L. Bancroft (1860–1933), volunteers were infected using the Aedes mosquito; researchers injected filtered blood from a patient into a healthy volunteer and brought on an attack: the transmission agent was clearly the mosquito. In the 1940s Albert Sabin (1906–1993) managed to cultivate the virus in the laboratory, and it is now known that there are four distinct types of virus which cause dengue and at least three other arboviruses producing dengue-like diseases. The last major New World epidemic occurred in Trinidad in 1954, but it has returned sporadically in Latin America, for instance in Costa Rica in 1993 and Puerto Rico in 1994, when it also struck US soldiers in Haiti. A severe outbreak hit New Delhi in 1982; and dengue remains endemic in Africa, China, south-east Asia and Australia.
Another condition cracked was Chagas’ Disease, named after Carlos Chagas (1879–1934), scion of a wealthy Brazilian coffee-planting family. In early work on malaria and yellow fever, Chagas noticed that some insects were hosts to a trypanosome, naming it Schizotrypanum cruzi (later Trypanosoma cruzi), and concluding that it affected monkeys, cats and dogs. He then detected it in human blood, and linked it to a local disease whose acute symptoms were fever and generalized oedema. He discovered that its animal reservoir was the armadillo, common in those regions. It has been suggested that Charles Darwin’s lifelong ailments might have been the long-term consequences of Chagas’ disease, contracted while in South America during the voyage of the Beagle.
Among the gravest diseases facing colonial medicine was sleeping sickness, found widely in sub-Saharan Africa, to which both animals and humans are susceptible. There are now known to be two main types, producing distinctive conditions. The chronic form – the Gambian or west African version, Trypanosoma brucei gambiense – develops very slowly; the acute form, caused by Trypanosoma brucei rhodesiense, has a short incubation period of 5–7 days and is found in eastern and southern Africa. Once a person is bitten by an infected fly, swelling begins, with discolouration and a rash; headache, irritability and insomnia develop, along with general lymph-node swelling. The parasites multiply in blood, lymph, tissue fluids and eventually the cerebrospinal fluid. Consequences include male impotence, spontaneous abortion in females, tachycardia and hypotension. Once the disease enters the central nervous system, deterioration leads to death.
Symptoms of what was called ‘African lethargy’ or ‘Negro lethargy’ were known to Arabs and Europeans in Africa from the fourteenth century. John Atkins (1685–1757), an English naval surgeon who had visited the Guinea coast, discussed the ‘sleeping distemper’ in The Navy Surgeon (1734), noting it was prevalent among natives in western Africa and extremely dangerous. Sufferers who did not die, he reported, had an irresistible tendency to sleep and would ‘lose the little reason they have and turn idiots’. A century later Robert Clarke described ‘narcotic dropsy’ in Sierra Leone, commenting that it appeared more prevalent in the interior. In 1876, a French naval surgeon found sleeping sickness common in Senegal, with whole villages being abandoned.
This was a clear sign that the disease was spreading, mainly due to the disruptions caused by colonization. Henry Stanley, the explorer who found Dr Livingstone, became economic development director for Belgium’s King Leopold II, and his success in opening up the Congo to commerce flushed the disease into the central areas. Missionaries reported emptied villages, while conveying the sick to mission stations inadvertently triggered infection of previously uninfected zones. Fatalities soared; between 1896 and 1906 devastating epidemics killed over a quarter of a million Africans around the shores of Lake Victoria, and double that number died in the Congo. The horrors of sleeping sickness seized the public imagination.
In 1894 Major David Bruce was sent to Zululand to investigate the cattle disease, nagana. He found the sinuous long-tailed Trypanosoma protozoan in cattle blood and deemed the tsetse fly, two centimetres long, brownish and blood-sucking, responsible for its transmission (its bite, Livingstone had noted, ‘is certain death to the ox, horse, and dog’).*
While Bruce was uncovering the transmission of what came to be known as animal trypanosomiasis, its human form continued to spread. In 1901 a severe epidemic in Uganda claimed more than 20,000 victims. In response, Sir Patrick Manson’s First Sleeping Sickness Commission, which included young Count Aldo Castellani (1875–1971), was sent out from the London School of Tropical Medicine. It sought the cause of sleeping sickness by studying the action of Filaria perstans, thought to play a role in elephantiasis. This got nowhere, but Castellani stayed on to pursue his researches.
In the meantime, Robert Mitchell Forde (1861–1948), a hospital surgeon in Bathurst, Gambia, and Joseph Dutton (1876–1905), from the Liverpool School of Tropical Medicine, isolated a trypanosome from the blood of an English shipmaster suffering from ‘Gambia fever’. Dutton named it T. gambiense. Trypanosomes were also found in a fever patient, leading to speculation that the tsetse fly was responsible.
Working at Entebbe in Uganda, Castellani identified a streptococcus in sleeping sickness victims which he thought was the causal agent. He also found trypanosomes in the cerebrospinal fluid of dying patients.
As the pieces of the jigsaw came together, it became recognized that sleeping sickness took two forms, depending on the body part in which the infection was active. When the parasite circulated freely in the blood, a chronic, episodic fever (‘Gambia fever’) resulted; but when it established itself in brain tissue, lethargy and loss of function – sleeping sickness proper – developed. On Bruce’s arrival in 1903 to head the Second Sleeping Sickness Commission, Castellani reported these conclusions, and, building on his nagana work, Bruce targeted the trypanosome. Together they found trypanosomes in the spinal fluid of sleeping sickness cases. Another Anglo-Italian priority controversy resulted; Bruce credited Castellani for finding the trypanosomes, but claimed it was he who had recognized that they were the causal agent in the disease.
The aetiology of sleeping sickness had been unravelled, but how was it to be prevented or treated? Paul Ehrlich and other chemists were experimenting with arsenical compounds such as atoxyl which were moderately effective if applied at the earliest stages. In Africa Dr Schweitzer and Dr Eugene Jamot (1879–1937) both relied on chemotherapy, the former operating from his hospital centre at Lambaréné in Gabon, the latter going in search of his cases. Preventive strategies were to include forcible removal of natives from areas like the shores of Lake Victoria, where the disease was endemic. It was, however, a disease destined to suffer from neglect; most of the victims were poor Africans, so there was little incentive for pharmaceutical companies to devote research dollars to its eradication.
Through these and other studies, tropical medicine established a role for itself and put down roots. Within a few years of the foundation of the London School in 1899 by Manson and of its Liverpool cousin, under Ross, funded by merchants persuaded that better control of disease would facilitate imperial commerce, similar institutions were established in France, Germany, Italy, Belgium and the United States. In its first half century, tropical medicine scored many successes: ‘I now firmly believe’, Manson declared, ‘in the possibility of tropical colonization by the white races.’ In the process, medicine’s tasks in the tropics were to change. Though the protection of European soldiers, administrators and settlers remained the top priority, it came to be understood that the health of whites could not be wholly separated from that of natives. Founded in 1714, the Indian Medical Service broadened its concerns, and in Africa a Colonial Medical Service for the British colonies was started in 1927. Natives were mainly catered for, however, not by governments but by missions. In Africa, medical missionary work had begun in 1799 with John Vanderkemp (1748–1811), a Dutch physician sponsored by the London Missionary Society. Different denominations – Anglican, Baptist, Methodist, etc. – carved up particular territories, though there was often competition between Protestant and Catholic missions.
Such missions were not merely set up among ‘savages’. From the seventeenth century, western medicine had been introduced into China and Japan by Jesuit missionaries. The Japanese doctor Narabayashi Chinzan (1643–1711) produced a book called Koi geka soden [Surgery Handed Down] in 1706, which drew upon the writings of Ambroise Paré that he had obtained from the Dutch physician Willem Hoffman, who worked at Deshima from 1671 to 1675. Missionary work by physicians started at Canton around 1840. Evangelical physicians subsequently took western medical education to imperial China, in due course introducing bacteriology and pathology. Western medical schools were founded, leading to the Peking Union Medical College, set up in 1917 with Rockefeller Foundation support, the idea being that ‘secular philanthropy’ would expose China to western science. By 1937 the Peking Union Medical College had graduated 166 practitioners, skilled in western theories and therapies. After 1949, under the communist government, western religious missions were discontinued, but the PUMC was kept going as a high-level medical school.
As this suggests, in the twentieth century religious missions were supplemented by intervention from philanthropic bodies, notably the Rockefeller Foundation. Set up in 1913, the Rockefeller International Health Division promoted health activities in Latin America and Asia involving basic research, training personnel and model health programmes. A notable example of Rockefeller ‘missionary activity’ (on its own patch) was the campaign to eradicate hookworm from the American south. This ‘disease of laziness’ affected many poor southerners, black and white, producing chronic anaemia, fatigue and lethargy. Essentially a disease of poverty spread through poor sanitation and going barefoot over infested ground, it was both treatable and preventable. The Rockefeller Sanitary Commission, headed by Wickliffe Rose (1862–1931), was charged with investigating the prevalence of hookworm and educating the public about prevention. Rose’s sanitary inspectors treated some 700,000 individuals, held more than 25,000 public meetings, and distributed more than two million booklets between 1910 and 1915, when the campaign ended and the commission was absorbed into Rockefeller’s International Health Board (IHB).
The IHB extended the hookworm campaign into tropical areas, added malaria and yellow fever eradication to its agenda, and invested money in schools and institutes of public health and tropical medicine in many countries in Europe, Asia and Latin America. Success was mixed. A hookworm campaign in China in the early 1920s foundered on the age-old practice of fertilizing mulberry trees with human faeces, often infested with hookworm eggs.
The inter-war years brought the gradual rapprochement of tropical medicine with epidemiology at large, as researchers viewed their subject in terms of a wider medical ecology. The influential policy-maker, Simon Flexner (1863–1946), at the Rockefeller Institute for Medical Research, and Wade Hampton Frost (1880–1938), at the Johns Hopkins School of Hygiene and Public Health, moved from narrow pursuit of infectious agents to broader views of disease and its environments.
Tropical medicine was fraught with ambiguities. Based in the metropolitan centres of colonial or neocolonial powers rather than in the infected countries themselves, the specialty inevitably reflected white priorities and attitudes. Funds were channelled into high-profile laboratory research, though critics claimed problems could better be managed by investing in things of little interest to scientists – drinking water, sanitation, food. Not least, tropical medicine was vulnerable to characterization as the tool of colonial powers or post-colonial multinationals, mopping up the mess created by the ‘development’ (or perhaps ‘undevelopment’) which imperialism and capitalism produced.
Medicine also seemingly set itself at the service of empire by providing justifications for racial dominance. Colonial doctors often portrayed ‘savages’ as ignorant, filthy, childlike and stupid, sometimes out of real contempt, sometimes prompted by ‘rescue’ motives, or to raise money in the mother country for hospitals and education.
MEDICAL ANTHROPOLOGY
Medicine played its part in the construction of racist anatomy and physical anthropology. Studies of skulls and skeletons allegedly proved white superiority, while with the triumph of Darwinism from the 1860s natives could be stigmatized as evolutionarily ‘primitive’ and destined to be defeated in the struggle for existence.
Such issues provoked controversy, and medical authors figured on both sides. In Britain, James Cowles Prichard (1786–1848), brought up a Quaker, and the Quaker Thomas Hodgkin (1798–1866), seeing ‘God in everyone’, were committed to the biblical doctrine of monogenesis and the unity of the human race. Prichard’s Researches into the Physical History of Mankind, first published in 1813 and going through many later editions, held that all humans shared a single physical origin and mentality, dwelling on similarities between different peoples in terms of physical type, language, customs, political organization and religion. This view of the ‘psychic unity of man’ was countered by the Scottish anatomist, Robert Knox (1791–1862), whose The Races of Men (1850) developed polygenism. Race, Knox believed, explained all, and in the 1862 edition he held that only certain races fell within the human species. The debate between mono- and polygenists expressed conflicting attitudes to colonial expansion. Hodgkin’s Aborigines Protection Society (1835) aimed to collect ‘authentic information concerning the character, habits and wants of uncivilized tribes’ so as to ameliorate colonial practice. The argument was reflected in competing learned societies, the Ethnological Society of London being liberal and the Anthropological Society of London (1863) racist.
The science of anthropology was largely promoted by medically trained travellers. One of the first major British field trips, the Torres Straits expedition, was organized by Alfred Haddon (1855–1940), a protégé of Michael Foster’s Cambridge school of physiology. The expedition included two scientists trained in neurophysiology and experimental psychology, C. S. Myers (1873–1946) and W. H. R. Rivers (1864–1922). Rivers’ Medicine, Magic and Religion (1924) viewed indigenous medical systems as social institutions to be studied in the same way as kinship, politics or other institutions, seeing native medical practices as following rationally from the culture at large. Such views were further developed in E. E. Evans-Pritchard’s (1902–73) Witchcraft, Oracles and Magic among the Azande (1937). Asked why Africans explained disease in terms of witchcraft, he responded, ‘Is Zande thought so different from ours that we can only describe their speech and actions without comprehending them, or is it essentially like our own though expressed in an idiom to which we are unaccustomed?’.
Like anthropology at large, medical anthropology initially set about explaining the aberrations of the native or primitive mind, with a view to governing it better and perhaps educating it. In time such medical systems came to be studied in less ethnocentric ways, and some of the insights of medical anthropology were later applied to western societies and western medicine themselves.
The bonds between medicine and empire were many-faceted. Disease and medicine played a part in the establishment of European empires overseas. Epidemics devastated and demoralized indigenous populations in the New World in the sixteenth century, and later in Africa, Australia, New Zealand and the Pacific islands, clearing the way for European conquest. Without disease, European intruders would not have met with such success or found indigenes so feeble in their resistance. Yet endemic diseases also held back European expansion into Africa. Imported from Africa, yellow fever and malaria shaped the ethnic composition and distinctive history of colonization and slave labour in the Caribbean.
Colonialism was hardly good for the health of the Third World. Epidemiological link-ups between previously isolated regions, the movements of fleets and armies, of millions of slaves and indentured labourers, the spread of disease by ecological change and social dislocation, the misery bred by shanty-towns – all have been implicated in the hail of death that European rule brought. The good that western medicine did was marginal and incidental. It formed, however, an integral part of the ideological baggage of empire. With colonialism equated with civilization, a prominent place was claimed for medicine among the benefits the West could bestow. White man’s medicine excluded others, and Christianity’s ‘healing’ doctrine was a challenge to the rival power of local ‘witchdoctors’. The imperialism latent in western medicine was obvious in its attitudes towards indigenous healing: it aimed to establish rights over the bodies of the colonized. The vigorous denunciation of the ‘witchdoctors’ of Africa and other indigenous healing specialists was supported by claims that their practices were grounded in dangerous superstition. As western medicine became more convinced of its uniquely scientific basis, colonial authorities intervened to ban practices and cults which they saw as medically or politically objectionable, hence the prohibition against witchcraft in Britain’s African colonies in the 1920s and 1930s.
INTERNATIONAL MEDICINE
Mirroring changes in world politics and economics, medicine’s orientations shifted in the century after 1850. International capitalism, war and new technologies challenged national barriers. Before the twentieth century, the health problems of the industrial world were largely distinct and independent from those of the colonized: in many respects, the ‘West’ and the ‘Rest’ were still just making contact. During the twentieth century all grew interlocked, through the transformation of empires, gigantic population migrations, the changes brought by multinational capitalism, communications revolutions, world war and global politics.
As if to signal the consequences for world health, the early twentieth century brought the most mobile and lethal pandemic the world has ever seen, the influenza pandemic which swept the globe from 1918, killing over twenty-five million people in six months (about three times as many as died in the First World War), followed by an epidemic of encephalitis lethargica (an inflammation of the brain and spinal cord) and another wave of killer influenza in 1920. Perhaps a mutant of swine flu, it went global due to the massive movements of peoples associated with the war. It showed, were any proof needed, that the globe had shrunk.
The influenza epidemic appeared in three waves, the first being mild and unimportant. Beginning in August 1918, the second turned rapidly to pneumonia, against which medicine had no defence. It exploded in centres separated by thousands of miles: Freetown, Sierra Leone, Brest – the French port for American expeditionary forces – and Boston, Massachusetts. Two-thirds of Sierra Leone’s population contracted flu, and more than 1000 died in Freetown alone. In Boston 10 per cent of the population was affected, and over 60 per cent of the sufferers died.
At San Francisco Hospital in California, 3509 pneumonia cases were admitted, with 25 per cent mortality. Throughout the US public gatherings were banned; schools, churches, cinemas and businesses were closed, and face masks worn – but nothing helped.
Influenza spread rapidly among American soldiers in the United States and abroad. At Camp Devens, Massachusetts, the first diagnosis was made on 12 September 1918; in less than a fortnight, the tally had reached 12,604. By October, some 20 per cent of the US army was ill: 24,000 soldiers died of flu; battle deaths were 34,000. In all, the United States lost over 500,000, England and Wales, 200,000; a quarter of the population of Samoa perished. Frantic efforts were made to culture it, discover its aetiology and devise treatments – with little success. The influenza disappeared as quickly and mysteriously as it had arrived. It was the greatest single demographic shock mankind has ever experienced, the most deadly pestilence since the Black Death.
Nothing since has struck on such a scale. Is that luck? Or a mark of the effectiveness of the better nutrition, public health measures, vaccines, chemotherapy and antibiotics since developed? It is hard to be sure. After the flu, the international health community grew more confident that the risk of great pandemics was dwindling. Hopes were pinned on a medical internationalism encouraged since the mid nineteenth century. Against the backdrop of the cholera pandemics a meeting had been held in Paris in 1851, involving twelve nations (including the Ottoman empire) and designated an International Sanitary Conference. It directed its attention to disputes that remained unresolved as to whether plague, yellow fever and cholera were contagious, miasmatic or resulted from an ‘epidemic constitution’. Most delegates voted for cholera to be subject to quarantine regulations, but none of the participating governments ratified the regulations.
Setbacks like this constantly dogged medical internationalism. A second International Sanitary Conference was held in Paris in 1859. After five months, it adjourned with no resolution of the dispute between contagionists and miasmatists. A third was held in Constantinople in 1866, a fourth in Vienna in 1874, another in Washington in 1881 and a sixth in Rome in 1885. Still no agreement could be reached; that was left for the International Sanitary Conference in Venice (1892). Forty-one years of discussion had been needed to reach a limited accord on the quarantining of westbound ships with cholera on board – and by then the cholera problem had largely gone away!
In 1907 twenty-three European countries established a permanent body, the Office International d’Hygiène Publique (OIHP), located in Paris. It was to collect and disseminate knowledge on infectious diseases, with a view to its being embodied in international quarantine regulations. Eventually the OIHP included nearly sixty countries, including Persia/Iran, India, Pakistan and the United States.
The OIHP’s initial focus was on cholera, plague and yellow fever, but it broadened to other communicable diseases, such as malaria, tuberculosis, typhoid fever, meningitis and sleeping sickness. It also provided an international forum for discussion of sanitation, inoculation, notification of tuberculosis cases and the isolation of leprosy cases.
The Versailles settlement following the First World War brought the League of Nations into being, and this established a subdivision, the Health Organization of the League of Nations. The United States was not a member of the League and, as a member of OIHP, it vetoed a proposal to integrate that body into the League. In consequence, in 1921 there were two parallel international agencies: the OIHP and the League’s Health Organization. There was also the Pan-American Sanitary Bureau.
With typhus raging in Russia and the influenza pandemic, an International Health Conference met in London in 1919, but it was attended by only five countries: Great Britain, France, Italy, Japan and the United States. Eventually cooperation improved between the Health Organization and the Office International d’Hygiène Publique. Several new international health activities were initiated by the League, including a Malaria Commission (1923), and a Cancer Commission. These bodies reported on drugs standardization and the unification of pharmacopoeias, malnutrition, typhus, leprosy, medical and public health education.
With the German invasion of Poland in 1939, the League of Nations collapsed and, with it, the Health Organization. In view of this, those who established the World Health Organization (WHO) following the Second World War were insistent that it should not depend on the survival of its parent body, the United Nations. By June 1948, WHO had fifty-five national signatories, and a secretariat in Geneva. In ringing and idealistic tones WHO declared that its goal was ‘a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity’.
WHO launched campaigns for immunization of the world’s children against six diseases: diphtheria, tetanus, whooping cough, measles, poliomyelitis and tuberculosis (with BCG vaccine). After deep political dissensions, it promoted the training of auxiliary health personnel, such as China’s ‘barefoot doctors’ and India’s traditional birth attendants. In 1946 the United Nations General Assembly had also established an International Children’s Emergency Fund (UNICEF), which worked in close cooperation with WHO, directing funds to the provision of food and drug supplies and equipment. UNICEF came under the general supervision of UNESCO (United Nations Educational, Scientific and Cultural Organization). The World Health Organization has attempted in many ways to monitor international disease developments – collecting statistics, improving cooperation, intervening in health crises, and developing plans for Third World health improvements. Other agencies like the International Red Cross have spearheaded famine relief and epidemic interventions.
A further type of international initiative developed: bilateral foreign aid to developing countries, involving investment, agricultural and industrial development, educational programmes and medical schemes. By 1980, bilateral expenditures by most industrialized countries had outstripped multilateral programmes. The United States, for example, was providing health assistance to sixty countries in 1980.
The world health movement scored some signal triumphs, notably the worldwide eradication of smallpox – the first time a disease has been entirely eliminated by human intervention. In 1966, when 10–15 million people in 33 countries still caught smallpox every year and 2 million died, WHO voted for a ten-year mass vaccination campaign to eradicate it once and for all. The multinational WHO team comprised 50 fulltime and 600 temporary medical workers, led by an American, Dr Donald Henderson (b. 1928). Gradually, the disease vanished from the Americas, Africa and the Middle East, but remained in India and Pakistan. In November 1974, a famous victory was celebrated, when Bangladesh’s freedom from smallpox made world headlines. The last smallpox fatality was, ironically, in Birmingham. In 1978, Janet Parker, a British photographer, was working above a research laboratory when the virus escaped through the ventilation system; she caught the disease and died. Today, the virus officially exists in only two laboratories, in Atlanta and Moscow. The rest of the world’s stocks have been destroyed, unless (as is widely suspected) samples are being held in reserve as biological weapons among other biological and chemical weapons stockpiled by the great and would-be great powers.
With smallpox eradication accomplished, the World Health Organization developed programmes to eradicate other major childhood diseases – measles, whooping cough, diphtheria, polio – responsible for the deaths of millions every year. The prospects, however, are less propitious than with smallpox; a combination of features set smallpox apart from most other infectious illnesses, in that it was not especially contagious, had no animal reservoir, was easily recognized and diagnosed, and there was a very effective vaccine.
Measles and polio also have no animal reservoir, but the prospects of success in the short term remain slim. In central Africa few countries achieve a 50 per cent immunization rate of children under one year of age. With twenty-two million children born each year, India immunizes little more than half, though China has succeeded in reaching over 90 per cent of its susceptible population.
Recognition has grown that the application of western medicine, though well-meaning, has often been inappropriate, ineffective, selective or even counter-productive. The West neglected bilharzia until US personnel began to acquire it in the Philippines during the Second World War. Critics of subsequent bilharzia campaigns have condemned the over-reliance on science and on single interventions such as molluscicides to kill the snail vectors, maintaining that the ‘commando’ approach – going in and striking the vectors with disease-specific magic bullets – fails to deal with the deep problems and is at best a temporary expedient.
Others have greater faith in scientific advances directed against specific diseases, preferring these to mass-participatory, broad-scale interventions against afflictions associated with poverty. In the case of schistosomiasis, currently infecting some 150 million people, research on molluscicides continues; the techniques of molecular biology are being used to seek vaccines, and praziquantel has transformed therapy. Supporters of high-tech medicine believe it may prove more practical to attack diseases one by one with vaccines and drugs than to raise living standards sufficiently to reduce the tragic burden of ill health on a broad front.
NEW DISEASES
National and international instability, especially war in Africa and Asia, has disrupted health administration and created the poverty, famines and refugee problems that foment disease: at least twenty million people round the world live in refugee camps. Economic crises, political upheavals, and mass population migrations have militated against the high levels of financial investment, administrative support, organizational infrastructure and international cooperation required to sustain campaigns against world diseases. One outcome was that hopes that malaria might be eradicated by pesticides and drugs were abandoned in the 1980s.
By 1970 malaria had been eradicated from most of Europe and the USSR, North America, several Middle Eastern countries, and much of the Mediterranean littoral. That was a fine public health achievement, but not enough. Seasonal or endemic transmission continued in many tropical countries, and difficulties dogged the implementation of eradication programmes. More worryingly, the parasite steadily became seriously drug-resistant. Resistance to cheap drugs like chloroquine began in south-east Asia, and by the 1980s most malarial parasites in that region were completely resistant. By 1990 chloroquine resistance was common throughout Africa. The parasite changes its genetic makeup more rapidly than the pharmaceutical industry can produce new and tested drugs. Moreover, some of the newer drugs, like Mefloquine, produce severe side-effects.
In many nations it has grown more difficult to implement control programmes because, with population growth, the numbers at risk are increasing dramatically. Not least, air travel has provided a perfect means of speeding infected mosquitoes around the globe, while global warming is extending the habitats suitable for them. At the close of the twentieth century, malaria is still afflicting over 300 million people – far more than in 1960 – and causing between two million and three and a half million deaths a year.
Improvement in some countries balances deterioration in others. Hopes have been held out for the vaccine developed by a Colombian physician, Manuel Patarroyo, which has been chemically synthesized rather than made using a biological process. Others are less sanguine. ‘It is folly for anyone to tell you’, Dr Thomas Eisner (b. 1929) of Cornell University has commented, ‘that we can cope with spreading insect populations. I’m anxious about that kind of technological optimism. We tried to wipe out malaria, and what have we got? DDT-resistant insects, drug-resistant Plasmodium and a vaccine that’s not working.’ With other diseases, too, ‘progress’ proved counter-productive; in Egypt schistosomiasis was worsened by the construction of the Aswan dam, causing a backing-up of stagnant waters. Many factors make the worldwide epidemiological situation more ominous: political and economic destabilization, rocketing population growth, mass migrations, especially of refugees, and the intensification of poverty. In such circumstances, recent decades have brought the resurgence, among rich and poor nations alike, of diseases once believed to be decisively in retreat. Tuberculosis is once more becoming common in the First World and rampant in the Third, multiplying in inner city areas of the USA and to a lesser degree in Europe.
There are several reasons. One is the vulnerability of the immuno-suppressed: TB may be the first sign that a person is HIV positive. Another is low-grade health care for the poor and powerless. In the United States tuberculosis began to appear most visibly among drug addicts and the homeless, many of whom had little access to health care. Such sufferers commonly abandoned treatment before it was complete, creating ideal circumstances for the emergence of drug-resistant strains. By 1984, half of those people with active tuberculosis had a strain of the germ that resisted at least one antibiotic. Ten years later many strains were resisting several of the drugs for treating TB; some resist them all.
Between 1985 and 1991, when tuberculosis increased 12 per cent in the United States and 30 per cent in Europe, it rose 300 per cent in the parts of Africa where TB and HIV frequently go together. Approximately 10 million people have active tuberculosis; it kills 3 million each year, 95 per cent of them in the Third World.
Diphtheria has returned to the former Soviet Union and parts of the old eastern bloc. A development symptomatic of decaying sanitary and public health standards, it also marks the emergence of new strains of micro-organisms until recently believed vanquished by vaccines and antibiotics. Such ‘superbugs’ were to be predicted in the light of evolutionary and adaptational pressures upon bacilli and viruses. This Darwinian process has been hastened by the overuse and misuse of pharmaceuticals.
Cholera is another disease, thought to have been eradicated, which has returned. The Americas had long been free of cholera – the United States had suffered no major outbreak since 1866, South America none since 1895. In 1961, however, the seventh pandemic erupted, initially in Indonesia. Its source was the new El Tor strain of V. cholerae. El Tor spread through Asia and hit Africa in 1970, attacking twenty-nine countries in two years. By 1991, after thirty years, the El Tor outbreak was the longest cholera pandemic ever. It reached Peru in January 1991, probably in water flushed from the ballast tanks of a ship from Asia, in the same way that vibrios briefly infected Mobile Bay in the Gulf of Mexico that year. The epidemic in Lima began after inhabitants ate a local delicacy made from tainted raw fish. The wastes of infected people then entered Lima’s antiquated sewer systems, and within three months 150,000 cholera cases had been reported. The disease reached a new country almost every month during 1991, racing through Chile, Colombia, Ecuador, Bolivia, Brazil, Argentina, and Guatemala. By early 1992, it had affected 400,000 Latin Americans, killing 4000.
In 1993 a new strain of V. cholerae erupted in India and Bangladesh, killing 5000 people. Called strain O139 (it was the 139th serogroup to be identified), it soon spread to south-east Asia and may herald the eighth pandemic. Cholera’s resurgence shows the fragility of barriers against epidemics in a world that has shrunk. The demands of international capitalism for migrant workforces, the opening up of borders, the ebb and flow of peoples due to war and persecution, the increased mobility of affluent air-travelling populations – all these factors mean that formerly contained diseases now have no fixed abode.
These are among the key factors in the new diseases, notably AIDS, spreading since the 1970s. Seemingly originating in sub-Saharan Africa, the transmission of AIDS has been through sexual fluids and blood. It first came to the attention of physicians in the USA in 1981, when it was found that young homosexual men were dying from rare conditions associated with the breakdown of the immune system. A period marked by moralizing and victim-blaming, wild recriminations, political squirming and intensive medical research led by 1983 to the discovery of the virus (HIV), generally held responsible for the condition.
Early hopes for a cure or a vaccine have been frustrated, partly because the human immunodeficiency virus (HIV) mutates even more rapidly than the viral agents of influenza, thwarting development of both vaccines and drugs. Moreover, because HIV breaks down the immune system, sufferers often fall victim to illnesses such as pneumocystis pneumonia, tuberculosis and other opportunistic infections. It was widely predicted in the mid eighties that a global total of ten million cases would be reached by 1996. Though AIDS continues to spread as a pandemic, some of the more apocalyptic projections have been scaled down: the total as of 1996 appeared to be about 1,393,000. However, it remains out of control, exceedingly dangerous (being initially asymptomatic and hence unwittingly transmissible), and at its most severe in those nations of central Africa which are poorest and have the fewest medical services.
It is unlikely that AIDS is a new disease; it probably long possessed its own niche in the African rain-forests. The opening up of hitherto isolated areas to economic exploitation, travel and tourism, and the ceaseless migration of peoples, have probably flushed it out and unleashed it onto a defenceless world. It may also not be a coincidence that AIDS appeared in Africa at the same time that the World Health Organization was eradicating smallpox. During the 1970s members of the WHO were vaccinating young people in central Africa with live smallpox vaccine, re-using needles forty to sixty times. Live vaccines directly provoke the immune system, and can awaken sleeping giants such as viruses.
Other comparably dangerous viral diseases have emerged. These include Lassa, Marburg, and Rift Valley fevers in Africa, and others in South America such as Bolivian haemorrhagic fever. Most, it is supposed, are transferred from animal reservoirs. In 1976, a hitherto unknown virus even more frightening than Marburg and Lassa appeared. Ebola haemorrhagic fever broke out suddenly in Nzara, a town in southern Sudan. The illness began with fever and joint pains; then came haemorrhagic rash, black vomit, kidney and liver failure, seizures, bleeding from all orifices, shock and death. The virus spread to a hospital in a nearby town, where it killed many patients and staff. Of almost 300 victims, more than half died. Two months later, it broke out 500 miles away, in the Ebola River region of Zaire, killing 13 of 17 staff at a mission hospital and spreading to patients, who carried it to more than 50 villages. One nurse was taken for treatment to the country’s capital, Kinshasa, a poor, crowded city of two million with direct air links to Europe – an event which might have triggered a disaster in Zaire or beyond.
The epidemic had run its course by November 1976; in 1979 a smaller outbreak occurred in southern Sudan, and another hospital-centred outbreak occurred in Tandala, Zaire. Thirty-three patients were diagnosed, of whom 22 died. Until now, such outbreaks have remained restricted. But it is conceivable that, like AIDS, these or similar viruses could pounce, and spread throughout the world. As more traditional habitats are invaded, risks exist of major catastrophes from such diseases.
In 1969, the US surgeon general, Dr William H. Stewart, told the American nation that the book of infectious disease was now closed. The West seemed to have conquered epidemics, and epidemiology seemed destined to become a scientific backwater. The manifest shortsightedness of that view is a measure of the medical optimism prevalent a generation ago.
Could there ever be solid grounds for the return of that optimism? There seems little reason to believe that western scientific medicine can single-handedly overcome pathogenic agents. And that must be all the truer as so many medical problems are the product of environmental and social disruptions caused by western economies and nation-states. Fertilizers, hormones, herbicides and pesticides introduce fresh hazards into the food chain; global warming is producing changes in atmospheric gases and affecting the balance of land and sea; the greenhouse effect is enhanced; as a result of the use of chlorofluorocarbons (CFCs), the ozone layer continues to be eroded. Such developments bring new disease threats: radiation and cancer, the spread of malaria and other ‘tropical’ diseases to new habitats, and the migration of microbes.
BEFORE THE NINETEENTH CENTURY the treatment of the mad hardly constituted a specialized branch of medicine. General physicians would handle the insane as part of their regular caseload, and a few acquired a reputation for it; but when doctors wrote about madness it was essentially as part of wider discussions of humoral imbalances or fevers. In late eighteenth-century England, the emergent ‘trade in lunacy’, focusing on private madhouses, lay only partly in doctors’ hands. At the onset of his madness in 1788, George III was first treated by regular court physicians and then by a mad-doctor, Francis Willis, who was a clergyman of the Church of England as well as a doctor of medicine.
On both sides of the Atlantic, it was the community that had traditionally judged individuals to be out of their minds, and the community that principally coped with them. Mad people were a family responsibility; failing that, the parish or town would provide a carer or custodian, or have a maniac put into safe-keeping in a jail, dungeon or house of correction (the German Zuchthaus or the hôpital général in France) or would simply send them packing. In Catholic countries, certain monasteries and religious houses had a tradition of caring for mad people; exorcism was occasionally used and assorted individuals – priests and healers as well as doctors – were reputed to have special personal powers over madness. Until the close of the eighteenth century, madhouses were not primarily medical institutions; most, like Bethlem in London, had their origin as religious or municipal charities. They might have an honorary physician who paid visits, and patients would occasionally be ‘physicked’ – mainly purging and bleeding – with the violent under restraint with manacles and straitjackets.
THE RISE OF THE ASYLUM
All this changed in the nineteenth century, as the development of psychiatric medicine made it first common, then routine, and finally almost inescapable, for the mentally ill to be treated in what were successively called madhouses, lunatic asylums and then psychiatric hospitals, where they increasingly fell under the charge of specialists.
The madhouse was an ambiguous institution. It was long criticized as a gothic horror, all cruelty and neglect, whips and chains. William Hogarth and many other satirists exposed London’s Bethlem Hospital (Bedlam). Yet by the nineteenth century the ‘new’ asylum had become the object of praise as a progressive institution, indeed the one truly effective site for the treatment of insanity, the place, the eminent British psychiatrist John Conolly (1794–1866) maintained, ‘where humanity, if anywhere on earth, shall reign supreme’. In 1837 Dr W. A. F. Browne (1806–85), soon to be head of the Montrose Royal Lunatic Asylum in Scotland, pronounced on What Asylums Were, Are, and Ought to Be. Traditional institutions had been abominations, he told his readers; present ones were better, and the asylum of the future would be positively idyllic:
Conceive a spacious building resembling the palace of a peer, airy, and elevated, and elegant, surrounded by extensive and swelling grounds and gardens. The interior is fitted up with galleries, and workshops, and music-rooms. The sun and the air are allowed to enter at every window, the view of the shrubberies and fields, and groups of labourers, is unobstructed by shutters or bars; all is clean, quiet and attractive. The inmates all seem to be actuated by the common impulse of enjoyment, all are busy, and delighted by being so. The house and all around appears a hive of industry. . . . There is in this community no compulsion, no chains, no whips, no corporal chastisement, simply because these are proved to be less effectual means of carrying any point than persuasion, emulation, and the desire of obtaining gratification . . .
Such is a faithful picture of what may be seen in many institutions, and of what might be seen in all, were asylums conducted as they ought to be.
The nineteenth century, in other words, brought the ‘discovery of the asylum’ – growing faith in the institution, with ceaseless attempts to rectify, refine and perfect it.
For progressives reform meant freeing the insane from chains and other benighted cruelties. In France the physician Philippe Pinel (1745–1826), a devout Roman Catholic who had thought of becoming a priest, was given responsibility for the insane at the Bicêtre in 1793 at the height of the Revolution. Believing the mad behaved like animals because that was how they were treated, he experimented with reducing mechanical restraints – though the dramatic image of Pinel ‘striking the chains off the mad’, once beloved of historians, belongs to legend. The experiment was a huge success, and ultimately most of his mental patients were freed from their irons.
Pinel had picked up some of his ideas about madness from folk wisdom; he was far from being a specialist psychiatrist. A supporter of the dominant Paris patho-anatomical approach, in his own day he was more famous for his Nosographie (1798) [Nosography] and La médecine clinique (1804) [Clinical Medicine] than his Traité médico-philosophique sur l’aliénation mentale (1801) [Medico-philosophical Treatise on Mental Alienation]. His writings reflected the concerns of mainstream medicine: an intense admiration for Hippocrates and an emphasis on hard clinical observations, combined with aversion to speculation.
As to the aetiology of insanity, Pinel stressed mental over physical causes: ‘Derangement of the understanding’, he reflected, ‘is generally considered as an effect of an organic lesion of the brain,’ consequently as incurable, but that supposition was, ‘in a great number of instances, contrary to anatomical fact’. A devotee of Locke and Condillac, and committed to Enlightenment optimism, he made much of psychological factors. Yet, whereas Locke had emphasized the role of delusion and the association of ideas, Pinel’s preferred traitement moral (‘moral treatment’) was directed at the emotions no less than at the intellectual faculties. And, while retaining the traditional division of insanity into melancholia, mania, idiocy and dementia, he developed new disease categories, especially partial and affective insanity. His idea of manie sans délire, later called folie raisonnante, involved a partial insanity (patients who were mad on one subject) in which the personality was warped but the understanding remained sound. Viewing passions as the primary source of this condition, he held that such patients were under the ‘domination of instinctive and abstract fury’. When, in 1835, James Cowles Prichard put affective disorders of this kind into the category of ‘moral insanity’, he acknowledged his debt to Pinel.
What did moral treatment involve, as envisaged by Pinel? The alienist, he maintained, was to prefer the ‘ways of gentleness’. This strategy of hope assumed that insanity did not entail a descent into animality or a total and permanent obliteration of the patient’s sane self; some humanity remained to be worked upon. Humanity was effective, but the alienist might also be obliged to call upon ‘repression’, and his physical presence would radiate authority. Pinel’s contemporary, Francis Willis, was renowned for a piercing stare which imposed mastery, and many mad doctors at this time learned a trick or two from actors and Mesmerists.
During the Reign of Terror, a Parisian tailor expressed reservations about Louis XVI’s execution, a confession which aroused the suspicion of his peers. Misconstruing a conversation he later overheard, the tailor became persuaded that his own death at the guillotine was approaching. Soon this delusion grew into a fixation which haunted him, necessitating his confinement in a lunatic asylum, where he was treated by Pinel, who arranged a kind of occupational therapy: the tailor would patch the other patients’ clothing for a small payment. The patient seemed to recover, but the improvement proved temporary, and he suffered a relapse. This time Pinel opted for a more inventive approach which involved staging a complicated demonstration; he arranged for three doctors, dressed as magistrates, to appear before the tailor. Pretending to represent the revolutionary legislature, the panel pronounced his patriotism beyond reproach, ‘acquitting’ him of any misdeeds. As a consequence of the faux trial, Pinel noted, the man’s symptoms disappeared at once (although, he confessed, they later returned).
More broadly, moral treatment also proposed to end the faulty thinking, which, according to the prevalent Lockean sensationalist psychology, was implicated in insanity. The alienist could attempt to distract the patient from his ‘deluded imagination’, perhaps by diverting his mind, perhaps by engaging him in useful labour; or he could subject him to shock. Similarly, in a tradition loaded with Renaissance precedents, contemporary English asylum-keepers such as Joseph Mason Cox (1762–1822) suggested hiring performers to act out patients’ delusions. The management of disordered passions was to be achieved through face-to-face authority.
Pinel’s impact at the Bicêtre (for men) and its sister institution the Salpêtrière (for women) was signal, and the Traité was quickly translated into English, Spanish and German, spreading his ideas on the moral causation and treatment of insanity, and arguing the case for a reformed asylum milieu and for innovative management techniques. His message was optimistic: organic brain disease might indeed be incurable, but melancholy and mania without delirium would typically respond to moral methods.
Minimizing restraint and replacing cruelty with kindness were also advocated in Florence by Vincenzo Chiarugi (1759–1820), while moral therapy developed independently in Britain, where the York Retreat, opened in 1796, achieved celebrity as the symbol of progress. The Retreat was set up after the mysterious death of a Quaker patient in the York Asylum, a subscription hospital milked and mismanaged by two successive physicians. Outraged, the local Quaker community decided to establish its own charitable asylum. Partly by religious conviction, partly by practical trial and error, it evolved a distinctive therapeutics grounded on quiet, comfort and a supportive family atmosphere, in which the insane were treated like ill-disciplined children. Its success was publicized by Samuel Tuke’s (1784–1857) Description of the Retreat (1813) and later by the testimony which he and his grandfather, William (1732–1822), gave to a Parliamentary Committee on Madhouses (1815): ‘neither chains nor corporal punishment are tolerated,’ it was claimed.
If the York Retreat was presented to the MPs as a kind of heaven, or at least a haven, old Bedlam appeared as hell. At Bethlem, the committee was informed, one patient, James Norris, had been restrained in a shocking manner:
a stout iron ring was riveted round his neck, from which a short chain passed through a ring made to slide upwards and downwards on an upright massive iron bar, more than six feet high, inserted into the wall. Round his body a strong iron bar about two inches wide was riveted; on each side of the bar was a circular projection; which being fashioned to and enclosing each of his arms, pinioned them close to his sides.
Bethlem’s physician, Thomas Monro (1759–1833), somewhat lamely reassured the committee that chains and fetters were ‘fit only for the pauper lunatics: if a gentleman was put in irons he would not like it.’ The hearings revealed comprehensive mismanagement at Bethlem; Monro was a supine absentee. George Wallet, the steward, was asked: ‘How often does Dr Monro attend?’ He replied, ‘I believe but seldom . . . I hear he has not been round the house but once these three months.’ The late surgeon had been an alcoholic (for ten years ‘generally insane and mostly drunk. He was so insane as to have a strait-waistcoat’). Thus the staff ratted on one another and Bethlem was exposed.
Tuke’s Description offered, by contrast, a shining model for early nineteenth-century reformers. As with Pinel, moral therapy was justified in England on the twin grounds of humanity and efficacy. The Retreat was modelled on the ideal of family life, and restraint was negligible. Patients and staff lived, worked and dined together in an environment where recovery was encouraged through praise and blame, rewards and punishment, the goal being the restoration of self-control. The root cause of insanity, physical or mental, mattered little. Though far from hostile to doctors, the Tukes, who were tea merchants by profession, stated that experience showed nothing medicine had to offer did any good.
Medical men, however, grew increasingly involved in psychological medicine and in treating the mad – a response, among other things, to the wider nineteenth-century trend towards specialization and the division of labour in an overstocked medical market. The success of the Tukes’ experiment at York, and its glowing repute, presented opportunities for doctors, since it legitimated the institutionalization of the mad. But it also posed a challenge: after the Tukes’ assertion that medicine achieved nothing, how were doctors to demonstrate that madness was a medical condition for which they possessed special skills?
While accepting much of moral treatment, most nineteenth-century physicians maintained that insanity was ultimately rooted in the organism, particularly the brain; for that reason therapy needed to be incorporated within a medical model, and prescribed by physicians. There followed a dramatic increase in books on insanity, virtually all by doctors; and a growing body of ‘mad-doctors’ emerged, called ‘alienists’.
In England an Act of 1808 permitted local authorities to raise ratepayers’ money to build lunatic asylums for the mad poor, but the response was uneven. Following scandals, the Metropolitan Commissioners in Lunacy were established in 1828, with a brief to inspect London’s lunatic asylums: that five of its fifteen members were medical indicates the still rather ambiguous status of mad-doctors in the public eye. Professional identity was consolidated in 1841 with the Association of Medical Officers of Asylums and Hospitals for the Insane, providing Victorian psychiatry with a platform. It published the Asylum Journal (1853), later renamed the Journal of Mental Science (1858), and in due course became the (Royal) Medico-Psychological Association. In 1971 it became the Royal College of Psychiatrists. The German equivalent, the Association of German Alienists, came into being in 1864; by 1900 it had 300 members, by 1914, 627.
Growing public preoccupation with insanity – the disorder seemed to be worsening, yet high hopes were initially held out for its curability – led to an Act of 1845 compelling each county to erect, at public expense, an asylum for the pauper insane. The development and reputation of English psychiatry became strongly identified with these public asylums; the private ‘trade in lunacy’ had always been suspect, and ‘office psychiatry’ was slow to develop. In these circumstances the leaders of the profession cut their teeth and made their names in the large new public asylums which offered challenges to the ambitious, energetic and talented.
The outstanding early Victorian British psychiatrist was John Conolly (1794–1866), who served as superintendent between 1839 and 1844 at the large public asylum at Hanwell in Middlesex, and was noted for his introduction of non-restraint, though in that he had been anticipated by Robert Gardiner Hill (1811–78) at the Lincoln Asylum. Conolly’s Treatment of the Insane without Mechanical Restraints (1856) advanced the ideal of moral therapy in an institutional context under a presiding physician. His case histories show his awareness of psychological and social factors; he was sensitive to the dangers of improper confinement, and a humane optimism marked his repudiation of the bad old practices of the past.
While stressing moral therapy, Conolly upheld the physical basis of insanity, drawing on the controversial phrenological doctrines developed by the Austrian, Franz Joseph Gall (1758–1828), and J. C. Spurzheim (1776–1832). Phrenology held that the brain was the organ of thought and will, that it determined character, and that its configurations revealed personality. The brain was a jigsaw of separate ‘organs’ (amativeness, acquisitiveness and so forth) occupying specific cortical areas and shaping the personality. An organ’s size governed the exercise of its functions; the contours of the skull signalled the brain configurations beneath, while the overall balance of the ‘bumps’ determined personality. Gall initially identified twenty-seven faculties, and more were added later. A talented anatomist, Gall was in 1805 hounded out of Vienna on the grounds that his doctrines were materialistic.
At bottom, phrenology was organic, but it could be used to underpin moral therapy: while basic psychological traits were held to be innate, the phrenological concept of human nature was flexible enough to allow a role for education in developing the faculties and hence the mind. Phrenology could thus serve as a flexible resource within psychiatry, endowing it with a somatic foundation yet affording therapeutic promise. It captured a wide public following, appearing to be a means to self-understanding and self-improvement; it appealed to many alienists too. Treatments nevertheless remained a rag-bag. Together with moral therapy’s emphasis on socialization and labour, inmates might be subjected to cold baths and showers, isolation, electric shocks and rotating chairs, or they might be purged and bled. Every superintendent had his favourite cocktail of cures, blending the physical and the moral, while in reality most patients spent their time in idleness, inside or outside their cells, or were left to the dubious ministrations of untrained and often thuggish attendants.
American psychiatry developed along comparable lines. The York Retreat provided a model for the Hartford Retreat in Connecticut, founded in 1824, the Friends’ Asylum near Philadelphia (1817), the McLean Hospital in Boston (1818) and the Bloomingdale Asylum in New York (1821). The New World also had its heroes. In 1812 that founding-father of American medicine Benjamin Rush published his Medical Inquiries and Observations upon the Diseases of the Mind; like Pinel, he elaborated notions of partial and affective insanity. Rush’s style of moral therapy employed physical restraint and fear as psychotherapeutic agents, and his zeal for venesection extended from yellow fever to his psychiatric practice. ‘The first remedy under this head should be bloodletting,’ was his advice for mania: ‘It should be copious on the first attack . . . From 20 to 40 ounces of blood may be taken at once . . . The effects of this early and copious bleeding are wonderful in calming mad people.’
The early asylum era in America was marked by Samuel B. Woodward (1787–1850) at the Worcester State Hospital, and Pliny Earle (1809–92) at the Bloomingdale Asylum, both of whom incorporated moral therapy within a medical approach. They were among the founders of the Association of Medical Superintendents of American Institutions for the Insane (AMSAII), set up in 1844, the year that Amariah Brigham (1798–1849) established the American Journal of Insanity, later the American Journal of Psychiatry, which became identified with the AMSAII.
A sign of psychiatry’s coming of age as a medical specialty was forensic psychiatry. It had long been accepted that the insane should not be punished for criminal acts. In 1800 James Hadfield tried to assassinate George III, but the trial was halted when his defence lawyer convinced the judge that Hadfield was labouring under an insane delusion. Thereafter juries in England brought in special verdicts of ‘not guilty by reason of insanity’. Distinguishing criminality from insanity was not traditionally considered a matter of medical expertise. From the early decades of the nineteenth century, however, the insanity defence was increasingly likely to involve medical testimony. Certain mad-doctors, including John Haslam (1764–1844), once of Bethlem, and Forbes Winslow (1810–74), became celebrated for their court-room testimony and their treatises on forensic psychiatry. Expert psychiatric witnesses staked out claims to be able to detect ‘partial’ insanity, particularly monomania, imperceptible to the public. The implication was that only psychiatrists could tell who was really sane – only they could plumb the criminal mind.
During the 1840s, the insanity plea became a matter of dispute. The trial in 1843 of Daniel M’Naghten for the murder of Edward Drummond, Sir Robert Peel’s private secretary, was stopped on the grounds of insanity. After the controversial acquittal, the Law Lords were asked to draw up guidelines to form the legal basis for criminal insanity and responsibility. The resulting M’Naghten Rules (1844) established the insanity defence as the criminal’s inability to distinguish right from wrong. This legalistic formula (‘the nature and quality of the act’) scotched the claim being advanced by post-Pinelian psychological medicine for the recognition of disorders of emotion and volition (‘irresistible impulse’) without disorder of the understanding, since the jurists saw in the psychiatrists’ notion of partial insanity a threat to the idea of free will, which forms the sheet anchor of concepts of guilt and punishment. Debates over the insanity defence thus expressed conflicts between legal and psychiatric models of consciousness and conduct. The boundaries between the bad and the mad remained contested, as did the public role of psychiatry.
By mid century the rise of professional bodies, journals and legislation concerning the insane marked the high point of asylum psychiatry in Britain and America. Progressive alienists pinned therapeutic faith on the architecture and atmosphere of their asylums, trusting to order and organization, discipline and design, and their leadership qualities. Asylums were prized as scientific, humane, cost-effective, curative institutions.
Similar optimism prevailed in France, where from 1838 each département was required to erect a public asylum for the pauper insane. Provision was to be made for the segregation of noisy from quiet patients, dirty from clean, violent from peaceful, and acute from chronic; patients would be removed from their ‘pathological’ home environment; and ‘moral measures’ (work, re-education and self-discipline) would be prominent. The 1838 code spelling out these requirements incorporated the recommendations of Pinel’s pupil, Jean-Etienne Dominique Esquirol (1772–1840), who had travelled extensively observing psychiatric institutions, campaigning for improvement, and had begun formal psychiatric lectures in 1817. His Des maladies mentales (1838) [Mental Maladies] was the outstanding psychiatric statement of the age. Experience led Esquirol to a remodelling of psychiatric thinking. While asserting the ultimately organic nature of psychiatric disorders, he documented their social and psychological triggers, developing the concept of ‘monomania’ to describe a partial insanity identified with affective disorders, especially those involving paranoia, and framing conditions like kleptomania, nymphomania, pyromania and other forms of compulsion. Advocating the asylum as a ‘therapeutic instrument’, he became an authority on its construction, and planned the National Asylum of Charenton, of which he became director.
Esquirol had pet ideas about the asylum’s therapeutic efficacy. He advanced the theory of isolation, ‘removing the lunatic from all his habitual pastimes, distancing him from his place of residence, separating him from his family, his friends, his servants, surrounding him with strangers, altering his whole way of life’. As with other forms of moral treatment, this was grounded in sensationalist psychology. A radical change in environment supposedly shook up the pathological ideas entrenched in the disturbed person, leaving the psychiatrist in a position to provide new stimuli to establish sane ideas.
With their psychopathological doctrines and influential accounts of illusion, hallucination and moral insanity, all based on impressive case experience, Esquirol and his pupils exercised dominance over French psychiatry, mirroring French hospital medicine’s emphasis on close clinical observation and routine autopsy. His main students were E. J. Georget (1795–1828), who wrote on cerebral localization; Jean-Pierre Falret (1794–1870), author of a classic description of circular insanity; Louis Calmeil (1798–1895), who described dementia paralytica; J. J. Moreau de Tours (1804–84), one of the pioneers of degenerationism; and Jules Baillarger (1809–90), who worked on general paresis and is best remembered for his account (1854) of the manic depressive cycle (folie à double forme), characterized by a succession of attacks of mania and depression. Baillarger’s claim to its discovery was hotly disputed by Falret, who called the disease folie circulaire and held that a lucid period interrupted the two pathological states. The Esquirolians radically transformed contemporary classification and diagnosis of mental disorder.
THE ASYLUM UNDER FIRE
The asylum movement in Europe and the US established a lasting scheme of care, but other models were also tried. Communities were advocated along the lines of the long-established Belgian therapeutic village of Geel, providing a family and domestic setting for the mentally disturbed; and a greater diversity of mental institutions sprang up after 1850, including polyclinics and private ‘rest homes’ and ‘nerve clinics’ for the affluent, which sought to avoid the stigmatizing connotations of madness. Despite governmental licensing and regular inspections, abuses remained. In England the Alleged Lunatics’ Friend Society, founded by the ex-patient John Perceval (1803–76), aimed to expose and rectify these.
Also vigilant in this field was the crusading American Dorothea Dix (1802–87). The daughter of a fanatical Christian, she took up philanthropic activities, and in 1841, finding several insane prisoners dumped in an unheated room at the East Cambridge (Massachusetts) House of Correction, she took the issue to court. She then spent two years investigating the condition of the insane in jails, almshouses and houses of correction, while urging the founding of proper asylums for the insane. Subsequently she extended her campaign to other states, visiting institutions, exposing evils, and soliciting the support of sympathetic politicians. By 1852, seventeen states had founded or enlarged hospitals for the insane. On a visit to Europe, she exposed abuses that helped to prompt the setting up of a royal commission to examine the treatment of the insane in Scotland.
Asylum abuse proved an endemic disorder. In England a series of Madhouse Acts passed from 1774 onwards was intended to put an end to such iniquities through certification procedures, but scandals throughout the nineteenth century leave no doubt that the wrongful confinement of those protesting their sanity, and outright malicious imprisonment, remained common, sometimes coming to a head in causes célèbres, like that of Louisa Lowe – a case which underlines the vulnerability of women. Married in 1842 to the Revd George Lowe, in 1870 she moved out. When her husband demanded she return home, she refused; he had her kidnapped and taken to Brislington House asylum in Bristol. Though the certificates authorizing her admittance were found to be invalid, the proprietor Dr Fox detained her for a further two months. She brought an action against him for false imprisonment, but the lord chief justice ruled it was not a criminal offence to incarcerate a sane individual, provided the intent was not malicious.
There she remained until February 1871, when she was moved to Lawn House, Hanwell, the private asylum of the distinguished Dr Henry Maudsley, who diagnosed her as ‘suffering from delusions’. During a visit by the commissioners in lunacy in March, she demanded to be set free, but on her request that a jury be convened to decide her fate, the commissioner, Dr Lutwidge, retorted, ‘It is very possible but very undesirable.’ In October the commissioners again visited Lawn House. Mrs Lowe again begged to be released, but they replied that it would be ‘contrary to all etiquette’, as a suit had been brought by her husband to gain control of her property, then yielding £500 p.a. Overall, she was detained for eighteen months. Warning ‘that many sane, and still more merely eccentric and quite harmless persons, are languishing in the mad houses’, she founded the Lunacy Law Reform Association to protect future family victims, and documented her plight in Quis Custodiet Ipsos Custodes? (1872).
Further asylum evils were exposed around 1900 by Clifford W. Beers (1876–1943), a young Yale graduate and businessman, after a severe mental breakdown led to his being institutionalized first at a private hospital and later in a public institution. ‘I left the state hospital in September, 1903’, he wrote in his immensely influential A Mind That Found Itself (1908), ‘firmly determined to . . . organize a movement that would help to do away with existing evils in the care of the mentally ill, and whenever possible to prevent mental illness itself’. Beers claimed to have been neglected and tyrannized by a regime which was positively vicious. The Mental Hygiene Society, which he established in 1909, turned into the National Committee for Mental Hygiene, a campaigning force for mental health, first in the United States and then more widely, pledged to improve the standard of care available for the mentally ill, raise the level of professional psychiatric training, and promote mental hygiene through public education.
Meantime, other voices had been criticizing the asylums, not for their lapses into cruelty but on the grounds that they were misconceived and counter-productive. Around 1840 the American physician, Edward Jarvis (1803–84), visiting Conolly’s Hanwell Asylum, just outside London, had this to say on how the ideals of that asylum were being undermined by size and economy:
This is a huge establishment. . . . Here hundreds are gathered and crowded. The rulers prefer such large asylums. They think them economical. They save the pay of more superintendents, physicians, and other upper officers; but they diminish the healing powers of the hospital. . . . The economy is not wise, or successful.
The National Association for the Promotion of Social Science, founded in 1856 by philanthropists, statisticians and reformers, pondered as early as 1869 whether
we cannot recur, in some degree, to the system of home care and home treatment; whether, in fact, the same care, interest, and money which are now employed upon the inmates of our lunatic asylums, might not produce even more successful and beneficial results if made to support the efforts of parents and relations in their humble dwelling.
At roughly the same time, the asylum superintendent, John Arlidge (1822–99), concluded, ‘a giant asylum is a giant evil.’
Asylums provided a mountain of information to support new models and classifications of mental diseases, enabling diagnosticians to build up clearly defined pictures of psychiatric diseases, capable of being recognized symptomatically. General paresis was described in 1822 by Antoine Laurent Bayle (1799–1858). Although the causative micro-organism of syphilis remained unknown, the neurological and psychological features of general paresis (euphoria and expansiveness), combined with its distinctive organic changes as revealed by autopsy, strengthened the hand of those who believed that psychiatric diseases could be described using the techniques employed by Laennec and Louis with tuberculosis. Early volumes of the Annales médico-psychologiques reflect this clinico-pathological orientation.
The confinement and isolation of the mentally ill created opportunities for the accumulation of observations on patient behaviour and symptoms, leading to new descriptions and illness classifications. Epileptics, for instance, began to be segregated from the insane, and in 1815 Esquirol organized a special hospital for them, fearing they would worsen the mentally disturbed. By 1860, special epileptic hospitals had been founded in France, Britain and Germany, and in 1891 the first American hospital was established in Gallipolis, Ohio. Esquirol produced an improved description of petit mal; Calmeil described ‘absence’, distinguishing between passing mental confusion and the onset of a grand mal attack, and W. R. Gowers (1845–1915) clarified the ‘aura’. A major therapeutic step dates from 1912, when phenobarbital was found effective in suppressing epileptic seizures.
There were comparable developments in asylums for so-called ‘idiots’. Idiotism had long been accepted as hopeless: ‘absolute idiocy admits of no cure,’ noted the nineteenth-century psychiatrist George Man Burrows (1771–1846). But Enlightenment optimism led some physicians to believe that much could be done with the mentally deficient. Inspired by Condillac’s utilitarian psychology and by experiments with blind, deaf and feral children, special schools were set up, first of all in France. The great pioneer was the physician and educator Edouard Séguin (1812–80), who, like so many others, migrated to the New World to realize his utopian dreams. He was confident that colonies of defectives, headed by valiant paternalistic pedagogues, could be disciplined into normalcy, ultimately rejoining society as productive workers.
Training also became the keynote in the major idiots’ asylums established in mid Victorian England. Five rural colonies were created around the middle of the century, most famously Earlswood near Redhill in Surrey. By herding together large numbers of ‘abnormal’ people, these in turn prompted the framing of new medical categories. The mongoloid type, or Down’s Syndrome, was first identified at Earlswood by Dr John Langdon Down (1828–96). Hopes were expressed that stimulating institutional environments would lead to mental improvement, but in due course such colonies became essentially segregative, isolating ‘defectives’ to stop them from breeding. In a notorious American legal ruling of 1927, Justice Oliver Wendell Holmes (1841–1935) declared that ‘three generations of imbeciles are enough’.
GERMAN PSYCHIATRY
Germany also developed prominent lunatic asylums, notably Illenau in Baden, where Richard von Krafft-Ebing (1840–1902) gained early clinical experience. But, unlike Britain or France, German psychiatry was chiefly associated with the universities and their research-oriented medicine. Perhaps for this reason, it became embroiled in sharper debates between rival organic and psychological traditions. At the beginning of the century, Johann Christian Reil (1759–1813) developed a holistic approach, somatically based yet indebted to Romanticism, and with an emphasis on psychodynamics. His Rhapsodien über die Anwendung der psychischen Curmethode auf Geisteszerrüttungen (1803) [Rhapsodies on the Use of Psychological Treatment Methods in Mental Breakdown] offered a version of moral treatment: the charismatic alienist with a powerful personality commanding the patient’s imagination; a staff trained in play-acting to further the alienist’s efforts to expel the patient’s fixed ideas – all this combined with salutary doses of therapeutic terror (sealing-wax dropped onto the palms, whipping with nettles, submersion in a tub of eels) to overwhelm the patient. His Magazin für die psychische Heilkunde (1805) [Journal for Psychological Therapy] helped put German psychiatry on the map.
The psychological approach to madness was developed by J. C. A. Heinroth (1773–1843) and Karl Ideler (1795–1860). This also drew heavily on Romanticism, with its speculative metaphysics and empathetic exploration of the inner consciousness. A deeply pious Christian who taught psychiatric medicine at Leipzig University, Heinroth viewed mental disorder in religious terms, and the aetiological explanations offered in his Lehrbuch der Störungen des Seelenlebens (1818) [Textbook of Mental Disturbances] disparaged physical causes. He compared insanity and sin; both were voluntary and hence transgressive renunciations of the rational free will which was God’s gift. Moral treatment must expose the madman to the healthy and religious personality of the alienist. ‘In the great majority of cases,’ he insisted, ‘it is not the body but the soul itself from which mental disturbances directly and primarily originate.’ He proposed a combination of gentle therapies with severe methods (shock, restraint and punishments) for intractable conditions. Each case required individual diagnosis and prescription, for which detailed case histories were essential.
Other German psychiatrists deplored the speculative fantasies of such ‘psychicists’, which they associated with the anti-scientific tendencies of Romanticism, cultivating instead an organic orientation which looked for its model to the prestigious science of physics. In this Maximilian Jacobi (1775–1858) was the key early figure, but the main aetiological assumptions were laid down in J. B. Friedreich’s (1796–1862) Versuch einer Literärgeschichte der Pathologie und Therapie der psychischen Krankheiten (1830) [Attempt at a History of the Literature of the Pathology and Therapy of Psychic Illnesses]. Rebutting Heinroth, Friedreich stressed material causes. The Viennese physician Ernst von Feuchtersleben (1806–49) sought to weave psychic and somatic strands into a personality-based psychiatry, aiming for an ambitious synthesis of neurophysiology, psychology and psychotherapeutics. To Feuchtersleben, developing something akin to the modern concept of ‘psychosis’, ‘psychopathy’ meant a disease of the whole personality.
The core tradition of German university psychiatry was founded by Wilhelm Griesinger (1817–68). Sympathetic to the new materialistic currents emerging in the physiology of Helmholtz and du Bois-Reymond, Griesinger boldly asserted that ‘mental illnesses are brain diseases’. His insistence that ‘every mental disease is rooted in brain disease’ encouraged brain pathology research aimed at finding the physical location of mental disorders. Yet even Griesinger conceded in Pathologie und Therapie der psychischen Krankheiten (1845) [Pathology and Therapy of Psychiatric Diseases] that not all pathological states were accompanied by detectable cerebral lesions. Mental disease, in his view, was typically progressive, moving from depressive states to more behaviourally and cognitively disruptive conditions. The underlying somatic abnormality would begin with excessive cerebral irritation, leading to chronic, irreversible patho-anatomical brain degeneration, and ending in the disintegration of the ego, common in chronic mania and dementia.
Griesinger’s pronouncements defining mental disorder as brain disease (‘Psychological diseases are diseases of the brain’; ‘insanity itself . . . is only a symptom’) had a dogmatic ring. But he qualified them, and his aetiology was multifactorial. Among predisposing and precipitating causes of mental disease, he mentioned heredity, brain inflammation, anaemia, head injury and acute febrile disease; however, he also discussed ‘psychical causes’. His stress upon the transition from normal to pathological psychic processes, and on the progressive course of psychiatric illnesses was later taken up by Kraepelin. For Griesinger, belief in the somatic origin of such disorders was meant to give hope to science and restore dignity to patients traditionally stigmatized by a diagnosis of lunacy.
Griesinger set academic psychiatry on course, pressing for the congruence of psychiatry and neurology and for the establishment of neuropsychiatric clinics – campaigns conducted in the Archiv für Psychiatrie, which he established in 1868. His Berlin successor, Carl Westphal (1833–90), continued in his tradition of brain psychiatry, publishing monographs on diseases of the brain and spinal cord. After 1850, this kind of university psychiatry prospered in German-speaking Europe, supported by the twin pillars of German medical education, the polyclinic and the research institute. Unlike asylum superintendents in England, the top university psychiatrists did not share their patients’ lives night and day, and their orientation was hardly therapeutic. The goal of university psychiatry was more the scientific understanding of psychiatric disorders through systematic observation, experimentation and dissection.
Following Griesinger, Theodor Meynert, Carl Wernicke and other scions of this academic tradition sought to create a rigorous psychiatry wedded to neurology and neuropathology and rooted in scientific materialism. A product of the illustrious Vienna medical school, Meynert (1833–92) spent his entire career there, first as assistant to Rokitansky, and from 1870 as professor of psychiatry. Essentially a neuropathologist and more dogmatically somaticist than Griesinger, he subtitled his textbook, Klinik der Erkrankungen des Vorderhirns (1884) [A Clinical Treatise on the Diseases of the Forebrain], in protest against what he condemned as the wishy-washy mentalistic connotations of ‘psychiatry’. He conducted distinguished research in neuroanatomy, and his laboratory established his reputation and attracted students including Forel and Wernicke. In practice, however, his organic programme ran into grave problems, and he was reduced to concocting various vague entities, such as the primary and secondary ego, to describe his patients’ behavioural and thought disorders.
Wernicke (1848–1905) represents the apogee of German neuropsychiatry. His lifelong pursuit of cerebral localization centred around a consuming interest in aphasia. He helped establish the concept of cerebral dominance and delineated the symptoms following various sorts of brain damage in two extremely influential texts: his three-volume Lehrbuch der Gehirnkrankheiten (1881–3) [Manual of Brain Diseases], and his Grundriss der Psychiatrie (2nd ed., 1906) [Foundations of Psychiatry].
DEGENERATION
While claiming their science could provide explanations of the pathophysiological and neurological mechanisms of psychiatric disorders, organicists were far from sanguine about cures. This pessimism, reflected elsewhere, was in part a product of their patient populations; asylums everywhere were filling with patients with intractable and seemingly irreversible organic disease, notably tertiary syphilitics. Therapeutic pessimism bred a new hereditarianism. Advocates of moral therapy and asylum reform had expressed confidence in early treatment and environmental manipulation; by mid century, however, the accumulation of long-stay cases eroded hopes, and attention to family backgrounds suggested inherited psychopathic traits. These observations were systematized into a degenerationist model by two French psychiatrists, Esquirol’s pupil J. Moreau de Tours (1804–84) and Benedict Augustin Morel (1809–73), and in England by Henry Maudsley.
Physician to large asylums in Mareville and Saint-Yon, Morel turned degeneration into an influential explanatory principle in his Traité des dégénérescences physiques et morales (1857) [Treatise on Physical and Moral Degeneration]. Produced by both organic and social factors, hereditary degeneration was seen by him as cumulative over the generations, descending into imbecility and finally sterility. A typical generational family history might pass from neurasthenia or nervous hysteria, through alcoholism and narcotics addiction, prostitution and criminality, to insanity proper – and finally utter idiocy. Once a family was on the downhill slope the outcome was hopeless. Alcoholism – a concept coined in 1852 by the Swede Magnus Huss (1807–90) – provided a model for a degenerative disease, since it combined the physical and the moral, was widespread among pauper lunatics, and supposedly led to character disintegration. Valentin Magnan (1835–1916) set Morel’s theories into the mould of evolutionary and reflex biology with his idea of ‘progress or perish’; his opinions were dramatized in Emile Zola’s novels, particularly L’Assommoir (1877), in which Magnan appears as an asylum doctor.
These French attitudes caught the mood of the times, echoing bourgeois fears in a mass society marked by proletarian unrest and socialist threats. Griesinger acknowledged his debt to Morel, and Meynert, Wernicke and other brain psychiatrists documented the hereditarian dimensions of insanity. Meynert’s Viennese successor, Richard von Krafft-Ebing, was an exponent of degenerationist thinking. Best known for his Psychopathia Sexualis (1886) [Sexual Psychopathology], a study of sexual ‘perversion’ and ‘inversion’ (that is, homosexuality), he classified various disorders as psychische Entartungen, constitutional degeneration.
Paul Möbius (1854–1907) and Max Nordau (1849–1923) helped to popularize degenerationist thought. Exploring the presumed connexions between genius and insanity, Möbius was intrigued by hypnosis, hysteria and the relations between sexual pathology and madness, publishing widely on sexuality and gender differences. Women were slaves to their bodies: ‘Instinct makes the female animal-like.’ High intelligence in women was so unusual as to be positively a sign of degeneration. Producing a classification of psychiatric disorders admired by Emil Kraepelin (1856–1926), Möbius endorsed the notion of hereditary degeneration, though his particular fascination was with those called ‘dégénérés supérieurs’, i.e. individuals of abnormally high intelligence. He produced ‘pathographies’ which examined the relationships between genius and insanity through the lives of men such as Rousseau and Goethe.
Morel’s ideas were taken up in Italy by the psychiatrist and criminologist Cesare Lombroso (1836–1909), who viewed criminals and psychiatric patients as degenerates, evolutionary throwbacks often identifiable by physical stigmata: low brows, jutting jaws and so forth. Comparable physical evidence of degenerative taints could be found in non-European races, in apes and in children.
A more optimistic reading of tendencies towards degeneration was predictably taken in America. There George M. Beard (1839–83) popularized the concept of ‘neurasthenia’, regarded as a kind of nervous weakness produced by the frantic pressures of advanced civilization and diagnosed as an early stage of progressive hereditary degeneration, all of which proved a drain on the individual’s finite reserve of ‘nerve force’. Beard’s ideas were developed further by Silas Weir Mitchell of Philadelphia (1829–1914), who introduced the ‘Weir Mitchell treatment’ (bed rest and strict isolation) as a way of overcoming such tendencies. But American thinking had its darker side too; the filling up of asylums brought fears that insanity was epidemic. The trial in 1881 of Charles Guiteau, the murderer of President Garfield, spotlighted issues of heredity, criminality and moral insanity, since psychiatrists based defence testimonies on their conviction that the assassin was a degenerate. By 1900 lobbies were urging compulsory confinement, sterilization and other eugenic measures, as well as the use of psychiatry for immigration control.
Late nineteenth-century psychiatry also came under other scientific influences. The neurologist John Hughlings Jackson (1834–1911) used the evolutionary philosophy of Herbert Spencer (1820–1903) as the basis for his accounts of nervous function and dysfunction, and Henry Maudsley (1835–1918) developed an outlook heavily influenced by neurology and evolutionary biology. Inspired by the institutional achievements of Kraepelin, and finding asylum psychiatry hamstrung by routine, Maudsley laid the foundations for the Maudsley Hospital in his will. It opened in London in 1922. Its medical school, which became the Institute of Psychiatry, was intended as a site where graduate psychiatric training and research could be pursued in conjunction with a large psychiatric hospital.
Kraepelin’s work was the culmination of a century of descriptive clinical psychiatry and psychiatric nosology. Building on Karl Kahlbaum’s (1828–99) conception of the disease entity as distinct from the patient’s psychopathological state, Kraepelin approached his patients as symptom carriers, and his case histories concentrated on the core signs of each disorder. Combining earlier descriptions by Kahlbaum (catatonia), Morel (démence précoce) and Ewald Hecker (1843–1909) (hebephrenia) into a single category, he formulated the idea of dementia praecox, a degenerative condition, which was the forerunner of schizophrenia, distinguishing it from manic-depressive psychoses (Falret’s old ‘circular insanity’). Kraepelin’s classification remains the framework for much modern psychiatry – indeed, his textbook can be seen as the forerunner of today’s Diagnostic and Statistical Manuals. His interest in the natural history of mental disorders involved him in the entire life histories of his patients in a longitudinal perspective which privileged prognosis.
A follower of the great experimental psychologist Wilhelm Wundt (1832–1920), Kraepelin also pioneered psychological testing in psychiatric patients and made quantitative correlations between bodily state and mental disorders. Fostering research, his Munich clinic became an international attraction and the inspiration for similar establishments elsewhere. Among his colleagues was Alois Alzheimer (1864–1915), whose researches into dementia led to the formulation of Alzheimer’s disease.
Heredity played a part in his conceptual framework, but Kraepelin was critical of the wider theory of degenerationism – one point he had in common with Freud, though in general they viewed each other warily. With only nominal expectations about the efficacy of treatment, Kraepelin was gloomy about the outcome of major psychiatric disorders, especially dementia praecox. By 1900 the optimism for curability with which the nineteenth century had opened had almost entirely run into the sands; the asylums had filled up with patients for whom cures were no longer expected. ‘We know a lot and can do little,’ commented Georg Dobrick, a German asylum doctor, in 1910.
It is not surprising that psychiatry seemed to many to turn into society’s policeman or gate-keeper, designed to police the boundaries between the sane and the insane, the normal and the pathological. When such attitudes linked up with degenerationism and eugenics, they could issue in a psychiatric politics which judged the lives of the mentally ill not ‘worth living’ and a threat to society; from the 1930s, Nazi psychiatry decided that schizophrenics as well as Jews had to be eliminated.
To some degree in reaction against the hopelessness of asylum psychiatry and the dogmatism of the somaticists, a new dynamic psychiatry appeared in the late nineteenth century. Its historical roots include Mesmer’s therapeutic use of ‘animal magnetism’, later called hypnotism by the Manchester surgeon James Braid (1795–1860). With its dissociations and apparent automatism of behaviour, hypnotism unveiled hitherto hidden dimensions and layers of the personality and raised new issues about the will, unconscious thinking and the unity of the self.
Linked to scientific exploration of mediums and spiritualism, the dynamic aspects of the psyche were investigated by physicians such as A. A. Liébeault (1823–1904) and H. M. Bernheim (1840–1919) in Nancy, while Jean-Martin Charcot (1825–93) made hypnotism central to his hysteria studies. At his clinic at the Salpêtrière, the giant public hospital in Paris, Charcot demonstrated the diagnostic potential of hypnotism and developed ideas of the aetiology of hysteria and related neuroses. Addressing its manifestations, he undertook a massive clinical scrutiny of hysterical pathology, exploring motor and sensory symptoms, bizarre visual abnormalities, tics, migraine, epileptiform seizures, somnambulism, hallucinations, word blindness, alexia, aphasia, mutism, contractures, hyperaesthesias, and numerous other deficits – and had some measure of success in mapping hysteria onto the body. He was delighted, for instance, to discover hysterogenic points, zones of hypersensitivity which, when fingered, provoked an attack; it confirmed his conviction of the reality of ‘latent hysteria’. Yet his early faith that scientific investigation into hysteria would systematically reveal demonstrable neurological substrates proved a forlorn hope.
Charcot made extensive use of hypnosis as a diagnostic device to uncover hysteria. What he failed to realize was that the hypnotic and hysterical behaviours of his ‘star’ hysterical performers were artefacts produced by his own personality and expectations within the theatrical and highly charged atmosphere of the Salpêtrière, not objective phenomena waiting to be scientifically observed. He deceived himself into thinking his patients’ behaviours were real rather than the products of suggestion. The months Sigmund Freud spent in Paris in 1885 were crucial to his development, and psychoanalysis has never been able to escape the charge that, as with Charcot, its ‘cures’ are largely the product of suggestion.
FREUD
Trained in Vienna in medicine and physiology, Freud (1856–1939) received his MD in 1881, specializing in clinical neurology. Working with the Viennese physician Josef Breuer (1842–1925), he became alerted to the affinities between hypnosis, hysteria and neuroses. Breuer told Freud about a patient, ‘Anna O.’, whose bizarre hysterical symptoms he treated from 1880 by inducing hypnotic states during periods of ‘absence’ (dissociation), systematically leading her back to the onset of each symptom, one by one. On re-experiencing the precipitating traumas, the relevant hysterical symptom vanished. The time Freud spent with Charcot gave him a theoretical framework for understanding Breuer’s experiences – not least a hint of the sexual origin of hysteria (‘C’est toujours la chose génitale,’ Charcot confided to him). When he returned to Vienna, Freud and Breuer began a close collaboration that resulted in 1895 in the publication of their joint Studien über Hysterie [Studies on Hysteria].
In the early 1890s Freud developed his theory that neurosis stemmed from early sexual traumas. His hysterical female patients, he then maintained, had been subjected to pre-pubescent ‘seduction’ – that is, sexual abuse by the father; repressed memories of such assaults were the triggers of their trouble. He spelt out this ‘seduction theory’ to his friend Wilhelm Fliess in May 1893, and during the next three years his enthusiasm for his shocking hypothesis mounted until, on 21 April 1896, he went public in a lecture in Vienna on the aetiology of hysteria.
The next year, however, on 21 September 1897, he confessed to Fliess, ‘I no longer believe in my neurotica’ – that is, the seduction theory. Freud had convinced himself that his patients’ seduction stories were fantasies, originating not in the perverse deeds of adults but in the erotic wishes of infants. The collapse of the seduction theory brought the birth of the idea of infantile sexuality and the Oedipus complex – first disclosed to Fliess a month later:
I have found love of the mother and jealousy of the father in my own case too, and now believe it to be a general phenomenon of early childhood . . . if that is the case, the gripping power of Oedipus Rex, in spite of all the rational objections to the inexorable fate that the story presupposes, becomes intelligible. . . . Every member of the audience was once a budding Oedipus in phantasy. . .
Up to his very last publication in 1939 Freud held to its importance: ‘If psycho-analysis could boast of no other achievement than the discovery of the repressed Oedipus complex, that alone would give a claim to be included among the precious new acquisitions of mankind.’ The twin pillars of orthodox psychoanalytic theory – the unconscious and infantile sexuality – thus emerged from Freud’s volte-face; had the seduction theory not been abandoned, psychoanalysis would not exist.
In due course, tensions separated Freud from Breuer, who favoured physiological hypotheses and the use of hypnotic techniques. Freud moved more in the direction of psychological mechanisms, abandoning hypnosis and developing psychoanalysis. He advanced challenging theoretical concepts such as unconscious mental states and their repression, infantile sexuality and the symbolic meaning of dreams and hysterical symptoms, and he prized the investigative techniques of free association and dream interpretation, two methods for overcoming resistance and uncovering hidden unconscious wishes.
In his Introductory Lectures (1916–17), Freud promoted the notion of conversion as a ‘puzzling leap from the mental to the physical’, and continued to describe hysterical symptoms as symbolic representations of unconscious conflicts. During the First World War, his ideas about the psychogenesis of hysterical symptoms were applied to shellshock and other war neuroses: soldiers displaying paralysis, muscular contracture, and loss of sight, speech and hearing with no apparent organic basis were regarded as suffering from conversion hysteria. Though in principle still committed to scientific biology, in reality Freud explained psycho-dynamics without reference to neurology.
His ideas became central to the twentieth-century understanding of the self – among them the dynamic unconscious and the insights into it afforded by free association; the meaning of dreams; repression and defence mechanisms; infantile sexuality; the sexual foundations of neurosis and the therapeutic potential of transference.
In his later years, while continuing to elaborate his theories of individual psychology – for example, developmental phases, the death instinct, the ego, superego and id – Freud extended his insights into the social, historical, cultural and anthropological spheres, including psychohistorical studies of Leonardo da Vinci, the origins of incest taboos, patriarchy and monotheism. He saw himself as a natural scientist, but his ideas enjoyed their main impact in other fields. With its new view of the personality, Freudian psychoanalysis changed the self-image of the western mind.
PSYCHOANALYSIS
Freud visited the New World in 1909, and psychoanalysis proved particularly influential in the United States, with a number of pioneer analysts emigrating there, notably Alfred Adler (1870–1937) and Helene Deutsch (1884–1982).
Adler is remembered largely for his theory of inferiority: a neurotic individual would over-compensate by manifesting aggression. After participating in the psychoanalytic circle of Sigmund Freud in its early years, he broke with orthodox Freudianism and elaborated his own theory in Über den nervösen Charakter (1912) [The Nervous Character]. Later, he directed his psychological theories to exploring the social relations between individual and environment, stressing the need for social harmony as the way to avoid neurosis. His views became part of an American commitment to social stability based on individual adjustment and adaptation to healthy social forms.
There was a strong Swiss tradition in psychiatry. Eugen Bleuler (1857–1939) at Burghölzli, the Zürich psychiatric hospital, deployed psychoanalytic theories in his descriptions of schizophrenia, a term he coined for the illness idea he developed from Kraepelin’s dementia praecox. But it was Carl Jung’s (1875–1961) influence which was predominant, especially after he broke with Freud in 1912 and developed ‘analytical psychology’, a less sexual vision of the unconscious psyche. A pastor’s son, Jung trained in medicine in his native Basel before specializing in psychiatry. After meeting Freud in 1907, he was for a few years the master’s favourite son. Schisms developed, however, and worsened in 1912 when Jung’s Wandlungen und Symbole der Libido [The Psychology of the Unconscious] repudiated many of Freud’s key theories (e.g., the sexual origin of the neuroses); by 1914 the rift was complete.
Jung claimed that his analytical psychology offered a more balanced view of the psyche and its different personality types than Freud’s, developing the concepts of the extravert and introvert in his Psychologische Typen (1921) [Psychological Types]. He prized a healthy balance of opposites (animus and anima, the male and female sides of the personality), and was fascinated by the symbiosis of thought, feeling and intuition. He maintained the existence of a ‘collective unconscious’, stocked with latent memories from mankind’s ancestral past. Studies of dreams, of art and anthropology led to a fascination with archetypes and myths (e.g., the earth mother), which he believed filled the collective unconscious, shaping experience, and, as stressed in his final book, Der Mensch und seine Symbole (1964) [Man and His Symbols], forming the springs of creativity. With its vision of the self finding realization in the integrated personality, Jung’s analytical psychology remains an inspiration to ‘new age’ thinking as a personal philosophy of life.
In France Pierre Janet (1859–1947) elaborated theories of personality development and mental disorders which long dominated French dynamic psychiatry. Exploring the unconscious, Janet has left sensitive clinical descriptions of hysteria, anorexia, amnesia and obsessional neuroses, and of their treatment with hypnosis, suggestion and other psychodynamic techniques. He proposed a general theory to account for mental phenomena, correlating hysteria with what he called ‘subconscious fixed ideas’ and proposing to treat it with ‘psychological analysis’. There were many similarities between Janet’s views and early Freudian psychoanalysis.
Psychoanalysis spread rather slowly to the United Kingdom. There was, perhaps, a deep-rooted Anglo-Saxon distrust of depth psychiatry. Mental disturbances and other personal crosses were viewed as private tragedies, to be coped with domestically, with the aid of a discreet family doctor and trusty retainers. An early British Freudian, David Eder (1866–1936), recalled addressing a paper in 1911 to the Neurological Section of the British Medical Association on a case of hysteria treated by Freud’s method. At the end of his talk, the entire audience, including the chairman, rose and walked out in icy silence. The distinguished psychiatrist, Charles Mercier (1852–1919), a one-time colleague of Hughlings Jackson, gloated in 1916 that ‘psychoanalysis is past its perihelion, and is rapidly retreating into the dark and silent depths from which it emerged. It is well that it should be systematically described before it goes to join pounded toads and sour milk in the limbo of discarded remedies.’
Nevertheless inroads were made, aided by the crisis in explanations produced by shellshock during the First World War. The main pioneering individual in spreading the gospel was Ernest Jones (1879–1958). A founder of the London Psycho-Analytical Society (1913), reborn in 1919 as the British Psycho-Analytical Society, the Welshman Jones became a close friend of Freud and later his biographer. In 1912, he brought out the first book published in England in this field: Papers on Psycho-Analysis. His ebullient personality, vanity and extraordinary energies made him a born proselytizer. Later, Melanie Klein (1882–1960) and Anna Freud (1895–1982) enriched the scene, Anna having fled to England with her father in 1938. Freudians and Kleinians crossed swords over the interpretation of the infant unconscious; in London the Tavistock Clinic promoted the cause of psychotherapy, especially for children and families.
Perhaps shocked into modernity by the horrors of the war, the lay public became increasingly receptive. During the 1910s publications touching on psychoanalytic matters multiplied. In 1919, it was reported that interest in psychoanalysis was ‘now growing by leaps and bounds’; Freud’s books, grumbled The Saturday Review, were now ‘discussed over the soup with the latest play or novel’, while ‘every moderately well-informed person now knows something about Jung and Freud.’ On the whole, the press preferred Jung’s theories to Freud’s, finding the latter’s sexual views ‘repugnant to our moral sense’.
In due course Freudian notions gained ground – it became reputable in the inter-war years and conventional by the 1950s to accept that ordinary people might have ‘complexes’, and that neuroses ran like a watermark through the population at large: juvenile delinquency, housewife blues, family conflicts, alcoholism, adjustment problems, generational tensions and so forth. By the 1950s, popular psychological culture had created new and exciting images like the ‘crazy mixed-up kid’ – the more modern and down-market version of the melancholy poet or Romantic genius.
The psychiatrization of everything occurred first in the United States. It was a trend deliciously mocked in some of Stephen Sondheim’s lyrics to West Side Story (1957), in which the crazy mixed-up young New Yorkers taunt the police officer on the warpath:
Officer Krupke, you’re really a square;
This boy don’t need a judge, he needs an analyst’s care!
It’s just his neurosis that oughta be curbed,
He’s psychologic’ly disturbed.
We’re disturbed, we’re disturbed, we’re the most disturbed,
Like we’re psychologic’ly disturbed.
DESPERATE REMEDIES
The phenomenal growth of psychoanalysis overshadowed developments in the medical treatment of mental problems. The consequences of bacterial infections for brain function were identified, beginning with syphilis, and Julius Wagner-Jauregg (1857–1940) discovered that counter-infection with malaria was effective against general paresis of the insane. Medicine developed a variety of techniques for treating neuroses and psychoses – some successful, others dangerous failures. Insulin was employed against schizophrenia; though hazardous, insulin shock brought some positive results. While working with epileptics, Ladislaus Joseph Meduna (1896–1965) developed a shock treatment in which camphor was the convulsive agent, producing convulsions so violent that patients suffered broken bones. In 1938, working at a neuropsychiatric clinic in Rome, Ugo Cerletti (1877–1963) began to use electric shocks (ECT) for alleviating symptoms in severe depression. Psychosurgery too enjoyed popularity in the 1930s and 1940s. At Lisbon University, Egas Moniz (1874–1955) claimed that obsessive and melancholic cases could be improved by frontal leucotomy, surgical severance of the connections of the frontal lobes with the rest of the brain. Although he received wide acclaim and a Nobel Prize in 1949, Moniz was also attacked for altering the mental states of individuals.
Such developments – violent, invasive and frankly experimental – signal the desperation of well-meaning psychiatrists to do something for the masses of forgotten patients in asylums. Equally, they reflect the powerlessness of those patients in the face of reckless doctors, and the ease with which they became experimental fodder.
MODERN DEVELOPMENTS
From the mid twentieth century expectations rose for psychopharmacology. The first psychotropic drug, lithium, was used to manage manic depression in 1949. Antipsychotic and antidepressant compounds, notably the phenothiazines (Largactil – referred to by its critics as the ‘liquid cosh’) and imipramine, were developed during the 1950s. The prominent British psychiatrist William Sargant (1907–88) saw in the new drugs a release from the shadowland of the asylum and the folly of Freudianism. Drugs, he said, would enable doctors to ‘cut the cackle’; he predicted that the new psychotropic drugs would eliminate the problem of mental illness by the 1990s. Psychopharmacology certainly brought a new self-confidence and therapeutic optimism to the psychiatric profession. It promised a relatively safe, cost-effective method of alleviating mental suffering without recourse to lengthy hospital stays, psychoanalysis or irreversible surgery. It also restored psychiatry’s wishful identity as a ‘hard’ science.
Growing reliance on psychotropic drugs seemed one solution to the problem of the asylum. The more forward-looking psychiatrists of the inter-war years in Europe and America had grown critical of the old mental hospitals which haunted the landscape. Not least, their putatively clear segregation of the sane from the mad no longer seemed to make epidemiological sense. Psychiatrists increasingly voiced the view that the bulk of mental disorders were to be found not among asylum populations but in the community at large. The emphasis was falling upon neuroses not severe enough to warrant hospitalization or certification but considered to be endemic. ‘During the last 30 years,’ Willy Mayer-Gross tellingly noted, ‘the interest of psychiatry has shifted from the major psychoses, statistically relatively rare occurrences, to the milder and borderline cases, the minor deviations from the normal average.’ Psychiatry needed to become properly informed about apparently new patterns of morbidity.
Psychiatric attention was thereby being extended to ‘milder’ and ‘borderline’ cases, and mental abnormality began to be seen as part of normal variability. A new social psychiatry was being formulated, whose purview extended over an entire populace. The implied blurring of the polar distinction between sane and insane was to have momentous practical consequences for custody and care. As emphasis tilted from institutional provision per se to the clinical needs of the patient, it pointed in the direction of the ‘unlocked door’, prompting a growth in outpatients’ clinics, psychiatric day hospitals and regular visiting, and encouraging treatments which emphasized discharge. These developments presaged the waning of the asylum era.
Yet that passage was not smooth. Some hoped to effect a modernization of the mental hospital from within, and from the late 1940s a few English mental hospitals unlocked their doors. ‘Therapeutic communities’ were set up – distinct units of up to a hundred patients within the larger hospitals – in which physicians and patients cooperated to create more positive therapeutic environments, eroding authoritarian hierarchies between staff and inmates. A therapeutic community designed for the rehabilitation of those with personality disorders was established at Belmont Hospital by Maxwell Jones (1907–90), ushering in ideas of shared decision-making and an increasingly relaxed atmosphere.
Others insisted on far more drastic measures. This led to the 1960s’ antipsychiatry movement, with three main beliefs: mental illness was not an objective behavioural or biochemical phenomenon but a label; madness had a truth of its own; and, under the right circumstances, psychotic madness could be a healing process and should not be pharmacologically suppressed.
The most charismatic proponent of antipsychiatry was Ronald Laing (1927–89), a Scottish psychiatrist influenced by existential philosophers. In 1965 Laing established an antipsychiatric community (‘hospital’ was deliberately avoided) at Kingsley Hall in a working-class London neighbourhood, where patients and psychiatrists lived under the same roof. Psychiatrists were said to ‘assist’ patients in living through the full-scale regression entailed by schizophrenia. An attractive writer, Laing in particular won a following among the liberal intelligentsia at the time of the counter-culture and student protests against the Vietnam War. Films like Family Life (1971) and One Flew Over the Cuckoo’s Nest (1975) mobilized public feeling against the policing role of psychiatry and the ageing asylums.
The movement had many centres. Its chief American spokesman was Thomas Szasz (b. 1920); in France Michel Foucault (1926–84) lent his support. Antipsychiatry gave impetus to the deinstitutionalization of the insane in the late 1970s and the 1980s. At the same time, and from a different angle, politicians took up the cause of community care, keen to reduce costly psychiatric beds and to phase out mental hospitals. Enoch Powell, British minister of health, announced in 1961 that the old mental hospitals (‘isolated, majestic, imperious, brooded over by the gigantic watertower and chimney combined, rising unmistakable and daunting out of the countryside’) should be scaled down or closed down, and that those requiring in-patient treatment should be treated by local hospitals.
The question of the reality of psychiatric diseases remains problematic. With each new edition of the Diagnostic and Statistical Manual of the American Psychiatric Association, some disorders disappear and others appear, including ‘post-traumatic stress disorder’ and ‘attention deficit disorder’. There are now tens of thousands of people in the United States claiming that parental abuse in childhood has produced major psychological trauma, the repressed memory of which is being recovered thanks to the ministrations of their psychoanalysts. Sceptics claim that such ‘recovered memories’ are false, pure artefacts created by the suggestion of psychotherapists and licensed by fashionable diagnostics.
The antipsychiatry movement has now largely spent its force. But as the end of the twentieth century approaches, psychiatry lacks unity and remains hostage to the mind-body problem, buffeted back and forth between psychological and physical definitions of its object and its techniques. Drug treatments have become entrenched. During the 1960s, the tranquillizer Valium (diazepam, introduced in 1963) became the most widely prescribed drug in the world; also important were Miltown (1955) and Librium (1960). For a decade and more, central nervous system drugs have been the leading class of drugs sold domestically by American manufacturers, usually accounting for about one quarter of all sales. Such preparations, developed in the last three decades, have permitted treatment of the mentally disturbed on an outpatient basis and substantially reduced the numbers of institutionalized mental patients. The most explosive growth, however, has been in psychotherapy, where techniques involving group sessions, family therapy, sensitivity training, consciousness-raising, game- and role-playing, and behaviour modification through stimulus and reinforcement have transformed treatment of mental problems.
During the last twenty years, growing cynicism, patients’ rights lobbies, the exposure of administrative abuses and similar scandals, feminism, and other critical currents have dramatically questioned and undermined the standing of orthodox professional psychiatric services and of Freudian psychoanalysis. The psychiatric profession has become a sitting target for the press and politicians.
At the beginning of the twentieth century the British Medical Journal was upbeat: ‘In no department of medicine, perhaps, is the contrast between the knowledge and practice in 1800 and the knowledge and practice in 1900 so great as in the department that deals with insanity.’ Not so the Journal of Mental Science. Pointing in the same year to the ‘apparent inefficacy of medicine in the cure of insanity’, it took the gloomy view: ‘Though medical science has made great advances during the nineteenth century, our knowledge of the mental functions of the brain is still comparatively obscure.’ The Lancet managed to look in both directions at once, editorializing in 1913 that only then and belatedly was ‘British psychiatry beginning to awake from its lethargy’.
The close of the twentieth century may bring the same variety of diagnoses of developments within psychiatry. For some, it has been the century when the true dynamics of the mind have been revealed by Freud; for others, psychoanalysis was a huge sideshow, and at last the organic understanding of the brain is resulting in effective drugs, including, most recently, the anti-depressant Prozac. Within five years of its introduction, eight million people had taken Prozac, which was supposed to make people feel ‘better than well’.
The trump card of a new science of the brain has often enough been played, unsuccessfully, in the history of the discipline, and the claims of brain scientists to understand consciousness and its terrors have been shown to be shallow, indeed deluded. Whether civilization’s treatment of the mentally ill has become more humane in a century which gassed to death tens of thousands of schizophrenics is a question which permits no comforting answers about rationality and sanity.
* A sometime Glasgow millhand, David Livingstone (1813–73) decided to become a medical missionary, qualified in 1840 and offered his services to the London Missionary Society. His exploration of the Zambesi, discovery of the Victoria Falls, efforts to find the source of the Nile, disappearance in the heart of Africa, and meeting with Stanley formed one of the great Victorian adventure stories. His Missionary Travels and Researches in South Africa (1857) included an account of the tsetse fly, and of the horse and cattle diseases resulting from its bite, although the trypanosome carried by the fly was as yet undiscovered. He administered arsenic to horses as a nagana remedy.
* In his The Assault on Truth (1983), Jeffrey Masson has argued that Freud got it right first time; it was the abandonment of the seduction theory that was the error, a betrayal of the truth and of his patients. This betrayal was in part due to the death of Freud’s father in October 1896: thenceforth Papa Sigmund stood in the father’s shoes; psychoanalysis was a cover-up.