AFTER LISTER
MEDICINE HABITUALLY EDGES FORWARD with minor improvements; in surgery, new technical know-how brought Plaster of Paris bandages, safer artery clips, flexible rubber tubes, rubber gloves and so forth, all doing their bit in the latter part of the nineteenth century. But blazing changes were also occurring.
‘Operating theatres which resembled shambles in 1860’, recalled one surgeon fifty years later, ‘are replaced by rooms of spotless purity containing cantilevered metal furniture and ingenious electric lights. All concerned in the operation are clothed from nose-tip to toe-tip in sterilised linen gowns, and their hands are covered with sterilised rubber gloves.’ Sepsis had ceased to have surgery by the throat. ‘In my undergraduate days every surgical case got erysipelas,’ Dr George Dock (1860–1951) explained to his students at the University of Michigan in 1904. ‘If a man came in with a compound fracture, he got erysipelas. It was considered part of hospital life.’ By then that was becoming a thing of the past, and the combination of anaesthesia and asepsis offered the unprecedented prospect of safe and virtually unlimited surgical intervention. For thousands of years surgery had been a business of boils and broken bones, hernias, venesection and the occasional amputation; ‘an operation on the heart would be a prostitution of surgery,’ declared the young Theodor Billroth (1829–94). Yet this high-minded caution rapidly became old-fashioned.
Contrast the operations undertaken by Joseph Lister between 1877 and 1893 with those of his protégé, William Watson Cheyne (1852–1932). About 60 per cent of Lister’s practice concerned accident and orthopaedic cases, with tubercular conditions prominent, and his surgical repertoire focused on bones, joints and superficial tumours. Though his results were excellent, his operations were traditional. Up to 1893, Lister attempted no abdominal surgery and Cheyne undertook just one bowel operation. In the new century, this branch of Cheyne’s practice rose to around three in ten cases; opening the abdomen became the bread-and-butter of surgery.
Surgeons began attempting operations on organs and lesions hitherto taboo: bowel inflammations, the pancreas, the liver and biliary tracts, peptic ulcers, gallstones, and a range of cancers – and also knife and gunshot wounds in the abdomen. One consequence was that surgery’s profile rose. A sign of the times was the Mayo Clinic at Rochester, Minnesota, founded by the brothers William (1861–1939) and Charles Mayo (1865–1939), masters respectively of abdominal and thyroid surgery. Sons of a rugged individualist surgeon who had prided himself on his skill in the removal of ovarian tumours, the Mayos followed in their father’s footsteps and turned the local Minnesota hospital, St Mary’s, into a surgical powerhouse. In 1800 a big London hospital staged no more than two hundred operations a year; a century later the Mayos and their team were performing over three thousand, while 1924 saw their clinic logging a staggering 23,628 operations, with 60,063 patients on the books. Surgery had developed a scope and achieved a popularity hitherto quite unthinkable; the Mayos became household names and millionaires. The new surgery was accompanied by new procedures and the routinization of systematic testing: blood counts (important for the diagnosis of typhoid fever or the prognosis of pneumonia), urinalysis and, with the discovery of insulin, the measurement of urinary sugar.
The Columbus of the new surgical techniques was Theodor Billroth. Educated in Göttingen, Billroth was a man of many talents; he wrote a book called Wer ist musikalisch? (1895) [Who is Musical?] and was a good friend of Brahms, who dedicated two string quartets to him. In 1856 he was appointed assistant to the greatest German surgeon of the day, Bernhard von Langenbeck (1810–87) at the Berlin Charité hospital. Four years later he became professor in Zürich, then moved on to Vienna in 1867, where he capitalized on the splendid facilities at the General Hospital, modernized by Rokitansky and Skoda. Talented acolytes gathered around him at the ‘second Vienna school’.
An all-round scientific surgeon, Billroth moved easily from the bedside to the microscope in the laboratory and to the operating theatre. His classic Die allgemeine chirurgische Pathologie und Therapie (1863) [General Surgical Pathology and Therapy] derived indications for surgery from the underlying pathophysiology of wound-healing, regeneration, inflammation, haemorrhage, etc. The work established his reputation worldwide, running through sixteen editions and being translated into ten languages.
An innovator on many fronts, Billroth refined Listerian antisepsis and pioneered regular temperature measurement for post-operative control, drawing on Wunderlich’s works, which taught that temperature rise was a sign of complications. Superb technique and a dauntless temperament ensured him a leading role in developing new operations, and he was the first to have success with some: gastric resections, removal of the whole larynx, and the creation of detours around acute or chronic intestinal obstructions through the use of anastomoses (new channels and connexions – replumbing in effect) between parts of the digestive tract. Frankly experimental, his new methods sacrificed many lives but, as his practices became refined and post-operative care improved, mortality rates dipped.
With role-models like Billroth to follow, the ambitious surgeon was beguiled into believing that all manner of diseases could be cured or checked by chloroforming the patient and plying the knife and needle. The potential of surgery became almost a matter of faith: if patients failed to improve after an operation, didn’t this show that further lesions remained to be excised, yet more fixing-up to be done? Probing around in the thorax and the abdomen would and did unearth assorted anomalies, which were then deemed pathological and hence indications for further intervention. The body’s interior seemed an Africa in microcosm, that dark continent being opened up, mapped and transformed. Fame and fortune awaited the surgical pioneer who first laid the knife to some hitherto untouched part – perhaps he would be immortalized by an eponymous operation.
Some surgeons grew quite cavalier: the dazzling Scottish-born William Arbuthnot Lane (1856–1943) viewed the innards as little more than a problem in plumbing. He urged, for instance, colectomies – removal of lengths of the gut – to treat that favourite English malaise, constipation, or even as a prophylactic against ‘intestinal stasis’ and ‘autointoxication’ (the self-poisoning which he supposed resulted from the artificialities of modern civilization). In Lane’s pathological model, poisons were absorbed from a sluggish colon, producing pelvic pain and other symptoms. An overloaded colon might lead to ‘Lane’s kinks’, which he claimed to iron out. He was, significantly, in great demand; the sick were beginning to look to surgery as the new panacea.
Appendectomy, not only for the acute condition but for so-called ‘chronic’ appendicitis (the ‘grumbling appendix’), enjoyed a vogue in the 1920s and 1930s. The pathology of the condition had long been familiar, described post mortem in the sixteenth century by Berengario da Carpi and Jean Fernel. The condition had been diagnosed in a living patient in 1734 by Wilhelm Ballonius, and in 1812 James Parkinson (1775–1824) described how inflammation of the appendix preceded peritonitis. But though appendicitis thus came to be recognized as dangerous, pre-Lister treatments invariably and inevitably remained, in the time-honoured Hippocratic manner, medical: blood-letting, leeches and enemas.
Various surgeons have been credited with the first appendectomy; in England, Robert Lawson Tait (1845–99) performed the operation in 1880; another pioneer (1886) was Rudolf Ulrich Krönlein (1847–1910). The optimal surgical approach remained contested. On the whole, the British preferred to wait until the inflammation died down before operating and some surgeons, including Frederick Treves (1853–1923), even hesitated to remove the whole organ. He won eminence and a baronetcy when on 24 June 1902 he drained the appendix of King Edward VII, who had gone down with appendicitis shortly before he was due to be crowned.* Treves was a passionate champion of the new surgery. ‘It is less dangerous to leap from the Clifton Suspension Bridge’, he declared, ‘than to suffer from acute intestinal obstruction and decline operation.’ All manner of new operations were tried; some became established, others disappeared. Procedures were devised to fix abdominal organs found upon X-ray or exploratory operation to be ‘misplaced’ or ‘dropped’ (dropped organs were thought to cause neurosis and vague, unlocalized abdominal pains). ‘Hitching up the kidneys’ was recommended as a fix for back pain; fragments of vertebrae were removed for the same reason. And countless tonsillectomies were performed on small children; it was an easy, relatively safe, and lucrative operation. In 1925 a staggering 25.5 per cent of patients admitted to the Pennsylvania Hospital were allegedly suffering from diseases of the tonsils – tonsillectomy became for a while easily the most common hospital procedure.
Tonsils – sometimes viewed as an organ that evolution had rendered functionless – were removed not only because they were ulcerated but because they seemed enlarged, and doctors and parents were convinced that surgical intervention would solve the problem of never-ending childhood infections. A study undertaken in New York in 1934 revealed that of 1000 eleven-year-olds in state schools, 611 had already had their tonsils removed. The other 389 were evaluated by a panel of doctors, and the procedure was recommended for 174 more. Different physicians were brought in to inspect the remaining 215 and they chose to operate on 99 more, leaving 116 from the original 1000 ‘surgery-free’. These 116 were evaluated by a third panel of doctors, who recommended a further 51 for surgery. In this ‘can-do, will-do’ atmosphere, surgical operations took on the quality of a cure-all. Since it is now known that tonsils form part of the immune system, removal must have been positively damaging – and over eighty children died annually in England in the 1930s because of post-operative complications following tonsillectomies.
Hysterectomies also had a vogue; removal became popular in the belief that it would deal not only with the assumed physical pathology but with emotional and psychological difficulties. Another fashionable inter-war diagnosis was focal sepsis: the notion that pockets of pus were lurking in the sickly body, causing infections and requiring surgical extraction. It thus became routine to extract teeth, often all the teeth of patients in psychiatric hospitals. Other surgical fads included operations to remove sympathetic nerves, so as to end spasm of the gut or artery.
Recourse to the knife became almost a reflex. Its appeal lay in the fact that it was new, quick and, supposedly, painless and safe; who would refuse a short-cut to end suffering? (Indeed a class of patients emerged suffering from an addiction to surgery: Münchausen’s syndrome; no label seems to have been devised, however, for surgeons addicted to surgery.) The myth of the surgeon as hero blossomed; at long last the craft had thrown off its associations with pain, blood and butchery and become linked by the public and the press with science and life-saving. The truth was rather more complicated: up to the 1940s appendectomy, for example, had a one in five mortality, and post-operative complications remain to this day one of the main sources of iatrogenic disorders and hospital-caught infections.
NEW OPERATIONS
Surgery was extended to many familiar grave conditions and to organs hitherto untouched. Cholecystotomy, removing gallstones, was first performed in 1867 by the American John S. Bobbs (1809–70). Removal of the gall bladder (cholecystectomy) also became routine, being first attempted in 1882 by Carl Langenbuch (1846–1901) in Berlin. In 1884, J. Knowsley Thornton (1845–1904) tried something different: finding two big stones inside a patient’s duct, he crushed them using a rubber-jawed pincer designed for nose-polyps. Two months later, he opened the duct of another patient and took out gallstones – the first choledocholithotomy. Confidence grew in opening up the gall bladder, and eventually surgery was recommended even for general indications, such as pain in the gall bladder area; biliary disease began to slide from the physicians’ hold into the surgeons’ grasp. (In a later technological age, gallstones were fragmented by the shock-wave method, a technique also used for kidney stones.)
Kidney operations had been extremely rare and undertaken only in extremis. The first known nephrectomy (surgical removal of a kidney) was done in 1861, by E. B. Wolcott (1804–80) of Milwaukee, who removed a large kidney tumour, though his patient died two weeks later. Success dates from 1870 with the German surgeon Gustav Simon (1824–76), though the mortality rate was initially high.
Surgery also became viable for ulcers. An operation for chronic peptic ulcers was described in 1881 by another of Billroth’s protégés, Anton Wölfler (1850–1917); it was first performed for duodenal ulcers in 1892 by Eugene Doyen (1859–1916) in Paris. An alternative was to remove much of the stomach (gastric resection), so as to reduce secretion of hydrochloric acid. Billroth attempted this in January 1881 on a woman named Thérèse Heller; the operation took ninety minutes and she recovered well, only to die of liver cancer four months later. His method, connecting the resected stomach directly to the duodenum, became known as ‘Billroth I’.
An acute perforated ulcer spells danger: the hole must be sutured to prevent fatal peritonitis. ‘Every doctor, faced with a perforated ulcer of the stomach or intestine’, wrote one of Billroth’s devotees, ‘must consider opening the abdomen, sewing up the hole and averting a possible or actual inflammation by carefully cleansing the abdominal cavity.’ This was first done on 19 May 1892. A man with stomach ulcers suddenly went down with peritonitis; Ludwig Heusner (1846–1916) was summoned to his home where he accomplished the operation in two and a half hours; a similar operation was performed the next year in England by Hastings Gilford (1861–1941). By the mid nineties, the suture of perforations of the stomach and duodenum had become part of the repertoire.
Other conditions were being subjected to surgery for the first time, including certain cancers. Breast cancer operations date back to antiquity: Aetius of Amida had emphasized that the knife should cut healthy tissue around a tumour and that a cauterizing-iron should stanch the blood. The Florentine Angelo Nannoni (1715–90) published a book in 1746 on the surgical treatment of breast cancer; in his Traité des maladies chirurgicales et des opérations qui leur conviennent (1774–6) [Treatise on Surgical Ailments and Operations], the French surgeon Jean-Louis Petit advocated radical removal of the breast, muscle and lymph nodes. The mastectomy Larrey performed on Fanny Burney in 1810 was probably along these lines.
Radical mastectomy was advocated by Sir Astley Cooper. He wrote in his Lectures on the Principles and Practice of Surgery (1824–7) that it may ‘be sometimes necessary to remove the whole breast where much is apparently contaminated, for there is more generally diseased than perceived and it is best not to leave any small portion of it as tubercles reappear in them. If a gland in the axilla [armpit] be enlarged, it should be removed and with it all the intervening cellular substance.’ The operation was attempted by Charles Hewitt Moore (1821–70) at the Middlesex Hospital in London, which had a special cancer ward, but without success. In 1879 Billroth reported that out of 143 women who had undergone it, a mere thirty-five survived for any length of time (surgeons may have felt less uncomfortable about practising their techniques on such desperate cases). Billroth did not specialize in breast cancer, but he did envisage surgery as bringing hope for cancer sufferers. He tackled stomach cancer, for instance, with ‘Billroth II’, closing the top of the duodenum and connecting the resected stomach with the jejunum. Such methods proved popular for stomach cancer, as well as for duodenal ulcers.
Among Billroth’s students in Vienna was the American William Stewart Halsted (1852–1922). He quickly became one of the top young surgeons in New York until, in 1884, he and some colleagues began to experiment with cocaine. He became addicted, his work deteriorated, and his friend William Welch arranged for him to go on a detoxifying cruise before joining him at the Johns Hopkins Medical School, where Welch was professor of pathology. Within six months, Halsted was in a psychiatric hospital, having switched from cocaine to morphine in hopes of beating his addiction.
At Baltimore he enjoyed a remarkable career spanning thirty-three years, training the best surgeons of the next generation, including the neuro-surgeon Harvey Cushing. Halsted ushered in a new ‘surgery of safety’, aiming to injure tissues no more than was unavoidable, but his operations were radical. He evolved striking advances in bile duct, intestine and thyroid gland surgery, and the Halsted II procedure for repairing inguinal hernias. But he was best known for his cancer treatments. Cognizant of how cancer of the breast spread through the lymph system, he advocated radical mastectomy, in which the breast, all the lymph glands in the nearest armpit and the chest wall muscles were removed. Introduced in the 1890s, this was the treatment of choice for breast cancer for over half a century. Its underlying premise was that cancer remained in the breast for some time before spreading, and that the lymphatic system was relatively separate from blood vessels, so removal of lymph nodes would prevent the passage of cancer cells.
In 1937, however, the British Medical Journal reported that an identical percentage of women survived after less severe surgery, with Geoffrey Keynes (1887–1982) of St Bartholomew’s Hospital claiming equally good results with the simple removal of breast tumours (‘lumpectomies’), followed by radiation. By the 1970s it was accepted that tumours could not be cut out by their roots like weeds: by the time breast cancer was diagnosed, most patients had cancer elsewhere in their bodies. The lymphatic system and bloodstream were found to be utterly integrated, so the removal of lymph nodes to prevent cancer spread was mistaken. Many decades of experience offered little cause for satisfaction. In 1975, a World Health Organization survey showed that, despite the growing sophistication of surgery and other interventions, deaths from breast cancer had failed to decline since 1900. Surgery wasn’t the answer.
Operations were also developed for other forms of cancer. Lung resections were begun in 1933 by Evarts Graham (1883–1957) at Washington University in St Louis. Lobectomy (removing a lung lobe), segmental resection (removing only part of a lobe) and pulmectomy (taking out the whole lung) were also tackled, as was prostate cancer. In 1889 a Leeds surgeon, A. F. McGill (1850–90), reported thirty-seven prostatectomies with good results, and in 1890 William Belfield (1856–1929) of Chicago reported eighty cases with a mortality of 14 per cent – considered excellent at the time. Surgeons were thus boldly going into the body, where none had gone before.
SEEING INTO INNER SPACE
Such interventions marched forward in line with other new probes and advances in diagnostic techniques. First had come the stethoscope; Helmholtz had developed the ophthalmoscope (1851), and in 1868 the oesophagoscope was produced by John Bevan, enabling foreign objects in the gullet to be located and removed. That year, too, the first gastroscopy was done by an assistant to Adolf Kussmaul (1822–1902), who engaged a professional sword-swallower to swallow a pipe, nearly half a metre long, equipped with a lamp and lenses. An American surgeon, Howard Kelly (1858–1943), created the equivalent rectoscope in 1895, which was soon adapted by gynaecologists to explore the abdomen (laparoscopy).
Early gastroscopes were stiff, causing soreness and injury, but from the 1930s flexible versions were devised, making use of glass-fibre optics, which directed light through a tube by total internal reflection. Tiny cameras were later incorporated into such endoscopic devices, as well as tongs to take biopsy samples and lasers to stop bleeding.
The most decisive new windows into the body, however, were X-rays. On 8 November 1895, Wilhelm Conrad Röntgen (1845–1923), a physics professor at Würzburg interested in cathode rays (the glow radiating from a wire sealed in an evacuated Crookes tube when a high voltage was applied), hit upon a new phenomenon while testing a tube. Having darkened his laboratory and wrapped the tube in black cardboard to screen out the light it emitted, he was surprised to find that, on switching on the current, a fluorescent screen coated with barium platinocyanide glowed a faint green colour. The tube was obviously giving off something else besides the familiar cathode rays; invisible rays were passing through the cardboard cover and bombarding the screen.
If these rays could penetrate card, what else might they pass through? Experimenting with playing-cards, a book, some wood, hard rubber and assorted metal sheets, Röntgen found that only lead barred the rays totally. He held some lead between the tube and the screen; its shadow was visible on the screen and so was the outline of the bones of his hand. Aware that cathode rays darkened photographic plates, he asked his wife to hold one in her hand, while he beamed the new rays onto it; her hand was distinctly outlined, and its bones and rings highlighted against the silhouette of the surrounding flesh.
Röntgen announced his discovery in Eine neue Art von Strahlen (1896) [A New Kind of Ray]. News of the rays (which he styled X-rays because their nature was unknown) made headlines, and coverage peaked in 1901 when he received the first Nobel Prize for Physics. Public excitement and anxiety followed: to capitalize on fears of Peeping Toms with ‘X-ray eyes’ peering through women’s underclothes, one enterprising firm advertised X-ray-proof knickers.
The diagnostic possibilities were quickly exploited. As early as 7 January 1896, a radiograph was taken for clinical purposes. Though initially only bones were examined, soon other things, like gallstones, were seen too, and the rays were valuable in diagnosing fractures and locating foreign bodies. In the Spanish-American War, an X-ray machine was used to scan for bullets, though its utility was reduced by the long exposure times (up to thirty-five minutes!). In December 1896, Walter Cannon at Harvard found that if laboratory animals were fed bismuth salts as a diagnostic meal the workings of their intestines could be observed on a fluorescent screen. By 1904 that technique was being applied to humans, with the substitution of the safer barium sulphate. ‘Barium swallows’ became routine, until largely superseded by endoscopy.
Early chest radiographs were unsatisfactory, since exposure times needed to be long (initially at least twenty minutes) and contrasts were poor – one of the reasons why, despite public fear over the scourge of tuberculosis, mass chest X-ray screening was not developed until the 1920s. What the stethoscope had been to the nineteenth century, the X-ray became for the twentieth: an impressive diagnostic tool and a symbol of medical power.
Not merely diagnostically valuable, X-rays were regarded as therapeutically promising. Röntgen found that prolonged exposure produced skin burns and ulcerations, hair loss and dermatitis, though these effects were turned to therapeutic account by physicians who used them to burn off moles or treat skin conditions ranging from acne to lupus (skin tuberculosis). The properties of other sorts of rays were also being touted. The Danish physician Niels Finsen (1860–1904) suggested that ultraviolet rays were bactericidal, and so could be useful against conditions like lupus. Many early hospital radiology departments provided both radiation and ultraviolet light therapy, and Finsen’s researches stimulated high-altitude tuberculosis sanatoria and inspired the unfortunate belief that sun-tans were healthy. The dangers of X-rays were also recognized; a relationship between exposure and skin cancer was reported as early as 1902. Benefits and costs have always been precariously balanced on the radiation research accounts sheet.
A few weeks after Röntgen’s discovery, the French scientist Henri Becquerel (1852–1908), in the course of setting up an experiment to investigate the X-ray potential of uranium salts, placed some salts over a photographic plate protected by aluminium; he then developed the plate without carrying out the intended experiment and found, to his surprise, that it had darkened at the point where the salts had been. Hence, uranium emitted rays which, like X-rays, penetrated matter. He published his findings, but then lost interest. In 1897, Marie Curie (1867–1934) chose uranium rays as the topic for her doctoral thesis.
Warsaw-born, Maria Sklodowska had left Poland in 1891 to study at the Sorbonne, and married a fellow-scientist Pierre Curie (1859–1906) in 1895. Studying uranium compounds, she noticed that pitchblende (uranium oxide ore occurring in tarry masses) was four times as active as uranium. Soon she was speculating upon the probable existence of a new radioactive element, which her husband joined her in the race to isolate. Laboriously they refined 100g of pitchblende and in July 1898 announced the discovery of the new element, ‘polonium’, called after Marie’s beloved Poland.
By November it was apparent that the refined pitchblende liquid, left over after the polonium had been removed, was still highly radioactive: it had to contain another undiscovered element. Further refinement produced a substance 900 times more radioactive than uranium: on 26 December 1898, they announced the discovery of radium. Both Mme Curie and Becquerel had burned themselves accidentally when carrying radium phials in their pockets, and in 1901 Pierre did so deliberately by strapping some to his arm. Demonstrations in 1904 that radium rays destroyed diseased cells led to radiation treatments for cancer and other diseases. When in 1906 Pierre was knocked down and killed by a cart, Marie was offered his chair, becoming the first woman professor at the Sorbonne. In 1911, she was twice rewarded: she received her second Nobel Prize (her first had been in 1903), and the Sorbonne and the Pasteur Institute helped to fund her Radium Institute.
X-rays and radiation provoked immense interest among scientifically minded physicians. Before the First World War, various radium institutes sprang up, there were radiology journals and societies, and the new ‘miracle cures’ had been tried out for more than a hundred diseases, most notably cancer. Scores of proprietary cures, some sponsored by renowned doctors, traded on the conjectured therapeutic powers of radioactivity. Therapeutic enthusiasm outran caution, and the dangers of radiotherapy were determined at great cost to patients and radiographers alike (many of the latter lost their lives). Little thought was given to the long-term consequences of repeated exposure to heavy radiation – for instance, among technicians involved in handling X-ray machines. Fluoroscopes went on being employed quite casually in shoe-shops and X-rays in ante-natal clinics. As late as the 1940s benign menstrual bleeding was sometimes treated with X-rays and radium – a therapy which caused cervical cancer.
Exposure did not become a subject of deep and lasting public concern until the consequences of the devastation of Hiroshima and Nagasaki by atom bombs at the close of World War II became plain. Modern medicine has sometimes been so engrossed in its healing mission as to be cavalier about evaluating safety, benefits and costs.
Other roles for rays were discovered. Scientists found that electromagnetic waves heated up tissues which absorbed them. In 1917 Albert Einstein (1879–1955) set out the principle of stimulated emission, which underlies the ‘laser’ (an acronym for Light Amplification by Stimulated Emission of Radiation), and in due course lasers were harnessed for medical use. Their high-energy waves could be focused to a microscopic point, were sterile, and caused little bleeding or scarring. Such optical ‘knives’ came to be deployed to weld a detached retina, burn through a blocked coronary artery, or even to efface an unwanted tattoo.
Other technical advances of great diagnostic importance followed from breakthroughs in microscopy. Microscopes were first used directly on the body in 1899, when the cornea (the transparent covering on the front of the eye) was investigated with a large-field stereo-microscope. In time this led to microsurgery: in 1921, the Scandinavian ear specialist, C. O. Nylen (b. 1892), performed an operation using a monocular microscope, and operating under microscopes soon became standard practice. What could be seen was, however, limited: since the wavelength of light is about one-thousandth of a millimetre, viruses would not show up under a light microscope.
In 1925, the London microscopist Joseph Barnard developed the ultraviolet microscope, which achieved magnifications of up to 2500 and allowed the larger viruses to be seen. Meanwhile, it became known that electrons travelled with a wave motion like that of light, but with wavelengths some 100,000 times shorter. Objects far smaller would thus be visible through an electron microscope. The first was developed by a Belgian, L. L. Marton (1901–79). By 1934, researchers had achieved the same magnification as with the best light microscopes; within three years, objects could be magnified 7000 times; and, by 1946, over 200,000 times with electron microscopes. Many medically significant aspects of cell structure were revealed: macrophages tangled with asbestos fibres, synapses between nerves, histamine-releasing granules, and so forth. In the 1970s scanning electron microscopes also became important.
Another development of immediate diagnostic significance was ultrasound, for assessing foetal progress through pregnancy. Developed in the 1950s by Ian Donald (1910–87), professor of midwifery in Glasgow, this drew on the naval echo-sounding technique known as sonar (Sound Navigation and Ranging). When subjected to an electric charge certain crystals emit sound waves at frequencies too high to be heard by the human ear; these ultrasonic waves travel through water, sending back echoes when they encounter a solid object. Distance can be calculated from the time-lapse.
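The arithmetic behind echo-ranging is simple enough to sketch in a few lines. The figures used below – a nominal speed of sound in soft tissue of about 1,540 metres per second and a 100-microsecond echo – are illustrative assumptions, not values given in the text:

```python
# Illustrative echo-ranging arithmetic (the principle behind sonar and
# diagnostic ultrasound): a pulse travels out and back, so the distance to
# the reflector is half the round-trip path.  The speed of sound in soft
# tissue (~1540 m/s) is an assumed, typical figure.

SPEED_IN_TISSUE = 1540.0  # metres per second (assumed)

def echo_depth_cm(round_trip_seconds: float) -> float:
    """Depth of the reflecting structure, in centimetres."""
    return SPEED_IN_TISSUE * round_trip_seconds / 2 * 100

# An echo returning after 100 microseconds puts the reflector about 7.7 cm deep.
print(f"{echo_depth_cm(100e-6):.1f} cm")
```

The same relation, with the speed of sound in sea water, underlies naval sonar.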
Applying this principle to the body, Donald initially concentrated on showing how different classes of abdominal tumours give off distinct echoes, diagnosing them with a view to surgery; but by 1957 he was using ultrasound to diagnose foetal disorders, later applying it to establish pregnancy itself. Experience allayed fears that ultrasound could prove harmful to the foetus in the manner of X-rays, and foetuses became minutely monitored for abnormalities. A similar diagnostic technique made use of infra-red radiation – heat – rather than sound waves. Different body parts emit varying heat patterns, measurable by the intensity of the infra-red waves they give off. These may be analysed to identify abnormalities: cancerous tumours, for example, show up as ‘hot spots’.
In 1967, Godfrey Hounsfield (b. 1919), an engineer and computer expert working for the British company EMI, had the idea of developing a system to build up a three-dimensional body image. Computerized axial tomography (CAT) transmits fine X-rays through the patient to produce detailed cross-sections, which are computer processed to create a three-dimensional picture whose shading depends on tissue density – more compact tissues absorb more of the X-ray beam. Refinements enabled the computer to colour the scan.
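Why each reading reflects the density of the tissue crossed can be shown with a toy calculation; the attenuation figures below are invented for the purpose, and the sketch illustrates only the single-beam principle, not the reconstruction mathematics Hounsfield’s scanner actually performed:

```python
import math

# Toy illustration of the CT principle: the beam intensity reaching the
# detector falls off exponentially with the total attenuation accumulated
# along its path, so denser tissue means a weaker signal.  A scanner combines
# thousands of such readings, taken from many angles, to compute the density
# at each point of the cross-section.  The coefficients here are invented.

def detector_reading(i0: float, segments: list[tuple[float, float]]) -> float:
    """segments: (attenuation per cm, thickness in cm) for each tissue crossed."""
    total_attenuation = sum(mu * thickness for mu, thickness in segments)
    return i0 * math.exp(-total_attenuation)

# A beam crossing 9 cm of soft tissue and 1 cm of bone emerges much weakened.
print(detector_reading(1.0, [(0.2, 9.0), (0.5, 1.0)]))   # roughly 0.10
```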
In what came to be called ‘imaging’, the CAT (or CT) scanner led to the PETT (positron emission transaxial tomography) scanner, used to diagnose and monitor brain disorders. Dispensing with X-ray beams, this relies instead on radioactive emissions from within the body and enables doctors to study brain activity. Patients are injected with radioactive glucose, and distinct brain areas soak up different amounts depending on their activity-level. Analysis allows identification of the tell-tale patterns diagnostic of brain disorder in cases of strokes, psychiatric conditions and the like.
A still more advanced technique of making the body transparent is magnetic resonance imaging (MRI), which exploits the fact that hydrogen nuclei, aligned in a strong magnetic field, resonate when pulsed with radio waves. Like CAT, MRI displays three-dimensional body images on a screen; its main advantage is that it does not involve ionizing radiation. Allowing body chemistry to be studied while physiological events are actually taking place, it can be used to monitor surgery involving organ transplants as well as the course of such diseases as muscular dystrophy. MRI scans are particularly useful for showing soft-tissue injuries within the spine, such as a prolapsed disc.
NEW SURGICAL FIELDS
Stimulated by technical innovations and driven by outside pressures, not least the appalling wounds of two world wars, surgery moved stage-centre in the twentieth century. Initially its mission, like that of the Commandos, seemed to be to go in and destroy all threats, mainly with the knife. Surgeons focused on tumours and stenosis (constriction of vessels), especially in the digestive, respiratory and urogenital tracts, removing or relieving these by excision or incision, as in tracheotomy for tuberculosis or throat cancer. Abdominal surgery produced herniotomies, treatments for appendicitis and colon disorders, extirpation of cancer of the rectum, and so forth, in what has been called surgery’s heroic, even knife-happy, age. And if such invasions were at first frankly experimental and desperately hit-and-miss, in time, and at a cost in lives, routinization brought greater safety and reliability. All the cavities and organs of the body were conquered, and certain departments of surgery were utterly novel, for instance neurosurgery. The operating theatre became the high altar of the hospital, and the white-coated, masked and capped surgeon, so cool in an emergency, became the high priest of medicine in images which pervaded popular culture, TV soap operas and the press.
Encouraged by new techniques and intoxicated by their own rhetoric and even successes, the profession was liable to offer surgical fixes for everything. High on the list of diseases the new surgery was eager to cure was tuberculosis. Though dipping by the closing decades of the nineteenth century, TB mortality remained shockingly high. Certain breakthroughs had been achieved, including the identification of the bacillus, but all therapies had proved failures (including of course Koch’s much-feted tuberculin). No wonder the possibility of a surgical solution was prized. In 1921, Sir James Kingston Fowler (1852–1934) hailed the pneumothorax technique of collapsing and immobilizing the lung – resting it, so as to encourage the lesions to heal – as one of ‘two real advances in the treatment of pulmonary tuberculosis’ (the other he had in mind was the sanatorium).
In reality, the surgical handling of the disease was far less clear-cut; a multitude of technical problems had first to be surmounted. One obstacle facing this and similar thoracic operations was air pressure. This is normally low in the pleural cavities around the lungs, but opening the body lets in air, which causes the lungs to collapse, and breathing becomes impossible. Great ingenuity was needed to overcome such difficulties.
Convinced the only way to defeat TB was to operate, Carlo Forlanini (1847–1918) of Pavia attempted the first pneumothorax operation in 1888. The ‘cavern’ area could be made to rest, he believed, if the lung were collapsed. Forlanini injected an inert gas between the two layers of the pleura; the result was compression and lung collapse. Initially unsuccessful, this operation was worked upon by Ferdinand Sauerbruch (1875–1951), who had trained under Johann von Mikulicz in Breslau before eventually becoming professor at Berlin’s Charité Hospital. Sauerbruch, whose father had died of the disease, set about solving the problem of lung collapse on the opening of the thorax. He experimented on animals by enclosing the creature’s chest in a low-pressure cage, using gloves built into the cage wall, and developed ways of operating while the animal carried on breathing.
By 1904, a ‘negative pressure chamber’ had been built to hold a patient, a table and a full operating team. In this, Sauerbruch built up experience in thoracic surgery. By the 1920s, new and improved ways of establishing collapse had been developed; both physicians and surgeons had their part to play. A temporary collapse of the lung could be produced by nitrogen displacement (pneumothorax), which could be performed by physicians. Alternatively, a permanent collapse of the whole lung could be produced by cutting out ribs – the surgeon’s turf. Despite all the inventiveness and skill employed, pneumothorax proved of uncertain value, and it was abandoned after the Second World War.*
Brain surgery presented even more daunting problems. For thousands of years, no one had dared to operate on the brain, with the exception of trepanning the head (strictly speaking, skull not brain surgery). One reason was an almost total ignorance of how the brain worked and where lesions were likely to lie, beyond the guesswork of phrenology. Neurophysiological advances associated with the work of Jackson and Ferrier on epilepsy led to a more confident mapping of brain localization and better-grounded hopes of locating tumours.
Antiseptic techniques allowed surgeons to open up the skull to expose the brain. In 1876 William Macewen (1848–1924), who had studied under Lister, diagnosed and localized a cerebral abscess. His request to operate was refused by the patient’s family, but after death he was allowed to open the cranium, and the abscess was found as diagnosed. Three years later Macewen successfully removed a funguslike tumour of the dura (a meningioma). His Pyogenic Infective Diseases of the Brain and Spinal Cord (1893) chronicled ten years’ work: he had operated on seventy-four patients with intracranial infections, and sixty-three of them had improved.
In 1884, a patient was sent to the Hospital for Epilepsy and Paralysis (Queen Square Hospital) in London with a brain tumour diagnosed between the frontal and parietal lobes. When Lister’s nephew, Rickman John Godlee (1849–1925), operated, a walnut-sized tumour was found exactly as predicted. (The patient, alas, died from complications.) The real trail-blazer in brain surgery was Victor Horsley (1857–1916), at Queen Square, who turned himself into the world’s first specialist neurosurgeon, writing about injuries and diseases of the spinal cord and brain. Another was Vilhelm Magnus (1871–1929) in Norway, who first operated on the brain in 1903, removing a tumour from an epileptic. Some twenty years later, he reported more than a hundred brain-tumour operations with a mortality rate of only 8 per cent.*
In the United States the field leader was Harvey Cushing (1869–1939), who learned his technique from Halsted; during the course of his career he removed more than 2000 brain tumours. After studying in England with Sherrington, Cushing returned to Johns Hopkins in 1901, became associate professor of surgery in 1903, and then spent most of his career at Harvard. He pioneered the operation for trigeminal neuralgia (tic douloureux).
Cushing favoured the gentle handling of tumours and tissue, using tiny silver clips at bleeding points in the brain to achieve bloodless operations. Brain surgery mortalities had averaged about 40 per cent; by 1915 Cushing had removed 130 tumours with a mortality of just 8 per cent. His work put brain surgery on the map, even if as late as 1932 a textbook was cautioning that no more than 7 per cent of brain tumours could be removed.
THE HEART
The success of the new surgery was chequered. New operations came thick and fast, specialization progressed, the operating theatre became the most prestigious site of medical practice, special clinics were set up for urological, neurological, thoracic, orthopaedic and paediatric conditions. But, as has been seen with cancer and tuberculosis, while giving ‘hope’, these measures were not always effective. In one high-profile field, however, a series of spectacular successes in large measure realized surgery’s promise, sustaining its impetus into more recent times when various improvements (including new diagnostic technologies and more effective drugs) made surgical intervention generally more effective.
That area was cardiovascular disease, a condition perceived to be dramatically worsening. Like a Himalayan peak, the heart is a great challenge to surgery. Sir Stephen Paget said in 1896 that ‘surgery of the heart has probably reached the limits set by Nature to all surgery; no new method, and no new discovery, can overcome the natural difficulties that attend a wound of the heart.’ But in the twentieth century, the heart ceased to be a no-go area.
As explored in Chapter 18, clinical knowledge of heart disease was growing, helped by diagnostic technology, notably the electrocardiograph (1903), which became as much a symbol of the modern hospital as the operating theatre. Over the decades other diagnostic improvements brightened prospects for heart surgery. In 1929 a German medical student, Werner Forssmann (1904–79), inserted a catheter into his own arm, slid it up a vein and had an X-ray photograph taken of the catheter, whose tip he had pushed (it turned out) into the right atrium of his heart. This caused a sensation. He experimented again on himself in 1931, injecting a radio-opaque substance through a catheter into his heart and then having himself X-rayed (the first angiocardiogram). Building on this, in 1940 the Americans André Cournand (1895–1988) and Dickinson Richards (1895–1973) performed the first catheterization on a patient. Such innovative techniques were to become routine in the diagnosis of heart disease.
Catheterization was to prove useful not just diagnostically but therapeutically. It was discovered that the inner lining of coronary arteries (blockage of which is a major cause of heart attacks) can be scoured in a technique known as endarterectomy. In the mid-1970s angioplasty was devised by the Zurich-based doctor Andreas Grüntzig (1939–85): a tiny balloon is inserted into a constricted artery via a catheter and inflated, which clears a path for blood flow. The New York heart surgeon Sol Sobel took this technique a stage further by injecting a powerful carbon dioxide gas jet into an artery via a hypodermic syringe.
As had been understood down the centuries by surgeons faced with aneurysm, the great problem in cardiovascular cases was to find a practical way of reconstructing arteries. This was achieved by a surgeon from Lyons, Alexis Carrel (1873–1948), a perfectionist who elevated the technical details of his craft into an almost religious mystique. Carrel showed that a piece of the aortal wall could be replaced with a fragment from another artery or vein; above all, having taken lessons from a lacemaker, he developed effective ways of sewing vessels together (anastomosis). In 1910 he described how to transplant an entire vessel, sewing the ends with ‘everted’ sutures so that the inside was left threadfree, in the belief that the familiar and lethal problem of clotting would thereby be surmounted.
His blood vessel anastomosis operations, joining severed arteries or veins, laid the foundations for later transplant surgery and seemed to presage the realization of the Cartesian model of the body viewed as a machine with replaceable parts. Carrel’s early animal experiments from 1902, performed with dog kidneys, ovaries, legs and other organs, attracted such attention that he went to America, working in Chicago and later at the recently founded Rockefeller Institute in New York. In the course of his experiments, he discovered that animals grafted with their own organs thrive, whereas organs sewn on from other animals provoke death; the rejection problem had raised its ugly head.
Traditionally off-limits, the heart and its vessels became the object of intervention thanks to these and related advances. Heart surgery developed by stages, and various operations were tried with improving results. Early efforts to suture stab wounds, for example, resulted in a mortality of 50–60 per cent, due mostly to infection, but by World War II, drawing upon sulfa drugs and antibiotics, surgeons were able to open the heart without undue risk.
Among acquired heart defects, one of the commonest was mitral stenosis, frequently the long-term consequence of childhood rheumatic fever. The valve between the left auricle and ventricle (it supposedly looked like a bishop’s mitre) becomes narrowed (stenosis); blood accumulates in the atrium and backs up into the lungs, and the ventricle is not able to supply the body’s oxygen needs. The patient is tired and short of breath, and the retardation of blood circulation often results in heart failure. Surgical treatment was envisaged for this constriction in 1902 by Sir Thomas Lauder Brunton (1844–1916), and an operation for mitral stenosis was first attempted in 1923 by Elliott Cutler (1888–1947) in Boston, but without success. In 1925 the English surgeon Henry Souttar (1875–1964) reported operating on a person thought to have mitral stenosis. Though Souttar’s diagnosis turned out to be wrong, he discovered that he could stick his finger through the mitral hole, and suggested that this might be a way of widening a narrowed valve. Failures in further attempts at mitral valvotomy (out of ten attempts to cure mitral stenosis, eight patients died) led to a pause in this form of surgery.
Nothing happened until a Boston surgeon, Dwight Harken (b. 1910), returned to practice after the Second World War, during which he had come to know the heart through removing bullets. Turning to mitral stenosis, he fitted his finger with a small knife, and on 16 June 1948 treated a valve by dilating it with his finger and cutting its calcified ring. A few months afterwards in London, Sir Russell Brock (1903–1980) did likewise, but using his finger alone. The operation was progressively refined, leading to the ‘open commissurotomy’ operation used today. By the early 1980s about 16,000 mitral valve replacements were being performed each year in the USA.
The problem of ‘blue babies’ – those dying because of inadequate oxygen supply – was recognized from the late nineteenth century. Post-mortems indicated that four things could be amiss:
• the pulmonary valve between the heart and the pulmonary artery, through which blood should travel to pick up oxygen in the lungs, might be narrowed;
• the right ventricle, the lower right chamber of the heart, could become swollen from having to pump blood through this constricted aperture;
• the partition (septum) between the two sides of the heart might be incomplete: a ‘hole in the heart’ allowing oxygenated (arterial) and deoxygenated (venous) blood to mix;
• the aorta, normally supplying oxygenated blood to the rest of the body, could be misplaced so that it took blood from both the left and right ventricles.
Congenital defects of these kinds meant the baby’s body would receive insufficient oxygen and would become cyanosed, giving the skin its blue tinge. ‘Blue babies’ died young. Could blue baby syndrome be rectified surgically? This was first attempted at Johns Hopkins in November 1944. Denied entry to Harvard because she was a woman, Helen Taussig (1898–1986) had joined Johns Hopkins Medical School in 1921, becoming a specialist in paediatric cardiology. She found that some of her blue baby patients had another congenital heart defect – persistent ductus – yet they paradoxically seemed to do better: evidently the ductus or passage allowed blood to bypass the narrowed pulmonary valve and flow better to the lungs. The answer seemed to be to build an artificial ductus or shunt. She turned to the surgeon Alfred Blalock (1899–1964).
On 29 November 1944, Blalock operated on fifteen-month-old Eileen Saxon, a blue baby close to death. She hovered between life and death, slowly improved, and two months later was discharged from the hospital. By the end of 1950, Blalock and his co-workers had performed over a thousand such operations, and the mortality rate had fallen to 5 per cent. Taussig’s Congenital Malformations of the Heart (1947) became the Bible of paediatric cardiology.
With mitral stenosis, the surgeon could work on a regularly beating heart, since the procedure could be done in a trice; but more time was needed to correct inborn heart defects such as a ‘hole in the heart’. Repairing the hole takes about six minutes, but the brain can go without blood for only four. How could the available operating time be increased? One idea was to cool the body, as the brain requires less oxygen at lower temperatures. Dog experiments supported this hypothermia approach, and it proved workable on humans.
But hypothermia would be useless for the more serious defects. John Gibbon (1903–73), in Philadelphia, addressed that problem: to devise a machine that would take on the work of the heart and lungs while the patient was under surgery, pumping oxygenated blood around a patient’s circulation while bypassing the heart so that it could be operated on at leisure. Having developed a heart-lung machine, in 1950 he began a series of animal experiments to perfect his surgical techniques in correcting heart defects. In 1952 he operated on his first human patient, a fifteen-month-old baby, who died shortly afterwards. His second patient was eighteen-year-old Cecilia Bavolek, who had a hole in the heart. On 6 May 1953 he operated, connecting her to his heart-lung machine for forty-five minutes – for twenty-seven minutes it was her sole source of circulation and respiration.
The operation was a success, and advances thereafter were dramatic. Use of low-temperature techniques with a heart-lung machine initiated open-heart surgery. In 1952, the American surgeon Charles A. Hufnagel (1916–1989) inserted a plastic valve into the descending part of the aorta in the chest, to take over from a diseased aortic heart valve, and, three years later, surgeons began to replace failed valves with ones from human cadavers.
From the 1950s blocked arteries in the limbs were being replaced, and in 1967 coronary arteries were tackled for the first time, when Rene Favaloro (b. 1923), a cardiovascular surgeon at the Cleveland Clinic, Ohio, ‘bypassed’ an occluded artery by grafting on a section of healthy vein from the patient’s leg above and below the blockage, inaugurating the celebrated coronary bypass. The plumbing involved in coronary artery surgery is quite simple; the aims are to reduce heart pain (angina), improve heart function, and to cut the subsequent incidence of heart attack and sudden death.
By the mid 1980s more than a million Americans had undergone a bypass operation. Over 100,000 are being undertaken every year and technically the operation has become routine. What is in doubt is the long-term value of such arterial surgery. It is effective in relieving angina pains, but it is not clear that, compared with other medical treatments, it improves life expectancy.
TRANSPLANTS
Initially the accent in modern surgery was upon excision. In time, plastic and replacement surgery began to grow in importance and stature. Such reconstructive surgery may take many forms. It can include removing malignant skin tumours and treating birth defects – such as cleft lips and palates – as well as burns and face traumas, urogenital deformities and cancer scars, to say nothing of cosmetic or aesthetic operations like breast implants.
Experience showed that skin and bone tissues could be ‘autotransplanted’ from one site to another in the same patient. In Paris Félix Jean Casimir Guyon (1831–1920) reported in 1869 that he had got small pieces of skin to heal on a naked wound, while in the same year, noticing that large wounds healed not only from the edges but from ‘islands’ of skin, Jacques Reverdin (1842–1908), in Geneva, had the idea of aiding the process by strewing slivers of skin on the wound: these extra islands, too, formed new skin. It was also discovered that corneas could be transplanted in a ‘keratoplastic’ operation, originated by Eduard Zirm (1863–1944) in 1906, which saved the sight of many whose corneas had become opaque.
It was the First World War which decisively advanced skin transplants. Confronted by horrific facial injuries, Harold Gillies (1882–1960) set up a plastic surgery unit at Aldershot in the south of England. He was one of the first plastic surgeons to take the patient’s appearance into consideration (‘a beautiful woman’, he believed, ‘is worth preserving’). After the Battle of the Somme in 1916, he dealt personally with about 2000 cases of facial damage, and in 1932, he hired as his assistant his cousin, Archibald Hector McIndoe (1900–60), who had gained experience at the Mayo Clinic. Shortly after the outbreak of the Second World War, McIndoe founded a unit at Queen Victoria Hospital in East Grinstead, Sussex.
The Battle of Britain in 1940 brought McIndoe some 4000 airmen with terrible new injuries: facial and hand burns from ignited airplane fuel. McIndoe felt that ‘plastic surgery’ did not truly describe what was required for these, which frequently took years and many operations to rectify, so he coined the term ‘reconstructive surgery’. For major facial injuries, he used Gillies’s tubed pedicle graft – a large piece of skin from the donor site, which remained attached by a stalk to provide it with a blood supply until a new one established itself. A surgical artist, McIndoe had a gift for cutting complicated shapes from skin freehand.
What about organ transplantation? Thanks to Carrel’s new suturing techniques, transplants had become a technical possibility. Using experimental animals, Carrel had begun to transplant kidneys, hearts and the spleen; surgically, his experiments were successes, but rejection was frequent and always led to death: some obscure biological process was at work. There the matter rested, until work undertaken during the 1940s at the National Institute for Medical Research in London by Peter Medawar (1915–1987) clarified the underlying immune reactions. Medawar demonstrated that a transplant’s lifespan was much shorter in a host animal which had previously received a graft from the same donor, as it also was in a host injected with white blood cells from the donor. Evidently animals contained or developed antibodies which were interfering with the transplanted organ: the host fought the transplant as it would a disease, treating it as an alien invader.
In 1951 Medawar had the idea of drawing on cortisone, a new drug known to be immunosuppressive. In most circumstances immunosuppressiveness would be regarded as a blemish, but here it could be a virtue, undermining the host’s resistance. A better immunosuppressor, azathioprine, was developed by Roy Calne (b. 1930) and J. E. Murray (b. 1919) in 1959; but the real breakthrough came in the late 1970s with the application of a new high-powered immunosuppressor, cyclosporine, which became indispensable to further progress in transplanting organs.
The kidney blazed the transplant trail. That was inevitable. The organ was easily tissue-typed and simple to remove; and the fact that everyone has two but needs only one meant that living donors could be used. Should a transplant fail, dialysis was available as a safety-net. Kidneys had long been objects of attention. As early as 1914, a team at Johns Hopkins developed the first artificial kidney (for dogs) designed to perform dialysis – the separation of particles in a liquid according to their capacity to pass through a membrane into another liquid. In the Netherlands, Willem Kolff (b. 1911) constructed the first workable dialysis machine for people thirty years later. At first, they were used only for treating patients dying of acute kidney failure, or from poisoning by drugs which could be removed through dialysis, the idea being to keep them alive long enough for their kidneys to recover. By the early 1960s, however, Belding Scribner (b. 1921), in Seattle, was treating patients with chronic kidney failure with long-term dialysis.
Human kidney transplants were tried in the United States from 1951 – for example on patients with terminal Bright’s disease – but the death rate was so awful that only very courageous, reckless, thick-skinned or far-sighted surgeons persevered. Why did they fail? Analysis made it clear that organs had diverse types of tissue, rather as with different blood types, and highlighted questions of compatibility and rejection. Meticulous matching of the tissues of donor and host would give a transplant more of a chance; the ideal case would be that of identical twins, an unlikely event. Nevertheless, the first successful human transplant was indeed performed on identical twins: the donated organ was a kidney transplanted to a twenty-four-year-old man who had two diseased kidneys and was close to death. The healthy kidney came from his identical twin brother, so there was no immunological barrier. The operation was carried out in December 1954 in Boston by J. Hartwell Harrison and Joseph Murray, and its success helped Murray to a Nobel Prize.
Once kidney transplants had been seen to work, the way was open for all other organs. In June 1963, James Hardy (b. 1918) transplanted the lung of a man who had died of a heart attack into a fifty-eight-year-old dying of lung cancer. The new lung functioned immediately and did so for eighteen days until the patient died from kidney failure. In the same year human liver transplantation was first attempted by Thomas Starzl (b. 1926) in Denver.
What of the heart? Transplanting the heart poses problems independent of rejection. Since it deteriorates within minutes of death and is impossible to store, it must be removed and transplanted with great speed, even without full tissue-typing. Nevertheless, it was bound to be the coveted prize for transplant surgeons, for all the obvious emotional, cultural and personal reasons, not least because the soaring incidence of heart disease meant this breakthrough could have enormous utility.
The first human heart transplant was attempted on 23 January 1964 at the Mississippi Medical Centre. James Hardy, the pioneer of lung transplants, put a sixty-eight-year-old man with advanced heart disease on a heart-lung machine and prepared for the first transplant between humans. The donor was to be a young man dying from irreversible brain damage, but he was still alive when the potential recipient’s heart failed. Hardy stitched in a chimpanzee’s heart which was too small to cope, and the patient died.
The world had to wait another four years for the big event: at the Groote Schuur Hospital in Cape Town on 3 December 1967 Christiaan Barnard (b. 1922) transplanted the heart of a young woman, Denise Darvall, certified brain-dead after a car smash. It may be no accident that South Africa got in first; the land of apartheid had fewer ethical rules hedging what doctors could do. While Barnard had contemplated practising on a black man, his chief was mindful that ‘overseas they will say we are experimenting on non-whites’, and this would have tarnished the triumph.
Barnard’s patient was fifty-three-year-old Louis Washkansky, who had suffered a series of heart attacks in the previous seven years and had been given only a few weeks to live: he died of pneumonia eighteen days later. Before his demise, publicity seemed to take precedence over the patient’s health: the press was admitted to see the recipient within days of the operation (something that would not be permissible today) and Barnard immediately began jetting around the world; he was abroad when Washkansky’s condition started to deteriorate.
Barnard was followed in January 1968 by Norman Shumway (b. 1923) at Stanford University in California, and in May that year by Denton Cooley (b. 1920), in Houston. Heart transplants became the rage as media coverage generated funding and fame. In the following year more than a hundred were performed around the world in eighteen different countries; two thirds of the patients were dead within three months. Criticism mounted of such rash human experimentation, especially in view of the lack of attention to tissue-typing (matching compatible tissues).
Initially heart transplants had paltry success; it was not until cyclosporin became available over ten years later that the rejection problem was surmounted. Cyclosporin brought to an end the period when kidney, liver, heart, heart-and-lung and other interventions were more beneficial to researchers than to the recipients. By the mid 1980s there were 29 centres in the US alone carrying out heart transplants. Approximately 300 were performed in 1984; of the recipients, 75 per cent lived for at least a year and almost 66 per cent for five years. By 1987 the tally of transplanted hearts had risen to 7000.
By temperament, transplant surgeons were an audacious breed. A colleague of the Texan Denton Cooley commented in the 1990s, ‘Twenty-five years ago [Cooley] didn’t feel it was worth coming into the hospital unless he had at least ten patients to operate on.’ Such men were not fazed by failure: out of seventeen transplants Cooley performed in 1968 only three survived more than six months. So were there limits to what might be achieved? And if so, what were they?
By 1990, Robert White (b. 1926) was beginning to experiment in Chicago on monkeys with ‘total-body transplants’ – that is, giving heads entirely new bodies. In 1992 the English child Laura Davies suffered, many thought, as a human guinea-pig at the hands of the Pittsburgh surgeon Andreas Tzakis; she underwent, in the full glare of publicity, multiple transplants and re-transplants, involving as many as six organs, before she died. The BBC documentary producer Tony Stark tells in his book, Knife to the Heart, the story of an ex-patient of Tzakis’s, Benito Agrelo, a teenager who, experiencing agonizing side-effects from his post-transplant medication, quit his drugs with his family’s approval and chose to have a few months of normal life rather than a few years of agony. As a consequence, the medical authorities had him formally taken into care (on the grounds that medical non-compliance was a symptom of personality disorder). He was taken from his home, handcuffed and tied to a stretcher, and ambulanced back to hospital.
The life-and-death excitement of transplants could bring out a ruthless edge, a belief that ‘medical progress’ is an end which justifies almost any means. Critics allege that such incidents epitomize the disturbing inability of today’s high-tech medicine to accept the autonomy of patients and the reality of death. Barnard’s published autobiography portrays its author as a man obsessed by success; he became besotted by his opportunities for fame and sexual conquest. (It is ironic that his surgical career was cut short by so banal a condition as arthritis.)
Other forms of replacement surgery have been less glamorous, costly and controversial – and arguably far more valuable. Hip replacement was developed around 1960 by John Charnley (1911–82) at the Manchester Royal Infirmary. He tackled a widespread problem. When a degenerative disease causes the breakdown of bone tissue, the joint surfaces become rough and irregular, so that when the ball of the femur rubs against the hip socket even the slightest pressure can be extremely painful. Various replacement hips had been designed, including some fashioned from stainless steel. These were unsafe and uncomfortable: the metal ball was attached to the femur with a screw, which loosened and eventually worked free. In addition, the steel hip squeaked with every step. Charnley tried to overcome these problems. Realizing that the body’s fluids could not lubricate the steel sufficiently, he carried out exhaustive studies on lubricants, adhesives and plastics, applying the principles of tribology (the science of friction, lubrication and wear). He performed the first clinical tests of a prosthetic hip in November 1962. The mechanical success of his hip replacement was remarkable, and within four years his procedure had helped more than 9000 patients to walk without crutches. By the 1990s some of the removal of bone was being done by robotic devices, doubtless heralding a trend towards mechanically driven surgery.
The first successful replantation of a severed limb took place in Boston in 1962. A twelve-year-old boy had his right arm cut off just below the shoulder in an accident. The boy and his arm were taken to Massachusetts General Hospital where the arm was grafted back on. Some months later the nerves were reconnected, and within two years the boy had regained almost full use of the arm and hand, to the point of being able to lift small weights. Replant operations became standard. In 1993 John Bobbitt became headline news when his penis, sliced off by his enraged wife, was sewn back on. Bobbitt blossomed into a porno movie star.
REPRODUCTION
Transplants were just one of many new procedures which radically increased the scope of intervention after 1950. Human reproduction has been spectacularly affected, leading to possibilities and ethical quandaries that transcend technical success or failure in the individual case.
Aware that many women were infertile because of blockage of their Fallopian tubes, the British gynaecologist Patrick Steptoe (1913–88) saw that if an egg were removed from an ovary, using a laparoscope, and fertilized with sperm, it could be placed in the mother’s uterus for normal development. This has become known as ‘test-tube’ fertilization, though the fertilization takes place not in a test-tube but in petri dishes.
Steptoe went into collaboration with Robert Edwards (b. 1925), a Cambridge University physiologist. In February 1969, they announced that they had for the first time achieved fertilization of thirteen human eggs (out of fifty-six) outside the body. Nine years later, in July 1978, Louise Brown was born at Oldham District Hospital, the first ‘test-tube baby’. Despite enthusiastic media coverage, by May 1989 only one in ten such in vitro fertilization (IVF) treatments in Britain produced a live baby.
Louise Brown’s birth was condemned by voices as diverse as Nobel laureate James Watson, the Vatican, and some feminist philosophers, for the foetus’s brief extra-uterine existence raised controversial moral questions: did every extra-uterine embryo have a right to be implanted? must surplus extra-uterine embryos be destroyed or stored and used for later research? if so, was ectogenesis (growth in an artificial environment) permissible? Together with artificial insemination by donor (AID), IVF also raised perplexing questions about the legitimacy of surrogate motherhood.
Efforts were made to resolve these issues. In July 1982 the Warnock Committee, chaired by the philosopher Mary Warnock (b. 1924), was convened by the UK Department of Health and Social Security to examine the problems. It recommended that IVF be considered a legitimate medical option for infertile women, while limiting the extra-uterine maintenance of an embryo to fourteen days after fertilization: only up to that point was experimentation admissible. The Committee also advised against implanting in humans or other species any embryo that had been used as a research subject. It did not address surrogate motherhood, which remains contentious.
Meanwhile the Vatican came out against IVF; payments to surrogate mothers (receiving eggs and sperm from couples unable to have their own children) were outlawed in some countries; and there was shock when it was revealed, in 1992, that a fifty-nine-year-old Italian woman had been impregnated with a donor egg fertilized by her husband’s sperm. At the same time, an American doctor was sentenced to ten years in prison for having used his own sperm when artificially inseminating dozens of his female patients. In 1997, a sixty-three-year-old mother gave birth after in vitro fertilization.
Advances in reproductive technology and transplant surgery have thus raised worries that pressures for the legalization of the hire of wombs and the sale of organs will become overwhelming. In the United States Dr H. Barry Jacobs (b. 1942) floated the International Kidney Exchange Inc., with a view to importing Third World kidneys for sale to American citizens. Executed criminals in China already have their organs harvested; kidney sales by poor people have become common in various developing countries, and are not unknown in Harley Street. The disquieting parallel between the nineteenth-century activities of Burke and Hare and these modern variants of body-snatching was a theme central to the film Coma, directed by Michael Crichton (1977).
Such developments have brought into question the whole status of the body: is it a ‘thing’, disposable on the market like any other piece of private property? (If so, who owns it after death?) The case for regarding it as a commodity has been advanced by some American utilitarian philosophers: rational choice and market forces, they argue, would create an optimum trade in body commodities such as sperm, embryos, wombs and babies. A ‘futures market’ in organs has been proposed. Donors (or rather vendors) would be paid in advance, on condition that they bequeathed their corpses to be ‘harvested’ on death. Linked also to the staggering growth of aesthetic surgery – over 800,000 facial procedures are annually performed in the United States alone – the new world of surgery is challenging traditional ideas of the person and the integrity of the individual life.
CONCLUSION
Some pattern may be seen in the development of surgical interventions. In its heroic, Billrothian stage, surgery was principally preoccupied with removal of pathological matter. In due course, thanks to better immunological knowledge and the evolution of anti-bacterial drugs, surgery entered a new phase, more concerned with restoration and replacement. Practitioners acquired a capacity to control and re-establish the functioning of the heart, lungs and kidneys, and also fluid balance. Since the mid twentieth century, one result has been a greater assimilation of surgical intervention into other forms of treatment.
Take angina, first described in the eighteenth century by William Heberden. For a time it was expected that surgical intervention would solve this, through bypass grafting or balloon dilatation (coronary angioplasty). Indeed, the Russian president, Boris Yeltsin, who had suffered three heart attacks, underwent a quintuple bypass operation in November 1996. But angina has not only been tackled through surgery – drugs have played their part, notably beta-blockers. In a similar way, most cancers today are handled by a combination of surgery, radiotherapy, chemotherapy and, in some cases, hormone treatment.
Telling also has been the growth of implants as part of a wider strategy of controlling and re-establishing organ functioning. The first implantation of such a prosthesis came in 1959 with the heart pacemaker, developed in Sweden by Rune Elmqvist (b. 1935) and implanted by Åke Senning (b. 1915), a device designed to correct arrhythmic variations by regulating beat frequency with electrical impulses. An artificial pacemaker now offers a remedy for many patients suffering from an abnormally slow heartbeat. By the early 1990s more than 200,000 were being implanted each year worldwide, approximately half in the United States (about the same number of bypass operations were being performed). Restorative procedures now range from eye lenses to penis implants to facilitate erection. Implanting a prosthesis is traditionally considered part of surgery, but is it so very different from ‘implanting’ a drug, the work of physic?
This contemporary blurring of the boundaries is evident, too, in changes in urology. Early in the twentieth century, disorders like bladder carcinoma were treated by cutting out malignant tumours. An alternative was provided by radiotherapy, also used for prostate cancer. Prostate cancer was one of the first cancers to be successfully treated with hormones (1941). These days prostate cancer, the most frequent form of male cancer, is rarely dealt with by radical prostatectomy; most treatment is palliative: anti-androgen therapy, which involves giving the female hormone oestrogen, helps. In other words, surgery has become increasingly integrated into wider therapeutic strategies, eroding the obsolescent barriers between physic and surgery.
It has thus gone through successive, if overlapping, phases of development. The age of extirpation, involving new ways of dealing with tumours and injuries by surgical excision, gave way to a period of restoration, in which stress fell on physiology and pharmacology, aimed at repairing endangered or impaired function. More recently replacement has come to the fore, with the introduction of biological or artificial organs and tissues. This has implied a more systemic approach to treatment, foreshadowing the ending of old professional identities.
* Treves was also famous for his dealings with Joseph Merrick (1860–90), the ‘Elephant Man’, suffering from hideous deformities, whom he befriended, rescuing him from freak shows and securing permanent accommodation for him at the London Hospital.
* Like many top surgeons, Sauerbruch was a driven man, convinced of his sacred mission as a healer. Authoritarian and egotistical, he gave enthusiastic support to the Nazis; after the war his unwillingness to retire killed many patients, for his cringing colleagues had long recognized that his judgment and powers had waned but did nothing about it.
* Brain surgery for epileptics does not always involve removing tumours; it is possible to remove scarred brain tissue. One of the more common brain operations for epileptics is a temporal lobectomy; temporal lobe epilepsy is the most common form in adults.
MEDICINE USED TO BE ATOMIZED, a jumble of patient-doctor transactions. Practitioners were mainly self-employed, with at most a tiny back-up team: an apothecary compounding medicines or a surgeon’s apprentices pinning down a screaming amputee. Patient-doctor relations typically involved a personal contract, initiated by the sick person calling in a physician or a surgeon.
Other kinds of healing encounters, such as medical charities, made much of the personal touch, the face-to-face relationship believed essential to the office and alchemy of healing. There were of course exceptions: in plague epidemics doctors worked in teams under magistrates within a bureaucratic framework; medics banded together in colleges for esprit de corps and pomp and ceremony; and some served in larger social institutions, like the armed forces. But these were anomalies within the normal petty-capitalist occupational pattern. Writing in industrializing Manchester, Dr Thomas Percival presented in his Medical Ethics (1803) a picture of a fragmented, divided occupation, in which petty tensions and rivalries between individual doctors (every man for himself) threatened good relations and good practice. Medicine was traditionally small-scale, disaggregated, restricted and piecemeal in its operations.
What could be more different from today? Medicine has now turned into the proverbial Leviathan, comparable to the military machine or the civil service, and is in many cases no less business- and money-oriented than the great oligopolistic corporations. A former chairman of a fast-food chain who quit to head the Hospital Corporation of America (Nashville, Tennessee) explained his move thus: ‘The growth potential in hospitals is unlimited: it’s even better than Kentucky Fried Chicken.’ No wonder he thought so, since astonishing transformations in scale were taking place in what has become known as the industrial-medical complex.
The annual number of hospital admissions in the United States rose from an estimated 146,500 in 1873 to more than 29 million in the late 1960s. While the nation’s population was growing five-fold, use of hospitals – the new capital-intensive factories of medicine – rose almost two-hundred-fold. In 1909 there were 400,000 beds in the USA; by 1973 there were 1.5 million. (In Britain, the number of beds per thousand population doubled between 1860 and 1940, and doubled again by 1980.) In the process, medicine became inordinately expensive, claiming a greater share of the gross national product than any other component (in the United States, a staggering 15 per cent by the 1990s). Critics complained that it was out of control, or at least driven more by profit to the supplier than by the needs of the consumer.
The transition from one-man to corporate enterprise is partly the institutional dimension of the developments sketched in the last few chapters: giant strides in basic and clinical research, and the pharmacological and surgical revolutions. In the 1850s Claude Bernard funded his research out of his wife’s dowry, while George Sumner Huntington, who discovered the disease named after him, was an obscure country practitioner in New York state; in those days all the tools of the country doctor’s trade fitted into a battered pigskin saddle-bag. But even Koch, who started small, ended up the satrap of several palatial research institutions, and since then the iron law has been expansion and amalgamation. These days medicine is practised, at least at its cutting edge, in purpose-built institutions blessed or burdened with complex infrastructures, bureaucracies, funding arrangements and back-up facilities. Orthodox medicine is unthinkable without its research centres and teaching hospitals served by armies of paramedics, technicians, ancillary staff, managers, accountants and fund-raisers, all kept in place by rigorous professional hierarchies and codes of conduct. The medical machine has a programme dedicated to the investigation of all that is objective and measurable and to the pursuit of high-tech, closely monitored practice. It has acquired an extraordinary momentum.
In a medical division of labour that has become elaborate, physicians remain superior in status; however, today they are but the tip of a gigantic health-care iceberg – of the 4.5 million employees involved in health-care in America (5 per cent of the total labour force), only about one in seventeen (300,000) is a practising physician. Perhaps nine out of ten of those employed never directly treat the sick. Time was when medical power lay with clinicians who attended kings and the carriage-trade; in ancien régime Denmark it was not odd that Dr Struensee (1737–72) was both royal physician and prime minister (an affair with the queen cost him his head). Today, though one or two transplant surgeons are household names, the real medical power lies in the hands of Nobel Prize-winning researchers, the presidents of the great medical schools, and the boards of multi-billion dollar hospital conglomerates, health maintenance organizations and pharmaceutical companies.
In many nations the largest single employer, medicine has seen its politics become controverted. Providing life-saving services and priding itself upon being, as Sir William Osler asserted around 1900, distinguished from all other professions ‘by its singular beneficence’, medicine lays claim to a privileged autonomy. Yet that is also the special pleading of an institution dependent upon the market and the state for its financing and anxious to protect its corner.
Modern medicine has been able to root, spread and propagate itself in this way in part because it changed its objectives. Traditionally the physician simply patched up the sick individual; but medicine gradually asserted a more central role in the ordering of society, staking claims for a mission in the home, the office and the factory, in law courts and schools, within what came to be called, by friend and foe alike, a welfare or therapeutic state. The more medicine seemed scientific and effective, the more the public became beguiled by the allure of medical beneficence, regarding the healing arts as a therapeutic cornucopia showering benefits on all, or, like a fairy godmother, potentially granting everybody’s wishes. In 1993 the distinguished American writer, Harold Brodkey, who was suffering from AIDS, declared in a magazine, ‘I want [President] Clinton to save my life.’ Brodkey assumed all that was needed was larger federal grants to AIDS researchers. The previous year, the US government had spent $4.3 billion on AIDS – more than any other disease except cancer.
In western market societies driven by consumption and fashion, medicine was one commodity for which rising demand could not summarily be dismissed by critics of ‘I want it now’ materialistic individualism. And ever since wily Chancellor Bismarck set up state-run medical insurance in Germany in 1883, politicians have been able to look to health care as a service appealing to virtually the entire electorate. Alongside bread and circuses, there came to be votes in pills and hospital beds. Yet medical politics, of course, never proved simple, and the consequence is that, at the close of the twentieth century, public debates on medical care and its costs have become sources of strife in both America and Britain.
In 1992 one of the campaign issues securing Bill Clinton’s election was his undertaking to reform health care. That he dropped the issue once he was president, on meeting opposition from the corporate bodies dominating medicine and their friends in Congress, indicates how medicine has become a political hot potato. In the UK, once Margaret Thatcher became prime minister in 1979, the future of the much-valued National Health Service, threatened by the Conservative government’s agenda for privatizing public services, was rarely out of the headlines.
Her government felt obliged to reassure voters time and again that, despite the closure of hospital wards and cuts in services, the NHS was safe in its hands. How different from the nineteenth century! Medical issues were then marginal to high politics. Nations did not even possess a ministry of health, hospital beds had no place in election manifestos, and the provision of health care was not considered the state’s business.
Of course, the state’s role in such matters had been growing during the nineteenth century, but up to 1900 its activities were ad hoc. Statutory medical provision tended to be limited to particular problems (e.g., policing of communicable disease) and was viewed by cabinets as a necessary evil rather than the true business of state or a vote-winner. (As noted, faced with cholera in 1848, The Times had remonstrated against being ‘bullied into health’; its readers probably agreed.) By 1900 medical professionals were generally licensed by law, even in the United States where medical sectarianism was rampant, but nowhere did the state outlaw irregulars. In industrialized nations public-health legislation had entered the statute book in respect of matters like sewage, sanitation and smallpox. But in the USA and elsewhere the management of health was still a tangle of voluntary, religious and charitable initiatives, as was primary care for the needy, while medicine for those who could afford to pay remained essentially a private transaction.
All this was to change, ceaselessly if unevenly, in the twentieth century. It became widely accepted that the smooth and efficient functioning of intricate producer and consumer economies required a population no less healthy than literate, skilled and law-abiding; and in democracies where workers were also voters, the ampler provision of health services became one way of pre-empting discontent. Health also moved centre-stage in propaganda wars – questions of national fitness came to the fore in the great Darwinist panics over racial decline around 1900. Between the wars, fascist Italy, Nazi Germany and the communist USSR each glorified the trinity of health, power and joy, rejoicing in macho workers and fecund mothers while unmasking social pathogens who supposedly endangered national well-being. The Nazis did not merely seek to exterminate what they called the cancer of the Jews; they encouraged cults of physical fitness (through hiking, paramilitary drill, sport and sun-bathing) and launched the first crusade against cigarette smoking. Hitler (a non-smoker, unlike Roosevelt, Stalin and Churchill) supported anti-tobacco campaigns in the name of hygiene, and smoking was banned in the Luftwaffe. The first major medical paper proposing a link between smoking and lung cancer was published by Dr F. H. Müller in Germany in 1939. In any case, whether democratic or fascist, the hands of great powers were often forced when it came to health matters: world wars required massive injections of public money and resources into centralized health services to keep fighting men in the field and sustain civilian morale.
The twentieth-century ship of state thus took health on board, paying lip service to medical thinkers and social scientists who taught that a healthy population required a new compact between the state, society and medicine: unless medicine were in some measure ‘nationalized’, society was doomed to be sick and dysfunctional.
Medical philosophers equally recognized that medicine had to revise its aims and objectives. Conventional clinical medicine was myopic and hidebound, reformers argued; only so much could be known from corpses, only so much could be done with sedatives, syringes and sticking-plaster. Why wait for people to fall sick? Prevention was better than patching; far better to determine what made people ill in the first place and then – guided by statistics, sociology and epidemiology – take measures to build positive health. In a rational, democratic and progressive society, medicine should not be restricted and reactive; it should assume a universal and positive presence, it should address the totality of pathological tendencies in the community and correct them through farsighted policies, the law, education and specific agencies.
Why not invest in screening, testing, health education, ante-natal care, infant welfare, school health? Why not conduct surveys to discover health hazards and epidemiological variables? Would it not make sense for medicine to step in to prevent citizens becoming decrepit at the workplace, intervene before they became alcoholics or neurotics, or before imbeciles started breeding? Ill-health could not truly be understood at the individual, clinical level, but only as an expression of the health of the social whole; likewise, it could not be combated ad hoc, but only through planned interventions. This was the only way to achieve national efficiency, to create a fit, indeed fighting fit, population.
Such views were widely embraced in war-torn Europe and to a lesser degree in North America from the early twentieth century, among the planners, academics and civil servants charged with administering modern society; among social democrats appalled at the human waste caused by inefficient market mechanisms in the Slump and the Depression; among progressive doctors convinced the profession would fulfil its mission only if the state spurred reform; among medical rationalists with visionary leanings and a taste for the social sciences; and not least among far-right propagandists preoccupied with ensuring national mastery in a cut-throat Darwinian arena whose very law was biomedical: thrive or perish.
The call for medicine to adapt was reinforced by the growing recognition that the disease landscape was changing, or was at long last being understood. Twentieth-century epidemiologists stressed that much of the sickness crippling workers, at immense socio-economic cost, was no longer being produced by the classic air-, water- and bug-borne infections. Typhus, diphtheria and other acute infections were being defeated by better living standards, sanitary improvements and interventions like vaccination. In their place, chronic disorders began to assume a new prominence. Medicine began to fix its gaze on a morass of deep-seated and widespread dysfunctions hitherto hardly appreciated: sickly infants, backward children, anaemic mothers, office workers with ulcers, sufferers from arthritis, back pain, strokes, inherited conditions, depression and other neuroses and all the maladies of old age.
To deal with these, the ontological or bacteriological model of disease was no longer sufficient. The health threats facing modern society had more to do with physiological and psychological abnormalities, broad and perhaps congenital tendencies to sickness surfacing among populations rendered dysfunctional and unproductive by poverty, ignorance, inequality, poor diet and housing, unemployment or overwork. To combat all this waste, hardship and suffering, medicine (it was argued) had to become a positive and systematic enterprise, undertaking planned surveillance of apparently healthy, normal people as well as the sick, tracing groups from infancy to old age, logging the incidence of chronic, inherited and constitutional conditions, correlating ill health against variables like income, education, class, diet and housing.
Diverse policy options were then available. One lay primarily in advice and education – helping citizens to adjust to socio-economic reality by instructing them in hygiene, cleanliness, nutrition and domestic science – the old non-naturals revamped. Another lay in specific interventions: providing free school meals, contraception centres, antenatal clinics and the like. In London in the 1930s, the Peckham Health Centre was a showcase experiment in the voluntary provision of health care, together with leisure and educational facilities, for a working-class community. Or there might be yet more ambitious political programmes, building on the model Virchow had perhaps imagined in 1848 for Silesia: health could not be improved without radical socio-political change designed to ensure greater social justice and equality.
Disease became conceptualized after 1900 as a social no less than a biological phenomenon, to be understood statistically, sociologically, psychologically – even politically. Medicine’s gaze had to incorporate wider questions of income, lifestyle, diet, habit, employment, education and family structure – in short, the entire psycho-social economy. Only thus could medicine meet the challenges of mass society, supplanting outmoded clinical practice and transcending the shortsightedness of a laboratory medicine preoccupied with minute investigation of lesions but indifferent as to how they got there. It was not only radicals and prophets who appealed to a new holism – understanding the whole person in the whole society; respected figures within the temple of medical science, including Kurt Goldstein (1878–1965) and René Dubos (1901–82), author of Mirage of Health (1959), were emphatic that the mechanical model of the body and the sticking-plaster formula would at best palliate disease (too little, too late) but never produce true health.
The twentieth century generated a welter of programmes and policies devoted to the people’s health. The underlying ideologies extended from the socialist left (state medicine should aid the underprivileged) to the fascist right (nations must defend themselves against sociopathogenic tendencies). Either way, the hallowed liberal-individualist Hippocratic model of a sacred private contract between patient and doctor seemed as passé as Smithian political economy in the age of Keynes. As medicine transformed itself after 1900 into a vast edifice, philosophies changed with it, embracing an expansive vision of the socialization of medicine and the medicalization of society. Buoyed up by the indisputable success of Listerian surgery, Pasteurian bacteriology and so forth, confidence was running high about what medicine and health care might achieve. In a world torn by war, violence, class struggle and economic depression, medicine at least would be a force for good! The benefits were clear; the disadvantages would surface only later.
MEDICINE AND THE STATE
Over the centuries medicine had slowly and incompletely become incorporated into the public domain. From medieval times the state began to regulate medical practices, creating a profession; in the early-modern period medicine was ascribed a role within mercantilist strategies for the consolidation of national wealth and manpower; and doctors were always liable to be called upon in time of emergency, particularly plague. In the nineteenth century new medical growth points arose, notably the need to cope with the threat of the sick poor and the environmental hazards caused by industrialization.
Spurred by a familiar mix of altruism and prudence, medical measures were devised to alleviate the afflictions of the masses. The nineteenth century brought philanthropic dispensaries and other forms of out-patient care for the ambulant sick, manned by charitable volunteers or by rank-and-file practitioners working for municipal, religious or philanthropic agencies. Hospitals provided beds for the sick poor, supported by charity (religious or secular) and public subsidy; sometimes they were staffed by elite practitioners using hospital positions as platforms for teaching, research and surgical practice. Centralized or municipal poor law organizations had to handle vast numbers of the sick poor and encountered the need to create immediate hospital facilities for them.
The sanitary movement promised a further approach to the health problems of industrial society, preaching and teaching good drains and housekeeping, physical and moral cleanliness, and in some situations being granted judicial powers. Public health set medicine onto a new official plane with the appointment of experts to compile official statistics, remedy nuisances and do the state’s business. A cohort of doctors emerged who were beholden not to individual clients but acted as guardians of the health of the population at large, with a brief that was preventive rather than curative. The Victorian administrative state created a variety of appointments for doctors – as medical officers of health, public analysts, factory inspectors, forensic experts, prison doctors and asylum superintendents, to say nothing of those employed in the army, the navy and imperial enterprises such as the Indian Civil Service.
The ideological identification of medicine with public service was consolidated as more doctors earned part or all of their living in the state sector. Their position was far from comfortable, however, since medicine might need to serve two masters. The prison doctor was implicated in a punitive regime, but ethically his duty lay with the well-being of the individual convict. A similar predicament was involved with workmen’s compensation schemes for industrial accidents and illness. Doctors on statutory arbitration boards had to handle situations where the causes of illness and injury were contested between workman, master and state. When a coal-miner developed the eye disease nystagmus, was this to be diagnosed as due to work conditions or to an inherent constitutional diathesis? The practitioner had to act as society’s arbitrator.
Further intractable tensions arose over provision and payments for health care. Sickness costs far exceeded the routine capacity of many to pay, and third-party systems were devised for meeting bills. Initially, these took the form of friendly societies and ethnic associations created by labouring people themselves. Workers in a factory or a coal-pit might club together to pay a fixed weekly sum into a common kitty, to procure the annual services of a general practitioner. Though such arrangements provided junior doctors with some guaranteed income, contract practice was resented, since patient-power posed threats to professional dignity and autonomy: doctors were apprehensive that third-party payers would call the tune, reducing them to mere employees, imperilling their clinical freedom, and creating a Dutch auction.
But if the nineteenth-century medic might not be quite sure where he stood, in Europe at least his (and to a small but growing degree her) economic prospects were brightening towards 1900, as specialisms became entrenched and antiseptic surgery blossomed, with the post-Lister menu of operations growing rapidly, first tried out on the poor and later performed upon paying private patients. With surgeons undertaking more ambitious work under strict aseptic routines, there was more scope for the setting up of private hospitals, nursing homes and clinics. Meanwhile public hospitals were turning themselves into high-prestige diagnostic and surgical centres, wooing affluent patients and employing almoners to ensure that even the poor paid something towards costs.
As medical institutions learned how to appeal to the better-off, advances in diagnostics and surgical interventions transformed the political economy of medicine, first and foremost in the United States but also among the elite elsewhere. American medicine was inventive and energetic in promoting new specialties and business arrangements, providing wider services and diagnostic tests, and tapping new sources of custom and income. Medicine seemed good for business and business good for medicine. American doctors were bold in setting up their own hospitals, as did religious, ethnic and other groups. Whether private or charitable, all such institutions found themselves in competition for paying patients who, as the twentieth century progressed, constituted a growing proportion of their clientèle. By 1929 the Mayo Clinic in Rochester, Minnesota was a huge operation with 386 physicians on its books and 895 lab technicians, nurses and other workers. The clinic had 288 examining rooms, 21 laboratories and was housed in a 15-storey building.
In the big cities, American private practitioners discovered the advantages of behaving like lawyers or businessmen, setting up offices in downtown medical buildings, with access to common facilities. Forward-looking in the use of secretaries and technicians, they drew in patients by installing X-ray machines and chemical laboratories, and communicating the self-assurance which the successes of bacteriology and surgery seemed to warrant. At the forefront of innovation, or among those serving wealthy clientèles, medicine developed a powerful momentum and met growing public demand by evolving new forms. Archaic distinctions between private and public, commercial and charitable blurred as hospitals became cathedrals of the new medical science and housed every department of practice. From the 1880s the development of hygienic, well-equipped operating theatres turned hospitals from refuges for the poor into institutions fit for all. From the early 1900s surgery became much more intricate, laboratory tests and other investigations were extended, medical technology became essential, and staff costs leapt. Ambulance services made the hospital the heart of emergency care. All this raised it in the public eye – it also raised costs.
At the same time medical elites were increasingly courted by politicians, called upon to sit on committees and public inquiries to pronounce on social health, housing, diet and national welfare. Dealing with charged issues like the health of children or soldiers, leading doctors hobnobbed with ministers and got a sniff of the benefits that could follow from state welfare programmes and Treasury health investment. Medicine imperceptibly obtained a place at the table of power.
But the situation did not look so rosy for all. With politicians dreaming up schemes for health assurance and state medicine, and with the extension of municipal hospitals, baby centres, venereal disease clinics and the like, ordinary private doctors could feel left out, anxious that public medicine would inexorably encroach upon the private practice which was their bread-and-butter. Every enlargement of state, municipal, contract or charitable medicine meant potential patients lost by private medicine and the family doctor. Worker power in medical friendly societies, schemes for public health, and not least the entry of women into the profession – all these were sources of concern for the self-employed practitioner.
At the grassroots, doctors reacted to these threats by digging in their heels. In the early twentieth century, for instance, British Medical Association branches began to operate remarkably like a trade union, threatening to blacklist colleagues who worked for non-approved friendly societies or ‘poachers’ who set up practices in ‘overstocked’ areas. Bitter discord flared between those greeting the extension of organized, large-scale state and municipal medicine, expecting better funding and facilities, and those suspicious that these would be financially ruinous and inimical to ‘proper’ medicine.
A quandary thus faced British doctors in 1911 when the Liberal politician Lloyd George launched his National Insurance scheme, modelled along Bismarckian lines. In this compulsory scheme, workers earning up to £150 a year (roughly speaking the waged working classes) would be insured; their contribution would be fourpence a week, the employer would pay threepence and the state twopence; the insured workers would in return receive approved medical treatments from a ‘panel’ doctor of their choice, and for the first thirteen weeks of sickness a benefit of 10s per week for men (7s 6d for women – they earned less). There were restrictions upon benefits: hospital costs were not met, except in the case of TB sanatoria, and the families of insured parties were excluded, though there was a 30s maternity grant (babies were prized as the future of the race). It was a measure devised to be popular with the electorate (it gave ‘ninepence for fourpence’, boasted Lloyd George), while ameliorating the wretched health of ordinary workers. This had been critically exposed when a high proportion of Boer War volunteers had been found unfit to serve for medical reasons.
Initially practitioners were up in arms against the National Insurance Bill. They would not become cogs in a bureaucratic machine run by the state! Doctors would be reduced to the status of petty civil servants. Some, however, looked to the scheme for deliverance from worse servitude – worker-power in friendly societies – and believed the state would be a more benign and distant master, and certain to pay. In the event, after vocal opposition, most opted to become ‘panel doctors’ and found that their relationship with the state was secure and remunerative; doctors’ income rose steadily.
National Insurance reinforced the divide in Britain between the general practitioner and the hospital doctor, which was to have long-term repercussions for the structure of the profession; but it also helped to cement a lasting and valued relationship between the sick and their GPs, secured by the authority of the state. The family doctor was appreciated because he (increasingly she) was a reassuringly tangible presence. ‘We never took weekends off except by special arrangement, which meant we worked six and a half days a week and were on call every night’, recalled Kenneth Lane, who began practice as a junior partner in Somerset in 1929; his sense of identity as a country GP and devotion to the job were typical.
War and the threat of war did not merely expose the ill health of people in modern industrial society; they provoked grave anxiety generally about the nation’s health. Many who might be indifferent about ailing factory workers and insanitary housing became incensed at the thought of sickly soldiers and enfeebled national stock. The most articulate and coherent response on both sides of the Atlantic was the eugenics movement, which directed the health debate to the problem of fitness, understood in national and racial terms.
Diverse nebulous theories of psycho-biological decline were crystallized by Francis Galton (1822–1911), Darwin’s cousin, into a eugenics creed which taught that survival lay in selective breeding. Nature counted for more than nurture; in terms of contributions to national health, breeding stock mattered more than housing stock, wage levels, environmental filth and all the variables which had preoccupied Victorian sanitarians. Unemployment and poverty were the results, not the causes, of social incapacity; malnutrition followed from bad household management, not bad wages. There were in England and Wales, as one eugenist put it in 1931, ‘four million persons forming the dregs of the community and thriving upon it as the mycelium of some fungus thrives upon a healthy vigorous plant’. What was the answer? The eugenically sound should breed more, and the dysgenic should be dissuaded from reproduction, or even prevented by sterilization or segregation.
These new degenerationist and hereditarian creeds gained vocal and in some cases large followings in Protestant Europe and in the United States, with race purity eugenists dismissing public health reform not simply as a waste of money and effort but as positively mistaken: rescuing the dregs risked ruining the race. Eugenists like Leonard Darwin (1850–1943), Charles Darwin’s son, in England, and Charles Davenport (1866–1944) in the USA advocated measures which included stricter marriage regulation, tax reform to encourage the middle classes to produce more babies, detaining defectives, and the sterilization (voluntary or compulsory) of the unfit. The political right had no monopoly on eugenism: ‘the legitimate claims of eugenics’, pronounced the impeccably left-wing New Statesman, ‘are not inherently incompatible with the outlook of the collectivist movement.’
Industrial nations adopted various ingredients of these policies. In Britain the Mental Deficiency Act of 1913 increased powers to place ‘defectives’ in special ‘colonies’. American eugenists championed stricter immigration laws and secured the first compulsory sterilization measures, in due course to be law in forty-four states; 15,000 Americans were sterilized by 1930. Public health advocates and many socialists rebutted these arguments: ill health and other impairments were the products of poverty and poor environment, not inborn defects. And they opposed eugenic policies with ambitious programmes of preventive and sanitary medicine and education. ‘Social efficiency’, environmentalists claimed, lay in public health management and welfare programmes for mothers, infants, school children, the acute and chronic sick, the tubercular and the aged. Some demanded a comprehensive health system, administered by local health authorities and funded by taxation.
The public health and reformist cases on the one hand and the eugenist on the other shared some common ground. Both could embrace the importance of planning for the future. Healthy mothers produced healthy babies, so state and local agencies and activists invested in mother and baby welfare. In this there were already voluntarist traditions on which to build; so-called ‘mothers’ ignorance’ had been tackled in Victorian times. Founded in 1857, the Ladies’ National Association for the Diffusion of Sanitary Knowledge had distributed a million and a half tracts, with titles like Health of Mothers (which contained advice on pre-natal care, food, exercise and the evils of tight corsets), How to Rear Healthy Children, How to Manage a Baby and The Evils of Wet-Nursing, full of baby-care jingles like:
Remember, he can’t chew
And solid food is bad for him
Tho’ very good for you.
From the 1880s milk depots were founded (though before cows’ milk was tuberculin-tested, these were a mixed blessing); ante-natal care programmes were initiated; clinics were set up, especially in poor neighbourhoods, where babies were inspected and weighed, given food, vaccinations, and, later, vitamin supplements; women were taught baby care and the benefits of breast-feeding – or alternatively encouraged to use brand-name powdered milk as hygienic artificial feeding; instruction and pressure from big food companies meant that bottle-feeding became the norm for American mothers. Propaganda for ‘scientific motherhood’ denigrated the traditional mother but idealized the mother of the future. Mothers were persuaded to defer to the expertise of doctors and their clinics, and came under pressure from physicians, women’s magazines and particularly the hospitals in which they were increasingly delivered. They became key instruments in the dissemination of new health values.
Doctors, district nurses and health visitors were all asserting their superior knowledge and authority, establishing moral sanctions on grounds of health and the national interest, and running down traditional methods of child care – in particular care by anyone except the mother. Neighbours or grandmothers looking after babies were assumed to be dirty, unfit and remiss. The authority of state over individual, of professional over amateur, of science over tradition, of male over female, of ruling class over working class – all were involved in the ‘elevation’ of motherhood in this period, and in making sure that the mothers of the race were carefully schooled.
In France the emphasis fell on pro-natalism; after the crushing defeat in the Franco-Prussian War of 1870–1 and with the French population being outstripped by the German, health propaganda encouraged large families. A different tack was taken by supporters of contraception, including Marie Stopes (1880–1958), a dedicated eugenist (as such, she disapproved of her son marrying a woman who wore glasses). Birth control, claimed Stopes and her followers, would permit the spacing of babies and ensure optimal family size. Planned children would be healthier because adequate resources would be available for them. Quality counted more than quantity.
Sceptics complained that all these gestures were designed primarily to produce fitter cannon-fodder for the battlefield. Die they certainly did in unparalleled numbers between 1914 and 1918 as trench warfare in the First World War set new standards in horror and new peaks in victims. The staggering numbers of casualties, most of whom were not professional soldiers but volunteers and conscripts, and the duration of hostilities, forced governments to construct medical organizations far larger and more centralized than anything conceivable in peacetime. Thousands of buildings were requisitioned as hospitals and convalescent centres; staff were recruited; nursing became a major field of war work; doctors, used to practising in their parlour, discovered the advantages of working in a large, co-ordinated system in which civil servants, specialists, surgeons and women availed themselves of opportunities hitherto denied them.
Though such medical machines were dismantled after the armistice, outlooks were permanently changed. Among the victors, doctors who had specialized in wartime surgery, shellshock or heart medicine returned to civilian life with a passionate vision of a better medical future, partly thanks to contact with American specialists accustomed to well-equipped hospitals and enthusiastic for the new medical technology. Wartime medicine gave doctors a vision and a voice.
The Great War was a watershed, confirming that health was a national concern, but its impact on medicine varied from nation to nation. After Prime Minister Lloyd George’s ringing promise to create a land fit for heroes, and his insistence that ‘a C-3 population would not do for an A-1 empire’, the victorious British were troubled by the accusing contrast of postwar poverty, unemployment, hunger and sickness. Consonant with Lloyd George’s belief that ‘at no distant date, the state will acknowledge a full responsibility in the matter of provision for sickness, breakdown or unemployment,’ a Ministry of Health was established in 1919 and an inquiry set up, leading to the Dawson Report, written by the eminent London physician, Bertrand (later Lord) Dawson (1864–1945).
Insisting that ‘the best means for procuring health and curing disease should be available for every citizen by right and not by favour,’ Dawson recommended a state-organized rationalization of medical provision based on district hospitals and primary health centres. These would in effect be cottage hospitals, staffed by GPs, who would use them as their surgeries. They would provide operating and treatment rooms, laboratory and X-ray facilities and dentistry, and would also be used by the local authority for maternity and child welfare work and by the school medical service. The Dawson Report aroused much interest before financial crisis led to its being shelved and abandoned.
The conviction among senior civil servants that the state must do something led, if not to better medical provision for the people, at least to better funding for medical scientists. In the 1920s the Medical Research Council was led by a new breed of investigators, scornful of traditional clinicians but on good terms with government. Investment in research was the way, they claimed, to eliminate worthless practices and yield remedies for disease. With the state shouldering the cost of medical care for the labouring classes, it made sense to study prevalent diseases and develop ‘social medicine’, in line with Virchow’s insistence that ‘medicine is a social science, and politics nothing else but medicine on a large scale.’
Offering a broad vision of public health which transcended Chadwick’s ‘sanitary idea’, social medicine was pioneered between the wars by medical progressives, typically politically left-wing, impressed by the socialization of medicine in the USSR, and convinced that medical professionals knew best. Influential health policy experts like Sidney (1859–1947) and Beatrice Webb (1858–1943) in Britain, and Henry Sigerist (1891–1957) in the United States, praised the Soviet medical system and urged its emulation. In contrast to socialized medicine in Russia – which prized science, planning and expertise and attempted to create a ‘social medicine’ incorporating statistics, social science and prevention – the surgeries run by British general practitioners and the dingy hospital outpatient departments were old-fashioned, chaotic, wasteful and trifling with the nation’s health.
The leading spokesman for this group was John Ryle (1889–1950), a clinician at Guy’s Hospital, later Regius professor of physic in Cambridge and eventually the first professor of social medicine at Oxford. Traditional public health, Ryle reflected, had been concerned with issues such as drainage and water supply. The central object of social medicine, however, should be man and his relation to his environment, which embraced ‘the whole of the economic, nutritional, occupational, educational, and psychological opportunity or experience of the individual or the community’. Ryle and his followers were committed to a socialist vision which attributed bad health to social injustice and advocated care for all.
Humble general practitioners, too, chafed at the petty snobberies which continued to dog English medical care. ‘There were four distinct classes of patient – private, panel, club and parish – in a peck order as rigid as the social groups of the eighteenth century,’ commented Dr Kenneth Lane on his practice in rural Somerset around 1930, and the receptionist
never allowed anyone to forget which class they belonged to. The private patients had their medicines wrapped in strong white paper and sealed. They were addressed with respect. The panel, club, and parish patients had no wrapping for their medicines and had to provide the bottle or pay tuppence for it. Mean as this sounds it was almost universal practice. As a further distinction between panel and parish patients she would hand the latter their bottles of medicine at arm’s length with her head turned away as though she was afraid of catching something. At first this made me laugh then it began to irritate me.
Though inter-war governments did little, medicine continued in piecemeal but significant ways to interact with society. At last gaining the vote in many countries, women carried more weight in the political sphere, and women’s groups campaigned for maternity hospitals, better ante-natal and midwifery care, child care services and the like. Fearful over ‘degeneration’, central government and municipalities were fairly responsive, though in the UK, except for maternity benefits, women and children remained beyond the state insurance system and dependent on medical charities. Responding to public anxieties, governments also built up hospital-centred services for tuberculosis and maternity.
After the abolition of workhouses in 1929, former Poor Law infirmaries were absorbed into municipal government; local authorities assumed responsibility for the bulk of health services: not just drains and notifiable diseases, but clinics, health education, the majority of hospital beds and some special hospitals. Only the ancient, well-endowed charity hospitals retained their splendid isolation, wholly outside a growing local government health remit. State or charity, hospital services were now for all; paupers were no longer segregated and the rich no longer had their surgery at home.
In the inter-war years Mr and Mrs Average and their children were becoming the focus of public medicine and health policies. What developed varied from state to state. The USSR moved in the 1930s from a state insurance system to a salaried medical and hospitalized service which valued science and expertise. Germany continued to operate its Bismarckian state-regulated insurance scheme for workers, administered, as was its mirror in Britain, through friendly societies or employer schemes. Excluded from state benefits, some of the middle classes pre-paid for treatment through private or occupational insurance schemes. In France a state insurance system reimbursed patients rather than physicians, giving free choice of doctor and hospital. Public hospitals, however, were cash-starved and inferior, and the insured flocked to private hospitals, in some cases doctor-owned, which benefited from the system. The ethos of economic liberalism remained strong in France, stressing the freedom of both patients and doctors, and shying away from the ‘Germanic’ policy of compulsory state medical insurance. A social insurance law was finally enacted in France in 1930.
Everywhere medicine, psychiatry and the social sciences infiltrated everyday life. In England a milestone was the founding in 1920 of the Tavistock Square Clinic by Hugh Crichton-Miller (1877–1959). Chiming with the aims of the ‘mental hygiene’ movement, which viewed psychiatry’s agenda in terms of the mental health problems not just of the insane but of the man and woman in the street, the Tavistock approach became important in raising awareness of family psychodynamics and childhood problems. Its children’s department boosted the child guidance movement, which acquired institutional form in the Child Guidance Council (1927), through which emotional lives (buttoned up in the Victorian era) became objects of professional inquiry and expert direction.
The delivery of medical care developed differently in the USA, where the emphasis remained on the market not the state, and on private consumers rather than organized labour or citizens. Beleaguered in the nineteenth century by medical sects and quacks, regulars grew more confident. One of the consequences of the Flexner Report (1910) was the elimination of over half the existing medical schools; this reduced the quantity and improved the quality of medical graduates. Fewer doctors meant higher status and incomes. Economic prosperity from the ‘gilded era’ up to the ‘Crash’ (1929) brought a brisk demand for medical services. It became more common to visit a private doctor for a check-up, or for vaccines and routine ailments, rather as people were increasingly opting for elective and not just emergency surgery; this was the golden age for tonsillectomies. American physicians’ salaries began their uninterrupted climb, and the formerly weak American Medical Association (AMA) became a force in the land. Championing the causes of maternal and child health, health education, pure food and drug laws, and better vital statistics, the AMA’s basic pitch was that what the nation needed to promote all these desiderata was more medicine.
In the United States as well as Europe, health insurance became a major issue. In 1912 the short-lived Progressive Party embraced the concept of compulsory health insurance. The AMA showed interest, keeping its options open, but in the chauvinistic atmosphere during and after the First World War, when everything German or Russian was vilified, attitudes hardened and the association went on record in 1920 as opposing any plan of compulsory health insurance. Morris Fishbein (1889–1976), editor of the Journal of the American Medical Association, explained that it boiled down to ‘Americanism versus Sovietism for the American people’. ‘Compulsory Health Insurance’, declared one Brooklyn physician, ‘is an Un-American, Unsafe, Uneconomic, Unscientific, Unfair and Unscrupulous type of Legislation supported by . . . Misguided Clergymen and Hysterical Women.’
Growing more conservative in the 1920s, the AMA resisted the Sheppard-Towner Act, which provided federal subsidies for states to establish maternal and child health programmes; it also opposed the establishing of veterans’ hospitals in 1924. (Both were seen as taking the bread out of the mouth of the private physician.) As group hospitalization plans developed, the AMA at first expressed reservations and by 1930 was denouncing them as socialist. A few doctors demurred, the Medical League for Socialised Medicine of New York City strongly advocating compulsory health insurance under professional control.
President Franklin Roosevelt’s New Deal, designed to steer the nation out of the Depression, seemed to be leading America in the direction of a national health programme – many New Deal agencies were involved in health. From June 1933 the Federal Emergency Relief Administration authorized the use of its funds for medical care; the Civil Works Administration promoted rural sanitation and participated in schemes to control malaria and other diseases; and the Public Works Administration built hospitals and contributed to other public health projects. In 1935 the Social Security Act authorized the use of federal funds for crippled children, maternity and child care, and the promotion of state and local public health agencies.
The Depression and the popularity of President Roosevelt – himself a polio victim – forced the AMA to temper its views, although it constantly warned of the danger of the government encroaching upon the domain of medicine. The Association counter-attacked by citing the alleged failure of the British National Insurance system. During the Depression, when many could no longer afford to pay and the hospital sector plunged into crisis, charity hospitals began to introduce voluntary insurance schemes to cushion their users; commercial companies also moved into the hospital insurance market. In 1929 a group of school teachers in Dallas contracted with Baylor University Hospital to provide health benefits at a fixed rate. The idea of group hospitalization was picked up by the American Hospital Association in the early 1930s, leading to the Blue Cross (hospital) and Blue Shield (medical and surgical) pre-paid programmes. Initially suspicious, the AMA had the foresight to recognize that private schemes suited its interests better than compulsory federal ones.
Private health insurance became big and lucrative business. In the twenty years from 1940 to 1960 it experienced explosive growth, encouraged by the medical profession’s endorsement; and thereafter dominated the American private medicine market. Middle-class families (or often their employers) paid for primary and hospital care through insurance schemes, and physicians and hospitals competed with each other to attract their custom.
With its stress on specialization and surgery, America enjoyed a hospital boom, and hospitals in turn became the great power-base for the medical elite, the automated factories of the medical production-line. By 1900, the profession had everywhere gained effective control of such institutions and their leadership was reinforced by proclamations that advances in biomedical science were the pledge of progress. Hospital laboratories would generate medical advances, while hospital-based education would disseminate them through a hierarchy of practitioners and institutions. Funds for flagship hospitals and research and teaching facilities were prised out of Washington, state governments, and notably from philanthropic bodies such as the Rockefeller Foundation. Between the wars, the Foundation gave millions to university departments and hospitals in many countries to support the science-based medicine Flexner had envisaged.
The transformation of the hospital from a poorhouse to the nerve centre and headquarters of the new medicine had profound implications. It increased the complexity and costs of medical education. Attached to prestigious universities and hospitals, medical schools had to be subsidized, and support for medical education and training came indirectly from the public, from voluntary bodies which financed capital expenditures, and from payments for poor patients. One consequence was that, by mid century, hospitals were absorbing about two thirds of the resources spent on health care in the United States, and the percentage continued to rise. These hospitals became key centres of medical research, in the conviction that this would generate health improvements. Medical research and medical education grew inseparable, and bigger, costlier and more prestigious hospitals were their status symbols.
Medical politics took an altogether different turn in Germany. In the twenties the Weimar Republic (1918–33) made moves to put medicine on a social footing, with clinics for mothers and children and similar measures. But the desire to avenge defeat in the war fostered the rise of ideologies of national fitness, which in practice meant strengthening the strong and eliminating the weak. Building on a German tradition of racial politics, Hitler, who became chancellor in 1933, demonized Jews, gypsies and other groups as enemies of the Aryan master race in his Mein Kampf (1927) [My Struggle], and Nazi medicine in due course defined some non-Aryan races as subhuman. The anti-semitism which culminated in the Holocaust received strong ideological and practical backing from doctors and psychiatrists, natural and social scientists, organized in particular through the Nazi Physicians’ League.
Founded in 1908, the Archiv für Rassenhygiene [Archive of Race Hygiene], the main organ of the German eugenics movement, had long been urging action to stop what it deplored as the biological and psychological deterioration of the German race. The enthusiasm of German physicians in endorsing ideas of racial degeneracy and implementing race hygiene policies indicated personal opportunism, but it was also the expression of widely held biomedical and anthropological doctrines. Physicians and scientists participated eagerly in the administration of key elements of Nazi policies such as the sterilization of the genetically unfit. Presiding at genetic health courts to adjudicate cases, physicians ordered sterilization of nearly 400,000 mentally handicapped and ill persons, epileptics and alcoholics even before the outbreak of war in September 1939. Thereafter, ‘mercy deaths’, including ‘euthanasia by starvation’, became routine at mental hospitals. Between January 1940 and September 1942, 70,723 mental patients were gassed, chosen from lists of those whose ‘lives were not worth living’, drawn up by nine leading professors of psychiatry and thirty-nine top physicians. To make sure that the programmes were expertly conducted, doctors had the exclusive right to supervise the elimination process by selecting prisoners on arrival at Auschwitz and other extermination camps.
Some of the victims were selected so that German medical scientists could conduct programmes of human experimentation. Camp doctors used inmates to study the effects of mustard gas, gangrene, freezing, and typhus and other fatal diseases. Children were injected with petrol, frozen to death, drowned or simply slain for dissection purposes. The leading Auschwitz physician, Josef Mengele (1911–79), had doctorates in both medicine and anthropology, and won his spurs before the war as assistant to the distinguished professor, Otmar Freiherr von Verschuer (1896–1969), giving expert medical testimony against those accused of committing Rassenschande (racial disgrace, i.e., sexual relationships between Jews and Aryans).
Dedicated to human experimentation, as camp doctor at Auschwitz, Mengele selected over one hundred pairs of twins, injecting them with typhoid and tuberculosis bacteria; after their deaths he thoughtfully sent their organs to other scientists. In the name of science, he also investigated the serological reactions of different racial groups to infectious diseases in projects financed by the Deutsche Forschungsgemeinschaft [German Research Foundation] under the auspices of the country’s most eminent surgeon, Ferdinand Sauerbruch. After the war, Mengele fled to South America; he was never brought to trial. Twenty doctors were, however, tried at Nuremberg for crimes against humanity and four were hanged; but the vast majority of those involved in the atrocities, including Sauerbruch, were allowed to return to their university posts or medical practices.*
Doctors also played a key role in the pursuit of human experimentation in Japan. In 1936 the ‘Epidemic Prevention and Water Supply Unit’ was formed as a new Japanese army division (it was also known as Unit 731). Hundreds of doctors, scientists and technicians led by Dr Shiro Ishii were installed in the small town of Pingfan in northern Manchuria, then under Japanese occupation, to pioneer bacterial warfare research, producing enough lethal microbes – anthrax, dysentery, typhoid, cholera and, especially, bubonic plague – to wipe out the world several times over. Disease bombs were tested in raids on China. Dr Ishii also developed facilities for experimenting on human guineapigs or marutas (the word means ‘logs’). Investigating plague and other lethal diseases, he used some 3000 marutas to study infection patterns and to ascertain the quantity of lethal bacteria necessary to ensure epidemics. Other experimental victims were shot in ballistic tests, frozen to death to investigate frostbite, electrocuted, boiled alive, exposed to lethal radiation or vivisected. Like Dr Mengele, Dr Ishii experimented to determine the differential reactions of various races to disease.
Initially, his research had all been on Asians, but he broadened his programme to include American, British and Commonwealth prisoners-of-war.
At the end of the war, surviving Pingfan victims were gassed or poisoned, the facilities destroyed and the plague-ridden rats released. Dr Ishii and his team did a deal with the American authorities, trading their research to avoid prosecution as war criminals. The American government chose to keep these atrocities secret. The United States had been manufacturing anthrax and botulin bombs during the war, largely in the expectation that Germany would resort to biological weapons. Britain undertook anthrax tests on the Scottish island of Gruinard and at the Porton Down research station in Wiltshire. During and after the war, the American military subjected its troops to secret radiation tests as part of its atomic programme: in the climate of World War and Cold War, it was easy for medical scientists to persuade themselves that their involvement in such un-Hippocratic activities would contribute to medical advance, national survival and the benefit of mankind.
One of the reactions in the postwar years against such perversions has been an international ethical movement for medicine. Though the Nuremberg Code, drawn up after the trials, failed to define genocide as a crime,* it was intended to ensure medical research could never again be abused. The Code consisted of ten points giving ethical guidance. The first and crucial one read: ‘The voluntary consent of the subject is essential.’ The other nine principles governing medical research stated:
2 The experiment should be such as to yield fruitful results for the good of society, unprocurable by other methods or means of study, and not random or unnecessary in nature.
3 The experiments should be so designed as to be based on animal experimentation and . . . that the anticipated results will justify the performance of the experiment.
4 The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury.
5 No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur; except, perhaps, where the experimental physicians also serve as subjects.
6 The degree of risk taken should never exceed that determined by the humanitarian importance of the problem . . .
7 [Adequate facilities should be used, and precautions taken] to protect the experimental subject against even remote possibilities of injury, disability or death.
8 [E]xperiments should be conducted only by scientifically qualified persons. . .
9 [The] subject should be at liberty to bring the experiment to an end . . .
10 [The] experiment . . . must be . . . terminate[d] . . . [if] continuation is likely to result in injury, disability, or death to the experimental subject.
These principles were further refined in the Declaration of Helsinki on medical research in 1964, which defined the difference between therapeutic experiments (in which clinical research is combined with professional care) and non-therapeutic experiments (in which the experiments may be of no benefit to the subject concerned but may contribute to knowledge). These guidelines carried no sanctions, and subsequent scandals made it frighteningly clear that it was not only fascist powers who had been engaging in unethical research.
In the postwar years whistle-blowers such as H. K. Beecher (1904–76) in the United States and M. H. Pappworth (1910–94) in Britain were to the fore in exposing unethical experiments, often using the mentally ill or defective as human guineapigs, routinely performed in leading medical schools and published in prestigious journals. One of the more shocking was the Tuskegee (Alabama) experiment, begun in 1932 by the United States Public Health Service. This involved depriving hundreds of syphilitic blacks of proper medical treatment (while pretending that they were being so treated), in order to study the long-term degenerative effects of syphilis on the nervous system. The experiment continued into the 1960s and was not ended until 1972. It revealed nothing about syphilis, but much about racism. One hundred men died during the course of the experiment.
The Biological Weapons Convention of 1972 outlawed the development, production and stockpiling of biological and toxin weapons as well as their use. It set out no means of verification, however, and disasters are known to have occurred since then in Soviet plants manufacturing anthrax for biological warfare. The extent of Iraq’s stockpiles of biological weapons at the time of the Gulf War in 1991 remains unclear.
War is often good for medicine. It gives the medical profession ample opportunities to develop its skills and hone its practices. It can also create a postwar mood eager to beat swords into scalpels. The astonishing success of antibiotics used upon troops during the Second World War heightened expectations of wider public benefits. Only in Great Britain, however, was it followed by a dramatic reorganization of civilian medical services. The USA had emerged unscathed and increasingly suspicious of anything ‘un-American’, while continental Europe was in collapse and unable to implement far-reaching plans.
The blueprint for reform in the UK was the Beveridge Report on Social Insurance and Allied Services, the work of civil servant Sir William Beveridge (1879–1963). Published in 1942, it declared war on the five giants that threatened society: Want, Ignorance, Disease, Squalor and Idleness. To combat sickness, Beveridge proposed that a new health service be available to everyone according to need, free at the point of service, without payment or insurance contributions and irrespective of economic status. All means tests would be abolished.
Whether the National Health Service outlined in the Beveridge Report would have been implemented had the Conservatives won the general election of 1945 is doubtful; the Labour Party enjoyed a landslide and set about implementing it. A bill was introduced in April 1946; on 6 November it received the royal assent, and the appointed day for its inauguration was 5 July 1948.
The hospital services were in urgent need of reform. With war looming in 1939, the Ministry of Health had taken over the nation’s hospitals on an ad hoc basis, instituting a national Emergency Medical Service which gathered more than a thousand voluntary and over 1500 public hospitals into eleven administrative regions. The scheme coped well with the air raids and extensive civilian casualties. At the war’s end, it was recognized that most hospitals were financially too feeble to be returned to the voluntary sector, and that they had functioned more effectively during hostilities under government control and financial backing. Moreover, hospitals had begun to count on government payments, and had become used to cooperation within a state-planned scheme. No major private insurance sector was going to keep them afloat, as it did in the United States.
Aneurin Bevan, minister of health in the postwar Labour administration, nationalized municipal as well as charity hospitals. No friend of local government, he wanted hospitals, recognized as the flagships of medicine, under the control of central government. The nationalization of the hospital service divided the country into regions, each administered by a regional hospital board associated with a university and containing one or more medical schools. The teaching hospitals won for themselves (the price paid for their support) a privileged status. Each was to be given a measure of autonomy under its board of governors.
This reorganization was the most far-reaching administrative action concerning hospitals ever brought about in a western nation; in the process the government became responsible for 1143 voluntary hospitals with over 90,000 beds, together with 1545 municipal hospitals containing 390,000 beds. Thanks to nationalization, hospital doctors could look forward to better facilities and consultants were permitted to retain considerable independence, including the right to private practice within NHS hospitals.
For general practitioners, however, the proposed health service seemed an altogether more dubious prospect. Fears were expressed, as in 1911, about the imposition of a full-time salaried medical service. The BMA fomented hostility to the bill, and a questionnaire it conducted in February 1948 showed that 88 per cent of its members were against accepting service. Bevan, however, boasting that he had ‘won over the consultants by choking their mouths with gold’, denied that he had any intention of introducing a full-time salaried practitioner service, and guaranteed to GPs the continuation of private practice. Faced with such conciliation, opposition subsided, and on schedule, 5 July 1948, the National Health Service came into operation.
Supporters hoped that reorganization of general practice would follow, anticipating the creation of upwards of 2000 health centres. But progress in this direction proved snail-like; ten years after the Act came into force there was only a handful of health centres in the whole country. Even so, the NHS was enormously popular, bringing about a considerable levelling-up of services, though hopes that good treatment would lead to a need for less medicine and hence a reduction of expenditure were naive. Beveridge had calculated the annual cost of the service at £170 million; by 1951 it was £400 million, and by 1960 £726 million. The White Paper of 1944 predicted that it would be several years before the dental service cost £10 million; the cost in the first year was £28 million. Bevan complained about the ‘cascades of medicine pouring down British throats’, but resigned in 1951 when health service charges were introduced.
The system was efficient and fairly equitable. The NHS did not revolutionize medicine; indeed, it perpetuated the old division between hospital consultants and general practitioners, who chose to remain as small businesses under the state. GPs were widely regarded as less expert than hospital consultants, but their accessibility made them popular. In the 1960s general practice was renovated when, forty years after the Dawson Report, GPs finally began to band together in group practices large enough to employ nurses and other auxiliary services. By then the NHS seemed well established: hospitals, general practitioners and public health were part of a planned and unified service, based on regions and their medical schools. NHS medicine was powerful, popular, and by international standards exceptionally cheap.
Broadly comparable developments had occurred or were to follow in British-influenced countries. In New Zealand, government health care assistance had begun in late-Victorian times with the creation of a national hospital system. The first Hospitals and Charitable Institutions Act (1885) divided the country into twenty-eight hospital districts, each controlled by a board whose members were appointed annually by local authorities. The hospitals were to be financed by patient fees, by voluntary contributions and local rates. The introduction of hospital benefits under the Social Security Act 1938 relieved patients of the payment of fees.
Canada took the path of socialized medicine, though at a later date. Saskatchewan began its Medical Care Insurance and Hospital Services Plan in 1962, enabling residents to obtain insurance covering many medical services. This government-administered programme was funded by an annual tax and by federal funds. Shortly afterwards, British Columbia, Alberta and the other provinces adopted similar schemes. A central Medical Care Act (1967) co-ordinated the system. The medical profession initially resisted what seemed to be the encroachment of state medicine, but (as generally happened) fell into line. As health expenditure rose, the Canadian government launched prevention campaigns against traffic accidents, alcohol abuse and smoking, in the hope of curbing costs.
As western Europe recovered from World War II, and moved during the 1950s into an era of prosperity, various forms of state-supported medical systems took shape. Sweden established medical care and sickness benefit insurance in 1955; it was a compulsory scheme, with costs divided among employer, employee and the government. Doctors were not employed by the state, but the government regulated physician and hospital fees.
In postwar divided Germany, the West (the FRG) continued to use sick funds which reimbursed doctors, and France still relied on state welfare benefits through which patients were refunded for most of their medical outlay. Dependants were included in nationalized social security schemes. Private hospitals multiplied and attracted rapidly rising expenditure, while public hospitals (typically rundown buildings catering for long-stay patients) languished. As the French economy recovered, the shabbiness of the public sector became embarrassing and, to counter this, the Debré Law (1965) encouraged liaisons between public hospitals and medical schools, offering incentives to doctors to combine patient care with research and teaching. New installations were added, often housing research laboratories and pursuing science projects which in other nations were based in universities or other non-clinical institutions. The French state assumed powers to control hospital development, to secure better distribution of services and reduce duplication.
Climbing from the 1950s, West German health expenditure sky-rocketed in the 1970s, hospitals accounting for (as everywhere) the bulk of the budget. A law of 1972 led to state governments assuming responsibility for hospital building, and sick funds were obliged to pay the full daily costs of approved hospitals. Gleaming new hospitals drove up standards and caused costs to spiral, so that by the late 1970s Germany, like other nations, was looking for ways to peg expenditure.
Meanwhile the United States went its own way. From the 1930s those able to afford it took out private health insurance, increasingly through occupational schemes tax-deductible for employers and employees alike. Under a fee-for-service system which rewarded doctors for every procedure undertaken, physicians and hospitals competed to offer superior services: more check-ups, better tests, the latest procedures, a wider range of elective surgery, and so forth; Americans began to see their physicians more often, and to consult a greater diversity of specialists. Living longer, they were beguiled by the possibility that medicine would truly deliver the secret of a healthier and more extended life. In these circumstances, costs inevitably spiralled, on the supposition that everyone wanted, and many could afford, more extensive, more expensive benefits: the sky was the limit, nothing could be too good. While capital expenditures on hospitals were steep, they were often subsidized by federal funds; and rising health costs were masked by insurance and cushioned by affluence. In any event, spending more on health seemed like a good investment.
In the 1930s Franklin Roosevelt had toyed with the possibility of introducing some kind of national health insurance as part of the New Deal, but the postwar mood scotched that. In the Cold War’s anticommunist, anti-foreigner atmosphere, any socialized system smacked of Germanism and Stalinism. When Harry Truman mooted a national health programme in 1948, the AMA campaigned vigorously and effectively against it. Government money, insisted the medical apologists, should fund science not socialized medicine. ‘We are convinced’, maintained Curtis Bok (1897–1962), ‘that the only genuine medical insurance for this country lies in making the benefits of science available to all practitioners and to all patients’. In similar vein in the 1950s a Republican congressman maintained that ‘medical research is the best kind of health insurance.’

Complementing private insurance schemes like Blue Cross came the Health Maintenance Organizations (HMOs), originating with the Kaiser Foundation Health Plan organized in California in 1942 and the Health Insurance Plan of Greater New York dating from 1947. By 1960 each of these was providing complete medical care to over half a million subscribers, and by 1990 the Kaiser-Permanente programme, based in Oakland, California, was employing 2500 physicians and operating 58 clinics and 23 hospitals. Subscribers to these and other HMOs paid monthly dues entitling them to comprehensive medical care. Physicians received a salary plus a percentage of profits. The remuneration system was designed to keep the lid on costs, by curbing the multiplication of unnecessary tests and procedures, inessential hospitalization, surgery and other expensive and lucrative practices encouraged by traditional health insurance with its fee-for-service basis. The number of surgical operations and the amount of hospitalization deployed in HMOs were around a third less than in ordinary private practice. By 1990 HMOs were providing medical care for about eight million Americans.
Despite acclaim for private medicine and private medical insurance, the American government became committed to shouldering a growing proportion of health care. Federal government provided direct medical care to millions of individuals through the Armed Services and Veterans Administration. Some thirty million war veterans are currently eligible for inpatient and outpatient services at the Veterans Administration Hospitals and Clinics, as are approximately two million personnel in the armed services, to say nothing of their dependants. The Public Health Service, the Indian Health Service, and a wide range of other governmental agencies provide federally funded health services of one sort or another.
Awareness grew of the disparity between the increasingly lavish provision of health care for the affluent and the situation of the poor and the old. This injustice became a source of national embarrassment and a campaigning platform for the Democratic Party. With the election of John F. Kennedy as president in 1960 and the possibility of federal intervention, the AMA once again issued a call to arms and fought a rearguard struggle against rising public support for a programme to provide medical care for the aged. Capitalizing on a wave of idealism following Kennedy’s assassination in 1963, his successor Lyndon B. Johnson, offering a unifying and healing vision of a ‘great society’, was able to amend the social security laws. In 1965 Congress created Medicare, a health-and-care plan for old people, and set up Medicaid alongside it to assist state medical programmes for the poor; federal grants were made to state governments to cover the costs.
The Medicare programme, which became effective in July 1966, provided a federally financed insurance system for paying hospital, doctor, and other medical bills, covering all individuals eligible for social security benefits. Medicaid was to provide federal assistance to state medical programmes which might include a variety of services: family planning, nursing homes, screening and diagnostic programmes, laboratory and X-ray services, and so forth. Medicaid and Medicare – essentially government-subsidized medical insurance for social security recipients – proved inflationary because providers were reimbursed on the standard fee-for-service basis.
Health became one of the major growth industries in America, encompassing the pharmaceutical industry, manufacturers of sophisticated and costly diagnostic apparatus, laboratory instruments and therapeutic devices, quite aside from medical personnel, hospitals and their penumbra of corporate finance, insurers, lawyers, accountants and so forth. Expenditure has continued to rise at a quite disproportionate rate, as the accompanying table shows. The 1996 figure for the United States was touching 15 per cent. Many factors contribute to this. Private medical insurance is a lucrative business, and insurers benefit from boosting costs as high as the market can bear. Physicians’ incomes run at seven times the national average, and with the rise of malpractice litigation, medicine has become a profitable source of business for lawyers, accountants and other expensive professions parasitical upon it. Hospital trustees and administrators traditionally have a stake in making medical care lavish and munificent, so hospitals added costly units – kidney machines, scanners and coronary care centres – as prestige items or to gratify local pride, often duplicating similar facilities in neighbouring institutions. Managing ‘non-profit’ institutions, hospital boards have typically had little incentive to curb expenses, being able to meet their growing budgets by raising charges, staging appeals and securing public money. Vast inefficiency and duplication have come to characterize health care delivery.
No small factor in spiralling costs has been the ceaseless growth of specialties. Specialty practice is attractive to physicians: specialists generally earn more money and achieve greater professional and social recognition than GPs. Specialization has inevitably led to a growing population of practitioners and a proliferation of consultations.
In short, from the 1930s the United States has invested in more, more elaborate, and more expensive health care for the well-off. The number of medical schools jumped from 77 in 1945–6 to about 120 in 1990, the number of graduates doubled, and the uptake of physicians’ services increased at an even greater rate as medicine rose in the public estimation. Infectious diseases were, it seemed, being conquered, and doctors were innovation-oriented, as were ‘research-based’ pharmaceutical companies like Hoffmann-La Roche, Merck, Hoechst, Eli Lilly, Upjohn and others, producing a series of new and costly drugs. Requiring elaborate tests, the proliferation of paramedical staff and the provision of sophisticated drugs, new medical procedures dramatically increased medical labour and costs – to say nothing of outlays on services aimed at documenting and justifying clinical procedures, so-called ‘defensive medicine’ (taking an X-ray in case the obvious sprained ankle turned out to be a fracture).
Comparison of health care expenditure as share of Gross Domestic Product in OECD countries, 1970–92
Such developments – medicine seemingly expanding to consume all the funds available – were bound to draw growing criticism. Some denounced Medicare and Medicaid as a blank cheque, corrupting to consumers and providers alike. Others deplored the channelling of vast resources into a defective, high-tech, high-cost system geared to benefiting suppliers rather than sufferers. Scientific medicine (the fulfilment of Flexner’s dreams) came under attack, especially in the 1960s’ populist counter-culture backlash when all established institutions were fair game. Critics of vast, impersonal mental hospitals campaigned for ‘community care’. Feminists lambasted the evils of ‘patriarchal medicine’, as evidenced in the hospitalization of normal births, and called for the right to choose home confinement. Other consumer groups mobilized patients and challenged the profession’s monopoly. High-tech medicine came under fire as part of a wider critique of the shortcomings of science and technology. Disasters with new drugs, notably the thalidomide tragedy, were seen as proof of technical failure and professional dominance – the interests of medi-business taking precedence over the sick.
Radical criticism eroded confidence, caused questioning, and led many into the paths of alternative medicine. But it produced few structural reforms. What had the greatest effect during the 1980s and 1990s were new pressures towards financial stringency, in reaction to the soaring cost of high-tech medicine and the uncontrollable and insatiable demand it had excited. Since then the leading factor in medical policy-making has become the quest for cost restraint. The consequences of budgetary crisis have been most crudely evident in the former USSR and the eastern bloc, where political and economic transformation and restructuring – in some cases outright collapse – have led to a permanent health care crisis, with shortages of drugs and with medical staff going unpaid. But everywhere a new financial stringency is in the driving seat.
Medical policy had for so long been focused on conquering disease; from around 1980 the conquest of costs assumed prime importance. Slogans like ‘if it’s not hurting, it’s not working’, used by the Conservative administration in the UK in the 1990s, seemed to symbolize the fact that high taxes and high inflation had become, at least in the government’s eyes, more of a threat to national well-being than poor health.
In this cost-cutting atmosphere, professional autonomy in various countries came under simultaneous threat from the economic liberalism of the resurgent New Right, which questioned state-provided services, criticized professional monopoly and deified market mechanisms in the name of efficiency and competitiveness. Aware that (as an earlier health minister, Enoch Powell, had put it) ‘there is virtually no limit to the amount of health care an individual is capable of absorbing,’ and looking for ways to cap NHS spending, the Conservative administrations of the late 1980s came up with the idea of an ‘internal market’.
Larger hospitals were encouraged to become independent trusts and to compete with each other for patients and resources. GPs were encouraged to accept independent budgets in the expectation that this would make them more cost-conscious. Hospital consultants and GPs were given stakes in the provision of patient care at less cost; the ‘discipline of the market’ was supposed to achieve a more cost-effective and socially responsive service. These measures met stern opposition from large parts of the medical profession, particularly as, in trust hospitals, they tended to subordinate senior medical staff to lavishly paid professional managers brought in from industry and commerce.
Concluding that market mechanisms had increased administrative costs and inequalities, the new Labour government, elected in May 1997, pledged itself to abolish the internal market. What is conspicuous is that the UK, with the highest percentage of state-controlled medicine in the western world (in 1991, 89 per cent of health care expenditure came from the public sector, compared with 41 per cent in the US), also spends the smallest percentage of the GNP on health. This can be interpreted as proof, despite right-wing propaganda, of the efficiency of state medicine, or as evidence that one of the weaknesses of postwar British health policy has been a failure to invest adequately in health.
In the United States, the crisis over out-of-control health costs highlighted the plight of those excluded from the mainstream. As of the 1990s, over 35 million Americans had no medical insurance: almost one in six citizens under the age of sixty-five. Another 20 million had such inadequate insurance that a major illness would lead to bankruptcy. Indeed, almost 17 million Americans who are gainfully employed lack health insurance – 3 million more than in 1982, and each year approximately 200,000 are turned away from hospital emergency rooms for this reason. Half a million American mothers have no form of insurance when they give birth, and 11 million American children are not covered by any medical insurance. The failure of President Clinton’s health initiative in 1992–94 makes it unlikely that this question will be tackled in the near future.
Meanwhile, significant changes were occurring within American private medicine, as new managerial business outlooks were applied to hospitals. Humana, one of the largest hospital chains, had ninety-two hospitals and $1.4 billion in revenues by 1980. Its president said it wanted to provide as uniform and reliable a product as a McDonald’s hamburger coast to coast. Business attitudes were also extended to HMOs, which realized that their best prospects of high profitability lay not in the proliferation of expensive services but in cost control, cost-cutting, ‘down-sizing’, rationalization of services and mergers.
Corporate management has taken over HMOs and embarked upon programmes of buying up municipal and non-profit hospitals, closing down others, amalgamating facilities to end duplication, and capping target expenditures for tests, hospital stays and drugs. Staff numbers (including physicians) have been slashed, as has the range of medical choices available to subscribers. Traditional non-profit institutions have been brought into the ‘for-profit’ sector. To many doctors, it seemed that all rationales in health were being subordinated to the budgetary. A physician who is part of a family medicine unit in a small town in California commented on the medical changes consequent on the financial imperative:
At first we prided ourselves on keeping our patients healthy and out of the hospital. But the hospital administrators didn’t like this. They wanted to keep the hospital full and keep the patients in as long as possible, except for a certain number of indigent patients whom we tried to admit but the administration wanted to keep out. But now the hospital has signed up with a number of HMOs. So it’s in the hospital’s interests to keep patients out too. This is the crazy logic of health-care financing in the United States.
This new trend underlines how inflationary the physician-driven character of American medicine was during previous generations (the so-called ‘golden age’ from the 1920s to the 1970s). It is ironic that only when big business moved into medicine was there sufficient incentive to slash costs (and hence often wasteful, futile and unprincipled medicine) in the name of higher profits. It is not clear whether the cutting edge of profitability can produce better medicine in the twenty-first century – and for whom. It is equally unclear whether the increasingly fierce medico-political war being fought, between the traditional medico-industrial complex, the medical profession, the federal government, new managerial finance and the customer, has anything to do with the fulfilment of real health needs in the United States.
POLICING HEALTH
Its spokesmen during the last couple of centuries have liked to emphasize medicine’s autonomy and its benevolence as an enabling profession. Notably in the US, the profession has jealously rejected ‘encroachments’ by the state. In reality, however, medicine and the state have become ever more closely bonded, eating off each other’s plates.
The medical profession depends upon government money for institutions, research, education and salaries; and governments have followed and justified various policies on medical grounds. In most respects this growing (if unacknowledged and often disavowed) rapprochement between the state and the medical profession embodied benign logic: who could deny health to the people? Who would doubt that the encouragement of medicine was the best way to supply it? Indeed, the twentieth century brought countless ad hoc interventions to create services to reduce health risks and help the helpless. Mothers, babies, children, the elderly, and many other groups, have been objects of interventions benignly intended and often beneficial. But the state and the medical profession have often joined in alliances in which social policing and political goals have counted for more than the promotion of personal health. We shall now survey briefly one such increasingly salient area: narcotics control.
Opium was a commodity traditionally available on the free market. Its chemically produced form, morphia, was introduced in the 1820s, and the hypodermic syringe in the 1850s – ‘the greatest boon given to medicine since the discovery of chloroform’, it was declared in 1869. ‘Nothing did me any good,’ Florence Nightingale noted during one of her illnesses, ‘but a curious little new fangled operation of putting opium under the skin which relieved one for twenty-four hours.’ In 1898, the German company Bayer introduced Heroin (diacetylmorphine), the ‘heroic drug’ which, they said, had the ‘ability of morphine to relieve pain, yet is safer’.
For most of the nineteenth century there was little attempt by governments to regulate the sale of drugs – indeed opium became a bizarre test case in free trade. Cultivating opium poppies in Bengal, Britain’s East India Company exported the drug illegally to China, the trade amounting in 1839 to some 40,000 chests of opium. China grew anxious about the threats to health, morale and its silver reserves. The British pushed the drug, China resisted, and war resulted. After the First Opium War (1840–42), China lost Hong Kong; the Second Opium War (1857–60) meant further losses of Chinese sovereignty and an enforced open door for the opium trade.
In Europe, as the hypodermic syringe led to increased morphia use, doctors grew aware of dependency and withdrawal symptoms. In Die Morphiumsucht (1878) [The Morbid Craving for Morphia], Eduard Levinstein (1831–82) described morphia addiction. In England, the Society for the Study and Cure of Inebriety, founded in 1884 by Dr Norman Kerr (1834–99), pursued Levinstein’s ideas, investigating alcohol abuse, opium, chloral hydrate and cocaine. Morphine and opium usage were soaring, largely due to medical prescribing. In the US, the Ebert Prescription Survey of 1885, covering 15,700 prescriptions dispensed in nine Illinois pharmacies, showed the ingredients most frequently used in medicines were quinine and morphine.
Recognition grew that addiction was mainly iatrogenic. A late nineteenth-century American cartoon features a bartender gazing enviously at a druggist and grumbling, ‘The kind of drunkard I make is going out of fashion. I can’t begin to compete with this fellow,’ while contented customers walk out of the pharmacy carrying opium-based medicines labelled ‘Bracer’ and ‘Soothing Syrup’.
Important in the framing of the ‘drugs problem’ was the idea, developed by doctors and psychiatrists, of an ‘addict type’. Addiction became defined as a disease. However, after initial hopes, optimism about cures had waned by the 1920s. Concluding in the 1930s that addiction betrayed a psychopathic personality, Dr Lawrence Kolb of the US Public Health Service demanded strict regulation: the drug addict should be considered a potential criminal, since predisposition to addiction included a sociopathic tendency.
Restrictive legislation on over-the-counter medicines was first passed in Britain in 1860 in an attempt to control the new substances; prescription-only drugs came into being and the class was subsequently extended. Although patent medicine manufacturers lobbied feverishly, the Pure Food and Drug Act of 1906 ended the availability of narcotics over the counter in the USA and established the first legally enforceable Pharmacopoeia of the United States of America. Rather as the Eighteenth Amendment to the Constitution and the Volstead Act (1919) were to prohibit alcohol sale, the comprehensive Harrison Act (1914) criminalized drug addiction, making opiates and other narcotics legally available only on prescription for treating disease. The Supreme Court ruled that supplying addicts through prescriptions was illegal under the act – contraventions led to some 25,000 physicians being arraigned and 3000 of them serving prison terms. The Act made bad worse. In 1925 Robert A. Schless observed that ‘most drug addiction today is due directly to the Harrison Anti-Narcotic Act. . . . The Harrison Act made the drug peddler and the drug peddler makes drug addicts.’
Penalization bred panic, drugs were dubbed ‘mankind’s deadliest foe’, and in 1930 the Federal Bureau of Narcotics was formed, many of its officers being laid-off prohibition agents. ‘How many murders, suicides, robberies, criminal assaults, hold-ups, burglaries, and deeds of maniacal insanity [smoking marijuana] causes each year, especially among the young, can only be conjectured,’ thundered the Bureau’s chief commissioner, Harry J. Anslinger (1892–1975). The ‘cannabis problem’ was created by the passing of the Marijuana Tax Act 1937, which, by imposing huge taxes, bureaucratic restrictions and penalties, put an end to legal use and drove it underground.
The spirit of the 1930s carried over into the ‘war on drugs’ launched in 1971 by President Nixon, with the allocation of greater federal funds, powers and manpower. ‘America’s Public Enemy No. 1 is drug abuse,’ declared Nixon, who surely knew about public enemies. Soft and hard drugs were demonized together, the consequence being that in the 1980s some 300,000 Americans were being arrested annually on cannabis charges – in 1982 a Virginia man received a forty-year jail sentence for distribution of nine ounces of pot. The war on drugs was hypocrisy; the CIA had long supported Asian and South American drugs barons as props against communism.
As public policy changed, the medical profession changed its tune respecting the dangers of narcotics. Doctors had once thought rather well of cannabis, as of opium. Set up in 1893 by the British government, the Indian Hemp Drugs Commission concluded in a 3000-page report that moderate use had no appreciable medical, psychological or moral effect. Banning it might drive the Indian poor ‘to have recourse to alcohol or to stimulants or narcotics which may be more deleterious’ and prohibition or even ‘repressive measures of a stringent nature’ would create ‘the army of blackmail’. The commission’s findings might have been skewed by the fact that cannabis, like opium, was a key source of revenue to the Raj, but they also reflected sound medical opinion.
With the political drive against cannabis in America from the 1930s, medical thinking shifted. It was in 1934 that ‘drug addiction’ first appeared in the American Psychiatric Association’s diagnostic handbook and, four years after the 1937 Marijuana Act, cannabis disappeared from the US Pharmacopoeia. When a commission of the New York Academy of Medicine set up by Mayor LaGuardia concluded in 1944 that there was little evidence that marijuana harmed health, the American Psychiatric Association advised its members to disregard the commission’s findings, because they would do ‘great damage to the cause of law enforcement’. The Association knew which side its bread was buttered.
The American medical profession fell into line with the criminalization of narcotics, accepting funds made available for setting up detoxification programmes and developing anti-addiction drugs like methadone. They could easily convince themselves that they were helping addicts and society, while doing their careers a favour. The 1960s brought a shift in Britain too. With the government obliged to be seen to be responding to a growing drugs menace, and sensing that capital was to be made out of scapegoating, new restrictions were imposed. No longer could addicts routinely be supplied by GPs: they had now to be registered at special clinics for treatment. As in America, the upshot of tougher laws and policing was that the traffic went underground, achieved new allure, and became more deeply enmeshed with criminality and corruption.
With respect to narcotics – innumerable other questions from genetic engineering to euthanasia and spare-part surgery could be cited – medicine has bedded down with authority in the modern state. Some of the consequences for sick people of such developments are examined in the final chapter.
CONCLUSION
During the twentieth century medicine became integral to the social and political apparatus of industrialized societies. Its impact is not easy to evaluate. The enormous inequalities of health between rich and poor revealed by nineteenth-century statisticians have certainly not disappeared.
In England the Black Report, published as Inequalities in Health in 1980, showed that the affluent continued to live longer than the poor and were far healthier: in 1971, for example, the death rate for adult males in social class V (unskilled workers) was nearly twice that for adult males in social class I (professional workers). The political upshot of the document was revealing.
Commissioned by a Labour government, the report, written by Sir Douglas Black (b. 1913), was virtually suppressed by the succeeding Conservative government, presumably because it showed how the health of poorer parts of the community still lagged and called for massive public spending to rectify these inequalities:
Present social inequalities in health in a country with substantial resources like Britain are unacceptable, and deserve to be so declared by every section of public opinion . . . we have no doubt that greater equality of health must remain one of our foremost national objectives and that in the last two decades of the twentieth century a new attack upon the forces of inequality has regrettably become necessary.
Patrick Jenkin, then secretary of state for Social Services in Mrs Thatcher’s first administration, turned the report’s findings to Conservative advantage by contending that they demonstrated that the nature and roots of inequalities of health were such that they could not be eradicated by vast injections of public money. The history of the NHS, he said, bore this out: ‘We have been spending money in ever-increasing amounts on the NHS for thirty years and it has not actually had much effect on increasing people’s health.’ Certainly, it is far from clear that the way to end social differentials in health is the provision of more medicine.
* Nazi practices seemingly confirmed the fears of nineteenth-century antivivisectionists who had prophesied that vivisection experimentation was bound to proceed from animals to humans. The novelist Ouida wrote, ‘Claude Bernard, Schiff and many other physiologists have candidly said that human subjects are absolutely necessary to the perfecting of science: who can doubt that in a few years time, they will be openly and successfully demanded and conceded?’
* The International Convention on the Prevention and Punishment of the Crime of Genocide was adopted by the United Nations General Assembly in Paris on 9 December 1948; it was ratified by the USA in 1988.