For all its formative power, the dramas of youth are mostly personal. Within those years lie excess deaths for males as a result of their tendency to violence and risk-taking. The tendency is slight, but against the backdrop of little else happening it acquires importance. It is easy to come up with frightening figures showing that horse-riding or trampolining or knife fights are leading causes of death for the young. They are, but it’s principally because little else can touch them. Society’s job can be seen as that of a designer of a playground. The goal is to keep the interest and adventure while making sure the falls are safer. Society has been more than helpful. It has not eliminated our impulses but it has tamed them. Duels are no longer a requirement of masculine honour. Wars are still fought but they are fought less, which is to say that, for decades, the number of lives claimed by battle has been falling not only relative to the number of people alive, but, more impressively, in absolute numbers too.1 The effects of global media have been important in increasing the value of the lives of the lowest ranked, but so have the declines in infant and youthful mortality. To lose a young man in war was no less catastrophic for the parents of previous generations but the loss to society was not as noticeable. So many were lost each day anyway. The best thing we can do for the young is to let them grow old – a reminder that we do not just want them to avoid an early death, we want them to grow up. The point of youth is to waste it and the point of being able to do that is to learn from experience how to stop.
Events do occur in youth, however, with important effects on health. The obvious ones are those responsible for the few lives lost during these years. Trauma, in the absence of other problems, looms large, with violence and road traffic injury the most reducible parts. The latter is modifiable in the immediate future by spending on road safety and, to a far more exciting degree, by the automation of cars. Those changes have a chapter of their own later, and the aim of the chapter is to persuade you that it merits being one. Violence is a harder part of human ecology to deal with but it remains modifiable. Causes of death and ill health remain just that when the interventions required to deal with them are not medicines but changes in culture. Women are more likely to be victims of violence where they are less educated and marry younger.2 Interventions to improve the situation need to consist of cultural change, of better education, greater sexual equality and later marriage. That these interventions are not drugs or vaccines makes them no less relevant or powerful.
Vaccinations continue to offer new benefits: vaccinating against human papillomavirus (HPV), it has been conservatively estimated,3 could prevent half a million deaths each year. The estimate is conservative because it looks only at the reduction in deaths from cervical cancer, a disease now known to be caused almost entirely by sexual transmission of the virus. The benefits are undoubtedly greater and more varied.
The notion that a virus could cause cancer was incredible for most of the twentieth century: Peyton Rous suggested it in 19104 but was not believed. Only from the 1960s onwards did its existence seem real, and then only in non-human settings (the first cancer-causing virus identified affected only chickens; the second, rabbits). When Rous received his Nobel Prize in 1966 it was not for his clinical or his veterinary impact but because he had changed our ideas of what viruses could do. The main benefit of Rous’s discovery was understood to be a better comprehension of cancer: that cancer cells operated under the same physiological rules as others. It took until the 1970s for the role of a virus in a human cancer to be demonstrated. The demonstration was of HPV in the neck of the uterus* and the Nobel Prize for that came only in 2008. The true extent of HPV in human cancers is still uncertain but appears broad. Spread by close contact,† the virus causes benign tumours, warts, which can progress to malignant ones, cancers. It accounts for most cervical, anal and rectal cancers, as well as the majority of vaginal, vulval, penile, oral and pharyngeal ones.5 At the moment the strategy in most high-income countries is to vaccinate girls and men who have sex with men. The vaccine is timed to be given before sexual lives start but, because its effects are thought to wane, not too long before. A more rational (and cost-effective6) strategy would be to vaccinate everyone, thereby increasing herd immunity, removing any requirement to pry into sexual choices, and not incidentally giving wholly heterosexual men the reassurance that they were not killing their lovers. An Australian study showed that even a limited vaccination strategy was already having much greater effects than merely the protection against cervical cancer – the occurrence of genital warts was plunging and, in young women who had been vaccinated, had entirely disappeared.7 Improvements in the appearance of Australian genitals herald deeper improvements to come in later life and its vulnerability to cancer.
The eventual number of deaths that could be prevented by HPV vaccination is immense but unknown. In the meantime the trends in childhood cancer, which affects far fewer people than are struck down by HPV-associated diseases, are an established success. Survival rates have climbed steadily: whereas a century ago cancers of childhood were overwhelmingly likely to kill those who got them, survival rates are now of the order of 90 per cent. The way cancer care and cancer research for children have been organised has been partly responsible. Cancer in children attracts our attention because of its horror but also because it is rare. Its rarity means its treatment has tended to be concentrated in specialist centres. The proportion of children entering clinical trials has, as a result, always been extremely high,8 which is not true of adults. Progress has not come from miracle breakthroughs but from the accumulation of steady benefits, each minor or moderate, each adding to the sum (a recurring story in this book, but one that bears repeating). There is also something physiologically different about childhood cancers. As one editorial put it, ‘for reasons that are still obscure, many childhood cancers are very responsive to treatment, and cure has long been both a feasible objective for treatment and a powerful motivator of physicians’ behaviour’.9 Tumours in adults, the product of decades of mutations, are fantastically varied in the genetic changes that underwrite their escape from the body’s normal control of cell division. Those that emerge in childhood and youth often involve a smaller number of molecular pathways. The fewer of those there are, the more able we are to attack them.
For these reasons – reasons of physiology and genetics, and also of culture – improvements in the treatment of childhood cancers have proven particularly achievable. Here too, where progress has been most rapid, the story is of steady improvement, not great leaps forward. Half the trials report no benefits at all, while those that do show benefit add a few percentage points to the chance of a good outcome here, an extension of life by a fraction there.10 There have been so many trials that these small steps, combined, have taken us a very long way.
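To see how steps that small can nonetheless take us that far, here is a purely illustrative back-of-the-envelope sketch; every number in it is invented rather than drawn from the trial literature.

```python
# Purely illustrative arithmetic (all numbers invented): how repeated,
# individually unimpressive trial results can move survival a long way.
survival = 0.30          # assumed starting survival for an illustrative cancer
trials_run = 60          # assumed number of successive trials over the decades
gain_per_success = 0.02  # each successful trial adds ~2 percentage points

for trial in range(trials_run):
    if trial % 2 == 0:   # stand-in for 'about half the trials show any benefit'
        survival = min(survival + gain_per_success, 0.95)

print(f"Survival after {trials_run} trials: {survival:.0%}")
# Prints roughly 90%: no single step looks like a breakthrough, but the sum does.
```

The shape of the arithmetic, not the particular figures, is the point: halve the success rate or the gain per trial and the destination changes, but the route – accumulation – does not.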
That cultural traits apply so powerfully to what at first glance seems a purely technical problem – how do you stop the rapidly dividing cells of childhood cancer? – is a reminder that the problem of human health is never a strictly scientific one. Or more accurately a reminder that science itself cannot be separated from our wider culture. Neither the questions asked nor the answers we come up with nor the tools available to us are impartial, drawn simply from objective calculations.
Émile Durkheim, struggling to argue that there was such a thing as sociology, chose suicide as a major example. There was, he argued, such a thing as society: a level of organisation that gave rise to phenomena which could only properly be understood at that level. Suicide was a good example partly because it was such a personal act, a test case to see whether considering it at the level of a society would reveal qualities invisible at the level of individuals.
Suicide can be explained as a chain of levels, some explanations being more proximate to the event and others more distant. The distance does not alter an explanation’s truth, only the level of organisation it addresses and the sort of insight it gives. Suicide from paracetamol overdose can be explained in a very proximate manner as a result of the toxicity of the drug: the liver breaks it down into a toxic by-product that destroys glutathione, an antioxidant the liver uses to protect itself against certain cellular reactions; once the glutathione is exhausted, the by-product attacks the liver’s own cells. Take enough paracetamol and the liver dies.
For any individual suicide by paracetamol overdose, this explains the death. So do the reasons behind the person taking the overdose. Answers emerge from psychology that are just as true as, but different in nature from, answers from pharmacology and hepatic physiology. At a societal level, Durkheim argued, new insights appeared that could no more have been reliably predicted from psychology than the psychology could have been understood by peering at the liver. Emergent or irreducible properties needed to be approached differently from resultant properties, those that could be understood by extrapolating from simpler levels of understanding. ‘Human beings in society have no properties but those which are derived from, and may be resolved into, the laws of the nature of individual man,’ John Stuart Mill had declared.11 Sociology and Durkheim said otherwise. The phenomena of society could not be wholly resolved into the properties of individuals. Conglomeration was not accumulation. New properties emerged, new phenomena requiring new explanations.
The idea that we should be seeking to understand the underlying reasons for events, not just their proximate causes, lay behind the rebranding, in the UK, of hospital emergency departments. Previously, they were known as ‘accident and emergency’. When a person trips over their cat, it’s an accident. National figures for these injuries, however, show trends and links with other properties of society; they represent a collection of random events which, taken together, have non-random properties. In America the Centers for Disease Control and Prevention record pet-related injuries, separately for cats and for dogs and for owners tripping over the pets themselves or over a pet-related item.‡ Medicine met sociology, and the upshot was a set of recommendations for reducing such injuries.12,13 It may not seem a terrific step forward for human welfare to raise awareness of the potential to trip over your hound, or of the extent to which the risk can be reduced by taking it for obedience training, but its triviality makes it a fine example of how medicine works and how it has come about that we live longer and healthier lives than our ancestors. In the last year reported, 80,000 Americans sought emergency treatment after falls related to pets. Some were bumps and bruises people should never have sought attention for, but 10 per cent – 8,000 people – had injuries serious enough to require admission to hospital. In the pursuit of better human health, small differences matter. Whether they matter enough to warrant the fuss they involve is a different question, but the answer is, at least potentially, that they do.
In Britain, reductions in emergency department attendances were noted in particular weeks scattered through the early 2000s. Investigations revealed that each one followed, and appeared to be explained by, the release of the latest Harry Potter book.14 The authors of the study considered ‘a committee of safety conscious, talented writers who could produce high-quality books for the purpose of injury prevention’ but noted the potential problems of an ‘increase in childhood obesity, rickets, and loss of cardiovascular fitness’. The study was a deliberate joke but the sort of relationship it demonstrated between societal and apparently entirely individual events was real.
Durkheim identified cultural traits he thought contributed to some societies having higher suicide rates than others. He was trying to prove a point rather than alter practice, but practice followed all the same. Durkheim noted a rise in suicide when economies boomed and when they went bust. It was the disruption to normality, he thought, that was dangerous. A modern study of economic hardship in Greece found suicide to be linked not to the 2008 recession, but to the policies of economic austerity it inspired.15 Durkheim thought prosperity harmful (reducing the ties that bound societies together in harder times) but the Greek study concluded prosperity’s effects on suicide rates had been absent or actively helpful. What matters is not so much which of them is correct, as that Durkheim won his argument. We can disagree with the conclusions of his sociology but we do so by challenging them with alternative sociological evidence. Arguing that sociology can be done badly only strengthens the notion that it matters when done well.
How one could make society less alienating and more encouraging is a question medical science cannot answer – answers have to come from social psychology and politics, culture and religion – but it can suggest methods by which proposals can be tested rather than argued over. Randomised controlled trials have allowed us to test interventions that change the odds of events to a degree that unstructured observation could never safely distinguish from the background noise of human variety. These techniques are used in agriculture and medicine and veterinary science but rarely elsewhere. They have the transforming power to replace belief with knowledge. Many questions in politics, economics, criminology, education and other fields that are currently decided by ideology and argument would be better settled by experiment. The experiments are often possible; we just lack the culture of doing them, and we suffer by their absence. Even in medicine, where the majority of interventions are now based on reliable evidence in a way they were not a few decades ago,16 the use of trials has not extended far enough. Opinions are fiercest, it has been dryly observed, when the evidence to support or refute them is weakest.
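A toy simulation illustrates the point about background noise; the effect size and sample sizes below are invented purely for the sketch. An intervention that cuts a bad outcome from 10 per cent to 8 per cent is invisible in a few dozen cases but stands out clearly once enough people have been randomised to each arm.

```python
import random

random.seed(1)

def bad_outcomes(n, risk):
    """Count bad outcomes among n people, each facing the given risk."""
    return sum(random.random() < risk for _ in range(n))

# Invented effect size: the intervention cuts a 10 per cent risk to 8 per cent.
CONTROL_RISK, TREATED_RISK = 0.10, 0.08

# A few dozen people per arm: the real difference is swamped by chance.
print("50 per arm:", bad_outcomes(50, CONTROL_RISK), "vs", bad_outcomes(50, TREATED_RISK))

# Tens of thousands per arm: the same small difference becomes unmistakable.
print("20,000 per arm:", bad_outcomes(20_000, CONTROL_RISK), "vs", bad_outcomes(20_000, TREATED_RISK))
```

Randomisation does the rest of the work: because allocation is by chance, whatever difference remains between the arms is down either to the intervention or to luck, and the sample size tells us how much to allow for luck.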
Medicine has shown itself capable of proposing smaller scale interventions that make a substantial difference even to individual choices like suicide. One example, and a good one for illustrating where evidence can be good but should have been better, comes from changes made to the size of packets of paracetamol.
In both the UK and the USA, paracetamol poisoning is the top cause of acute liver failure. In 1998, UK law limited the amount of paracetamol people could buy at any one time. The aim was to make it harder, however slightly, to kill oneself. At the cost of making life slightly more irritating for those with pain or fevers, the hope was that some impulsive suicide attempts would be thwarted. We think the measure worked but we are not sure. A before-and-after study showed that rates of suicide by paracetamol overdose went down, but suicide rates in general were falling so it was impossible to be sure why that happened.17 We could have trialled the intervention properly. The changes could have been rolled out in a randomised fashion, with different geographical locations allocated the new limits or the old. Such an approach would have had benefits beyond the satisfaction of revealing the truth. The irritation of limiting sales of paracetamol to those with pain or a fever is minor but real. The minor irritations of ever-increasing bureaucracy and risk-averse policies accumulate, just like the benefits they aim to bring. Since some of the benefits are real, and add up, the whole business deserves to be taken seriously enough to remember that the same can be said for the harms. A cluster-randomised test of the effect of limiting paracetamol pack size (where different geographical locations are randomised to different strategies) would not only have determined its effects, but would also have set a powerful precedent for testing well-meaning interventions that may or may not work.
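For concreteness, here is a minimal sketch of what such a cluster-randomised rollout could look like; every region, rate and effect size in it is hypothetical.

```python
import random

random.seed(0)

# Hypothetical regions ('clusters'): whole areas, not individuals, are
# randomised to adopt the new pack-size limit or keep the old rules.
regions = [f"region_{i:02d}" for i in range(20)]
random.shuffle(regions)                          # the randomisation step
new_limit, old_rules = regions[:10], regions[10:]

def observed_rate(true_rate):
    """Invented data: one noisy yearly observation of overdose deaths per million."""
    return max(0.0, random.gauss(true_rate, 0.6))

# Assume, purely for illustration, the limit lowers the true rate from 4.0 to 3.4.
rates = {r: observed_rate(3.4) for r in new_limit}
rates.update({r: observed_rate(4.0) for r in old_rules})

mean_new = sum(rates[r] for r in new_limit) / len(new_limit)
mean_old = sum(rates[r] for r in old_rules) / len(old_rules)
print(f"new limit {mean_new:.2f} vs old rules {mean_old:.2f} deaths per million")
```

Because whole regions are allocated by chance, a difference between the two groups bigger than the region-to-region noise can be credited to the policy itself rather than to whatever else happened to be changing at the time – which is exactly what a before-and-after comparison cannot do.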
Childhood cancer deaths have dropped partly because of a consistent culture of subjecting every well-meaning and well-designed intervention to a proper test. Interventions for childhood cancer are, in the scheme of things, relatively predictable – certainly more so than societal interventions aimed at altering human behaviour. Despite that, when subjected to proper trials, interventions for childhood cancer are as likely to harm as to help. Taken overall, when the history of childhood cancer research was explored, each trial was shown to have only a fifty–fifty chance of working.18 Progress does not come from expert opinion which, even in a field so highly reducible to physiological theory, is no better than a coin toss. Progress comes from structured experiments that reliably test those theories and opinions and identify which are right. We are wrong to assume that it is better to do something than nothing, or that wisdom and compassion are best expressed by acting first and worrying later.
The history of medicine, and its increasingly explosive success over the past century, makes a powerful argument for the limited ability of wisdom when it comes to predicting outcomes. ‘It is a layman’s illusion that in science we caper from pinnacle to pinnacle of achievement,’ wrote the immunologist and Nobel laureate Peter Medawar,
and that we exercise a Method which preserves us from error. Indeed we do not; our way of going about things takes it for granted that we guess less often right than wrong, but at the same time ensures that we need not persist in error if we earnestly and honestly endeavour not to do so.19
The mistake medicine historically made was to use science to generate guesses about interventions and then put the guesses into practice. Only over the past ninety years has medicine developed ways (and noticed the need) to test whether the guesses are correct.
The amino acid methionine acts as an antidote to the toxic effects of paracetamol. It may be that combining it with paracetamol in tablets would eliminate all remaining deaths from deliberate or accidental overdose of the drug and at the same time abolish any need to limit pack size. We are unsure to what extent methionine might cause mild irritations, both physical ones, like headache and nausea, and more certain commercial ones, like the cost of building it in. Hence we are uncertain whether the combination is worth introducing.20 What holds us back is cultural as much as scientific – the partial penetration of scientific method into culture. Lacking the tradition of testing our hypotheses, we overlook the need to do so, and the answers go begging. People die today from paracetamol overdose whom we might have saved. If all paracetamol packets contained their own antidote to an overdose, no sales limits would be needed. Similar issues plague the question of whether putting barriers on bridges, to make it harder to jump, reduces suicide rates. Before-and-after studies confidently show that the barriers reduce suicides at those bridges, but leave it uncertain whether people simply find somewhere else.21,22 The difference matters. We are left unsure whether we have wasted resources and inappropriately intruded into society by erecting barriers that do not work, or whether we are causing harm and needless death by not intervening more.
Most people survive the years between their fifth birthday and the start of middle age three decades later. Habits and predispositions are learnt, adopted and cemented, with implications for future health. But the immediate hazards remain small. For those between the ages of five and nineteen, the most likely cause of death is a road traffic accident, followed by suicide or accidental poisoning. Between the ages of twenty and thirty-four those two causes of death remain pre-eminent but their order is reversed. Suicide will rise in importance in the years to come. It will do so because roads will get safer. ‘Drive safely’, said a memorable Ugandan road sign; ‘bloodless roads look good.’ Haphazard deaths by car are common. They will soon be rare.
*
In Jane Austen’s novels a cough and cold are enough to convulse her characters with fear. They gather in hushed tones by bedsides or make anxious enquiries from a distance. As a teenage boy I recognised how ridiculous this was. With the warm and generous heart of youth, I pitied the sentimental melodrama of the author.
I was not the first adolescent male to be too stupid to understand Austen, and it took years for me to realise the full extent of my error. To force teenage boys to study Mansfield Park, as my syllabus did, was to guarantee a bad result, and the moment I discovered Austen’s other books my doltish adolescence began to recede a little. Admiring delight in her work was quick; what took longer was a proper understanding of history. Austen was a tough and realistic woman. Writing to a brother in the navy, she wholeheartedly wished him warm luck, urging him to remain good-humoured even in the likely event he realised his death was upon him. The hazards of the navy were known to her. So were those of life on land. She lived in a time when mild infections were often fatal, and with little warning. Well on Monday, a fever on Tuesday, dead by the weekend. Her books, and the books of her period and of every period before antibiotics, are littered with the accidents of premature death. They were the stuff of novels because they were what shaped and determined lives. People gathered anxiously at bedsides or sent nervously for news because they feared the worst. They were right to do so.
* Cervix means neck in Latin and the underlying image is of the uterus as an upside down flask, its opening and neck at its base.
† The relative contributions of sex, kissing, oral sex and other close contact are not certain. Unless one is going to allow intimate behaviour to be more determined by hygiene than desire, they probably don’t matter, and they certainly don’t if you have a vaccine.
‡ And separately still if the pet was in no way responsible. ‘Patient jumped off a fence and fell onto a doghouse,’ was the example given. The medical journals publish this stuff partly because it matters and partly to brighten up the lives of those of us who have to read them. Aware of what else they print, they know we need it.