PROBLEMS AND PROSPECTS
In the early 1980s, campaigning in terms of the human right to health would have been unthinkable; the landscape has now changed dramatically. Just to take the academic literature, a new journal called Health and Human Rights was founded in 1994, first edited by Jonathan Mann and now by Paul Farmer, and is available online and open-access. Established and highly respected journals such as The Lancet, The British Medical Journal and The New England Journal of Medicine regularly publish articles promoting the human right to health. Recent articles have highlighted human right to health issues in, to name just a few, Burma, Gaza, Ecuador, Cambodia, Mexico, Argentina, the Netherlands, the USA, and Nepal. Groups suffering particular problems include women, children, asylum seekers, prisoners, gypsies, disabled people, indigenous groups, deaf people, lesbians and drug addicts. Alleged violators include not only governments but Big Pharma, donors, international financial institutions, the medical profession, medical educators, and private companies. And specific concerns have been raised about malaria, neglected tropical diseases, essential surgery, polio and compulsory vaccination, substance abuse, maternal mortality, and female genital mutilation, again just to name some of the more prominent examples.
Over the decades activists have made a number of charges that could hardly be more important: people are dying unnecessarily in their millions. The world has not stood still. In some of the highest-profile issues, the critics have won the argument. A familiar pattern of complaint and response has emerged. First, an issue is highlighted by activists, whether NGOs, concerned medical staff, or even governments themselves. Against official fatalism that little, if anything, can be done, small-scale initiatives provide a model for what might be possible. International institutions and donors wake up and sponsor research. Position papers are issued, voluntary codes sought, and by some means or other the world community begins to find ways of taking its responsibilities seriously. In most cases, very sadly, we are not yet at a stage where we can show that the burden of disease has been dramatically reduced, but there are many promising signs. Often, of course, action has turned out to be ineffective, inappropriate, or even counterproductive, and sometimes a matter of mitigating damage rather than making new progress. But all of this is progress of a sort. In the remainder of this chapter we will look at some of the most widely discussed case studies and draw conclusions that can help guide our thinking where new issues emerge.
THE WORLD BANK AND HEALTH SYSTEMS
The World Bank, alongside the International Monetary Fund (IMF), was set up as a result of the Bretton Woods Conference in 1944, as part of a program to finance the rebuilding of Europe after the Second World War. This is not the place to enter into a general discussion of World Bank policies or, as former World Bank economist Joseph Stiglitz argues, whether the real villain is the IMF,1 but it is the place to explore how World Bank policies affected health, and in particular the ability of developing countries to meet their duties to protect and fulfill the right to health of their citizens.
In the 1950s and 1960s, large infrastructure projects were the bank’s preferred development strategy, but in the 1970s, under the leadership of Robert McNamara, health became more prominent in its activities. In 1974 it sponsored a successful project, which turned out to last for thirty years, to eliminate river blindness in West Africa. The bank’s World Development Report of 1980 emphasized the importance of health and the ability of governments to tackle problems if given sufficient assistance. In 1993, in a major development, health became the main focus of the World Development Report, which that year was entitled Investing in Health. By 2004 it could be written that “the World Bank is now the world’s largest external funder of health, committing more than $1 billion annually in new lending to improve health, nutrition, and population in developing countries. Moreover, it is one of the world’s largest external funders of the fight against HIV/AIDS, with current commitments of more than $1.3 billion, 50 percent of that to sub-Saharan Africa.”2
The World Bank must be given credit for taking bold and imaginative steps that have reduced the global burden of disease. Yet critics point out that underneath this triumphalism is a darker side. In the mid-1980s, in the era of the governments of Ronald Reagan in the USA and Margaret Thatcher in the UK, the World Bank and the IMF became gripped by what is often called “market fundamentalism” or “the Washington Consensus.” This is represented by its critics as a “one-size-fits-all” approach to development in which macroeconomic stability, free markets, trade liberalization, and shrinking the public sector are seen as the route to economic growth and the end of poverty. The programs, now so reviled in the development movement, are known as “structural adjustment,”3 and are intended to bring a type of economic stability to a country, controlling inflation and making it more attractive to external investors.
Whatever sound theory there may have been behind the practice, and whatever good structural adjustment might have done in other respects, it is widely argued that structural adjustment has been a disaster for health, at least in some of the poorest countries of the world.4 At best, one might see the World Bank’s policies as a tragic misapplication of the ideas of the social determinants of health.5 Once it is noted that, for poor countries, there is an impressive correlation between wealth, as measured by GDP per capita, and health indicators, it may appear that the best way to achieve good health outcomes is to do whatever is necessary to drive growth. This was, however, doubly problematic and seems to have turned out to be a gross and fatal oversimplification.
First, the World Bank often required countries to reduce public spending, including spending on health, as part of the structural adjustment to drive economic growth. Whatever the long-term effects, the shorter-term consequences on health are unlikely to be encouraging. Reduction in public spending on health is often accompanied by charging user fees to the patient. Basic economics should reveal that any price may be too high for some people, and if they don’t have access to the money they cannot buy the service, even if they are in severe need. User fees quite obviously lead to lower take-up of services.6 Pressing the criticism in detail, an analysis of World Bank policies written in 2004 notes that according to the World Bank’s own figures, a health budget of US$13 per person per year was needed for a country to meet its core obligations related to health. The authors then point out that “In Mozambique, per capita health expenditure fell from US$3.50 in 1986 to US$0.68 in 1988—after the introduction of a structural adjustment program in 1987. . . . one can only conclude that SAPs resulted in retrogression in realizing the right to health in many developing nations.”7 Indeed, UNICEF estimated that structural adjustment had led to 500,000 child deaths in twelve months, although the World Bank’s own research, not surprisingly, questions the evidence.8
I mentioned a double flaw in the World Bank’s argument that cutting health services to advance economic growth would be beneficial to health. The first was the damage done by cutting health services. The second is that structural adjustment has not, in general, been successful in eliminating poverty. It is often claimed that the World Bank and the IMF have made no significant inroads on poverty. More accurately, growth in GDP per capita has often been achieved, but the distribution of wealth in many countries can be so unequal that poverty for the masses remains unchanged. Short-term pain has not generated the hoped-for long-term gain.
To see how structural adjustment works in more detail, consider the World Bank report Investing in Health, published in 1993. It sets out a three-pronged program for improving health. The first sounds broadly admirable, even though it is worded in a way that triggers alarm bells:
First, governments need to foster an economic environment that enables households to improve their own health. Growth policies (including, where necessary, economic adjustment policies) that ensure income gains for the poor are essential. So, too, is expanded investment in schooling, particularly for girls.9
The second element is a recommendation for governments to reassign spending from high-tech hospital-based services to much more cost-effective public health initiatives. Many welcomed this approach, but nevertheless critics were worried about the notion of cost-effectiveness applied to health. The danger, of course, is that on cost-effectiveness grounds people in low-resource environments might be deemed not cost-effective to treat, if health budgets are low and the cost of treating them is high.10 But the third element, so critics allege, spoils whatever promise there is in the first two, and must be quoted at length:
Third, governments need to promote greater diversity and competition in the financing and delivery of health services. Government financing of public health and essential clinical services would leave the coverage of remaining clinical services to private finance, usually mediated through insurance, or to social insurance. Government regulation can strengthen private insurance markets by improving incentives for wide coverage and for cost control. Even for publicly financed clinical services, governments can encourage competition and private sector involvement in service supply and can help improve the efficiency of the private sector by generating and disseminating key information. The combination of these measures will improve health outcomes and contain costs while enhancing consumer satisfaction.11
It is not hard to believe that many national health systems are complacent and inefficient, and where there is a large or dispersed population may have great difficulty providing anything like universal coverage of many health services. Mainstream economic theory would predict that opening up the health sector to competition from the private sector could have beneficial effects. But even if this is true for some countries, in others, the poorest, and where the burden of disease is highest, it simply seems beside the point. It is hard to believe that the main problem with health services in the world’s poorest countries is lack of competition in the health sector, leading to diminished “consumer satisfaction.” Who could believe that children are dying in Liberia and Angola because of the lack of diversity and competition in financing and delivery of health services? What evidence is there that privatizing health care leads to improved health outcomes or cost containment, never mind both together? The self-confidence with which these claims are asserted is staggering. The failure of such policies was not a surprise. And it is not surprising that critics saw a double meaning in the title Investing in Health: not only an assertion that the people’s health is an investment, but also an invitation to private business to find ways of profiting from health policy in the developing world.12
In the interest of balance we should note that structural adjustment programs were often introduced at times of severe financial crisis, making it hard to know whether they really did do as much harm as claimed. What we need to understand is not how things were before, but rather, how they would have been if some alternative program had been followed—and this is something we can never really know. In any case, the critics won the argument and structural adjustment ended in the mid-1990s. Have lessons been learned since? The World Bank’s program after 1999 allowed developing countries more “ownership” of their economic plans, yet according to critics continued to make it very difficult for countries to raise enough money to meet their core human rights obligations. Countries that could not afford to meet the minimum standard looked to overseas aid for help, but the World Bank was worried—perhaps rightly—that large-scale foreign aid would cause macroeconomic problems. Experience elsewhere suggests that large influxes of foreign currency ruin export industries, and so, as export is a main hope for future development, extensive financial aid can be self-defeating in the long run.13 Therefore, the World Bank continued to cap government health expenditure.14 If this theory is right, then there is a serious dilemma. Should a country seek longer-term economic growth, even if this means neglecting current health needs? Or should present urgent needs have priority, especially as economic prophecy is so uncertain? Perhaps more importantly, should it be the role of the World Bank to pressurise a country into making the decision one way rather than the other? Some critics make the provocative claim that not only does the World Bank violate the human right to health by putting macroeconomic factors before the right to health, but that donor countries to the World Bank have been equally culpable.15 Furthermore, the World Bank has been accused of allowing “leakage” of funds from the health sector on a huge scale, so that in many sub-Saharan African countries, money designated for health expenditure simply never reaches its destination, and the World Bank has done little to investigate or take steps to remedy the situation.16 Even the World Bank’s own internal evaluation admits, “The accountability of Bank and IFC-financed HNP [Health, Nutrition and Population] projects to ensure that results actually reach the poor has been weak.”17 The evaluation report goes on to suggest that a significant number of projects that were justified in the name of poverty reduction in fact had no internal mechanism to identify and target the poor, and so the benefits tended to go to people who were not regarded as poor. For example, a sanitation project in Nepal was intended to improve the health of the poor, but in reality the project only assisted communities that lived alongside main roads, whereas the poorest people tended to live in more remote areas.18
For decades the World Bank and the International Monetary Fund have been accused of doing harm to the health of the poor by preventing their governments from pursuing beneficial health policies. Due to concerted criticism the period of the greatest damage is over. But the jury is still out on the question of whether, in the area of health, the World Bank and the IMF have yet shown significant results for their efforts. Still, it does appear that their critics have forced a retreat from the most damaging policies, and the two institutions’ willingness to listen to criticism, and to engage in self-criticism, is encouraging.
TRIPS AND THE PRICE OF MEDICINES
Human right to health activists often pay a great deal of attention to the underlying determinants of health and illness, such as poverty and illiteracy. Yet once people have become ill, a cure is needed. The WHO maintains a list of essential medicines that should be available in all countries. Millions of people die through lack of access to such medicines, even though most of the medicines on the list are not at all costly for governments and are off-patent, and so available in generic form. Lack of access to medicine is often, then, a problem of organization and infrastructure, not a problem of cost. Nevertheless, in a significant number of cases cost is a major problem—and it is more of a problem than it was two decades ago.
Prior to the 1990s, many developing countries did not recognize patents or copyrights that had been granted in other countries. In India, Brazil, and other large developing countries with industrial capability, it was standard practice to develop generic versions of pharmaceuticals that would otherwise have been available only at an unaffordable price. However, the World Trade Organization, in effect the third of the international economic institutions envisaged at Bretton Woods and the last to play a major role on the world stage, has attempted to bring in a level of harmonization of intellectual property. The main instrument by which this has been done is known as TRIPS—Trade-Related Aspects of Intellectual Property Rights. (One might think that a more appropriate shortening of the name would have been TRAPS, but this would have given the game away.) TRIPS was negotiated in 1994. Membership of the WTO is seen as advantageous in gaining access to world markets, and TRIPS is binding on all members. The essence of TRIPS is that although there are various transitional exemptions, mostly expired now, the aim is to move to a world in which all countries respect international copyrights and patents. The agreement included a twenty-year patent period for new medicines, allowing the producer company a temporary monopoly during which it can set its own price without having to worry about competition. Countries cannot pick and choose which provisions to follow, and so while membership of the WTO may be beneficial for trade, it presents a great challenge to countries that simply do not have the resources to purchase novel medicines at full price; they will be unable to meet their obligation to fulfill the right to health for their citizens. Arguably, then, all signatories of TRIPS have failed to protect the human right to health by allowing drug companies to charge unaffordable prices for essential medicines. Although TRIPS does allow for “compulsory licensing” of patented medicines in cases of national emergency, in practice this provision is rarely used.19 However, Brazil has managed to make low-cost, generic HIV treatments available, and a complaint against Brazil brought by the USA to the WTO in 2001 was withdrawn,20 most likely because of the world outrage that would have followed had the USA succeeded in making HIV treatment unaffordable in Brazil, leading to the early deaths of tens of thousands.
Although Big Pharma is portrayed as the villain of this story it is important to understand why drug companies feel they need this period of patent protection. While manufacturing a drug may not be enormously expensive, the cost of research and development can be phenomenal. And it must be recognized that most research undertaken by pharmaceutical companies never issues in a commercial product. Hence the loss they make on other lines of research must be recouped from the very few successful products. If they were not permitted a temporary monopoly position they simply could not afford to undertake the research, and the supply of new medicines would dry up completely. The real dilemma, then, is how to allow access without destroying the industry.
To explore this further it is worth looking at the example of drug-resistant tuberculosis. Tuberculosis has been known since ancient times, and is one of a host of infectious diseases that blighted our ancestors. According to medical historian Roy Porter, “until the nineteenth century, towns were so insanitary that their populations never replaced themselves by reproduction, multiplying themselves only thanks to the influx of rural surpluses who were tragically infection-prone.”21 Tuberculosis may have been one of the main killers: in Paris hospitals at the beginning of the nineteenth century it was said to be the cause of 40 percent of deaths.22 Although public health measures, such as campaigns to end spitting in public places and the opening of sanatoria, were attempted in the latter part of the century, death rates remained high. But in the first half of the twentieth century a whole raft of public health measures, from the provision of clean water to the placing of lids on dustbins, created a much more hygienic urban atmosphere and TB infection rates plummeted.23 The discovery of antibiotics used in combination finally provided an effective means of treating TB.24 Further refinements have led to more successful attempts to cure TB and reduce the chance of resistant strains forming.
In one sense, then, TB is a great success story of medical science. Yet today around 1.5 million people a year die of TB. Why is that? Not all cases will respond to medication and, of course, many people in the developing world do not receive treatment. Furthermore, drug-resistant strains of TB are on the rise and now cause about 10 percent of these deaths.
The possibility that patients could develop resistance to antibiotics was a problem from the start, which is why the antibiotics were used in combination. As early as 1958 it was known that poor prescription practices and poor adherence to treatment protocols could produce drug-resistant TB in individual patients and could, therefore, lead to resistant strains emerging in the general population.25 The warning, though, was not enough, and the feared outcome of careless practices has been realized. Drug-resistant tuberculosis, we already noted, is killing about 150,000 people a year. These conditions can be traced back to several decades of poor use of antibiotics: disease induced through medical means. And susceptibility to TB in all its forms is compounded by the HIV/AIDS epidemic.
To improve adherence to therapy, “DOTS”—directly observed treatment, short-course—has been developed, in which a “treatment partner” watches and makes a record of the patient taking their medication. Yet many patients have what is known as multi-drug-resistant tuberculosis (MDRTB), which is resistant to at least two of the normal first-line drugs. This condition is now widespread and often affects people who have previously been treated for TB, indicating either that they were given the wrong initial diagnosis or that their treatment caused the development of a new resistant strain. Paul Farmer, for example, reports on something close to an epidemic ravaging Russian prisons in the 1990s. He notes that patients likely to have MDRTB were nevertheless being treated with standard first-line therapies, which would be not only ineffective but likely to encourage even more resistant strains to emerge. Add to this the hugely overcrowded conditions in Russian prisons and there is a perfect breeding ground for infection.
First-line therapy for TB is relatively cheap and often effective. Second-line therapy is much more problematic. It can cost up to one hundred times as much, perhaps tens of thousands of dollars per person, and takes much longer to deliver results as well as being more toxic to the patient. Nevertheless, Paul Farmer, with the group Partners in Health, had shown in 1992 in Peru that it was possible to use second-line therapy to treat cases that previously were thought to be untreatable.26 He had managed to find donors to pay for treatment for a small group, just fifty patients, with excellent results, and was ready to scale up treatment. But in “low-resource settings,” such as Russian prisons, much of this treatment was deemed unaffordable or not cost-effective and so patients were simply left to die.
Cost-effectiveness is the bugbear of every health system. No health system can offer every possible therapy, even when theoretically it is available. But, Paul Farmer asks, why are these treatments not cost-effective? Why are they so expensive? And why is there no money? In the case of Russian prisoners, Farmer points out that the drugs are expensive because of the decisions of drug companies, and that Russia has little money available partly as a result of the dismantling of its health system under the orders of the World Bank and partly because of the redistribution of wealth, with huge amounts of cash in the hands of its elite flowing out of Russia to New York, Switzerland, and the Caribbean. These are not natural facts but the result of human decisions: decisions that lead to the neglect—violation—of the human right to health.27
We have already reviewed some of the issues connected with the World Bank. Let us instead consider how the world might have responded before TRIPS. Quite possibly a pharmaceutical company in, say, India would have obtained samples of the new drugs, and devised ways of producing generic versions. These could then have been made available throughout the developing world to the great benefit of huge numbers of people. The TRIPS agreement, it seems, makes such an approach highly problematic for any drug under patent, although the later Doha Declaration and some other modifications do provide some exceptions in case of epidemics.28 (We should note, however, that Russia is not bound by TRIPS as it is not a member of the WTO, but TRIPS has made generic versions of patented drugs less generally available.)
Some people have argued that the current situation is so problematic that we need an entirely new approach to international patents. Philosopher Thomas Pogge, for example, has put forward a highly ingenious proposal for a Health Impact Fund, under which pharmaceutical companies can opt for a different type of patent arrangement. In this approach, companies would receive payment not only for their drugs but also for the good they do: their “health impact.” Accordingly, drug companies would bring down prices to maximize distribution and also, perhaps, encourage licensed generic production and research into neglected diseases, where there is the most good still to be done.29 However, moving to a new international patent system would be a hugely complex task, and it is not entirely clear how this new approach would be funded. Pogge accepts that it would be highly expensive, but looks to wealthy countries to fund it as part of their human rights obligations. Unfortunately, whether funds would be forthcoming on a sufficient scale in any sustained fashion is highly uncertain.
In practice a different, dual strategy has been adopted: price differentiation and direct support from donors. Price differentiation is a matter of charging different prices for the same good. This is well known as a commercial strategy. Theatres will often sell seats to students on the day at a discount: it is better to raise some revenue than to have empty seats. Equally a drug company may feel that it is better to sell a drug at a lower price in a poor country than to charge a high price but make very few sales. They might even be prepared to sell at cost price, as part of a “corporate social responsibility” program, and for the goodwill that will accrue. But there is, of course, a huge commercial danger. Drugs bought in one market can be moved to another, and hence a black market in cheap drugs may appear elsewhere. If this were to happen, the advantages of the patent system would be lost. Hence, if price differentiation is to be employed, it needs to be done in a highly controlled and regulated fashion.
Indeed, this is what we are starting to see. Aware of the dangers, drug companies typically do not allow low-price drugs on to the open market. Rather they work with institutions, NGOs and donors such as the World Health Organization or Médecins Sans Frontières to bring low-price drugs to target groups. Multi-drug-resistant TB turns out to be an excellent example of this strategy. The World Health Organization has declared that treating MDRTB, and indeed the even more serious XDRTB (extensively drug-resistant TB), is cost-effective.30 Funding has been made available through the Global Fund to Fight AIDS, Tuberculosis and Malaria, and drug companies have provided medicines at lower prices through a pathway set up by the WHO and MSF called the Green Light Committee Initiative, which started in 2000. Although the amount flowing so far is nowhere near enough to offer everyone treatment, it is a major move in overcoming the high price of drugs without destabilizing the commercial market. Related projects include an internationally funded initiative to ensure that all countries have the laboratory facilities to diagnose different forms of TB. Current targets are for 80 percent of all drug-resistant cases to be treated by 2015.31 This is optimistic, and perhaps highly unlikely to be met, but it is a powerful indication that human rights campaigns, accompanied by civil society initiatives showing what can be achieved, can stir international organizations into action. Governments can be urged, pushed and helped into meeting their obligations to protect and fulfill the human right to health. Similar initiatives are being attempted with malaria funding, for example.32
In fact, drug companies donating their products on a significant scale is nothing new. Even in 2002, the Nuffield Council on Bioethics could give examples of donations of therapies for river blindness, elephantiasis, trachoma, and malaria, as well as contraceptives and condoms.33 What is new is the scale of funds now available from donors and multinational organizations to spread the financial cost.
In the World Health Report for 2006, the WHO calculated that there was a world shortfall of 2.4 million doctors, nurses, and midwives, and 4.3 million health workers in total, with much of sub-Saharan Africa in deepest crisis.34 Many smaller African countries do not have their own medical schools to train doctors and so must rely on attracting doctors trained elsewhere.35
The shortage of health workers is bad enough in its own right. It has been argued that it compromises the quality of health care in other ways:
The economies of most developing countries cannot sustain adequate medical staffs to meet the demand for health services. The few medical staff who can be retained are so valuable that the system cannot conceive of disciplining them for substandard practices. In many cases, qualified, dedicated medical professionals are frustrated by long hours of work for meager salaries. Patients are too scared to make any noise, lest they cut themselves off from the opportunity for medical care in the future. To the extent that resources remain this limited in our developing countries, it will be extremely difficult to achieve health and human rights for all.36
What is the global community doing to help the developing world make good its failure to fulfill the right to health of so many of its citizens? Although efforts have been made to increase the numbers of people in the developing world receiving a medical education, broadly over the last couple of decades the wealthiest countries have been making matters worse through the international recruitment of health workers. Predicting a chronic shortage in the supply of future health workers, countries of the developed world undertook active overseas recruitment drives. Part of the shortage of nurses in the developed world stemmed from the expansion of other opportunities for women in the labor market, which made nursing a much less obvious career choice than it had been in the past. At the same time several developed countries were greatly increasing their investment in health, in the face of an aging population and a rise in chronic disease. To deal with the perceived lack of doctors and nurses they looked abroad. For example, a report by the UK King’s Fund in 2004 states that “In 2002 nearly half of the 10,000 new full registrants on the General Medical Council (GMC) register [i.e. doctors] were from abroad; in 2003 this had risen to more than two-thirds of 15,000 registrants. Most of the growth has been in doctors from non-European Economic Area countries.”37
The situation elsewhere is similar. For decades about 25 percent of doctors practicing in the USA received their training elsewhere. This now amounts to close to 200,000 doctors educated outside the USA. Around 5,000 were trained in sub-Saharan Africa, predominantly, but not exclusively, in Ghana, Nigeria and South Africa. In 2002 there were forty-seven Liberian-trained doctors working in the US, and just seventy-two working in Liberia.38 A parallel, though less dramatic, story can be told for nursing staff. In Canada, New Zealand, and Australia the picture is the same. In fact, there are patterns of chains of migration: the USA recruits from Canada, Canada from South Africa, and South Africa from the rest of Africa.39 Ultimately the shortage will be felt by the poorest. The WHO lists ten countries that lose more than 50 percent of their trained doctors, including Haiti, Sierra Leone, Angola, and Mozambique.40
In most countries, especially in the developing world, doctors are trained at public expense. If a doctor from Ghana is recruited to the USA, not only does Ghana lose its doctor, it loses the money paid for the training. Now it may be that the doctor is likely to send a portion of earnings back home (known in the development business as remittances). But this is scant compensation. Not only is the developed world reducing the ability of countries in the developing world to meet their duty to fulfill the right to health, it is also extracting a massive hidden financial subsidy from them. Furthermore, the emigration of skilled professionals is problematic from the point of view of civil society and democracy; doctors and nurses are respected and influential members of their societies. They are intelligent people and often have a sense of civic responsibility (which partially explains their career choice). They are a force for stability, but also for beneficial reform in their own countries. However much money they send back home, and even if health clinics are staffed by Western volunteers, nothing can make up for their loss.41
What can be done? Recognition of the problem is the first step. In some circles it has been known for many years, but in the last decade or so acknowledgment has widened and deepened. Complaints from Africa have, at the minimum, made the developed world uncomfortable enough to want at least to be seen to be doing something. One relatively early response was a code of practice adopted in 2003 by the ministers of health of all the fifty or so member states of the Commonwealth, which includes the UK, Australia, Canada, New Zealand, South Africa, Nigeria, Uganda, India, Pakistan, Bangladesh, and many other African, Caribbean, Asian, and South Pacific states. This is the residue of the British Empire: predominantly English-speaking countries with similar education systems and a practice of permitting immigration, and so historically a great source of migrant labor. The Commonwealth Code of Practice recognizes the individual and social benefits of a mobile labor force, not only in terms of individual ambition but also in the return of staff who have learned new skills abroad. Hence it does not restrict rights of individuals to move. However it encourages governments to agree to an ethical framework which provides benefits to both sides. It appears that the UK has taken this approach seriously. The NHS applied these ideas beyond the Commonwealth and adopted the policy of recruiting nurses only from countries that signed agreements and considered themselves to have a surplus of trained staff. Such agreements were signed with Spain, some countries in Eastern Europe, and the Philippines, which has a deliberate policy of training a surplus of nurses. Accordingly, recruitment of nurses from South Africa fell from 2,114 in 2002 to thirty-nine in 2007.42 It has to be noted, though, that like any system it has loopholes. The relatively small private sector in the UK is not bound by these agreements and still recruits from the global market. Once medical staff are present in the UK, it is easier for them to switch into the National Health Service.
Still, substantial progress has been made, and at the same time the UK declared that it would help train more medical staff in the developing world and made it easier for British health workers to spend time abroad. Perhaps most importantly of all, it decided to increase the number of medical students trained in the UK by opening new medical schools, thereby reducing the “pull factor”: the need actively to recruit. Some have criticized the UK for doing this, for there was a period when there appeared to be a huge oversupply of doctors. However, changes to the visa system seem to have allowed matters to settle down and the UK has no need actively to recruit overseas-trained doctors. With a few exceptions—Norway has been cited in this regard—other developed countries have responded more slowly, and so this is still a work in progress.43 Nevertheless, the World Health Assembly has passed a number of similar resolutions starting in 2004, progressing through the World Health Report in 2006, and culminating, in 2010, in a Global Code of Practice.44 Like the Commonwealth Code it is a voluntary code, and so can be ignored without sanction. But it is a recognition that something is stirring in this area.
Country-to-country migration is not the only problem of the brain drain. All countries struggle with the fact that on the whole doctors would rather live in cities than in rural locations, and, within cities, in wealthier parts of town. This was identified many years ago by British doctor Julian Tudor Hart as the “inverse care law”: those who are in the least medical need have the greatest access to doctors. It is a simple consequence of freedom of choice for medical staff, and can be overcome only by compulsion, persuasion, or financial incentives. All of these have been tried with varying degrees of success.
Much more important, though, is the issue discussed in the last chapter: the effect of vertical programs on health systems. When a new, well-funded program opens in a country it needs trained staff, and it will take them away from their current work. Health workers, like anyone else, will respond to the incentive of higher pay, but perhaps even more important for them is the opportunity to work in a well-equipped environment. The consequence, of course, is that the rest of the health system loses its best people, causing further decline.
These difficulties have been identified and publicized widely only in the last few years, and they have now come to the attention of the World Health Organization.45 Since the 1960s it has been known that vertical programs will be relatively unsuccessful in the longer term unless followed up with a more general strengthening of the health care system.46 The complaint now is not simply about long-term ineffectiveness, but about the long- and short-term damage of attracting staff to externally funded programs. The WHO itself points to examples:
In Ethiopia, contract staff hired to help implement programmes were paid three times more than regular government employees, while in Malawi, a hospital saw 88 nurses leave for better paid nongovernmental organization (NGO) programmes in an 18-month period.47
Inevitably recruitment of health workers to vertical programs will lead to a degradation in routine care.
Yet, once again, the picture is mixed. The WHO claims there are examples where vertical programs have been harmonized with local priorities to strengthen health systems.48 The first step, as always, is to be aware of the damage that may be done as an unintended consequence of good intentions, and wherever possible to guard against it.
It is also said that international organizations such as the WHO are creating problems. Some of the most experienced and influential doctors from developing countries are appointed to high-level posts in Geneva, where they sit at a desk writing reports and making recommendations rather than attending directly to the health needs of their compatriots. Whether this is a fair criticism is not so clear. There is often a tendency to denigrate anything except “front-line care,” but in fact we have seen that much of the global burden of disease is a failure of policy, infrastructure, and organization. There are cases where good management can do much more good than good health care. And bad management can do very much more harm than bad health care.
In summary, for a number of reasons health systems in the developing world have become weakened by a chronic shortage of health workers, and this has been exacerbated by external and internal brain drains. These undermine a country’s ability to fulfill its right to health duties, and, rather than helping such countries to fill the gap, wealthy countries have been making things worse. There are some promising attempts to stop the flow and repair health systems, but a great deal remains to be done.
A famous claim from the 1990s was that there was a 10/90 gap in research funding for health: that is, less than 10 percent of research funding was devoted to the conditions afflicting low- and middle-income countries, where 90 percent of the world’s burden of preventable disease is found.49 This evocative statistic was one of a number of factors that led to a scaling up of research into relatively neglected conditions.
Research, initially, is lab-based, using test tube and then animal studies. But there comes a point where potential new therapies must be tested in human populations, first on a small scale for safety, and then on a larger scale for efficacy, through various phases. For several reasons these drug trials are now often conducted in developing countries. First, if a new therapy to address the 10/90 gap is discovered, it must be tested in the setting where the condition occurs. Second, if a treatment is proposed that is known to be effective elsewhere, it must be retested in the new environment, especially in populations with different genetic profiles.50 Finally, in some tension with the previous reasoning, therapies designed for the developed world are also tested in developing world settings, both for financial reasons and because regulations are often less restrictive. In addition, there are what can be thought of as pure research projects, in which no drugs are tested.
It was, indeed, a notorious example of a pure research project that ushered in the new age of research ethics: the Tuskegee Syphilis Study in Alabama, conducted by the U.S. Public Health Service from 1932 to 1972. About 600 African American men, mostly sharecroppers, 400 of whom had syphilis, were followed so that the researchers could find out more about the natural course of the disease. When a cheap cure became available in 1947—penicillin—it was not offered to the subjects as it would have ruined the study. The experiment ended in 1972 when the press uncovered the story and it became front-page news.51 Chillingly, it has recently been discovered that the US Public Health Service undertook a similarly unethical project in Guatemala in the 1940s, where 700 people were deliberately infected with syphilis and other sexually transmitted diseases to see whether penicillin, used immediately after sex, could prevent transmission. The research subjects were not told the nature of the experiment or asked to consent, and the results were never published.52
Of course, the Tuskegee experiment was not the first known example of deeply unethical research in the twentieth century.53 The Nazi doctors, tried at Nuremberg, experimented on concentration camp inmates at Dachau and elsewhere, by, for example, subjecting people to extreme cold temperatures, observing the effects of hypothermia, and trying out different methods of rewarming. The purpose of the research, in which many subjects died, was to be able to provide the best response for German pilots shot down over the North Sea.54 These experiments led, ultimately, to the World Medical Association’s Helsinki Declaration of 1964, amended several times since, regulating scientific research on human subjects. The declaration puts the idea of informed consent at center stage.55 Yet the Tuskegee experiment carried on even past the Helsinki Declaration.
It is, of course, one thing to have a protocol to govern the ethics of experiments. It is another for scientists to be fully attuned to all the ethical issues that can arise in relation to research, and sensitivities can be much higher when the research takes place in the developing world. The Nuffield Council on Bioethics set out four issues where ethical concerns—and potential human rights abuses—arise in connection with medical research in developing countries.56 Although distinct, we will see, in fact, that they are all related. First, the gold standard of ethical research is informed consent. This can be fraught with difficulties. Some people entering the trial may not really absorb the fact that a trial has two “arms”: one that will receive the treatment and the other, the control, which will receive something else, either a placebo or an existing treatment. If assigned to the control arm, they may feel they have been tricked into the trial. There are also questions about who has the right to consent. In Western trials only the individual subject has the authority to consent, but in some traditional societies women are expected to do what their husbands or elders instruct. Still, there may be imaginative ways around this problem.
A second problem concerns the review of research ethics in the country concerned. In the developed world, research on human subjects must first undergo rigorous ethical investigation. The developing world has had to catch up, and a concern can be that ethics committees do not always have the political might to question proposals from powerful interests. However, funders are increasingly insisting on local ethical review, and so this seems to be a matter that over time is being addressed.
The third and fourth problems are much more difficult. These are the problems of “what happens after the trial?” and “standard of care,” the latter of which came to international attention in 1997 as a result of publicity given to a series of HIV trials in several developing countries. The purpose was to explore transmission of HIV from mothers to infants. By this time it was understood that a dose of Zidovudine (AZT) administered to a mother could reduce the likelihood of transmission. However, there was a question as to what the optimum dose should be, and a hypothesis was formed that a lower dose than customary could be equally effective. If so, this could allow much wider treatment without increasing costs, while perhaps also reducing the chance of side effects, and so the question was an important one. The experimenters followed normal protocol for a randomized clinical trial, assigning a group of patients to the experimental arm and a group to the control arm. The controversial element was what those in the control arm should be given.
Standard experimental procedure, following the Helsinki Declaration, states that the control arm should be offered the best currently available standard of care. One argument is that the best current standard of care was to offer the high dose of Zidovudine. Instead, the control arm received only a placebo. The experimenters’ defense was that these patients would not, in the ordinary context of the health systems of the countries in which the research was conducted, have received any treatment, and hence a placebo was equivalent to the best local standard of care: nothing. Several ethics committees agreed. Yet in a famous critique, Lurie and Wolfe decry the double standard of this approach. Research that would have been condemned as unethical if conducted in the United States (because of the local standard of care) is somehow declared ethical in the developing world. As Lurie and Wolfe argue, “Residents of impoverished, post-colonial countries, the majority of which are people of color, must be protected from potential exploitation in research. Otherwise, the abominable state of health care in these countries can be used to justify studies that would never pass muster in the sponsoring country.”57
In defense, it might be said that applying US standards to the developing world and providing the standard dose of AZT to mothers in the control arm would have been prohibitively expensive, and so the research simply could not have been done. Lurie and Wolfe contend that the manufacturers would have donated the drug free of charge for the trial, and so although cost can be relevant—for example if a coronary care unit had to be built—in this case it was not. Why, then, did the researchers insist on a placebo? Apparently, because they believed that this would make the study scientifically more powerful. This decision probably cost the lives of hundreds of infants. If this is correct, it seems a clear violation of the Helsinki Declaration, which requires the researchers never to put the interests of science over the interests of the participants. Thus it was a violation of the right to health of the participants, even if they would never have received treatment without the trial.
It seems that the Lurie and Wolfe article, as well as the editorial by Marcia Angell in the same issue of The New England Journal of Medicine, was a wake-up call to researchers. Although other trials conducted at around that time were also criticized (for example, a trial in Uganda looking at the susceptibility of HIV patients to TB),58 it seems that the critics have won the argument that the “best existing standard of care” for the control arm of a randomized control trial cannot be interpreted as “best local standard of care,” unless the cost is prohibitive or other exceptional circumstances apply. And there can be exceptional circumstances. For example, the Nuffield Council suggests that sometimes using the “universal” standard of care will provide research results that are not relevant to the health care needs of the country where the research is undertaken.59 Indeed, in criticism of Marcia Angell, Solomon Benatar has suggested that in the very HIV trials that caused the controversy it may not have been appropriate to use the standard of care demanded by the critics, for the full standard of care to prevent mother-to-child transmission of HIV involves replacing breast feeding with bottle feeding, and in much of the developing world this is a very serious health risk to infants.60 When we start to think of cases like this we can see that exceptional circumstances are, in fact, rather commonplace, and the cut-and-dried certainty of Lurie and Wolfe and Marcia Angell does not do justice to the complexities.
Even more difficult is the question of what happens once the trial is over. This is not a central issue in the case discussed by Lurie and Wolfe, as it was the test of a short-course treatment to prevent mother-to-child transmission, and, at least in relation to the experiment, follow-up treatment is not directly relevant. However, in many trials a treatment is given for a chronic condition to see if any benefit is observed. Suppose such a trial takes place, and the experimental arm shows significant improvements over the control arm; hence the experiment is a success. For scientific purposes, at that point the experimenters can pack up and go home, but there is an argument that they have an ethical obligation to continue the treatment for the experimental arm and to introduce it for the control arm. After all, the results would not have been possible without the cooperation, at some risk, of all of these people, and extensive obligations have been acquired. Accordingly, it is reasonable to insist that all participants should benefit from research.
However, there is the question of what form that benefit should take. For example, is payment enough to discharge other obligations? Or is participation itself enough, even in the control arm of the trial, given that trial participation generally offers access to other forms of health care for the duration of the trial?61 Not, apparently, according to the version of the Helsinki Declaration adopted in 2000, Section 30 of which states: “At the conclusion of the study, every patient entered into the study should be assured of access to the best proven prophylactic, diagnostic and therapeutic methods identified by the study.”62 Is this a reasonable demand? In some cases, such as some successful vaccine trials, it would seem highly negligent to ignore it. And withdrawing treatment can be very harmful; for example, if this would encourage the development of drug-resistant strains of a virus. But can these examples be generalized?
Where a medical trial is undertaken to research therapies for chronic conditions, continued treatment for all is an open-ended, and potentially very expensive, commitment. It would be even worse if the trial shows only modest benefits for a very expensive drug. Should every participant be provided with this drug for the rest of their lives, even though it would not pass a test for cost-effectiveness? Or is this simply the price a sponsor must pay for conducting research? The Nuffield Council notes:
If sponsors of research were required to fund the future provision of interventions shown to be effective to research participants or the wider community, many would cease to support such research. Sponsors from the public sector, such as the UK MRC or US NIH, would simply be unable to bear the costs involved without curtailing other research. Although the financial resources of many pharmaceutical companies are large, many of them would be equally reluctant to take on the additional burden of long-term commitment.63
A recent controversy concerns trials to test preventative interventions for HIV, such as male circumcision, or behavioral interventions. Inevitably in these trials some people will be HIV-negative at the start of the trial but will seroconvert, as it is put, and end up HIV-positive by the end of the trial. No preventative intervention will be totally effective, and for the control arm no intervention is offered in any case. Should those who become infected during the trial receive treatment?64 This could be a huge commitment given the size of some trials, and, as the Nuffield Council indicates, the costs could be enough to deter researchers from undertaking such a project.
It may seem more helpful to put matters the other way round. If it is not possible to offer, even to the participants, the benefits of a successful trial, this should raise alarm bells. Perhaps the inability to offer continued treatment simply reveals that the trial was, all along, unethical, in not meeting local health care needs. Indeed, including the condition of post-trial access to health care will be a way of ensuring that a trial in a developing country is not simply the next stage up from an animal trial. Research in developing countries is not to be undertaken lightly, and built into the trial should be a plan for treatment follow-up, if appropriate, whether with local health care providers or external donors. Otherwise the trial runs the danger of exploiting vulnerable communities.65
But to insist on such a hard and fast rule is problematic. First, there may be a significant time lag between a trial and the technical possibility of rolling out treatment, even if that is the full intention. Second, it is not uncommon for what was initially thought to be not cost-effective to become so, if, for example, drug companies can be persuaded to donate drugs or donors to fund them.66 This is an argument that Paul Farmer has made over and over again. What is and is not affordable very often depends on pricing decisions, which are made by humans and not facts of nature. A successful trial can make the difference in persuasion. Therefore, there can be strategically beneficial reasons for conducting a trial, even if extending the benefits is at the present time unaffordable. In fact, work of this type is one of the most important developments in extending access to expensive treatments.
Less encouragingly, though, there can be movement in the other direction: what might have been thought to be affordable is found to be unaffordable. The Nuffield Council reports the following depressing tale:
Although a successful national trial of bed-nets treated with insecticide in The Gambia reduced overall child mortality from malaria by approximately 30%, it was decided by the researchers, sponsors and the Gambian Ministry of Health that when the research was implemented nationally the cost of the insecticide would have to be recovered because the Ministry could not afford to provide free insecticide indefinitely. Charging for insecticide led to a reduction in the number of young children sleeping under an insecticide-treated net from around 70% to 20%.67
Here we see, once more, the clash between progressive realization and core obligations. Under the view of progressive realization, it is not a violation of the right to health of these children not to provide insecticide, for resource constraints are so tight that it is not possible for the government to satisfy the demand. Alternatively, it could be argued that keeping children malaria-free by a cheap and simple means is a core obligation, and hence the duty of the government is to seek external resources to support the program. And it would be a duty of the world community to meet the need.
Yet we always have to be on the lookout for unintended consequences. In a discussion with a treasury official in Namibia, I asked whether the government was supplying free treated nets to children in the vulnerable areas in the north of the country. I was told that they had tried to interest outside parties in helping and had received an offer from a donor to supply imported treated bed nets. However, Namibia already had a manufacturer of bed nets producing them at below the cost of the imported nets. The government was reluctant to accept the offer of free nets, as it would be likely to put their domestic producer out of business, and in a year or two, when the donor had lost interest, their cheap supply of local nets would no longer be available. Accordingly, the government asked for money to purchase nets from the local supplier. But the donor was not interested. Some people then criticized the Namibian officials for refusing the offer of free help. Obtaining the right sort of external assistance to meet a country's human rights obligations is rarely straightforward.
MATERNAL MORTALITY AND NEWBORN SURVIVAL
One fascinating thing about human rights is that while the general idea of human rights enjoys strong support, particular claims that human rights are being violated are very often treated with irritation, even contempt. In the UK, for example, it is sometimes thought that European human rights legislation is exploited by asylum seekers or prisoners to gain undeserved privileges. From time to time politicians argue that something has gone wrong with a system in which human rights are unpopular. In fact, unpopularity is exactly what we should expect. The whole point of human rights is to protect the marginalized, oppressed, or excluded from the tyranny of common discriminatory or oppressive practices. Broadly speaking, human rights would not be needed if claims made in terms of human rights were easily accepted.
Having said that, not all human rights claims need be unpopular. In some cases they simply point out that a series of decisions, policies, actions, or, indeed, inactions turns out to have a set of unintended consequences that cannot be tolerated. Consider the shocking facts about maternal and newborn survival in much of the developing world. It is not easy to be sure of figures here, as those who are most at risk are members of highly excluded populations where record keeping is very imperfect, but nevertheless estimates have been made and they are extraordinary. Here are some widely cited figures, from a joint statement by the WHO, UNFPA, UNICEF, and the World Bank in 2008:
Every minute a woman dies in pregnancy or childbirth, over 500,000 every year. And every year over one million newborns die within their first 24 hours of life for lack of quality care. Maternal mortality is the largest health inequity in the world; 99 per cent of maternal deaths occur in developing countries—half of them in Africa. A woman in Niger faces a 1 in 7 chance during her lifetime of dying of pregnancy-related causes, while a woman in Sweden has 1 chance in 17,400.68
How are these women and newborns dying? “Women in developing countries are bleeding to death after giving birth, writhing in the convulsions of eclampsia, and collapsing from days of futile contractions, knowing that they have suffocated their babies to death.”69 Yet until very recently maternal mortality attracted relatively little attention, compared, say, to the major infectious diseases. This is not to say that it has been ignored entirely. Number five of the Millennium Development Goals set out by the UN in 2000 was “Reduce by three-quarters, between 1990 and 2015, the maternal mortality ratio.” This required a year-on-year improvement of 5.5 percent between 1990 and 2015. In 2009 the UN put out a statement on maternal mortality and human rights acknowledging the scale of the problem and the very unimpressive pace of improvement to date: 1 percent per year.70
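To see where the 5.5 percent figure comes from, here is a rough back-of-the-envelope check, on the assumption that the required decline is spread at a constant annual rate over the whole 1990–2015 period. The annual rate of decline r must satisfy

\[
(1 - r)^{25} = 0.25 \quad\Longrightarrow\quad r = 1 - 0.25^{1/25} \approx 0.054,
\]

that is, a fall of roughly 5.5 percent each year. By contrast, the observed 1 percent annual decline compounds to \((1 - 0.01)^{25} \approx 0.78\) over the same twenty-five years: a reduction of barely a fifth, rather than three-quarters.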
Why has the world been so slow to think of maternal mortality as a health crisis, and so ineffective in dealing with it? For one thing it is not new; it has always been part of the human condition, and poor communities may have grown used to the idea that a significant proportion of women will die in childbirth. Sexism and fatalism have a powerful numbing effect. For another, maternal mortality is not glamorous. This is not to say that suffering from HIV/AIDS is glamorous, but HIV is the type of condition for which a major technological advance may be possible. There are Nobel Prizes, knighthoods, and congressional medals at stake. Funders are in a competitive race to seek a cure. A breakthrough in the lab could save millions of people and make billions of dollars. But there is no pill or vaccine to end maternal mortality. Indeed, we already have the knowledge we need to bring down the numbers of deaths dramatically. The contributors to maternal mortality are lack of prenatal care, of skilled birth attendants, and of basic health facilities such as blood banks, together with very poor rural transport infrastructure, user fees both official and extracted through bribery and corruption, and a whole host of other mundane facts of grinding hardship. As Amnesty International put it: “Maternal mortality reflects the cycle of human rights abuse—deprivation, exclusion, insecurity and voicelessness—that defines and perpetuates poverty.”71
What this picture misses, though, is that relieving poverty is not enough to tackle maternal mortality. It is one of the few areas where improvements in overcoming poverty—improved nutrition, improved female literacy, improved housing conditions—make relatively little difference on their own. In previous centuries, members of royal families died in childbirth as well as the peasant women on their estates. Maternal complications are brutely physical, and the basic underlying conditions, such as hemorrhage and sepsis, can be found in similar proportions throughout the world and throughout history. The great majority of birth complications cannot be predicted or prevented. What differs is our ability to react to problems when they occur. Almost all can be treated, if accessible medical services are in place. And there lies the problem.72
In this respect maternal mortality contrasts importantly with child (as distinct from newborn) mortality. The WHO reports that in 2008, 8.8 million children died from preventable diseases before the age of five. Many of these deaths occurred in countries with large populations: China, India, Nigeria, Pakistan, the Democratic Republic of Congo, and Ethiopia. But this masks the fact that the highest rates, as distinct from total numbers, are in countries that have recently been through civil war, where massive displacement of peoples, breakup of family structure, rape, and neglect create a health crisis that makes young children especially vulnerable. Such countries include Angola, Sierra Leone, Liberia, and, once more, the Democratic Republic of Congo.73 A teenage mother living in an unsanitary, overcrowded refugee camp, with little medical help and no mother, aunts, or village healers to support her, will struggle to keep her children alive. In the worst cases, one in four children does not survive.
The remedies for child death are mostly a matter of prevention or simple cure. Rarely is emergency treatment essential: the fatal condition normally could easily have been prevented. The situation is the reverse for maternal mortality. Some prevention is possible—diagnosis of pre-eclampsia is one area—but on the whole the key to maternal survival is medical: quick and effective intervention if a complication arises, and access to safe abortion. The government’s human rights responsibility cannot be to eliminate all maternal deaths, for sadly this cannot be achieved. Rather, it is to create a structure in which good-quality obstetric services are in place, and UN guidelines exist by which this can be monitored.74
Countries where rates of maternal mortality are high are failing to protect and fulfill the human right to health of women and their children. Yet making significant inroads will be incredibly expensive, simply because it is a question of providing quality medical and transport infrastructure. Of course there are different ways in which money can be spent. A country that decides to put its money into high-tech hospitals in the capital city, rather than training midwives in rural areas, may have made decisions that cut against its human rights duties, but even if it had spent its meager resources more efficiently it would not have solved the problem. To meet their core obligations, many of the countries of the world need external assistance. It becomes our problem too. What can be done? There is, at last, major international attention to this scandalous situation. Amnesty International has started a campaign called Demand Dignity; Sarah Brown, wife of former UK prime minister Gordon Brown, is Patron of the White Ribbon Alliance for Safe Motherhood; and other organizations are now scaling up their public profile and fundraising initiatives.
However, as observed by Ariel Frisancho from the International Initiative on Maternal Mortality and Human Rights, work is also needed on the “demand side.” Not only is it necessary for medical facilities to be made available; women have to be prepared to use them. There may be cultural barriers to access—the role of traditional birth attendants, fear of authority, concern about being away from families—as well as issues of cost, all of which need to be addressed alongside the provision of facilities. Cultural sensitivity—for example, allowing women to give birth in traditional ways, such as standing supported rather than lying on a hospital bed—is crucially important. Soyata Maïga, the Special Rapporteur on the Rights of Women in Africa, noted that “Women do not go to the health center because they’re too far and doctors don’t speak our language and in our culture, women cannot take their clothes off in front of men who are not their husbands.”75 This brings to mind the insistence in General Comment 14 that medical provision must be “culturally appropriate.” But perhaps even more important is overcoming entrenched forms of discrimination, in which death in childbirth is largely taken for granted and barely noticed. Despite claims that we know how to avoid most of the world’s maternal mortality, it is not an easy matter to bring this about. Progress is there, but it needs rapid acceleration.
There are many obstacles to the fulfillment of the human right to the highest attainable standard of health for all. Progress in other areas, such as economic development or copyright harmonization, can, we have seen, create new barriers too. We saw in chapter 2 that there are also worries that approaching health in terms of human rights can be damaging, siphoning money and resources away from more cost-effective approaches to health, replacing rational planning with allocation to the most litigious. This can happen. A government under pressure to fund an HIV/AIDS treatment program may have to cut other forms of spending, or eat into the education or transport budget. On the other hand, through the case studies in this and the previous chapter we have also seen something rather different. Instead of diverting resources away from other health areas, human right to health campaigns can bring new resources into the health sector. If drug companies are pressured into lowering their prices in the developing world and still make a modest profit, then everyone wins. Even if they take a small loss, it will be felt by their shareholders and not other parts of the health system. Similarly, private donors, motivated by energetic and persuasive human right to health campaigns, provide money that might otherwise have sat in their bank accounts or have been used for luxury consumption. Economists emphasize the idea of opportunity cost: if you spend money one way, then it is not available to spend another way. Although this must be true, at least in the short term, it is a very static, and rather dispiriting, view of the world. Spending money on health can create real benefits that also generate, or at least save, money in the future, by, for example, allowing people to return to work and to live long enough to bring their children up in a loving and safe environment. In any case, and this is the main point, a highly focused and well-thought-out human right to health campaign is likely to bring new resources to the developing world, and not simply divert resources from other health priorities. Problems will always be with us—think of the internal brain drain—but to be aware of the dangers is the first step toward avoiding them.