All the great laws of society are laws of nature.
From The Rights of Man, Thomas Paine (1791)
Right is right, and wrong is wrong, and a body ain’t got no business doing wrong when he ain’t ignorant and knows better.
From The Adventures of Huckleberry Finn, Mark Twain (1884)
Yeah, but your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.
From Jurassic Park, directed by Steven Spielberg (1993); based on Jurassic Park, Michael Crichton (1990)
Over the millennia of human civilisation, societies and cultures have developed the concepts of moral rightness and moral wrongness. Indeed, there are some modern-day writers such as David Cook (formerly of Oxford University) and Francis Collins (former Director of the Human Genome Project; now Director of the US National Institutes of Health) who have suggested that humans have a universal moral code, albeit that its outworkings may vary between different cultures. In thinking this, they are following in the footsteps of Immanuel Kant: ‘Two things inspire me to awe – the starry heavens above and the moral law within.’ Be that as it may, a question for all cultures is ‘How do we decide what is right and what is wrong?’ To help in answering this question, we need to consider more general aspects of ethics and ethical decision-making. In more academic discussions, a distinction is often made between ethics and morals, with ethics being defined as a (philosophical) study of the principles involved in making moral decisions, while morals or morality is about the right versus wrong decisions themselves. As has been said in another publication,1 ethics covers the theory and morals covers the practice.2 However, in less academic circles, we often use the words interchangeably; thus ethical decision-making and moral decision-making are regarded by most people as the same thing.
So what sort of decisions are we talking about? One everyday example is that it is widely accepted that telling lies is wrong. Indeed, the philosopher Immanuel Kant believed that it is always wrong. Our relationships with each other only function well if there is a presumption that what we say to each other is true. Trust is essential in human relationships and in public life. One of the most painful experiences is to discover that someone in whom we have a deep trust has lied to us. In a societal context, politicians tend to lose elections when they lose the trust of the electorate because it has been demonstrated that they lied about an important issue. However, is it wrong to tell a lie that saves the life of someone in danger, whether that danger comes from a person (or persons), an organisation or even the state itself? We may recognise that while telling lies is usually wrong, under some circumstances the outcome of telling the truth is a greater wrong. Thus, families sheltering Jewish people in Nazi‐occupied countries in the Second World War were obliged to tell lies in order to protect those whom they were hiding.
So, while ethics is about what we ought and ought not to do, it is also about setting priorities in human behaviour. Ethics is not always about what is absolutely right or wrong; it sometimes, as we have just seen, involves weighing one view of right and wrong against another. Thus it involves working out what is the best decision in a particular circumstance, what is the lesser of two evils and what is the balance between doing good and causing harm. What then are the principles on which we base such decision-making? In the rest of this chapter we discuss different types of ethical principle, often known as ethical frameworks; in the rest of the book, those principles are applied to the complex and exciting developments that are taking place so rapidly in biological and biomedical science.
First, however, it is important to understand the complex relationship between law and ethics. Ethics forms the foundation on which law is built but not all ethics is enshrined in law. Cheating on one’s spouse or partner is not illegal in the Western world but most of us regard it as morally wrong. On the other hand, murder is both morally wrong and unlawful; a person who murders another and is found guilty will be punished by the state. With murder, the view that human life is unique and precious is enshrined in law. However, there are circumstances under which killing is permitted in law. At the level of the state, these include war (and in some countries, capital punishment) and, at the level of the individual, may include self-defence. The law also recognises that killing another person may occur completely accidentally, but it may still impose some punishment if a person’s behaviour contributed to the accidental death of another (e.g. by driving recklessly).
Deciding on which side of the road we drive is not itself a question of moral right or wrong. But as there would be a serious risk of injury or death to ourselves and others if we chose individually on which side to drive, the law decides whether we drive on the right or on the left. The law also intervenes where there is a conflict between individuals about the best interests of other people. So the courts often have to decide which parent should have custody of the children when a couple divorce. Parents and doctors may disagree about whether or not a child should have medical treatment. Quite properly, the courts are asked to analyse the ethical principles in each case, set out what the law says and decide what is in the best interests of the child.
Thus, there are actions that are regarded as morally wrong that are also illegal, there are actions that are (widely) regarded as morally wrong but which are not illegal and there are actions that in themselves are not morally wrong but which come under the law for the greater good of society or for the good of individuals such as children. These relationships between law and ethics are played out in some of the issues that are discussed in this book. We will also see further examples of the tension between private and public morality, where significant numbers of individuals regard an action as morally wrong and believe that it should be illegal, while the state, usually reflecting a majority view, does not regard it as wrong and thus permits it. Our discussion of abortion in Chapter 4 provides a clear example of this.
Thus, making moral decisions is not always easy, the relationship between individual morality and the law is sometimes complex and applications of both individual morality and the law to complex issues in biomedical science may well be difficult. And so we turn to consider the development of the ethical principles or frameworks that help us to make these difficult decisions.
The development of ethical thinking or moral reasoning is a long and complex story in which many different strands intertwine, fall apart and are reconfigured. Religious and non-religious thinkers have been engaged in this process for at least three thousand years. Debates have often been fierce. At times, decisions and practices have been driven less by objective reasoning and more by events. Some of the views put forward by the great thinkers of the past are difficult for us to grasp and seem very odd to us in the 21st century. In our own society there is, for some issues, a divide between religious and secular views. But the long story of the development of moral reasoning continues to influence the decisions we make now. The contributors to that story are very many indeed. Here we shall mention only some of the most influential.
Part of the story begins in ancient Greece where the epic poems of Homer, put together in the 8th century BCE, were regarded as the authoritative source of moral reasoning. Epics such as the Iliad, with its account of the Trojan War, were about courage, justice, heroism, piety, lust, love and the relationships between humans and the gods. These were what guided people. But Socrates (470–399 BCE) questioned the usefulness of these stories and asked what really characterised a good life. As a result he was accused of corrupting the youth of Athens, condemned and executed by poison. In accepting his sentence he enunciated an important principle, ‘It is better to suffer wrong than to do wrong’.
Socrates’ student, Plato (427–347 BCE), wrote up much of what Socrates had said and developed his own complex theories of ethics. He wrote an imaginary dialogue between the character Meno and the late Socrates. Meno asks Socrates, ‘Can virtue be taught?’, and from that point the discussion flows to and fro. The point of this is that Plato recognised the good qualities, the virtues, that contributed to the good of society and was trying to tease out where they came from. These ideas were further developed by Plato’s pupil Aristotle (384–322 BCE), who believed that people become more virtuous and therefore make better moral decisions by practising the virtues. Thus, we have virtue ethics, which is based on the virtuous character of the person making the decision. An important principle here is that we become virtuous by practising virtue. Thus I can truly be an honest person only if I practise honesty, and so I become trustworthy. Virtue is more about the expression of character than about keeping rules. In the Christian era, this was picked up by the early Church where what were regarded as specifically Christian virtues, such as self-giving love, were added to the classical virtues of the Greeks. Later in the Christian era, virtue ethics was strongly espoused by Thomas Aquinas (1225–1274) and has also seen a revival amongst both religious and secular thinkers in the late 20th and early 21st centuries (see Section 2.2.7).
While the Greeks were grappling with these issues, the Hebrew Bible (what we know as the Old Testament) was being put together. Central to Jewish ethical thinking was the belief that God had spoken through the ancient patriarchs (Abraham, Isaac and Jacob), Moses (‘the lawgiver’) and the prophets who were active over several centuries. At the heart of what the Jews believed God to have said were the Ten Commandments. These are often taken as a series of prohibitions ‘You shall not…’ but actually also contain some injunctions that are both positive and open ended: ‘Honour your parents…’. Thus moral behaviour was seen as a series of duties, some of which were easily assessable. It is clear enough, for example, whether one has ‘committed adultery’ even if the perpetrators try to keep it secret. On the other hand, loving God and honouring your parents seem to be much more about attitude, even though attitudes may be worked out in action. Thus there is at least a hint in this duty‐based system of the development of virtues.
However, the Jewish people developed a much more complex set of rules that were added on to the Ten Commandments, emphasising again the idea that moral behaviour is based on duty, a duty to observe rules. This was criticised by the early Christian church, based on the ethical aspects of the teaching of Jesus Christ, because keeping rules could easily be done in what we would call a ‘box‐ticking’ attitude. Hence, as indicated already, the early Church called for a virtuous way of life. For example, in the 4th century, the anti‐Christian Roman emperor Julian was forced to admit that ‘These…Galileans not only feed their own poor, but ours also; …Whilst the pagan priests neglect the poor, the hated Galileans devote themselves to works of charity’. Julian, who was trying to return Rome to its pre‐Christian pagan religion, thus encouraged the pagan priests to start their own charities to care for Rome’s needy.
Nevertheless, duty-based ethics took centre stage again with the work of Immanuel Kant (1724–1804). He was a professor of logic and metaphysics at the University of Königsberg in East Prussia. He was not satisfied with a system of ethics based on God’s revelation. He believed that it was only reason that could legislate in a dispassionate and universal manner. From this he developed his ‘imperatives’, the basis on which all human beings ought to act at all times. The most influential was that we should ‘act so as to treat humanity never only as a means but also as an end’. Other humans are therefore not to be used just as an instrument of our wishes and the end does not justify the means (a crucial area of debate in modern biomedical ethics). We have already noted another of Kant’s imperatives, namely, that no one should ever tell lies. This type of system in which moral decision-making is based on duty is called deontological ethics (from the Greek deon, duty) or Kantian ethics.
We have already suggested that there are circumstances in which telling lies might be the right (or less wrong?) thing to do. Doubtless Kant would have disagreed with us but many people would agree that under pressure, when someone’s life is in danger, telling a lie may be the better of two alternatives. Thus, an action may be judged to be morally right or morally wrong according to the results or consequences of the action. This type of ethical decision-making is known, perhaps obviously, as consequentialism and it is certainly very widespread in 21st-century society. Within consequentialism we can discuss a number of distinct strands. The first is utilitarianism, particularly associated with Jeremy Bentham (1748–1832) and John Stuart Mill (1806–1873). Bentham’s view was that the rightness or wrongness of an action depended on its consequences in respect of pleasure or happiness. Right actions are those that produce the greatest amount of pleasure, happiness or satisfaction for those affected by their consequences. Conversely, something is wrong if it fails to generate pleasure or satisfaction but rather produces pain or harm. What is right is that which maximises good outcomes, the greatest good for the greatest number of people. Early proponents of utilitarianism believed it was a good antidote to what they perceived as the negativity of Christian ethics.3 Bentham developed a system of calculating the pleasure/pain balance. While such an attempt seems distinctly odd to us in the 21st century, the principle continues to be a major determinant in framing public policy and making political decisions. Indeed, it is often said, perhaps cynically, that politicians are bound to make decisions this way in order to promote the most widespread satisfaction among the electorate. Further, as we shall see, utilitarianism has become an important principle in deciding what should and should not be allowed in modern biomedical research and practice.
We need to mention again (see Chapter 1) the philosopher Friedrich Nietzsche (1844–1900). He suggested that, since there was no higher authority for moral values, the individual should become the arbiter of such values. He went on to suggest that, rationally, the yardstick for moral/ethical decision-making was what is good for the individual. So this form of consequentialism, known as egoism or rational egoism, is at the other end of the scale from utilitarianism. Thus, according to Nietzsche, ‘egoism is the very essence of the noble soul’. A good ethical decision is one that has good consequences for me. This Nietzschean approach to ethics was strongly supported by the Russian-American author and philosopher Ayn Rand (1905–1982) in her development of a philosophy known as objectivism. She categorised altruism as evil and selfishness as a virtue. She also rejected all religions outright and was especially scornful of the emphasis that Christians place on love and community. Many readers of this book may be surprised to know that during the run-up to the 2012 presidential election in the United States, certain prominent Republican politicians expressed support for Rand’s views, especially her claim that totally laissez-faire capitalism is the only sociopolitical system in which individuals can be truly free.
In considering natural law as an ethical theory, we return again to ancient Greece. Its roots lie in classical Greek philosophy, notably in the work of Aristotle, and it was developed more fully by the Stoic philosophers who followed him. Essentially, this approach to ethics suggests that everything that exists, including, at one end of the scale, the totally inanimate parts of nature and, at the other end, God (or the gods), has its reason for being, its natural purpose or its telos. Virtuous ethical decision-making allowed telos to be fulfilled in accordance with natural law. It was inevitable that this should be increasingly rooted in scientific and philosophical reflection because how else would we decide about the particular telos of a particular component of nature and, in particular, what was the purpose of human life?
Interestingly, just as the 13th‐century theologian Thomas Aquinas picked up and extended Aristotle’s virtue ethics, so he also picked up natural law theory. Natural law was, for Aquinas, what God intended for the world. Therefore, good virtuous ethical decision‐making would be in line with that intent and that intent is at least partly seen in the natural functions of things.
This became the basis of much Roman Catholic ethical teaching and indeed remains so in the area of sex. In Thomistic thinking (i.e. thinking based on the work of Aquinas), the function of sexual intercourse is to produce children. Therefore any sexual activity where the intention is not to have children or where the possibility of conception is prevented by using contraception is immoral. In 20th- and 21st-century Western society, this ethical stance has provoked great antipathy, especially in view of the world’s burgeoning population (Chapter 15).
More informally, natural law thinking has been distorted such that some believe that ‘it’s not natural’ is a moral prohibition of any advance in science and technology. This goes far beyond the essential ideas of the natural law system of ethics: natural law does not mean that ‘natural is good and unnatural is bad’; indeed, this would be unworkable as an ethical system.
In the 20th century there were two philosophical movements that had, at least temporarily, an effect on ethical thinking. The first was positivism, which was at its peak in the 1930s but persisted for many years after that. Essentially it states that the only meaningful statements are those that can be assessed in some objective way. It is linked with scientism, the view that the only realities are those that are revealed and may be investigated by the methods of science. Thus in positivism, moral statements are meaningless because they cannot be objectively measured. The statement that it is wrong to bully my employees carries no more weight than the statement that I like blue shoes. The second philosophical movement is postmodernism, which arose in the 1970s and continued into the 21st century. As we have already seen in Chapter 1, this philosophy holds that anyone’s world view, concept or version of the truth is as valid as anyone else’s; there are no universal truths, overriding themes or ‘metanarratives’. This applies as much in ethics as in any other field. Thus, in a recent study in the United States in which 230 young adults were interviewed about ethics, by far the most common theme to emerge was that morals are mostly a matter of personal truth or preference; therefore we cannot speak for other people in terms of what is regarded as right or wrong. Interestingly, however, there were actions that were regarded as wrong by most of these young adults, namely, murder, rape and child abuse. Neither do we suppose that the most ardent positivist would deny that these are wrong, even if he or she could not justify the statement from their philosophy.
For several centuries, virtue ethics had very little influence in Western society. However, in the second half of the 20th century there was a reawakening of interest in this approach to moral decision‐making. A key event in this revival was the publication in 1981 of a book entitled After Virtue, written by the philosopher Alasdair MacIntyre.4 He suggested that in modern society, much moral discourse was totally dysfunctional and he was especially critical of Nietzsche’s rational egoism, of Sartre’s existentialism and of moral relativism. He proposed that we should return to the ‘forgotten alternative’ of virtue ethics in the tradition of Aristotle and Thomas Aquinas. The comeback of virtue ethics continues to gather pace in the 21st century amongst both secular and religious philosophers and writers. Interestingly, it has been suggested that in the world of banking, finance and ‘big business’, there needs to be a return to virtue, as so eloquently expressed in Ted Malloch’s 2013 book The End of Ethics and a Way Back.5
At the personal level, virtue ethics is the system espoused by both writers of this book. Nevertheless it needs to be said that virtue ethicists do not disregard the law or throw away the rule book. After all, if I drive at 50 mph (80 kph) down my village street, I am hardly acting virtuously. Neither do they ignore the likely outcomes of their actions. Appropriate consideration of both rules and consequences is part of a virtuous approach to moral decision-making.
In Western society, the concept of ‘rights’ has become very influential. This is not a new idea. It is enshrined in the American Declaration of Independence, in which people are declared to have the rights to ‘life, liberty and the pursuit of happiness’, and in the cries of the French Revolution of Liberté, Égalité, Fraternité. These fundamental concepts of rights have evolved over the succeeding two centuries into a mass of rights and demands that are often linked to the concept of autonomy, my right as an individual to make my own decisions. For example, a woman has a right to an abortion; I have a right to a child; when a mistake is made or an accident occurs, I have a right to compensation. Historically, the enshrining of rights in constitutions and conventions was to prevent the abuse of people by those with power over them. This is exemplified by the concept of human rights enshrined in the United Nations’ Universal Declaration of Human Rights, which we may regard as a worthy effort to ensure universal standards for the treatment of other humans in a pluralistic world. However, it is doubtful if rights alone are an adequate basis for ethics without corresponding responsibilities and duties. Thus I may have a right to a child but I also have a responsibility to care for it and bring it up. I may have a right to recreational sex but I also have a responsibility to ensure that an unwanted pregnancy does not occur or that a sexually transmitted disease is not spread by my activities. The pursuit of rights alone often infringes the rights of others. Hence, a workable rights-based ethical system must include the concept of duty to others and for this reason, rights-based ethics, whatever its weaknesses,6 is often classified as a deontological system (see Section 2.2.3).
In the United Kingdom, the website of the national public broadcaster, the BBC, contains a page entitled Religion and Ethics and there is a sense in which discussion of ethics leads to at least a mention of religion even in a book like this. Most of the major religions of the world have moral codes laid out in sets of rules and/or indicated in wide-ranging principles or virtues. Indeed, even in secularised societies such as 21st-century Britain, we owe much to Judaeo-Christian ethical codes and virtues. In this context, the comment made in July 20127 by the atheist philosopher Julian Baggini is very interesting: ‘…the decline of morality…has paralleled the decline of respect for…the Church’. Nevertheless it is abundantly clear that a moral code may be developed in the absence of religious belief. The British philosopher Mary Warnock acknowledges the specific contributions made by Christianity to our ethical thinking in the United Kingdom (especially in the area of virtue). However, she also argues convincingly that secular societies can and indeed do develop ethical systems. She suggests that human nature itself needs an ethical system based on a personal sense of moral good and a tendency to altruism (‘when someone begins to see he must postpone his immediate wishes for the sake of the good’). Nevertheless, she recognises the existence of a minority who seem neither to possess a sense of moral good nor to show any indication of altruism.
In the Western world in the 21st century, there are three main ways in which ethical decisions are made. The first is deontological ethics, based on duties or rules. It often embodies the concept of rights linked to duties. The second is consequentialism, based on the outcome of actions. The third is virtue ethics, based on the application of virtue to ethical decision‐making. A fourth framework, natural law, is used much less but still has some influence in aspects of the Roman Catholic Church’s sexual ethics. There are weaknesses in all three of the main systems. In deontological ethics, duties or rules may clash with each other and thus a decision needs to be made about which takes precedence; further, following rules rigidly may sometimes lead to outcomes that are regarded as morally wrong. In consequentialism there is the danger that the outcome always trumps all other considerations. This leads to the view that the end always justifies the means: it may be acceptable to do something that others regard as morally wrong if the outcome is good. In virtue ethics, there is the danger of being rather vague or even ‘fluffy’. If one virtue is applied to the exclusion of all others, moral decision‐making may lack rigour. For example, if the words of The Beatles, All you need is love, are applied, then we may see decisions that lack moral courage and wisdom.
So far we have seen that ethical decision-making has a long history, arises from different religious and philosophical positions, has many strands and has become increasingly complex. But the simple fact is that ethics is about making decisions, about making choices – do I do this or that? – and these choices have consequences. And in fact, we engage in ethical decision-making every day of our lives:
Even our everyday conversation has an ethical component to it. When I am recounting something that happened to me, do I tell it accurately or embellish it just a little to impress people?
So how do we make these day‐to‐day decisions, as well as deciding on the larger issues currently in the public domain? Decisions may differ according to which ethical framework we adopt and also according to our underlying world view or belief system. A very good example is the position of the ‘Christian Right’ in the United States. It sees abortion and same‐sex civil partnerships/marriages as unvirtuous or immoral. But it is apparently held as virtuous or moral to embark on a war against Iraq,9 hold suspected terrorists outside the due processes of law and possibly abandon international agreements on the environment.10 Needless to say, the ‘Christian Left’, an Internet‐based network of politically liberal Christians, disagree with these views.
An associated issue concerns the ‘targets’ of our ethical decision‐making. Firstly, in any situation, more than one party may be involved and they may have different interests in the matter under discussion. It is often difficult to decide on our priorities. Secondly, ethical systems in general relate to ways that people treat each other, whether as individuals, groups or even whole societies. We speak of humans as having moral value or moral significance. It is our treatment of other humans that may be defined in terms of right or wrong, under whichever ethical system we operate. However, many of the issues discussed in subsequent chapters are more complex than this and require us to think about the ‘boundaries’ of our ethical concern.
Ethical decision-making can therefore at times be very difficult. It requires careful, clear thinking if we are to make the best or the most virtuous decision. During the process, especially with complex issues, it is often helpful to consult other people whose judgement you trust. Often just ‘thinking aloud’ can clarify matters. Sometimes it is important to stand back and review the sources of your value system. Usually in everyday life, time does not allow for that. But occasionally taking time out to review how you make decisions can be very valuable. Often we have to be content not with the ideal solution but with the best one we can arrive at under the circumstances.
In another publication we have set out a stepwise process for trying to arrive at an ethical decision in situations where there are conflicting priorities.11 A similar approach has been proposed recently by the American bioethicists Adil Shamoo and David Resnik12; it is worth quoting albeit in a slightly modified form:
Some of the most complex issues in today’s moral maze arise from biomedical science and thus at this stage an excursion into medical ethics is very helpful. Since the early formalisation of medicine in ancient Greece, doctors have operated under clear ethical guidelines. Hippocrates (460–370 BCE), who taught and practised medicine on the island of Kos, required all doctors trained by him to take an oath, the Hippocratic Oath, which was used in Western medicine for many hundreds of years. Since the Second World War, ethical codes for doctors have been updated (see Section 2.5) but there is still the requirement to assent to a clear ethical code of practice.
Rather than present these codes in detail, we draw attention to the very helpful approach that has been provided by two American ethicists, Tom Beauchamp and James Childress, in their book, Principles of Biomedical Ethics, first published in 1979 (the most recent edition, the seventh, was published in 2013). As the title suggests, they were particularly concerned with the advances in biomedical science but actually their framework is useful in many other areas too (as we indicate below). It recognises that there are several ethical principles that have to be taken into account, that have to be prioritised, that do not always have the same weight and that are at times in conflict.
The first principle applied by Beauchamp and Childress is autonomy, that is to say that we recognise a person’s rights as an individual. Thus a doctor may not embark on medical treatment against a patient’s wishes however necessary the doctor may consider such treatment to be. In wider contexts, it may be virtuous to give money to charity but my employer cannot, without my consent, take a percentage of my salary and donate it to charity for me. Autonomy of course has its limitations. I may wish to smoke in a public place but is my autonomy in this matter outweighed by the harm that passive smoking may do to other people? This leads to the second and third principles.
Our responsibility to other people is to benefit them. We should not harm others. So a doctor’s duty is to provide beneficial treatment, that is, to practise beneficence, and to avoid harmful treatment, that is, to avoid maleficence. In practice, of course, this is often a balancing act. A drug, for example, may be beneficial but it inevitably has side effects. So the decision to prescribe it is about weighing those two things up. Does the benefit outweigh the harm? Can the harm be limited? We may also need to weigh the interests of different parties in our decision-making. In respect of good versus harm, do the interests of a pregnant woman outweigh the interests of the foetus or vice versa? Wisdom, one of the classical virtues, is clearly needed in making such decisions. In wider society, if you liberalise gambling laws, people who wish to gamble may more easily do so and government revenue is increased, but there is a risk that more people become addicted to gambling, with all the consequences that has for them and their families.
The fourth principle is that of justice. That is to say, we have a responsibility to look to the wider consequences of our decisions and as far as possible to treat all people equally. Again, this is easily understood in medicine. A new drug may be beneficial to a small number of patients but it is very expensive. In the United Kingdom, funds for healthcare are limited and therefore to spend money on the drug may mean that money is less available for things that would benefit many more people. So what does a hospital do – spend money on the drug or replace the chairs in the old people’s unit? Or it may be that the drug is only available to some patients; how is it decided which patients should benefit?
These principles have been criticised as being but a pallid version of traditional ethics. But actually, they draw on key ideas, not least from the Judaeo-Christian tradition and from Greek and Enlightenment/humanist thought, of seeking the greatest good and of not using people merely as means to an end. Indeed, at its best this is a framework based on virtue ethics. Underlying the four principles we can see the need for the virtues of charity (love) and empathy, which need to be applied with skill (a result of the doctor’s training and experience), wisdom and often patience. All four of the principles are open-ended with continuing requirements: a doctor can never complete the task of applying beneficence or justice. The four principles are readily interpreted as virtues and thus we may regard this as a virtue ethics system. Their value is that they provide a framework for thinking, for exploring all the options and consequences of a proposed action.
Finally we note that, as in several areas of modern applied ethics, decisions need to be kept under review in the light of new information. A good medical example of this is that in the early days of the AIDS epidemic, a patient with AIDS‐related pneumonia was not treated in an intensive care unit because the prognosis for AIDS was not much more than a year. Nowadays, when drug treatment has greatly improved the prognosis, intensive care treatment is entirely appropriate under such circumstances.
Particularly in the area of modern biomedical science (as discussed in subsequent chapters), it is all too easy to respond to a development with either ‘Yuk’ or ‘Wow’. Our gut reaction may be that this particular development is terrible: it must never be allowed to happen. Or it may be that this is wonderful: it opens up so many possibilities. These were precisely the responses to the birth in 1978 of Louise Brown, the world’s first test-tube baby. Human sperm was used to fertilise a human egg in the laboratory, where the embryo was allowed to develop for a few days. It was then implanted into the mother’s uterus. The baby was carried by her for a normal pregnancy and Louise was born. Inevitably this involved several embryos being produced in the laboratory but only one becoming a baby. Some people said, ‘This is dreadful. We are playing God. Embryos, which are human beings, are being destroyed in the process. Furthermore, you should not separate reproduction from sexual love. Yuk!’ Others thought it was wonderful. ‘Now couples who cannot have a child by natural means will be able to have a family. Wow!’ While both responses are understandable, in neither case have all the relevant issues been thought through. Ethical decision-making demands careful, thoughtful reflection if we are to make the best virtuous decision we can, and this may be very difficult with the issues thrown up by modern biomedical science. As Margaret Killingray put it in her 2001 book, Choices, ‘Sometimes our problems with knowing what is wrong and what is right arise because our world is changing so fast that we are constantly facing new situations that do not fit into our existing ways of thinking.’ Isaac Asimov made a similar point: ‘…science gathers knowledge faster than society gathers wisdom’13. With that in mind, we turn to consider the development of bioethics.
During the 25 years after the Second World War (1939–1945), several factors came together to give rise to the birth of the discipline of bioethics. These factors are outlined in the following paragraphs.
Frogs had been cloned in the 1950s (see Chapter 5). In 1953, Watson and Crick made their groundbreaking discovery of the structure of DNA. This sparked a huge interest in the biochemistry of DNA. However, it was not until the early 1970s that modern molecular biology really took off, opening up our later ability to manipulate genes, to study them in great detail, to diagnose and select against genetic disease in the early human embryo and to clone mammals. Massive advances were also taking place in many other branches of biomedicine; human organs were being transplanted; life could be prolonged by drugs and surgery; the function of organs such as the kidneys and the lungs could be taken over by machines. Studies of mammalian fertilisation led to the creation of ‘test-tube’ babies. The list could go on, but we are sure that we have said enough to set the scene.
So enormous were the possibilities raised by these advances that there was increasing concern in society that the traditional framework of ethical thinking could not bear their weight. How could religious absolutes be applied to issues about which such texts as the Bible and the Quran obviously said nothing? It was much more difficult to weigh the balance between benefit and harm (in Bentham’s consequentialist language, pleasure and pain). What, for example, was the benefit of keeping somebody alive for many weeks by machines only to find that their subsequent quality of life was severely reduced? Furthermore, traditional ethical principles conflicted at times. Should the early human embryo be given all the legal protection owed to an already‐born human being, thereby preventing research being carried out in certain areas of disease when such diseases could be eliminated by such research? Where do the boundaries of ethical concern lie – who or what is included within them?
A further concern was the degree of scrutiny of scientists and doctors by society. This had its origins in the Nuremberg trials after the Second World War, where the actions of scientists, doctors, lawyers, military personnel and politicians were subject to forensic investigation and criminal proceedings by the Allies. In many cases individuals were judged to have acted outrageously, violating the most basic respect for their fellow humans. It was these trials that led eventually to new codes of practice to which doctors give assent (the Declaration of Geneva, covering most facets of medical practice and the Declaration of Helsinki, covering the use of human subjects in medical research).
As the 20th century moved on, the public demanded to have more say in how biomedical discoveries were used. Sometimes this was part of the decline in paternalism and deference that was taking place in Western society as it became more egalitarian. At other times there was frank mistrust of scientists (see previous chapter). So bioethics developed as a discipline, mainly at first in the United States of America and later throughout the world. In bioethics, philosophers, lawyers, theologians, sociologists and lay people join with biomedical scientists in assessing what is the appropriate use of new developments and technologies. In many areas there are now formally constituted groups that have a major input into public policy and the regulation of science.
However, many of these issues were and still are about individual choices: my desire to have a child, my wish not to have a handicapped child and my anxiety about being kept alive only to face a poor quality of life. Much wider concerns were also developing in the 1960s and 1970s, particularly about the environment. Hitherto, the assumption had been that the Earth and its resources were primarily for the benefit of human beings with little thought being given to the effects such use would have on the environment. As early as the 1940s, there were renewed concerns about human encroachment on the wild places of the world and the damage this was causing (see Chapter 14) and then, in 1962, Rachel Carson’s seminal book, Silent Spring, drew attention to widespread and often deeply damaging chemical pollution. It was a cell biologist, VR Potter, who brought these concerns together in his book, Bioethics: Bridge to the Future, published in 1970 and he is credited with coining the word ‘bioethics’.
Thus bioethics embraces the effects of scientific advances not just on individuals but also on communities, the environment and nonhuman species. Environmental ethics, genetically modified crops and animal welfare are all part of bioethical discussions. However, during this period of rapid change in the second half of the 20th century, huge advances were taking place not only in biomedical science but also in philosophy and ethics. The principles of both were being radically reviewed so that the framework for ethical decision‐making in Western society at the beginning of the 21st century is by and large very different from that at the beginning of the 20th century, as we have indicated in previous sections.