14 On new technologies

Stephen Clarke

14.1 Introduction

Ethical concerns about new technologies may be divided into two broad categories: concerns about newly introduced technologies, and concerns about technologies that might be introduced in the future. It is sometimes thought that we should focus exclusively on actual technologies and that concerns about possible future technologies are ‘just science fiction’ – mere speculations about what might be the case, which distract us from a proper consideration of what is the case. A contrary point of view is that we should attempt to anticipate ethical problems in advance of the implementation of new technologies. Instead of adapting norms and practices to accommodate new technologies after these have become available, we should try to produce new technologies that are in keeping with the values and practices we currently adhere to. In order to anticipate the ethical problems that new technologies will raise, we need to try to anticipate which of the possible future technologies that raise ethical concerns are likely to become actual, and try to respond to these.
In this chapter, we will consider prominent ethical concerns that have been raised about newly introduced technologies and about technologies that might be implemented in the future. In the case of newly introduced technologies, we will focus on concerns about privacy, individual autonomy and threats to safety. These are far from the only ethical concerns raised by newly introduced technologies, but they do appear to be the ones that have provoked the most discussion in the media and in academic circles. Concerns about privacy and threats to safety also feature prominently in discussions of future technologies, as do concerns about autonomy. A very prominent form of concern about threats to safety, in the discussion of possible future technologies, addresses catastrophic future scenarios. Concerns about autonomy have a tendency to become mixed up with concerns about the future of the human species, when our focus is on possible future technologies. This tendency will be reflected in our discussion.

14.2 Newly introduced technologies

14.2.1 Privacy
A host of recently available technologies enable governments, corporations and individuals to monitor individual activity in ways that would have been impossible only a few decades ago. Closed Circuit Television (CCTV) cameras transmit a signal to a restricted set of monitors and are used in the surveillance of specific locations. The Global Positioning System (GPS) utilizes satellite-based technology to accurately pinpoint the location of objects and individuals equipped with receivers, anywhere on the surface of the Earth. Radio Frequency Identification (RFID) tags can also be used to identify the location of objects and individuals. RFIDs are tiny electronic tags that are mostly used by retailers to keep track of their products. They are either attached to the surfaces of products or implanted within them. They have also been implanted in animals and, in a few cases, in humans.
In 2004, the Food and Drug Administration approved the implantation of the VeriChip in humans (Foster and Jaeger, forthcoming). By 2006, these had been implanted in about 70 people in the US, mostly for medical reasons and to control access to high security areas (Waters 2006). It has been variously suggested that RFIDs should be implanted in employees of certain companies, immigrants and guest workers in the United States, sex offenders and US soldiers (Foster and Jaeger, forthcoming). Many commentators think that the use of RFIDs will become much more widespread soon. According to van den Hoven and Vermaas, ‘Governments and the global business world are preparing for a large-scale implementation of RFID technology in the first decades of the 21st century’ (2007, p. 291). Eastman Kodak has filed patent applications on a new technology that will enable RFID chips to be ingested (Tedjasaputra 2007). If this technology becomes readily available, it may become very easy to use RFID chips to monitor the whereabouts of individuals, without their consent, or even without them knowing that their movements are being monitored.
Unsurprisingly, the prospect of RFIDs being used to monitor the whereabouts of humans has met with fierce resistance from privacy advocates, such as CASPIAN (Consumers Against Supermarket Privacy Invasion and Numbering; www.nocards.org), and has prompted various publications warning of the threat to individual privacy. RFIDs are considered particularly suspect by some fundamentalist Christians, who see them as the ‘Mark of the Beast’ that we are warned about in the Book of Revelation (Albrecht and Macintyre 2006). In response to such concerns, three US states, California, Wisconsin and North Dakota, have passed laws prohibiting the forced implantation of RFIDs (Anderson 2007).
As well as technology that can monitor our movements, there are various technologies that can be used to interpret information collected about us. The interpretation of information provided by CCTV and other forms of video surveillance can be assisted by the use of face and gait recognition systems (Liu and Sarkar 2007). Information collected by aural surveillance devices can be interpreted using tools such as the ‘Truth Phone’, which analyses voice stress during telephone calls in an attempt to detect lying (Davies 2003, p. 21). The ‘love detector’ operates similarly, identifying levels of excitement and arousal in speech in an attempt to gauge people's feelings for those they are speaking to (see www.love-detector.com/index.php).
In addition to concerns about the aspects of our lives that are being monitored and concerns about how collected data can be analysed, there are concerns about who has access to personal data. Companies such as Acxiom specialize in buying data from businesses about their customers, integrating these data into powerful databases, and selling other businesses access to the databases so created. Customers who are willing to accept that data about them will be made available to particular businesses that they have dealings with may nevertheless be very unhappy about those data being passed on to other companies. Also of concern is the placement of data about individuals, such as court records, on the Internet. Enabling free and easy access to data that might otherwise be hard to obtain radically increases the number of their potential users and can alter the nature of their uses (Nissenbaum 2004).
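To see, in schematic form, why this kind of aggregation worries privacy advocates, consider the following minimal sketch, written in Python purely for illustration. All names, fields and values are invented and are not drawn from the practices of any actual data broker. Two datasets that are each fairly innocuous on their own become considerably more revealing once they are merged on a shared identifier:

```python
# An invented example of data integration of the kind described above.
# All names, fields and values are hypothetical.

retailer_records = [
    {"email": "j.doe@example.com", "purchases": ["pregnancy test", "prenatal vitamins"]},
]
publisher_records = [
    {"email": "j.doe@example.com", "home_suburb": "Springfield", "age_band": "25-34"},
]

def merge_profiles(*datasets):
    """Integrate records from several sources into one profile per identifier."""
    profiles = {}
    for dataset in datasets:
        for record in dataset:
            profile = profiles.setdefault(record["email"], {})
            profile.update({key: value for key, value in record.items() if key != "email"})
    return profiles

# The merged profile reveals far more than either source did on its own.
print(merge_profiles(retailer_records, publisher_records))
```

The point is not about any particular implementation, but about the ease with which separately collected records can be linked into a single profile once a common identifier is available.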
It is widely accepted that individuals are entitled to certain forms of informational privacy and that certain information, such as personal financial data, which may find their way into some parts of the public realm, should be kept out of other parts of the public realm. Nissenbaum (2004) has argued that we should accept ‘contextual integrity’ as a benchmark for informational privacy in the public sphere. According to her, different contexts within the public sphere – the context of friendship, the context of the classroom and so on – are implicitly governed by particular norms of behaviour, including norms relating to respect for privacy. If her position is accepted, then information should not be made generally available, within a particular context, without due regard for the governing norms that implicitly shape our sense of what is appropriate within that context. Furthermore, information of a type that it is appropriate to make generally available in one context should not be transferred to a different context without due regard for norms that implicitly govern the flow of information between these particular contexts. Later, we will see that the scope of Nissenbaum's claims regarding the importance of contextual integrity stands in need of qualification, as Nissenbaum herself allows.
As well as debates about the right to informational privacy in general, there are debates about the right to informational privacy in the workplace. The surveillance of employees' emails, telephone calls and other forms of communication by their workplace supervisors is widespread. According to a recent survey, 73 per cent of American companies engaged in some form of electronic surveillance of their employees (Blackwell 2003). Although some commentators have argued that there is a presumptive right to privacy in the workplace, they generally acknowledge that this right needs to be balanced against the interests of employers, customers and other employees, all of whom have a legitimate interest in ensuring that employees are working effectively and are not using the workplace to conduct illegal activities (e.g. Miller and Weckert 2000). The topic will not be elaborated on here; the interested reader will find further discussion in Chapter 7.
14.2.2 Autonomy
One of the more important arguments for respecting the informational privacy of individuals is that, if people's activities are unknown to others, then they cannot be deliberately interfered with by others. Therefore, they will be better able to act on their own life plans and devote their energies to satisfying their own preferences. In other words, they will be better able to realize the value of autonomy, a value which is deeply embedded in Western culture. Isaiah Berlin captured the core sentiments of many of us who wish to live autonomously:
I wish my life and decisions to depend on myself, not on external forces of whatever kind. I wish to be the instrument of my own, not of other men's acts of will. I wish to be a subject, not an object: to be moved by reasons, by conscious purposes, which are my own, not by causes which affect me, as it were from outside.
(Berlin 1969, p. 131)
Of course, freedom from external interference is not all there is to autonomy. Internal forces can also limit our ability to experience autonomy. Claustrophobics live with a fear of confined spaces. However, there may be some occasions when it is in the interest of a claustrophobic to enter particular confined spaces. If a claustrophobic is unable to overcome her fear when she decides that it is in her interest to do so, she has had her autonomy interfered with by an internal force.
In order, among other things, to ensure that people's autonomy is respected, interpersonal interactions in Western societies are typically structured around the ideal of mutual consent (Kleinig 1982). A business transaction can only occur when both buyer and vendor agree to complete that transaction. Similarly, a marriage can only occur when both bride and groom agree to the marriage. To provide ‘effective consent’ to an action, an agent must comprehend what they are consenting to, or at least have the opportunity to comprehend the major consequences of consenting to that action. Also, they must have sufficient, relevant information so that their consent can be the consequence of an informed autonomous decision. This is true of the many areas of human activity in which our governing norms of behaviour include consent requirements (Clarke 2001).
One way in which our autonomy can be compromised, when we are using new technologies, is that we may sometimes not understand what we are being asked to consent to. An example of a complicated request for consent that may be difficult to comprehend is the request to consent to receive targeted behavioural advertising when a free Gmail email account is set up. New subscribers to free Gmail accounts are asked to consent to having the content of their emails mechanically scanned for keywords, which are then used to select targeted advertisements that appear alongside email messages. So, for example, if a Gmail user sends or receives emails containing an above average use of the word ‘holiday’, Gmail may direct advertisements for holidays to that user, rather than advertisements for some other product or service.
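The underlying idea can be conveyed with a small illustrative sketch. Gmail's actual system is proprietary and far more sophisticated than this; the keywords, adverts and matching rule below are invented for the purposes of illustration only:

```python
# Purely illustrative sketch of keyword-based advertisement selection, in the
# spirit of the Gmail example above. Keywords, adverts and the matching rule
# are invented; the real system is proprietary and far more sophisticated.

import re
from collections import Counter

AD_CATALOGUE = {
    "holiday": "Cheap flights and package holidays",
    "mortgage": "Compare home loan rates",
    "running": "New season running shoes",
}

def select_advert(email_text, catalogue=AD_CATALOGUE):
    """Return the advert whose keyword occurs most often in the message,
    or None if no keyword occurs at all."""
    word_counts = Counter(re.findall(r"[a-z]+", email_text.lower()))
    best_keyword = max(catalogue, key=lambda keyword: word_counts[keyword])
    return catalogue[best_keyword] if word_counts[best_keyword] > 0 else None

print(select_advert("Thinking about a holiday in June. Any good holiday deals?"))
# -> 'Cheap flights and package holidays'
```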
In medical contexts, it is a standard condition of the informed consent process that doctors discuss a recommended procedure or course of treatment with a prospective patient, so as to ensure that comprehension has been achieved (e.g. Wear 1998). It is not simply assumed that patients will comprehend complicated technical information disclosed for the purposes of enabling informed consent to be obtained. In the context of commercially provided software, it is perhaps unrealistic to suggest that companies should actively ensure that particular customers have comprehended technical information disclosed for the purposes of informed consent. However, it is reasonable to expect companies to do what they can to aid comprehension. Friedman et al. (2005, p. 515) describe the documents on Gmail's registration and user interfaces as going ‘a good distance toward helping ensuring comprehension’, and the language used in the relevant documentation is indeed admirably clear. Comprehension is more likely to be achieved by those who read the relevant documentation on Gmail's registration and user interfaces than by those who do not. Nevertheless, there may be significant numbers of Gmail users who consent to use a Gmail account without properly comprehending what they are consenting to. They consent to use a Gmail account, but it is possible that they do not provide effective informed consent to the use of a Gmail account.
Targeted behavioural advertising seems set to become more confusing than it already is, due to new technology that will enable Internet service providers (ISPs) to track the websites visited by their customers in order to support such advertising. The current market leader in this area is a company called Phorm, which has recently signed deals with the three biggest ISPs in the UK, BT, Virgin Media and TalkTalk, to enable those ISPs to use its technology (see www.economist.com/science/tq/displaystory.cfm?story_id=11482452). One's movements on the Internet can be tracked by HTTP cookies that have been downloaded onto one's computer, by search engines, by email providers and now by ISPs themselves. As well as raising obvious privacy issues, this situation raises significant concerns about autonomy. In order to provide effective informed consent to targeted behavioural advertising, computer users need to comprehend the means by which their behaviour is monitored. The fact that it may be monitored in a variety of different ways makes comprehension more difficult to achieve.
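The following sketch illustrates, in a highly simplified way, how a single identifier stored in a cookie allows one observer to link visits to otherwise unrelated websites into a single browsing profile. The site names are invented, and real tracking systems (whether cookie-based or, as with Phorm, operating at the ISP level) differ considerably in their details:

```python
# A highly simplified illustration of how one identifier, stored in a cookie
# and reported back to a single tracking party from many different sites,
# links separate page visits into one browsing profile. Site names are
# invented; real systems differ considerably in their details.

import uuid
from collections import defaultdict

browser_cookie_jar = {}                 # state held on the user's machine
tracker_profiles = defaultdict(list)    # state accumulated by the tracking party

def visit(site):
    """Simulate visiting a page that reports the visit to the tracking party."""
    # The first visit sets a persistent identifier; later visits reuse it.
    tracker_id = browser_cookie_jar.setdefault("tracker_id", str(uuid.uuid4()))
    tracker_profiles[tracker_id].append(site)
    return tracker_id

for page in ["news.example", "travel.example/holidays", "clinic.example/appointments"]:
    visit(page)

# One identifier now indexes the user's entire browsing history.
print(dict(tracker_profiles))
```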
The introduction and uptake of new technologies creates a host of new social situations in which individuals have to decide how to behave. In such new situations, we may not have had time to collectively develop norms to guide behaviour. The rapid growth of information technologies has led to a slew of circumstances in which it is currently unclear what the appropriate norms of behaviour are and what the scope of consent should be. Should someone's consent be obtained before a photograph or a piece of video footage containing their image is posted on a site on the Internet (these days such visual images may easily be acquired without people's knowledge or consent using mobile camera phones)? Is it acceptable to post a document that has been created by another person, and forwarded to you, on a publicly accessible website, without their consent? Should I be allowed to create a website devoted to publicizing personal information about a third party, without obtaining their consent? The answers to these questions are not clear, and whatever answers are given will be contested.
Over time, as people interact with new technologies and with one another, norms can be expected to emerge that will guide behaviour. Alternatively, their development may be guided by the deliberate activities of policy makers, activists, lawyers and ethicists. Questions about the proper scope of consent have a clear ethical import. The answers to them that we generally accept will have a significant role in determining the scope of the sphere of individual autonomy. If we answer these questions without considering their ethical aspects then legal, institutional and practical considerations will do much to shape the norms that govern our behaviour at the expense of ethical considerations. The technology that we use will tend to shape the ethics that we accept, and the ethics that we accept will do little to shape the technology that we use. The reader interested in knowing more about recent discussions of the ways in which ethical standards are shaped by new technology and in which new technologies may be shaped by ethical considerations may wish to consult Budinger and Budinger (2006), Spier (2001), van den Hoven and Weckert (2008) and Winston and Edelbach (2008).
14.2.3 Threats to safety
There are many potential dangers associated with the use of new technologies. Some of these can be removed before particular new technologies are made publicly available. Still, there are usually going to be public concerns about the safety of some of the new technologies that are available. Public concerns about the safety of new information and communication technologies (ICT) have not received the same media attention as have public concerns about new biotechnologies. Nevertheless, concerns have been voiced about, for example, the safety of mobile phones and the radiation emitted by them and by the masts used to transmit phone signals, as well as about ICT human implants. The latter include cardiac pacemakers, cochlear implants, RFIDs that are implanted subcutaneously, and implantable neurostimulation devices, which are used, among other things, to manage chronic pain and to control seizures in epileptics. More broadly, there is growing concern about the increased use of nanotechnology. In particular, there is much concern about the consequences for humans of inhaling manufactured nanoparticles, or otherwise ending up with exotic nanoparticles in their bodies (Jones 2007, p. 75). Indeed, because of the importance of miniaturization, nanotechnology is playing an increasingly significant role in ICT. Nanotechnologies are already used in the production of computer chips, information storage technologies and optoelectronics. In the near future, nanotechnology is expected to play a role in other areas of ICT, including hard disk technologies and sensor technologies (Royal Society and the Royal Academy of Engineering 2004, Chapter 3).
Those who voice concerns about the potential risks of using new technologies, including nanotechnology, often argue that we should apply the precautionary principle (PP) when evaluating their implementation. The PP is a conceptual tool, employed in risk management and in policy making in the face of uncertainty. The core intuition behind the PP is that, when in doubt, it is better to act precautiously, that is, it is ‘better to be safe than sorry’. This is, of course, a commonsense saying, and the PP is often defended as simply being an extension of everyday reasoning (Sandin 2007), although this characterization is open to dispute (Clarke 2009). The Independent Expert Group on Mobile Phones (2000) recommends the application of the PP to mobile phone use, and the European Group on Ethics in Science and New Technologies (2005) recommends its application to the use of ICT human implants. Som et al. (2004) suggest that the PP should be applied more frequently to information technologies than it has been, while both the European Commission Scientific Committee on Emerging and Newly Identified Health Risks (2006, p. 54) and the ETC group (2005, p. 16) recommend a precautionary approach to the use of new nanomaterials.
The PP is often contrasted with cost–benefit analysis (CBA), an approach to risk management in which one attempts to determine the probability of benefits occurring as well as the probability of costs being incurred, when the implementation of a new policy is being considered. The expected balance of costs and benefits, for a given policy option, is then compared with the equivalent balances of costs and benefits that would be expected to result from the introduction of alternatives to that policy, and the policy with the overall best balance of expected benefits over expected costs is selected. While CBA involves weighing expected costs and benefits, application of the PP involves a more exclusive focus on the potential costs of introducing a new policy. There are many different formulations of the PP. One of the best known is ‘Principle 15’ of the Rio Declaration on Environment and Development, which states that:
In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.
(United Nations Environment Programme 1992)
The Final Declaration of the First European ‘Seas at Risk’ Conference (1994) states that:
If the ‘worst case scenario’ for a certain activity is serious enough then even a small amount of doubt as to the safety of that activity is sufficient to stop it taking place.
(1994, Annex 1)
There are many other variants of the precautionary principle that could also be listed here (e.g. Som et al. 2004, pp. 788–789).
‘Principle 15’ exemplifies what is sometimes referred to as the weak version of the PP. It does not replace CBA and can usefully be understood as offering us guidance in the interpretation of CBA. It advises us to ensure that CBA is not used in a selective manner and that risks which are only established with some degree of confidence are considered in any application of CBA, alongside risks established with ‘full scientific certainty’. What weak versions of the PP have in common is that they instruct us to pay special attention to uncertainties, in one way or another, when formulating policy. The Final Declaration of the First European ‘Seas at Risk’ Conference, by contrast with ‘Principle 15’, is not compatible with CBA and is an example of what is sometimes referred to as a strong version of the PP. It advises us not to attempt to weigh the expected costs and benefits of a particular policy against one another, but to formulate policy by attending to that policy's potentially serious harms, under certain conditions, regardless of how beneficial the policy might be and even if the estimated probability of the serious harms occurring is extremely low.
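One schematic way of making this contrast vivid is as follows. The probabilities, valuations and ‘serious harm’ threshold in this sketch are invented, and nothing in the literature ties the PP to any particular formalization; it is offered only as an illustration of the structural difference between weighing expected costs against expected benefits and letting a possible serious harm settle the matter on its own:

```python
# A much simplified, purely illustrative contrast between CBA and a strong
# reading of the PP. All numbers are invented. On this picture a *weak* PP is
# not a separate decision rule but a constraint on the inputs to CBA: harms
# that are only uncertainly established must still appear in the 'harms' list.

def cost_benefit_analysis(policy):
    """Approve the policy if its expected benefits outweigh its expected costs."""
    expected_benefit = sum(p * value for p, value in policy["benefits"])
    expected_cost = sum(p * value for p, value in policy["harms"])
    return expected_benefit > expected_cost

def strong_pp(policy, serious_harm=100):
    """Reject the policy if any possible harm counts as serious, however
    unlikely it is and however large the expected benefits are."""
    return not any(value >= serious_harm for _, value in policy["harms"])

# Each outcome is a (probability, value) pair on an arbitrary invented scale.
mobile_phone_network = {
    "benefits": [(0.9, 50), (0.5, 20)],  # e.g. emergency calls, everyday convenience
    "harms": [(0.001, 200)],             # e.g. a remote chance of serious health damage
}

print(cost_benefit_analysis(mobile_phone_network))  # True: expected benefits dominate
print(strong_pp(mobile_phone_network))               # False: a possible serious harm vetoes it
```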
There is a well-known and seemingly devastating criticism of strong versions of the PP, which does not apply to typical weak versions of the PP (Sunstein 2005, Manson 2002). This is that strong versions of the PP, if applied consistently, lead to paradoxical outcomes. To see this, consider the application of a strong version of the PP to mobile phone use (note that the Independent Expert Group on Mobile Phones (2000) applies a weak version of the PP, so they would not go along with this line of reasoning). We do not know for sure what the risks to human health of exposure to radiation emitted by mobile phones and masts are, but they may be significant. So, an application of a strong version of the PP seems to lead to the conclusion that we should ban all mobile phones and masts until we have well-established data on what these effects are. However, if we do not have mobile phones and a functioning system for transmitting mobile phone signals available, then individuals who find themselves in emergency situations may be unable to contact others to get help, and human lives may be placed in jeopardy. Therefore, it seems that the application of a strong version of the PP leads to the conclusion that we should not ban mobile phones and masts. The consistent application of strong versions of the PP leads to the recommendation of contradictory policies and strong versions of the PP are, therefore, paradoxical.
There have been a number of attempts to defend strong versions of the PP from the charge of leading to paradox (e.g. Weckert and Moor 2006, Gardiner 2006). Although these appear to be unsuccessful (Clarke 2009), prominent defenders of the PP have not given up hope that a coherent version of the strong PP can and will be found (e.g. Sandin 2007, p. 102). Given that weak versions of the PP are not vulnerable to the charge of paradox, why do Sandin (2007) and others not simply adopt one of these and forego attempts to resuscitate strong versions of the PP? One answer to this question is that they may think that weak versions of the PP are too weak. If all the PP amounts to is a way of ensuring that the potential costs of a policy are fairly considered, then it must be allowed that these may sometimes be trumped by benefits when potential policies are being considered. If this is so, then applications of the PP may sometimes fail to recommend that we act precautiously. If the benefits of new mobile phone technologies, ICT implants and new nanomaterials are judged to outweigh costs, then, along with CBA, weak versions of the PP may end up recommending that we allow their use, subject to appropriate regulation. This may be good advice, but it is often not the advice those who advocate the PP hope to be able to recommend.

14.3 Future technologies

14.3.1 Catastrophic future scenarios
Possible catastrophic scenarios receive widespread public discussion. This should not be surprising. We have a vested interest in considering the scenarios that are most important to us, and possible futures in which humanity is exterminated, or experiences a miserable existence, are of obvious concern. Perhaps runaway technological development will lead us to become the authors of our own demise. Curiously, some of the most prominent science fiction dystopias are concerned not with the consequences of runaway technological development, but with technological development that has ceased to advance beyond the level that suits a future dictatorship. George Orwell's 1984 (1949) and Aldous Huxley's Brave New World (2006 [1932]) fall into this category. We will return to consider these famous novels, which, in different ways, exemplify themes that are recurrent in much contemporary extemporizing about the dangers of future technology. But first, let us consider some catastrophic scenarios that may result from unfettered technological development.
Future developments in information technology offer us the prospect of creating genuinely powerful artificial intelligence (AI). A threshold in the development of AI will be reached if and when an artificial intelligence is able to act so as to improve its own intelligence. At that point, an artificial agent may be able to become extremely intelligent – and vastly more intelligent than humans – very rapidly. This prospect is of obvious concern, as it is unclear how a powerful artificial superintelligence would regard its much less intelligent, and much less powerful, human creators. If the artificial superintelligence were ill-disposed towards humanity, then the prospects for the latter could be grim. One way of heading off this possibility would be to seek to design ‘friendly AI’. But it is unclear whether a friendly artificial agent that could rewrite its own programming would remain friendly for long. In any case, a powerful artificial agent that sought to act benevolently towards humans could not be guaranteed to act in ways that we would actually consider to be benevolent (Yudkowsky 2008).
An additional route to the possibility of inadvertently creating a malevolent superintelligence is via the possibility of uploading. An upload is a mind that has been transferred from a brain to a computer that is able to emulate the computational processes that occurred in the biological neural network located in the original brain. At present this is, of course, just a theoretical possibility, but it is one that has been taken seriously by some commentators for some time now (e.g. Hanson 1994). It could be much easier for an uploaded mind to increase its intelligence than it is for us biologically bounded beings. An uploaded mind which was connected to the Internet could access additional computational resources to dramatically improve itself (Bostrom 2002). In addition to worrying about the possibility of an artificial superintelligence that is ill-disposed towards us, we may need to be concerned about the possibility of a posthuman superintelligence that is ill-disposed towards us.
Another dramatic dystopian scenario, which is due to Eric Drexler (1986), is encapsulated in the ‘grey goo problem’. Drexler (1986) speculates that advances in nanotechnology may lead to nano-scale assemblers that can be used to rearrange matter one atom at a time. With the aid of significant computing power, such assemblers could be used to turn physical items into completely different physical items; waste material into diamonds, and so on. A concern here is that we might program such assemblers to turn other things into themselves. If such assemblers were created, and they were able to turn all other things into themselves, then the entire Universe could end up being composed only of these assemblers (it would be ‘grey goo’). Whether this is really possible is unclear, but the scenario is taken seriously by a number of commentators (Laurent and Petit 2005), and even a mild version of the grey goo problem, in which assemblers turned many other things into themselves, would be catastrophic.
It is difficult to know what to do about such speculative dystopian scenarios other than keep them in the back of our minds. Unless we have reason to think that these scenarios are at least somewhat likely to occur, the chance that they might possibly occur does not seem sufficient to outweigh the benefits of conducting research in artificial intelligence, nanotechnology and general information technology. So, it seems that neither CBA nor weak versions of the PP could be used to recommend restrictions on research in any of these areas of technology, in light of the possibility of catastrophic future scenarios. As we have seen, advocates of strong versions of the PP argue that we should not consider the potential benefits of new technologies, in circumstances where significant harms are possible, when formulating policies to manage risk. So, it might be thought that strong versions of the PP could be used as the conceptual basis for restrictions on research in particular areas of technology, in light of the possibility of catastrophic future scenarios. However, the paradoxical consequences of applying strong versions of the PP arise whatever the scenario, so they will appear in catastrophic scenarios just as they do in non-catastrophic ones (Clarke 2005).
14.3.2 1984
The events described in George Orwell's 1984 take place in Oceania, one of three warring states which, between them, rule all areas of the world in this dystopian novel. Oceania is an oppressive dictatorship, controlling its population with the aid of rigorous and systematic surveillance techniques. There is no possibility of privacy in the public domain in this society, as public areas are continuously monitored by cameras, hidden microphones and government spies. There is some possibility of privacy in the home – the traditional private sphere – but this is constantly under threat as children are indoctrinated to spy on their parents and others, and to report suspect activities to the ‘Thought Police’. 1984 has become emblematic of the fear that our future may be that of an all-encompassing surveillance society. This fear is encapsulated in the slogan of Oceania: ‘Big Brother is Watching You’.
Is the society depicted in 1984 particularly objectionable because the behaviour of its citizens is being continuously monitored, or because the information obtained is collected by agents of a totalitarian government and used to oppress that society's citizens? This question is explored, in a general form, by David Brin in The Transparent Society (1999). Brin argues that it is inevitable that our society is going to become an all-encompassing surveillance society. The important question for him is what sort of surveillance society we are going to become. He sees two broad alternatives. One sort of future society is the one familiar to Orwell's readers, in which surveillance technology is used to collect information about people's behaviour in public places, which is then transmitted to government agencies. In a second possible future society, the collected information is made available to everyone. In some ways, we seem to be evolving towards this second type of future society. Information about behaviour in public, including video footage, is now widely available on the Internet. Webcams, which can upload live feeds of video footage to the Internet, are increasingly common. And since 2007, Google Maps has offered a service called ‘Google Street View’ (see http://maps.google.com/help/maps/streetview/). This service currently contains regularly updated still photographs of streets in major American, Australian, French and Japanese cities, which are available for public access.
Brin (1999) argues that the second type of possible future society would be very different from, and much preferable to, the first; and it would also have many advantages over our current society. One advantage of such a society that he points to is that people, including employees of government agencies, would be deterred from attempting to commit crimes. In his view, the rise of CCTV cameras is already beginning to have this effect on our current society. Another example of our society taking a step in the direction of the second sort of surveillance society, which Brin (1999) points to, is the increasing popularity of ‘Kindercam’ (see www.kindercam.com), an online service that allows parents password-protected access to video cameras that monitor the day care centres hosting their children.
In effect, Brin (1999) is arguing that we will, and perhaps should, flout Nissenbaum's (2004) criterion of contextual integrity as a benchmark for privacy in the public sphere. One criticism that we might make of Nissenbaum (2004) is that a strict application of her criterion runs the danger of locking future societies into institutional structures that are geared around the norms of the present. Societies evolve and the norms that govern them can be expected to evolve accordingly. It is not hard to imagine that, in a future society, we will have little, or even no, expectation of privacy in the public sphere, and may care very little, or even not at all, for this type of privacy. Some argue that the very notion of a public sphere, with a set of attendant norms and expectations, is basically a product of modernity and so relatively recent in origin (Lyon 1994, p. 184). And if the notion of a public sphere is of relatively recent origin then so is the possibility of privacy in the public sphere. It may be that, as well as being a relatively recent phenomenon, privacy in the public sphere turns out to be a relatively short-lived phenomenon. Nissenbaum is aware that her position is susceptible to the charge of entrenching the status quo. She argues that, although it sets up a presumption in favour of the status quo, this does not mean that such a presumption cannot sometimes be overturned (2004, p. 127). Indeed, she argues that the status quo should sometimes be overturned, when we identify adequate reasons for doing so, grounded in fundamental social, moral and political values (2004, p. 129).
14.3.3 Brave New World
Orwell's 1984 is a dystopian novel portraying a repressive totalitarian state. Huxley's Brave New World is a dystopian novel portraying a benign dictatorship. In the future society depicted in Brave New World, the vast majority of the populace is unremittingly happy and unthinkingly obedient to its government. Citizens in this society are bred, not born, in ‘hatcheries’. A combination of selective breeding and the use of drugs administered as part of the process of foetal development ensures the production of suitable proportions of members of a rigid hierarchy of castes, bred to perform distinct work roles. These citizens are uniformly promiscuous and do not form deep romantic relationships. They are encouraged to be good consumers and discouraged from seeking solitude. Negative feelings are soon drowned out by the use of the socially approved drug ‘Soma’. In short, the lives of people who inhabit the society depicted in Brave New World are ones that we now think of as unrelentingly shallow.
In 1984 and in Brave New World, the potential for individual autonomy is much reduced from what it is today. In 1984, individual autonomy is undermined mostly from without, by a repressive state. In Brave New World, the autonomy of ordinary individuals is undermined mostly from within. They lack the motivation and means to question the ways in which they are encouraged to live, as their upbringing renders them almost completely incapable of reflecting critically on their circumstances.
In recent times, a group of scholars, who have come to be known as ‘bioconservatives’, has held up Brave New World as a warning of the potential dangers of ‘enhancing’ human beings. Bioconservative commentators, including Francis Fukuyama (2002), Leon Kass (2003) and Michael Sandel (2007), worry about the long-term societal consequences of allowing enhancement technologies, and worry that the use of these may result in us ceasing to be human and becoming ‘posthumans’. However, ‘transhumanists’, such as Bostrom (2003), argue that the use of enhancement technologies is likely to be beneficial for us overall and that it would be a good thing for us, all things considered, if we were to be transformed into posthumans.
Roughly, enhancement is the use of technology to raise people's physical and mental capacities above the levels which these might otherwise reach. Enhancement is conventionally contrasted with therapy, which aims to restore lost functioning, although this distinction is somewhat problematic. Nowadays, humans can enhance themselves by using performance-enhancing drugs, various forms of cosmetic surgery, and some non-cosmetic surgeries, such as laser eye surgery, which can improve vision above and beyond natural levels (Saletan 2005). There is a plethora of ways in which, it has been suggested, humans will become able to enhance themselves in the future, some of which involve possible future developments in ICT. These include the development of ‘collective cortex’ systems that aid in shared cognition, the development of software that will make human cognition more efficient and the development of software that mediates between the human mind and a wearable computer, a possibility that has been explored in some detail by Steve Mann (1997, 2001). If we understand the human mind very broadly, to include the ‘exoself’ of files, web pages, online identities and other personal information, then many other more conventional advances in ICT can be counted as contributions to human enhancement (Sandberg and Bostrom 2007).
One recurrent theme in bioconservative scholarship is that there is a danger that, once we are sufficiently enhanced, we may cease to be autonomous individuals. Future beings may become so integrated in collective communication structures that they become incapable of operating as individual autonomous agents. Furthermore, future beings may cease to desire individual autonomy. They may become the unquestioning, obedient subjects of Brave New World. But while bioconservatives see the posthuman world as something like a bad combination of Brave New World and 1984, Bostrom (2003) and other transhumanists imagine it as a tolerant and liberal society, in which enhanced and unenhanced individuals live side by side and respect one another's choices and lifestyles. One thing that seems clear is that debates about the consequences of allowing enhancement technologies will not go away any time soon.
Thanks to Rafaela Hillerbrand, Steve Matthews and Luciano Floridi for helpful comments on an earlier version of this chapter.