We now turn to the particular ways in which issues of autonomy and agency play out in our five selected technologies.
During the 1960s, the exponential growth of speed and storage capacity in integrated circuits was recognized, giving us what
is now known as
Moore's Law (Moore
1965). The 1970s saw the introduction of ‘
genetic algorithms’ (Holland
1975). Together, these developments suggested that the enormous power of evolution would soon be harnessed for computational purposes.
In the 1980s, early proponents of ALife imagined that success was just around the corner. However, the task of engineering
virtual computational environments in which ALife can flourish has proven much more difficult than its early advocates assumed.
This is partly because ALife researchers started out with what we can now see, with hindsight, were overly simplistic
ideas about the relationship between genotypes and phenotypes. For example, if an ALife researcher wanted to evolve artificial
creatures whose artificial neural networks were capable of some task, they typically assumed it was appropriate to have one
‘gene’ for each connection in the network to determine the strength (‘weight’) of that connection. The fact that there aren't
enough genes to code for every connection in the brain had been understood from at least the 1950s, even by computer scientists.
But the one gene-one weight approach to ALife was nonetheless adopted as a reasonable simplifying assumption to allow modelling
to begin.
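To make the one gene-one weight assumption concrete, the following sketch (in Python; the network size, fitness function and parameter values are invented purely for illustration, and it represents no particular researcher's system) shows the kind of direct encoding that was typical: the genotype-to-phenotype mapping amounts to nothing more than copying a list of numbers into the network's weight matrices.

    import random

    # Hypothetical feedforward network: 4 inputs, 6 hidden units, 2 outputs.
    N_IN, N_HID, N_OUT = 4, 6, 2
    GENOME_LEN = N_IN * N_HID + N_HID * N_OUT   # one 'gene' per connection weight

    def random_genome():
        return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

    def decode(genome):
        """Direct genotype-to-phenotype map: the genome simply is the weights."""
        w1 = [genome[i * N_HID:(i + 1) * N_HID] for i in range(N_IN)]
        offset = N_IN * N_HID
        w2 = [genome[offset + j * N_OUT:offset + (j + 1) * N_OUT] for j in range(N_HID)]
        return w1, w2

    def fitness(genome):
        # Placeholder: a real ALife experiment would evaluate the decoded
        # network's behaviour in a simulated environment.
        return -abs(sum(genome))

    def evolve(pop_size=50, generations=100, mutation_rate=0.05):
        population = [random_genome() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)   # best genomes first
            survivors = population[:pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                mum, dad = random.sample(survivors, 2)
                cut = random.randrange(GENOME_LEN)        # one-point crossover
                child = mum[:cut] + dad[cut:]
                for i in range(GENOME_LEN):               # pointwise mutation
                    if random.random() < mutation_rate:
                        child[i] += random.gauss(0.0, 0.1)
                children.append(child)
            population = survivors + children
        return max(population, key=fitness)

    best_weights = decode(evolve())

The decode function is the point of the sketch: it is a trivial one-to-one copy, and it is precisely this assumed simplicity of the genotype-phenotype relationship that, as the following paragraphs explain, biology has shown to be unrealistic.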
This stance was partly based on the old dogma from biology that genes code for proteins in a one-to-one fashion. However,
the genome mapping projects that were started in the 1990s and culminated with the publication of the first complete human
genome in 2003 taught biologists that the development of organisms is far more complicated than this dogma suggests. The construction
of cells and living organisms requires many resources not just from outside the DNA but from outside the cell and outside
the organism itself. There is much machinery between transcription of a piece of DNA and the appearance of a specific protein
in a cell. Furthermore, the so-called regulatory sequences of DNA need not code for proteins at all, and even those pieces
of DNA that
are used for the production of proteins do not directly specify the proteins expressed, but are subject to processes such
as post-transcriptional editing of the RNA transcript. The result is that there is no one-to-one correspondence between genes and products, let
alone between genes and phenotypic traits. The interactions are fearsomely complex, and it is part of the reason why humans
and flatworms can be so different anatomically and behaviourally despite the fact that they have roughly the same number of
genes, with many of those genes even being shared. So far this discussion has focused on the evolution of virtual organisms
for computationally defined environments, but progress is also being made on evolvable hardware. Much of this work reinforces
the lessons about the complexity of physical interactions and the surprising ways in which physical components can interact,
even sometimes confounding the expectations of
engineers (Thompson
1996, Lohn and Hornby
2006).
Until the process of organismic development is more fully understood, ALife ‘organisms’ will remain but caricatures of actual
living entities. Although there's plenty that's artificial about ALife, there's still not enough that's significantly life-like
about it.
Computer viruses and worms are sometimes claimed to be examples of successful artificial
life forms; the extent to which such pieces of ‘malware’ successfully replicate themselves from one computer to another suggests
a life-like process. However, these automatically propagating programs do not provide an example of an adaptive, self-regulatory
process. They do not make use of (simulated) genetic mechanisms, and there is not a single case of a naturally occurring mutation in
a virus or worm leading to an adaptive change in the means by which it propagates or infects vulnerable systems. Although
‘polymorphic’ and ‘metamorphic’ viruses are programmed to reorganize their code as a means to evade detection by virus scanners,
these variations do not allow the viruses to adapt to changing ecological conditions, such as changes in the host operating
systems, or transmission to entirely different operating systems, and they have no way to adapt to more sophisticated scanners.
Despite the utility of evolutionary algorithms for finding efficient solutions to difficult hardware and software design problems,
these approaches yield systems that can be hard to analyse and understand. This lack of transparency, due to the complexity
of the interactions involved, may be a source of worry about effective risk analysis for those who wish to deploy artificially
evolved hardware or software. Nevertheless, current ALife technology is far from sustaining adaptive, self-regulatory and
self-reproducing lineages of autonomous agents. The technological limitations noted here for both virtual and physically embodied
ALife should, therefore, temper concerns about ALife somehow escaping the clutches of scientists, and running amok. Worries
about the perils and ethical consequences of life in silico are destined to remain the stuff of science fiction for the foreseeable future. Many commentators believe, however, that a more plausible
path towards artificial life forms consists in hybridization of biological life forms with machines: cyborgs.
Embodied artificial agents are designed for operation in physical environments. Cyborgs are one main genus of embodied artificial
agents, the other main genus being robots, which are completely inorganic in nature.
Cyborg technology covers a very wide spectrum of hybrid systems. It may be difficult to give a sharp definition that captures
just what interests us about cyborgs in this context. A common dictionary definition, for example, identifies a cyborg as
a ‘person whose abilities are extended beyond normal human limitations by mechanical elements built into the body’. In this
case, anyone with
dental implants might count as a cyborg, if those artificial teeth are considerably stronger or more cavity resistant than
their natural counterparts.
Prosthetic devices of all kinds are capable of enhancing human abilities, and the 2008 case of whether double amputee
Oscar Pistorius could compete in the Olympic Games with his ‘Cheetah’ blade legs highlighted the extent to which artificial
limbs might provide an ‘unfair’ advantage because their material characteristics provide more efficient energy storage and
return than natural legs. In the end, Pistorius failed to beat the qualifying time for entry to the 400 m event, although
his time certainly put him well beyond average human running speed. Perhaps, however, the dictionary definition excludes Oscar
Pistorius from the category of cyborgs on the grounds that his Cheetah blades were strapped on, not ‘built into’ the body.
Still, it is not too much of a stretch to imagine such prostheses soon being surgically implanted via an artificial knee or
hip joint. Other borderline cases of cyborg technology include robotic
exoskeletons that are being developed for military use (e.g., by the Utah-based robotics company Sarcos) and hybrid robots
that use actual biological neurons grafted onto a
multi-electrode array (MEA) to control simulated and embodied systems (Potter, Wagenaar and DeMarse
2006; as widely reported by news media, DeMarse trained 25,000 rat neurons on an MEA to control a flight simulator).
Setting definitional matters aside, the kinds of cyborgs that have the most interest for the purposes of this chapter are
those which offer cognitive enhancements to humans through implanted computational devices. For many citizens of technologically
advanced societies,
cell phones, PDAs and pocket Wi-Fi devices have become indispensable external aids to our fallible memories and limited knowledge
bases. Prototypes for cell phone implants have already been developed, but while these are physically embedded within the
human body, they so far involve no direct links to the human nervous system. External readings of brain waves via EEG technology
are being tested for
the control of
speech synthesizers, wheelchairs and avatars in virtual environments. The prospect of even tighter integration of information
technology with human minds via direct implantation of devices into the human nervous system seems to offer enormous possibilities
for cognitive enhancement.
Neural-cognitive implants are being investigated mostly with medical applications in mind, though some are pursued for the sheer thrill of
being at a technological frontier. Thus, brain–computer interfaces are being investigated for speech synthesis and brain-implanted
electrodes have been used to control prosthetic limbs in monkeys and humans (Lebedev and Nicolelis
2006). At the more speculative and futuristic end of the spectrum, Kevin Warwick, professor of cybernetics at Reading University,
has experimented with a variety of implants (Warwick
2004).
These technological developments raise some very general ethical issues that accompany the use of any technology. For example,
the adoption of advanced technologies frequently widens the gap between haves and have-nots. Much has been made of the ‘
digital divide’ between those who have access to computers and the Internet, and those who do not (see, e.g., Servon
2002). The ‘
cyborg divide’ could be just as significant if these enhancements provide major advantages to those who adopt them. Specific
enhancements also raise particular ethical issues. For instance, there are already hearing enhancers that can increase hearing
sensitivity twentyfold, which means that you may no longer safely assume that a distant person cannot hear your private conversation.
Human-cyborg technologies afford greatly enhanced agency in the real world through an increase in cognitive and physical capacities.
If a cyborg human can do the physical work of ten men or the intellectual work of a team of experts, this has potentially
profound consequences for increasing the freedom and autonomy of those who are enhanced, and decreasing the freedom and autonomy
of those who are not.
While cyborgs are still mostly confined to the research laboratory,
robots have long been a presence in the human environment. Industrial robots are very widely used, but these special-purpose
machines are typically bolted to the factory floor where they do one job, and one job only. In non-industrial applications,
however, autonomous or semi-autonomous robots that are free to roam and may have multiple capabilities are becoming mass-market
consumer items as well as occupying more specialized niches. One morally significant area where the widespread use of robots
is currently envisaged is in the burgeoning field of
elder-care (Anderson and Anderson
2008). Technologically advanced countries are facing a large bulge in their elderly populations. Japan in particular
faces the challenge of having too few health care workers to care for its aging population, and the introduction of
robots in this context has been made an official government policy. So-called ‘
carebots’ may be responsible for dispensing medicines, making sure they are taken, encouraging exercise and providing basic
companionship (Floridi
2008a).
The field of robotics is also a major focus for
military planners, for example the
US Army's Future Combat Systems program. At the same time there is increasing penetration of semi-autonomous robots into homes,
whether as programmable
robotic toys (e.g., Sony's now-discontinued AIBO robotic dog, or current offerings such as Ugobe's Pleo robotic baby dinosaur)
or as household appliances (e.g., iRobot's
Roomba robotic vacuum cleaner). Roboticist and iRobot founder Rodney Brooks argues that there is presently an inverse relationship
between price and autonomy (Brooks
2007). The
expense of military robots means that there is an incentive to place them under human supervision to protect the investment.
But home robots must be cheap to be commercially successful, and their owners do not want to have to supervise them continually.
Neither does the average home contain the infrastructure that would be required for comprehensive monitoring and control.
Thus the expensive robots that have been deployed for space exploration or military applications have thus far been almost
entirely tele-operated, while the cheap robots intended for the mass market are relatively uncontrolled. Although Brooks may,
for the time being, be right that it is the cheap robots that are left relatively out of control, the logic of military deployment
nevertheless drives towards giving robots greater autonomy (Wallach and Allen
2008). For instance, the ‘unmanned’ drones that have been used extensively by the US military for missions in Afghanistan, Pakistan
and Iraq, are in fact tele-operated from a base in Nevada by crews of four or more highly trained individuals. Increasing
the autonomy of the drones would allow more of them to be flown with the same number of operators. The advantages for military
superiority of having greater numbers of fighting units provide an incentive for increasing machine autonomy.
To work effectively with humans, robots need to engage human interest and maintain it. The field
of human–robot interaction (HRI) investigates visual, linguistic and other cues that support engaging interactions. Among
the characteristics that are systematically investigated by HRI scientists are the emotionally significant facial expressions
and non-verbal properties of speech, such as rhythm or prosody, that play a significant role in human social interaction.
These ‘minimal cognition’ cues are a focus of behaviour-based robotics (Breazeal
2002), and have proven surprisingly effective in giving people the sense that they are dealing with intelligent agents. However,
some critics fear that the addition of such traits to robots is fundamentally deceptive, relying on the strong tendency of
humans to anthropomorphize objects by projecting human-like characteristics onto things that don't have them. This may prove
especially problematic in the context of elder-care, with people who are relatively starved of human companionship (Turkle
2005, Turkle
et al.
2006).
All of these applications of robotics – whether home, military or healthcare – involve their own ethical issues. This is especially
clear for military applications where there is considerable concern about ensuring that combat robots follow acceptable rules
of war, as encoded by the Geneva conventions
for instance (Arkin
2007), and for healthcare applications where issues of patients’ rights (for instance, the right to refuse medication) are of concern
(Anderson and Anderson
2007). Arkin (
2004) has also suggested that the likely military, service and sexual applications of robots may serve to revive callous attitudes
towards life and liberty that characterized earlier stages of human history, thus undermining moral progress in domains such
as the formal abolition of slavery. Arkin neatly captures these concerns with his title phrase, ‘
Bombs, Bonding, and Bondage’.
More generically, the presence of autonomous robots in real-world environments, where it may not always be possible to constrain
their actions to well-defined contexts (robots do roam), raises questions about whether these machines will need to have on-board
ethical
decision-making capacities (Moor
2006, Turilli
2007, Wallach and Allen
2008). With the current state of robotics and artificial intelligence, the suggestion that artificial moral agents are possible,
let alone necessary, may seem far-fetched. However, assessments of the autonomy and morality of artificial agents may hinge
less on ‘deep’ metaphysical facts about moral agency, and more on the fact that people will adopt different stances towards
artificial agency based on whether they understand (or care to understand) the underlying mechanisms (Floridi and Sanders
2004, Grodzinsky
et al.
2008). The need for such capacities is illustrated by the fact that artificial agents operating in virtual environments are already autonomously making
decisions with potentially significant ethical consequences, even though they are blind to those consequences. The computer
that denies your credit card purchase makes no prediction about whether this will ruin your day or your life, and gathers
no information that might help it make such a determination.
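A deliberately simplified sketch (in Python; the risk factors, weights and threshold are all invented for illustration and do not describe any real issuer's system) makes this ethical blindness concrete: nothing in the approval rule represents, or could represent, what refusal would mean for the particular customer.

    from dataclasses import dataclass

    @dataclass
    class Transaction:
        amount: float
        distance_from_home_km: float
        purchases_last_hour: int
        merchant_risk: float          # 0.0 (low risk) to 1.0 (high risk)

    def risk_score(t: Transaction) -> float:
        # Invented weights standing in for a statistically fitted model:
        # only general patterns of fraud risk are consulted.
        return (0.002 * t.amount
                + 0.001 * t.distance_from_home_km
                + 0.15 * t.purchases_last_hour
                + 0.5 * t.merchant_risk)

    def approve(t: Transaction) -> bool:
        return risk_score(t) < 1.0    # arbitrary cut-off

    # The decision is identical whether the purchase is a luxury item or a
    # life-saving medicine: no input encodes the consequences of refusal.
    print(approve(Transaction(amount=300.0, distance_from_home_km=2000.0,
                              purchases_last_hour=3, merchant_risk=0.4)))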
Robotics thus forces us to think hard about whether and how autonomous agency might be located or replicated in the computer
algorithms that control our most advanced technologies. The challenge of building machines that can be regarded as artificial
moral agents raises not only questions about the ethics of doing so, but also raises questions about the nature of ethics
itself, such as whether ethical rules and principles are the kinds of things that can effectively guide behaviour in real-time
decision-
making (Wallach and Allen
2008).
Virtual artificial agents (aka ‘bots’) already exist. They operate inside computationally generated environments ranging from
the Internet as a whole, to electronic markets such as
NASDAQ and eBay, to computer games, and they exist within networked virtual worlds such as
Second Life that are designed primarily for entertainment, but are also increasingly being used for communication and education. Various
kinds of autonomous systems have been built for these contexts. On the Internet as a whole there are
information-gathering bots, for example the
web crawlers used by search engines, which gather information with varying degrees of respect for the privacy of that information.
In electronic markets, there are automated trading and bidding programs that carry out transactions based on formulas that
in some cases are beyond the comprehension of those who rely upon them.
Credit card approval decisions rely on the evaluation of multiple factors, whose combined effects may not have been fully
appreciated or anticipated by their programmers, and that consider only general statistical patterns rather than the particular
needs of the individual purchaser. Software games have long included agents with some degree of artificial intelligence, and
the sophistication of these agents is continuously increasing.
The presence of bots in contexts where people are seeking social interaction has been a persistent concern to ethicists and
technologists. These concerns predate the rich, virtual, multi-user environments exemplified by
Second Life. They were raised in the context of the early text-based ‘
‘MUDs’ (multi-user dungeons) and ‘MOOs’ (MUDs, object-oriented) that were constructed on early computer networks using only
textual communication, and these concerns were raised even earlier than that in the context of artificial intelligence. A
frequent concern is that people are easily tricked into forming social attachments to entities that are incapable of reciprocating
them. This worry arose in the early days of artificial intelligence research in connection with
ELIZA (Weizenbaum
1965), a program that simulated (or, perhaps, parodied) the question-asking technique of
Rogerian psychotherapy, and that is credited as being the first ‘chatterbot’ or ‘chatbot’ – a piece of software which attempts
to sustain a conversation with a human interlocutor. Carl Sagan enthusiastically suggested that in the future there would
be computer psychotherapists available in street corner booths everywhere. Weizenbaum emphatically rejected this vision of
the future and renounced his work on ELIZA, maintaining that computers would always lack the important human qualities of
compassion and wisdom (Weizenbaum
1976).
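To give a sense of how little machinery lies behind the effect, the following is a minimal ELIZA-style sketch (in Python; the patterns and responses are a few hand-written rules in the spirit of Weizenbaum's Rogerian script, not his original program):

    import random
    import re

    # A handful of keyword rules; captured fragments are echoed back as questions.
    RULES = [
        (r"\bI need (.+)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (r"\bI am (.+)", ["How long have you been {0}?", "Why do you tell me you are {0}?"]),
        (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
        (r"\bbecause\b", ["Is that the real reason?"]),
    ]
    DEFAULTS = ["Please go on.", "How does that make you feel?", "Can you elaborate on that?"]

    # First-person words are 'reflected' into second person before echoing.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you are"}

    def reflect(fragment: str) -> str:
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

    def respond(statement: str) -> str:
        for pattern, responses in RULES:
            match = re.search(pattern, statement, re.IGNORECASE)
            if match:
                fragments = [reflect(g) for g in match.groups()]
                return random.choice(responses).format(*fragments)
        return random.choice(DEFAULTS)

    print(respond("I need a holiday"))         # e.g. "Why do you need a holiday?"
    print(respond("My mother never listens"))  # "Tell me more about your mother."

Everything the program ‘says’ is produced by shallow pattern matching and pronoun reflection; there is no model of the conversation, the speaker or the world.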
The fact that people tend to overestimate the intelligence and complexity of computer programs is now known to AI researchers
as the ‘ELIZA effect’. It is a manifestation of the previously mentioned human tendency to anthropomorphize things. However, this tendency may be exacerbated by the much richer graphics-based virtual environments
that have become common in the forty years since ELIZA's inception. Furthermore, in mixed virtual environments where some
avatars are human-operated and others operated by autonomous software, it may be especially difficult to distinguish which
are which. This is partly because the range of human actions in a virtual world is itself limited by the constraints on expression
and action that are imposed by the rules of the virtual environment, a topic we shall return to in the following sections.
Although autonomous agents in virtual worlds are sometimes difficult for other users to distinguish from real human beings
operating in those worlds,
they are just as frequently all too easy to identify. And while there are many systems operating in virtual environments without
direct human oversight, all of them are ‘ethically blind’ (Wallach and Allen
2008) in the sense that they lack specific information that would be relevant to ethical decision-making, they have no means of
assessing the ethically relevant effects of their decisions, and they lack other capacities such as empathy that are important
to human morality.
Work is also being done on virtual equivalents of cyborgs –
on-screen avatars that combine tele-operation by humans with software enhancement. For instance, an on-screen avatar in a
virtual reality context may derive its behaviour and appearance from the actions and facial expressions of a person connected
to bodily motion sensors and cameras capable of providing enough information to render a three-dimensional model of the actor
in the virtual setting. But such a tele-operated avatar can also be filtered through an intermediate layer of software and
enhanced in specific ways, for instance by blending the appearance of the remote operator with the face of a famous person,
or by enhancing and sustaining facial expressions that are known to influence the responses of other people to the avatar.
The potential for ‘persuasion’ by non-verbal means is only just beginning to be investigated, but the potential for ethical
abuse in such situations is already
clear (Bailenson
et al.
2008).
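The kind of enhancement at issue can be as simple as a weighted average applied to the avatar's face model before each frame is rendered. The sketch below (in Python; the parameter vectors, the ‘smile’ channel and the weights are all invented for illustration) blends the operator's tracked facial parameters towards a stored target face and exaggerates one expression channel:

    from typing import List

    def blend_faces(operator: List[float], target: List[float],
                    similarity: float = 0.2) -> List[float]:
        """Mix a fraction of the target face into the operator's tracked face."""
        return [(1 - similarity) * o + similarity * t
                for o, t in zip(operator, target)]

    def amplify_expression(face: List[float], channel: int,
                           gain: float = 1.5) -> List[float]:
        """Exaggerate one expression channel (here, an assumed 'smile' parameter)."""
        out = list(face)
        out[channel] = min(1.0, out[channel] * gain)
        return out

    # Per rendered frame: filter the tracked parameters before display.
    operator_frame = [0.10, 0.40, 0.30, 0.55]   # captured from sensors (invented values)
    famous_face    = [0.20, 0.35, 0.60, 0.50]   # stored template (invented values)
    enhanced = amplify_expression(blend_faces(operator_frame, famous_face), channel=3)
    print(enhanced)

Because the filtering happens between capture and display, the other participants have no way of knowing that what they see is not a faithful rendering of the operator.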
We have already touched upon machine-centred VR in discussing the bots that operate there. While the autonomy of such agents
is relatively limited at present, nevertheless the construction of machine-centred VR gives artificial agents an advantage
compared to humans attempting to operate in the same environments. The simple example of online auctions illustrates this.
Because the format in which information is presented favours machines, they are much more capable than humans of precisely
timing a final bid to win an auction at the last second.
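A toy sketch (in Python; place_bid is a stand-in for whatever interface a real marketplace would expose, not an actual API) shows why the asymmetry matters: the program simply sleeps until a configurable margin before the advertised closing time and then bids, with a reliability of timing no human user can match.

    import time
    from datetime import datetime, timedelta, timezone

    def place_bid(item_id: str, amount: float) -> None:
        # Stand-in for a real marketplace call; here we just log the attempt.
        print(f"{datetime.now(timezone.utc).isoformat()} bid {amount} on {item_id}")

    def snipe(item_id: str, max_bid: float, auction_end: datetime,
              margin_seconds: float = 0.5) -> None:
        """Sleep until just before the auction closes, then bid once."""
        fire_at = auction_end - timedelta(seconds=margin_seconds)
        delay = (fire_at - datetime.now(timezone.utc)).total_seconds()
        if delay > 0:
            time.sleep(delay)              # sub-second timing, unattainable by hand
        place_bid(item_id, max_bid)

    snipe("item-123", 42.50,
          auction_end=datetime.now(timezone.utc) + timedelta(seconds=5))

The bot's advantage comes entirely from the fact that the closing time and the bidding interface are presented in machine-readable form.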
Of course, it is not all bad, and people reap benefits from having their software systems operate in environments that have
been shaped so as to make it easy for the software rather than the people. Nevertheless, the design decisions involved in
shaping such environments often have unintended effects, just as the shaping of our physical environments to accommodate motorized
vehicles has had unintended effects on our ability to walk or use other modes of transportation, which in turn has both positive
and negative consequences for general health and well-being. Similarly, the original use of the very limited set of characters
belonging to the
ASCII code made it easy for machines to process documents originally written in English, and for computers (and their English-speaking
users) to communicate over networks. But the limitations
of ASCII made it difficult for speakers of languages other than English to represent their documents in machine-processable
formats and to communicate via email, instant messaging, and other information and communication technologies (ICTs). The
adoption of
Unicode, with its much richer character sets, has gone a long way towards rectifying these problems, of course, but not without
the expense entailed in updating the software and revising archived materials so as to provide forward compatibility. Furthermore,
ICTs embody other
culture-specific assumptions about communication, such as the relative importance of words or text over non-verbal aspects
of communication – for example, the facial and bodily gestures which in some cultures are important for showing proper respect
to others. Thus it is important to recognize that technologies which appear to be value-neutral from one cultural perspective
may in fact raise ethical issues in a different cultural context (Ess
2009).
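The point about ASCII and Unicode can be made concrete in a few lines of code: text that happens to be in English passes through the 7-bit ASCII bottleneck untouched, while text in many other languages cannot be represented in ASCII at all and requires a Unicode encoding such as UTF-8 (the sample strings below are arbitrary illustrations).

    samples = ["ASCII was fine for this sentence.",
               "Grüße aus München!",      # German, with umlauts
               "情報倫理"]                  # Japanese: 'information ethics'

    for text in samples:
        try:
            text.encode("ascii")
            print("ASCII ok:    ", text)
        except UnicodeEncodeError:
            print("ASCII fails: ", text)
        # UTF-8, a Unicode encoding, represents every sample without loss.
        print("UTF-8 bytes: ", len(text.encode("utf-8")))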
The ethical issues raised by machine-centred VR environments are thus not just about the artificial agents operating within
them, but concern the limitations placed by these environments upon people who attempt to operate within them. Current metaphors for information retrieval are frequently library-based, such as browsing and searching. Science fiction
writers have long dreamed of technologies that would transform digitally encoded machine-centred VRs into information environments,
epitomized by the brain–computer interface that the primary character of William Gibson's Neuromancer uses to ‘enter’, perceive and move around in cyberspace as if it were physical space. The field of information visualization
is beginning to provide tools that support rich visual presentation of abstract information, and has been especially successful
in providing visual representations of information networks. The much richer anthropomorphic VR access to the machine-centred
underpinnings of the Internet envisaged by science fiction writers such as Gibson awaits the invention of appropriate physical,
spatial and visual metaphors for the underlying information.
Immersive, computer-generated environments attempt to present a ‘realistic’ world of experience to users. As with the
previous section on machine-centred VR, our concern in this section is not with the agents populating these virtual realms, but with the ethical
issues underlying the features of the environments themselves and the uses to which they are put (Brey
1999b). Anthropomorphic VR has been deployed for many purposes, including games, cybersex, tele-conferencing, pilot training, soldier
training and elementary school education, and it is under investigation for use in many more applications, for instance police
lineups. These applications make different demands on the amount of realism involved.
Flight simulators, for instance, aim to be realistic in almost all respects.
Educational applications of anthropomorphic VR do not always
aim for full realism, since pedagogical objectives may lead their designers to limit both the range of actions that individuals
can take within the virtual world and the range and complexity of responses to user actions. Consider, for example,
all the ways that a school science project or field trip can go awry. A VR environment that is intended to substitute for
a field trip may be deliberately designed so that it is less likely to fail (Barab
et al.
2007). One might ask whether such a deliberate departure from realism teaches children the wrong lesson about how easy it is to
do good scientific research, by oversimplifying the task of data collection and instilling in them the expectation that the
world is more predictable than it actually is. But this seems like a relatively minor worry when the alternative might be
no field trip at all because of the difficulty and expense
involved.
VR-based games typically depart from realism in numerous ways. Games may, for instance, enable users to pretend that they
have magical powers that can transcend the limitations of the physical world. It is hard to see how exercising the capacity
to act out such fantasies would be harmful or unethical, except insofar as the activities are so intrinsically rewarding that
they lead to addictive behaviour. A more worrisome aspect of anthropomorphic VR concerns what it affords to individuals by
way of freedom to act out violent or coercive fantasies. When the person is operating as a single user within an isolated
VR application, the ethical issues raised by actions within that environment are necessarily indirect. Their indirectness
may not make them any less pressing, however, as concerns are widespread about how the ability to rehearse violent, misanthropic
or misogynistic actions in video games may desensitize users to such behaviour in the real world, or even provide a training
effect that makes game players more likely to engage in similar actions outside of the VR context. However, there is also
a longstanding response that violent games actually divert antisocial propensities into a context where they are harmless.
The scientific research on this topic remains controversial (but see Ess
2009 for a balanced treatment).
When the VR context involves networked computers and multiple users, the ethical issues take on a different character because
there is the possibility of direct harm to other users. A case of ‘
cyberbullying’ of a teenager by an adult in the United States using the
MySpace social networking site allegedly led to the teenager's suicide, but resulted only in conviction in 2008 on misdemeanour charges
of illegal computer access; a more serious charge of conspiracy was dismissed. Social networking in VR environments also allows
users to simulate acts of violence or coercion that would be unethical or socially unacceptable outside the VR context. For
instance, a case of virtual rape has been reported in
Second Life and investigated by Belgian police. Second Life also creates real opportunities for fraud and for the theft of valuable
property. The question of whether VR relationships can be adulterous has also been raised (Thomas
2004, Stuart
2008; see also Ess
2009), and regardless of the
philosophical discussion of this point, there have been cases where online relationships have resulted in real-world divorce.
In affording the opportunity to transcend physical or social barriers, anthropomorphic VR provides technological enhancement
of personal autonomy. Such opportunities are not ethically neutral, however. They may affect other individuals directly, or
have indirect effects by training or reinforcing habits of action that may spill over into the real world.