A host of recently available technologies enable governments, corporations and individuals to monitor individual activity
in ways that would have been impossible only a few decades ago. Closed Circuit Television (CCTV) cameras transmit a signal to a restricted set of monitors and are used in the surveillance
of specific locations. The Global Positioning System (GPS) utilizes satellite-based technology to accurately pinpoint the location of objects and individuals
equipped with receivers, anywhere on the surface of the Earth. Radio Frequency Identification (RFID) tags can also be used to identify the location of objects and individuals. RFIDs are small
electronic tags that are mostly used by retailers to keep track of their products. They are either attached to the surfaces
of products or implanted within them. They have also been implanted in animals and in a few cases in humans.
In 2004, the Food and Drug Administration approved the implantation of the
VeriChip in humans (Foster and Jaeger, forthcoming). By 2006, VeriChips had been implanted in about 70 people in the US, mostly for medical
reasons and to control access to high security areas (Waters
2006). It has been variously suggested that RFIDs should be implanted in employees of certain companies, immigrants and guest
workers in the United States, sex offenders and US soldiers (Foster and Jaeger, forthcoming). Many commentators think that
the use of RFIDs will become much more widespread soon. According to van den Hoven and Vermaas, ‘Governments and the global
business world are preparing for a large-scale implementation of RFID technology in the first decades of the 21st century’
(2007, p. 291). Eastman Kodak has filed patent applications on a new technology that will enable RFID chips to be ingested
(Tedjasaputra
2007). If this technology becomes readily available, it may become very easy to use RFID chips to monitor the whereabouts of individuals,
without their consent, or even without them knowing that their movements are being monitored.
Unsurprisingly, the prospect of RFIDs being used to monitor the whereabouts of humans has met with fierce resistance from
privacy advocates, such as
CASPIAN (Consumers Against Supermarket Privacy Invasion and Numbering; www.nocards.org/), and has prompted various publications
warning
of the threat to individual privacy. RFIDs are considered particularly suspect by some
fundamentalist Christians, who see them as the ‘Mark of the Beast’ that we are warned about in the
Book of Revelation (Albrecht and Macintyre
2006). In response to such concerns, three US states, California, Wisconsin and North Dakota, have passed laws prohibiting the
forced implantation of
RFIDs (Anderson
2007).
As well as technology that can monitor our movements, there are various technologies that can be used to interpret information
collected about us. The interpretation of information provided by CCTV and other forms of video surveillance can be assisted
by the use of
face and gait recognition systems (Liu and Sarkar
2007). Information collected by aural surveillance devices can be interpreted using devices such as the
‘Truth Phone’, which analyses voice stress during telephone calls, in an attempt to detect lying (Davies
2003, p. 21). The
‘love detector’ operates similarly, detecting levels of excitement and arousal in speech in an attempt to identify people's
feelings for those they are speaking to (see
www.love-detector.com/index.php).
In addition to concerns about the aspects of our lives that are being monitored and concerns about how collected data can
be analysed, there are concerns about who has access to personal data. Companies such as Acxiom specialize in buying data
from businesses about their customers, integrating these into powerful databases, and selling access to the resulting databases
to other businesses. Customers, who may be willing to accept that data about them may be made available to particular businesses
that they have dealings with, may also be very unhappy about that data being passed on to other companies. Also of concern
is the placement of data about individuals, such as court records, on the Internet. Enabling free and easy access to data
that might otherwise be hard to access radically increases the number of their potential users and can alter the nature of
their uses (Nissenbaum
2004).
It is widely accepted that individuals are entitled to certain forms of informational privacy and that certain information,
such as personal financial data, which may find its way into some parts of the public realm, should be kept out of other
parts of the public realm. Nissenbaum (
2004) has argued that we should accept
‘contextual integrity’ as a benchmark for informational privacy in the public sphere. According to her, different contexts
within the public sphere – the context of friendship, the context of the classroom and so on – are implicitly governed by
particular norms of behaviour, including norms relating to respect for privacy. If her position is accepted, then information
should not be made generally available, within a particular context, without due regard for the governing norms that implicitly
shape our sense of what is appropriate within that context. Furthermore, information of a type that it is appropriate to make
generally available in one context should not be transferred to a different context without due regard for norms that implicitly
govern the flow of information between these particular contexts. Later,
we will see that the scope of Nissenbaum's claims regarding the importance of contextual integrity stands in need of qualification,
as Nissenbaum herself
allows.
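To make the structure of this proposal concrete, the following sketch models contexts, norms of appropriateness and norms of flow as simple data structures, and checks whether a proposed transfer of information respects both. It is only an illustration of the general shape of a contextual-integrity check: the contexts, information types and norms listed are invented for the example and are not drawn from Nissenbaum's own work.

```python
# Toy sketch of a contextual-integrity check (illustrative only; the
# contexts, information types and norms below are invented, not Nissenbaum's).

# Norms of appropriateness: which types of information may circulate
# within a given context at all.
APPROPRIATE = {
    "medical": {"diagnosis", "medication", "contact_details"},
    "workplace": {"work_email", "contact_details"},
    "friendship": {"holiday_plans", "contact_details"},
}

# Norms of flow: transfers between contexts assumed (hypothetically) to be
# acceptable without further consent.
PERMITTED_FLOWS = {
    ("medical", "medical"),
    ("friendship", "friendship"),
}

def transfer_respects_integrity(info_type, source, target):
    """True only if the information is appropriate in the target context
    and the flow from source to target is itself permitted."""
    appropriate_in_target = info_type in APPROPRIATE.get(target, set())
    flow_permitted = (source, target) in PERMITTED_FLOWS
    return appropriate_in_target and flow_permitted

# Passing a diagnosis from the medical context to an employer fails both
# checks, so it would be flagged as a breach of contextual integrity.
print(transfer_respects_integrity("diagnosis", "medical", "workplace"))  # False
```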
As well as debates about the right to informational privacy in general, there are debates about the right to informational
privacy in the workplace. The
surveillance of employees' emails, telephone calls and other forms of communication by their workplace supervisors is widespread.
According to a recent survey, 73 per cent of American companies engaged in some form of electronic surveillance of their employees
(Blackwell
2003). Although some commentators have argued that there is a presumptive right to privacy in the workplace, they generally acknowledge
that this right needs to be balanced against the interests of employers, customers and other employees, all of whom have a
legitimate interest in ensuring that employees are working effectively and are not using the workplace to conduct illegal
activities (e.g. Miller and Weckert
2000). The topic will not be elaborated on here as the interested reader may find further information in Chapter
7.
One of the more important arguments for respecting the informational privacy of individuals is that, if people's activities
are unknown to others, then they cannot be deliberately interfered with by others. Therefore, they will be better able to act
on their own life plans and devote their energies to satisfying their own preferences. In other words, they will be better
able to realize the value of autonomy, a value which is deeply embedded in Western culture. Isaiah Berlin captured the core
sentiments of many of us who wish to live autonomously:
I wish my life and decisions to depend on myself, not on external forces of whatever kind. I wish to be the instrument of
my own, not of other men's acts of will. I wish to be a subject, not an object: to be moved by reasons, by conscious purposes,
which are my own, not by causes which affect me, as it were from outside.
Of course, freedom from external interference is not all there is to autonomy. Internal forces can also limit our ability
to experience autonomy. Claustrophobics live with a fear of confined spaces. However, there may be some occasions when it is in the interest of a
claustrophobic to enter particular confined spaces. If a claustrophobic is unable to overcome her fear when she decides that
it is in her interest to do so, she has had her autonomy interfered with by an internal force.
In order, among other things, to ensure that people's autonomy is respected, interpersonal interactions in Western societies
are typically structured around
the ideal of
mutual consent (Kleinig
1982). A business transaction can only occur when both buyer and vendor agree to complete that transaction. Similarly, a marriage
can only occur when both bride and groom agree to the marriage. To provide
‘effective consent’ to an action, an agent must comprehend what they are consenting to, or at least have the opportunity to
comprehend the major consequences of consenting to that action. Also, they must have sufficient, relevant information so that
their consent can be the consequence of an informed autonomous decision. This is true of the many areas of human activity
in which our governing norms of behaviour include consent requirements (Clarke
2001).
One way in which our autonomy can be compromised when we are using new technologies is that we may sometimes not understand
what we are asked to consent to. An example of a complicated request for consent that may be difficult to comprehend is
the request to consent to receive targeted behavioural advertising when a free Gmail email account is set up. New subscribers to free Gmail accounts are asked to consent to having the content of their emails mechanically scanned for keywords that are then used
to select targeted advertisements, which appear alongside email messages. So, for example, if a Gmail user sends or receives emails containing an above average use of the word ‘holiday’, Gmail may direct advertisements for holidays to that user, rather than advertisements for some other product or service.
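The general mechanism being consented to can be illustrated with a small sketch. The code below implements a toy keyword-counting rule of the kind described: it scores each advertisement category by how often its keywords appear in an email and selects the highest-scoring category. The categories, keywords and scoring rule are hypothetical; Google's actual advertising system is proprietary and far more sophisticated.

```python
# Illustrative sketch only: a toy version of keyword-based advertisement
# selection of the general kind described above. The ad categories,
# keywords and scoring rule are invented for the example.
from collections import Counter
import re

AD_KEYWORDS = {
    "holiday deals": {"holiday", "flight", "hotel"},
    "office supplies": {"printer", "stationery", "invoice"},
}

def choose_advertisement(email_text):
    """Pick the ad category whose keywords occur most often in the email."""
    word_counts = Counter(re.findall(r"[a-z']+", email_text.lower()))
    scores = {
        ad: sum(word_counts[keyword] for keyword in keywords)
        for ad, keywords in AD_KEYWORDS.items()
    }
    best_ad, best_score = max(scores.items(), key=lambda item: item[1])
    return best_ad if best_score > 0 else "generic advertisement"

# An email with above-average use of 'holiday' attracts holiday advertising.
print(choose_advertisement("Shall we book a holiday? I found a cheap flight."))
```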
In
medical contexts, it is a standard condition of the informed consent process that doctors discuss a recommended procedure
or course of treatment with a prospective patient, so as to ensure that
comprehension has been achieved (e.g. Wear
1998). It is not simply assumed that patients will comprehend complicated technical information disclosed for the purposes of
enabling informed consent to be obtained. In the context of commercially provided software, it is perhaps unrealistic to suggest
that companies should actively ensure that particular customers have comprehended technical information disclosed for the
purposes of informed consent. However, it is reasonable to expect that companies do what they can to aid comprehension. Friedman
et al. (
2005, p. 515) describe the documents on
Gmail's registration and user interfaces as going ‘a good distance toward helping ensuring comprehension’. And indeed the language
used in the relevant documentation is admirably clear. Comprehension is more likely if the relevant documentation on
Gmail's registration and user interfaces is read than if it is not. Nevertheless, there may be significant numbers of
Gmail users who consent to use a
Gmail account without properly comprehending what they are consenting to. They consent to use a
Gmail account, but it is possible that they do not thereby provide
effective informed consent to the use of that account.
Targeted behavioural advertising seems set to become more confusing than it already is, due to new technology that will enable
Internet service providers (ISPs) to track the websites visited by their customers in
order to support targeted behavioural advertising. The current market leader in this area is a company called
Phorm, which has recently signed deals with the three biggest ISPs in the UK – BT, Virgin Media and TalkTalk – allowing those ISPs to use Phorm's technology (see
www.economist.com/science/tq/displaystory.cfm?story_id=11482452). One's movements
on the Internet can be tracked by HTTP cookies that have been downloaded onto computers, by search engines, by email providers
and now by ISPs themselves. As well as raising obvious privacy issues, this situation raises significant concerns about autonomy.
In order to provide effective informed consent to targeted behavioural advertising, computer users need to comprehend the
means by which their behaviour is monitored. The fact that it may be monitored in a variety of different ways makes comprehension
more difficult to
achieve.
The introduction and uptake of new technologies create a host of new social situations in which individuals have to decide how to behave. In such new situations, we may not have had time to collectively
develop norms to guide behaviour. The rapid growth of information technologies has led to a slew of circumstances in which
it is currently unclear what the appropriate norms of behaviour are and what the scope of consent should be. Should someone's
consent be obtained before a photograph or a piece of video footage containing their image is posted on a site on the Internet
(these days such visual images may be easily acquired without people's knowledge or consent using mobile camera phones)? Is
it acceptable to post a document that has been created by another person, and forwarded to you, on a publicly accessible website,
without their consent? Should I be allowed to create a website devoted to publicizing personal information about a third party,
without obtaining their consent? The answers to these questions are not clear, and any answers given are likely to be contested.
Over time, as people interact with new technologies and with one another, norms can be expected to emerge that will guide
behaviour. Alternatively, their development may be guided by the deliberate activities of policy makers, activists, lawyers
and ethicists. Questions about the proper scope of consent have a clear ethical import. The answers to them that we generally
accept will have a significant role in determining the scope of the sphere of individual autonomy. If we answer these questions
without considering their ethical aspects then legal, institutional and practical considerations will do much to shape the
norms that govern our behaviour at the expense of ethical considerations. The technology that we use will tend to shape the
ethics that we accept, and the ethics that we accept will do little to shape the technology that we use. The reader interested
in knowing more about recent discussions of the ways in which ethical standards are shaped by new technology and in which
new technologies may be shaped by ethical considerations may wish to consult Budinger and Budinger (
2006), Spier (
2001), van den Hoven and Weckert (
2008) and Winston and
Edelbach (
2008).
There are many potential dangers associated with the use of new technologies. Some of these can be removed before particular
new technologies are made publicly available. Still, public concerns usually remain about the safety of some of the new technologies that do become available. Public concerns about the safety of new information and communication technologies
(ICT) have not received the same media attention as have public concerns about new biotechnologies. Nevertheless, concerns
have been voiced about, for example, the safety of
mobile phones and the radiation emitted by them and by the masts used to transmit phone signals, as well as about ICT human
implants. The latter include
cardiac pacemakers, cochlear implants, RFIDs that are implanted subcutaneously, and implantable
neurostimulation devices, which are used, among other things, to manage chronic pain and to control seizures in epileptics.
More broadly, there is growing concern about the increased use of
nanotechnology. In particular, there is much concern about the consequences for humans of inhaling manufactured nanoparticles,
or otherwise ending up with exotic nanoparticles in their bodies (Jones
2007, p. 75). Such concerns are relevant here because, given the importance of miniaturization, nanotechnology is playing an increasingly significant role in ICT. Nanotechnologies are already used in the production of computer chips, information storage technologies and optoelectronics.
In the near future, nanotechnology is expected to play a role in other areas of ICT, including hard disk technologies and
sensor technologies (Royal Society and the Royal Academy of Engineering,
2004,
Chapter 3).
Those who voice concerns about the potential risks of using new technologies, including nanotechnology, often argue that we
should apply the
precautionary principle (PP) when evaluating their implementation. The PP is a conceptual tool, employed in risk management
and in policy making in the face of uncertainty. The core intuition behind the PP is that, when in doubt, it is better to
act precautiously, that is, it is ‘better to be safe than sorry’. This is, of course, a commonsense saying, and the PP is
often defended as simply being an extension of everyday reasoning (Sandin
2007), although this characterization is open to dispute (Clarke
2009). The
Independent Expert Group on Mobile Phones (
2000) recommends the application of the PP to mobile phone use and the European Group on Ethics in Science and New Technologies
(
2005) recommends the application of the PP to the use of ICT human implants. Som
et al. (
2004) suggest that the PP should be applied more frequently to information technologies than it has been, while both the
European Commission Scientific Committee on Emerging and Newly Identified Health Risks (
2006, p. 54) and the ETC group (
2005, p. 16) recommend a precautionary approach to the use of new nanomaterials.
The PP is often contrasted with cost–benefit analysis (CBA), an approach to risk management in which one attempts to determine the probability of benefits
occurring as well as the probability of costs being incurred, when the implementation of a new policy is being considered.
The expected balance of costs and benefits, for a given policy option, is then compared with the equivalent balances of costs
and benefits that would be expected to result from the introduction of alternatives to that policy, and the policy with the
overall best balance of expected benefits over expected costs is selected. While CBA involves weighing expected costs and
benefits, application of the PP involves a more exclusive focus on the potential costs of introducing a new policy.
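As a rough illustration of the kind of calculation CBA involves, the sketch below assigns each hypothetical policy option a set of possible outcomes with probabilities and monetary values, computes the expected net benefit of each option, and selects the option with the best expected balance. All of the options, probabilities and values are invented for the example.

```python
# Illustrative sketch only: a bare-bones cost-benefit calculation. Every
# policy option, outcome, probability and monetary value below is invented.

# Each option maps to a list of (probability, net_value) outcomes, where
# positive values represent benefits and negative values represent costs.
POLICY_OUTCOMES = {
    "deploy technology with regulation": [(0.90, 120.0), (0.10, -300.0)],
    "deploy technology unregulated": [(0.80, 150.0), (0.20, -500.0)],
    "ban technology": [(1.00, 0.0)],
}

def expected_net_benefit(outcomes):
    """Probability-weighted balance of benefits over costs for one option."""
    return sum(probability * value for probability, value in outcomes)

def choose_by_cba(policy_outcomes):
    """Select the option with the best expected balance of benefits over costs."""
    return max(policy_outcomes, key=lambda p: expected_net_benefit(policy_outcomes[p]))

for policy, outcomes in POLICY_OUTCOMES.items():
    print(f"{policy}: expected net benefit = {expected_net_benefit(outcomes):.1f}")
print("CBA recommends:", choose_by_cba(POLICY_OUTCOMES))
```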
The common use of the phrase ‘the precautionary principle’ appears to suggest that there is one widely accepted formulation
of the PP, but this is not the case. There are many versions of the PP and these can be quite distinct from one another. Although
they are both considered to be statements of the PP, Principle 15 of the 1992 Rio Declaration on Environment and Development is quite different from the Final Declaration of the First European ‘Seas at Risk’ Conference (1994). Principle 15 states that:
In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities.
Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason
for postponing cost-effective measures to prevent environmental degradation.
(United Nations Environment Programme
1992)
The Final Declaration of the First European ‘Seas at Risk’ Conference (
1994) states that:
If the ‘worst case scenario’ for a certain activity is serious enough then even a small amount of doubt as to the safety of
that activity is sufficient to stop it taking place.
(1994, Annex 1)
There are many other variants of the precautionary principle that could also be listed here (e.g. Som
et al.
2004, pp. 788–789).
‘Principle 15’ exemplifies what is sometimes referred to as a weak version of the PP. It does not replace CBA and can usefully
be understood as offering us guidance in the interpretation of CBA. It advises us to ensure that CBA is not used in a selective
manner and that risks which are only established with some degree of confidence are considered in any application of CBA,
alongside risks established with ‘full scientific certainty’. What weak versions of the PP have in common is that they instruct
us to pay special attention to uncertainties, in one or other way, when formulating policy. The Final Declaration of the First
European ‘Seas at Risk’ Conference, by contrast with ‘Principle 15’, is not compatible with CBA and is an example of what
is sometimes referred
to as a strong version of the PP. It advises us not to attempt to weigh the expected costs and benefits of a particular policy,
but to formulate policy by considering the potential serious harms of a policy, under certain conditions, regardless of how
potentially beneficial a particular policy may be, even if the estimated probability of the potential serious harms occurring
is extremely
low.
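The contrast with the cost–benefit calculation sketched earlier can be made vivid with an equally rough decision rule in the spirit of a strong version of the PP: an option is rejected whenever its worst possible outcome counts as a serious harm, regardless of how improbable that outcome is and regardless of the expected benefits. The threshold and outcome values below are, again, purely hypothetical, and actual formulations of the PP are stated in prose rather than as code.

```python
# Illustrative sketch only: a crude decision rule in the spirit of a strong
# version of the PP, using the same hypothetical options as the earlier
# cost-benefit sketch. The 'serious harm' threshold is likewise invented.

SERIOUS_HARM_THRESHOLD = -250.0  # hypothetical level at which a harm counts as serious

POLICY_OUTCOMES = {
    "deploy technology with regulation": [(0.90, 120.0), (0.10, -300.0)],
    "deploy technology unregulated": [(0.80, 150.0), (0.20, -500.0)],
    "ban technology": [(1.00, 0.0)],
}

def permitted_by_strong_pp(outcomes):
    """Reject any option whose worst case is a serious harm, no matter how
    improbable that worst case is and no matter how large the benefits are."""
    worst_case = min(value for _probability, value in outcomes)
    return worst_case > SERIOUS_HARM_THRESHOLD

for policy, outcomes in POLICY_OUTCOMES.items():
    verdict = "permitted" if permitted_by_strong_pp(outcomes) else "rejected"
    print(f"{policy}: {verdict}")
# Both deployment options are rejected despite positive expected benefits;
# only the ban survives, whatever the probabilities involved.
```

A rule of this shape also makes the paradox discussed below easy to see: banning a technology has serious worst cases of its own, which the same rule can then be used to reject.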
There is a well-known and seemingly devastating criticism of strong versions of the PP, which does not apply to typical weak
versions of the PP (Sunstein
2005, Manson
2002). This is that strong versions of the PP, if applied consistently, lead to paradoxical outcomes. To see this, consider the
application of a strong version of the PP to
mobile phone use (note that the Independent Expert Group on Mobile Phones (
2000) applies a weak version of the PP, so they would not go along with this line of reasoning). We do not know for sure what
the risks to human health of exposure to radiation emitted by mobile phones and masts are, but they may be significant. So,
an application of a strong version of the PP seems to lead to the conclusion that we should ban all mobile phones and masts
until we have well-established data on what these effects are. However, if we do not have mobile phones and a functioning system
for transmitting mobile phone signals available, then individuals who find themselves in emergency situations may be unable
to contact others to get help and human lives may be placed in jeopardy. Therefore, it seems that the application of a strong
version of the PP leads to the conclusion that we should not ban mobile phones and masts. The consistent application of strong
versions of the PP leads to the recommendation of contradictory policies and strong versions of the PP are, therefore, paradoxical.
There have been a number of attempts to defend strong versions of the PP from the charge of leading to paradox (e.g. Weckert
and Moor
2006, Gardiner
2006). Although these appear to be unsuccessful (Clarke
2009), prominent defenders of the PP have not given up hope that a coherent version of the strong PP can and will be found (e.g.
Sandin
2007, p. 102). Given that weak versions of the PP are not vulnerable to the charge of paradox, why do Sandin (
2007) and others not simply adopt one of these and forgo attempts to resuscitate strong versions of the PP? One answer to this
question is that they may think that weak versions of the PP are too weak. If all the PP amounts to is a way of ensuring that
the potential costs of a policy are fairly considered, then it must be allowed that those costs may sometimes be trumped by benefits
when potential policies are being considered. If this is so, then applications of the PP may sometimes fail to recommend that
we act precautiously. If the benefits of new mobile phone technologies, ICT implants and new nanomaterials are judged to outweigh
costs, then, along with CBA, weak versions of the PP may end up recommending that we allow their use, subject to appropriate
regulation. This may be good advice, but it is often not the advice those who advocate the PP hope to be able to
recommend.
Much has been written on the management of risks associated with new technologies, and this discussion has only scratched
the surface of that literature. For more on this subject see Bainbridge and Roco (
2006), Fisher
et al. (
2006), Sunstein (
2005) and Adler and
Posner (
2001).