We next examine some specific claims that expansionists have put forth with respect to Question 3(a).
Expansionists such as Adam (2005) and Górniak-Kocikowska (1996) claim that we need a new ethical theory to understand and resolve issues in CE. It is important to point out that neither Adam nor Górniak-Kocikowska argues that a new theory is needed because CE has any unique features or issues. It is also worth noting that they provide two very different accounts of the kind of ethical theory that is needed, as well as very different kinds of answers to the question of why a new theory is needed.
Adam argues that conventional ethical theories, such as utilitarianism and deontology, are insufficient for analysing CE issues
because they have either ignored or greatly underestimated the importance of
gender in CE. This, in turn, has resulted in gender issues being ‘under-theorized’ in CE research. Arguing that a gender-based
ethical theory is needed to remedy this problem, she puts forth a theory that is based on a feminist ethics – in particular,
on the ‘ethic of care’. Adam then offers some reasons why an ethic of care can
improve our understanding of gender issues affecting CE. For one thing, she claims that it helps us better understand ethical
concerns and controversies affecting power and privacy. With regard to the former, a gender-informed framework enables us
to see some of the relations of power that are involved in the development and use of computers. Regarding the latter topic,
she holds that her theory can help us to see that the concept of privacy can be different for men and women. As a result,
Adam argues that a gender-informed theory can help us to understand CE issues involving cyberstalking and Internet pornography
in ways that the standard ethical theories cannot.
Even if we accept Adam's arguments for why the standard ethical theories used in CE are inadequate, it is still unclear why
the gender-informed theory she articulates is peculiar to issues affecting computing per se, as opposed to broader areas of
applied ethics that also are affected by controversies surrounding privacy and power. For example, ethical concerns affecting
privacy and power for women could arise in other fields of applied ethics, such as legal ethics, bioethics and biotechnology
ethics. So, her arguments for a new, gender-informed ethical theory for CE would also entail that such a theory would be required
for other fields of applied ethics as well. In that case, Adam's thesis is as applicable to contemporary applied ethics in general as it is to CE in particular. Thus, any need for a new ethical theory, based on the account that Adam provides,
would not seem to arise solely from concerns affecting computer technology.
Górniak-Kocikowska presents a very different kind of case for why a new ethical theory is needed for CE. Her argument can
be analysed into two stages. The first stage begins with a historical look at how certain technologies have brought about
‘social revolutions’. For example, she notes that the ‘revolution’ brought on by the printing press affected both our social institutions and our (theorizing about) ethical values in profound and fundamental
ways. She then draws some comparisons with the ‘computer revolution’, claiming that it too has affected our institutions and
values in the same ways. The second stage focuses on the ‘global’ aspect of computing technology. Górniak-Kocikowska claims that, because the computer revolution is global in its
impact, we need a ‘new global ethical theory’. In particular, she argues that we need a new ethical theory that can respond
to problems generated globally by computer technology, in much the same way that new ethical theories arose in response to
social issues that resulted from technologies such as the printing press and the ensuing ‘printing-press revolution’.
The first stage of Górniak-Kocikowska's argument suggests that new ethical theories might be required whenever a ‘revolutionary
technology’ is introduced. But, what will count as a technology that is ‘revolutionary’ vs. one that is merely influential?
For example, the automobile, when introduced, might have been viewed by many people as a revolutionary technology. Yet, we
did not need a new ethical theory to address social issues affecting the
impact of automobiles. Turning to the second stage of her argument, it would seem that new ethical theories are needed to
handle technologies that have a ‘global impact’. But is a new ethical theory needed for CE merely because of the global impact
that computing has had to date? Consider that many other technologies, such as aviation, space travel or reproductive technologies (such as in vitro fertilization), have also had a global impact. However, no one has argued that we need a new (universal) ethical theory to account for their global impact. So, it is unclear how Górniak-Kocikowska's argument can convince us that a new ethical theory is needed for CE, at least on the basis of the evidence she provides. We next turn to Question 3(b).
While some novel methodological frameworks have been proposed for CE, many proposals have also been based on the modification
and extension of existing applied-ethics frameworks that can be tailored in ways to address specific concerns affecting CE.
An example of the latter is articulated by van den Hoven (1997), who has argued for a method of 'reflective equilibrium', based on the model introduced by Rawls. Such a methodological scheme is applicable for CE (and for engineering ethics as well), van den Hoven claims, because it provides the appropriate levels of generality and particularity needed to 'shuttle' back and forth between specific cases affecting computing technology and general principles and theories that can be applied within and across the various cases.
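To make the 'shuttling' metaphor concrete, the following toy sketch (our illustration, not van den Hoven's or Rawls' own formalism; the claims and confidence weights are invented) treats principles and case judgments as claims held with varying confidence, and revises the less firmly held side of each clash until the two levels cohere:

```python
# Toy model of 'shuttling' between general principles and particular case
# judgments. Each claim carries a confidence weight in [0, 1]; whenever a
# principle clashes with a judgment, the side held with less confidence
# is revised (here, simply dropped) until no clashes remain.
def reflective_equilibrium(principles, judgments, clashes):
    stable = False
    while not stable:
        stable = True
        for principle, judgment in clashes:
            if principle in principles and judgment in judgments:
                stable = False
                # The 'shuttle': weigh the general against the particular.
                if principles[principle] < judgments[judgment]:
                    del principles[principle]   # revise the principle
                else:
                    del judgments[judgment]     # revise the case judgment
    return principles, judgments

# Hypothetical example: a blanket privacy principle clashes with a firm
# intuition about a particular computing case.
principles = {"never collect location data without consent": 0.6}
judgments = {"emergency-services location lookup is permissible": 0.9}
clashes = [("never collect location data without consent",
            "emergency-services location lookup is permissible")]
print(reflective_equilibrium(principles, judgments, clashes))
```

In a fuller model one would weaken or reformulate claims rather than simply drop them; the point here is only the back-and-forth adjustment between levels of generality that van den Hoven's proposal exploits.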
Van den Hoven (2008, p. 59) has described some recent methodological trends in CE, as well as in engineering ethics, in terms of a
‘design turn’. He notes, for example, that just as there was a shift in ethical analysis from metaethics to applied ethics
in the second half of the twentieth century – i.e., an
‘applied turn’ in ethics – a more recent shift has occurred in that significant attention is being paid to the role that design
decisions can play in the analysis of applied-ethics issues. For example, some ethicists now focus much of their early analysis
of ethical problems involving technologies (and practices affecting those technologies) on the various kinds of values that
can be either consciously or unconsciously built into those technologies. While some of the proposed methodological frameworks
have been fairly modest, others include requirements that are more controversial. We next examine a model,
‘disclosive computer ethics’, introduced by Brey (
2000) as a methodological framework for CE. It is relatively modest in terms of the required changes it proposes.
Brey argues that the standard methodology used by philosophers to conduct research in applied ethics needs to be modified
for CE. The revised method that he proposes builds on some of the models advanced by analysts working
in the area of ‘value-sensitive design’ (or VSD). For example, Friedman
et al. (
2008) have argued that implicit values, embedded in computing technologies, need to be identified at the design stage of their
development. They claim that designers need to understand which kinds of values they are ‘building in’ to technologies they
develop and implement. Brey argues that, in the case of computing technology, this requires some changes in the standard or
‘mainstream’ method of applied ethics. Because that model was developed to analyse (already) known moral controversies, Brey
worries that it can easily fail to identify features and practices that may have 'moral import' but are as yet unknown. He describes such features and practices as 'morally opaque', which he contrasts with those that are 'morally transparent'. While the latter kinds of features and practices
are easily recognized as morally problematic, it can be difficult to identify some morally opaque features and practices affecting
computer technology. For example, Brey notes that many people are aware that the practice of placing
closed-circuit video surveillance cameras in undisclosed locations may be controversial, from a moral point of view. Many
people may also be aware that computer spyware can be morally controversial. However, Brey argues that other kinds of morally
controversial practices and features involving computer technology might not be as easily discerned because of their opaqueness.
Brey notes that a practice or a feature affecting computer technology can be morally opaque for one of two reasons: (a) it is as yet unknown, or (b) it is known but perceived to be 'morally neutral'. Examples of (a) include computerized practices involving cookies technology, which would be 'unknown' to those who are unfamiliar with Internet cookies. An example of (b) is a practice affecting online search facilities, a technology with which most computer users are familiar. However, users may not be aware that this technology is used in practices that record and store information about a user's online searches, which may be controversial from a moral point of view, as some learned for the first time in 2005, when the US Government subpoenaed the search records of Google, MSN and Yahoo users.
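To illustrate why such practices count as morally opaque, consider the following minimal sketch (a hypothetical service of our own devising, not drawn from Brey's text): the visible transaction is simply 'query in, results out', while the cookie-based tracking and query logging happen entirely out of the user's view:

```python
# Hypothetical search service: the user sees only results, while a
# persistent tracking cookie and a server-side query log quietly link
# every search to the same browser.
import uuid
from http.cookies import SimpleCookie

SEARCH_LOG = []  # in a real service, a durable datastore

def handle_search(query, cookie_header=""):
    cookie = SimpleCookie(cookie_header)
    # Reuse the visitor's tracking ID if one exists; otherwise mint one.
    uid = cookie["uid"].value if "uid" in cookie else uuid.uuid4().hex
    # The morally opaque step: the query is recorded and tied to the ID.
    SEARCH_LOG.append({"uid": uid, "query": query})
    return {
        "results": f"results for {query!r}",
        # The cookie persists for a year, so later searches by the same
        # browser can be aggregated into a single profile.
        "set_cookie": f"uid={uid}; Max-Age=31536000",
    }

first = handle_search("medical symptoms")
handle_search("lawyers near me", first["set_cookie"])
print(SEARCH_LOG)  # both queries now share one uid
```

Nothing in the returned results signals the logging; it is precisely this kind of invisible bookkeeping that Brey's method is designed to bring to light.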
According to Brey, an adequate methodology for CE must first ‘disclose’ any features and practices affecting computers that
otherwise might not be noticed as having moral implications. Appropriately, he calls his methodology the ‘disclosive method’
of CE because its initial aim is to reveal any moral values embedded in the various features and practices associated with
computer technology. It is in this sense that the standard applied-ethics methodology needs to be expanded to accommodate
specific challenges for CE. It remains unclear, however, why the disclosive method should be limited to CE. For example, medical
technology also includes features and practices that are morally opaque. Consider that in vitro fertilization and stem cell research are examples of medical technologies that, initially at least, had morally opaque aspects that needed to be 'disclosed'. So the methodological changes proposed
by Brey for CE would seem to have far wider applications for applied ethics in general.
Because Brey's ‘disclosive method’ mainly expands upon the standard method used in applied ethics, it is not a radically new
framework. However, others argue that we need an altogether different kind of methodological framework for CE. Perhaps the
most provocative proposal is Floridi's Information Ethics (IE) methodological/macroethical framework.
We have already examined some important aspects of IE in
Section 15.5.2, where our focus was on the status of informational objects that qualified as moral patients in the infosphere. In this section,
our primary emphasis is on how IE also functions as a
macroethical theory/methodological framework for CE. In
Section 15.5.2, we saw how Floridi's IE macroethics was different, in several key respects, from standard ethical theories such as utilitarianism,
Kantianism and virtue ethics. We next examine some of Floridi's arguments for why IE is a superior methodological framework
for CE. They are based on distinctions he draws between: (i) macroethical vs. microethical issues; (ii) patient-centred vs.
agent- and action-centred systems; and (iii) (non-moral) agency and moral agency. We begin with a brief look at Floridi's
arguments involving (i).
Floridi (1999b) claims that one virtue of IE, as a methodological framework, is that it enables us to distinguish between macroethical and
microethical aspects of CE. He argues that IE, as a macroethics, helps us to analyse specific microethical issues in CE, such
as privacy, in a way that the standard macroethical frameworks cannot. Floridi notes that the concept of privacy is not well
theorized by any of the standard macroethical theories used in CE, and he shows how IE can help us to understand some of the
ontology-based considerations that need to be taken into account in analysing the concept of privacy and framing an adequate
informational-privacy theory. Floridi (2005d) advances a theory, called the ontological theory of informational privacy, which he argues is superior to the classic theories of informational privacy. His ontological privacy theory is provocative
for several reasons; for one thing, it shifts the locus of a violation of privacy away from conditions tied to an agent's
personal rights involving control and ownership of information to conditions affecting the information environment, which
the agent constitutes. In this sense, his theory provides us with a novel way of analysing the impact that digital technologies
have had for informational privacy. However, one critique of Floridi's privacy theory is that it does not explicitly distinguish
between descriptive and normative privacy regarding claims about privacy expectations for informational objects. As a result,
one might infer that, in Floridi's theory, every informational object deserves normative privacy protection (Tavani 2008). Unfortunately, we cannot examine Floridi's privacy theory further here, since doing so would take us beyond the scope of
this chapter.
We next examine some of Floridi's arguments for (ii). As we saw in
Section 15.5.2, Floridi holds that one advantage that IE has over the standard macroethical frameworks is that the former is patient-centred, as opposed to being merely action-oriented or agent-oriented. Whereas
virtue ethics is ‘agent oriented’ in that it focuses on the moral character development of individual agents, Floridi characterizes
both utilitarianism and deontology as ‘action oriented’ because they are concerned with the consequences and motives of individuals
engaged in moral decisions. And because action-oriented and agent-oriented theories focus primarily on agents and on the actions
(and character development) of agents, Floridi claims that they do not adequately attend to the recipients of moral actions
(i.e., moral patients). He argues that the IE methodological framework provides the conceptual apparatus needed to understand
our role, as well as the roles of artificial moral agents, in preserving the well-being of the infosphere. This brings us
to (iii), Floridi's accounts of agency and moral agency. We noted earlier that in IE, informational objects can qualify as
moral agents (in addition to being moral patients).
How does IE differentiate a moral agent from a moral patient? Floridi (2008e, p. 14) describes a moral agent as an
interactive, autonomous, and adaptable transition system that can perform morally qualifiable actions. (italics Floridi)
By ‘interactive’, Floridi means that ‘the system’ and its environment ‘can act upon each other’. A system is ‘autonomous’ when it is able to ‘change state without direct response to interaction, i.e., it can perform internal transition
to change its state’. To be ‘adaptable’, the system's ‘interactions (can) change the transition roles by which it changes state’. Finally, an action is
‘morally qualifiable’ when it can cause some ‘good or evil’. So any (interactive, autonomous and adaptable) individual or
system that is capable of causing either good or harm in the infosphere qualifies as a moral agent in IE.
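Since Floridi states the definition in terms of a transition system, it can be made concrete with a schematic sketch (our illustration, with invented states, events and rules, not Floridi's own formalism):

```python
# Schematic agent exhibiting Floridi's three conditions: interactive
# (the environment's input changes its state), autonomous (it can change
# state with no external input at all) and adaptable (interactions can
# rewrite the very transition rules by which it changes state).
class TransitionSystemAgent:
    def __init__(self):
        self.state = "idle"
        # Transition rules: (current state, input event) -> next state.
        self.rules = {("idle", "ping"): "active"}

    def interact(self, event):
        # Interactive: system and environment act upon each other.
        self.state = self.rules.get((self.state, event), self.state)
        # Adaptable: an interaction can change the transition rules.
        if event == "censure":
            self.rules[("idle", "ping")] = "cautious"

    def tick(self):
        # Autonomous: an internal transition, without external input.
        if self.state == "active":
            self.state = "idle"

agent = TransitionSystemAgent()
agent.interact("ping")     # environment acts on the system
agent.tick()               # internal state change, no input
agent.interact("censure")  # this interaction rewrites the rules
agent.interact("ping")
print(agent.state)         # 'cautious': the agent's rules have adapted
```

Whether such a system's outputs cause 'good or evil', and hence whether its actions are morally qualifiable, is on Floridi's account a separate question from whether it has the mental states required for moral responsibility, as the following discussion makes clear.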
Floridi points out that the moral agents inhabiting the infosphere include 'artificial' agents, which are not only 'digital agents' but also 'social agents' (such as corporations). These artificial agents also qualify as (artificial) moral agents if they can be held 'morally accountable for their actions' (Floridi, p. 15). But we should note that in IE, accountability is not identical to moral responsibility.
Floridi draws a distinction between moral responsibility, which requires ‘intentions, consciousness, and other mental attitudes’, and moral accountability, which he argues does not require these criteria. Whereas responsibility is associated with ‘reward and punishment’, Floridi
argues that accountability can be linked to what he calls 'agenthood' and 'censure'. Thus, Floridi claims that there can be agency based on accountability alone, in the 'absence of moral responsibility'.
In IE, humans are special moral agents, who have what Floridi and Sanders (2005) call 'ecopoietic responsibilities' – i.e., responsibilities towards the construction and well-being of the whole infosphere. 'Ecopoiesis' refers to the 'morally informed construction of the environment', based on what Floridi describes as an 'ecologically neutral perspective'. Floridi (2008e) believes that humans, qua members of Homo Poieticus, have a moral obligation not only to be concerned with their own character development but also to 'oversee' the 'well-being
and flourishing of the whole infosphere’. More specifically,
Homo Poieticus, as a human moral agent, has special responsibilities to the infosphere that are guided by four moral principles (Floridi,
p. 17):
(1) entropy ought not to be caused in the infosphere;
(2) entropy ought to be prevented in the infosphere;
(3) entropy ought to be removed from the infosphere;
(4) the flourishing of informational entities as well as the whole of the infosphere ought to be promoted by preserving, cultivating
and enriching their properties.
The four principles are listed in order of increasing value. In IE, a moral agent is accountable for any action that increases
the level of entropy (defined in
Section 15.5.2) in the infosphere. In particular, human moral agents can be held accountable for the evil produced – i.e., the harm caused
to the infosphere (as well as harm caused to the ecosphere and to other humans). Because of this, Floridi argues that human
moral agents have special moral responsibilities that exceed those of other moral agents (in the infosphere). And because
of IE's attention to the roles that moral agents play vis-à-vis moral patients, Floridi argues that IE is able to address
issues that the standard moral methodological frameworks are unprepared to handle.
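As a toy illustration of how these ordered duties might be operationalized (our encoding, not Floridi's; the action attributes are invented), an action can be screened first for the entropy it causes, for which the agent is held accountable, and then credited with the most valuable duty it satisfies:

```python
# The four IE duties as an ordered screen, from the weakest duty (do not
# cause entropy) to the strongest (promote flourishing).
from dataclasses import dataclass

@dataclass
class Action:
    causes_entropy: bool        # violates duty (1)
    prevents_entropy: bool      # satisfies duty (2)
    removes_entropy: bool       # satisfies duty (3)
    promotes_flourishing: bool  # satisfies duty (4), the highest value

def evaluate(action):
    if action.causes_entropy:
        # The agent is accountable for any increase in entropy.
        return "violates duty (1): agent morally accountable"
    for rank, satisfied in [(4, action.promotes_flourishing),
                            (3, action.removes_entropy),
                            (2, action.prevents_entropy)]:
        if satisfied:
            return f"satisfies duty ({rank})"
    return "neutral under the four duties"

# E.g., restoring a vandalized database entry removes existing entropy.
print(evaluate(Action(False, False, True, False)))  # satisfies duty (3)
```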
IE provides a robust methodological framework for CE, but it has also been criticized on several grounds. Floridi (p. 18)
notes that two general types of criticisms tend to recur in the CE literature: one that centres on conceptual challenges for
IE's accounts of agency and moral agency; and one based on the notion that IE is 'too abstract' to be useful in applied ethics. In responding to these criticisms,
Floridi points out that IE is not intended to replace the standard
macroethical theories. Instead, he proposes that IE can ‘interact with those theories’ and thus ‘contribute an important new
perspective’ from which we can analyse CE issues (Floridi, p. 20). In this sense, IE can be construed as a methodological
framework that is intended to supplement (rather than replace) the standard macroethical frameworks used in CE. So, Floridi claims that his IE framework is not as radical as some of his critics have suggested.
We conclude this section by summarizing some key points in our response to Question 3(b). If IE is correct, then the standard methodological framework for applied ethics, as well as Brey's method of disclosive computer ethics, will fall short. However,
we have seen that IE has been criticized and thus has not been fully embraced, at least not yet, as the received methodological
framework for CE. But we have also seen that one of IE's strengths is the way that it anticipates CE issues affecting agency
and moral agency; even many of IE's critics are acutely aware of the controversial roles that artificial agents may soon be
capable of performing. Also, Floridi argues that IE provides a ‘common vocabulary’ for identifying and analysing a wide range
of microethical problems that will likely arise in the not-too-distant future, in connection with highly sophisticated ‘bots’
and other artificial agents. From the perspective of a new macroethics, IE arguably has heuristic value in that it causes
us to question some key assumptions about many of our foundational metaphysical and ethical concepts, in addition to agency
and moral agency. However, the question of whether a new methodological framework, such as IE, is required for CE research remains open.