The existing literature on embedded values in computer technology is still young, and has perhaps focused more on case studies
and applications for design than on theoretical underpinnings. The idea that technology embodies values has been inspired
by work in the interdisciplinary field of science and technology studies, which investigates the development of science and
technology and their interaction with society. Authors in this field agree that technology is not neutral but shaped by society.
Some have argued, specifically, that technological artefacts (products or systems) impose constraints on the world surrounding
them (Latour
1992) and that they can harbour political consequences (Winner
1980). Authors in the embedded values approach have taken these ideas and applied them to ethics, arguing that technological artefacts
are not morally neutral but value-laden. However, what it means for an artefact to have an embedded value remains somewhat
vague.
In this section a more precise description of what it means for a technological artefact to have embedded values is articulated
and defended. The position taken here is in line with existing accounts of embedded values, although their authors need not
agree with all of the claims made in this section. The idea of embedded values is best understood as a claim that technological
artefacts (and in particular computer systems and software) have built-in tendencies to promote or demote the realization
of particular values. Defined in this way, a built-in value is a special kind of built-in consequence. In this section, the thesis that technological artefacts
are capable of having built-in consequences is first defended. Then, tendencies to promote or demote the realization of values
are identified as a special kind of built-in consequence of technological artefacts. The section
is concluded by a brief review of the literature on values in information technology, and a discussion of how values come
to be embedded in technology.
The embedded values approach holds that technology can have built-in tendencies to promote or demote particular
values. This idea, however, runs counter to a frequently held belief about technology, namely that technology itself is
neutral with respect to consequences. Let us call this the
neutrality thesis. The neutrality thesis holds that there are no consequences that are inherent to technological artefacts, but rather that
artefacts can always be used in a variety of different ways, and that each of these uses comes with its own consequences.
For example, a
hammer can be used to hammer nails, but also to break objects, to kill someone, to flatten dough, to keep a pile of paper
in place or to conduct electricity. These uses have radically different
effects on the world, and it is difficult to point to any single effect that is constant in all of them.
The hammer example, and other examples like it (a similar example could be given for a laptop), suggest strongly that the
neutrality thesis is true. If so, this would have important consequences for an ethics of technology. It would follow that
ethics should not pay much attention to technological artefacts themselves, because they in themselves do not ‘do’ anything.
Rather, ethics should focus on their usage alone.
This conclusion holds only if one assumes that the notion of embedded values requires that there are consequences that manifest
themselves in each and every use of an artefact. But this strong claim need not be made. A weaker claim is that artefacts
may have built-in consequences in that there are recurring consequences that manifest themselves in a wide range of uses of
the artefact, though not in all uses. If such recurring consequences can be associated with technological artefacts, this
may be sufficient to falsify the strong claim of the neutrality thesis that each use of a technological artefact comes with
its own consequences. And a good case can be made that at least some artefacts can be associated with such recurring consequences.
An ordinary gas-engine automobile, for example, can evidently be used in many different ways: for commuter traffic, for leisure
driving, to taxi passengers or cargo, for hit jobs, for auto racing, but also as a museum piece, as a temporary shelter from
the rain or as a barricade. Whereas there is no single consequence that results from all of these uses, there are several
consequences that result from a large number of these uses: in all but the last three uses, gasoline is used up,
greenhouse gases and other pollutants are released, noise is generated, and at least one person (the driver) is
moved around at high speeds. These uses, moreover, have something in common: they are all
central uses of automobiles, in that they are accepted uses that are frequent in society and that account for the continued production
and usage of automobiles. The other three uses are
peripheral: less dominant uses that depend for their continued existence on the central uses, which account for the continued
production and consumption of automobiles. Central uses of the automobile make use of its capacity
for driving, and when it is used in this capacity, certain consequences are very likely to occur. Generalizing from this example,
a case can be made that technological artefacts are capable of having built-in consequences in the sense that
particular consequences may manifest themselves in all of the central uses of the artefact.
It may be objected that, even with this restriction, the idea of built-in consequences rests on an overly deterministic conception
of technology. It suggests that, when technological artefacts are used, particular consequences are necessary or unavoidable.
In reality, there are usually ways to avoid particular consequences. For example, a gas-fuelled automobile need not emit greenhouse
gases into the atmosphere if a greenbox device is attached to it, which captures carbon dioxide and nitrous oxide and converts
them into bio-oil. To avoid this objection, it may be claimed that the notion of built-in consequences does not refer to necessary,
unavoidable consequences but rather to strong tendencies towards certain consequences. The claim is that these consequences are normally realized whenever the technology is used,
unless it is used in a context that is highly unusual or if extraordinary steps are taken to avoid particular consequences.
Built-in consequences are therefore never absolute but always relative to a set of typical uses and contexts of use, outside
of which the consequences may not occur.
Do many artefacts have built-in consequences in the way defined above? The extent to which technological artefacts have built-in
consequences can be correlated with two factors: the extent to which they are capable of exerting force or behaving autonomously, and the extent to which they are embedded in a fixed context of use. As for the first factor, some artefacts
seem to depend strongly on users for their consequences, whereas others seem to be able to generate effects on their own.
Mechanical and electrical devices, in particular, are capable of displaying all kinds of behaviours on their own, ranging
from simple processes, like the consumption of fuel or the emission of steam, to complex actions, like those of robots and
artificial agents. Elements of infrastructure, like buildings, bridges, canals and railway tracks, may not behave autonomously
but, by their mere presence, they do impose significant constraints on their environment, including the actions and movements
of people, and in this way engender their own consequences. Artefacts that are not mechanical, electrical or infrastructural,
like simple hand-held tools and utensils, tend to have fewer consequences of their own, and their consequences tend to be more
dependent on the uses to which they are put.
As for the second factor, it is easier to attribute built-in consequences to technological artefacts that are placed in
a fixed
context of use than to those that are used in many different contexts. Adapting an example by Winner (1980), an overpass that
is 180 cm (6 ft) high has as a generic built-in consequence that it prevents traffic that is more than 180 cm high from passing
underneath it. But when such an overpass is built over the main access road to an island from a city in which automobiles are
generally less than 180 cm high and buses are taller, then it acquires a more specific built-in consequence, which is that
buses are being prevented from going to the island whereas automobiles do have access. When, in addition, it is the case that
buses are the primary means of transportation for black citizens, whereas most white citizens own automobiles, then the more
specific consequence of the overpass is that it allows easy access to the island for one racial group, while denying it to
another. When the context of use of an artefact is relatively fixed, the immediate, physical consequences associated with
a technology can often be
translated into social consequences, because reliable correlations exist
between the physical and the social (for example, between denying access to buses and denying access to black citizens) (Latour
1992).
Let us now turn from built-in consequences to embedded values. An embedded value is a special kind of built-in consequence.
It has already been explained how technological artefacts can have built-in consequences. What needs to be explained now is
how some of these built-in consequences can be associated with values. To be able to make this case, let us first consider
what a value is.
Although the notion of a value remains somewhat ambiguous in philosophy, some agreements seem to have emerged (Frankena
1973). Philosophers tend to agree that values depend on
valuation. Valuation is the act of valuing something, or finding it valuable, and to find something valuable is to find it
good in some way. People find all kinds of things valuable, both abstract and concrete, real and unreal, general and specific.
Those valued things that are both ideal and general, like justice and generosity, are called
values, with
disvalues being those general qualities that are considered to be bad or evil, like injustice and avarice.
Values, then, correspond to idealized qualities or conditions in the world that people find good. For example, the value of
justice corresponds to some idealized, general condition of the world in which all persons are treated fairly and rewarded
rightly.
To have a value is to want it to be realized. A value is realized if the ideal conditions defined by it are matched by conditions in the actual world. For example, the
value of freedom is fully realized if everyone in the world is completely free. Often, though, a full realization of the ideal
conditions expressed in a value is not possible. It may not be possible for everyone to be completely free, as there are always
at least some constraints and limitations that keep people from a state of complete freedom. Therefore, values can generally
be realized only to a degree.
The use of a technological artefact may result in the partial realization of a value. For instance, the use of software that
has been designed not to make one's personal information accessible to others helps to realize the value of
privacy. The use of an artefact may also hinder the realization of a value or promote the realization of a disvalue. For instance,
the use of software that contains spyware or otherwise leaks personal data to third parties harms the realization of the value
of privacy. Technological artefacts are hence capable of either
promoting or
harming the realization of values when they are used. When this occurs systematically, in all of an artefact's central uses, we may say that
the artefact embodies a special kind of built-in consequence, which is a
built-in tendency to promote or harm the realization of a value. Such a built-in tendency may be called, in short, an
embedded value or
disvalue. For example, spyware-laden software has a tendency to harm privacy in all of its typical uses, and may therefore be claimed to have harm to privacy
as an embedded disvalue.
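To make this causal reading concrete, consider a minimal Python sketch (the function names, the sample data and the analytics endpoint are all hypothetical): one routine only stores a user profile on the user's own machine, while the other transmits it to a third party in every central use of the software, and so has harm to privacy as a built-in tendency.

```python
import json
import urllib.request

# Hypothetical user profile, used for illustration only.
PROFILE = {"name": "A. User", "email": "a.user@example.org"}

def save_profile_locally(profile, path="profile.json"):
    """Stores the profile on the user's own machine; no third party is involved."""
    with open(path, "w") as f:
        json.dump(profile, f)

def save_profile_with_tracking(profile, path="profile.json"):
    """Also sends the profile to a (hypothetical) third-party server. Whatever
    task the user performs, this transmission occurs, so harming privacy is a
    tendency built into all central uses of software that calls this routine."""
    save_profile_locally(profile, path)
    request = urllib.request.Request(
        "https://analytics.example.org/collect",   # placeholder endpoint
        data=json.dumps(profile).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)                # leaks personal data as a side effect
```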
Embedded values approaches often focus on moral values. Moral values are ideals about how people ought to behave in relation to others and themselves and how society should be
organized so as to promote the right course of action. Examples of moral values are justice, freedom, privacy and honesty.
Besides moral values, there are various kinds of non-moral values, for example aesthetic, economic, (non-moral) social
and personal values, such as beauty, efficiency, social harmony and friendliness.
Values should be distinguished from norms, which can also be embedded in technology. Norms are rules that prescribe which kinds of actions or states of affairs are forbidden, obligatory or allowed. They are often
based on values that provide a rationale for them. Moral norms prescribe which actions are forbidden, obligatory or allowed from the point of view of morality. Examples of moral norms
are ‘do not steal’ and ‘personal information should not be provided to third parties unless the bearer has consented to such
distribution’. Examples of non-moral norms are ‘pedestrians should walk on the right side of the street’ and ‘fish products
should not contain more than 10 mg of histamine per 100 grams’. Just as technological artefacts can promote the realization
of values, they can also promote the enforcement of norms. Embedded norms are a special kind of built-in consequence. They are tendencies to effectuate norms by bringing it about that the environment
behaves or is organized according to the norm. For example, web browsers can be set not to accept cookies from websites, thereby
enforcing the norm that websites should not collect information about their users. By enforcing a norm, artefacts also
promote the corresponding value, if any (in this example, privacy).
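A minimal sketch of such norm enforcement on the client side, using Python's standard library (the URL is a placeholder): a cookie policy with an empty allow-list causes every cookie offered by a website to be rejected, so the norm that websites should not track their users via cookies is enforced by the artefact's configuration rather than by the user's vigilance.

```python
import urllib.request
from http.cookiejar import CookieJar, DefaultCookiePolicy

# A policy whose allow-list is empty accepts cookies from no domain at all.
no_cookie_policy = DefaultCookiePolicy(allowed_domains=[])
jar = CookieJar(policy=no_cookie_policy)

# Requests made through this opener never store or send back cookies.
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
response = opener.open("https://example.com")  # placeholder URL

print(len(jar))  # 0: no cookies were accepted, so no tracking via cookies
```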
So far we have seen that technological artefacts may have embedded values understood as special kinds of built-in consequences.
Because this conception relates values to causal capacities of artefacts to affect their environment, it may be called the
causalist conception of embedded values. In the literature on embedded values, other conceptions have been presented as well. Notably,
Flanagan, Howe and Nissenbaum (
2008) and Johnson (
1997) discuss what they call an
expressive conception of embedded values. Artefacts may be said to be expressive of values in that they incorporate or contain symbolic
meanings that refer to values. For example, a particular brand of computer may symbolize or represent status and success,
or the representation of characters and events in a computer game may reveal racial prejudices or patriarchal values. Expressive
embedded values in artefacts
represent the values of designers or users of the artefact. This does not imply, however, that they also function to
realize these values. It is conceivable that the values expressed in artefacts cause people to adopt these values and thereby contribute
to their own
realization. Whether this happens frequently remains an open question. In any case, whereas the expressive conception of embedded
values merits further philosophical reflection, the remainder of this chapter will be focused on the causalist
conception.
The embedded values approach within computer ethics studies embedded values in computer systems and software and their emergence,
and provides moral evaluations of them. The study of embedded values in Information and Communication Technology (ICT) began
with a seminal paper by Batya Friedman and Helen Nissenbaum in which they consider
bias in computer systems (Friedman and Nissenbaum
1996). A biased computer system or program is defined by them as one that systematically and unfairly discriminates against certain
individuals or groups, who may be users or other stakeholders of the system. Examples include educational programs that have
much more appeal to boys than to girls, loan approval software that gives negative recommendations for loans to individuals
with ethnic surnames, and databases for matching organ donors with potential transplant recipients that systematically favour
individuals retrieved and displayed immediately on the first screen over individuals displayed on later screens. Building
on their work, I have distinguished
user biases that discriminate against (groups of) users of an information system, and
information biases that discriminate against stakeholders represented by the system (Brey
1998). I have discussed various kinds of user bias, such as user exclusion and the selective penalization of users, as well as
different kinds of information bias, including bias in information content, data selection, categorization, search and matching
algorithms and the display of information.
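To illustrate how a user bias such as user exclusion can be built into otherwise innocuous-looking code, consider this small, hypothetical Python sketch: a registration check that accepts only unaccented Latin letters never mentions any group of people, yet it systematically excludes users whose names contain accented or non-Latin characters.

```python
import re

# Hypothetical validation rule: only unaccented Latin letters, spaces,
# hyphens and apostrophes are allowed in names.
NAME_PATTERN = re.compile(r"^[A-Za-z][A-Za-z '\-]*$")

def register(name: str) -> bool:
    """Returns True if an account is created, False if the name is rejected."""
    return bool(NAME_PATTERN.match(name))

print(register("Mary O'Brien"))   # True
print(register("José García"))    # False: excluded by the ASCII-only rule
print(register("Лена Петрова"))   # False: excluded by the ASCII-only rule
```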
After their study of bias in computer systems, Friedman and Nissenbaum went on to consider consequences of
software agents for the autonomy of users. Software agents are small programs that act on behalf of the user to perform tasks.
Friedman and Nissenbaum (1997) argue that software agents can undermine user autonomy in various ways – for example, by having
only limited capabilities to perform desired tasks or by not making relevant information available to the user – and argue
that it is important that software agents are designed so as to enhance user autonomy. The issue of user autonomy is also
taken up in Brey (
1998,
1999c), in which I argue that computer systems can undermine autonomy by supporting monitoring by third parties, by imposing their
own operational logic on the user, thus limiting creativity and choice, or by making users dependent on systems operators
or others for maintenance or access to systems functions.
Deborah Johnson (
1997) considers the claim that the
Internet is an inherently democratic technology. Some have claimed that the Internet, because of
its distributed and nonhierarchical nature, promotes democratic processes by empowering individuals and stimulating democratic
dialogue and decision-making (see
Chapter 10). Johnson acknowledges this democratic potential. She cautions, however, that these democratic tendencies may be limited
if the Internet is subjected to filtering systems that give only a small group of individuals control over the flow of information
on the Internet. She hence identifies both democratic and undemocratic tendencies in the technology that may become dominant
depending on future use and
development.
Other studies, within the embedded values approach, have focused on specific values, such as privacy, trust, community, moral
accountability and informed consent, or on specific technologies. Introna and Nissenbaum (
2000) consider biases in the algorithms of
search engines, which, they argue, favour websites with a popular and broad subject matter over specialized sites, and the
powerful over the less powerful. Introna (
2007) argues that existing
plagiarism detection software creates an artificial distinction between alleged plagiarists and non-plagiarists, which is
unfair. Introna (
2005) considers values embedded in
facial recognition systems. Camp (
1999) analyses the implications of Internet protocols for democracy. Flanagan, Howe and Nissenbaum (
2005) study values in
computer games, and Brey (
1999b,
2008) studies them in computer games, computer simulations and virtual reality applications. Agre and Mailloux (
1997) reveal the implications for privacy of
Intelligent Vehicle-Highway Systems, Tavani (
1999) analyses the implications of
data-mining techniques for privacy and Fleischmann (
2007) considers values embedded in
digital
libraries.
What has not been discussed so far is how technological artefacts and systems acquire embedded values. This issue has been
ably taken up by Friedman and Nissenbaum (
1996). They analyse the different ways in which
biases (injustices) can emerge in computer systems. Although their focus is on biases, their analysis can easily be extended
to embedded values more generally. Biases, they argue, can have three different types of origins.
Preexisting biases arise from values and attitudes that exist prior to the design of a system. They can either be
individual, resulting from the values of those who have a significant input into the design of the system, or
societal, resulting from organizations, institutions or the general culture that constitute the context in which the system is developed.
Examples are racial biases of designers that become embedded in loan approval software, and overall gender biases in society
that lead to the development of computer games that are more appealing to boys than to girls. Friedman and Nissenbaum note
that preexisting biases can be embedded in systems intentionally, through conscious efforts of individuals or institutions,
or unintentionally and unconsciously.
A second type is technical bias, which arises from technical constraints or considerations. The design of computer systems includes all kinds of technical
limitations and assumptions that are perhaps not value-laden in themselves but that could result in value-laden designs, for
example because limited screen sizes cannot display all results of a search process, thereby privileging those results that
are displayed first, or because computer algorithms or models contain formalized, simplified representations of reality that
introduce biases or limit the autonomy of users, or because software engineering techniques do not allow for adequate security,
leading to systematic breaches of privacy. A third and final type is emergent bias, which arises when the social context in which the system is used is not the one intended by its designers. In the new context,
the system may not adequately support the capabilities, values or interests of some user groups or the interests of other
stakeholders. For example, an ATM that relies heavily on written instructions may be installed in a neighbourhood with a predominantly
illiterate population.
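The screen-size example of technical bias can be made concrete with a small, hypothetical Python sketch: a results page that fits only a fixed number of items, combined with a ranking heuristic that sorts by raw popularity, systematically keeps specialized but highly relevant sites off the first screen, without any designer having intended that outcome.

```python
RESULTS_PER_SCREEN = 3  # a display constraint, not a value judgement

# (site, relevance to the query, overall popularity) – invented sample data
results = [
    ("broad-news-portal.example",  0.60, 0.95),
    ("popular-blog.example",       0.50, 0.90),
    ("video-platform.example",     0.40, 0.88),
    ("specialist-archive.example", 0.90, 0.10),
    ("research-group.example",     0.95, 0.05),
]

# Ranking by popularity alone is a simplification that introduces bias:
ranked = sorted(results, key=lambda r: r[2], reverse=True)
for site, relevance, popularity in ranked[:RESULTS_PER_SCREEN]:
    print(site, relevance, popularity)
# The two most relevant, specialized sites never appear on the first screen.
```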
Friedman and Nissenbaum's classification can easily be extended to embedded values in general. Embedded values may hence be
identified as
preexisting, technical or emergent. What this classification shows is that embedded values are not necessarily a reflection
of the values of designers. When they are, moreover, their embedding often has not been intentional. However, their embedding
can be an intentional act. If designers are aware of the way in which values are embedded into artefacts, and if they can sufficiently
anticipate future uses of an artefact and its future context(s) of use, then they are in a position to intentionally design
artefacts to support particular values. Several approaches have been proposed in recent years that aim to make considerations
of value part of the design process. In
Section 3.4, the most influential of these approaches, called
value-sensitive design, is discussed. But first, let us consider a more philosophical approach that also adopts the notion
of embedded values.