During the
Copernican Revolution in the sixteenth and early seventeenth centuries, major changes in science and philosophy led human
beings to see themselves, and indeed the entire Universe, in a radically different perspective. Traditional assumptions about
human nature and the role of humans in the Universe were in question, and significant changes occurred in our understanding
of society and religion. Centuries later, first
Darwinian biology and then Freudian psychology altered – yet again – some fundamental assumptions about human nature (Floridi
2008a), though they did not significantly change our view of the Universe as a whole. Today, the information revolution, including
its associated scientific and philosophical developments, has begun to yield a radically different view of human nature and
the world. In physics, for example, recent discoveries indicate that
the Universe is made of information, and human beings are exquisitely complex ‘informational objects’ (Wheeler
1990, Lloyd
2006). In philosophy – in particular, in the ‘philosophy of information’ of Luciano Floridi (e.g., Floridi
2008a, Floridi
2008e) – human beings are viewed as sophisticated informational beings, similar in many ways to the vast array of ICT-artefacts
emerging in our ‘information society’. At a certain ‘level of abstraction’, humans and their ICT-artefacts can be seen as
companions – fellow travellers on a mutual journey of existence in an informational universe (Floridi
2008a). This new way of understanding human nature and the place of humans in the world raises a number of new ethical questions
(Floridi
2008d).
In the 1940s and 1950s, the philosopher/scientist Norbert Wiener was a seminal figure in the development of today's informational understanding of the Universe and of the role of humans within it. In addition, his
‘cybernetic’ analyses of human nature and society led him to create philosophical foundations for the ethical field that is
currently called
information and computer ethics. Indeed, Wiener played a double role in the information revolution: on the one hand, he helped to generate the necessary
technology for that revolution; and, on the other hand, he provided a philosophical foundation for information and computer
ethics to help the world cope with the resulting social and ethical consequences. Even
before Wiener, seeds of today's informational understanding of the Universe and of human nature could be found in the works of earlier philosophers – for example, in Aristotle's metaphysics, his theory of perception and his account of human thinking (see Bynum
1986). Given these ideas, the present chapter has three goals:
(1) It examines some metaphysical assumptions of Aristotle and Wiener that can be seen as philosophical roots of today's information
and computer ethics.
(2) It describes some milestones in information and computer ethics from Wiener's contributions to the present day.
(3) It briefly describes Floridi's new ‘macroethics’ (his term), which he calls information ethics (henceforth ‘IE’, to distinguish Floridi's macroethics from the general field of information ethics – a field that includes, for example, agent ethics, computer ethics, Internet ethics, journalism ethics, library ethics, bioengineering ethics and neurotechnology ethics, to name some of its significant parts).
More than two thousand years ago, Aristotle developed a detailed theory of the nature of the Universe and of the individual
objects within it. Some of the questions that he asked himself were these: What are individual objects made of? What do all
animals have in common? What distinguishes human beings from all the other animals? How does an animal acquire information
from objects outside of its body, and what happens to that information once it gets inside of the animal's body? Aristotle's
answers to these and related questions are remarkably similar to many of the answers we would give today, including answers
closely related to the informational understanding of the Universe and of human beings.
According to Aristotle (
Metaphysics), individual entities in the
Universe consist of
matter and
form. Matter is the underlying substrate of which an entity is made, while form is ‘taken on’ by the matter, thereby making an individual thing what it is. Matter and form always occur together; neither exists without the other – so there is no matter that is formless, and there is no form that is not ‘enmattered’. Consider, for example, what makes a house a house: a heap
or collection of bricks, wood, glass, etc. does not constitute a house. Such materials must be assembled into a certain
form to create the house. The form is
essential to the house; it is what makes the house a house. The particular matter out of which the house happens to be made is, in
some sense, accidental. One could replace the original bricks with other bricks and
the original wood with different wood and it would still be a house. One could even replace the bricks with wooden blocks
and the wood with appropriate pieces of plastic and it would still be a house.
The form of a house is what makes it a house and enables it to fulfil the functions of a house.
According to Aristotle, the same is true for living things, including people. Aristotle himself, for example, remains Aristotle
over time, even though the matter out of which he is made is constantly changing through metabolic processes, such as breathing,
eating, digesting and perspiring. The form of Aristotle is essential to his existence in the world, and to the functions he is capable of fulfilling; but the particular
bits of matter out of which Aristotle happens to be made at any given moment are incidental. ‘Form’ in this case is much more
than just the shape of Aristotle's body and its parts; it includes all the other essential qualities that make him what he
is, and these are ‘enmattered’ in his body.
Aristotle distinguished animals from plants by the fact that animals can perceive, while plants cannot. During perception, information from objects outside of an animal gets transferred into the animal. How is this possible? According
to Aristotle (On the Motion of Animals, On the Soul), nature has so structured the sense organs of animals that they are able to ‘take in the form [of what is being perceived]
without the matter’. Eyes take in forms like colours and shapes, ears take in the pitch and loudness of sounds, and so on.
How is it possible to ‘take in forms without the matter’? Aristotle used the analogy of pressing a metal ring into soft wax.
The wax is able to ‘take on’ the shape and size of the ring without taking in the metal out of which the ring is made.
According to Aristotle, perception accomplishes a similar result within the body of an animal: in the process of perception,
the form of an object is carried to the animal's sense organ by means of a medium such as air or water. The sense organ then
takes in the form that is being carried by the medium, but it does not take in the matter of the object of perception – nor even the matter
of the medium. From the sense organ, the form is transferred, through the animal's body, to a region where all perceptual
forms are interpreted, thereby creating
percepts. Percepts contain information from the object of perception, information which interacts with the physiology of the animal,
generating pleasure or pain and initiating the animal's reaction to the perceived object. After a perceived object is no longer
present, most animals nevertheless retain perceptual traces – ‘
phantasms’ – inside their bodies. These lingering physical entities are fainter versions of original percepts, containing similar information,
and they typically have a similar effect upon an animal's behaviour (Bynum
1986).
In human beings, as in other animals, responses to perceptions are sometimes ‘automatic’. However, humans can choose, instead,
to control their responses using
reasoning. According to Aristotle, there are two kinds of
reasoning:
theoretical reasoning, which generates knowledge and beliefs about the world, and
practical reasoning, which generates choices and actions. Excellent practical reasoning is the kind that leads to
virtuous actions. The ability of humans to choose their actions and control their behaviour, using theoretical and practical reasoning,
distinguishes humans from all the other animals, according to Aristotle, and makes them ethically responsible for what they
do and what they become. In
On the Soul, Aristotle describes thinking and reasoning as processes that either
are the physical manipulation of phantasms, or at least
require the presence of such manipulation. For Aristotle, therefore, thinking and reasoning require sophisticated information processing that
is dependent upon the physiology of the human
body.
For centuries, philosophers have been debating what Aristotle meant by metaphysical terms like ‘matter’, ‘form’, ‘substance’
and others. There is no need for us to join that debate here; but it is worth noting, in the present context, that what Aristotle
called a
‘form’ either
is, or at least
includes, information from whatever object is being perceived. For purposes of the present chapter, it is of interest to note that
the underlying metaphysics, physiology and psychology of Aristotle's theory of human nature – when interpreted as described
here – yield the following conclusions:
(1) Individual entities in the Universe are made out of matter and forms, and forms either are or at the very least contain information. So matter and information are significant components of every physical thing in the Universe.
(2) Aristotle's account of perception assumes that all animals are information-processing beings whose bodily structures account for the ways in which information gets processed within them.
(3) Information processing within an animal's body initiates and controls the animal's behaviour.
(4) Like other animals, humans are information-processing beings; but unlike other animals, humans have sophisticated information-processing
capabilities called theoretical reasoning and practical reasoning, and these make ethics possible.
Aristotle's metaphysics, physics, biology, physiology, theory of animal behaviour and account of human reasoning are more than twenty-three hundred years old. They provided Aristotle with a rich scientific and philosophical foundation for developing his virtue ethics theory. In addition, as indicated above, they also contained a number of ideas suggestive of today's informational
theory of the nature of the Universe and of human beings. A pioneer in the development of today's theory was the philosopher/scientist
Norbert
Wiener, whose achievements in cybernetics, communication theory, computer design, and related fields, in the 1940s and 1950s,
helped to bring about the current
‘information age’. Wiener made the following important assumptions:
(1) Objects and processes in the Universe are made of matter/energy and information.
(2) All animals are information-processing beings whose behaviour depends centrally upon such processing.
(3) Humans, unlike other animals, have bodies that make the information processing in their central nervous systems especially sophisticated.
Wiener combined these assumptions with his extensive knowledge in philosophy, physics, biology, communication theory, information
science and psychology. The result was an impressive philosophical and scientific foundation for today's
information ethics and computer ethics theories. (See Wiener
1948,
1950,
1954,
1964.)
Wiener's assumptions about the ultimate nature of the Universe included his view that information is physical – subject to the laws of nature and measurable by science. The sort of information that he had in mind is sometimes called
‘Shannon information’, which is named for Claude Shannon, who had been a student and colleague of Wiener's. Shannon information
is the syntactic sort that is carried in telephone wires, TV cables and radio signals. It is the kind of information that
computer chips process and DNA encodes within the cells of all living organisms. Wiener believed that such information, even
though it is physical, is neither matter nor energy. Thus, while discussing thinking as information processing, he noted that a brain or a computer does not secrete thought ‘as the liver does bile’, as the earlier materialists claimed, nor does it put it out in the form of energy, as the muscle puts out its activity: ‘Information is information, not matter or energy. No materialism which does not admit this can survive at the present day’ (Wiener 1948).
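Wiener's claim that such information is ‘measurable by science’ can be made precise. The standard measure – due to Shannon, and supplied here as an editorial gloss rather than as part of Wiener's text – quantifies the information produced by a source whose possible outcomes occur with probabilities p_i:

```latex
H \;=\; -\sum_{i} p_i \log_2 p_i \quad \text{bits}
```

For example, a fair coin toss (p = 1/2 for each outcome) carries exactly one bit, while a coin that always lands heads carries none; it is this quantity, in measurable amounts, that telephone wires, TV cables and radio signals carry.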
According to Wiener, matter-energy and Shannon information are different physical phenomena, but neither can exist without the other. So-called
‘physical objects’ – including living organisms – are actually persisting patterns of Shannon information encoded within an
ever-changing flux of matter-energy. Every physical process is a mixing and mingling of matter-energy and information – a
creative ‘coming-to-be’ and destructive ‘fading away’ – as old patterns of matter-energy-encoded information erode and new
patterns emerge.
A related aspect of Wiener's metaphysics is his account of human nature and personal identity. Human beings, too, are
patterns of information that persist through changes in matter-energy. Thus, in spite of continuous exchanges of matter-energy between a person's body and the world outside the body (via respiration,
perspiration, excretion, and so on), the complex organization or
form of a person – that is,
the pattern of Shannon information encoded within a person's body – is maintained, thereby preserving life, functionality and personal identity. Thus, Wiener stated:
We are but whirlpools in a river of ever-flowing water. We are not stuff that abides, but patterns that perpetuate themselves.
The individuality of the body is that of a flame. . .of a form rather than of a bit of substance.
To use today's language, humans are ‘information objects’ whose personal identity is tied to internal information processing
and to persisting patterns of Shannon information within their bodies. Personal identity is not dependent upon specific bits
of matter-energy that happen to make up one's body at any given moment. Through breathing, drinking, eating and other metabolic
processes, the matter-energy that makes up one's body is constantly changing. Nevertheless one remains the same person over
time because the pattern of Shannon information encoded within the body remains essentially the same.
With this idea in mind, Wiener engaged in a remarkable thought experiment: If one could encode, in a telegraph message, the
entire exquisitely complex Shannon-information pattern of a person's body, and then use that encoded pattern to reconstitute the
person's body from appropriate atoms at the receiving end of the message, people could travel instantly from place to place
via telegraph. Wiener noted that this idea raises knotty philosophical questions regarding, not only personal identity, but
also ‘forking’ from one person into two, ‘split’ personalities, survival of the self after the death of one's body, and a
number of others (Wiener
1950,
Ch. VI,
1954,
Ch. V).
An additional aspect of Wiener's metaphysics is his account of good and evil within nature. He used the traditional distinction between ‘natural evil’, caused by the forces of nature (for example, earthquakes,
volcanoes, diseases, floods, tornadoes and physical decay), and ‘moral evil’ (for example, human-caused death, injury, pain
and sorrow). The ultimate natural evil, according to Wiener, is entropy – the loss of useful Shannon information and useful energy that occurs in virtually every physical change. According to the
second law of thermodynamics, essentially all physical changes decrease available Shannon information and available energy.
As a result, everything that ever comes into existence will decay and be destroyed. This includes anything that a person might
value, such as one's life, wealth and happiness; great works of art; magnificent architectural structures; cities, cultures
and civilizations; the sun and moon and stars. None of these can survive the decay and destruction of entropy – the loss of
available Shannon information – for everything in the Universe is subject to the second law of thermodynamics.
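The physics behind this claim can be stated compactly (a standard textbook formulation, supplied by the editor rather than by Wiener): the entropy of an isolated system never decreases, and Boltzmann's relation ties that entropy to the number W of microscopic arrangements compatible with a system's observed state –

```latex
\Delta S \;\ge\; 0 \quad \text{(isolated system)}, \qquad S \;=\; k_B \ln W .
```

As W grows, more information about the system's exact microstate becomes inaccessible to us; in this sense every irreversible physical change converts available, usable information into disorder – Wiener's ultimate ‘natural evil’.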
In his book,
Cybernetics: or Control and Communication in the Animal and the Machine (
1948), Wiener viewed animals and computerized machines as
cybernetic entities – that is, as dynamic systems with component parts that communicate with each other internally, and also with the outside
world, by means of various channels of communication and feedback loops. Such communication helps to unify an animal or a
machine into a single functioning entity.
Wiener also viewed communities and whole societies as cybernetic entities. Beginning in 1950, with the publication of The Human Use of Human Beings, he assumed that cybernetic machines would join humans as active participants in society. For example, some machines would participate along with humans in the vital activity of creating, sending and receiving the
messages that constitute the ‘cement’ that binds society together:
It is the thesis of this book that society can only be understood through a study of the messages and the communication facilities
which belong to it; and that in the future development of these messages and communication facilities, messages between man
and machines, between machines and man, and between machine and machine, are destined to play an ever-increasing part.
In addition, Wiener predicted that certain machines, namely digital computers with robotic appendages, would someday participate
in the workplace, replacing thousands of human factory workers, both blue collar and white collar. He also foresaw artificial
limbs and other body parts – cybernetic ‘prostheses’ – that would be merged with human bodies to help persons with disabilities
– or even to endow able-bodied persons with unprecedented powers. Today, we would say that Wiener envisioned societies in
which
‘cyborgs’ would play a significant role and would have ethical policies to govern their behaviour. In summary, Wiener foresaw
what he called the
‘Machine Age’ or the ‘Automatic Age’ in which machines would be integrated into the social fabric, as well as the physical
environment. They would create, send and receive messages; gather information from the external world; make decisions; take
actions; reproduce themselves; and be merged with human bodies to create beings with vast new powers. By the early 1960s,
these were not just speculations by Wiener, because he himself had already designed or witnessed early versions of such devices: game-playing machines (checkers, chess, war, business), artificial hands whose motors are controlled by the wearer's brain, and self-reproducing machines such as non-linear transducers. (See especially Wiener
1964.) Wiener's predictions about future societies and their machines caused others to raise various questions about the machines
that Wiener envisioned:
Will they be ‘alive’? Will they have minds? Will they be conscious? Wiener considered such questions to be vague semantic
quibbles, rather than genuine scientific issues:
Now that certain analogies of behaviour are being observed between the machine and the living organism, the problem as to
whether the machine is alive or not is, for our purposes, semantic and we are at liberty to answer it one way or the other
as best suits our convenience.
Similarly, answers to questions about machine consciousness, thinking or purpose are pragmatic choices, according to Wiener, although he did believe that questions about
the ‘intellectual capacities’ of machines, when appropriately stated, could be genuine scientific questions:
Cybernetics takes the view that the structure of the machine or of the organism is an index of the performance that may be
expected from it. . .
Theoretically, if we could build a machine whose mechanical structure duplicated human physiology, then we could have a machine
whose intellectual capacities would duplicate those of human beings.
(Wiener
1954, p. 57, italics in the original)
By viewing animals and cybernetic machines in the same way – namely, as dynamic systems with internal communications and feedback
loops, exchanging information with the outside world, and thereby adjusting to changes in the world – Wiener began to view
traditional distinctions between mechanism and vitalism, living and non-living, human and machine as pragmatic choices, rather
than unbreachable metaphysical ‘walls’ between kinds of beings.
Wiener's presuppositions about the ultimate nature of all entities in the Universe, that
they consist of information encoded in matter-energy, anticipated later research and discoveries in physics. During the past two decades, for example, physicists – beginning
with Princeton's John Wheeler (Wheeler
1990) – have been developing a
‘theory of everything’ which presupposes that the Universe is fundamentally informational, that every physical ‘object’ or
entity is, in reality, a pattern or ‘flow’ of
Shannon information encoded in matter-energy. Wheeler's hypothesis has been studied and furthered by other scientists in recent
years, and their findings support Wiener's metaphysical presuppositions. As explained by MIT professor Seth Lloyd:
The universe is the biggest thing there is and the bit is the smallest possible chunk of information. The universe is made
of bits. Every molecule, atom and
elementary particle registers bits of information. Every interaction between those pieces of the universe processes that information
by altering those bits.
I suggest thinking about the world not simply as a machine, but as a machine that processes information. In this paradigm, there are two primary quantities, energy and information, standing on an equal footing and playing off
each other.
Science writer Charles Seife notes that ‘information is physical’ and so,
[Shannon] Information is not just an abstract concept, and it is not just facts or figures, dates or names. It is a concrete
property of matter and energy that is quantifiable and measurable. It is every bit as real as the weight of a chunk of lead
or the energy stored in an atomic warhead, and just like mass and energy, information is subject to a set of physical laws
that dictate how it can behave – how information can be manipulated, transferred, duplicated, erased, or destroyed. And everything
in the universe must obey the laws of information, because everything in the universe is shaped by the information it contains.
In addition, the matter-energy-encoded Shannon information that constitutes every existing entity in the Universe appears
to be
digital and finite. Wheeler's one-time student Jacob Bekenstein, for example, discovered the so-called ‘Bekenstein bound’, which is the upper limit of the amount of Shannon information that can be contained within a given volume of space. The maximum number of information units (‘bits’) that can fit into any volume is fixed by the area of the boundary enclosing that space – one bit per four ‘Planck squares’ of area (Bekenstein 2003). In summary, then, the matter-energy-encoded information that constitutes all the existing entities in the Universe appears
to be finite and digital; and only so much of it can be contained within a specific volume of space. (For an alternative non-digital
view, see Floridi
2008c.)
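The ‘one bit per four Planck squares’ statement can be written out explicitly (the formula below is the editor's rendering of that statement, not a derivation given in this chapter): the maximal information content of a region is fixed by the area A of its boundary, measured in units of the Planck length:

```latex
I_{\max} \;=\; \frac{A}{4\,\ell_P^{2}} \ \text{bits},
\qquad \ell_P \;=\; \sqrt{\frac{\hbar G}{c^{3}}} \;\approx\; 1.6 \times 10^{-35}\ \text{m}.
```

Because the Planck length is so tiny, the bound is astronomically large – a sphere one metre across could hold roughly 10^69 bits – but it is, as the text stresses, finite.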
Norbert Wiener's intuitions or assumptions about the nature of the Universe – now supported by important developments in contemporary
physics – provide a new account of the ultimate nature of the Universe, a new understanding of life and human nature, and
indeed a new view of every existing entity as an
‘information object’ or an ‘information process’. Consider, for example, living organisms: Genes in their cells store and
process Shannon information and use it to create the ‘stuff of life’, such as
DNA, RNA, proteins and amino acids.
Nervous systems of animals take in, store and process Shannon information, resulting in bodily motions, perceptions, emotions
and – at least in the case of humans – thinking and reasoning. And, as Charles Seife points out,
Each creature on earth is a creature of information; information sits at the center of our cells, and information rattles
around in our brains. . .Every particle in the universe, every electron, every atom, every particle not yet discovered, is packed
with information. . .that can be transferred, processed, and dissipated. Each star in the universe, each one of the countless
galaxies in the heavens, is packed full of information, information that can escape and travel. That information is always
flowing, moving from place to place, spreading throughout the cosmos.
In addition to developing a metaphysical and scientific foundation for information ethics, Wiener made a number of early contributions
to the applied ethics field that later would be called ‘computer ethics’ (see
Section 2.7 below). However, he did not see himself as developing a new branch of applied ethics, so he did not coin a name like ‘computer
ethics’ or ‘cybernetics ethics’. He simply raised ethical concerns, and offered suggested solutions, about the likely impacts
of computers and other cybernetic machines. One of his chief worries was that cybernetic science was so powerful and flexible
that it placed human beings ‘in a position to construct artificial machines of almost any degree of elaborateness of performance’
(Wiener
1948, p. 27). Cybernetics, therefore, would provide vast new powers that could be used for good, but might also be used in ethically
disastrous ways:
Long before Nagasaki and the public awareness of the atomic bomb, it had occurred to me that we were here in the presence
of another social potentiality of unheard-of importance for good and for evil.
Wiener worried that factory owners might replace human workers with automated machines and bring about massive
unemployment. Cybernetics, he said, could be used to create mechanical ‘slaves’ that would force human workers to compete
for jobs against ‘slave labor’. Thus a new industrial revolution could ‘devalue the human brain’ the way the original industrial
revolution devalued human physical labour. Instead of facing ‘dark satanic mills’, human workers might lose their jobs to
cybernetic machines (Wiener
1948, pp. 27–28; see also
1950,
Ch. X).
Another serious worry for Wiener was the creation of
machines that can learn and make decisions on their own. Some of them simply played games, like checkers and chess, but others had more serious applications, like economic
planning or even military planning. As early as
1950, Wiener expressed concern that government computers might already be using John von Neumann's mathematical game theory to
make war plans, including plans for the use of nuclear weapons. He warned against accepting machine-made decisions too easily.
Of special concern would be machines that
can learn before making their decisions, because such decisions might turn out to be ethically terrible:
For the man who is not aware of this, to throw the problem of his responsibility on the machine, whether it can learn or not,
is to cast his responsibility to the winds, and find it coming back seated on the whirlwind.
Nevertheless, Wiener's view of cybernetic machines was often positive, rather than negative. Such machines, he said, provide choices between good and evil, and he believed that in the future they would often bring about wonderful results. He himself participated in
experiments with machines that mimic human muscle-control disorders, so that such disorders could be better understood and
more successfully treated. He also worked to create a cybernetic ‘hearing glove’ that would help a deaf person compensate for hearing loss by generating appropriate vibrations in the wearer's hand (Wiener
1950,
Ch. X).
Another positive use for cybernetic devices that Wiener envisioned – in this case, including a global electronic communications
network – was the possibility of
working on the job while being hundreds or even thousands of miles away from the job site (Wiener
1950,
Ch. VI). This would be possible, he said, because
where a man's word goes, and where his power of perception goes, to that point his control and in a sense his physical existence
is extended. To see the whole world and to give commands to the whole world is almost the same as to be everywhere.
Wiener illustrated this point with a thought experiment: He imagined an architect in Europe supervising the day-to-day construction
of a building in the United States without ever physically travelling to America. Instead, the architect would send and receive
‘Ultrafax’ facsimiles of plans and photos, and he would interact with the work crew by telephone and teletype machine. This
thought experiment provided perhaps the first example of ‘teleworking’ and a community with some geographically separated
members who participate in the community ‘virtually’.
Decades ago, in addition to the few computer ethics topics mentioned here, Wiener analysed, or at least touched upon, a wide variety of issues that are still considered ‘contemporary’ today – for example, agent ethics, artificial intelligence, machine psychology, computers and security, computers and religion, computers and learning, computers for persons with disabilities, responsibilities of computer professionals, and many other topics as well. (See Bynum
2000b,
2004,
2005.)
Wiener was far ahead of other thinkers in his ability to foresee social and ethical impacts of cybernetics and electronic
computers. As a result, his pioneering achievements in computer ethics and information ethics, in the 1940s and 1950s, were
essentially ignored until the late 1990s. In the meantime, growing computer ethics challenges – such as invasions of privacy,
threats to security and the appearance of
computer-enabled crimes – began to be noticed by public policy makers and the general public. In the late 1960s, for example,
Donn Parker – a computer scientist at SRI International – became concerned about the growing number of computer professionals
who were caught committing serious crimes with the help of their computer expertise. Parker said, ‘When some people enter
the computer center, they leave their ethics at the door.’ He began to study unethical and illegal activities of computer
professionals, and gather example cases of computer-enabled crimes. In 1968, he published the article, ‘Rules of Ethics in
Information Processing’, in
Communications of the ACM; and he headed the development of the first Code of Professional Conduct for the Association for Computing Machinery (eventually
adopted by the ACM membership in 1973). Later, he published books and articles on computer crime. (See, for example, Parker
1979, Parker
et al.
1990.)
It was not until the second half of the 1970s that the name ‘computer ethics’ was coined by Walter Maner, then a faculty member in philosophy at Old Dominion University. While teaching medical ethics, he noticed that ethical problems were often worsened or significantly altered when computing technology became involved. It even seemed to Maner that computers might create new ethical problems that had never been seen before. He examined
this same phenomenon in areas other than medicine and concluded that a new branch of applied ethics, modelled upon medical
ethics or business ethics, should be recognized by philosophers. He coined the name ‘computer ethics’ to refer to this proposed
new field, and he developed an experimental course designed primarily for students of computer science. The course was a success,
and Maner started to teach computer ethics on a regular basis.
Using his teaching experiences and his research in the proposed new field, Maner created a
‘Starter Kit in Computer Ethics’ (Maner 1978) and provided copies of it to attendees of workshops that he ran and speeches
that he gave at philosophy conferences and computing conferences in America. His ‘Kit’ contained curriculum materials and
pedagogical advice for university teachers. It
also included suggested course descriptions for university catalogues, a rationale for offering such a course in a university,
a list of course objectives, some teaching tips, and discussions of topics like privacy and confidentiality, computer crime,
computer decisions, technological dependence and professional codes of ethics. In 1980, Helvetia Press and the National Information
and Resource Center for Teaching Philosophy published Maner's computer ethics ‘starter kit’ as a monograph (Maner
1980) that was widely disseminated to colleges and universities in America and a number of other
countries.
In developing the first university computer ethics course, Maner defined the field as a branch of applied ethics that would
study problems ‘aggravated, transformed or created by computer technology’. He believed that a number of already existing
ethical problems are worsened by the involvement of computer technology, while other,
new and unique problems are generated by such technology. A colleague in the Philosophy Department, Deborah Johnson, became interested in Maner's proposed new
branch of applied ethics. She agreed with him that computer technology can aggravate or ‘give a new twist’ to old ethical
problems, but she was sceptical of the notion that computers can generate wholly new ethical problems that have never been
seen before. In discussions with Maner, during which his suggested ‘unique’ cases were examined, Johnson saw
new examples of old issues regarding privacy, ownership, just distribution of power, and so on, while Maner saw
problems that would never have arisen if computers had not been invented. These early discussions between Maner and Johnson eventually led to conference presentations and publications that launched
a decades-long conversation – the ‘uniqueness debate’ – among computer ethics scholars, beginning with Maner and Johnson themselves.
(See
Chapter 3 below.)
Several years after the ‘uniqueness’ discussions had begun between Johnson and Maner, Johnson published the first major computer
ethics textbook (Johnson
1985). There she noted that computers ‘pose new versions of standard moral problems and moral dilemmas, exacerbating the old problems,
and forcing us to apply ordinary moral norms in uncharted realms’ (Johnson
1985, p. 1). She did not, however, grant Maner's claim that computers create
wholly new ethical problems. Her highly successful textbook set the research agenda in the field of computer ethics for more than a
decade, including topics such as ownership of software and intellectual property, computing and privacy,
responsibility of computer professionals, and the just distribution of technology and human power. In later editions (1994,
2001), Johnson added new ethical topics, such as ‘hacking’ into people's computers without
their permission, computer technology for persons with disabilities, and the Internet's impact upon democracy.
In the later editions of her textbook, Johnson added to the ongoing ‘uniqueness debate’ with Maner and other scholars. She
granted that computer technology has created new kinds of entities – such as software and databases – and new ways to ‘instrument’
human actions. These innovations, she said, do lead to new, unique, specific ethical questions – for example, ‘Should ownership of software be protected by law?’ and ‘Do huge databases of personal information
threaten privacy?’ She insisted, in both later editions of her textbook, however, that the new specific ethical questions are merely ‘new species of old moral issues’ like protection of human privacy or ownership of intellectual property. They are not, she said, wholly new ethical problems
requiring additions to traditional ethical theories, as Maner had claimed.
A watershed year in the history of computer ethics was
1985, not only because of the publication of Johnson's agenda-setting textbook, but also because of the appearance of James Moor's
classic paper ‘What is Computer Ethics?’ (Moor
1985). In that paper, Moor offered an account of the
nature of computer ethics and an explanation of
why computer technology generates so many ethical questions compared to other technologies. Computing technology is genuinely
revolutionary, said Moor, because it is
‘logically malleable’:
Computers are logically malleable in that they can be shaped and molded to do any activity that can be characterized in terms of inputs, outputs and connecting logical operations. . .Because logic applies everywhere, the potential applications of computer technology appear limitless. The computer is the nearest thing we have to a universal tool. Indeed, the limits of computers are largely the limits of our own creativity.
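Moor's point lends itself to a small illustration. The sketch below is the editor's toy example, not Moor's: a single fixed evaluator whose behaviour is determined entirely by a supplied ‘program’ of inputs, outputs and connecting logical operations. Reconfiguring the program – not the machine – yields a different activity.

```python
# A toy illustration of 'logical malleability' (editor's sketch, not Moor's):
# one fixed evaluator, reconfigured purely by a table of logical operations.
from typing import Callable, Dict, List, Tuple

# A gate names its output wire, its operation, and its input wires.
Gate = Tuple[str, Callable[[int, int], int], List[str]]

def run(program: List[Gate], inputs: Dict[str, int]) -> Dict[str, int]:
    """Evaluate any circuit description with the same unchanging machinery."""
    wires = dict(inputs)
    for out, op, ins in program:
        wires[out] = op(*(wires[w] for w in ins))
    return wires

AND = lambda a, b: a & b
OR  = lambda a, b: a | b
XOR = lambda a, b: a ^ b

# The *same* evaluator 'shaped' into a half-adder by its program alone.
half_adder = [("sum", XOR, ["a", "b"]), ("carry", AND, ["a", "b"])]
print(run(half_adder, {"a": 1, "b": 1}))  # sum = 0, carry = 1

# Swap in a different program and the machine does something else entirely.
majority = [("ab", AND, ["a", "b"]), ("bc", AND, ["b", "c"]),
            ("ac", AND, ["a", "c"]), ("t", OR, ["ab", "bc"]),
            ("vote", OR, ["t", "ac"])]
print(run(majority, {"a": 1, "b": 0, "c": 1})["vote"])  # 1 (two of three)
```

Nothing in run knows about adders or voting; the ‘limits of the machine’ here are only the limits of the program writer's creativity, which is Moor's point.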
Logical malleability makes it possible for people to do a wide variety of things that they never were able to do before. Because
such things were not done in the past, it is possible, perhaps likely, that there is no law or standard of good practice or
ethical rule to govern them. Moor calls such cases
‘policy vacuums’, and these can sometimes lead to ‘conceptual muddles’:
A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used.
Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for
conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine
what we should do in such cases, that is, formulate policies to guide our actions. . .One difficulty is that along with a policy
vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection
reveals a conceptual muddle. What is needed in such cases is an analysis that provides a coherent conceptual framework within
which to formulate a policy for action.
This explanation of the nature and cause of computer ethics problems was found to be insightful and helpful by many thinkers.
It provided a way to understand and deal with emerging computer ethics problems, and it quickly became the most influential
account of the nature of computer ethics among a growing number of scholars.
More than a decade later, Moor significantly enhanced his theory of computer ethics (Moor
1998). For example, he introduced the notion of
‘core values’ – such as
life,
health,
happiness,
security,
resources,
opportunities and
knowledge – which are so important to the continued survival of a community that essentially all communities must value them. If a
community did not value these things, it would likely cease to exist. With the help of ‘core values’ and some ethical ideas
from Bernard Gert (Gert
1998), Moor later added an account of justice, which he called ‘just consequentialism’, combining deontological and consequentialist
ideas (Moor
1999).
Moor's way of analysing and resolving computer ethics issues was both creative and practical. It provided a broad perspective
on the nature of the information revolution; and, in addition, by using effective ideas like ‘logical malleability’, ‘policy
vacuums’, ‘conceptual muddles’, ‘core values’ and ‘just consequentialism’, he provided a very effective problem-solving method:
(1) Identify a policy vacuum generated by computing technology.
(2) Eliminate any conceptual muddles.
(3) Use core values and the ethical resources of ‘just consequentialism’ to revise existing, but inadequate, policies or to create
new policies that will fill the vacuum and thereby resolve the original ethical problem.
A common thread that runs through much of the history of computer ethics, from Norbert Wiener onwards, is concern for the
protection and advancement of major human values like life, health, security, freedom, knowledge, happiness, resources, power
and opportunity. Wiener, for example, focused attention on what he called ‘great human values’ like freedom, opportunity,
security and happiness; and most of the specific examples and cases included in his relevant works are examples of defending
or advancing such values – e.g., preserving security, resources and opportunities for factory workers by preventing massive
unemployment from robotic factories, or avoiding threats to national security from
decision-making war-game machines. In Moor's
computer ethics theory, respect for ‘core values’ is a central aspect of his
‘just consequentialism’ theory of justice, as well as his influential analysis of human
privacy. The fruitfulness of the ‘human-values approach’ to computer ethics is reflected in the fact that it has served as
the organizing theme of major computer-ethics conferences, such as the 1991 watershed
National Conference on Computing and Values that was organized around impacts of computing upon security, property, privacy,
knowledge, freedom and opportunities. In the late 1990s, a new approach to computer ethics, ‘value-sensitive computer design’, emerged (see Chapter 5 in this book). It is based upon the insight that human values can be ‘embedded’ within technology; potential computer-ethics problems can therefore be avoided, while a new technology is still under development, by anticipating possible harm to human values and designing the technology from the very beginning in ways that prevent such harm. (See, for example, Friedman and Nissenbaum
1996, Friedman
1997, Brey
2000, Introna and Nissenbaum
2000, Introna
2005, Flanagan
et al.
2008.)
By the mid 1990s, the information revolution, which Wiener had distantly envisioned fifty years before, was well under way.
A vast diversity of information and communication artefacts had been invented and were proliferating across the globe: mainframe
computers; mini, desktop and laptop computers; software; databases; word processors; spreadsheets; electronic games; the Internet;
email; and on, and on. Robots had joined or
replaced human workers in some factories; some people had become
‘telecommuters’ working from home online, instead of travelling to an office or a factory;
‘virtual communities’, with geographically dispersed members, were multiplying; and decision-making machines were replacing
certain people in medical centres, banks, airplane cockpits, classrooms, etc. At the same time, influential
physicists – like John Wheeler at Princeton (Wheeler
1990) – had begun to argue that the Universe is made of information.
In this context, philosopher Luciano Floridi launched an ambitious project to create a new philosophical paradigm, which he
named ‘the philosophy of information’ (henceforth PI). He believed that other paradigms in philosophy – such as analytic philosophy, phenomenology, existentialism and so on – had become ‘scholastic’, and therefore stagnant as intellectual enterprises:
Scholasticism, understood as an intellectual topology rather than a scholarly category, represents the inborn inertia of a
conceptual system, when not its rampant resistance to innovation. It is
institutionalized philosophy at its worst. . .It manifests itself as a pedantic and often intolerant adherence to some discourse (teachings, methods, values,
viewpoints, canons of authors, positions, theories, or selections of problems, etc.), set by a particular group (a philosopher,
a school of thought, a movement, a trend, etc.), at the expense of alternatives, which are ignored or opposed.
Philosophy, said Floridi,
can flourish only by constantly re-engineering itself. A philosophy that is not timely but timeless is not an impossible philosophia perennis, which claims universal validity over past and future intellectual positions, but a stagnant philosophy.
As an alternative to scholastic philosophical systems and communities, Floridi set for himself the ambitious task of creating
a new philosophical paradigm which he believed would someday become part of the ‘bedrock’ of philosophy (philosophia prima). At the heart of his new paradigm was to be the concept of information, a concept with multiple meanings,
a concept as fundamental and important as being, knowledge, life, intelligence, meaning, or good and evil – all pivotal concepts
with which it is interdependent – and so equally worthy of autonomous investigation. It is also a more impoverished concept,
in terms of which the others can be expressed and interrelated, when not defined.
At first sight, the metaphysical presuppositions of Floridi's PI paradigm seem much like those of Wiener's metaphysics.
For example, both assume that objects in the Universe are made of information and both consider
entropy to be a fundamental evil. Such initial impressions, however, are in need of further qualification because the kind of information
that Wiener had in mind is Shannon information, which is syntactic, but not semantic, and it is subject to laws of physics
such as the
second law of thermodynamics. Floridi's fundamental information, on the contrary, is ‘strongly semantic’ and not subject to
the laws of physics; and Floridi's entropy is not the thermodynamic kind that Wiener presupposed, but is synonymous with Non-Being.
The informational universe that Wiener had in mind is the materialistic one that physicists study; while Floridi's universe,
which he named
‘the infosphere’, is Platonic and Spinozistic and includes ‘the semantic environment in which millions of people spend their
time nowadays’ (Floridi
2002b, p. 134). It includes not only material objects understood informationally, but also entities, like Platonic abstractions
or possible beings, that are
not subject to the laws of physics (Floridi
2008e, p. 12).
A major component of Floridi's new philosophical paradigm is the ethical theory that he calls INFORMATION ETHICS (henceforth IE, to distinguish Floridi's theory from the more general field of information ethics in the broad sense). Floridi describes his IE theory as a
‘macroethics’ (his word),
similar to virtue ethics, deontologism, consequentialism and contractualism in that it is intended to be applicable to all ethical
situations. On the other hand, IE is
different from these traditional theories because it is
not intended to replace them, but rather to supplement them with further ethical considerations that can sometimes be overridden by more traditional ethical
concerns (Floridi
2005b).
What are the fundamental components of IE? According to Floridi, every existing entity in the
Universe, when viewed from a certain ‘level of abstraction’, can be construed as an ‘informational object’ with a characteristic
data structure that constitutes its very nature. And, for this reason, the Universe considered as a whole can be called ‘the
infosphere’. Each entity in the infosphere can be damaged or destroyed by altering its characteristic data structure, thereby
preventing it from ‘flourishing’. Such damage or destruction Floridi calls
‘entropy’, which results in the ‘impoverishment of the infosphere’. Entropy, therefore, constitutes evil that should be avoided
or minimized. With this in mind, Floridi offers four
‘fundamental principles’ of IE:
(0) entropy ought not to be caused in the infosphere (null law)
(1) entropy ought to be prevented in the infosphere
(2) entropy ought to be removed from the infosphere
(3) the flourishing of informational entities as well as the whole infosphere ought to be promoted by preserving, cultivating
and enriching their properties
By construing every existing entity as an ‘informational object’ with at least a minimal moral worth, Floridi shifts the focus
of ethical consideration away from the actions, characters and values of human agents toward the ‘evil’ (harm, dissolution,
destruction) – ‘entropy’ – suffered by objects in the infosphere. With this approach, every existing entity – humans, other
animals, organizations, plants, non-living artefacts, electronic objects in cyberspace, pieces of intellectual property, stones,
Platonic abstractions, possible beings, vanished civilizations – can be interpreted as
potential agents that affect other entities, and as
potential patients that are affected by other entities. Thus, Floridi's IE can be described as a ‘patient-based’ non-anthropocentric ethical
theory instead of the traditional ‘agent-based’
anthropocentric ethical theories like deontologism, contractualism, consequentialism and virtue theory.
The addition of Floridi's IE to traditional anthropocentric ethical theories adds a new basis for ethical judgement and fills
important ‘gaps’ left by those other theories:
(i) The Western anthropocentric ethical theories do not successfully account for a significant aspect of human ethical experience,
namely, the feeling or attitude of
respect for all of nature. Such respect or reverence has been
a significant aspect of
other Western ethical theories, like that of Spinoza or some of the Stoics, and it is an important feature of Eastern ethical theories
like those of Buddhism and Taoism (Hongladarom
2008).
(ii) The Western anthropocentric ethical theories, because they focus exclusively upon human actions, characters and values, are
not well suited to the task of ethically analysing or informing the activities of new kinds of ‘agents’ – like robots, softbots
and cyborgs – which are proliferating rapidly and playing an ever-increasing role in the information society.
Floridi's IE is an ethical theory for the information age, rooted in the science, technology and social changes that have
made the information revolution
possible.