Over the last thirty or so years, equality legislation has gradually been put in place in many Western countries.
Equal treatment of men and women was a founding principle of the European Economic Community at its establishment in 1957,
long before gender equality appeared in national agendas (Rees
1998, p. 1). Over this period, gender discrimination law has been enacted in the EU, with the development of extensive case law
by the European Court of Justice. Taking the UK as a paradigm example, it has been illegal to discriminate on the grounds
of gender since the mid 1970s (Equal Pay Act, Sex Discrimination Act). The European Commission designated 2007 as the ‘European
Year of Equal Opportunities for All’. This involved an information campaign, an ‘Equality Summit’ and a framework strategy on non-discrimination and equal opportunities, which aimed to ensure that EU legislation in this field is properly implemented; during the year, a number of EU member states were condemned for failing to apply EU equality legislation properly. Since the introduction
of gender equality law, legislation addressing race and disability discrimination has followed. Discrimination on the grounds of age, religion and sexuality is the latest to reach the statute books in the UK, and legislation aimed at all these forms of discrimination will shortly be wrapped into a single equalities bill. A key part of the lead-up to the ‘Equal Opportunities
for All’ year was the mandate for member states to create a single contact point responsible for creating national strategy
on discrimination covered by Article 13 of the Treaty of Amsterdam, namely discrimination on the grounds of sex, racial or
ethnic origin, religion or belief, disability, age or sexual orientation.
Despite apparent advances in recognizing and promoting equality, equality legislation is notoriously difficult to enforce
and there are concerns that single equality legislation, rather than strengthening the battle against discrimination, may,
instead, weaken it, as attention is deflected from the material experiences of different types of discrimination (Pearson
and Watson
2007). However, the aim of this chapter is not to dwell on the problems of equality legislation, as such. Whatever its pitfalls,
the legislation is based on a recognition of diversity in society and of the principle that the rights of individuals belonging to diverse groups should be preserved. In particular, an individual should reasonably expect not to be discriminated against
on the grounds of age, gender, disability, race, sexuality or religion.
The categories of age, gender, disability, race, sexuality and religion are important parts of an individual's identity. As noted above, the right to equal treatment, or at least the right not to be discriminated against, on the grounds
of one's identity is gradually becoming enshrined in legislation. However, in a wider sense, the question of how people with
diverse identities should be treated, and how their identities are to be respected and to be allowed to flourish, is an ethical
matter. The construction and flourishing of individual identity is a complex concern, as even a relatively simple example
illustrates. On an employment application in the UK, one will often be asked to tick a box on a form indicating ethnicity
and one might find that either the ethnicity with which one identifies is not represented on the form at all, or that it is
perfectly appropriate to tick two or more boxes. How personal identity is constructed and maintained is, of course, a much more complex question, which the tick boxes of a form oversimplify. Additionally, information and communications
technologies (ICTs) have an increasing part to play in the construction, maintenance and flourishing of identity, and it becomes
increasingly important to understand the ways in which ICTs are implicated in avoiding discrimination against individuals on the
grounds of identity. This is especially important given the reliance of many countries on ICTs for a wide range of services,
the provision of goods, participation and education. Such issues can be construed as a concern for computer ethics.
Hence this chapter is concerned with casting the expression and maintenance of identity in relation to ICTs as a computer
ethics problem. As a central part of personal identity involves identifying oneself, explicitly or otherwise, as a member
of a group with similar interests or more explicitly as a community, this chapter begins by reviewing research on virtual
communities, noting early enthusiasm for Internet-based communities, critiques and more recent interest in the communities
of social networking sites of Web 2.0.
Gender, disability and age are major markers of individual identity and major areas for equality legislation, given historical
inequalities between men and women, given the exclusion of disabled people from many areas of social life and given an increasing
awareness that denying access to employment and services on the grounds of age constitutes discrimination. Hence these are
key dimensions along which the maintenance of identity through the use of ICTs may be explored. The following section
considers gender identity in relation to ICTs, how aspects of gender identity are shaped through the use of ICTs and how this
may be understood in computer ethics terms. The
next section turns to disability and ICTs. The identity politics of disability is a hotly debated area. The way that so much of the World
Wide Web remains
inaccessible to disabled people, and the implications of this for identity and community, act as a focus for this section. Finally, the ways in which older people construct and maintain identity through the use of ICTs are considered.
The central theoretical theme relates to equality and how this may be maintained, allowing individuals to express identity
through living and working with ICTs. All too often equality is expressed as a liberal value without consideration of the
deeper structural causes of inequality. Expressing the desire to achieve equality, without understanding the
material, structural causes of inequality, will not go very far towards ending inequality. A good example of this can be found
in the equality and diversity policies that many organizations have developed and which underpin the equality statements on
employment advertisements. Hoque and Noon (
2004) argue that many of these equality policies are
‘empty shells’ which are not backed up by substantive action to address inequality. Having an ‘empty shell’ equality policy
is worse than having no policy at all, if it allows an organization to assume that it has done its duty simply by putting a policy in place, without implementing actions to enforce it.
Technology has a central role in understanding how inequalities are maintained. As argued below, hopes that new technology
would bring more equal ways of living and working have been widespread. One sees this in relation to the discussion on virtual
communities, gender, disability and age. Such hopes are based on a view of technology as autonomous in its trajectory, a force which
is somehow independent of society. The idea that technological progress is inevitable and that technology drives society,
rather than, perhaps, the other way round or something more mediated in between, is termed
‘technological determinism’ and it has been criticized by many authors, who argue, instead, for the inextricable intertwining
of the social and the technical (McKenzie and Wajcman
1999). In science and technology studies, technological determinism has been rejected in theoretical moves that have seen substantial
development in the
sociology of scientific knowledge (Bloor
1976), the social construction of technology (McKenzie and Wajcman
1999) and actor-network theory (Law and Hassard
1999). This research develops the idea that technology and society are not independent forces. Indeed, it is not possible to identify
something as unequivocally ‘social’ as technology is thoroughly entwined in the making and maintenance of social life. Similarly,
nothing can be regarded as purely technological. Technologies have histories. They are made in societies and designers design
in conceptions of those who are expected to use the technology. Technologies can be designed in such a way as to reinforce
inequality.
A famous example (although one which has since been questioned) is that of the design of highways in New York State. Winner
argued that the highways to Jones Beach, a desirable resort, were originally designed with low bridges so that buses could not pass underneath (Winner 1999). As buses were used by poorer people, this helped maintain Jones Beach as a middle-class resort. Therefore it is possible
to make the claim that technologies are not neutral: they have politics. If society and technology influence and define each
other then we should not necessarily expect new technologies to be free of old patterns of behaviour and old prejudices. Arguing
against determinism, and thereby taking an alternative position, means that we need not see ourselves as being swept along
by a relentless tide of technology; we may, instead, have choices in the way we use different technologies. Hence, it becomes
an important political act to make explicit, understand and evaluate the potential choices.
Since the beginnings of the Internet as a mass communication medium, there has been considerable interest in the concept of
the virtual community (Rheingold
2000). No doubt, some of this interest is fuelled by perceptions, in Western societies, of a breakdown in traditional community,
signalled in Putnam's (
2001) influential work on the diminution of social capital,
Bowling Alone. This focuses on the idea of suburban life, centred round the car and a long commute to work, where there are supposedly no safe suburban or urban spaces, where community activities have diminished, where children are no longer encouraged to play
outside, away from adult gaze, and where fears of crime and terrorism are part of a ‘moral panic’ (Critcher
2006). In times increasingly perceived as uncertain, the promise of the Internet virtual community initially appeared seductive.
If one struggled to identify oneself as part of a real community, a virtual community offered the promise of a potentially
safe alternative to the dangerous world outside.
Rheingold's (
2000)
The Virtual Community: Homesteading on the Electronic Frontier, originally published in 1993, represents a seminal early work on virtual community and is still widely discussed and reviewed
(e.g. see references in Goodwin
2004). It was based on Rheingold's own experiences in various online communities, notably WELL (Whole Earth ’Lectronic Link),
and is a readable and persuasive account. Rheingold is often thought of as utopian in his enthusiasm for virtual
communities. However he argues that his experience of WELL was always grounded in real life in that he regularly met with
WELLites in the San Francisco Bay area (Rheingold
2000, p. xvi) in the mid 1980s and this added to his view that the virtual community was an authentic community. His original
work was written in 1993 when virtual
communities were the province of relatively few enthusiasts and early adopters and when the potential of the World Wide Web
was yet to be realized. Writing in 2000, his second edition is more nuanced as to what kind of societies might emerge online.
He acknowledges a broader debate on the social impact of new media. He is less deterministic in tone, agreeing with Winner
that technology is not an autonomous force. ‘To me, the most penetrating technology critic in the largest sense today is Langdon
Winner. . .He convinced me that the notion of authentic community and civic participation through online discussion is worth close
and sceptical examination. However, I would disagree that such media can never play such a role on the same grounds that Chou
En Lai refused to analyze the impact of the French Revolution: “It's too early to tell”’ (Rheingold
2000, pp. 348–349).
A number of commentators warn against the
utopian ideals of the early conceptions of the virtual community and have argued that social boundaries constructed round
virtual communities may be used to exclude others. Notably, Winner (
1997) argued that virtual communities, far from being more inclusive, egalitarian and open than real communities, may achieve
the opposite. In real communities, all sorts of people with quite different beliefs and values have to rub along together.
Virtual communities can be exclusive, only accepting as members people with very similar beliefs, and can be used to reinforce
and magnify prejudices rather than reducing them. The risk of only talking to the like-minded is not, of course,
confined to virtual life. In real life people congregate into groups of like-minded individuals, they read newspapers which
reflect their political views and so on. Despite this, in real life, one must interact with people who do not share one's
views; one cannot necessarily avoid broadcast media where a variety of political opinions are reflected. However, virtual
communities can insulate themselves against interacting with those who do not share their views and the pace and intensity
of virtual interactions may reinforce this. Winner (
1997) argues that the technological determinism, inherent in the utopian ideal of a virtual community, may go hand in hand with
an extreme form of liberalism or libertarianism; indeed
‘cyberlibertarianism’ is the term he emphasizes. He argues that Internet communities could pose serious threats to democracy
by promoting radical self-interest groups that evade their responsibility to promote true equality.
‘Cyberlibertarianism’ is a dominant view in popular discussions of computers and networking. It is a form of extreme right-wing
liberalism in the shape of a libertarianism where no controls are imposed and the workings of the free market are assumed
to create egalitarian, democratic societal structures. Winner interprets cyberlibertarianism as a dystopian position, combining
extreme enthusiasm for computer-mediated life with ‘radical, right wing libertarian ideas about the proper definition
of freedom, social life, economics, and politics in the years to come’ (1997, p. 14). Cyberlibertarianism
or ‘technolibertarianism’ as it has been termed by Jordan and Taylor (
2004) has parallels with the
‘hacker ethic’ (Himanen
2001, Jordan and Taylor
2004, Adam
2005) in which libertarian views are expressed in the ideal that all information should be free and that equality spontaneously
emerges from such freedoms. Such a view looks to a society free from regulation, social ties and community obligations. However
appealing such a view might seem on the surface, the interests of vulnerable groups may not be taken into account and the
supposed freedoms that are offered may not be equally available to all.
Winner (
1997, pp. 14–15) identifies an adherence to technological determinism as the most central characteristic of such a cyberlibertarian
view: ‘the dynamism of digital technology is our true destiny. There is no time to pause, reflect or ask for more influence
in shaping these developments. Enormous feats of quick
adaptation are required of all of us just to respond to the requirements the new technology casts upon us each day.’ Self-interest, a claim to rights of self-determination without consideration of other groups, trust in free market capitalism and distrust of government intervention are the characteristics of this position. However, cyberlibertarian communities still adhere to a rhetoric that virtual communities will spontaneously give rise to democracy. This mirrors Ess's (
1996) concerns with naïve views of democracy that are promoted in online interactions. In any case, not all virtual communities
are interested in egalitarian ideals; some are clearly created to sustain antisocial or criminal activities.
To take an extreme example, albeit one that remains a problem, there is considerable evidence that the activities of
paedophile rings are sustained on the Internet. An individual who is a member of a ring not only has access to more material
and activities but is also given reinforcement, by other members of the ring, that such activities are acceptable. Paedophile
rings existed before the Internet, but they are much easier to create and maintain in an online world, and there is evidence
of individuals becoming paedophiles through Internet use, and indulging in activities, such as making videos of abuse, which
they might not have done were the Internet unavailable. This also signals that the safe haven that the Internet may have initially
promised for parents fearing for the safety of their children out of doors is something of an illusion, when one cannot know
the identity of the person one's pre-teen daughter is contacting in a chat room. The ease of Internet interaction fuels such
groups (Adam
2005).
With the advent of Web 2.0 and the explosion of interest in social networking sites, many of these messages appear to have
been forgotten. Alongside the many benefits that social networking sites may bring, including a sense of connectedness, keeping
in touch with family members, friends and others, and the social support that one may gain from social networking, there may also
be negative aspects which may affect different social groups differentially (Barnes
2006).
The very group which makes most use of social networking sites and which may well find most benefit from them, namely young
people, may also be the group that suffers disproportionately from negative aspects of these sites. For instance, the question
of
privacy on social networking sites has attracted considerable media attention (Goodstein
2007). Social networking sites actively encourage the sharing of personal, sometimes highly personal, information. Indeed the
raison d’être of social networking sites is the sharing of personal information. The result is that young people seem surprisingly willing
to give up their privacy online without realizing how they may become targets for advertising and marketing or predatory behaviour.
The recent furore regarding the difficulty of removing one's Facebook profile, where it is not enough to ‘unsubscribe’ and one has to delete each file individually, adds to the privacy problem: it is very difficult to remove personal data from the Internet once it has been posted there.
The computer ethics problem described here relates to privacy (see
Chapter 7). Much, although not all, of the problem concerns the way that information, which should be ephemeral, becomes persistent,
if not actually permanent. The potentially unwanted persistence of personal data is, of course, a feature of networked ICTs.
This is why UK data protection law, for example, mandates that personal data should not be held longer than is necessary.
Leaving aside behaviour which is actively criminal or antisocial, it is a feature of youth to be able to get up to things
away from an adult gaze and not to have the results of youthful high spirits follow one into adult life. This is surely part
of growing up, part of constructing one's identity as a young person in a group. It could be argued that people are freely
giving away personal data without coercion and it is natural for young people to do this by using ICTs. But they are doing
so without a full understanding of the implications for their privacy and without an assurance that software suppliers understand
the privacy implications of their software.
Potential pitfalls of social networking sites are not confined to failing to understand the implications of releasing personal
data. Griffiths and Light (
2008) have coined the term
‘antisocial networking’ to describe the scamming, stealing and bullying which can occur in social networking
gaming sites. There is an increasing number of such game sites and their users can be regarded as consumers, as a credit card is required to purchase various virtual commodities necessary to play the game. Griffiths and Light's (
2008) analysis centres around one such game site,
‘Habbo Hotel’, where virtual rooms are furnished with ‘furni’ which can be bought, traded and won in the game. The software
supplier deliberately increases the rarity, hence desirability, of some types of furni by only releasing rare pieces at specific
times. As well as being bought, sold and traded, furniture can, of course, be stolen, signalling one of a range of antisocial
behaviours (Griffiths and Light
2008).
‘Cyberbullying’ is the term coined to describe the bullying of children by children (e.g. see
www.stopcyberbullying.org/).
Complex social interactions are involved in these games, some of which involve manipulation of a market by games producers: desirable commodities are made artificially rare, and antisocial behaviour follows.
Turning to gender identity in relation to ICTs, for almost the whole of the lifetime of the digital computer, there has been
considerable interest in the relative absence of women in computing and IT-related jobs (Grundy
1996). This must be balanced against the historical fact that many of the first human ‘computers’, i.e. the army of workers who
performed mathematical calculations manually or with the aid of a mechanical calculator, were actually women (Grier
2005). As computing and IT developed as a clear career path in the 1970s and 1980s, rather than women being encouraged into computing in greater numbers, the percentage of women studying computing in higher education in many countries actually dropped, from around 25% of total numbers to around 10% in the early 1980s. IT did not represent a new career path, potentially free from old gender stereotypes, but quickly became a job for men.
In many ways this should not be seen as surprising. A number of authors, including Wajcman (
2004) and Cockburn and Ormrod (
1993) argue that technology, or at least prestigious technologies rather than domestic technologies, is related to
masculinity to the extent that technology and masculinity mutually define each other. In other words, definitions of masculinity
are bound up with skilled use of technology, while definitions of technology relate to masculine activity. This is not to
say that there are no ‘feminine’ technologies – think of sewing and knitting. Historically, the production of clothing for
the family was in the hands of women (Cowan
1989), yet sewing and knitting are not usually thought of as technologies, and certainly not as skilled technologies with
the same status as computer programming or engineering. There are no objective reasons why writing a computer program is seen
as more skilled than, say, producing a knitting pattern (and then producing a garment from the pattern). Cookery, in the home,
as a female technology is not regarded as particularly skilled. However the job of the chef, outside the home and usually
male, is regarded as skilled. The attribution of skill appears to have more to do with whether an activity is designated masculine or feminine than with any absolute measure.
It is useful to explore some of the implications of the claim that masculinity is entwined with technological skill and the
ways in which this helps to explain why women may be discouraged from IT and computing education and careers. There have been
a number of campaigns over the years to attract women into computing (Henwood
1993). These have often been coupled with campaigns to attract women into wider areas of science and
engineering. Commentators have been quite critical of such efforts (Henwood
1993), arguing that campaigns which assume that technology is neutral, and in which women have to make all the changes to fit into technological careers, will not succeed unless men can change too. Such critiques can be set alongside research into women's
working lives in IT. There is no doubt that IT and computing represent well-paid, interesting career choices which should
be widely available. However, women can experience difficulties in a masculine workplace and can be marginalized and suffer
pay discrimination (Adam
et al.
2006). Although such issues are not always thought of as part of the agenda for computer ethics, some researchers have cast them
explicitly as computer ethics problems (Turner
1998,
1999). This is useful, as it gives the potential to highlight the inequalities that still remain for women in the IT workplace.
This is another example of an argument against
technological determinism and for an alternative view which considers the mutual definition of technology and society.
If IT and computing workplaces are still problematic in gender terms, it is reasonable to ask whether there are issues to
be addressed relating to the more widespread use of ICTs, given that ICTs and Internet usage have rapidly become pervasive
in many societies. An important issue for computer ethics centres on the question of whether men and women receive equal treatment
in Internet interactions. A considerable literature has developed on this topic (e.g. see bibliography in Adam (
2005)). During the early years of the Internet's growth into a mass communication medium, there was a widespread utopian view
that new technologies would spawn more egalitarian communities (as in early ideas about virtual communities). As outlined
above, this was mirrored in the expectation that men and women would be equal in the IT workplace. Similarly, there was an
assumption that gender relations would be more equal on the Internet. Once again, such views focus on the idea that new technology
is free from old prejudices and that old patterns need not be played out in new technologies. The
democratizing potential of new technologies has been an extraordinarily tenacious myth, but it is a myth based on technological
determinism because it ignores the ways that social relations are already designed into technologies.
In the early 1990s, there was a view that women could be mistresses of the Internet and that the new communications technology
held untold promise for women. This view found particular expression in
‘cyberfeminism’ (Plant
1997). However, cyberfeminism was criticized for not being rooted in women's real experiences, for being insufficiently political,
for being uncritical of the technology on which it was based and, importantly, for being hopelessly utopian (Adam
1998). In the twenty-first century, one rarely hears of cyberfeminism.
At around the same time, research was published indicating that, in certain circumstances, women were not having the positive
experiences that cyberfeminism seemed to promise. Herring's (
1996) research on men's and
women's posts in computer-mediated communication suggested that stereotypical gender relations were being reproduced and even
magnified online, with men more often using an aggressive, hostile style of interaction (termed ‘flaming’), while women were
more likely to use a supportive style. Reports of
‘cyberstalking’ began to appear from the early 1990s. These indicated that the majority of perpetrators are male, while the
majority of victims are female. In the face of such behaviour, it becomes more difficult to maintain the view that the Internet
offers a neutral space in gender terms, let alone the utopian space proclaimed by cyberfeminism.
Assistive technology has played an important part in the way that disability has historically been defined. This is because
technology designed to assist disabled people has often been adapted from technology which was originally designed for those
deemed ‘able-bodied’ (Adam and Kreps
2006). Unfortunately this reinforces a norm of
‘able-bodiedness’ against which disability is regarded as deficiency. The definition of disability, and how this relates to
technology, is an important element in any theory that argues that disability is socially constructed and that society puts
barriers in place that make some people disabled (Shakespeare
2006).
There are tensions between the social construction of disability model and older models of disability. Broadly speaking, the
older models can be characterized in terms of charity and medical models of disability (Fulcher
1989).
The medical model emphasizes impairment as loss, with the deficit seen as belonging to the individual. The professional status
and assumed neutrality of medicine define disability as an individual issue for medical judgement. The
charity model sits alongside this view in assuming that disabled individuals are to be the objects of pity and require charity
rather than necessarily having a set of rights within the welfare state and within government policy (Goggin and Newell
2000). However, a potentially more radical approach is offered by the
social construction of disability model, which emphasizes that locating disability in the individual as opposed to society
is a political decision.
Appropriate technology, and how it is used, is an integral part of the social model of disability. Indeed the social model
argues that disability can be created by designing technology in such a way that some people cannot use it. However, there
are tensions. As Goggin and Newell (
2000, p. 128) note: ‘Disability can thus be viewed as a constructed socio-political space, which is determined by dominant norms,
the values found in technological systems, and their social context.’ They argue that research has focused on analysis of
particular types of impairment with the development of technical
solutions specifically designed to address them. This reflects the dominant medical paradigm of disability (2000, p. 132).
In other words, the dominant view is that there should be an individual technical solution for a specific impairment. As Moser
(
2006, p. 373) contends, technologies are strongly implicated in reinforcing what is taken to be ‘normal’, particularly when an
assistive technology is designed against a norm of able-bodiedness. ‘Technologies working within an order of the normal are
implicated in the (re)production of the asymmetries they . . . seek to undo.’
There are important ways in which the story of assistive technology relates to technological determinism. If the trajectory
of technologies designed for ‘normal’ people is taken for granted, this can be cast as a determinist view, which assumes that
technologies for disabled people will always be designed in terms of a norm of the non-disabled. Indeed, Goggin and Newell (
2006, p. 310) argue that much work on disability and ICT proceeds ‘as if it were “business as usual”, in replicating charity,
medical and other oppressive discourses of disability’. They argue that this maintenance of the
status quo goes against the grain, not only of newer work on critical disability studies, but also research in science and technology
studies which describes the way that technology and society mutually define each other. They suggest that notions of identity,
the body, disability and dependence are changing rapidly. ICTs are involved in understanding these changes and the new ways
of living based upon them, yet the area is under-researched, certainly in terms of disability.
Many governments regard connection to the
Internet as a way of achieving social inclusion, although this view, in itself, can be seen as determinist, as it assumes
that bridging the so-called
‘digital divide’, or the divide between those who have access to ICTs and those who do not, is a fairly uncomplicated question
of getting people connected to digital technologies (Adam and Kreps
2006). Nevertheless, access to ICTs is crucial for taking advantage of a wide range of goods and services (including political
and educational activities) as a consumer and a citizen. If there are barriers then ICTs will fail to increase social inclusion,
at least in terms of including disabled users. Dobransky and Hargittai (
2006) argue that there is a
‘disability divide’ on the Internet. Drawing on US data, their findings suggest that disabled people are less likely to live
in households with a computer, less likely to use computers and less likely to be online. However, when socio-economic background
is controlled for, people with hearing and walking disabilities turn out to use ICTs as much as the non-disabled
population (Dobransky and Hargittai
2006, p. 313).
More specifically, research into web accessibility suggests that much of the World Wide Web remains inaccessible to people
across a wide range of disabilities (Kreps and Adam
2006, Adam and Kreps
2006). This situation prevails, despite disability legislation, in many countries including the UK, USA and Australia. Such legislation
clearly mandates that websites must be accessible. There have been attempts to regulate the Web and to produce standards
for website design, to ensure accessibility. The
World Wide Web Consortium (W3C) was established as a standards-making body for the rapidly growing World Wide Web (W3C
2004). Part of its remit was the
Web Accessibility Initiative (WAI), which published a set of
Web Content Accessibility Guidelines (WCAG) in 1999. These guidelines are intended to inform the creation of web pages accessible to all, regardless of disability.
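To give a concrete flavour of what such guidelines ask of page authors, and of the limits of checking compliance automatically, the short sketch below (in Python, using only the standard library) scans a fragment of HTML for images that lack a text alternative, one of the best-known WCAG checkpoints. It is purely illustrative and is not part of any WAI tooling; the class name and the sample markup are invented for the example, and, as the comments note, passing such a check is no guarantee of real accessibility.

# Minimal illustrative sketch (not WAI tooling): flag <img> elements with no alt
# attribute. Passing this check does not guarantee accessibility; alt="" or
# alt="image" would satisfy it while telling a screen-reader user nothing useful.
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    """Collect img elements with missing or unhelpful alt attributes."""
    def __init__(self):
        super().__init__()
        self.missing = []   # images with no alt attribute at all
        self.suspect = []   # images whose alt text is unlikely to help anyone

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "<unknown>")
        if "alt" not in attrs:
            self.missing.append(src)
        elif (attrs["alt"] or "").strip().lower() in ("", "image", "picture", "photo"):
            self.suspect.append(src)

# Hypothetical page fragment, invented purely for illustration.
sample_html = """
<p>Our branches</p>
<img src="map.png">
<img src="logo.png" alt="image">
<img src="chart.png" alt="Bar chart of complaints received, 2005 to 2007">
"""

checker = ImgAltChecker()
checker.feed(sample_html)
print("No alt attribute:", checker.missing)    # ['map.png']
print("Unhelpful alt text:", checker.suspect)  # ['logo.png']

An automated check of this kind can only confirm that an alt attribute exists; whether the text it contains would mean anything to a screen-reader user is exactly the sort of question that, as discussed below, only user-focused research can answer.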
While the will towards making the Web accessible is clearly important, the story is complex and reflects the interests of
many different groups. Meanwhile, much of the Web remains inaccessible, so that web accessibility guidelines could fall into the ‘empty shell’ trap described above.
Standardization, and who is involved in making the standards, is important, particularly when we consider that people who
may be seriously affected by a particular set of standards may not be involved in setting them. Consider, for instance, Stienstra's
(
2006) case study of the
Canadian Standards Association (CSA) relating to accessibility standards. The CSA acts as a neutral third party in its involvement
of stakeholder groups, balancing representation between the competing interests of users, producers and government. Nevertheless,
Stienstra argues that ‘the standards system in Canada privileges the voices of industry while creating a discourse of public
accountability and corporate social responsibility’. Furthermore, she argues that the development of industry standards is
always a way of oiling the wheels of the market, strengthening it rather than challenging it, acting as a key determinant
in economic competitiveness. Standards are then key to ‘market-perfecting’ (Stienstra
2006, p. 343).
It is difficult to see how the W3C and WAI can escape the kinds of criticisms that Stienstra (
2006) makes of the CSA, a body which has at least made serious attempts to be inclusive in its membership; by contrast, there is little evidence that the WAI has attempted to be inclusive. For instance, Boscarol (
2006) claims that the WCAG Working Group did not publish information about what user-focused research its members used to create
WCAG 1.0, the first set of published guidelines. He also contends that discussion revolves round technical points rather than
real-world behaviour, which can only be captured by user-focused research.
While Boscarol argues for less technical discussion and more real-world research, Clark (
2006), a well-known critic of WAI activities, points to the corporate interests involved in the making of web accessibility guidelines.
The new guidelines (WCAG 2.0) are designed to apply generally (not just to HTML), but they are hugely complicated and difficult
to apply (Clark
2006, p. 3). Paradoxically, it is possible to write an accessible site that would fail accessibility guidelines, but, at the same
time, it is possible to produce a website which would adhere to many of the guidelines, despite being inaccessible to many
users. Following the criticisms of Boscarol (
2006) and Clark (
2006), it is difficult to discern much involvement of disability groups in the production of these increasingly unwieldy web accessibility
guidelines. The membership of WCAG WG reflects the interests of large corporations, which are tacitly
adopting something similar to a medical model of disability, by assuming that they know best how to design accessible websites
without involving disabled users. In itself, this is a form of technological determinism, as it assumes the technology is
neutral and can be ‘fixed’ by technological means to suit a group of users without acknowledging how disability is defined
and made through the use of technology.