8

Epilogue

Impact of Language Technology

In this book, we have explored real-life applications involving language and what is needed to make them work, from investigating writers’ aids to addressing how computers can be relevant in learning a foreign language, from tools supporting effective search in structured and unstructured data to several applications that involve sorting of documents into different classes, on to applications supporting human–computer dialogue and machine translation. While we have now given you a good grasp of how computers deal with language and the concepts needed to understand what is going on, in this epilogue we want to wrap things up by asking what happens when such language technology (LT) is actually used. In other words, we want to raise some questions about the impact of computers and language technology on society and our self-perception.

Let us start by considering an example of how language technology can change jobs. When you call an airline to book a flight and find yourself talking to the speech recognizer front-end of a dialog system, what do you think happened to the people who used to answer the phone? There are essentially two options. On the one hand, some of the people in the call center were no longer needed after the automated flight-booking system was introduced and therefore lost their jobs. On the other, some of the employees may have received or sought out additional training to learn how to deal with the difficult cases that the automated system cannot handle – and with those customers who learned to press the right button to reach a human because they cannot or do not want to deal with an automated system.

The first of these options is an instance of deskilling of the workforce. Deskilling is a broad term characterizing the process of dividing up tasks so that each worker will need fewer skills in order to complete their part. Workers needing fewer skills can then be paid less, and some subtasks can be automated so that those workers are no longer needed.

The alternative is known as upskilling. The idea, as you probably can guess, is that mechanization, the introduction of a machine or automated process, takes over some of the menial work, but requires workers to acquire new skills to be able to use that machine. With the menial work out of the way, workers then are free to spend more of their time on complex and conceptual tasks beyond the scope of the technology. A good example of upskilling in the context of language technology is mentioned in Doug Arnold and colleagues’ Machine Translation book (http://purl.org/lang-and-comp/mtbook). The example involves one of the early success stories in machine translation, which you may remember from Section 7.9.1, the METEO system. It was designed to translate Canadian weather reports from English into French and vice versa. After METEO was introduced in the early 1980s, it was no longer necessary to translate all weather bulletins by hand. Instead, the job of the translators became checking translations and trying to find ways to improve the system output. This significantly increased the job satisfaction of human translators in the Canadian Meteorological Center, as was evident from the greatly reduced turnover in translation staff at the Center.

While deskilling and upskilling here are characterized in terms of their impact on the workers, another relevant perspective to consider is how the nature and quality of the task change when it is automated. The overall performance on a task often depends on a combination of automated work and human intervention, and the division of labor between the two may lead to unintended results. When a dialog system replaces a human answering the phone, the dialog system may be able to carry out the core functions of routing calls, but it naturally lacks the human ability to react to unexpected requests, calm down frustrated customers, explain why a given person is not reachable and when they will be back, or simply serve as an interlocutor for a lonely caller.

Consider the introduction of spell checkers, discussed in Chapter 2. Such writers’ aids in principle free the writer from worrying about the form of what they write, so that they can spend more time on the conceptual side of what they want to express – an instance of upskilling. At the same time, a study at the University of Pittsburgh showed that students using spell checkers made more errors than those writing and correcting by hand. So the nature and quality of a task may change significantly when it is being automated by language technology – which also brings us back to the dialog system example from the beginning of this section, providing a partial explanation for why people may at times try to opt out of such automated systems.
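As a reminder of what such a writers’ aid does under the hood, the following is a minimal sketch of dictionary-based spell checking using edit distance. The tiny dictionary and the distance-1 suggestion rule are illustrative assumptions for this sketch, not the method of any particular spell checker; Chapter 2 discusses real approaches.

```python
# Minimal illustrative spell checker: flags words missing from a tiny
# dictionary and suggests entries within Levenshtein edit distance 1.
DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(word: str) -> list:
    """Return dictionary words within edit distance 1 of a misspelling."""
    if word in DICTIONARY:
        return []
    return sorted(w for w in DICTIONARY if edit_distance(word, w) <= 1)

print(suggest("quik"))  # ['quick']
```

Note that such a sketch catches only non-word errors; real-word errors (“there” for “their”) require context, which is part of why automated correction changes rather than eliminates the writer’s proofreading task.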

A related issue to consider is how we react to computers that can speak and listen to language. For this we need to take a step back and think more broadly about what is involved in using language and how it reflects our identity. Using language is more than merely conveying information and voicing requests. We have many options for expressing a given message, from the particular words and expressions we choose, to the way we pronounce them, on to how directly or indirectly we convey a message. This was particularly apparent in our discussion of dialogue and conversational maxims in Section 6.5.3. Importantly, by making these choices, we can convey that we belong to a particular group and many other aspects of our identity. The question of how the specific use of language relates to questions of individual and social identity is studied in sociolinguistics, a subfield of linguistics that is closely related to anthropology and sociology. While we can only scratch the surface of those issues here, they clearly play an important role in how we react to computers that are equipped with language technology so that they can produce and react to language.

For example, we may ask in which situations it is easier to talk to a computer than to a human. Given that humans generally do a much better job of recognizing and interpreting spoken language than current automatic speech-recognition technology, you may think that the answer is obvious. But think about other aspects of using language than the quality of the speech recognition. When you are trying to find out whether your account is overdrawn, would you prefer to ask the teller in your local bank branch or a machine? A similar situation arises in language teaching, the application context we discussed in Chapter 3. Language teachers often report that some students practice extensively when they are given the opportunity to receive immediate, computer-based feedback on pronunciation and other computer-based exercises; yet those same students shy away from practicing with the teacher to avoid loss of face by making a mistake.

Taking the relation between language use and our social and individual identity one step further, the ability to use language is often mentioned as an important characteristic that sets humans apart from animals. Of course, animals do make use of communication systems, even rather complex ones. But human languages differ from animal communication systems in several characteristic ways. First, the relationship between the form of words (how they are pronounced and written) and what these words mean in general is arbitrary. By arbitrariness of the form–meaning relationship, we mean that there is nothing inherent in, for example, the three-letter sequence c, a, and r that predestines the English word “car” to be used to refer to automobiles – as also supported by the existence of words like “automobile” and “vehicle”, which are different forms with the same or similar meaning. What a word means must be learned, just like other aspects of a language. Human languages are learned by cultural transmission: every child acquires the language anew from the environment around them. Unlike animal calls and communication systems such as the bee dance communicating the direction and distance where food was found, human languages are not inherited genetically. But many researchers believe that humans have a genetic predisposition for learning human languages, an innate language faculty that ultimately is the reason human languages cannot be learned by other animals. All human languages also make use of discrete units, such as words and phrases, which can be combined to create complex units, such as sentences. This productivity of a language makes it possible to produce an unlimited number of new sentences. Crucially, it is possible to understand the meaning of those new sentences by understanding the meaning of the discrete parts and how they are combined – the so-called compositionality of the meaning.
Complementing the ability to compose new sentences to express new meanings when the need arises, human languages support displacement. This means that it is possible to use language to talk about ideas and aspects that are not in the immediate environment, which may have happened a long time ago, are assumed to happen in the future, or are not likely to happen at all. And finally, it is possible to talk about language using language, which is referred to as the metalinguistic function of language. While some of these properties can be found in some animal communication systems, the full set of properties seems to be realized by human languages alone.

Considering that language is so tightly connected to what it means to be human, what does it mean for the way we see ourselves, our self-perception, when computers now can use language to interact with us? There indeed seems to be some evidence that assumptions we make about every speaker of a language are transferred onto computers that use a language-based interface. This is particularly apparent in reactions to dialog systems such as the chatbot Eliza we discussed in Section 6.7. Eliza’s creator Joseph Weizenbaum was astonished by the reactions of some of the staff at the MIT AI lab when he released the tool. Secretaries and administrators spent hours using Eliza and were essentially treating it like a human therapist, revealing their personal problems. Weizenbaum was particularly alarmed by the fact that Eliza’s users showed signs of believing that the simple chatbot really understood their problems. The language-based interaction was apparently natural enough for people to attribute a human nature to the machine and to take it seriously as an interlocutor. Based on this experience, Weizenbaum started to question the implications of such dialog systems and artificial intelligence in general, which in 1976 resulted in Computer Power and Human Reason, an early influential book questioning the role of computers and their impact on human self-perception.

Weizenbaum’s critical perspective on computer technology and its impact also points to another relevant distinction, which is nicely captured in one of our favorite quotes from Jurassic Park: “your scientists were so preoccupied with whether they could that they didn’t stop to think if they should” (http://purl.org/lang-and-comp/jurassic). Language technology opens up a range of opportunities for new applications and new uses for existing ones – and it is far from trivial to evaluate the ethical impact of these opportunities. Document classification as discussed in Chapter 5, for example, can be used for anything from spam filtering to opinion mining, and more generally opens up the opportunity to classify and monitor huge sets of documents based on their contents. Governments and companies can monitor opinions expressed in emails, discussion boards, blogs, the web, phone conversations, and videos – we clearly live in an age where an unbelievable amount of information is available in the form of written or spoken language.
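To make the opinion-mining scenario concrete, here is a minimal sketch of a word-list-based sentiment classifier in the spirit of the document classification techniques of Chapter 5. The word lists and the simple counting rule are illustrative assumptions; real systems use trained statistical classifiers over much richer features.

```python
# Minimal illustrative opinion classifier: counts matches against small
# hand-picked word lists and compares the totals. A real system would be
# trained on labeled documents rather than relying on fixed lists.
POSITIVE = {"great", "love", "excellent", "useful", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "broken", "disappointed"}

def classify_opinion(text: str) -> str:
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_opinion("I love the new battery, excellent!"))  # positive
print(classify_opinion("The screen is terrible and broken."))  # negative
```

Even this toy sketch makes the ethical point tangible: run over millions of posts, the same few lines of logic become a monitoring instrument, regardless of whether the goal is product improvement or surveillance.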

On this basis, governments can detect extremist opinions and possibly identify terrorist threats – yet they can also use language technology to automatically censor the documents available to their people or to monitor and crack down on opposition movements. Using opinion mining, companies can determine which aspects of their new products are discussed positively or negatively in discussion boards on the web, and improve the products accordingly. Or they can send targeted ads to customers based on automatic analysis of their profiles and blogs – which can be useful but is also scarily like Big Brother in George Orwell’s book 1984. Should a company be required to respect the level of privacy that its individual users are comfortable with by giving them a choice to keep their data truly private?

The new opportunities opened up by language technology – providing an unprecedented ability to classify and search effectively in vast repositories of documents and speech – thereby raise important ethical questions.

Checklist

After reading the chapter you should be able to:

Further reading

A good starting point for going deeper into some of the topics discussed in this chapter is Language Files (Mihaliček and Wilson, 2011, http://purl.org/lang-and-comp/languagefiles). In the 11th edition, Chapter 10 talks about language variation related to regional, social, and individual identity, and Chapter 14 discusses animal communication systems and how human languages differ from those.

   If you want to learn more about the criteria distinguishing human language from animal communication systems, the classic article by Charles Hockett in Scientific American (Hockett, 1960) works them out in detail. For a vivid picture of how this issue has been explored, there are several interesting experiments that tried to teach great apes communication systems that satisfy the properties of human languages discussed in this chapter. Two well-known case studies involve Koko the gorilla and Nim Chimpsky the chimpanzee, both of whom were raised in human families and taught American Sign Language (ASL). The results are difficult to interpret, but are generally taken to support the view that animals cannot fully acquire a human language such as ASL. There is a wealth of material on those studies, including “A conversation with Koko” available from PBS online (http://purl.org/lang-and-comp/koko) and a 2011 documentary about Nim Chimpsky entitled “Project Nim” (http://purl.org/lang-and-comp/nim). Two of the original articles in Science (Terrace et al., 1979; Patterson, 1981) are useful for anyone interested in a deeper understanding of the issues involved.

   For a deeper look at issues of identity in the context of technology, Sherry Turkle’s Life on the Screen: Identity in the Age of the Internet (Turkle, 1995) is a classic discussion of the impact of the internet and online games on human identity and self-perception. A discussion of the broader context of the deskilling/upskilling debate can be found in Heisig (2009).

   Finally, the University of Pittsburgh study on the (non)effectiveness of spell checking technology mentioned in the context of how the nature and quality of tasks are changed by automation can be found in Galletta et al. (2005).