Chapter 15

Talking about Language Perception and Production

In This Chapter

arrow Looking into how reading works

arrow Producing sentences correctly

arrow Understanding language problems

People use language every day, but few understand the complex structures and processes that their brains use as they produce or comprehend language. Cognitive psychologists are interested in understanding the brain processes occurring under the surface, unavailable to conscious introspection.

In this chapter, we look at the physical side of language – how people read, speak and hear language. We cover the problems that the brain has to overcome to use language, as well as some of the ways in which language production and perception can go wrong.

Cognitive neuropsychology (see Chapter 1) supports the idea that different parts of the brain handle different aspects of language. As we describe, patients with certain specific damage to the language-related areas of their brain can display surprisingly selective problems in their use of language. But although the brain may be modular in this sense, other evidence supports the idea of a complex interplay between these modules.

Decoding the Art of Reading

Learning to read is quite a different skill from learning to speak. A typically developing child learns to speak her native language without any special training, and all cultures had spoken language long before the introduction of formal education. So learning to speak seems to be a natural process.

Learning to read is a different matter, however, and doesn’t come as naturally as learning to speak. Historically, reading and writing developed long after humans had been speaking for many generations.

In this section, we discuss the alphabetic principle that English uses, provide insights into teaching reading, describe some fascinating experiments that show how you read in practice, and talk you through the process of the brain coming up with a word.

Reading from A to Z: Alphabetic principle

Writing systems have tended to evolve towards representing the sounds of spoken languages rather than their meanings (for some background, read the nearby sidebar ‘Writing systems through history’).

jargonbuster The alphabetic principle refers to the way that alphabetic writing systems map a small set of visual symbols (letters) onto a small set of sounds (phonemes). This system is more efficient than learning separate symbols for the much larger numbers of morphemes or syllables in a language. But alphabetic languages aren’t easy to learn due to two main problems. (We talk more about phonemes and morphemes in Chapter 14.)

Abstract phonemes

The first problem is that phonemes are abstract. Although they correspond to basic speech sounds, the same phoneme can sound quite different depending on the context in which it occurs. So the child has to learn that the letter ‘t’ corresponds to the phoneme /t/ (the slashes are the standard notation for speech sounds), which can occur in many contexts.

trythis Look in the mirror and watch the shape your mouth makes as you form the /t/ sound in these different words: ‘tax’, ‘butter’, ‘smart’, ‘test’. The same /t/ phoneme corresponds to a wide range of differing mouth shapes and sounds.

realworld In certain accents, such as those around London and Essex, UK, people often don’t pronounce the /t/ sound in ‘butter’ at all. Instead they use what’s known as a glottal stop: they pause and don’t say anything where the /t/ sound would occur (for example, they say something like ‘buh-err’).

tip Interestingly, the brain tends to fill in this missing sound so that it appears to be present (the phoneme restoration effect). This phenomenon explains how you can hear sounds even though they’re missing or obscured – and so you can follow what’s going on in TV’s TOWIE! This effect is an example of top-down processing, where your brain uses existing knowledge to fill in gaps in your perceptual input.

Writing lags behind sounds

A second problem when learning most alphabetic languages is that they have fewer written vowels than vowel phonemes (Turkish is a notable exception). Consider how different the letter ‘a’ sounds in ‘saw’, ‘cat’ and ‘make’. Plus, a single phoneme can be represented in several ways, using different letters or pairs of letters. For example, the /oo/ sound is a single phoneme but it’s written quite differently in the words ‘moo’, ‘blue’ and ‘chew’.

jargonbuster English is difficult in this respect, partly because of its rich and complicated history. Written English has a deep orthography (that is, the spelling-sound correspondence is quite low). Compare Turkish, where each letter is always pronounced the same way: a much simpler and more regular mapping between the spoken phonemes and the written letters, known as a shallow orthography.

Teaching reading

Cognitive psychologists have had a major influence on educational policy through their science-based advice in various influential reports.

Before they start learning to read, most children have already developed a reasonable level of spoken fluency and vocabulary. They acquire the basic phoneme sounds of their language in the first couple of years, but the intricacies of the system continue to develop into the school years. Aside from learning spoken language before written language, children also find learning the spoken language much more natural: you don’t need to teach children to speak in the same way that you have to teach them to read.

remember Reading involves two basic skills:

  • Decoding: The ability to recognise words in their written form.
  • Comprehension: Understanding the language and the meaning, which is the same for spoken and written language.

People should certainly be encouraged to develop their comprehension of real language, but the crucial step in learning to read is decoding.

remember Cognitive psychology has a lot to say about the details of learning to decode. The recommendations boil down to two main points:

  • Learning to read involves learning how the written symbols represent the spoken language the child already knows.
  • The child must discover this relationship herself or be taught it explicitly.

An interesting difference also exists in how readers use context: poor readers use the context of a story to help them decode words they may otherwise have trouble recognising, whereas skilled readers can recognise words out of context and use the context for the higher-level goal of comprehending the text.

Seeing how you read

remember As you read this book, you may think that your eyes are moving smoothly from left to right across each line, but no: the sensation of smoothly reading a line of text is an illusion. If you watch other people’s eyes as they read, you can see that they move their eyes in a series of jumps across the page: they piece together the text from a series of brief glimpses. When you start reading, you pick a spot near the start of the first line and look at it for an instant. You then move your eyes rapidly to the right to a new spot where you pause again before jumping to a third place farther along the line.

jargonbuster Of course, psychologists use jargon to help describe this process!

  • Fixation: Each instance of looking at part of the text. Fixations last for about a quarter of a second and enable you to read one or more words by bringing them into your central vision, where you can perceive detail. Fixations tend to occur just to the left of the middle of words, and short words often aren’t fixated at all.
  • Saccades: Jumps between fixations. Saccades are much shorter (about a tenth of a second) and so fast that you take in no visual information during them. Over 10 per cent of saccades are backwards or regressive. Also, you need to decide where the next fixation is before you initiate the saccade, suggesting a certain amount of forward planning.

tip When people stare at a point in a piece of text they can see only a few letters either side of the fixation point in detail, although about 20 letters either side can be seen in a less detailed form.

We know this because of some clever experimental designs pioneered by the late cognitive psychologist Keith Rayner.

Moving window studies

In a moving window study of reading, researchers alter text dynamically as a person is reading so that only letters in the immediate vicinity of the fixation point are shown normally, and letters in peripheral vision are obscured in some way (some studies blur the letters, others replace them with Xs).

For example, if a person is reading the sentence ‘the quick brown fox jumped over the lazy dog’ and they’re fixating on the word ‘fox’, the sentence may be displayed as follows:

xxx xxxxx xrown fox jumpxx xxxx xxx xxxx xxx

As their eyes move, the window of visible letters moves with them. With this approach, psychologists can vary the size of the window while participants read through lots of sentences, and so study how reading changes with window size. When the window is very short and only a few letters are shown, people have great difficulty reading normally; but as the window size increases, reading improves. When the window is sufficiently wide, a person’s reading speed is the same as without a window.
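
technicalstuff If you’re curious how such a display is produced, here’s a minimal Python sketch. The masking rule (spaces stay visible, letters outside the window become ‘x’) and the window size are our own illustrative choices rather than the parameters of any particular study:

  def moving_window(text, fixation, window):
      """Mask every letter more than `window` characters from the
      fixated position; spaces stay visible so word boundaries remain."""
      return ''.join(
          ch if ch == ' ' or abs(i - fixation) <= window else 'x'
          for i, ch in enumerate(text)
      )

  sentence = 'the quick brown fox jumped over the lazy dog'
  # Fixate the 'o' of 'fox' (character 17) with a 6-letter window;
  # this reproduces the masked display shown above:
  print(moving_window(sentence, fixation=17, window=6))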

tip With this approach, psychologists can estimate how many letters people take in with any one fixation while reading. The answer is about 15 letters (for adults); the average saccade jumps ahead by slightly over half that length.

Boundary studies

A boundary study uses eye-tracking to change what’s displayed depending on where the reader is looking. In this case, researchers define some point in a sentence that acts as a boundary. When the reader’s eye movements cross that boundary, it triggers a change in the display. For example, participants read a sentence and a key word later in the sentence changes to a different word when their eyes cross a particular boundary.

This technique reveals how information to the right of the fixation point affects reading. Results show that although readers aren’t yet fixating on some of the words to the right, those words can influence their reading. This finding indicates some form of parafoveal processing: readers do process words outside the centre of vision to a certain degree. Specifically, the visual form and sound of an upcoming word are processed before you fixate it, but its meaning isn’t.
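
technicalstuff Here’s a minimal sketch of the display logic behind a boundary study, assuming the eye-tracker reports gaze position as a character index. The sentences, the preview word ‘rows’ and the boundary position are hypothetical examples, not taken from a real study:

  def boundary_display(preview_text, target_text, boundary, gaze_x):
      """Show the preview until gaze crosses the boundary, then swap
      in the target; readers rarely notice the change."""
      return target_text if gaze_x > boundary else preview_text

  preview = 'She watered the rows in her garden'
  target = 'She watered the rose in her garden'

  for gaze_x in (5, 12, 18, 25):   # simulated gaze samples
      print(gaze_x, boundary_display(preview, target, 15, gaze_x))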

Looking up words in the brain

jargonbuster Eye-trackers and the clever experimental designs from the preceding section give cognitive psychologists a good insight into the visual processing that occurs during reading. But seeing a word is just the first step: people then have to match it against their stored memories of words, a process called lexical access. Psychologists can’t study this process directly; instead, they infer the processes going on based on measurable quantities, such as how long a person takes to respond to a question about what she’s reading.

tip Psychologists have found that when you hear a word that has several meanings, your brain activates all the meanings at first. But after a very short time all but the appropriate meaning are suppressed or shut down, leaving only the correct meaning active (see Chapter 14 for more).

Lexical decision task

The lexical decision task is used to measure a person’s reaction time to different words. The participant sits at a computer and strings of letters are flashed up on the screen: the person has to press one of two buttons quickly to indicate whether the string of letters is a proper word or not. For example, if she sees ‘elephant’, she presses the left button to indicate ‘yes’; if she sees ‘ephantel’, she presses the right button to indicate ‘no’.

In this way, psychologists can measure differences in how long it takes to access different types of word: on average, words take longer to access if they’re more unusual, and so people respond more quickly to words that tend to occur frequently in everyday language.
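
trythis You can get a feel for the task with this toy Python version, which runs in a terminal. Real experiments control display timing precisely and use dedicated response buttons; here we simply time a keypress, and the word list is made up:

  import random
  import time

  trials = [('elephant', True), ('ephantel', False),
            ('doctor', True), ('rotodoc', False)]
  random.shuffle(trials)

  for letters, is_word in trials:
      start = time.perf_counter()
      answer = input(f'{letters} - word? (y/n): ').strip().lower()
      rt_ms = (time.perf_counter() - start) * 1000
      correct = (answer == 'y') == is_word
      print(f'  {"correct" if correct else "wrong"} in {rt_ms:.0f} ms')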

remember Usually, cognitive psychology experiments measure quite small effects that are difficult to detect, which is why psychologists don’t just test something once but test it repeatedly, and measure average results.

Priming

jargonbuster Priming is the basic lexical decision task taken a step further. It refers to the fact that people tend to respond more quickly to a word if it’s preceded by a related word. If you see the word ‘doctor’, you hit the ‘yes’ button more quickly if the preceding word is ‘hospital’ than if it’s ‘elephant’.

technicalstuff Priming is a well-established phenomenon, but it’s hard to detect. Although people tend to respond faster to a ‘primed’ word, the difference is very small – maybe only 20 milliseconds. Therefore, psychologists can only reliably detect it by measuring it lots of times with lots of words and averaging the times for primed and unprimed words.
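
technicalstuff This little simulation shows why the averaging matters. We assume (purely for illustration) a true priming effect of 20 milliseconds buried in trial-to-trial noise of about 100 milliseconds; with a handful of trials the estimate is all over the place, but with thousands it settles near the true value:

  import random

  def simulated_rt(primed):
      mean = 530.0 if primed else 550.0   # hypothetical means, in ms
      return random.gauss(mean, 100.0)    # one noisy trial

  for n in (10, 10000):
      primed = sum(simulated_rt(True) for _ in range(n)) / n
      unprimed = sum(simulated_rt(False) for _ in range(n)) / n
      print(f'{n:6d} trials per condition: '
            f'estimated effect = {unprimed - primed:6.1f} ms')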

tip Priming is usually explained in terms of a network model of the brain, in which words that are related in meaning are connected: when one is activated, it sends a little wave of activation to all its relations so that they all become slightly more active (see Chapter 11).
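
technicalstuff Here’s a minimal sketch of spreading activation. The links between words and the fraction of activation that spreads are invented for illustration; real network models are far richer:

  network = {
      'hospital': ['doctor', 'nurse', 'ward'],
      'doctor': ['hospital', 'nurse', 'patient'],
      'elephant': ['trunk', 'zoo', 'grey'],
  }

  def activate(word, spread=0.5):
      """Fully activate `word` and pass partial activation one step
      along its links."""
      activation = {word: 1.0}
      for neighbour in network.get(word, []):
          activation[neighbour] = spread
      return activation

  print(activate('hospital'))   # 'doctor' gets a head start...
  print(activate('elephant'))   # ...but not after 'elephant'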

Cross-modal priming

When you read the sentence, ‘When the performance ended the audience rose to their feet and applauded’, what do you think happens in your mental lexicon with the word ‘rose’? In this context ‘rose’ is a verb meaning to stand up, but it can also mean a type of flower. So do you consider both meanings – or just the correct one for this sentence?

You can’t answer this question just by thinking to yourself: the processes involved are quick and occur beneath the level of your conscious awareness.

jargonbuster An ingenious way of testing this issue depends on an effect called cross-modal priming. Modal refers to the type of presentation mode (such as whether you hear a word or see it) and so cross-modal priming refers to the fact that a word presented in one mode can prime a word presented in another mode. If you hear the word ‘hospital’, you respond more quickly to a visual presentation of the word ‘doctor’.

In a cross-modal priming experiment, psychologists can study lexical access in online processing: that is, they can see how words are accessed while a person is in the middle of hearing a sentence. Here’s how it works:

  1. A participant sits at a computer wearing headphones and performs a lexical decision task. She looks at the strings of letters flashed on the screen and presses one button if the string is a word and a different button if a string is a non-word. Reaction times are measured.
  2. The psychologists play various sentences over the headphones while the person’s doing this task. She doesn’t have to do anything specific with these sentences: psychologists are interested in how the words she hears affect her reaction time to the words she sees.
  3. The psychologists time the presentation of words on the screen precisely to match the words being listened to. For example, they may present the word ‘flower’ on the screen 50, 100 or 200 milliseconds after the word ‘rose’ has been heard.
  4. The psychologists vary the time between the prime and the target. Their aim is to see how this delay affects the time taken to make a lexical decision (the sketch after these steps mocks up the design).
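
technicalstuff Here’s a Python mock-up of that trial structure. Each trial pairs a spoken prime with a visual target at a given delay (the stimulus onset asynchrony, or SOA); the words and SOA values are illustrative:

  from dataclasses import dataclass

  @dataclass
  class Trial:
      spoken_prime: str    # word heard over the headphones
      visual_target: str   # letter string flashed on the screen
      soa_ms: int          # delay from prime onset to target onset

  trials = [Trial('rose', target, soa)
            for target in ('flower', 'elephant')   # related vs control
            for soa in (50, 100, 200)]

  for t in trials:
      print(f'hear "{t.spoken_prime}", then after {t.soa_ms} ms '
            f'show "{t.visual_target}"')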

remember This cunning design allows precise measurements. Using lots of statistics, researchers have worked out what happens when people encounter an ambiguous word in a specific context. For a very short time after it’s heard, the word ‘rose’ primes the inappropriate meaning ‘flower’, but the effect very quickly disappears. On the other hand, if the researchers present ‘rose’ in a context such as, ‘For Valentine’s Day I gave my sweetheart a lovely red rose’, the priming of ‘flower’ continues for much longer.

Putting Together Coherent Sentences

People produce sentences without conscious effort, and cognitive psychologists are interested in the mechanisms involved. For example, when examining why stutterers ‘get stuck’ at the starts of words, most cognitive psychological models of speech production look at the mechanics rather than the emotions (even though, undoubtedly, stress and anxiety can make matters worse by imposing a greater load on the brain, which makes it harder for the speaker to attend to the task of speaking).

In this section, we discuss the difficulties of researching this area and some models that have, to an extent, overcome them.

Producing a sentence

When psychologists study language perception, they typically set people tasks in which they have to identify words or sentences under various different conditions. But studying the process of language production is hard under laboratory conditions. For example, some of the most compelling evidence about the processes involved in speech production comes from the study of speech errors and these tend to occur in normal conversation when people mix up their words. Eliciting such naturalistic errors under laboratory conditions is difficult. Instead, most studies use a less-controlled diary method, where researchers simply keep a note of every speech error they encounter in their everyday life.

remember This kind of approach is perhaps less reliable than a lab-based study and may be prone to selective bias on the part of researchers (in what they notice or choose to record). Despite these limitations, some interesting patterns have emerged that reveal a lot about the complex sequence of processes that takes place when people produce a sentence.

Looking at models of sentence production

Merrill Garrett used a diary approach to gather speech errors. He used the patterns of observed speech errors in designing a model of speech production that attempts to account for the series of separate processes involved when people speak.

remember In Garrett’s model, a sentence passes through the following three levels of representation before you speak:

  • Message level: A representation of the meaning that you want to convey. If you’re a bilingual speaker, this level may be the same whichever language you want to express yourself in. Bilingual speakers often switch language mid-sentence without altering the meaning of the original message.
  • Sentence level: You take the message to express and choose the particular words and grammatical forms you want to use to express it. Two separate processes exist here: the functional and the positional levels.
  • Articulatory level: You take your constructed sentence and work out the precise sequence of articulatory processes needed to say it. A complex mix of muscle movements is required to produce the sounds.

jargonbuster Garrett’s model proposed that the sentence level involves the construction of a functional frame containing the function morphemes (small words or parts of words that carry the grammatical structure of the sentence). This frame contains slots into which are inserted the lexical morphemes that carry the semantic payload of the sentence. For example, Garrett reports this error: ‘How many pies does it take to make an apple?’ Here, ‘pie’ has slotted into the frame ‘How many ___-s’ and ‘apple’ has slotted into the frame ‘a ___’.
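
technicalstuff A toy bit of Python makes the frame idea concrete. The grammatical morphemes (the plural ‘-s’ and the article ‘a’/‘an’) belong to the frame and adapt to whichever lexical item lands in each slot; this representation is our own simplification, not Garrett’s notation:

  def article(noun):
      # The frame supplies the article and adjusts it to its slot
      return 'an' if noun[0] in 'aeiou' else 'a'

  def utter(slot1, slot2):
      return (f'How many {slot1}s does it take '
              f'to make {article(slot2)} {slot2}?')

  print(utter('apple', 'pie'))   # the intended sentence
  print(utter('pie', 'apple'))   # the exchange error Garrett reports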

trythis The late American linguist Victoria Fromkin was a pioneer in the study of speech errors. You can access her speech error database online at the following address: http://www.mpi.nl/dbmpi/sedb/sperco_form4.pl.

Recognising Speech as Speech

After you’ve planned what you’re going to say, you need to put it into practice by manipulating the muscles of your vocal system in just the right way to give rise to the spoken words.

Distinguishing different meanings from the same sound

If spoken in normal fluent speech, the two sentences ‘It’s not easy to recognise speech’ and ‘It’s not easy to wreck a nice beach’ sound practically identical. This is down to two factors: the phonemes that make up the phrases ‘recognise speech’ and ‘wreck a nice beach’ are almost identical, and people tend not to pause between words. The same bit of speech can be interpreted as a two-word or a four-word phrase.

We used the CMU Pronouncing Dictionary to produce the following phonetic transcriptions of the two phrases:

Words                 Phonemes
RECOGNISE SPEECH      R EH K AH G N AY Z S P IY CH
WRECK A NICE BEACH    R EH K AH N AY S B IY CH
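
tip You can reproduce these transcriptions with the pronouncing package, a Python wrapper around the CMU Pronouncing Dictionary (install it with pip install pronouncing). Note that CMUdict uses the American spelling ‘recognize’, and it attaches digits to vowels to mark stress, which we strip here for readability:

  import re

  import pronouncing   # pip install pronouncing

  for phrase in ('recognize speech', 'wreck a nice beach'):
      phones = ' '.join(pronouncing.phones_for_word(word)[0]
                        for word in phrase.split())
      phones = re.sub(r'[0-9]', '', phones)   # strip stress digits
      print(f'{phrase:20s} {phones}')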

Segmenting speech

remember The problem of how the human brain splits language up into words is the problem of speech segmentation. You may think that the preceding section’s example is unusual and that normally the spaces between words are clear. But, in fact, normal speech rarely contains clear pauses between words.

Psychologist Jenny Saffran and colleagues played a recording of a nonsense language to infants and found that, apparently, they learn statistical associations between the syllables of the language. To create the nonsense language, they combined nonsense syllables to produce words like ‘pabiku’ and ‘golatu’, and then strung them together without any pauses or spaces to produce a few minutes of monotone computerised nonsense speech.

The infants just listened to the speech. They received no feedback or reward, and at the end they were tested on pairs of nonsense words. Being infants, they couldn’t be asked to choose a word, so instead the researchers played them pairs of words and recorded how long the babies attended to each one. For example, the infant may hear a sequence beginning as follows:

‘tupirogolabubidakupadotitupirobidakugolabupadotibidakutupiropadotigolabutupiro …’

This stream is simply the nonsense words strung together: ‘Tupiro golabu bidaku padoti tupiro bidaku golabu padoti bidaku tupiro padoti golabu tupiro’.

Then the child was played two stimuli – one a word from the language such as ‘bidaku’ and another a ‘part-word’ consisting of the end of one word followed by the beginning of another, such as ‘pirogo’. Saffran and her colleagues found that infants spent longer attending to the part-words than the whole words, indicating that they’re more familiar with the words – the part-words seem more novel to them and so they’re more interesting. In turn, this suggests that children use the statistical patterns between syllables to help them work out where one word ends and another begins.

remember Without spaces between words, how did the children learn to distinguish the words from the part-words? The researchers argue that the infants must be keeping track of how often one syllable is followed by another – within a word, each syllable is always followed by the same syllable but at the end of a word the choice of the next syllable is more variable. The infants keep track of these transitional probabilities and use them to carve the speech up into its constituent words.
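
technicalstuff Here’s a minimal sketch of the statistic the infants seem to be tracking. We rebuild a stream from the four nonsense words (allowing immediate repeats, a simplification of the real stimuli) and estimate each transitional probability by counting:

  import random
  from collections import Counter

  words = ['tupiro', 'golabu', 'bidaku', 'padoti']
  stream = []
  for _ in range(1000):
      word = random.choice(words)
      stream += [word[i:i + 2] for i in (0, 2, 4)]   # its 3 syllables

  pair_counts = Counter(zip(stream, stream[1:]))
  syllable_counts = Counter(stream[:-1])

  def transitional_probability(s1, s2):
      return pair_counts[(s1, s2)] / syllable_counts[s1]

  print(f"P(pi|tu) = {transitional_probability('tu', 'pi'):.2f}")   # within a word: 1.00
  print(f"P(go|ro) = {transitional_probability('ro', 'go'):.2f}")   # across words: about 0.25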

Delving into Language Problems

Much of this chapter concerns how cognitive psychology can help people to understand language problems, but language problems can also help people develop cognitive psychology. Various types of language problems exist that may be due to a variety of factors, including brain damage, genetic mutations and learning. Here we look at just four such problems.

Being lost for words: Aphasias

jargonbuster Aphasia refers to language impairments. Two well-known types were identified in the 19th century, and they’re named after their discoverers:

  • Broca’s aphasia: Identified by Paul Broca, it’s caused by damage to an area in the frontal lobes (now called Broca’s area) that’s responsible for motor control of speech. Broca’s aphasia is typified by disfluency in speech and a tendency to leave out grammatical morphemes: the patient’s speech uses a very simple sentence structure and lacks normal intonation. People with Broca’s aphasia would find the sentence ‘Peter gave Mike beer’ easy to understand but ‘The beer was given to Mike by Peter’ more difficult.
  • Wernicke’s aphasia: Identified by Carl Wernicke, it’s caused by damage to an area in the parietal and temporal lobes (now called Wernicke’s area) that seems to be responsible for understanding meaning. Patients are usually quite fluent but tend to produce the wrong words and even create their own words (neologisms). A person may say ‘I will sook you dinner’ instead of ‘I will cook you dinner’.

remember These problems seem to affect different aspects of language, which is sometimes used as an example of a double dissociation: two separate parts of the brain handle different aspects of a task independently. Broadly speaking, Broca’s aphasia is characterised by intact semantics but damaged syntax and fluency, and Wernicke’s aphasia is associated with almost the opposite pattern – fluent speech, with correct syntax but impaired semantics.

technicalstuff These two types of aphasia are typically produced by damage to the left side of the brain. Damage to the corresponding areas on the right produces different language issues: damage to the right-hemisphere counterpart of Broca’s area impairs production of the emotional tone of speech, while damage to the counterpart of Wernicke’s area impairs comprehension of it.

Sequencing the genes: Specific language impairment

jargonbuster Specific language impairment (SLI) is a rare condition that appears to run in families. Sufferers tend to have specific problems with grammar, leading some people to see this as evidence for a kind of ‘language gene’.

More recent research, however, suggests that SLI may not be specific to language, but that the genetic difference underlying this condition may be responsible for a more general mechanism for handling sequences. For example, the same gene occurs in a very similar form in other mammals, including mice. When this gene is mutated, the mice have problems with organising a sequence of actions.

Speaking in foreign tongues

Brain damage can also affect the pronunciation of speech: some people who suffer a brain injury start speaking in what sounds like a foreign accent. This problem suggests that particular brain regions control the way speech is pronounced. These regions may control the motor system of speech, and damage to them causes people to speak in an odd way; to a listener, this new speech pattern may simply sound foreign.

Having trouble reading: Dyslexia

jargonbuster People are usually classified as dyslexic when their reading ability trails behind their other cognitive abilities. But this definition is complicated by the fact that dyslexia isn’t a single condition. It can include people with a specific neurological impairment that affects their reading and people who, for one reason or another, haven’t learned the specific decoding skills necessary for reading.

In practice, little evidence exists of a specific neurological or genetic problem underlying most cases of dyslexia. In her book Why Children Can’t Read (Penguin), the cognitive psychologist Diane McGuinness outlines the results of many studies suggesting that people diagnosed as dyslexic may often simply not have acquired the necessary decoding skills. With the right kind of remedial programme, these people can then be taught to read properly.

remember This research suggests that, in many cases, a diagnosis of dyslexia doesn’t imply a permanent inability to read. Instead, the problem may reflect the fact that the person learned to read in a way that didn’t emphasise the correct mappings between the letters and sounds required to become a skilled reader. By training people in the necessary phonological decoding skills, these adults can be brought up to a reading level more compatible with their general level of intelligence.