6
Tactile Communication Systems

The sense of touch is a very natural candidate for helping those whose daily lives are limited by impairments involving other sensory modalities. This chapter considers how the sense of touch has been used to compensate for deficiencies in the visual, auditory, and vestibular systems. For the visually impaired, tactile displays have been developed to assist in “reading” text, such as braille, and to aid mobility by providing cues regarding obstacles in the immediate environment. Tactile displays that compensate for impaired hearing range from systems that translate environmental sounds into patterns of vibration on the skin, to devices that assist with learning speech reading. For all these applications, there is interest in developing tactile vocabularies to enhance the capacity for communication. The chapter will conclude with an overview of this work and the challenges associated with creating tactile words, known as tactons.

Sensory Substitution

Sensory substitution refers to the use of one sensory modality to substitute or compensate for loss of function, due to disease or trauma, in another sensory system. The sense of touch has been used as a substitute for vision and for audition because it can process both spatial and temporal information. The visual system is extremely effective at processing spatial information about the external environment, and so it is our primary means of encoding distance, the size and shape of an object, or the direction in which something is moving. It is much less capable of making discriminations based on the time between events, known as temporal discrimination, for which the auditory system is exquisitely tuned, as reflected in our capacity to process speech and music. The ear can distinguish between two auditory clicks separated by as little as 1.8 ms, whereas two mechanical pulses delivered to the skin on the fingertip must be spaced 10 ms apart to be perceived as distinct.

Tactile Systems for the Visually Impaired

One of the oldest and most successful sensory substitution systems is braille, in which information (text) usually processed visually is instead acquired tactually through the fingertips. This tactile alphabet was developed in the early nineteenth century by Louis Braille, who lost his sight at the age of three after an accidental injury. The alphabet was based on a 2-by-3 matrix of raised dots, a simplified version of a 12-dot (2-by-6) alphabet that had been devised by a French army captain, Charles Barbier. Barbier’s goal was to create a communication system, known as night writing, that soldiers could use to read battle commands silently and in the absence of light. Louis Braille recognized two limitations of Barbier’s alphabet: first, that the 12-dot matrix was too large, in that it required moving the fingertip across each pattern to decode it; and second, that it made more sense for each pattern to represent an individual letter rather than a sound (phoneme). By trial and error, Louis Braille developed a new alphabet for the blind that was logically constructed and could be used to transcribe any language. The present physical structure of each braille cell, with an interdot distance of 2.28 mm and a dot height that varies from 0.38 mm to 0.51 mm depending on the printing materials, is very close to that proposed originally by Braille. These values have been shown to be optimal in terms of reading speed and accuracy.
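
To make the 2-by-3 cell structure concrete, the following sketch (in Python, added here for illustration) encodes a few letters using the standard braille dot numbering, with dots 1 to 3 running down the left column and dots 4 to 6 down the right; the geometry constants simply repeat the dimensions quoted above.

```python
# Minimal sketch of the 2-by-3 braille cell: dots 1-3 run down the left
# column, dots 4-6 down the right column. Only a few letters are shown.
BRAILLE_DOTS = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "d": {1, 4, 5},
    "e": {1, 5},
}

INTERDOT_DISTANCE_MM = 2.28      # spacing between adjacent dots in a cell
DOT_HEIGHT_MM = (0.38, 0.51)     # range of dot heights, by printing material

def render_cell(letter: str) -> str:
    """Return a 2-column, 3-row text rendering of one braille cell."""
    dots = BRAILLE_DOTS[letter]
    rows = []
    for row in range(3):                         # rows from top to bottom
        left = "o" if row + 1 in dots else "."   # dots 1-3
        right = "o" if row + 4 in dots else "."  # dots 4-6
        rows.append(left + " " + right)
    return "\n".join(rows)

if __name__ == "__main__":
    for letter in "abcde":
        print(letter)
        print(render_cell(letter))
        print()
```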

Braille is a static display that visually impaired individuals “read” by moving the fingers across the line of text from left to right. Most readers prefer to use two hands rather than one, and bimanual reading is faster than unimanual reading, in part because it enables faster transitions between lines of text. One finger can be used for reading the textual information while the other finger processes more spatial information, such as interletter or interword spaces, and ensures type alignment. Proficient readers can read braille at around two words per second, which is about one half to one quarter the rate typically achieved for visual reading of English text (five words per second).1 At this reading rate, a single index finger scans 100 to 300 separate braille cells in 60 seconds. More recently, refreshable electronic braille displays have been developed to enable visually impaired individuals to read text on any computer or mobile device. In these displays, movable pins generate the braille characters, and between 40 and 80 characters from the screen can be displayed at a time, depending on the particular electronic braille model.

Over the years, a number of devices have been developed to present visual information to those with visual impairments. These systems typically used a camera to capture an image that was then processed by a computer into a two-dimensional tactile pattern, presented on the body using an array of vibrating motors. One such device was the Optacon (optical to tactile converter), developed by Bliss in the 1960s, which converted printed letters into a spatially distributed vibrotactile pattern on the fingertip using a 24-by-6 array of pins.2 A small handheld camera was used to scan the text with one hand, and an image roughly the size of the print letter was felt moving across the tactile display under the fingertip on the other hand. Although reading rates were much slower than those achieved with braille—highly proficient users could read 60 to 80 words per minute—the Optacon was one of the first devices to provide visually impaired people with immediate access to text and graphics. Between 1970 and 1990 more than 15,000 Optacon devices were sold, but beginning in the mid-1990s this technology was replaced by page scanners with optical character recognition, which were less expensive and easier for blind people to learn to use. Those systems have now been largely replaced by speech synthesizers and screen reader technologies.

A much larger system designed to enhance mobility for people with visual impairments was developed by Paul Bach-y-Rita and his colleagues in the late 1960s. This system attempted to address the critical mobility problems faced by visually impaired individuals who need to detect, identify, and avoid obstacles as they move. It was known as the Tactile Vision Substitution System (TVSS) and used 400 vibration motors mounted on a chair to display images on the back that were captured by a video camera. Users could recognize simple shapes and the orientation of lines, and with extensive training they could identify tactile images of common objects. In the 1960s and 70s TVSS devices had limited spatial resolution and a low dynamic range. They were cumbersome, noisy, and power hungry and so had limited impact on improving the mobility of visually impaired individuals.3 More recently developed systems use small cameras mounted on remotely controlled mobile robots or on the user’s head to acquire information about the environment. The video feed from the camera is translated into signals that activate electrodes or motors mounted on the body. One such device, the BrainPort, converts the pixels in the images captured by the camera into stronger or weaker electrical impulses felt by the tongue. This electrotactile display is attached to the roof of the mouth and is designed to supplement a cane or a guide dog, helping blind users determine the location, position, size, and shape of objects as they navigate.4 An interesting aspect of these TVSSs is that, after intensive training, users come to perceive the objects displayed as tactile stimuli on the back, tongue, or abdomen as being located some distance in front of them, a phenomenon known as distal attribution.
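
The core translation step in such systems, downsampling a camera frame and mapping pixel brightness onto stimulation strength, can be sketched roughly as follows; the 20-by-20 grid matches the 400-point TVSS array, but the number of intensity levels and the block-averaging scheme are assumptions made for illustration, not the specifications of any particular device.

```python
# Illustrative sketch of the image-to-tactile translation in vision
# substitution systems: a grayscale frame is downsampled to a coarse grid
# (20 x 20, the resolution of the original TVSS array) and each cell's
# mean brightness is mapped to a stimulation level. The number of output
# levels is an assumed value.
from typing import List

GRID_ROWS, GRID_COLS = 20, 20   # 400 stimulators, as in the TVSS
NUM_LEVELS = 4                  # assumed number of distinguishable intensities

def downsample(frame: List[List[int]]) -> List[List[float]]:
    """Average pixel brightness (0-255) over blocks, one value per stimulator."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // GRID_ROWS, w // GRID_COLS
    grid = []
    for r in range(GRID_ROWS):
        row = []
        for c in range(GRID_COLS):
            block = [frame[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

def to_stimulation_levels(grid: List[List[float]]) -> List[List[int]]:
    """Quantize mean brightness into discrete stimulation levels (0 = off)."""
    return [[min(NUM_LEVELS - 1, int(v / 256 * NUM_LEVELS)) for v in row]
            for row in grid]

if __name__ == "__main__":
    # A synthetic 200 x 200 frame: a bright square on a dark background.
    frame = [[255 if 60 <= x < 140 and 60 <= y < 140 else 0
              for x in range(200)] for y in range(200)]
    levels = to_stimulation_levels(downsample(frame))
    for row in levels:
        print("".join(str(v) for v in row))
```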

Tactile Systems for the Hearing Impaired

The development of tactile communication systems for those with hearing impairments dates to the 1930s, when electrical stimulation of the skin was used to teach deaf-mute children to understand speech. More recently, tactile-auditory substitution aids have focused on two primary applications. The first is the translation of environmental sounds detected by a microphone—like knocking on a door, or a baby’s crying—into time-varying patterns of vibration on the skin. For example, footsteps may be represented by a series of brief low-frequency pulses and a fire alarm by a continuous high-frequency vibration on a device worn around the wrist. The second is the design of tactile devices that convey aspects of the acoustic speech signal, either to assist with speech reading for people with hearing impairments or to improve the clarity of deaf children’s speech. Variables such as the intensity and duration of syllables and phonemes may be encoded by one vibrator operating at a low frequency, while specific phonetic features of consonants, such as fricatives (e.g., f and th) and unvoiced stops (e.g., p and t), are displayed using higher-frequency vibration at another location on the arm. These systems have been demonstrated to be useful in learning speech reading, although it is clear that systematic and long-term training is required for hearing-impaired individuals to achieve maximal benefits from the tactile aids.5 This is due to the arbitrary nature of the relation between the acoustic information and the associated tactile cue, which must be explicitly learned.
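
A minimal sketch of the first type of aid, mapping recognized environmental sounds onto vibration patterns on a wrist-worn device, is shown below; the sound classes, frequencies, and pulse timings are illustrative assumptions rather than the parameters of any particular product.

```python
# Hypothetical mapping from recognized environmental sounds to vibration
# patterns on a wrist-worn display. Each pattern is a drive frequency plus
# a sequence of (on, off) times in seconds; all values are illustrative.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VibrationPattern:
    frequency_hz: int                  # drive frequency of the actuator
    pulses: List[Tuple[float, float]]  # (on_duration, off_duration) pairs

SOUND_TO_PATTERN = {
    # Footsteps: a series of brief low-frequency pulses.
    "footsteps": VibrationPattern(frequency_hz=80,
                                  pulses=[(0.1, 0.2)] * 4),
    # Door knock: a few stronger, well-separated pulses.
    "door_knock": VibrationPattern(frequency_hz=150,
                                   pulses=[(0.15, 0.3)] * 3),
    # Fire alarm: continuous high-frequency vibration.
    "fire_alarm": VibrationPattern(frequency_hz=250,
                                   pulses=[(2.0, 0.0)]),
}

def play(pattern: VibrationPattern) -> None:
    """Stand-in for driving the actuator; here it just prints the schedule."""
    for on_time, off_time in pattern.pulses:
        print(f"vibrate at {pattern.frequency_hz} Hz for {on_time:.2f} s, "
              f"then pause {off_time:.2f} s")

if __name__ == "__main__":
    detected = "footsteps"             # would come from a sound classifier
    play(SOUND_TO_PATTERN[detected])
```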

Tactile Systems for the Deaf-Blind

One sensory substitution system that makes use of the exquisite capabilities of the haptic sense was developed for people who are deaf and blind, and is known as Tadoma. The user’s hand is placed on the speaker’s face and monitors the articulatory motions associated with the speech-production process, as shown in figure 12. This method is believed to have been developed by a Norwegian teacher in the 1890s and was first used in the United States in the 1920s, by Sophia Alcorn, to teach two deaf and blind children, Tad Chapman and Oma Simpson, after whom the technique is named. The physical characteristics of speech production that are sensed by the Tadoma reader include the airflow from the mouth, felt by the thumb on the lips; the laryngeal vibration, felt on the speaker’s neck; and the in-out and up-down movements of the lips and jaw, detected by the fingers. The haptic information provided to the hand of the Tadoma reader is multidimensional, and it is of sufficient resolution that very subtle changes in airflow or lip movement can be detected and interpreted in terms of a linguistic element.


Figure 12 Position of the hand used in the Tadoma method to understand speech, with the fingers sensing movements of the lips and jaw, vibration of the larynx, and airflow from the mouth. From Jones and Lederman (2006)9 with permission of Oxford University Press.

The speech reception capabilities of people who are both deaf and blind and who are extensively trained in the use of Tadoma are comparable to those of normal subjects listening to speech in a background of low-level noise. The exceptional individuals who become proficient in Tadoma provide proof that it is possible to understand speech solely using haptic cues. However, the majority of deaf-blind individuals never reach that level of proficiency, and the intimacy required for communication with the hand resting on the face of the speaker limits its use in social interactions. Nevertheless, Tadoma has been shown to be superior to any artificial tactile display of speech developed to date and has demonstrated the potential of speech communication through the haptic sense.6

Another tactile communication method, more commonly used by individuals who are deaf and blind, is the haptic reception of sign language. This method is generally used by deaf people who acquired sign language before becoming blind. To perceive the signs produced by the signing hand(s), the deaf and blind individual places a hand or hands over the signer’s hand(s) and passively traces the motion of the signing hand(s). Not surprisingly, the communication rates obtained with haptic reception of sign language are lower than those achieved with visual reception of signs (1.5 signs per second, compared to 2.5 signs per second), and errors are more common with tactile reception.7 Nevertheless, this haptic method of communication is effective for deaf and blind individuals who are skilled in its use, and the levels of accuracy achieved make it an acceptable means of communication.

Tactile Aids for Vestibular Impairments

The third sensory system for which tactile signals have been used to compensate for sensory loss is the vestibular system, which is involved in maintaining balance and postural control. When the vestibular system malfunctions due to injury or disease, the self-motion cues it provides to help stabilize vision during movement are reduced, which results in dizziness, blurred vision, and problems in walking and standing. The primary function of balance prostheses based on tactile signals is to reduce body sway and prevent falls. These devices use microelectromechanical systems (MEMS) sensors, such as accelerometers and gyroscopes mounted on the head or torso, to measure head or body tilt. The sensors are connected to signal processors that translate their outputs into signals that activate motors in a tactile display worn on the back. The motors are activated when the angular tilt and angular tilt velocity exceed a threshold value that indicates an increased risk of falling, and the location of the activated motors indicates the direction of body tilt. The objective of the display is to provide tactile cues that the balance-impaired person can use to reduce the body tilt or excessive swaying that can result in falls.8
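
The decision rule just described, switching on the motor on the side toward which the body is tilting once both tilt and tilt velocity exceed a threshold, can be sketched as follows; the threshold values and the four-motor front/back/left/right layout are assumptions made for illustration.

```python
# Rough sketch of the decision rule in a tactile balance prosthesis:
# MEMS sensors report torso tilt and tilt velocity, and when both exceed
# a threshold the motor on the side of the tilt is activated. Thresholds
# and the four-motor layout are assumed values for this example.
from dataclasses import dataclass

TILT_THRESHOLD_DEG = 2.0        # assumed tilt angle threshold
VELOCITY_THRESHOLD_DEG_S = 5.0  # assumed tilt velocity threshold

@dataclass
class TorsoState:
    roll_deg: float         # + = leaning right, - = leaning left
    pitch_deg: float        # + = leaning forward, - = leaning backward
    roll_vel_deg_s: float
    pitch_vel_deg_s: float

def motors_to_activate(state: TorsoState) -> list:
    """Return the tactors to switch on, named by the direction of body tilt."""
    active = []
    if (abs(state.roll_deg) > TILT_THRESHOLD_DEG
            and abs(state.roll_vel_deg_s) > VELOCITY_THRESHOLD_DEG_S):
        active.append("right" if state.roll_deg > 0 else "left")
    if (abs(state.pitch_deg) > TILT_THRESHOLD_DEG
            and abs(state.pitch_vel_deg_s) > VELOCITY_THRESHOLD_DEG_S):
        active.append("front" if state.pitch_deg > 0 else "back")
    return active

if __name__ == "__main__":
    # Leaning to the right and still moving in that direction:
    print(motors_to_activate(TorsoState(3.1, 0.4, 7.5, 0.2)))  # ['right']
```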

In most of these applications of tactile sensory substitution systems, users require extensive training in order for the devices to be beneficial. The association between the tactile signal and the associated visual or auditory cue is often arbitrary. Research on tactile sensory processing has provided insight into how this translation from one sensory modality to another can be optimized to reduce training time. For some of these devices, the mapping between the tactile signal and its meaning is fairly intuitive—particularly for spatial information, for which people naturally associate a point on the body with the representation of external space. For example, a tactile signal on the right side of the body in the vestibular prosthesis means the user is leaning too far to the right. In other applications, such as using vibrotactile feedback to enhance the learning of speech reading, the mapping of the tactile signal to the associated lip movements being taught is arbitrary.

Tactile Vocabularies

Braille is an example of a tactile communication system in which the pattern of indentations on the fingertip as it is scanned across braille characters is interpreted as letters or other grammatical symbols, thereby providing people who are visually impaired with access to the written word. Over the years there has been interest in seeing whether the skin can be used by anyone as a medium of communication similar to vision and audition, and if so, how to develop a tactile vocabulary for this purpose. In the late 1950s Frank Geldard, a psychologist at the University of Virginia, designed a tactile language, called Vibratese, that consisted of 45 basic elements that varied along three first-order dimensions of vibratory stimulation: the amplitude of the signal, its duration, and the location on the body where the vibration was delivered. These dimensions were selected based on earlier research findings that the frequency and amplitude of vibration on the skin are often confounded, which means that as the amplitude of vibration changes at a constant frequency, both the perceived intensity and the perceived pitch (frequency) of the vibration change. This meant that either frequency or amplitude could be used to create a tactile communication system, but not both. The other dimension that did not appear to be useful in a tactile coding system was waveform, for which the skin lacked the processing capacity of the ear, where variations in waveform indicate the timbre of a sound. To create his tactile language, Geldard used three levels of amplitude (soft, medium, and loud), three durations (0.1, 0.3, and 0.5 second), and five distinct locations across the chest where the vibrating motors were attached. These 45 elements represented all letters, all numerals, and frequently encountered short words (e.g., in, and, the). After 30 hours of learning the Vibratese alphabet and a further 35-hour training period, one person was able to process 38 five-letter words per minute.9 This compares very favorably with the performance of people trained to receive Morse code tactually, which is on the order of 18 to 24 words per minute. Although Vibratese was never developed further, research on using the skin to communicate continued and experienced a surge of interest in the early 2000s with the advent of small, inexpensive motors that could be incorporated into consumer electronic devices and wearable displays.
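
The arithmetic behind the 45 elements (three amplitudes, three durations, five locations) can be checked with a short enumeration such as the one below; the location labels are placeholders, and the actual assignment of letters and words to combinations is not reproduced.

```python
# Enumerating the Vibratese signal space: 3 amplitudes x 3 durations x
# 5 chest locations = 45 distinct elements, enough for 26 letters,
# 10 numerals, and a handful of frequent short words. Location labels
# are illustrative placeholders.
from itertools import product

AMPLITUDES = ["soft", "medium", "loud"]
DURATIONS_S = [0.1, 0.3, 0.5]
LOCATIONS = ["chest_1", "chest_2", "chest_3", "chest_4", "chest_5"]

elements = list(product(AMPLITUDES, DURATIONS_S, LOCATIONS))
print(len(elements))   # 45
print(elements[0])     # ('soft', 0.1, 'chest_1')
```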

More recent work on tactile communication systems has focused on the creation of tactile words or concepts rather than the presentation of individual letters in the word, as in Vibratese. These tactile signals are often referred to as tactons, by analogy to visual and auditory icons. This approach takes into account the time it takes to present a word tactually, which will always be much longer than the time needed to process it visually or auditorily. A typical five-letter English word takes about 0.8 seconds to present in Vibratese, with a maximum delivery rate of 67 words a minute. If concepts or words are presented rather than the individual letters in the word, then the information content of the message is dramatically increased. It is important that these tactile icons are easy to learn and memorize, and if possible have some intuitive meaning; for example, changing the rhythm of a tacton is readily interpreted as indicating urgency or priority.

When a cell phone vibrates in your pocket, the vibration that you feel can be described in terms of its frequency, amplitude, and duration, as shown in figure 13. We can create different patterns of vibration, or tactons, by varying these properties. A low-frequency, less intense vibration signal could indicate an incoming call from a friend, whereas a higher-frequency, more intense signal could mean a message from your boss. It is easier to remember and identify these tactons if they vary along several dimensions, such as intensity, frequency, and duration, and not just a single dimension.
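
As a concrete illustration, the sketch below defines each phone alert as a tacton that varies along frequency, amplitude, and duration simultaneously; the alert names and parameter values are assumptions made for the example.

```python
# Illustrative tactons for phone alerts, each defined along three
# dimensions (frequency, amplitude, duration) so they are easier to tell
# apart than signals that differ on only one dimension. Values are assumed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tacton:
    frequency_hz: int    # perceived pitch of the vibration
    amplitude: float     # normalized drive level, 0.0 to 1.0
    duration_s: float    # length of the vibration burst

ALERTS = {
    "call_from_friend": Tacton(frequency_hz=100, amplitude=0.4, duration_s=0.5),
    "message_from_boss": Tacton(frequency_hz=250, amplitude=0.9, duration_s=1.0),
}

if __name__ == "__main__":
    for name, tacton in ALERTS.items():
        print(name, tacton)
```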


Figure 13 Dimensions of vibrotactile signals that can be used to create tactons: variations in frequency, signal duration or repetition rate, and waveform complexity. For the latter, a base signal of 250 Hz modulated by a 20 Hz signal (left) feels perceptually rougher than a signal modulated at 50 Hz (right). From Jones and Sarter (2008)10 with permission of the Human Factors and Ergonomics Society.

Some aspects of a vibration are more salient to a user than others; for example, the location on the body to which the vibration is delivered is very easily encoded, so if a tactile display is distributed across the body—perhaps in clothing—then spatial location can be used as a building block to create tactons. Other dimensions, such as intensity, are much more difficult for people to identify; it is generally accepted that no more than three levels of intensity should be used in creating tactons. Although simple variations in waveform are difficult to perceive, by modulating a base signal at a lower frequency it is possible to create tactile patterns that are perceived to vary in roughness. When a base signal of 250 Hz is modulated by a 20 Hz signal, it is perceived as rougher than a signal modulated at 50 Hz (see figure 13). Roughness then becomes a dimension that people can use effortlessly to identify a tacton.
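
The sketch below generates such modulated signals by applying a slowly varying sinusoidal envelope to a 250 Hz carrier, one plausible way of implementing the modulation described; the sampling rate and modulation depth are incidental choices for the illustration.

```python
# Sketch of modulated vibrotactile signals used to vary perceived
# roughness: a 250 Hz carrier modulated at 20 Hz (felt as rougher) versus
# 50 Hz (felt as smoother). Sampling rate and modulation depth are
# incidental choices.
import math

SAMPLE_RATE = 5000   # samples per second

def modulated_signal(carrier_hz, modulation_hz, duration_s=1.0):
    """Return samples of a carrier whose amplitude is modulated sinusoidally."""
    n = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        envelope = 0.5 * (1 + math.sin(2 * math.pi * modulation_hz * t))
        samples.append(envelope * math.sin(2 * math.pi * carrier_hz * t))
    return samples

rough = modulated_signal(250, 20)    # slower envelope, perceived as rougher
smooth = modulated_signal(250, 50)   # faster envelope, perceived as smoother
print(len(rough), len(smooth))       # 5000 5000
```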

Most of the research on tactons has used a vocabulary of around 8 to 15 items, each of which people learn to associate with a particular concept and are then tested on their ability to identify. With this number of tactons and a fairly brief training period, performance is typically around 70% to 80% correct. This suggests that for many applications it will be challenging to create large tactile vocabularies that are easily learned and remembered, unless they are presented in conjunction with visual and/or auditory cues. One reason for this relatively poor performance is that identifying a stimulus in isolation is much harder than comparing two stimuli and determining whether they are the same or different. To put this in context, people can discriminate about 15 steps in intensity between the weakest and the most intense vibration if they are presented in sequence, but only about 3 steps if they have to identify the intensity (i.e., indicate whether it is low, medium, or high). Similarly, the duration of a tacton could span a range of 0.1 to 2 seconds, and across this range about 25 distinct durations can be discriminated, but only four or five durations can be identified accurately.

Other aspects of tactile stimulation have been explored in efforts to see how they could be incorporated into communication systems. There has been particular interest in analyzing how the time at which successive tactile stimuli are presented on the skin affects their perceived location and movement across the skin. As described in chapter 4, these effects are illusory, but they provide a mechanism for creating tactile motion on the skin that can be used as a cue in tactons.

Notes