CHAPTER EIGHT

Motor skills and written language perception: Contribution of writing knowledge to visual recognition of graphic shapes

Jean-Luc Velay and Marieke Longcamp

INTRODUCTION

Letter recognition: the milestone on the pathway to reading

The first cognitive step in written-language processing is to identify black squiggles on a white background as different letters of the alphabet. Indeed, fast and accurate visual recognition of characters is crucial for efficient reading, and mastery of this capacity is regarded as a good predictor of learning to read (Adams, 1990, pp. 61–64; Badian, 1995; Fitzgerald and Shanahan, 2000; Foulin, 2005; Näslund and Schneider, 1996). In adults, this initial reading step is sometimes disrupted: this is what occurs in cases of “pure alexia” (alexia without agraphia, or “letter-by-letter reading”). Pure alexia is an acquired reading disorder in which the person is able to read words only by identifying one letter at a time, in a slow and laborious fashion (Déjerine, 1892). Forty days after the onset of illness, a patient wrote in his diary: “I apologize to my reader, my brain has not healed yet; I can barely write, and afterwards, I can no longer read what I have just written. I don’t know the letters, neither my own, nor the ones printed in newspapers or books” (Perri et al., 1996). One of the main hypotheses explaining this reading disorder is that reading is carried out in a serial manner, rather than in the parallel way observed in neurologically intact readers (Arguin and Bub, 1993; Bartolomeo et al., 2002; Behrmann et al., 1998; Miozzo and Caramazza, 1998; Mycroft et al., 2002; Perri et al., 1996).

This initial reading step consists of visually detecting associated features to form a representation of the letters in the word (“visual analysis system” or “visual features units”, Coltheart et al., 2001; Ellis, 1993; McClelland and Rumelhart, 1981; Perry et al., 2007). Visual recognition of letters is a very complex skill because of their multiple form instantiations (“a” is also “A”), and their similarities (“C” is not “G”). According to a recent account (Grainger et al., 2008), based on the Pandemonium model (Selfridge, 1959), letter identification is achieved through hierarchically organized layers of feature and letter detectors, and shape invariance is gradually achieved via a hierarchy of increasingly complex neural processors. Rey et al. (2009) showed that a connectionist model (derived from McClelland and Rumelhart, 1981) could explain both the behavioral results and the event-related potentials (ERPs) recorded during letter identification. No matter what specific processes are involved, there is a general and tacit agreement that they are only visual in nature.
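The Pandemonium-style scheme described above can be sketched as a toy model: a layer of feature detectors feeds a layer of letter detectors, and the letter whose stored feature set best matches the input wins. The feature inventory and scoring rule below are invented simplifications for illustration, not taken from Selfridge (1959) or Grainger et al. (2008):

```python
# Toy Pandemonium-style letter identifier. A set of detected visual
# features is compared against each letter's stored feature set;
# the best-matching letter detector "shouts" loudest and wins.
# The feature labels are hypothetical simplifications.

LETTER_FEATURES = {
    "C": {"open_curve"},
    "G": {"open_curve", "horizontal_bar"},
    "O": {"closed_curve"},
    "Q": {"closed_curve", "oblique_tail"},
}

def identify(features):
    """Return the letter whose stored feature set best matches the input."""
    def score(letter):
        stored = LETTER_FEATURES[letter]
        hits = len(stored & features)    # shared features count for the letter
        misses = len(stored ^ features)  # mismatching features count against it
        return hits - misses
    return max(LETTER_FEATURES, key=score)

print(identify({"open_curve"}))                    # -> C
print(identify({"open_curve", "horizontal_bar"}))  # -> G
```

The sketch also makes the chapter's point about similarity concrete: “C” and “G” differ by a single feature, so noisy feature detection easily confuses their detectors, whereas letters with disjoint feature sets are kept apart.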

Neural bases of letter recognition

Lesions causing pure alexia are usually located in the left occipital cortex, extend into the posterior part of the corpus callosum (splenium), and often involve part of the temporal lobe (for a review, see Damasio and Damasio, 1983; Montant and Behrmann, 2000). The functional properties of these left occipito-temporal regions have thus sparked considerable interest, and have been the focus of a large number of neuroimaging studies. A well-accepted idea in the literature is that part of the left fusiform gyrus (Brodmann area 37), “the visual word-form area”, gradually becomes specialized in written language recognition during reading acquisition (Cohen et al., 2000; McCandliss et al., 2003). However, most studies that have investigated the occipito-temporal regions have used whole words as stimuli, and relatively little work has been carried out to explore the possible specific neural correlates of single-letter perception. In fact, it seems that a part of the fusiform gyrus that is distinct from and slightly anterior to the visual word-form area responds preferentially to letters rather than to other types of stimuli such as symbols (Flowers et al., 2004; Garrett et al., 2000; Gros et al., 2001; James et al., 2005), digits (James et al., 2005; Polk and Farah, 1998; Polk et al., 2002), or other object categories such as faces (Gauthier et al., 2000; Wong et al., 2009). This greater response to letters is not systematic, however, and seems to depend on specific factors such as the task. Fusiform activation is stronger, for example, when abstract representations of letters have to be accessed in memory, as in categorization (Pernet et al., 2005) or naming (Joseph et al., 2006). In addition, although the question of the possible specialization of populations of visual neurons for written language is indeed a crucial one, over-focusing on the occipito-temporal cortex gives the false impression that it is the only brain region involved in letter processing.
This bias derives from the general assumption that the processes underlying letter recognition are solely visual (Grainger et al., 2008). Obviously, other parts of the brain are recruited when letters are perceived, but descriptions of extra-occipital activations are very sparse in the literature, mostly because such activations have been ignored by most investigators. Frontal (Kleinschmidt et al., 2002), parietal (James et al., 2005; Joseph et al., 2006; Kleinschmidt et al., 2002) and insular (Joseph et al., 2006) activations have been reported in relation to letter processing, but again, the likelihood that they will be observed depends on the task performed. In particular, it is still not known whether there is automatic interplay between the visual and phonological representations of letters and their associated neural correlates. For instance, in the experiment by Joseph et al. (2006), where a matching task and a naming task were tested, the insular and parietal activations occurred selectively in the naming task. According to the authors, this result can be explained by the fact that only naming requires access to the phonological representations of letters. In fact, other arguments in the literature support the view that letter perception does not require systematic access to the letters’ phonological representations (Bowers et al., 1998; Niederbuhl and Springer, 1979).

To summarize, the earliest step in reading involves analyzing the configuration of strokes that form each letter. According to the literature, this analysis relies mostly on specialized posterior left occipito-temporal regions that initially support object recognition. Although the activity of these regions is certainly crucial to letter recognition, researchers often forget an important feature of letters: the fact that we know how to write them. It is well accepted that motor skills are important for processing the spatial aspects of the environment and of the objects it contains. And if one considers letters as a category of visual objects whose spatial features have to be processed quickly and efficiently, then one also has to take into account their motor counterpart, namely writing movements.

IS THERE ROOM FOR MOTOR PROCESSES IN OBJECT RECOGNITION?

Embodied cognition

The idea that the movements we make to explore and act upon our surroundings constrain our perception and structure our cognition is not new, since it appeared in philosophical texts in the nineteenth century (see Viviani, 1990, 2002). Freyd’s (1983) and Viviani and Stucchi’s (1989) studies were among the first to demonstrate the impact of motor skills on visual perception. The latter authors showed that an ellipse described by a moving dot of light could be erroneously perceived as a circle if the shape of the trajectory and the velocity of the dot were not related as they are in real hand movements. These (and other) findings run counter to strict cognitivism, in which perceptual and motor processes are reduced to the input and output of the symbolic processes underlying cognition. The idea supported by these authors has now evolved into the more general conception of “embodied” perception and cognition. The embodied-cognition paradigm emphasizes the importance of embodiment in cognitive processes, and focuses on human cognition as inextricably and intimately bound to, and shaped by, its corporeal foundation. In this framework, cognition is no longer viewed as a kind of abstract and symbolic information processing, with the brain as a disembodied processing unit. Human cognition is fundamentally grounded in experience and hence is closely intertwined with and mutually dependent on both sensory perception and motor action (Barsalou, 2008; Varela et al., 1991; Wilson, 2002).

Motor skills and object perception

Object perception is perhaps the domain in which the greatest number of examples of functional links between action and perception have been documented, and in which the notion of embodied cognition is the clearest. For instance, behavioral studies have shown that visual object perception can potentiate the reach-to-grasp hand movement it affords and in this way interacts somehow with the actual response movement (e.g. Craighero et al., 1999; Olivier and Velay, 2009; Tucker and Ellis, 2001). A large body of experimental data suggests that objects we can handle are perceived not only via their visual features (shape, color, size, etc.) but also by way of their motor features, via the manipulatory movements associated with their use. Initially, evidence of the interplay between motor representations and object recognition was found in monkeys, whose ventral premotor neurons (“canonical” neurons) discharged both when the animal grasped an object and when it merely looked at graspable objects, the attributes of which afford a specific action in the absence of any actual movements (Murata et al., 1997). This finding was interpreted as reflecting the description, in motor terms, of the objects presented visually. In other words, the visual features of the graspable object would be automatically translated into a potential motor action, regardless of any intention to move (Murata et al., 1997; see also Coello and Bidet-Ildei, this volume; Gentilucci and Campione, this volume; Jacob, this volume). Following electrophysiological work on animals, functional neuroimaging of the human brain has had a major influence on our ideas about object recognition, a domain where clinical observation had often given rise to contradictory speculations regarding how semantic knowledge is represented in the brain.
There is now a great deal of evidence, mainly from fMRI experiments, indicating that information about an object’s salient properties is stored in the sensory and motor systems that were active when that information was acquired. As a result, object concepts belonging to different categories, such as animals and tools, are represented in partially distinct, sensory-based and motor-property-based neural networks (Martin, 2007). In particular, the visual presentation of pictures of objects to which one can attribute a specific action was found to activate a ventral premotor and an anterior parietal cortical area, even when no actual motor response was required (Chao and Martin, 2000), suggesting that sensorimotor knowledge about the functional properties of manipulatable objects is part of their representation, and can be used to recognize or name them. These motor-perceptual interactions involve associations of objects with potential actions: this is clearly what occurs in the case of tools (Martin et al., 2000).

Such associations between actions and their correlated perceptions are generally learned during childhood. They are crucial to building unified, coherent representations of objects. Once the neural network underlying a given representation has been structured, any one of the inputs that was initially present suffices to reactivate the whole network (Martin et al., 2000; Pulvermüller, 1999). Some studies have directly assessed the role of motor learning in the reorganization of the neural networks involved in object processing (Pollmann and Maertens, 2005; Weisberg et al., 2007; Wolfensteller et al., 2004), with results showing that, after learning, visual presentation of the same stimuli elicits activations in brain regions involved in the programming of the response that was specifically associated with the stimuli during learning.

TOWARDS EMBODIED LETTER RECOGNITION

Behavioral data

Although alphabetic characters are not graspable objects, motor-perceptual links presumably contribute to their representation, since they are associated with highly specific handwriting movements. In many respects, the connection between letters and handwriting movements is even closer than that existing between an object and the movements made when using it. As a matter of fact, handwriting movements consist of producing a form as close as possible to the corresponding visual model. This visuomotor association is repeated very often during childhood and reinforced throughout the lifespan. Handwriting movements are thus associated with consistent spatial information about a letter. In addition, they are governed by very strict spatial and temporal rules, which have been described as the “grammar of action” (Goodnow and Levine, 1973). These rules may vary between writers (for example, between right- and left-handers), but they are fairly invariant for a given writer. In particular, the order of the strokes, the direction of the horizontal segments, and the rotational direction of each movement (clockwise or counterclockwise) are memorized for a given letter (Goodnow and Levine, 1973; Meulenbroek and Thomassen, 1991). This is even clearer in the case of Chinese and Japanese ideograms: in these graphic systems, each character is composed of a number of strokes that must be written in a strict, precise order, learned during the process of learning to read and write. This characteristic has been used to show that the motor sequence of strokes specific to each ideogram may be an essential component of its central representation. For instance, Flores d’Arcais (1994) differentiated the initial strokes, which are written first, from the final strokes of Chinese ideograms. He assumed that, if the order of the strokes is processed implicitly during the visual presentation of the character, the initial strokes should be processed before the final ones.
He showed that priming a character with its initial strokes was more efficient than priming it with the final strokes, and that characters sharing initial strokes were more often confused than those sharing final strokes. Of course, these results were obtained only for Chinese readers (and writers), not for subjects who were not able to write ideograms. Such observations suggest that the order of strokes is subsequently used as a cue for retrieving ideograms from memory. Similar data have also been obtained for Roman characters (Kosslyn et al., 1989; Parkinson and Khurana, 2007; Parkinson et al., 2010) and they challenge “pandemonium”-like models because they imply that letter recognition is not based purely on parallel visual processing of all constituent strokes but also on a motor memory of the tracing order of those strokes.
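The idea that stroke order serves as a retrieval cue can be sketched as a toy model in which characters are stored as ordered stroke sequences and a partial prime narrows the candidate set by sequence matching. The character names and stroke labels below are invented placeholders, not real ideograms:

```python
# Toy stroke-order retrieval. Characters are stored as ORDERED
# stroke sequences; a prime made of the first strokes can prune
# candidates incrementally, as strokes become available in writing
# order, whereas final strokes only discriminate at the very end.
# All names and stroke labels are hypothetical placeholders.

LEXICON = {
    "char_A": ["s1", "s2", "s3", "s4"],
    "char_B": ["s1", "s2", "s5", "s6"],
    "char_C": ["s7", "s8", "s3", "s4"],
}

def candidates_after_initial(prime):
    """Characters whose stroke sequence STARTS with the prime."""
    return {c for c, s in LEXICON.items() if s[:len(prime)] == prime}

def candidates_after_final(prime):
    """Characters whose stroke sequence ENDS with the prime."""
    return {c for c, s in LEXICON.items() if s[-len(prime):] == prime}

print(candidates_after_initial(["s1", "s2"]))  # {'char_A', 'char_B'}
print(candidates_after_final(["s3", "s4"]))    # {'char_A', 'char_C'}
```

On this sketch, characters sharing initial strokes (char_A and char_B) remain indistinguishable after an initial-stroke prime, mirroring the confusion pattern Flores d’Arcais observed, and only an ordered, motor-like memory of the sequence (not an unordered feature set) supports this kind of incremental retrieval.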

In addition to the stroke order, the way of drawing each stroke is also important. Babcock and Freyd (1988) found that subjects asked to reproduce artificial letters from memory unconsciously adopted the stroke direction used to draw the template they had memorized. From the static visual trace, they thus extracted and memorized information related to its production. In the same vein, repeated writing is an aid commonly used to help Japanese children memorize ideograms, and Japanese adults often report that they write with their finger in the air to identify complex characters (a phenomenon called Ku-sho; Sasaki, 1987). Furthermore, some convincing clinical arguments can be advanced in favor of the tight link between the visual and sensorimotor representations of letter shapes. First, it has been reported that patients with pure alexia, who are no longer able to recognize letters visually, sometimes succeed when they are asked to trace the outline of the letters with their fingers (Bartolomeo et al., 2002; Seki et al., 1995). This “kinesthetic facilitation”, first described by Déjerine (1892), was used to improve reading in two alexic patients (Seki et al., 1995). It turned out that the method proved effective, and following a training period in which finger tracing was required, the visual recognition of letters and simple words was restored without recourse to movements.

Therefore, both behavioral and clinical evidence suggest that handwriting movements are involved in letter memorization. If this is so, then changing the motor conditions during learning by using a typing instead of a handwriting method will probably affect the subjects’ representation of letters and hence their subsequent letter-recognition performance. We checked this by studying the early letter-learning process in very young children (ages 3–5) who had not yet been taught to read or write in school. Two writing modes, handwriting and typing, were compared in two groups of children by testing their letter-recognition performance (Longcamp et al., 2005b; Velay et al., 2004). The aim of that experiment was to determine whether two different types of motor training could induce different kinds of letter memorization. During the training period, for both handwriting and typing, a hand movement was therefore associated with the visual image of a given letter, but the two movements performed were quite different. On the one hand, learning to write by hand requires the writer to perform a movement that completely defines the shape of the letter in order to build an internal model of that letter. Once the learning is completed, there exists a unique correspondence between a given letter and the movement used to write it. On the other hand, learning to type consists of locating a key on the keyboard and pressing it, but since the trajectory depends on the location of the finger before it goes into action, no specific relationship between the visual form of a letter and a given movement is built. Moreover, nothing in this pointing movement informs the learner about the shape of the letter. In short, handwriting provides on-line signals from several sources, including vision, motor commands, and kinesthetic feedback, which are closely linked and simultaneously distributed over time. No such spatio-temporal pattern occurs in typewriting. 
The results showed that the children who were trained via handwriting performed more successfully on the letter-recognition tests than those who were trained via typing.

Neuroimaging data

In the great majority of clinical observations, alexia occurs following a lesion of the left occipito-temporal areas. However, deficits in the visual identification of letters can sometimes be associated with the inability to write letters (Anderson et al., 1990), an observation that is hardly compatible with a pure visual deficit. These facts suggest, on the contrary, that visual and sensorimotor cerebral representations of letter shapes are somehow coupled. In the case described by Anderson and colleagues (1990), the patient in question became alexic and agraphic following a left premotor cortical lesion.

The involvement of graphomotor skills in the visual perception of letters can be directly tested using a neuroimaging procedure very close to that previously described for graspable object recognition. The question is simple: As observed for graspable objects, does simply viewing a letter activate sensorimotor areas? An initial answer to this question was given in a study showing that several associative and motor areas were activated when ideographic and syllabic Japanese characters were seen and written (Matsuo et al., 2003), and when subjects were instructed to retrieve kanji ideograms, which consist of several elements, in response to the element that was always written first. However, ideographic stimuli are fairly complex since they represent words and are associated with high-level linguistic representations. Roman letters are far less complex, so it was important to check whether passively viewing Roman letters activates some of the sensorimotor cerebral areas also involved in writing movements. In a neuroimaging study, Longcamp et al. (2003) directly assessed this possibility. Using fMRI on a group of right-handed subjects, they checked to see whether passive letter-viewing induced any activation in the sensorimotor brain areas known to be involved in writing movements. They observed that part of the left ventral premotor cortex (Brodmann Area 6) was activated when letters were being passively observed, and that the same zone was strongly activated when the subjects were actually writing the letters. Interestingly, this area did not respond to the visual presentation of pseudoletters, with which no predetermined motor program could be associated. Furthermore, in a subsequent study, the authors showed that a symmetrical area in the right premotor cortex was activated when left-handed subjects were passively watching letters, confirming that this visually induced activation was writing-hand dependent (Longcamp et al., 2005a).
They therefore suggested that this premotor activation reflects the involvement of the motor programs used to write each letter, in agreement with the conclusions drawn by Anderson et al. (1990). These various data indicate that the brain representation of letters might not be strictly visual, but might be based on a complex neural network including a sensorimotor component acquired when the individual is learning to read and write concomitantly.

Another approach has also been used to test the involvement of motor areas in the visual perception of letters (Papathanasiou et al., 2004). It consists of probing the excitability of the corticospinal motor pathways using transcranial magnetic stimulation (TMS). TMS causes a motor-evoked potential (MEP) in the hand muscles, whose size varies with the excitability of the corticospinal motor pathways. For a group of right-handed subjects, the authors found that the size of the MEP in hand muscles increased during visual observation of letters, and that the increase was significant for the dominant hand (left hemisphere) only.

As a whole, these results show that, even in a totally passive context, letters are not processed solely in the ventral visual areas, but also implicate the motor/premotor areas involved in handwriting. The question is when and how do motor processes intervene to direct perception? An interesting paradigm for attempting to answer this question consists of presenting letters stroke by stroke (Parkinson et al., 2010). Under such conditions of visual presentation, faster recognition has been obtained for letters presented as sequences of strokes in a temporal order that is consistent with the letter-writing order rather than in an inconsistent order. This stroke-order effect shows that motor production has an impact on letter perception even when, in theory, feature-extraction models (“pandemonium”-like models) should be perfectly sufficient. In addition, event-related potentials (ERPs) over posterior scalp areas have revealed that, when the strokes are presented in the correct writing order, early visual processing was speeded up as soon as the first stroke was presented (Parkinson et al., 2010). Hence, arguments in favor of a motor contribution to visual recognition of alphabetic characters, although sparse, are convincing. We assume that the main process influenced by motor activity, if any, is likely to be a spatial process taking place during the initial step of written-character recognition. In other words, writing movements may contribute to memorizing letter shapes, and perhaps more crucially, their orientation.

LETTER ORIENTATION: A VISUOMOTOR PROCESS?

Identifying letters, unlike objects, requires the ability to recognize their orientation. Indeed, orientation is a critical factor when readers have to discriminate between letters. In their now old but still fascinating book, Corballis and Beale (1976) theorized about the poor capacity of humans and animals to distinguish between mirror images of objects. They hypothesized that the visual system had evolved in such a way that animals automatically generalize from one to any orientation in space and consequently ignore spatial differences resulting from mirror transformations of visual scenes. This generalization is thought to be very useful from a behavioral point of view: once an animal has seen a predator coming from the left, it is automatically prepared to recognize the same predator even if, this time, it comes from the right. This is also true for objects or faces: when we first see them in a given orientation, we soon become able to recognize them in many other orientations. However, this ability to mirror-reverse becomes a great hindrance for letters. Letters have a peculiar characteristic that makes them different from objects: they do not follow the rules of symmetry. The mirror image of a letter is at best a nonsense letter and at worst another letter (“d” differs in its orientation from “b”, as does “p” from “q”). To be able to identify letters and thus to become a good reader, a child must fight against his or her spontaneous tendency to generalize. For some children, letter orientation remains a problem when they are learning to read and confusions between letters and their mirror images are among the most frequent errors made by young children and “poor readers” (Adams, 1990; Terepocki et al., 2002; Wolff and Melngailis, 1996).

It has long been known that presenting a letter in an orientation that differs from its standard orientation impairs recognition (e.g. Hamm et al., 2004). Response times are longer for mirrored letters than for normal letters, suggesting that an additional process is performed in order to make the “parity” judgment (i.e. between the letter and its mirror image). It is generally assumed that this process is a mental rotation: When the mirror image of a letter is seen, it has to be mentally reversed first in order to be identified. This mental rotation is thought to rely on purely visual processes. We propose the alternative hypothesis that a mental simulation of automatic and covert handwriting movements is triggered by the visual presentation of the letter. Whatever the letter’s orientation, the motor program corresponding to its correct orientation would be triggered: when the letter is correctly oriented, the motor program and the visual image match and the parity judgment is fast. Conversely, if the letter is mirror-oriented, they do not match and the process would take more time. The detection of a match or a mismatch between the perceived shape and the memorized motor program might contribute to the mirror vs. normal recognition processes. In this view, the parity judgment would rely on visuomotor processes.
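This match/mismatch account can be rendered as a toy decision model: the visual presentation triggers the stored motor program for the letter's canonical orientation, and a mismatch with the percept adds an extra processing stage. The stroke coding and the timing constants below are invented for illustration and carry no empirical weight:

```python
# Toy visuomotor parity judgment. Each letter's motor program is
# stored in its canonical orientation as a sequence of directed
# strokes; a mirrored percept reverses the left/right components.
# Timing constants are arbitrary illustrative values, not data.

MOTOR_PROGRAMS = {
    "b": ["down", "right_loop"],  # hypothetical stroke coding
    "d": ["down", "left_loop"],
}

BASE_RT = 400        # ms, cost of a direct match (illustrative)
MISMATCH_COST = 150  # ms, extra stage when percept and program differ

def mirror(strokes):
    """Reverse the left/right component of each stroke."""
    flip = {"right_loop": "left_loop", "left_loop": "right_loop"}
    return [flip.get(s, s) for s in strokes]

def parity_judgment(letter, percept):
    """Return ('normal' | 'mirror', simulated response time in ms)."""
    program = MOTOR_PROGRAMS[letter]
    if percept == program:            # percept matches the motor program
        return "normal", BASE_RT
    if percept == mirror(program):    # mismatch: extra processing stage
        return "mirror", BASE_RT + MISMATCH_COST
    raise ValueError("percept does not correspond to this letter")

print(parity_judgment("b", ["down", "right_loop"]))  # ('normal', 400)
print(parity_judgment("b", ["down", "left_loop"]))   # ('mirror', 550)
```

The sketch makes the prediction explicit: mirrored letters incur a fixed additional cost relative to canonical ones, the pattern the hypothesis shares with, but explains differently from, the purely visual mental-rotation account.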

A means of testing this hypothesis is to ask subjects to make a parity judgment with letters they know but are not able to write: if handwriting is somehow involved in the orientation judgment, performance should decrease for these letters. We taught adults to write characters from an unknown alphabet (Tamil, Bengali, etc.) either by traditional pencil-and-paper writing or by typing on a computer keyboard (Longcamp et al., 2006). Training lasted three weeks and the number of repetitions of the characters was rigorously the same for both writing methods. After the training, we evaluated the ability of the subjects to discriminate the newly learned characters from their mirror images. We found stronger and longer-lasting (several weeks) facilitation in recognizing the orientation of characters that had been written by hand compared to those typed. In addition, we wanted to further explore the contribution of acquired writing knowledge to visual recognition of characters by linking behavior with brain function and (re)organization. For that purpose, one week after the end of the training, we asked the subjects who had participated in the behavioral study to make the same orientation judgment while we recorded their fMRI signal (Longcamp et al., 2008). Would the observed differences in recognition performance following handwriting vs. typewriting training be associated with different patterns of brain activity? If so, we would be able to determine which brain areas sustain the possible reactivation of handwriting memory while performing orientation judgments. The functional MRI recordings indicated that the two writing modes were associated with partially distinct neural pathways during the orientation judgment. Greater activity related to handwritten-character identification was observed in several brain regions known to be involved in the execution, imagery, and observation of actions, in particular the left Broca’s area and the bilateral inferior parietal lobes.
Taken together, the behavioral and fMRI results provide strong arguments in favor of the view that the specific movements memorized when learning how to write participate in the visual recognition of letter shape and orientation.

CONCLUSION

We assume that, since the visual forms of letters are linked both to the motor programs used to write them and to the associated kinesthetic feedback during reading and writing acquisition, a multimodal letter representation is built. The brain representation of letters would not be strictly visual, but rather supported by a multicomponent neural network. One of the components might be sensorimotor in nature, linked to handwriting. When the learning process is complete, one of the initial inputs (visual input in reading) might suffice to reactivate the whole network, as proposed by the “resonance theory” (Pulvermüller, 1999; Stone and Van Orden, 1994). Besides handwriting movements, it is likely that other sensorimotor processes help in the visual recognition of letters. For example, it has been shown that, in kindergarten, letter recognition is better after training involving both visual and manual exploration of letters than after purely visual exploration of letters (Bara et al., 2004, 2007). It should be noted that initial phoneme identification and pseudoword decoding were also performed better after the haptic training in that study.

Beyond single-letter recognition, the contribution of handwriting knowledge to higher levels of reading remains to be systematically explored. A study on the Chinese language concluded that there is a close relationship between Chinese children’s writing and reading skills (Tan et al., 2005). The authors concluded that the establishment of motor programs leads to the formation of long-term motor memories of Chinese characters and thus facilitates reading. Nevertheless, the debate is far from closed, since this stimulating result was directly challenged by a more recent study (Bi et al., 2009). To conclude, even if functional links have been evidenced between global motor skills and reading disabilities both in children (Fawcett et al., 1996) and adults (Nicolson et al., 1999; Velay et al., 2002), further research is required to answer the question of whether learning how to write really helps children learn how to read. If the answer turns out to be “yes,” the impact of the major changes occurring in children’s writing habits (today, most writing is done on a digital writing device) should be investigated (Mangen and Velay, 2010).

REFERENCES

Adams, M.J. (1990). Beginning to Read: Thinking and Learning about Print. Cambridge, MA: MIT Press.

Anderson, S.W., Damasio, A.R., and Damasio, H. (1990). Troubled letters but not numbers: domain specific cognitive impairments following focal damage in frontal cortex. Brain, 113(3): 749–766.

Arguin, M. and Bub, D.N. (1993). Single-character processing in a case of pure alexia. Neuropsychologia, 31(5): 435–458.

Babcock, M.K. and Freyd, J.J. (1988). Perception of dynamic information in static handwritten forms. The American Journal of Psychology, 101(1): 111–130.

Badian, N.A. (1995). Predicting reading ability over the long term: the changing roles of letter naming, phonological awareness and orthographic processing. Annals of Dyslexia, 45: 79–96.

Bara, F., Gentaz, E., Colé, P., and Sprenger-Charolles, L. (2004). The visuo-haptic and haptic exploration of letters increases the kindergarten-children’s understanding of the alphabetic principle. Cognitive Development, 19(3): 433–449.

Bara, F., Gentaz, E., and Colé, P. (2007). Haptics in learning to read with children coming from low socio-economic status families. British Journal of Developmental Psychology, 25: 643–663.

Barsalou, L.W. (2008). Grounded cognition. Annual Review of Psychology, 59: 617–645.

Bartolomeo, P., Bachoud-Lévi, A.C., Chokron, S., and Degos, J.-D. (2002). Visually and motor-based knowledge of letters: evidence from a pure alexic patient. Neuropsychologia, 40(8): 1363–1371.

Behrmann, M., Plaut, D.C., and Nelson, J. (1998). A literature review and new data supporting an interactive account of letter by letter reading. Cognitive Neuropsychology, 15: 7–51.

Bi, Y., Han, Z., and Zhang, Y. (2009). Reading does not depend on writing, even in Chinese. Neuropsychologia, 47: 1193–1199.

Bowers, J.S., Vigliocco, G., and Haan, R. (1998). Orthographic, phonological, and articulatory contributions to masked letter and word priming. Journal of Experimental Psychology: Human Perception and Performance, 24(6): 1705–1719.

Cohen, L., Dehaene, S., Naccache, L., Lehericy, S., Dehaene-Lambertz, G., Henaff, M.A., and Michel, F. (2000). The visual word form area: spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain, 123(2): 291–307.

Coltheart, M., Rastle, K., Perry, C., Langdon, R., and Ziegler, J.C. (2001). DRC: a dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108: 204–256.

Corballis, M.C. and Beale, I.L. (1976). The Psychology of Left and Right. Hillsdale, NJ: Lawrence Erlbaum Associates.

Craighero, L., Fadiga, L., Rizzolatti, G., and Umiltà, C. (1999). Action for perception: a motor-visual attentional effect. Journal of Experimental Psychology: Human Perception and Performance, 25(6): 1673–1692.

Damasio, A.R. and Damasio, H. (1983). The anatomic basis of pure alexia. Neurology, 33: 1573–1583.

Déjerine, J. (1892). Contribution à l’étude anatomo-pathologique et clinique des différentes variétés de cécité verbale. Mémoires de la Société de Biologie, 4: 61–90.

Ellis, A.W. (1993). Reading, Writing and Dyslexia: A Cognitive Analysis. Hillsdale, NJ: Laurence Erlbaum Associates.

Fawcett, A.J., Nicolson, R.I., and Dean, P. (1996). Impaired performance of children with dyslexia on a range of cerebellar tasks. Annals of Dyslexia, 46: 259–283.

Fitzgerald, J. and Shanahan, T. (2000). Reading and writing relationships and their development. Educational Psychologist, 35: 39–50.

Flores d’Arcais, G.B. (1994). Order of strokes writing as a cue for retrieval in reading Chinese characters. European Journal of Cognitive Psychology, 6: 337–355.

Flowers, D.L., Jones, K., Noble, K., Vanmeter, J., Zeffiro, T.A., Wood, F.B., and Eden, G.F. (2004). Attention to single letters activates left extrastriate cortex. NeuroImage, 21(3): 829–839.

Foulin, J.N. (2005). Why is letter-name knowledge such a good predictor of learning to read? Reading and Writing, 18: 129–155.

Freyd, J.J. (1983). Representing the dynamics of a static form. Memory and Cognition, 11(4): 342–346.

Garrett, A.S., Flowers, D.L., Absher, J.R., Fahey, F.H., Gage, H.D., Keyes, J.W., Porrino, L.J., and Wood, F.B. (2000). Cortical activity related to accuracy of letter recognition. NeuroImage, 11(2): 111–123.

Gauthier, I., Tarr, M.J., Moylan, J., Skudlarski, P., Gore, J.C., and Anderson, A.W. (2000). The fusiform “face area” is part of a network that processes faces at the individual level. Journal of Cognitive Neuroscience, 12(3): 495–504.

Goodnow, J.J. and Levine, R.A. (1973). “The grammar of action”: sequence and syntax in children’s copying. Cognitive Psychology, 4: 82–98.

Grainger, J., Rey, A., and Dufau, S. (2008). Letter perception: from pixels to pandemonium. Trends in Cognitive Sciences, 12(10): 381–387.

Gros, H., Boulanouar, K., Viallard, G., Cassol, E., and Celsis, P. (2001). Event-related functional magnetic resonance imaging study of the extrastriate cortex response to a categorically ambiguous stimulus primed by letters and familiar geometric figures. Journal of Cerebral Blood Flow and Metabolism, 21(11): 1330–1341.

Hamm, J.P., Johnson, B.W., and Corballis, M.C. (2004). One good turn deserves another: an event-related brain potential study of rotated mirror-normal letter discriminations. Neuropsychologia, 42(6): 810–820.

James, K.H., James, T.W., Jobard, G., Wong, A.C., and Gauthier, I. (2005). Letter processing in the visual system: different activation patterns for single letters and strings. Cognitive, Affective and Behavioral Neuroscience, 5(4): 452–466.

Joseph, J.E., Cerullo, M.A., Farley, A.B., Steinmetz, N.A., and Mier, C.R. (2006). fMRI correlates of cortical specialization and generalization for letter processing. NeuroImage, 32: 806–820.

Kleinschmidt, A., Buchel, C., Hutton, C., Friston, K.J., and Frackowiak, R.S.J. (2002). The neural structures expressing perceptual hysteresis in visual letter recognition. Neuron, 34(3): 659–666.

Kosslyn, S.M., Koenig, O., Barret, A., Cave, C.B., Tang, J., and Gabrieli, J.D.E. (1989). Evidence for two types of spatial representations: hemispheric specialization for categorical and coordinate relations. Journal of Experimental Psychology: Human Perception and Performance, 15: 723–735.

Longcamp, M., Anton, J.L., Roth, M., and Velay, J.L. (2003). Visual presentation of single letters activates a premotor area involved in writing. NeuroImage, 19(4): 1492–1500.

Longcamp, M., Anton, J.L., Roth, M., and Velay, J.L. (2005a). Premotor activations in response to visually presented single letters depend on the hand used to write: a study in left-handers. Neuropsychologia, 43(12): 1801–1809.

Longcamp, M., Zerbato-Poudou, M.T., and Velay, J.L. (2005b). The influence of writing practice on letter recognition in preschool children: a comparison between handwriting and typing. Acta Psychologica, 119(1): 67–79.

Longcamp, M., Boucard, C., Gilhodes, J.C., and Velay, J.L. (2006). Remembering the orientation of newly learned characters depends on the associated writing knowledge: a comparison between handwriting and typing. Human Movement Science, 25(4–5): 646–656.

Longcamp, M., Boucard, C., Gilhodes, J.C., Anton, J.L., Roth, M., Nazarian, B., and Velay, J.L. (2008). Learning through hand- or typewriting influences visual recognition of new graphic shapes: behavioral and functional imaging evidence. Journal of Cognitive Neuroscience, 20(5): 802–815.

McCandliss, B.D., Cohen, L., and Dehaene, S. (2003). The visual word form area: expertise for reading in the fusiform gyrus. Trends in Cognitive Sciences, 7(7): 293–299.

McClelland, J.L. and Rumelhart, D.E. (1981). An interactive activation model of context effects in letter perception: part 1: an account of basic findings. Psychological Review, 88(5): 375–407.

Mangen, A. and Velay, J.L. (2010). Digitizing literacy: reflections on the haptics of writing, in Lazinica, A. (ed.), Advances in Haptics (pp. 385–401). Vienna: IN-TECH web.

Martin, A. (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58: 25–45.

Martin, A., Ungerleider, L.G., and Haxby, J.V. (2000). Category specificity and the brain: the sensory/motor model of semantic representations of objects, in Gazzaniga, M.S. (ed.), The Cognitive Neurosciences (pp. 1023–1036). Cambridge, MA: MIT Press.

Matsuo, K., Kato, C., Okada, T., Moriya, T., Glover, G.H., and Nakai, T. (2003). Finger movements lighten neural loads in the recognition of ideographic characters. Brain Research, Cognitive Brain Research, 17(2): 263–272.

Meulenbroek, R.G.J. and Thomassen, A.J.W.M. (1991). Stroke-direction preferences in drawing and handwriting. Human Movement Science, 10: 247–270.

Miozzo, M. and Caramazza, A. (1998). Varieties of pure alexia: the case of failure to access graphemic representations. Cognitive Neuropsychology, 15(1–2): 203–238.

Montant, M. and Behrmann, M. (2000). Pure alexia. Neurocase, 6: 265–294.

Murata, A., Fadiga, L., Fogassi, L., Gallese, V., Raos, V., and Rizzolatti, G. (1997). Object representation in the ventral premotor cortex (area F5) of the monkey. Journal of Neurophysiology, 78(4): 2226–2230.

Mycroft, R., Hanley, J.R., and Kay, J. (2002). Preserved access to abstract letter identities despite abolished letter naming in a case of pure alexia. Journal of Neurolinguistics, 15(2): 99–108.

Näslund, J.C. and Schneider, W. (1996). Kindergarten letter knowledge, phonological skills, and memory processes: relative effects on early literacy. Journal of Experimental Child Psychology, 62(1): 30–59.

Nicolson, R.I., Fawcett, A.J., Berry, E.L., Jenkins, I.H., Dean, P., and Brooks, D.J. (1999). Association of abnormal cerebellar activation with motor learning difficulties in dyslexic adults. Lancet, 353: 1662–1667.

Niederbuhl, J. and Springer, S.P. (1979). Task requirements and hemispheric asymmetry for the processing of single letters. Neuropsychologia, 17: 689–692.

Olivier, G. and Velay, J.L. (2009). Visual objects can potentiate a grasping neural simulation which interferes with manual response execution. Acta Psychologica, 130(2): 147–152.

Papathanasiou, I., Filipovic, S.R., Whurr, R., Rothwell, J.C., and Jahanshahi, M. (2004). Changes in corticospinal motor excitability induced by non-motor linguistic tasks. Experimental Brain Research, 154(2): 218–225.

Parkinson, J. and Khurana, B. (2007). Temporal order of strokes primes letter recognition. Quarterly Journal of Experimental Psychology, 60: 1265–1274.

Parkinson, J., Dyson, B.J., and Khurana, B. (2010). Line by line: the ERP correlates of stroke order priming in letters. Experimental Brain Research, 201: 575–586.

Pernet, C., Celsis, P., and Demonet, J.F. (2005). Selective response to letter categorization within the left fusiform gyrus. NeuroImage, 28(3): 738–744.

Perri, R., Bartolomeo, P., and Silveri, M.C. (1996). Letter dyslexia in a letter by letter reader. Brain and Language, 53(3): 390–407.

Perry, C., Ziegler, J.C., and Zorzi, M. (2007). Nested incremental modeling in the development of computational theories: the CDP+ model of reading aloud. Psychological Review, 114(2): 273–315.

Polk, T.A. and Farah, M.J. (1998). The neural development and organization of letter recognition: evidence from functional neuroimaging, computational modeling, and behavioral studies. Proceedings of the National Academy of Sciences of the USA, 95: 847–852.

Polk, T.A., Stallcup, M., Aguirre, G.K., Alsop, D.C., D’Esposito, M., Detre, D.A., and Farah, M.J. (2002). Neural specialization for letter recognition. Journal of Cognitive Neuroscience, 14: 145–159.

Pollmann, S. and Maertens, M. (2005). Shift of activity from attention to motor-related brain areas during visual learning. Nature Neuroscience, 8: 1494–1496.

Pulvermüller, F. (1999). Words in the brain’s language. Behavioral and Brain Sciences, 22: 253–336.

Rey, A., Dufau, S., Massol, S., and Grainger, J. (2009). Testing computational models of letter perception with item-level event-related potentials. Cognitive Neuropsychology, 26(1): 7–22.

Sasaki, M. (1987). Why do Japanese write characters in space? International Journal of Behavioral Development, 10: 135–149.

Seki, K., Yajima, M., and Sugishita, M. (1995). The efficacy of kinesthetic reading treatment for pure alexia. Neuropsychologia, 33(5): 595–609.

Selfridge, O.G. (1959). Pandemonium: a paradigm for learning, in Blake, D.V. and Uttley, A.M. (eds), Proceedings of the Symposium on Mechanisation of Thought Processes (pp. 511–529). London: HM Stationery Office.

Stone, G.O. and Van Orden, G.C. (1994). Building a resonance framework for word recognition using design and system principles. Journal of Experimental Psychology: Human Perception and Performance, 20: 1248–1268.

Tan, L.H., Spinks, J.A., Eden, G.F., Perfetti, C.A., and Siok, W.T. (2005). Reading depends on writing in Chinese. Proceedings of the National Academy of Sciences of the USA, 102: 8781–8785.

Terepocki, M., Kruk, R.S., and Willows, D.M. (2002). The incidence and nature of letter orientation errors in reading disability. Journal of Learning Disabilities, 35: 214–233.

Tucker, M. and Ellis, R. (2001). The potentiation of grasp types during visual object categorization. Visual Cognition, 8: 769–800.

Varela, F.J., Thompson, E., and Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press.

Velay, J.L., Daffaure, V., Giraud, K., and Habib, M. (2002). Interhemispheric sensorimotor integration in pointing movements: a study on dyslexic adults. Neuropsychologia, 40: 827–834.

Velay, J.L., Longcamp, M., and Zerbato-Poudou, M.T. (2004). De la plume au clavier: est-il toujours utile d’enseigner l’écriture manuscrite?, in Gentaz, E. and Dessus, P. (eds), Comprendre les apprentissages: Sciences cognitives et éducation (pp. 69–82). Paris: Dunod.

Viviani, P. (1990). Motor-perceptual interactions: the evolution of an idea, in Piattelli-Palmarini, M. (ed.), Cognitive Science in Europe: Issues and Trends (pp. 11–39). Ivrea, Italy: Golem Monograph Series.

Viviani, P. (2002). Motor competence in the perception of dynamic events: a tutorial, in Prinz, W. and Hommel, B. (eds), Common Mechanisms in Perception and Action (pp. 406–442). Oxford: Oxford University Press.

Viviani, P. and Stucchi, N. (1989). The effect of movement velocity on form perception: geometric illusions in dynamic displays. Perception and Psychophysics, 46: 266–274.

Weisberg, J., van Turennout, M., and Martin, A. (2007). A neural system for learning about object function. Cerebral Cortex, 17: 513–521.

Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin and Review, 9(4): 625–636.

Wolfensteller, U., Schubotz, R.I., and von Cramon, D.Y. (2004). “What” becoming “where”: functional magnetic resonance imaging evidence for pragmatic relevance driving premotor cortex. Journal of Neuroscience, 24: 10431–10439.

Wolff, P.H. and Melngailis, I. (1996). Reversing letters and reading transformed text in dyslexia: a reassessment. Reading and Writing, 8(4): 341–355.

Wong, A.C., Jobard, G., James, K.H., James, T.W., and Gauthier, I. (2009). Expertise with characters in alphabetic and nonalphabetic writing systems engage overlapping occipito-temporal areas. Cognitive Neuropsychology, 26: 111–127.