Embodied lexical representations: Flexible tools for predicting the future
INTRODUCTION
Language units, such as words and sentences, clearly convey semantic information (i.e., content information about a word’s referent). Semantic information is activated during both language comprehension and language production – but how is it actually stored and represented in the human brain? Classic neurolinguistic theories posit that semantic meaning is stored in distinct meaning modules residing in specific cerebral language areas. These theories are supported by the results of studies of patients with aphasia, showing that damage to specific brain areas can result in relatively specific language impairments (for a review of selective aphasia patterns, see R. Martin, 2003). However, new insights into the representation of language meaning coming from neuroimaging studies with healthy participants suggest that semantic information is represented in a highly distributed manner across multiple areas of the cerebral cortex. Interestingly, these studies show that processing semantic meaning recruits not only language areas in the brain, but also neural areas usually engaged in the processing of visual, olfactory and sensorimotor information. For example, comprehension of words referring to actions (e.g., kick) activates areas in the neural motor system that are also active when a participant actually moves his or her foot (Hauk et al., 2004). On the basis of these and other results, it has been suggested that words become meaningful through internal re-enactments of actual experiences with words’ referents.
In this chapter, we first give a brief overview of studies investigating embodied lexical semantics. We focus primarily on studies investigating language about action (i.e., action semantics); we acknowledge, however, that there is substantial evidence in favour of embodied lexical representations in other domains, such as perception (e.g., Gonzales et al., 2006; Simmons et al., 2005) and emotion (Chen and Bargh, 1999; Niedenthal et al., 2009). Second, we explore the functional contribution of sensorimotor-language interactions to communication. To this end we argue (1) that embodied effects elicited by language are flexible in nature, thus reflecting information relevant for a listener in a given environment, and (2) that sensorimotor activations elicited by language stimuli may aid listeners in predicting events in their surrounding environments.
ACTION AND LANGUAGE
In the past decade a plethora of studies has been published providing convincing evidence that understanding language about actions indeed recruits the resources of the cerebral motor system. Although many open questions remain, a number of consistent findings emerge from this literature. First, action execution selectively modulates action-language comprehension, and vice versa. This suggests that action execution and language comprehension share some part of their underlying mechanism (see also Coello and Bidet-Ildei, this volume). Second, brain areas involved in action execution are activated when participants process language with action-relevant content. This suggests that what is shared between action and language processing is indeed the neural substrate maintaining representations in both domains (see also Aziz-Zadeh, this volume). Third, action content is not restricted to verbs denoting actual actions, but is an integral part of the meaning of objects used actively (e.g., tools, manipulable objects). This suggests that word meaning extends beyond simply cataloguing objects and events in the environment to include detailed information about how objects are used. In other words, understanding how to use a hammer appears to be vital information for understanding what a hammer is conceptually. In the following section we provide evidence for each of these points.
Action and language processing interfere with each other selectively
Language processing affects action execution
Most studies addressing the link between language and action have investigated effects of comprehending language on performing actions. For example, Creem and Proffitt (2001) showed that participants engaged in a taxing semantic task (i.e., recalling one word of a semantically related word pair) were less likely to grasp normal everyday tools (e.g., a spatula) in a way that would afford tool use than if they engaged in a concurrent visuo-spatial task or no concurrent task. The authors suggest that interacting with a tool in a meaningful way (which includes picking the tool up in a way that will afford its later use) requires accessing semantic information about the tool. Semantic information used for planning actions is postulated to be the same information recruited in language processing; thus taxing the semantic system with an arbitrary task made meaningful interactions with tools more difficult.
Creem and Proffitt (2001) demonstrate a very general link between language and action; however, other studies have demonstrated that the effects of language on action execution can be quite selective. For example, language denoting actions modulates response times in an effector-specific manner (Buccino et al., 2005; Scorolli and Borghi, 2007). In other words, the presentation of a sentence or word referring to a hand action selectively affects participants’ ability to execute actions with the hand (i.e., it does not interfere with the execution of actions made by another effector, such as the foot). The direction of the effect (i.e., whether action execution is facilitated or hampered) is inconsistent across studies. Scorolli and Borghi (2007) report faster response times for congruent language-action couplings, while Buccino et al. (2005) report slower response times for congruent trials (see also Borghi, this volume). There is good evidence that at least some of this discrepancy can be accounted for by the relative timing of language presentation and action execution (Boulenger et al., 2006). In general, if language processing precedes action execution, facilitation of action execution is observed, while if language processing coincides with action execution, inhibitory effects are seen (see also Borreggine and Kaschak, 2006); the logic of this timing account is sketched below.
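The following Python sketch simulates hypothetical response-time data in which congruent language-action pairings are faster when the language precedes the action cue, but slower when the two coincide. All numbers (baseline, effect sizes, noise level) are invented purely for illustration; they are not taken from any of the studies cited above.

```python
# Toy simulation of the timing account of action-language congruency effects.
# All parameters are invented for illustration; this is not any study's data.
import numpy as np

rng = np.random.default_rng(0)
N = 40  # simulated participants per condition

def simulate_rts(base_ms, shift_ms):
    """Draw noisy response times around a shifted baseline (ms)."""
    return base_ms + shift_ms + rng.normal(0, 30, N)

# Shift applied to CONGRUENT trials relative to incongruent ones:
# negative = facilitation, positive = interference.
shifts = {
    ("language-first", "congruent"): -25,
    ("language-first", "incongruent"): 0,
    ("concurrent", "congruent"): +20,
    ("concurrent", "incongruent"): 0,
}

means = {cond: simulate_rts(500, s).mean() for cond, s in shifts.items()}

for timing in ("language-first", "concurrent"):
    # Positive effect = congruent trials faster = facilitation.
    effect = means[(timing, "incongruent")] - means[(timing, "congruent")]
    label = "facilitation" if effect > 0 else "interference"
    print(f"{timing}: congruency effect = {effect:+.0f} ms ({label})")
```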
Language is also known to selectively affect movements along other kinematic dimensions. For example, sentences and words implying action in a specific direction facilitate the execution of actions in a congruent direction (Glenberg and Kaschak, 2002; Rueschemeyer et al., 2010b; van Dam et al., 2010b; Zwaan and Taylor, 2006). Glenberg and Kaschak (2002), for instance, demonstrate that participants are faster to move their hand towards their body in response to sentences such as Open the drawer than in response to sentences such as Close the drawer. The opposite is true for movements of the hand away from the body. In a similar vein, language denoting objects used in a specific manner (e.g., calculator) facilitates participants’ performance of the movements they would typically make upon encountering those objects in the environment (in this example, finger poking) (Bub et al., 2008; Glover et al., 2004; Tucker and Ellis, 2004; Zwaan and Taylor, 2006). This holds true not just for words denoting objects, but also for adjectives and adverbs denoting properties of objects and actions that result in different movement requirements (e.g., large vs. small, a relevant dimension for how far one opens one’s hand while grasping) (Gentilucci et al., 2000; Glover and Dixon, 2002; Zwaan and Taylor, 2006; see also Coello and Bidet-Ildei, this volume). Importantly, during sentence processing relevant action information becomes available on-line in a context-dependent manner (Bub et al., 2008; Zwaan and Taylor, 2006).
Actions affect language processing
The results discussed so far indicate that action execution (i.e., performing a motor act) is modulated by comprehension of language about actions. The interaction between action and language has, however, been shown to be bi-directional: language processing affects action execution, but action planning and execution also affect language processing. In a recent behavioural study, Rueschemeyer and colleagues (2010a) had participants perform a lexical decision task while simultaneously rotating a disk with their right hand. The critical experimental stimuli comprised words denoting objects used in an active manner (e.g., cup, hammer) and words denoting objects that can be held in one’s hand, but do not require manipulation for use (e.g., clock, bookend). The hypothesis was that, if actions and words associated with actions rely on common neural substrate, then actively engaging in a motor act (i.e., rotating the right hand) would modulate participants’ performance on words with a putative action-semantic component (i.e., words denoting objects used actively), but would not modulate performance on words denoting objects not typically manipulated. This was indeed the case: participants were more accurate and faster to respond to manipulable object words while rotating their hand than they were to words with a weaker action-semantic association. As in the literature discussed above, there is little consistency in the direction of effects elicited by overt actions on language processing. Witt et al. (2010) report the results of an object-naming task in which participants simultaneously squeezed a rubber ball with their dominant hand. As in Rueschemeyer et al. (2010a), overt hand actions selectively modulated the naming of tools (in contrast to animals); however, in contrast to Rueschemeyer and colleagues, participants became worse rather than better in their performance.
Thus, processing language that expresses information relevant to the action system (i.e., information about body parts, directions, size, trajectory, speed, etc.) selectively affects how actions are executed. Conversely, preparing or executing an action selectively modulates how well participants comprehend language about actions. This suggests that language comprehension and action execution share an underlying mechanism. In the following paragraphs we provide a brief overview of neuroimaging studies indicating that this shared mechanism takes the form of overlapping neural substrate.
Language in the brain’s action system
A number of neuroimaging studies provide good evidence that action-related language (i.e., words denoting actions or objects typically acted upon) engages the same neural populations involved in the preparation and execution of actions (see also Aziz-Zadeh, this volume). Typically, activation in frontoparietal motor areas (i.e., premotor cortex, primary motor cortex, supplementary motor cortex and inferior parietal cortex) is modulated by action-semantic content. For example, Rueschemeyer et al. (2007) used fMRI to measure participants’ haemodynamic response while they read various types of verbs. Some verbs in the stimulus set had a motor meaning (e.g., to grasp), while others had no motor meaning (e.g., to think). Signal change was higher in motor areas of the brain (i.e., dorsal postcentral gyrus) when participants read motor verbs than when they read non-motor verbs. This indicates that simply reading an isolated verb with a motor meaning selectively engages neural motor areas – even if no overt motor response is required. In a similar vein, van Dam et al. (2010a) presented participants lying in an fMRI scanner with verbs of varying motor content. Verb stimuli either referred to (1) very specific motor actions (e.g., wipe), (2) more general motor actions (e.g., clean), or (3) were not motoric in meaning (e.g., think). In keeping with Rueschemeyer et al. (2007), verbs with motor content generally activated neural motor areas (i.e., inferior parietal cortex) to a greater degree than verbs with no motor content. In addition, the areas sensitive to motor content were modulated parametrically in accordance with the amount of motor detail conveyed by the verb. In other words, neural motor areas responded most to verbs with highly specific motor content (e.g., wipe), somewhat less to verbs with less specific motor content (e.g., clean), and least to words with no motor content (e.g., think). This indicates that embodied lexical representations reflect not only the general presence of modality-specific information, but also the relative amount of modality-specific information.
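The parametric logic of such a design can be made concrete with a short sketch. The code below builds a toy fMRI design matrix containing a main-effect regressor for all verbs plus a mean-centred parametric regressor coding motor specificity (0 = think, 1 = clean, 2 = wipe). The onsets, specificity codes and haemodynamic response shape are all assumptions made for illustration; this is not the analysis pipeline actually used by van Dam et al. (2010a).

```python
# Minimal sketch of a parametric-modulation GLM design (illustrative only).
import numpy as np

TR = 2.0        # seconds per volume (assumed)
n_scans = 120

def hrf(t, peak=6.0):
    """Crude gamma-like haemodynamic response function."""
    h = (t / peak) ** 2 * np.exp(-t / peak)
    return h / h.max()

# Hypothetical word onsets (s) and motor-specificity codes:
# 0 = no motor content, 1 = general motor content, 2 = specific motor content.
onsets = np.array([10, 30, 50, 70, 90, 110, 130, 150, 170, 190])
specificity = np.array([0, 2, 1, 0, 2, 1, 2, 0, 1, 2], dtype=float)

def make_regressor(weights):
    """Place weighted impulses at the onsets and convolve with the HRF."""
    stick = np.zeros(n_scans)
    for onset, w in zip(onsets, weights):
        stick[int(onset / TR)] += w
    return np.convolve(stick, hrf(np.arange(0, 30, TR)))[:n_scans]

main_effect = make_regressor(np.ones_like(specificity))           # any word
parametric = make_regressor(specificity - specificity.mean())     # mean-centred
X = np.column_stack([main_effect, parametric, np.ones(n_scans)])  # + intercept

# A voxel with a positive beta on the parametric column responds more
# strongly the more specific a verb's motor content is.
print(X.shape)  # (120, 3)
```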
Embodied lexical-semantic representations reflect specific meaning in other ways as well. In keeping with the pattern of results obtained in behavioural studies, neural activity elicited by action verbs shows a large degree of effector specificity (Aziz-Zadeh et al., 2006; Hauk et al., 2004; Tettamanti et al., 2005). Thus, comprehending words and sentences denoting hand actions (e.g., pick, grasp) activates dorsal premotor cortex (i.e., cortex involved in planning and executing hand actions) more than other premotor areas, while language denoting mouth actions (e.g., lick, speak) most effectively activates ventral premotor cortex (i.e., an area involved in planning and executing mouth actions).
Taken together there is abundant behavioural and neuroimaging evidence for the idea that understanding action-related language draws on the resources of the neural motor system. Participants execute actions differently when they simultaneously process language about actions, suggesting that action execution and action-language comprehension have a common underlying mechanism. Neuroimaging studies indicate that action execution (and observation) and action-language understanding indeed share neural substrate in the frontoparietal motor network. The most commonly observed sites of activation within this network are (1) the premotor cortex (i.e., along the precentral gyrus), and (2) inferior parietal cortex, in particular regions along the intraparietal sulcus and the supramarginal gyrus.
Action-semantic representations are seen for objects and for words
It is interesting to note that action-semantic representations are activated by word stimuli from different word categories. In other words, action verbs, such as grasp or kick, activate neural motor areas, but so do nouns denoting objects that are typically used actively, such as tool names. Martin and Chao (2001) review the results of a number of studies demonstrating that words denoting tools activate premotor, inferior parietal and inferior temporal cortex (brain areas involved in action preparation/execution and the perception of visual motion) to a greater extent than words denoting animals. In a similar vein, Saccuman et al. (2006) presented participants with both verb and noun stimuli. Within each category, manipulable and non-manipulable words were included. The results show that relevant neural motor areas (i.e., premotor and inferior parietal cortex) were sensitive to the manipulability distinction in both word categories. Thus words denoting manipulable objects activate neural motor areas in a manner similar to that seen for action verbs.
Recently, several studies have demonstrated that not all actions contribute equally to lexical-semantic representations. More specifically, actions that afford the functional use of an object are reflected in default lexical-semantic representations, while other potential actions are not. For example, a person interacting with a calculator can do so in an almost infinite number of ways; however, in order to use a calculator the person must make a poking movement with the index finger. Bub et al. (2008) showed that words denoting manipulable objects (such as calculator) prime functional actions (in the current example, a poking motion with the index finger) more quickly than other possible object-directed actions (e.g., an action affording the displacement of an object). They concluded that a distinction can be made between the contributions of different types of manipulations to lexical-semantic processing. Information about how an object is used is highly relevant for word meaning, and is therefore activated very quickly after processing a word; information about how an object is moved, however, is less important for word meaning, and is not as prominently featured in lexical-semantic representations.
In an fMRI study, Rueschemeyer et al. (2010c) tested whether a neural basis exists for the distinction between functional and volumetric object manipulations. They presented participants in the scanner with words, all of which denoted manipulable objects, but only half of which required manipulation for use (e.g., cup: an object that fulfils its function of transporting a beverage only through a movement of the hand). The other half denoted objects that can be moved easily by hand, but do not require further manipulation to function (e.g., bookend: an object that can be easily moved about, but that fulfils its function, namely to hold books, without any further manipulation). Indeed, the haemodynamic response in neural motor areas (i.e., ventral premotor and inferior parietal cortex) was greater for words denoting objects that require functional manipulation than for words denoting objects that merely afford volumetric manipulation. Taken together, these two studies indicate that actions performed in conjunction with object use are relevant in lexical-semantic representations, whereas other possible actions are not.
Despite the apparent clarity of these findings, many open questions remain. In the following we address two key questions: (1) how flexible are embodied lexical representations? and (2) what do sensorimotor activations elicited by language stimuli contribute to language comprehension?
HOW FLEXIBLE ARE EMBODIED LEXICAL-SEMANTIC REPRESENTATIONS?
There is thus abundant and compelling evidence that language comprehension activates primary perceptual and motor areas. One question that remains unresolved is how automatic or invariant these links are. In other words, does a given word form always set off the same chain of events, resulting in a specific simulation of real-world experience, or do words give rise to different simulations depending on the context in which they are presented?
Two pieces of evidence favour an automatic, invariant account of embodied lexical representations. First, Pulvermüller and colleagues (2004, 2005) have performed a number of electrophysiological studies on words with a putative action-semantic component, and have demonstrated that activation over the motor cortex is present in a very early time window (c. 150–200 ms). They argue that the relative speed of motor activity points towards an automatic process. Second, these electrophysiological responses over the motor cortex are seen in response to action words even if participants are not paying attention to word stimuli (Pulvermüller et al., 2005). The authors argue that processes occurring in the absence of explicit attention must also be of an automatic nature.
Despite the results of these studies, it is not intuitively obvious that any component of semantic comprehension should be automatic. In fact, the great power of language is its flexibility, and its capacity to capture and communicate events and thoughts not perceptible in the environment. In particular with respect to object words, it seems unlikely that embodied representations should be static. Take, for example, the word football. This word should presumably elicit activation in motor foot areas – unless you are an American or a goalie, in which case footballs are manipulated primarily with the hands. However, despite the fact that most people probably have a strong foot-related representation of football, it is easy to override that tendency when speaking about footballs in the context of Americans or goalies. This is not reminiscent of automatized or invariant processing.
Beyond intuition, a number of studies also provide good evidence that motor resonance is activated flexibly by words in different contexts. First, Rueschemeyer et al. (2007) showed that morphologically complex words built on a simple action verb stem do not elicit responses in relevant motor areas. For example, the verb to stand activates neural motor areas, but the verb to understand does not, despite the fact that stand is a principal component of understand. This indicates that word forms embedded in binding linguistic contexts do not elicit automatic motor responses. Second, Raposo et al. (2009) investigated the neural motor response to action verbs embedded in idiomatic phrases (e.g., kick the bucket) as well as canonical action phrases (e.g., kick the ball). While motor resonance was seen for canonical uses of action verbs, no such pattern was seen for idiomatic phrases (but see Boulenger et al., 2009, for diverging results). This indicates that action verbs do not always elicit motor resonance, and that propositional sentence context is more important for eliciting motor resonance than the meaning of individual words (see also Jacob, this volume). Third, Zwaan and Taylor (2006) have shown that motor information becomes activated on-line during sentence comprehension. For example, while reading the sentence To quench his thirst, the marathon runner eagerly opened the water bottle, participants show evidence for motor resonance in conjunction with presentation of the word opened. However, motor resonance is elicited by the words in a sentence that unambiguously describe an action – and this part of the sentence does not always coincide with an action verb (Taylor et al., 2008). For example, in response to the sentences He looked at the pie through the oven window and turned the timer. The pie needed to bake shorter/longer, evidence for motor resonance is seen in conjunction with the sentence-final word (i.e., the word specifying the direction of turning) rather than with the action verb (in the example, turned).
Evidence for flexibility in embodied lexical-semantic processing exists at the brain level as well. In a recent fMRI study, van Dam et al. (2011) presented participants with words denoting objects with (1) a strong action-semantic component (e.g., hammer), (2) a strong visual-semantic component (i.e., objects for which colour is an identifying feature, e.g., pylon), or (3) both an action-semantic and a visual-semantic component (e.g., tennis ball). Action-semantic words significantly activated neural motor areas, in particular the inferior parietal cortex; visual-semantic words significantly activated a region in extrastriate visual cortex known to be involved in processing object colour, the fusiform gyrus; words with both action and visual semantic components activated both regions. Van Dam and colleagues then went on to investigate whether the neural correlates of word comprehension change depending on the context in which a word is presented. In one block, participants were instructed to think about how the denoted objects were used; in a second block, they were instructed to think about visual properties of the denoted objects. The results show that, when participants processed words with an action-semantic component (e.g., tennis ball), inferior parietal cortex (i.e., a neural motor area) was more activated when they were asked about how they used a tennis ball than when they were asked about the colour of a tennis ball. If a word had no putative action-semantic component (e.g., pylon), the inferior parietal cortex was not differentially activated. This indicates that the motor information associated with a word is not always accessed in the same manner: in communicative contexts in which motor information is relevant, words activate neural motor areas more strongly (see also Hoenig et al., 2008). The logic of this context effect is sketched below.
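The signature of this flexibility is a statistical interaction between word type and task context in a motor region of interest. The toy calculation below makes that logic explicit; the beta values are invented for illustration and are not the values reported by van Dam et al. (2011).

```python
# Hypothetical illustration of the word-type x task-context interaction.
import numpy as np

# Mean parietal-ROI responses (arbitrary units):
# rows = word type, columns = task context.
#                     action context   colour context
betas = np.array([[0.9,             0.4],    # action word, e.g. "tennis ball"
                  [0.2,             0.2]])   # non-action word, e.g. "pylon"

context_effect_action = betas[0, 0] - betas[0, 1]     # larger in action context
context_effect_nonaction = betas[1, 0] - betas[1, 1]  # no context modulation
interaction = context_effect_action - context_effect_nonaction

print(f"Context effect for action words:     {context_effect_action:+.1f}")
print(f"Context effect for non-action words: {context_effect_nonaction:+.1f}")
print(f"Interaction (flexibility signature): {interaction:+.1f}")
```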
The question of how flexible embodied lexical responses are thus remains open. However, given that word forms can clearly be used to refer to items in a flexible manner (i.e., a football does not always refer to a ball that is kicked), and given new data showing that identical word forms elicit predictably different patterns of activation in specific language contexts, the arguments in favour of flexibility appear to outweigh those in favour of automaticity.
WHAT IS THE CONTRIBUTION OF MOTOR ACTIVITY TO LANGUAGE UNDERSTANDING?
The neuroimaging data discussed above demonstrate that motor areas of the brain are activated during comprehension of action-related language. But what is the functional contribution of these activations to language understanding? On the one hand, it has been suggested that action-semantic features are a critical component of semantic meaning (e.g., Pulvermüller, 1999, 2005). In this account, action-word meaning is actually represented (at least in part) in cortical motor areas. Activation of cortical motor areas in conjunction with action-word understanding is thus not a by-product of lexical access, but rather a reflection of where and how semantic meaning is stored in the brain. On the other hand, it has been proposed that motor simulation evoked during word processing is an epiphenomenal by-product of language comprehension (e.g., Hickok, 2010; Mahon and Caramazza, 2008; for a review of a range of explanations for links between language and action, see Meteyard et al., 2012).
Two points support the first claim. First, the temporal dynamics of motor cortex activation during word comprehension suggest that the motor cortex is selectively activated by action-related words prior to lexical selection (Boulenger et al., 2006; Pulvermüller et al., 2005). Specifically, effects over cortical motor areas peak approximately 200 ms following the critical word recognition point, while typical correlates of lexical-semantic processing are thought to peak around 250 ms after the word recognition point. Thus, based purely on temporal dynamics, it has been argued that activity in the motor cortex must contribute to lexical selection. Second, patients show a double dissociation between action verb processing and concrete object word processing depending on the site of lesion (Neininger and Pulvermüller, 2003). Specifically, patients with frontal lesions encompassing motor areas are impaired in the comprehension of action verbs (i.e., words with a putative action-semantic component) relative to nouns denoting concrete objects (i.e., words evoking information about visual form). The opposite pattern of results was observed for patients with lesions in posterior visual areas. Thus, patients with insult to motor areas perform worse on action words, while patients with insult to visual areas perform worse on words denoting visual objects. This suggests that accessing lexical-semantic content requires the resources of relevant action and perceptual brain areas.
There are several problems with the interpretations offered above (see also Nazir et al., this volume; Coello and Bartolo, this volume). First, the argument that action-related information precedes lexical selection, resting as it does on average peak times, is not very convincing. To our knowledge, no study has directly compared the temporal dynamics of motor activity and semantic processing in the same subjects using the same words. In order to tackle this issue, we are currently investigating possible differences between lexical-semantic and action-semantic priming effects. Second, the temporal relation of processes to one another is not an argument for causality. As Mahon and Caramazza (2008) point out, it is entirely possible that two processes start in parallel but have no further interaction. Indeed, the Language and Situated Simulation (LASS) model proposed by Barsalou and colleagues (Barsalou et al., 2008; Simmons et al., 2008) offers precisely such a solution. In the LASS model, a presented word form activates two types of information in parallel: (1) quickly activated, linguistically supported information about word form, and (2) relatively slowly activated conceptual information supported by simulation of past experiences with a word’s referent. Thus a word such as cat quickly activates related word forms, such as hat (i.e., based on phonological/orthographic similarity) or dog (based on high co-occurrence rate), and in parallel also activates perceptually based information about what cats look like, smell like, feel like, etc. While simulation enriches one’s understanding of a given word, a good deal of information is already available simply by processing higher-level information about word form (see also Willems et al., 2010, for neuroimaging evidence distinguishing lexical processing from lexically motivated motor simulation). Third, although Neininger and Pulvermüller (2003) show that a small number of right frontal lobe patients are indeed selectively impaired in their processing of specific word meaning, other studies with much larger numbers of patients and much more conceptually demanding tasks have shown that it is extremely rare for right-sided lesions to cause significant impairments in action knowledge; instead, such impairments almost invariably follow from left-sided lesions (Kemmerer et al., 2012; Tranel et al., 2003). Therefore, there is little support for the idea that processing words necessitates access to perception/action areas.
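The parallel-routes architecture at the heart of the LASS account can be conveyed with a toy time-course sketch. In the code below, a fast "linguistic" process and a slower "simulation" process are both launched at word onset; the logistic curve shapes and latencies are arbitrary assumptions chosen only to show the relative ordering, not parameters from Barsalou et al. (2008).

```python
# Toy rendering of the LASS idea: two processes launched in parallel,
# one fast (word-form associations) and one slow (sensorimotor simulation).
import numpy as np

t = np.arange(0, 800, 10)  # ms after word onset

def activation(t, onset_ms, rise_ms):
    """Logistic activation curve rising around onset_ms."""
    return 1.0 / (1.0 + np.exp(-(t - onset_ms) / rise_ms))

linguistic = activation(t, onset_ms=150, rise_ms=40)  # word-form associations
simulation = activation(t, onset_ms=350, rise_ms=80)  # sensorimotor simulation

# Time at which each process first exceeds a half-maximum criterion:
for name, curve in [("linguistic", linguistic), ("simulation", simulation)]:
    print(f"{name} crosses 0.5 at ~{t[np.argmax(curve > 0.5)]} ms")
```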
Because activation in primary perceptual and motor areas in conjunction with language is flexible, and because of the problems listed above, it seems unlikely that these activations directly reflect lexical access. What, then, is the purpose of embodied lexical representations? In the literature on action observation, motor simulation is thought to support the ability of an organism to make predictions about its environment (Blakemore and Frith, 2005; Kilner et al., 2004; Wilson and Knoblich, 2005). Studies investigating the neural correlates of action observation in both primates and humans have shown that, when participants observe another acting individual, the participants’ own motor areas become activated, even in the absence of any requirement to move overtly (i.e., the participant engages in implicit motor simulation). Further, it has been demonstrated that motor simulation is greater for the observation of more predictable actions. For example, Kilner and colleagues (2004) presented participants with video clips showing hands from an egocentric perspective. Participants were informed on each trial whether or not the hand in the video would move. Electrophysiological results revealed significant activation over cortical motor areas on trials in which participants knew the hand would move compared to trials in which participants knew the hand would remain still. This activation was seen prior to any actual movement of the hand (i.e., it does not reflect action observation alone) and could also be distinguished from the signal elicited by predictions about non-active events. The authors thus conclude that activation observed in motor areas during action observation reflects participants’ ability to make predictions about observed action.

In the language domain, the purpose of motor simulation could very plausibly be the same. Fischer and Zwaan (2008) have postulated that language-driven motor resonance may reflect prediction on two levels: (1) referential motor resonance, which helps the listener to respond to the content of what they hear (i.e., semantic motor resonance), and (2) communicative resonance, which aids the listener in predicting what sounds the speaker will utter next (i.e., phonological motor resonance). The findings discussed above provide examples of referential motor resonance: in these studies participants responded to verbally described actions in much the same way they would to observed actions. An interesting avenue for future research will be to see whether embodied lexical effects reflect the predictive value of an utterance for a listener.
CONCLUSION
In the current chapter we have reviewed recent literature demonstrating a bidirectional link between language and action processing systems. In addition, we have emphasized that the neural motor system is relevant for words with an action-semantic component irrespective of word category – in other words, actions are relevant for the semantic representation of nouns as well as verbs. We argued that embodied lexical representations are flexible – that the simulation elicited by words changes in different contexts. On this basis, we argue that embodied lexical effects serve to enrich semantic meaning, and suggest that the motor simulation elicited by words has predictive value for events in a listener’s environment.
REFERENCES
Aziz-Zadeh, L., Wilson, S.M., Rizzolatti, G. and Iacoboni, M. (2006). Congruent embodied representations for visually presented actions and linguistic phrases describing actions. Current Biology, 16: 1818–1823.
Barsalou, L.W., Santos, A., Simmons, W.K. and Wilson, C.D. (2008). Language and simulation in conceptual processing, in De Vega, M., Glenberg, A.M. and Graesser, A.C. (eds). Symbols, Embodiment, and Meaning (pp. 245–283). Oxford: Oxford University Press.
Borreggine, K. and Kaschak, M.P. (2006). The action-sentence compatibility effect: it’s all in the timing. Cognitive Science, 30: 1097–1112.
Boulenger, V., Roy, A., Paulignan, Y., Deprez, V., Jeannerod, M. and Nazir, T. (2006). Cross-talk between language processes and overt motor behavior in the first 200 msec of processing. Journal of Cognitive Neuroscience, 18(10): 1607–1615.
Boulenger, V., Hauk, O. and Pulvermüller, F. (2009). Grasping ideas with the motor system: semantic somatotopy in idiom comprehension. Cerebral Cortex, 19: 1905–1914.
Bub, D.N., Masson, M. and Cree, G. (2008). Evocation of functional and volumetric gestural knowledge by objects and words. Cognition, 106: 27–58.
Buccino, G., Riggio, L., Melli, G., Binkofski, F., Gallese, V. and Rizzolatti, G. (2005). Listening to action-related sentences modulates the activity of the motor system: a combined TMS and behavioral study. Cognitive Brain Research, 24: 355–363.
Chen, M. and Bargh, J.A. (1999). Consequences of automatic evaluation: immediate behavioral predispositions to approach or avoid the stimulus. Personality and Social Psychology Bulletin, 25: 215–224.
Creem, S. and Proffitt, D.R. (2001). Grasping objects by their handles: a necessary interaction between cognition and action. Journal of Experimental Psychology: Human Perception and Performance, 27(1): 218–228.
Gentilucci, M., Benuzzi, F., Bertolani, L., Daprati, E. and Gangitano, M. (2000). Language and motor control. Experimental Brain Research, 133: 468–490.
Glenberg, A.M. and Kaschak, M.P. (2002). Grounding language in action. Psychonomic Bulletin and Review, 9: 558–565.
Glover, S. and Dixon, P. (2002). Semantics affect the planning but not control of grasping. Experimental Brain Research, 146: 383–387.
Glover, S., Rosenbaum, D., Graham, J. and Dixon, P. (2004). Grasping the meaning of words. Experimental Brain Research, 154: 103–108.
Gonzales, J., Barros-Loscertales, A., Pulvermüller, F., Meseguer, V., Sanjuan, A., Belloch, V. and Avila, C. (2006). Reading cinnamon activates olfactory brain regions. Neuroimage, 32: 906–912.
Hauk, O., Johnsrude, I. and Pulvermüller, F. (2004). Somatotopic representation of action words in human motor and premotor cortex. Neuron, 41: 301–307.
Hickok, G. (2010). The role of mirror neurons in speech perception and action word semantics. Language and Cognitive Processes, 25(6): 749–776.
Hoenig, K., Sim, E.-J., Bochev, V., Herrnberger, B. and Kiefer, M. (2008). Conceptual flexibility in the human brain: dynamic recruitment of semantic maps from visual, motor, and motion-related areas. Journal of Cognitive Neuroscience, 20: 1799–1814.
Kemmerer, D., Rudrauf, D., Manzel, K. and Tranel, D. (2012). Behavioral patterns and lesion sites associated with impaired processing of lexical and conceptual knowledge of actions. Cortex, 48(7): 826–848.
Kilner, J.M., Vargas, C., Duval, S., Blakemore, S. and Sirigu, A. (2004). Motor activation prior to observation of a predicted movement. Nature Neuroscience, 7(12): 1299–1301.
Mahon, B.Z. and Caramazza, A. (2008). A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology: Paris, 102: 59–70.
Martin, A. and Chao, L. (2001). Semantic memory and the brain: structure and processes. Current Opinion in Neurobiology, 11: 194–201.
Martin, R. (2003). Language processing: functional organization and neuroanatomical basis. Annual Review of Psychology, 54: 55–89.
Meteyard, L., Cuadrado, S., Bahrami, B. and Vigliocco, G. (2012). Coming of age: a review of embodiment and the neuroscience of semantics. Cortex, 48(7): 788–804.
Neininger, B. and Pulvermüller, F. (2003). Word-category specific deficits after lesions in the right hemisphere. Neuropsychologia, 41: 53–70.
Niedenthal, P., Winkielman, P., Mondillon, L. and Vermeulen, N. (2009). Embodiment of emotion concepts. Journal of Personality and Social Psychology, 96(6): 1120–1136.
Pulvermüller, F. (1999). Words in the brain’s language. Behavioral and Brain Sciences, 22: 253–336.
Pulvermüller, F. (2005). Brain mechanisms linking language and action. Nature Reviews Neuroscience, 6: 576–582.
Pulvermüller, F., Shtyrov, Y., Kujala, T. and Näätänen, R. (2004). Word-specific cortical activity as revealed by the mismatch negativity. Psychophysiology, 41: 106–112.
Pulvermüller, F., Shtyrov, Y. and Ilmoniemi, R. (2005). Brain signatures of meaning access in action word recognition. Journal of Cognitive Neuroscience, 17(6): 884–892.
Raposo, A., Moss, H.E., Stamatakis, E.A. and Tyler, L.K. (2009). Modulation of motor and premotor cortices by actions, action words and action sentences. Neuropsychologia, 47: 388–396.
Rueschemeyer, S.-A., Brass, M. and Friederici, A.D. (2007). Comprehending prehending: neural correlates of processing verbs with motor stems. Journal of Cognitive Neuroscience, 19: 855–865.
Rueschemeyer, S.-A., Lindemann, O., van Rooij, D., van Dam, W. and Bekkering, H. (2010a). Effects of intentional motor actions on embodied language processing. Experimental Psychology, 57(4): 260–266.
Rueschemeyer, S.-A., Pfeiffer, C. and Bekkering, H. (2010b). Body schemantics: on the role of the body schema in embodied lexical representations. Neuropsychologia, 48(3): 774–781.
Rueschemeyer, S.-A., van Rooij, D., Lindemann, O., Willems, R. and Bekkering, H. (2010c). The function of words: distinct neural correlates for words denoting differently manipulable objects. Journal of Cognitive Neuroscience, 22(8): 1844–1851.
Saccuman, M.C., Cappa, S.F., Bates, E.A., Arévalo, A., Della Rosa, P., Danna, M. and Perani, D. (2006). The impact of semantic reference on word class: an fMRI study of action and object naming. Neuroimage, 32: 1865–1878.
Scorolli, C. and Borghi, A.M. (2007). Sentence comprehension and action: effector specific modulation of the motor system. Brain Research, 1130: 119–124.
Simmons, W., Martin, A. and Barsalou, L. (2005). Pictures of appetizing foods activate gustatory cortices for taste and reward. Cerebral Cortex, 15: 1602–1608.
Simmons, W., Hamann, S.B., Harenski, C.L., Hu, X.P. and Barsalou, L.W. (2008). fMRI evidence for word association and situated simulation in conceptual processing. Journal of Physiology: Paris, 102(1–3): 106–119.
Taylor, L.J., Lev-Ari, S. and Zwaan, R. (2008). Inferences about action engage action systems. Brain and Language, 107: 62–67.
Tettamanti, M., Buccino, G., Saccuman, M.C., Gallese, V., Danna, M., Scifo, P., Fazio, F., Rizzolatti, G., Cappa, S.F. and Perani, D. (2005). Listening to action-related sentences activates fronto-parietal motor circuits. Journal of Cognitive Neuroscience, 17: 273–281.
Tranel, D., Kemmerer, D., Adolphs, R., Damasio, H. and Damasio, A. (2003). Neural correlates of conceptual knowledge of actions. Cognitive Neuropsychology, 20: 409–432.
Tucker, M. and Ellis, R. (2004). Action priming by briefly presented objects. Acta Psychologica, 116: 185–203.
Van Dam, W., Rueschemeyer, S.-A. and Bekkering, H. (2010a). Action specificity reflected in embodied lexical-semantic representations. Neuroimage, 53(4): 1318–1325.
Van Dam, W., Rueschemeyer, S.-A. and Bekkering, H. (2010b). Context effects in embodied lexical-semantic processing. Frontiers in Psychology, 1: 150.
Van Dam, W., van Dijk, M., Bekkering, H. and Rueschemeyer, S.-A. (2011). Flexibility in embodied lexical-semantic representations. Human Brain Mapping, advance online publication. doi: 10.1002/hbm.21365.
Willems, R.M., Toni, I., Hagoort, P. and Casasanto, D. (2010). Neural dissociations between action verb understanding and motor imagery. Journal of Cognitive Neuroscience, 22(10): 2387–2400.
Wilson, M. and Knoblich, G. (2005). The case for motor involvement in perceiving conspecifics. Psychological Bulletin, 131(3): 460–473.
Zwaan, R. and Taylor, L.J. (2006). Seeing, acting, understanding: motor resonance in language comprehension. Journal of Experimental Psychology: General, 135(1): 1–11.