11
Embodied Perception of Objects and People in Space

Towards a unified theoretical framework

Yann Coello and Tina Iachini

Introduction

Human beings are active and social agents. They need to process spatial information in order to act accurately on objects in their surroundings and to interact appropriately with conspecifics. Although spatial perception is an inherent component of adaptive behaviour, it does not emerge as an innate ability. Instead, spatial perception develops through the active exploration of the environment during the very first period of life. In the absence of early motor exploration of the environment and the associated experience of sensory changes, spatial perception remains immature. Supporting this view, the pioneering work by Held and Hein on animals convincingly demonstrated that even when a kitten receives visual stimulation and is allowed to walk during its first weeks of life, it nevertheless remains blind to visuo-spatial information if it cannot contingently experience the visual consequences of its own walking. Its behaviour resembles that of a blind kitten, colliding with obstacles, stumbling into hollows and falling off cliffs (Held & Hein, 1963). Likewise, Fine et al. (2003) showed more recently that a person recovering sight after a long period of blindness due to early damage to the cornea perceives visual shape and motion but fails to perceive objects’ volume and 3-D surfaces (see also Gandhi, Ganesh, & Sinha, 2014). Similarly, newly sighted subjects did not exhibit an immediate capacity to transfer information from the tactile to the visual domain (Held et al., 2011). These observations agree well with theories of perception that have long defended the idea that the experience of spatiality proceeds from processing sensory information in reference to the possibilities of action (Berkeley, 1709; Husserl, 1907; Merleau-Ponty, 1945). According to the philosopher and mathematician Poincaré (1902),

[W]hen we say that we localize such an object in such a point in space, what does this mean? This simply means that we represent the movements that are necessary to reach that object.… When I say that we represent these movements, I mean only that we represent the muscular sensations which accompany them and which have no geometrical character, which therefore do not imply the pre-existence of the concept of space.

(p. 75)

Around the same period, Bergson (1896) applied a similar theoretical approach to object perception, stating that “the objects that surround my body reflect the possible action of my body on them” (p. 18). The concept common to these theorists, and to those who have followed (Barsalou, 2008; Gibson, 1979; Hommel et al., 2001; Noë, 2004), is that perception should not be considered a mere passive activity based on deciphering sensory inputs but rather a dynamic activity involving the processing of sensory inputs in relation to the representation of deployable actions. In the present chapter, we present the most recent behavioural and neuroimaging data supporting this embodied approach to perception. In particular, we show that action-related perceptual processing represents the basis not only for specifying the structure of our visual space and acquiring conceptual knowledge about it, but also for spatially adjusting our social interactions with conspecifics.

Motor-related perception of visual objects

The most recent neuroimaging findings have provided convincing arguments for sensorimotor-based cognitive models by revealing considerable overlap in the neural circuitry supporting perceiving, acting on and knowing about objects. With regard to visual artefacts, for instance, several studies have shown that simply viewing pictures of manipulable manmade objects selectively activates areas in the visual cortex together with the left ventral premotor and posterior parietal cortices, in comparison to viewing animals, houses, faces or unfamiliar objects (Chao & Martin, 2000; Chao et al., 2002; Creem-Regehr & Lee, 2005; Kan et al., 2006; Martin, 2007). Electrophysiological studies in monkeys have revealed that these regions have a motor function and are active when performing voluntary actions towards manipulable objects (Binkofski et al., 1999; Medendorp et al., 2011) or when observing objects that the animals have already manipulated (Jeannerod et al., 1995; Krüger et al., 2014; Rizzolatti & Luppino, 2001). Consistent with an involvement of the motor system in the visual perception of manipulable objects (Cardellicchio et al., 2011; Grafton et al., 1997), Proverbio (2012) showed a modulation of cortical neural activity over the centro-parietal region when observing images of manipulable objects, compared to nonmanipulable objects. Time-frequency analysis of EEG signals revealed an attenuation of cortical oscillations between 8 Hz and 13 Hz (event-related desynchronisation of the μ rhythm), similar to that observed when performing a voluntary motor action (Babiloni et al., 1999; Llanos et al., 2013; Salmelin & Hari, 1994; Salenius et al., 1997), observing a human movement (Cochin et al., 1999) or performing a motor imagery task (Braadbaart et al., 2013; Hari, 2006; Muthukumaraswamy et al., 2004).
Likewise, Noppeney (2008) showed that observing manipulable objects triggers activation and connectivity patterns in the ventral occipito-temporal cortex and the dorsal visuo-motor system, the former being more associated with processing objects’ structural aspects and the latter with processing information about objects’ function. Supporting the view that motor-related information is important for perceiving and identifying manipulable objects (Allport, 1985; Martin, 2007; Noppeney, 2008), patients with damage to either the left intra-parietal sulcus or the ventral premotor cortex showed a specific impairment in the processing of object-action relationships (Tranel et al., 1997, 2003).

In an fMRI study involving human adults, Culham et al. (2008) corroborated these findings by showing that observing manipulable objects triggers specific brain activations within the dorsal stream of the visual system, in particular in the reach-related area of the superior parieto-occipital cortex (SPOC) (see also Gallivan et al., 2009). However, this motor-related activation was essentially observed when manipulable objects were presented in the space near the body. The interpretation of these data was that SPOC is particularly responsive to stimuli presented within reach of upper-limb movements. Other findings in the macaque monkey showed that neurons in the inferior part of the premotor cortex (the caudal area F4), where proximal arm movements are represented (Matelli et al., 1985), respond predominantly to three-dimensional objects located in the near space of the animal (Gentilucci et al., 1983; Rizzolatti et al., 1981a). Some F4 neurons respond only to stimuli very close to the body (less than 10 cm away), while others can be triggered by stimuli located further away but always within the reaching distance of the animal (di Pellegrino & Làdavas, 2014). In addition to these brain regions, Cardellicchio, Sinigaglia and Costantini (2011) provided evidence for a contribution of the primary motor cortex to the perceptual coding of objects in near space. They magnetically stimulated the left primary motor cortex and recorded motor evoked potentials (MEPs) while participants were observing graspable and nongraspable objects located within or outside their own reachable space. They found higher MEPs during the observation of graspable objects falling within the reachable space compared to the observation of either a nongraspable object or a graspable object falling outside the reachable space.

We recently analysed the modulation of EEG signals when healthy adults judged the reachability of visual objects presented at different distances in a stereoscopic three-dimensional virtual scene (Wamain, Gabrielli, & Coello, 2015). The stimuli, presented through active-shutter 3D glasses, were visual artefacts (bottle, cup, …), all with the same width (7.5 cm) but with either a prototypical shape (sharp contour, manipulable objects) or a distorted shape resulting from a Gaussian scattering algorithm applied to the pixels associated with the objects (lack of contour, nonmanipulable objects). In a second task, participants judged whether the objects presented at different distances were prototypical or distorted. Responses were provided using foot pedals, but only when a question mark was displayed after the one-second period of object presentation (10% of the trials). For the trials in which no responses were provided, time-frequency decomposition of the EEG signal was performed, and the variation of the μ rhythm (8 Hz to 13 Hz) induced by the stimuli was analysed, focussing on the brain’s centro-parietal region (Proverbio, 2012). The signal baseline was the brain activity registered during the 200 ms preceding object presentation. When contrasting the EEG signal obtained for prototypical and distorted objects as a function of their location in space, a significant desynchronisation of the μ rhythm, starting on average 300 ms after stimulus presentation, was found in the reachability judgment task, but only when the stimuli were prototypical objects located in reachable space. Moreover, the desynchronisation of the μ rhythm reduced progressively from reachable to unreachable stimuli. By contrast, no similar gradient was observed when participants categorised the stimuli as prototypical or distorted.
Thus, the motor system seems to be involved in the processing of manipulable visual objects presented in reachable space, as expressed in the μ rhythm and in agreement with previous studies (e.g., Quinlan & Culham, 2007), but predominantly when the task focuses on motor-related visual information.
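To make the dependent measure in such analyses concrete, event-related desynchronisation (ERD) is conventionally expressed as the percentage change in band power relative to a pre-stimulus baseline, with negative values indicating μ-rhythm attenuation. The following Python sketch illustrates this computation on synthetic signals; the sampling rate, window lengths and signal amplitudes are illustrative assumptions, not the parameters of the authors' analysis pipeline:

```python
import numpy as np

def band_power(signal, fs, low=8.0, high=13.0):
    """One-sided band power (periodogram estimate) within [low, high] Hz."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = (np.abs(np.fft.rfft(signal)) ** 2) / (fs * n)  # periodogram
    band = (freqs >= low) & (freqs <= high)
    return 2.0 * psd[band].sum() * (fs / n)              # integrate over band

def erd_percent(baseline, event, fs):
    """Event-related (de)synchronisation as % power change from baseline.
    Negative values indicate mu-rhythm desynchronisation."""
    p_base = band_power(baseline, fs)
    return 100.0 * (band_power(event, fs) - p_base) / p_base

# Synthetic illustration: a 10 Hz "mu" oscillation whose amplitude is halved
# after stimulus onset, mimicking the attenuation reported over
# centro-parietal sites (fs and window lengths are assumed values).
fs = 500                                        # sampling rate (Hz)
t_base = np.arange(int(0.2 * fs)) / fs          # 200 ms pre-stimulus baseline
t_event = np.arange(int(1.0 * fs)) / fs         # 1 s of object presentation
baseline = 2.0 * np.sin(2 * np.pi * 10 * t_base)
event = 1.0 * np.sin(2 * np.pi * 10 * t_event)
print(round(erd_percent(baseline, event, fs), 1))  # halving amplitude -> -75.0
```

Since power scales with the square of amplitude, halving the μ-band amplitude after stimulus onset yields a 75% power decrease, i.e., a strong desynchronisation.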

In a complementary functional magnetic resonance imaging (fMRI) study, Bartolo et al. (2014a) found that the network involved in the visual processing of objects in reachable space extends beyond the parietal and primary motor areas. In a task consisting of judging object reachability in a virtual environment display, the authors found that objects in reachable space triggered brain activations within a fronto-parietal network including the cerebellum. These brain regions overlap with those supporting the production of actual goal-directed movements (Binkofski et al., 1999; Medendorp et al., 2011) and with the brain network involved in the perception of others’ reachable space (Lamm et al., 2007). In the same vein, Makin et al. (2007) used fMRI to identify regions within the intra-parietal sulcus, the lateral occipital complex, and the premotor cortex that showed significantly stronger activation in response to an object approaching the subject’s hand rather than moving away from it. Moreover, in these areas, selective responses to objects in near space were abolished when the hand was occluded from view or was retracted.

Considered together, these studies reveal that manipulable objects are visually perceived through an interaction between sensory inputs and the motor system. In particular, the neural network supporting the perception of manipulable objects overlaps with the neural network supporting the execution of object-oriented actions, suggesting that perception captures the mutual relationship between the environment and the perceiver’s motor capabilities (Gibson, 1979; Witt, 2011). However, the contribution of the motor system is constrained by the location of the objects in space and seems irrepressible when manipulable objects are located within reachable space. Furthermore, the extent of the motor contribution seems to depend on whether or not the intention of the perceiver is to prepare to interact with the objects (reachability estimates versus perceptual categorisation judgments). In addition, partial involvement (e.g., SPOC, Culham et al., 2008) or more comprehensive involvement (e.g., the fronto-parietal regions including the cerebellum, Bartolo et al., 2014a) of the motor network seems to depend on whether the perceptual task concerns the passive observation of manipulable objects (affordances) or the estimation of potential actions towards these objects, namely, anticipating the consequences of acting on the objects in a particular context, for which more elaborate motor simulation processes are necessary.

Motor encoding of peripersonal space

The fact that objects in reachable space specifically activate the sensorimotor system suggests that the proximal and distal regions of visual space are represented differently within the brain. Since the seminal distinction by Brain (1941) between a grasping distance and a walking distance, proposed to explain the selective impairment that right brain-damaged patients may show for one or the other region of space, it has been accepted that space contains functional thresholds defined by the kinds of action that can be performed within it. Rizzolatti et al. (1981a, 1981b) introduced the term “peripersonal” space to highlight the close link between the processing of bodily and visual information in the portion of space near the body. Peripersonal space was later used to describe the area surrounding the body in which objects can be easily reached and manipulated (Coello & Delevoye-Turrell, 2007; di Pellegrino & Làdavas, 2014; Kirsch et al., 2012; ter Horst et al., 2011). Beyond peripersonal space lies extrapersonal space, in which objects cannot be reached without moving towards them; these two spaces are thought to involve distinct brain processing areas (Previc, 1998). Furthermore, in peripersonal space, sensory inputs are predominantly encoded according to egocentric frames of reference, specifically hand/arm-centred and head-centred frames of reference, for acting on objects within immediate reach (di Pellegrino & Làdavas, 2014).

Many studies have shown that we encode our peripersonal space in relation to the representation of our body and our capacities for action (Bootsma et al., 1992; Carello et al., 1989; Coello & Iwanow, 2006; Fischer, 2000; Gabbard et al., 2006; Rochat & Wraga, 1997). In ecological situations using real objects, peripersonal space encoding correlates with arm length, albeit with a slight tendency towards overestimation (Bootsma et al., 1992; Carello et al., 1989; Coello & Iwanow, 2006; Fischer, 2000; Gabbard et al., 2006; Robinovitch, 1998; Rochat, 1995; Rochat & Wraga, 1997; Schwebel & Plumert, 1999). When two-dimensional virtual displays are used, this overestimation may increase depending on the availability of distance cues in the virtual scene (Bartolo et al., 2014a, 2014b). Namely, when the visual scene is impoverished and distance cues are mainly derived from retinal image and object familiarity, the overestimation of peripersonal space can extend much farther than arm length (Bartolo et al., 2014a). However, when immersive three-dimensional virtual displays providing stereoscopic vision are used, the extent of peripersonal space is close to measures taken in more ecological situations (Iachini, Coello, Frassinetti, & Ruggiero, 2014a).
This overestimation has generally been attributed to a biased representation of postural constraints (the postural stability hypothesis, Gabbard, 2009; Robinovitch, 1998), to preconceived potential actions based on the unconstrained use of multiple degrees of freedom despite the restricted posture imposed by the task (the whole-body engagement theory, Carello et al., 1989; Fischer, 2000; Gabbard et al., 2007; Mark et al., 1997; Rochat & Wraga, 1997), to a high state of confidence about current motor possibilities (the cognitive state hypothesis, Gabbard et al., 2005, 2006), or even to the visual context and the availability of optical and oculomotor variables (Coello & Iwanow, 2006; Tresilian et al., 1999; see Delevoye et al., 2010 for a thorough discussion).

The contribution of the action system to the encoding of peripersonal space was elegantly demonstrated by Iachini, Ruggiero, Ruotolo and Vinciguerra (2014b). In this study, participants had to perform a right-left localization task (with respect to the body midline) of manipulable versus nonmanipulable objects presented in either peripersonal or extrapersonal space in an immersive virtual reality scenario. To assess the contribution of the motor system, a motor interfering condition (i.e., participants’ arms tied behind their backs) was compared with a free-arm condition (e.g., Sirigu & Duhamel, 2001; Stevens, 2005). The underlying assumption was that the full availability of motor resources should be crucial for objects in peripersonal space. The results showed that manipulable stimuli were more accurately localised with free arms, in line with the motor-related effects in object perception reported in the literature (Chao & Martin, 2000; Iachini, Borghi, & Senese, 2008; Tucker & Ellis, 1998, 2001, 2004). However, the main finding was that participants were faster and more accurate when locating the position of both manipulable and nonmanipulable stimuli in peripersonal space with both arms free. By contrast, localization in extrapersonal space was not affected by motor interference. The facilitation emerging when motor resources were fully available is consistent with the idea that the nature of peripersonal space is intrinsically motor and probably reflects the adaptive need to prepare to react as effectively as possible to events near the body. Any stimulus falling in this area could prompt appropriate actions, for example, a rapid withdrawal when lightning or splinters of glass (i.e., nonmanipulable stimuli) occur near the body (e.g., Huang et al., 2012). Thus, not only manipulability but also emotional valence (attraction, repulsion) may give rise to the motor coding of visual objects falling in peripersonal space.
Moreover, the kind and strength of motor activation can be modulated by the characteristics of the task (Makin et al., 2007; Wamain, Gabrielli, & Coello, 2015). Still, the literature on the sensorimotor grounding of language and the debate on affordances suggest that motor resources may be triggered at different degrees of abstraction: from specific grips, to more general grasping movements, to generic movements (see Borghi & Binkofski, 2014; Fischer & Zwaan, 2008; Thill et al., 2013). Motor interference based on blocking the arms is quite “abstract” in nature, whereas the motor tasks typically used in the literature on object perception in space require specific precision or power grips (e.g., Costantini et al., 2010; for a review of motor effects, see Fischer & Zwaan, 2008). It is thus possible that the basic motor endowment of peripersonal space reflects an “abstract” motor preactivation for preparing future actions (e.g., Anderson, Yamagishi, & Karavia, 2002; Bourgeois & Coello, 2012; Phillips & Ward, 2002; Symes, Ellis, & Tucker, 2005). Building on this motor potentiality, a stimulus entering the peripersonal margin can quickly prime finely tuned motor simulation programs, as shown by previous studies (Cardellicchio et al., 2011; Coello & Delevoye, 2007; Costantini et al., 2010). This is consistent with the idea that the motor nature of peripersonal space may reflect the adaptive need to anticipate what may happen near the body and to prepare to react in time.

However, the question of the nature of the motor processes involved in the coding of peripersonal space is not yet resolved. To shed more light on this matter, we analysed the encoding of peripersonal space in neurological patients with brain damage localised in either the right or the left hemisphere and associated with contralesional hemiplegia (Bartolo et al., 2014b). In this study, patients with right-brain damage associated with left hemiplegia showed a specific deficit in the actual sequential motor task performed with the healthy hand (thumb-to-fingers touching task), as well as in the imagined sequential motor task performed with either hand. Such deficits were not observed in left-brain damaged patients, even when they were characterised by the same level of hemiplegia. The results also revealed that right hemisphere damage had a more detrimental effect on judgments of reachability, suggesting that motor planning processes contribute specifically to the encoding of peripersonal space. These findings are in agreement with the claim that the brain hemispheres are specialised in specific components of voluntary actions (Schaefer, Haaland, & Sainburg, 2007, 2009). Indeed, previous studies have suggested that the left hemisphere plays an important role in the control of complex motor skills and trajectory execution, whereas the planning of voluntary action relies predominantly on the right hemisphere. In particular, in a manual reaching task performed by right-handed healthy participants without direct visual control of their hand, final accuracy was found to be higher for the nondominant arm, whereas hand trajectory was found to be smoother for the dominant arm, reflecting more efficient coordination (Sainburg, 2002, 2005; Sainburg & Kalakanis, 2000; Sainburg et al., 1999).
In the same vein, right-handed patients with lesions to the hemisphere controlling the nondominant arm showed deficits in the accuracy of the final position of the dominant arm, suggesting a specific impairment in the accurate planning of voluntary motor action (Haaland & Delaney, 1981; Haaland & Harrington, 1996; Prestopnik et al., 2003; Winstein & Pohl, 1995). By contrast, lesions to the hemisphere controlling the dominant arm were found mainly to produce deficits in the spatio-temporal features of motor trajectories, suggesting a deficit in the online control (Haaland & Delaney, 1981; Haaland & Harrington, 1996; Prestopnik et al., 2003) or cognitive monitoring (Beckmann, Gröpel, & Ehrlenspiel, 2013) of voluntary action. Therefore, right-brain damaged patients showed specific impairments of motor planning in both actual and imagery tasks. These impairments were associated with deficits in the encoding of peripersonal space, the latter probably resulting from the increased difficulty in planning covert voluntary actions (Bartolo et al., 2014b).

Planning processes associated with intentional motor behaviour have been linked to predictive mechanisms. In particular, in the embodied perspective, the concept of simulation is viewed as the core mechanism of mental processes and is closely tied to anticipation processes, in particular in the domain of action control (Barsalou, 2009; Jeannerod, 2001; Shadmehr, Smith, & Krakauer, 2010). Embodied motor simulation can be defined as a simulation of action possibilities with objects and contexts based on previous bodily experiences (Barsalou, 1999; Gallese, 2005; for a review, see Iachini, 2011). Motor simulation associated with predictive mechanisms suggests that actions can be performed at a covert stage with the aim of representing the future in terms of the goal of the action, the means to reach it, and the consequences for the organism and the external world (Jeannerod, 2001; Shadmehr et al., 2010). The function of this simulation process is not only to shape the motor system in anticipation of execution but also to provide the self with information on the feasibility and the meaning of potential actions (Jeannerod, 2001, 2006). The fact that voluntary motor actions are represented in terms of their effects refers back to the ideomotor theory of action representation (James, 1890; Stock & Stock, 2004) and its reformulation in the recent theory of event coding (Hommel et al., 2001). The key point of these theories is that the predictive mechanisms associated with motor representations mediate the selection of objects in the environment that are relevant for action. One can thus speculate that the predictive mechanisms associated with intentional motor production are also involved in the encoding of peripersonal space.
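The logic of a predictive (forward) model can be illustrated with a toy computation: an internal model predicts the sensory outcome of a motor command, and the discrepancy between predicted and observed outcomes drives learning. The Python sketch below is a generic delta-rule illustration of this idea; the function name, the gain parameterisation and the learning rate are illustrative assumptions rather than a model taken from the cited literature:

```python
def simulate_adaptation(true_gain, n_trials, learning_rate=0.3):
    """A toy forward model: the agent predicts outcome = est_gain * command
    and updates est_gain from the prediction error (delta rule)."""
    est_gain = 1.0               # initial internal model of the motor plant
    errors = []
    for _ in range(n_trials):
        command = 1.0            # fixed motor command, for simplicity
        predicted = est_gain * command   # forward-model prediction
        observed = true_gain * command   # actual sensory outcome
        error = observed - predicted     # prediction error
        est_gain += learning_rate * error * command
        errors.append(abs(error))
    return est_gain, errors

# Prediction errors shrink geometrically: adaptation is fast at first,
# then levels off as the internal model converges on the true mapping.
gain, errors = simulate_adaptation(true_gain=1.5, n_trials=10)
print(round(gain, 3), errors[0] > errors[-1])
```

The point of the sketch is only that a system maintaining such predictions can evaluate covert actions (by running the forward model without executing the command), which is the computational reading of motor simulation invoked in the text.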

To test this hypothesis, we designed a task that modified the relation between the predicted and observed outcomes of a voluntary motor action (Bourgeois & Coello, 2012). In this study, healthy adults performed a motor task in a condition where the visual target to be reached with the right hand was visible but the hand displacement was not. At the end of each movement, visual feedback was nonetheless provided to the participants so that they could evaluate the accuracy of their own motor action in relation to the location of the target, on a trial-by-trial basis. The visual feedback was accurate for one group of participants, but for two other groups it was shifted along the radial axis by 1.5 cm, either farther or closer than the actual movement end-point. Through movement rehearsals (blocks of 60 trials), participants adapted their responses to the shifted feedback. At the end of the adaptation period, the shift was again increased by 1.5 cm, thus producing a discrepancy between movement end-point and target location of 3 cm. The same procedure was repeated until the shift between target location and the visual feedback about motor performance reached 7.5 cm (requiring five successive blocks of adaptation). Interestingly, the target used in the reaching task was also displaced by 1.5 cm in each adaptation block, so that the actual amplitude of the movement remained unchanged after visuo-motor adaptation. Moreover, the shift between target location and visual feedback about motor performance remained unnoticed by the participants throughout the whole experiment. A reachability judgment task was performed at the end of each adaptation block (31 visual targets randomly presented along the sagittal axis at distances from 11 cm up to 41 cm from the starting position). The data showed that motor adaptation was achieved after only a few trials within each adaptation block.
Furthermore, the perceived extent of peripersonal space followed the motor adaptation, as expected. Namely, when movement amplitude increased for a similar target location, the size of peripersonal space was reduced, and the opposite effect was observed when movement amplitude decreased for a similar target location. Thus, modifying the effect of acting on the environment, while keeping movement parameters unchanged, has a marked effect on the encoding of peripersonal space.
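Although the chapter does not detail how the boundary of peripersonal space is derived from reachability judgments, a common approach in this literature is to locate the point at which the proportion of "reachable" responses crosses 50% across distances. A minimal Python sketch with synthetic data (the logistic response curve and its 26 cm transition point are illustrative assumptions):

```python
import numpy as np

def reachability_boundary(distances, p_reachable):
    """Distance at which the proportion of 'reachable' judgments crosses 50%,
    by linear interpolation between the two bracketing distances.
    Assumes p_reachable decreases monotonically with distance."""
    d = np.asarray(distances, dtype=float)
    p = np.asarray(p_reachable, dtype=float)
    i = np.where(p >= 0.5)[0][-1]   # farthest distance still judged reachable
    j = np.where(p < 0.5)[0][0]     # nearest distance judged unreachable
    return d[i] + (0.5 - p[i]) * (d[j] - d[i]) / (p[j] - p[i])

# Synthetic judgment curve: 31 targets at 11-41 cm (as in the reachability
# task described above), with a hypothetical transition around 26 cm.
distances = np.arange(11, 42)
p = 1.0 / (1.0 + np.exp((distances - 26.0) / 2.0))
print(reachability_boundary(distances, p))
```

Tracking this 50% crossing block by block is what allows a shift of the perceived boundary, rather than of any single judgment, to be quantified.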

This interpretation was later confirmed by analysing how using a tool to reach objects affects the visual perception of peripersonal space (Bourgeois, Farné, & Coello, 2014). Indeed, when healthy adults were requested to use a tool extending their arm length by 60 cm to collect a series of poker chips dispersed on a table (50 trials), we found that peripersonal space increased, but with the consequence that the hand representation became less accurate (increased variability of reachability judgments performed with reference to the hand). This tool-use-dependent effect could not be attributed to the mere execution of manual reaching movements while holding a tool: no effect was observed after a control condition in which participants used a short tool that did not provide any functional extension of the arm. Our interpretation was that the tool was incorporated into the body schema, resulting in an elongated representation of arm length (Cardinali et al., 2011), and that such integration had perceptual consequences that clearly outlasted the tool-use period, making objects that were unreachable with the hand appear reachable. The selectivity of these findings provided the first compelling demonstration that, for a tool to be effective in shaping peripersonal space, a functional benefit to the arm is necessary.

The wealth of data presented so far strongly suggests that peripersonal space encoding involves the motor system. When the possibilities of acting vary, the representation of peripersonal space changes accordingly while leaving unchanged the perception of the geometrical features of the visual scene. More precisely, peripersonal space depends on motor predictive mechanisms associated with deployable actions in the environment. Accordingly, neuropsychological cases have been reported with specific impairments in processing spatial information in either peripersonal or extrapersonal space that do not affect overall visual function. In particular, bilateral temporo-occipital lesions produce an impairment of spatial processing in extrapersonal space, whereas bilateral parieto-occipital lesions produce an impairment of spatial processing in peripersonal space (Berti & Rizzolatti, 2002; Bjoertomt et al., 2002; Halligan et al., 2003; Mennemeier et al., 1992; Shelton et al., 1990; Weiss et al., 2003). In conclusion, the empirical evidence about the nature of peripersonal space is fully consistent with the theoretical assumptions of the embodied cognition approach, which considers perceptual, motor and cognitive processes as closely linked and attributes a central role to anticipation mechanisms, particularly in the planning of action (Barsalou, 2009; Hommel, 2004). In this perspective, the embodied representation of peripersonal space probably primarily serves the organisation of object-oriented motor behaviour, but interestingly it also has an effect on more abstract processing of spatial information in a large variety of cognitive tasks.

Role of embodied peripersonal space in cognitive processes

Several studies have provided direct evidence of the embodied motor nature of peripersonal space and its role in cognitive tasks. Ter Horst, van Lier, and Steenbergen (2011), for instance, showed that a mental rotation task, involving back views of left and right hand stimuli rotated over six different angles (from 0° to 360° in steps of 60°), was performed differently depending on whether the stimuli were presented in peri- or extrapersonal space. In this study, the task was to judge the laterality of the displayed hand as fast and as accurately as possible, without explicit instructions on how to solve the task. The results showed increased reaction times when the postures of the observed hands were biomechanically difficult rather than easy to adopt, but only when the stimuli were presented in peripersonal space.

A similar influence of peripersonal space on cognitive tasks was found when conceptual spatial representations were involved in language processing. Coello and Bonnotte (2013), for instance, suggested that the linguistic description of objects’ locations may also involve motor representations similar to those used to locate objects in peripersonal space (see also Bonfiglioli et al., 2009; Coventry et al., 2008, 2013). In particular, they studied whether a link exists between the use of deictics in language and the motor coding of objects in peripersonal space. French determiners such as la (the) or cette (that), as for instance in “la balle” (“the ball”) or “cette balle” (“that ball”), are thought to carry information about the spatial location of the related object in relation to the action system. Indeed, empirical investigation of how these determiners are used in daily life suggests that the selection of one or the other determiner is context-dependent. For instance, in a familiar context, one would say “passe moi la tasse” (“pass me the cup”) when the location of the object is obvious to the listener (i.e., in proximal space). Indeed, by using the definite determiner “la” (“the”), one chooses to individuate an object already identified in the spatial context. By contrast, one would prefer the demonstrative determiner “cette” (“that”) and say “passe moi cette tasse” (“pass me that cup”) when the location of the object is less obvious to the listener (i.e., in distal space). In the latter case, a movement denoting which cup has to be passed is often executed during the verbal utterance. In this context, Coello and Bonnotte (2013) tested whether the use of a particular determiner was associated with a particular spatial representation in relation to the action system.
Participants performed a reachability judgment task after having evaluated the correct spelling in French of both a determiner (la, the, or cette, that) and an object noun (balle, ball; tasse, cup; or pomme, apple). The reachability judgment task consisted in judging whether the actual ball, cup or apple, presented in a virtual display at different distances, was reachable with the right arm or not. The rationale was that if the determiner associated with a noun activates the representation of the related object at a closer location (i.e., when using the determiner la), judging the virtual stimulus as reachable should be faster. By contrast, if the determiner associated with a noun activates a representation of the related object at a farther location (i.e., when using the determiner cette), then judging the virtual stimulus as unreachable should be faster. Furthermore, the boundary of reachable space was expected to be perceived as further away when the determiner la rather than cette was presented, since judgments about objects at ambiguous locations should be influenced by the determiner. The outcome of the study confirmed all hypotheses. We found that response time for judging reachability (on average 703 ms) was 21 ms shorter when the determiner la rather than cette was used. The opposite pattern was observed with unreachable objects, that is, response time for judging unreachability was 39 ms shorter when the determiner cette rather than la had been presented. Furthermore, the boundary of peripersonal space (located at 98 cm on average) was perceived to be 4 cm further away when the determiner la rather than cette had been presented. Considered together, these data stress the close connection between the spatial content of determiners and the representation of a motor-dependent peripersonal space, providing new evidence for the embodied nature of language processing.
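In such paradigms, the perceived boundary of reachable space is typically estimated by fitting a psychometric (logistic) function to the yes/no judgments collected at each distance and taking the 50% point as the boundary. The sketch below illustrates this kind of analysis on made-up responses; the data values and function names are ours for illustration, not taken from the study.

```python
import math

def fit_logistic(distances, responses, lr=1.0, iters=5000):
    """Fit p(yes) = sigmoid(w*d + b) to yes/no data by gradient ascent
    on the log-likelihood (distances rescaled to [0, 1] for stability)."""
    scale = max(distances)
    xs = [d / scale for d in distances]
    w = b = 0.0
    for _ in range(iters):
        gw = gb = 0.0
        for x, y in zip(xs, responses):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (y - p) * x
            gb += (y - p)
        w += lr * gw
        b += lr * gb
    return w / scale, b  # slope expressed back on the original scale

def boundary(w, b):
    """Distance at which p(yes) = 0.5: the estimated perceptual boundary."""
    return -b / w

# Hypothetical data: 1 = "reachable" judgment at that distance (cm).
distances = [60, 70, 80, 90, 100, 110, 120, 130, 140]
responses = [1, 1, 1, 1, 1, 0, 0, 0, 0]
w, b = fit_logistic(distances, responses)
print(f"estimated boundary: {boundary(w, b):.0f} cm")
```

The same fit applied separately to the la and cette conditions would yield two boundary estimates whose difference corresponds to the 4 cm shift reported above.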

Role of embodied peripersonal space in social interactions

The fact that communicative behaviour is grounded in sensorimotor content is not restricted to language. Indeed, the observation of how people interact in a social context also suggests that there is an intrinsic relationship between action representation, spatial processing, and social interactions. In the middle of the 20th century, Hall (1966) and later Hayduk (1978) had the intuition that the space close around us is the privileged region not only for grasping and manipulating objects, but also for interacting with other individuals. They were the first to suggest that social interactions require accurate control of interpersonal distances, which implies that the encoding of peripersonal space is crucial not only for the regulation of interactions with objects, but also for our social life. We recently speculated that interpersonal distances and peripersonal space are in some respects related to each other (Iachini et al., 2014a). Indeed, interacting with someone else is a complex task: it implies coming close to an individual in order to verbally interact or physically cooperate, while taking care not to invade his or her peripersonal space. In such a situation, it is thus crucial to take into account, even at an implicit level, the other's peripersonal space; this allows one to control approach behaviour and to adjust interpersonal distances so that one's own and the other's peripersonal spaces come only as close as feels appropriate, and unwanted intrusions are avoided.

The closeness of individual peripersonal spaces should thus be viewed as a determinant of efficient interactions between social agents. Supporting this view, we recently performed a study demonstrating that interpersonal distances and peripersonal space are related to each other (Quesque, Iachini, Santos, Mouta, & Coello, in preparation). We used point light display (PLD) stimuli to avoid confounding variables (body shape, gender, facial expression, etc.). The PLD depicted a 1.76 m tall man approaching from a distance of 8 m at a velocity of 1.30 m/s. Relative to the participant, the starting location of the PLD was either straight ahead or at ±15° or ±30° from the straight-ahead direction. Furthermore, the angle of approach was computed so that the distance between the PLD and the participant's shoulder at the crossing location ranged from −8 cm to +64 cm in steps of 8 cm on each side. The PLD vanished when it reached a distance of 2 m from the participants, whose task was to indicate whether the crossing distance would have been comfortable or not had the PLD continued to approach. The minimum comfortable distance between the PLD and the participants' shoulder was computed from the yes-no responses using logistic regressions. The results showed an interaction between the starting location of the PLD and the comfortable crossing distance. When the PLD crossed the participants on the same side as its starting location (e.g., crossing on the left side when starting at −15° or −30° from the straight-ahead direction), the minimum comfortable distance was on average 16.83 cm, whereas, when the PLD crossed the participants on the opposite side from its starting location (e.g., crossing on the left side when starting at 15° or 30°), the minimum comfortable distance was on average 33.68 cm.
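The approach trajectories in such a design follow from elementary geometry: a walker starting at distance L from the participant must aim at the participant's position and rotate that heading by asin(c/L) for its straight path to pass at the desired lateral offset c. A minimal sketch of that computation, assuming a 2-D layout with the participant at the origin; the function names and coordinate conventions are ours, not from the study.

```python
import math

def approach_heading(start_bearing_deg, start_dist_m, crossing_offset_m):
    """Heading (degrees, clockwise from the participant's straight-ahead)
    a walker must take so that its straight-line path passes the participant
    at exactly crossing_offset_m at the point of closest approach."""
    th = math.radians(start_bearing_deg)
    # Walker's start position: x = lateral (right positive), y = forward.
    sx, sy = start_dist_m * math.sin(th), start_dist_m * math.cos(th)
    # Direction aiming straight at the participant (the origin)...
    aim = math.atan2(-sx, -sy)
    # ...rotated so the path misses the origin by the requested offset.
    miss = math.asin(crossing_offset_m / start_dist_m)
    return math.degrees(aim + miss)

def closest_approach_m(start_bearing_deg, start_dist_m, heading_deg):
    """Perpendicular distance from the participant to the walker's path."""
    th = math.radians(start_bearing_deg)
    sx, sy = start_dist_m * math.sin(th), start_dist_m * math.cos(th)
    a = math.radians(heading_deg)
    ux, uy = math.sin(a), math.cos(a)
    return abs(sx * uy - sy * ux)  # |S x u| for a unit direction u

# Example: a walker starting 8 m away at -30 deg, aimed to cross at 0.32 m.
h = approach_heading(-30.0, 8.0, 0.32)
```

The crossing offsets used in the study (−8 cm to +64 cm in 8 cm steps) would simply be passed as `crossing_offset_m` values, with the sign of the offset selecting the side on which the walker passes.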
We interpreted this effect as reflecting the fact that participants considered the crossing distance more comfortable when a PLD moving from one side to the other maintained a larger crossing distance, so that it did not invade their peripersonal space. To validate this interpretation, we modified the representation of peripersonal space through the use of tools. In one group, participants used a short tool (10 cm long) to retrieve, 50 times, a poker chip positioned by the experimenter at different distances, moving it from a far to a close location. The same task was performed by a second group using a long tool (70 cm long). The long tool was expected to be incorporated into the body schema, resulting in an elongated representation of arm length (Bourgeois et al., 2014; Cardinali et al., 2011) and therefore in an extension of peripersonal space. The PLD crossing task was then repeated immediately after the tool manipulation. Results showed that the comfortable distance indicated by the participants when the PLD crossed on either side was unchanged in the group that had used the short tool (the average change in comfortable distance was −0.05 cm). By contrast, the comfortable distance increased in the group that had used the long tool (+2.84 cm), indicating that the extension of peripersonal space due to the use of the long tool affected the interpersonal distance judged as comfortable in social interactions. The outcome of this study thus clearly suggests that interpersonal space and peripersonal space are related to each other.

Hayduk (1978, 1981) also suggested that the space around the body is personal and cannot be invaded without arousing discomfort (Dosey & Meisels, 1969; Hall, 1966; Horowitz, Duff, & Stratton, 1964; Sommer, 1959). Thus, people feel discomfort when someone is physically too close and interferes with their (peri)personal space (Hayduk, 1981; Kennedy et al., 2009). This feeling of discomfort is even stronger when individuals do not feel affectively close to the person (Bell et al., 1988; Morgado et al., 2011). When (peri)personal space is violated, individuals step away to reinstate the margin of safety. Confirming the relation between (peri)personal space and emotional state, Kennedy et al. (2009) found in an fMRI study that healthy individuals show increased activation in the amygdala, a sub-cortical brain structure playing a key role in emotion regulation, when a conspecific violated their (peri)personal space. The amygdala may be required to trigger the strong emotional reactions that normally follow peripersonal space violations, thus regulating interpersonal distances. Supporting this claim, individuals with complete amygdala lesions showed a deficit in regulating interpersonal distances (Kennedy et al., 2009). Interpersonal distances can thus be seen as the physical space between people where social interactions occur on the basis of their emotional and motivational relevance (Lloyd, 2009).

In order to evaluate the relation between peripersonal space encoding and the feeling of safety, Iachini and colleagues (2014a) investigated whether peripersonal space and interpersonal space refer to a similar or different physical distance by comparing two standard paradigms: reachability and comfort-distance judgments. In the first case, participants had to evaluate whether visual stimuli presented at various distances from the body were reachable or not; in the second case, participants had to determine the point at which they still felt comfortable with the other's proximity. While immersed in a virtual reality scenario, participants provided reachability-distance and comfort-distance judgments towards human (male and female avatars) and nonhuman (cylinder and robot) virtual stimuli while standing still or walking towards the stimuli. Participants stopped their own forward movement, or that of the virtual stimuli, when they felt the latter were at a reachable or comfortable distance. Results showed that comfort distance was larger than reachability distance when participants were passive, whereas reachability and comfort distances were similar when participants could actively move towards the stimuli. This similarity is compatible with the idea that the motor predictive processes underlying reachability judgments and the encoding of peripersonal space also contribute to specifying comfortable social distances (Delevoye-Turrell, Bartolo, & Coello, 2010; Lloyd, 2009). A further finding suggesting a commonality between the two spaces is that both were modulated by human versus nonhuman stimuli. In line with previous data (Teneggi et al., 2013), their size expanded with virtual objects and contracted with virtual humans. Going beyond the previous literature, the results also showed that both spaces were modulated by the social valence of the stimuli: distances contracted with virtual females as compared to males and expanded with the cylinder as compared to the robot.
These findings reveal that peripersonal and interpersonal spaces share a common motor nature and are endowed, presumably at different degrees, with finely tuned mechanisms for processing social information. Therefore, it can be concluded that low-level sensorimotor spatial processing and high-level social processing interact in the representation of the space around the body.

Acting in peripersonal space to communicate social intention

Assuming that peripersonal space plays some role in the specification of interpersonal distances, one may also suggest that actions in peripersonal space could be influenced by the social context. Several studies have indeed suggested that this is the case (Scorolli et al., 2014; Quesque et al., 2013). The kinematic patterns of object-oriented motor behaviour were found to be influenced by the mere presence of a confederate (Gianelli, Scorolli, & Borghi, 2013; Quesque et al., 2013), by a confederate serving as the target of the motor action (Becchio et al., 2008b; Ferri et al., 2011), and even by the goal of manipulating an object with the social intention to communicate information (Sartori et al., 2009). These kinematic effects have been interpreted as providing implicit but potentially informative signals that can be used by social agents when communication or interaction processes are engaged (Sartori et al., 2009; Quesque & Coello, 2014). However, if the social context can influence how a goal-directed movement is performed, it was not clear from these studies whether social agents can take advantage of these kinematic effects to guide social interactions, that is, whether our own motor actions can be implicitly modulated by the social intention that one perceives through others' motor patterns.

According to Jacob and Jeannerod (2005), thanks to their mindreading ability, healthy human adults readily explain and predict actions by representing and attributing to human agents a whole battery of internal, unobservable mental states such as goals, intentions, emotions, perceptions, desires, and beliefs, many of which are far removed from any observable behaviour (see also Gopnik & Wellman, 1994). However, since different social intentions may be associated with the very same motor intention, as is well illustrated by the Dr. Jekyll and Mr. Hyde paradox,1 social intentions were thought to be undetectable from the simple kinematic parameters of voluntary motor actions (Jacob, 2013; Jeannerod, 2006). When observing someone performing an action, observers can simulate the agent's movements by matching them onto their own motor repertoire. Simulating the agent's movements might allow them to represent the agent's motor intention, but Jacob and Jeannerod (2005) surmise that it will not allow them to represent the agent's social intention. In this context, we recently developed an original sequential motor task that allows assessment of observers' spontaneous perception of social intention within a voluntary motor action (Quesque & Coello, 2014; Quesque et al., 2013). In the motor sequence, the first action (preparatory action) involved moving a wooden dowel along the mid-body axis to a central location that served as the starting location for the following actions. The preparatory action was performed without time constraints and in the view of a partner who was facing the actor. The second action (main action) was time-constrained and consisted in reaching for and grasping the target object to move it sideways to a final location, all as fast as possible. Although the preparatory action was always performed by the actor, Quesque and colleagues (2013, 2014) found that it was influenced by whether the main action was thereafter to be performed by the actor or the partner.
Namely, reaction times and movement elevations of the preparatory action increased when the actor knew that the main action was to be performed by the partner. Moreover, Quesque and Coello (2014) demonstrated that the partner's eye level played a crucial role in the influence of social intention on motor kinematics. When, in the same task, the partner's eye level was unnoticeably raised using an adjustable seat, the actors unconsciously exaggerated their trajectory curvature during the preparatory action in relation to the partner's eye level. This effect indicated that other bodies, as well as their motor intentions, are implicitly taken into account when a reach-to-grasp movement is produced in a social context. However, it was not clear whether a social agent can take advantage of these social-dependent kinematic effects for his or her own actions. To test this hypothesis, Quesque, Delevoye-Turrell and Coello (2015) used the same motor sequence but informed the actor, not the partner, in advance about who would perform the following main action. The same effect of prior information was found in the preparatory action: actors performed the preparatory action with a longer reaction time and a higher trajectory when informed that they would not perform the main action. The interesting result was the effect that prior information had on the following main action, in particular for the partner. Actors executed the main action with shorter reaction times and slower velocities when they had been informed beforehand that they were to perform it, confirming Quesque and colleagues (2013, 2014). One striking finding was that partners showed a similar pattern of effects on the main action despite never having been informed about who would be performing it. Both the actors and the partners remained unaware of these effects.
These results demonstrated for the first time that social intentions can not only be spontaneously perceived from the spatio-temporal characteristics of others' motor actions, but can also influence the low-level kinematics of our own voluntary motor actions.

Conclusion

Considered as a whole, the findings summarised in this chapter suggest that the perception of spatial distance is intrinsically linked to potential actions: perceiving space and acting in space do not represent distinct functions. Through predictive mechanisms based on sensorimotor couplings, perception and action cooperate in encoding the spatial position and the meaning of stimuli surrounding the individual and in preparing the body to respond appropriately. Much of the evidence summarised shows that the perception of manipulable objects is based on visuo-motor processing and that this depends on their location in space. Moreover, the specificity of the neural coding of the space surrounding the body, in which visuo-motor interactions occur, has a broad influence on cognitive tasks, suggesting that perception, action and cognition are tightly linked to each other. The perception of spatial structure in relation to the motor system thus provides the basis for more abstract thought, as in the case of deictic use in language. Finally, the embodied nature of space perception has a crucial role in complex social processing, as peripersonal space encoding represents a key element in the regulation of distances in social interaction situations. In line with the embodied approach to perception and cognition, bodily states and the simulation of information in the brain's modality-specific systems for perception, action, and introspection then also represent the basis of social interactions. In conclusion, the data reviewed in this chapter demonstrate that visual space is not represented within the brain as a continuum, but as a series of perceptual thresholds delimiting the external environment in functional subspaces purposefully linked to behaviour.
Predictive mechanisms associated with the motor coding of external space thus represent the basis for a unified theoretical framework that can account broadly for perceptual and cognitive as well as social behaviours, consistent with the embodied approach to perception and cognition.

Note

1 Put at the forefront of the neuroscientific debate on intentionality by Jacob and Jeannerod (2005), the novella on "split personality" written by the Scottish author Robert Louis Stevenson features Dr. Jekyll, alias Mr. Hyde, a renowned surgeon who performs appendectomies on his anesthetised patients: he heals them during the day but murders them at night. He thus executes the same motor sequence during the day and at night, whereby he grasps his scalpel and applies it to the same bodily part of two different persons. According to Jacob and Jeannerod, Dr. Jekyll's motor intention is the same as Mr. Hyde's, although Dr. Jekyll's social intention (improving the patient's health) clearly differs from Mr. Hyde's (enjoying the victim's agony). Social intention was thus thought to be hardly identifiable from movement characteristics.

References

Allport, D. A. (1985). Distributed memory, modular subsystems and dysphasia. In S. N. Epstein (Ed.), Current perspectives in dysphasia (pp. 32–60). New York: Churchill Livingstone.

Anderson, S. J., Yamagishi, N., & Karavia, V. (2002). Attentional processes link perception and action. Proceedings of the Royal Society B, 269, 1225–1232.

Babiloni, C., Carducci, F., Cincotti, F., Rossini, P. M., Neuper, C., Pfurtscheller, G., et al. (1999). Human movement-related potentials vs. desynchronization of EEG alpha rhythm: A high-resolution EEG study. Neuroimage, 10, 658–665.

Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–660.

Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.

Barsalou, L. W. (2009). Simulation, situated conceptualization, and prediction. Philosophical Transactions of the Royal Society. B, 364, 1281–1289.

Bartolo, A., Carlier, M., Hassaini, S., Martin Y., Coello, Y. (2014b). The perception of peripersonal space in right and left brain damage hemiplegic patients. Frontiers in Human Neuroscience, 8 (3). doi:10.3389/fnhum.2014.00003

Bartolo, A., Coello, Y., Edwards, M. G., Delepoulle, S., Endo, S., & Wing, A. M. (2014a). Contribution of the motor system to the perception of reachable space: An fMRI study. European Journal of Neuroscience, 40 (12), 3807–3817.

Becchio, C., Sartori, L., Bulgheroni, M., & Castiello, U. (2008). The case of Dr. Jekyll and Mr. Hyde: A kinematic study on social intention. Consciousness & Cognition, 17, 557–564.

Beckmann, J., Gröpel, P., & Ehrlenspiel, F. (2013). Preventing motor skill failure through hemisphere-specific priming: Cases from choking under pressure. Journal of Experimental Psychology: General, 142 (3), 679–691.

Bell, P. A., Kline, L. M., & Barnard, W. A. (1988). Friendship and freedom of movement as moderators of sex differences in interpersonal distancing. Journal of Social Psychology, 128, 305–310.

Bergson, H. (1896). Matière et mémoire [Matter and memory]. Paris: Alcan.

Berkeley, G. (1709/1975). An essay toward a new theory of vision. In M. R. Ayers (Ed.), Philosophical works (pp. 1–59). London: Dent.

Berti, A., & Rizzolatti, G. (2002). Coding far and near space. In H.-O. Karnath, D. Milner, & G. Vallar (Eds.), The cognitive and neural bases of spatial neglect. Oxford: Oxford University Press.

Binkofski, F., Buccino, G., Stephan, K. M., Rizzolatti, G., Seitz, R. J. & Freund, H. J. (1999) A parieto-premotor network for object manipulation: Evidence from neuroimaging. Experimental Brain Research, 128, 210–213.

Bjoertomt, O., Cowey, A., & Walsh, V. (2002). Spatial neglect in near and far space investigated by repetitive transcranial magnetic stimulation. Brain, 125 (9), 2012–2022.

Bonfiglioli, C., Finocchiaro, C., Gesierich, B., Rositani, F., & Vescovi, F. (2009). A kinematic approach to the conceptual representations of this and that. Cognition, 111 (2), 270–274.

Bootsma, R. J., Bakker, F. C., Van Snippenberg, F. J., & Tdlohreg, C. W. (1992). The effect of anxiety on perceiving the reachability of passing objects. Ecological Psychology, 4, 1–16.

Borghi, A. M., & Binkofski, F. (Eds.) (2014). Words as social tools: An embodied view on abstract concepts. New York: Springer. doi:10.1007/978-1-4614-9539-0

Bourgeois, J., & Coello, Y. (2012). Effect of visuomotor calibration and uncertainty on the perception of peripersonal space. Attention, Perception & Psychophysics, 74, 1268–1283.

Bourgeois, J., Farnè, A., & Coello, Y. (2014). Costs and benefits of tool-use on the perception of reachable space. Acta Psychologica, 148, 91–95.

Braadbaart, L., Williams, J.H.G., & Waiter, G. D. (2013). Do mirror neuron areas mediate mu rhythm suppression during imitation and action observation? International Journal of Psychophysiology, 89 (1), 99–105.

Brain, W. R. (1941). Visual disorientation with special reference to lesions of the right hemisphere. Brain, 64, 224–272.

Cardellicchio, P., Sinigaglia, C., & Costantini, M. (2011). The space of affordances: A TMS study. Neuropsychologia, 49, 1369–1372.

Cardinali, L., Brozzoli, C., Urquizar, C., Salemme, R., Roy, A. C., Farnè, A. (2011). When action is not enough: Tool-use reveals tactile-dependent access to body schema. Neuropsychologia, 49 (13), 3750–3757.

Carello, C., Grosofsky, A., Reichel, F. D., Solomon, H. Y., & Turvey, M. T. (1989). Visually perceiving what is reachable. Ecological Psychology, 1, 27–54.

Chao, L. L., & Martin, A. (2000). Representation of manipulable man-made objects in the dorsal stream. Neuroimage, 12, 478–484.

Chao, L. L., Weisberg, J., & Martin, A. (2002). Experience-dependent modulation of category-related cortical activity. Cerebral Cortex, 12, 545–551.

Cochin, S., Barthélémy, C., Roux, S., & Martineau, J. (1999). Observation and execution of movement: Similarities demonstrated by quantified electroencephalography. European Journal of Neuroscience, 11, 1839–1842.

Coello, Y., & Bonnotte, I. (2013). The mutual roles of action representations and spatial deictics in French language. Quarterly Journal of Experimental Psychology, 66 (11), 2187–2203.

Coello, Y., & Delevoye-Turrell, Y. (2007). Embodiment, spatial categorisation and action. Consciousness and Cognition, 16, 667–683.

Coello, Y., & Iwanow, O. (2006). Effect of structuring the workspace on cognitive and sensorimotor distance estimation: No dissociation between perception and action. Perception and Psychophysics, 68, 278–289.

Costantini, M., Ambrosini, E., Tieri, G., Sinigaglia, C., & Committeri, G. (2010). Where does an object trigger an action? An investigation about affordances in space. Experimental Brain Research, 207, 95–103.

Coventry, K. R. (2013). On the mapping between spatial language and the vision and action systems. In Y. Coello & A. Bartolo (Eds.), Language and action in cognitive neuroscience (pp. 209–223). Sussex: Psychology Press.

Coventry K. R., Valdés, B., Castillo, A., & Guijarro-Fuentes, P. (2008). Language within your reach: Near-far perceptual space and spatial demonstratives. Cognition, 108, 889–898.

Creem-Regehr, S. H., & Lee, J. N. (2005). Neural representations of graspable objects: Are tools special? Cognitive Brain Research, 22, 457–469.

Culham, J. C., Gallivan, J., Cavina-Pratesi, C., & Quinlan, D. J. (2008). fMRI investigations of reaching and ego space in human superior parieto-occipital cortex. In R. L. Klatzky, M. Behrmann, & B. MacWhinney (Eds.), Embodiment, ego-space and action (pp. 247–274). Mahwah, NJ: Erlbaum.

Delevoye-Turrell, Y., Bartolo, A., & Coello, Y. (2010). Motor representation and the perception of space. In N. Gangopadhyay (Ed.), Perception, Action and Consciousness (pp. 217–242). Oxford: Oxford University Press.

Di Pellegrino, G., & Làdavas, E. (2014). Peripersonal space in the brain. Neuropsychologia, 66, 126–133.

Dosey, M. A., & Meisels, M. (1969). Personal space and self-protection. Journal of Personality and Social Psychology, 11, 93–97.

Ferri, F., Campione, G. C., Dalla Volta, R., Gianelli, C., & Gentilucci, M. (2011). Social requests and social affordances: How they affect the kinematics of motor sequences during interactions between conspecifics. PLoS ONE, 6 (1), e15855. doi:10.1371/journal.pone.0015855

Fine, I., Wade, A. R., Brewer, A. A., May, M. G., Goodman, D. F., Boynton, G. M., Wandell, B. A., & MacLeod, D.I.A. (2003). Long-term deprivation affects visual perception and cortex. Nature Neuroscience, 6 (9), 915–916.

Fischer, M. H. (2000). Estimating reachability: Whole body engagement or postural stability? Human Movement Science, 19, 297–318.

Fischer, M. H., & Zwaan, R. A. (2008). Embodied language – A review of the role of the motor system in language comprehension. Quarterly Journal of Experimental Psychology, 61, 825–850.

Gabbard, C., Ammar, D., & Rodrigues, L. (2005). Perceived reachability in hemispace. Brain and Cognition, 58, 172–177.

Gabbard, C., Ammar, D., & Sunghan, L. (2006). Perceived reachability in single- and multiple-degree-of-freedom workspaces. Journal of Motor Behavior, 38 (6), 423–429.

Gabbard, C., Cordova, A., & Lee, S. (2007). Examining the effects of postural constraints on estimating reach. Journal of Motor Behavior, 39, 242–246.

Gallese, V. (2005). Embodied simulation: From neurons to phenomenal experience. Phenomenology and the Cognitive Sciences, 4 (1), 23–48.

Gallivan, J. P., Cavina-Pratesi, C., & Culham, J. C. (2009). Is that within reach? fMRI reveals that the human superior parieto-occipital cortex encodes objects reachable by the hand. Journal of Neuroscience, 29, 4381–4391.

Gandhi, T. K., Ganesh, S. & Sinha, P. (2014). Improvement in spatial imagery following sight onset late in childhood. Psychological Science, 25 (3), 693–701.

Gentilucci, M., Scandolara, C., Pigarev, I. N., & Rizzolatti, G. (1983). Visual responses in the postarcuate cortex (area 6) of the monkey that are independent of eye position. Experimental Brain Research, 50, 464–468.

Gianelli, C., Scorolli, C., & Borghi, A. M. (2013). Acting in perspective: The role of body and language as social tools. Psychological Research, 77, 40–52.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Gopnik, A., & Wellman, H. M. (1994). The theory theory. In L. A. Hirschfeld & S. A. Gelman (Eds.), Mapping the mind: Domain specificity in cognition and culture (pp. 257–294). Cambridge: Cambridge University Press.

Grafton, S. T., Arbib, M. A., Fadiga, L., & Rizzolatti, G. (1997). Localization of grasp representations in humans by positron emission tomography. Experimental Brain Research, 112 (1), 103–111.

Haaland K. Y., & Delaney, H. D. (1981). Motor deficits after left or right hemisphere damage due to stroke or tumor. Neuropsychologia, 19 (1), 17–27.

Haaland K. Y., & Harrington D. L. (1996). Hemispheric asymmetry of movement. Current Opinion in Neurobiology, 6, 796–800.

Hall, E. T. (1966). The hidden dimension. New York: Doubleday.

Halligan, P. W., Fink, G. R., Marshall, J. C., & Vallar, G. (2003). Spatial cognition: Evidence from visual neglect. Trends in Cognitive Sciences, 7, 125–133.

Hari, R. (2006). Action-perception connection and the cortical mu rhythm. Progress in Brain Research, 159, 253–260.

Hayduk, L. A. (1978). Personal space: An evaluative and orienting overview. Psychological Bulletin, 85 (1), 117–134.

Hayduk, L. A. (1981). The permeability of personal space. Canadian Journal of Behavioural Science, 13 (3), 274–287.

Held, R., & Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56 (5), 872–876.

Held, R., Ostrovsky, Y., de Gelder, B., Gandhi, T., Ganesh, S., Mathur, U., & Sinha, P. (2011). Newly sighted cannot match seen with felt. Nature Neuroscience, 14, 551–553.

Hommel, B. (2004). Event files: Feature binding in and across perception and action. Trends in Cognitive Sciences, 8, 494–500.

Hommel, B., Müsseler, J., Aschersleben, G., & Prinz, W. (2001). The Theory of Event Coding (TEC): A framework for perception and action planning. Behavioral and Brain Sciences, 24 (5), 849–878.

Horowitz, M. J., Duff, D. F., & Stratton, L. O. (1964). Body buffer zone-exploration of personal space. Archives of General Psychiatry, 11, 651–656.

Huang, R. S., Chen, C. F., Tran, A. T., Holstein, K. L., & Sereno, M. I. (2012). Mapping multisensory parietal face and body areas in humans. PNAS, 109 (44), 18114–18119.

Husserl, E. (1907). Die Idee der Phänomenologie, Fünf Vorlesungen. [The idea of phenomenology. Five lectures.] Edited by Walter Biemel. The Hague, Netherlands: Martinus Nijhoff.

Iachini, T. (2011). Mental imagery and embodied cognition: A multimodal approach. Journal of Mental Imagery, 35(4–5), 1–26.

Iachini, T., Borghi, A. M., & Senese, V. P. (2008). Categorization and sensorimotor interaction with objects. Brain & Cognition, 67, 31–43.

Iachini, T., Coello, Y., Frassinetti, F., & Ruggiero, G. (2014a). Body space in social interactions: A comparison of reaching and comfort distance in immersive virtual reality. PLoS One, 9 (11), e111511. doi:10.1371/journal.pone.0111511

Iachini, T., Ruggiero, G., Ruotolo, F., & Vinciguerra, M. (2014b). Motor resources in peripersonal space are intrinsic to spatial encoding: Evidence from motor interference. Acta Psychologica, 153, 20–27.

Jacob, P. (2013). Embodied cognition, communication and the language faculty. In Y. Coello & A. Bartolo (Eds.), Language and action in cognitive neuroscience (pp. 3–29). New York: Psychology Press.

Jacob, P., & Jeannerod, M. (2005). The motor theory of social cognition: A critique. Trends in Cognitive Sciences, 9, 21–25.

James, W. (1890). The principles of psychology. New York: Holt.

Jeannerod, M. (2001). Neural simulation of action: A unifying mechanism for motor cognition. NeuroImage, 14, 103–109.

Jeannerod, M. (2006). Motor cognition: What actions tell the self. Oxford: Oxford University Press.

Jeannerod, M., Arbib, M. A., Rizzolatti, G., & Sakata, H. (1995). Grasping objects: The cortical mechanisms of visuomotor transformation. Trends in Neuroscience, 18, 314–320.

Kan, I. P., Kable, J. W., Van Scoyoc, A., Chatterjee, A., & Thompson-Schill, S. L. (2006). Fractionating the left frontal response to tools: Dissociable effects of motor experience and lexical competition. Journal of Cognitive Neuroscience, 18, 267–277.

Kennedy, D. P., Gläscher, J., Tyszka, J. M., & Adolphs, R. (2009). Personal space regulation by the human amygdala. Nature Neuroscience, 12, 1226–1227.

Kirsch, W., Herbort, O., Butz, M. V., & Kunde, W. (2012). Influence of motor planning on distance perception within the peripersonal space. PLoS One, 7 (4), e34880. doi:10.1371/journal.pone.0034880

Krüger, B., Bischoff, M., Blecker, C., Langhanns, C., Kindermann, S., Sauerbier, I., Reiser, M., Stark, R., Munzert, J., & Pilgramm, S. (2014). Parietal and premotor cortices: Activation reflects imitation accuracy during observation, delayed imitation and concurrent imitation. Neuroimage, 100, 39–50.

Lamm, C., Fischer, M. H., & Decety, J. (2007). Predicting the actions of others taps into one’s own somatosensory representations – A functional MRI study. Neuropsychologia, 45 (11), 2480–2491.

Llanos, C., Rodriguez, M., Rodriguez-Sabate, C., Morales, I., & Sabate, M. (2013). Mu-rhythm changes during the planning of motor and motor imagery actions. Neuropsychologia, 51 (6), 1019–1026.

Lloyd, D. M. (2009). The space between us: A neurophilosophical framework for the investigation of human interpersonal space. Neuroscience and Biobehavioral Reviews, 33, 297–304.

Makin, T. R., Holmes, N. P., & Zohary, E. (2007). Is that near my hand? Multisensory representation of peripersonal space in human intraparietal sulcus. Journal of Neuroscience, 27, 731–740.

Mark, L. S., Nemeth, K., Gardner, D., Dainoff, M. J., Paasche, J., & Duffy, M. (1997). Postural dynamics and the preferred critical boundary for visually guided reaching. Journal of Experimental Psychology: Human Perception and Performance, 23, 1365–1379.

Martin, A. (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58, 25–45.

Matelli, M., Luppino, G., & Rizzolatti, G. (1985). Patterns of cytochrome oxidase activity in the frontal agranular cortex of macaque monkey. Behavioural Brain Research, 18, 125–137.

Medendorp, W. P., Buchholz, V. N., Van Der Werf, J., & Leoné, F. T. (2011). Parietofrontal circuits in goal-oriented behaviour. European Journal of Neuroscience, 33 (11), 2017–2027.

Mennemeier, M., Wertman, E., & Heilman, K. M. (1992). Neglect of near peripersonal space: Evidence for multidirectional attentional systems in humans. Brain, 115, 37–50.

Merleau-Ponty, M. (1945). Phénoménologie de la perception. Paris: Gallimard.

Morgado, N., Muller, D., Gentaz, E., & Palluel-Germain, R. (2011). Close to me? The influence of affective closeness on space perception. Perception, 40 (7), 877–879.

Muthukumaraswamy, S. D., Johnson, B. W., & McNair, N. A. (2004). Mu rhythm modulation during observation of an object-directed grasp. Cognitive Brain Research, 19, 195–201.

Noë, A. (2004). Action in perception. Cambridge, MA: MIT Press.

Noppeney, U. (2008). The neural systems of tool and action semantics: A perspective from functional imaging. Journal of Physiology-Paris, 102, 40–49.

Phillips, J. C., & Ward, R. (2002). S-R correspondence effects of irrelevant visual affordance: Time course and specificity of response activation. Visual Cognition, 9, 540–558.

Poincaré, H. (1902). La science et l’hypothèse. Paris: Flammarion Edition.

Prestopnik, J., Haaland, K., Knight, R., & Lee, R. (2003). Hemispheric dominance in the parietal lobe for open and closed loop movements. Journal of the International Neuropsychological Society, 9, 1–2.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124, 123–164.

Proverbio, A. M. (2012). Tool perception suppresses 10–12 Hz μ rhythm EEG over the somatosensory area. Biological Psychology, 91, 1–7.

Quesque, F., & Coello, Y. (2014). For your eyes only: Effect of confederate’s eye level on reach-to-grasp action. Frontiers in Psychology, 5, 1407. doi:10.3389/fpsyg.2014.01407

Quesque, F., Delevoye-Turrell, Y. N., & Coello, Y. (2015). Facilitation effect of observed motor deviants in a cooperative motor task: Evidence for direct perception of social intention in action. Quarterly Journal of Experimental Psychology, doi:10.1080/17470218.2015.1083596

Quesque, F., Lewkowicz, D., Delevoye-Turrell, Y., & Coello, Y. (2013). Effects of social intention on movement kinematics in cooperative actions. Frontiers in Neurorobotics, 7, 14. doi:10.3389/fnbot.2013.00014

Quesque, F., Iachini, T., Santos, J., Moura, S., & Coello, Y. (in preparation). The relationship between peripersonal and interpersonal distances.

Quinlan, D. J., & Culham, J. C. (2007). fMRI reveals a preference for near viewing in the human parieto-occipital cortex. NeuroImage, 36 (1), 167–187.

Rizzolatti, G., & Luppino, G. (2001). The cortical motor system. Neuron, 31, 889–901.

Rizzolatti, G., Scandolara, C., Matelli, M., & Gentilucci, M. (1981a). Afferent properties of periarcuate neurons in macaque monkeys: II. Visual responses. Behavioural Brain Research, 2, 147–163.

Rizzolatti, G., Scandolara, C., Matelli, M., & Gentilucci, M. (1981b). Afferent properties of periarcuate neurons in macaque monkeys: I. Somatosensory responses. Behavioural Brain Research, 2, 125–146.

Robinovitch, S. N. (1998). Perception of postural limits during reaching. Journal of Motor Behavior, 30, 352–358.

Rochat, P. (1995). Perceived reachability for self and other by 3-to 5-year old children and adults. Journal of Experimental Child Psychology, 59, 317–333.

Rochat, P., & Wraga, M. (1997). An account of the systematic error in judging what is reachable. Journal of Experimental Psychology: Human Perception and Performance, 23, 199–212.

Sainburg, R. L. (2002). Evidence for a dynamic-dominance hypothesis of handedness. Experimental Brain Research, 142 (2), 241–258.

Sainburg, R. L. (2005). Handedness: Differential specializations for control of trajectory and position. Exercise and Sport Sciences Reviews, 33 (4), 206–213.

Sainburg, R. L., Ghez, C., & Kalakanis, D. (1999). Intersegmental dynamics are controlled by sequential anticipatory, error correction, and postural mechanisms. Journal of Neurophysiology, 81 (3), 1045–1056.

Sainburg, R. L., & Kalakanis, D. (2000). Differences in control of limb dynamics during dominant and nondominant arm reaching. Journal of Neurophysiology, 83 (5), 2661–2675.

Salenius, S., Schnitzler, A., Salmelin, R., Jousmäki, V., & Hari, R. (1997). Modulation of human cortical rolandic rhythms during natural sensorimotor tasks. NeuroImage, 5 (3), 221–228.

Salmelin, R., & Hari, R. (1994). Spatiotemporal characteristics of sensorimotor neuromagnetic rhythms related to thumb movement. Neuroscience, 60 (2), 537–550.

Sartori, L., Becchio, C., Bara, B. G., & Castiello, U. (2009). Does the intention to communicate affect action kinematics? Consciousness and Cognition, 18, 766–772.

Schaefer, S. Y., Haaland, K. Y., & Sainburg, R. L. (2007). Ipsilesional motor deficits following stroke reflect hemispheric specializations for movement control. Brain, 130, 2146–2158.

Schaefer, S. Y., Haaland, K. Y., & Sainburg, R. L. (2009). Hemispheric specialization and functional impact of ipsilesional deficits in movement coordination and accuracy. Neuropsychologia, 47 (13), 2953–2966.

Schwebel, D. C., & Plumert, J. M. (1999). Longitudinal and concurrent relations among temperament, ability estimation, and injury proneness. Child Development, 70, 700–712.

Scorolli, C., Miatton, M., Wheaton, L. A., & Borghi, A. M. (2014). I give you a cup, I get a cup: A kinematic study on social intention. Neuropsychologia, 57, 196–204.

Shadmehr, R., Smith, M. A., & Krakauer, J. W. (2010). Error correction, sensory prediction, and adaptation in motor control. Annual Review of Neuroscience, 33, 89–108.

Shelton, P. A., Bowers, D., & Heilman, K. M. (1990). Peripersonal and vertical neglect. Brain, 113 (1), 191–205.

Sirigu, A., & Duhamel, J. R. (2001). Motor and visual imagery as two complementary but neurally dissociable mental processes. Journal of Cognitive Neuroscience, 13, 910–919.

Sommer, R. (1959). Studies in personal space. Sociometry, 23, 247–260.

Stevens, J. A. (2005). Interference effects demonstrate distinct roles for visual and motor imagery during the mental representation of human action. Cognition, 95, 329–350.

Stock, A., & Stock, C. (2004). A short history of ideo-motor action. Psychological Research, 68, 176–188.

Symes, E., Ellis, R., & Tucker, M. (2005). Dissociating object-based and space-based affordances. Visual Cognition, 12, 1337–1361.

Teneggi, C., Canzoneri, E., di Pellegrino, G., & Serino, A. (2013). Social modulation of peripersonal space boundaries. Current Biology, 23, 406–411.

ter Horst, A. C., van Lier, R., & Steenbergen, B. (2011). Spatial dependency of action simulation. Experimental Brain Research, 212, 635–644.

Thill, S., Caligiore, D., Borghi, A. M., Ziemke, T., & Baldassarre, G. (2013). Theories and computational models of affordance and mirror systems: An integrative review. Neuroscience and Biobehavioral Reviews, 37, 491–521.

Tranel, D., Damasio, H., & Damasio, A. R. (1997). A neural basis for the retrieval of conceptual knowledge. Neuropsychologia, 35, 1319–1327.

Tranel, D., Kemmerer, D., Adolphs, R., Damasio, H., & Damasio, A. R. (2003). Neural correlates of conceptual knowledge for actions. Cognitive Neuropsychology, 20, 409–432.

Tresilian, J. R., Mon-Williams, M., & Kelly, B. M. (1999). Increasing confidence in vergence as a cue to distance. Proceedings of the Royal Society of London B, 266, 39–44.

Tucker, M., & Ellis, R. (1998). On the relations between seen objects and components of potential actions. Journal of Experimental Psychology: Human Perception and Performance, 24, 830–846.

Tucker, M., & Ellis, R. (2001). The potentiation of grasp types during visual object categorization. Visual Cognition, 8, 769–800.

Wamain, Y., Gabrielli, F., & Coello, Y. (2015). EEG μ rhythm in virtual reality reveals that motor coding of visual objects in peripersonal space is task dependent. Accepted for publication.

Weiss, P. H., Marshall, J. C., Zilles, K., & Fink, G. R. (2003). Are action and perception in near and far space additive or interactive factors? NeuroImage, 18, 837–846.

Winstein, C. J., & Pohl, P. S. (1995). Effects of unilateral brain damage on the control of goal-directed hand movements. Experimental Brain Research, 105 (1), 163–174.

Witt, J. K. (2011). Action’s effect on perception. Current Directions in Psychological Science, 20 (3), 201–206.