1 Introduction
The concept of “embodiment” is often considered with respect to cognitive functions. For example, the concept of ‘embodied cognition’ holds that cognitive functions have a physical foundation (“grounding”) in the real world, that is, a foundation in the use of the body, its sensorimotor system, and its interactions with the physical surroundings (“situated”, relating also to the so-called “4e” of the cognitive sciences: “embodied”, “embedded”, “enactive”, “extended”). These concepts have roots in philosophy and in many scientific fields such as psychology and robotics (see Wilson 2002). The embodiment concept also exerts strong influences in philosophy (e.g. ‘philosophy of mind’, ‘philosophy of technology and sciences’) and in psychology (especially cognitive psychology), where linkages have been postulated between cognitive performances and the body that shapes and generates them (an example is the concept of common coding of action and perception in ideomotor theories; Prinz 1987; Hommel et al. 2001). Embodiment is thus often addressed as an interdisciplinary link across research fields such as psychology, philosophy, and linguistics.
In robotics, there exist early attempts at cybernetic demonstrations of embodied intelligence. For example, simple small ‘tortoise’ robots were made to show, using simple electronics, seemingly intelligent behavior by seeking out, probing, and charging their batteries from electric outlets (Grey Walter; see Holland 2003). Later, the embodiment concept more directly influenced research in artificial intelligence (AI), with impacts on robotics (‘subsumption control architecture’ for ‘behavior-based robotics’; e.g. Brooks 1991, 1999). One argument was that using symbolic representations for controlling robots in the world is less efficient than basing the control development on the sensorimotor hardware and its interaction with the physical properties of the environment. Since the early development of AI in the second half of the last century, some of the formerly intriguing problems have found solutions using new computer technologies such as neural networks and probabilistic techniques. Yet, a surprising finding in this field is ‘Moravec’s paradox’ (Moravec 1988), stating that the computational effort for reasoning tends to be relatively small compared to that for controlling sensorimotor skills in the terrestrial environment.
While many embodiment concepts focus on cognitive skills, others deal with the control of an agent’s sensorimotor skills, and still others consider very basic aspects such as ‘intelligent’ body morphologies and materials, i.e. features that resulted in phylogeny from adaptation to given surroundings under ecological pressure for specialized functions – often paralleled by corresponding developments of the peripheral and central nervous systems. An example is the adaptation of the shape and material of fish fins for optimal swimming capabilities, one of many examples from the ‘animal kingdom’ that have attracted attention in robotics (Pfeifer et al. 2007). The issues addressed range from such basic sensorimotor questions (e.g. Pfeifer and Bongard 2006) over the ‘structural coupling’ between agent and environment up to high-level functions such as cognition and, further, to social abilities (see the list of Ziemke 2003), and ultimately to considerations of a technical evolution of ‘embodied robotics’ in which artificial forms of self-consciousness may emerge (Mainzer 2009).
The question of the grounding and situatedness of human embodied sensorimotor and cognitive skills became especially prominent when roboticists tried to implement them in robots, particularly in the field of humanoid robotics, where human-robot interaction (HRI) represents a challenge for the future, for example in making robots intuitively understandable to humans (Miller and Feil-Seifer 2017). The term ‘humanoid’ robot refers to a morphological similarity to humans, but often also includes functional and cognitive aspects. In fact, robotic researchers often attempt to technically mimic human perceptual, cognitive and sensorimotor functions and to use these, or inspirations thereof, for robotic implementations. Early examples are the Cog Project and its criticisms of AI, positing that the intelligence of robots has to be embodied and that developing AI in robots requires interaction with the world (Brooks et al. 1999). This approach involved embodiments of sensors and actuators and their use in a bottom-up control of sensorimotor functions. There exist, however, also problems with these approaches. One problem is that not everything can be planned in advance when a skill is embodied in a robot, because non-linearities in complex systems may lead to unwanted emergences1 (Mainzer 2009). Using reliable mechanisms, such as those that exist in biology, may help to reduce this risk.
Inspirations for human-inspired approaches in humanoid robotics may be drawn from textbook knowledge of human sensorimotor functions, as in the work of Hyon et al. (2007), who suggested, vice versa, also drawing inspiration from work with humanoid robots for the understanding of human functions (Cheng et al. 2007; Mergner 2007). In fact, abstracting new theoretical concepts of sensorimotor function from human experiments, and ‘re-embodying’ corresponding models in robots, appears promising for future research. The approach provides inspiration for roboticists and neuroscientists alike. For example, neuroscientists may use it as a proof-of-principle test when confronting their human models in the robot with ‘real world challenges’ from sensor and motor noise and from inaccuracies and mechanical dead zones (e.g. Mergner et al. 2009).
Attempts to achieve human likeness of robots are especially strong in Japan, a country facing an aging society that will need more and more domestic help in the future. The spectrum of this approach is very broad, reaching from human-like communication behavior to human-like movement styles and materials – i.e. features important for human-robot interaction (HRI), meanwhile a research field in its own right. It investigates the effects that the function and design choices of humanoids may have on humans, the differences between physical robots and robots in virtual form, and the role of the physical environment (Miller and Feil-Seifer 2017; Funk 2014).
Humanoid roboticists who aim to implement human cognitive and sensorimotor skills may face the problem that these are not yet available in terms of a blueprint model. An example is the cognitive sense of self-agency (SSA), i.e. the awareness of whether a given physical impact that an agent is experiencing stems from self-produced sensorimotor activity or has an external source in the environment. This awareness involves the use of a body, sensors and actuators as well as the development of sensorimotor control functions. In philosophy, the SSA has its roots in the differentiation between self-awareness (first-person perspective) and awareness of other impact sources (second-/third-person perspective). Bodily expression on the basis of this distinction is emphasized e.g. by Ludwig Feuerbach (1983) and Helmuth Plessner (2003), but also in current approaches of dialogical logics and constructivism (e.g. Lorenzen and Lorenz 1978; Lorenz 2009; Mittelstraß and von Bühlow 2015). In the human sciences literature, there exist several versions of the cognitive SSA (David et al. 2008) and, as a possible basis for these, a bottom-up SSA in sensorimotor control. The latter has recently been modeled in a combined neurological and robotic (‘neurorobotics’) approach, as will be described in this chapter.
This chapter combines the views of a philosopher, a roboticist and a neurologist. It first addresses philosophical and then robotic issues related to embodiment and anthropomorphization. Thereafter, neurorobotics issues are addressed, followed by the embedding of the SSA in somatosensory control together with the specifics required for robotic implementation. In the Conclusions, the neurorobotics approach is compared to more theoretical approaches.
2 The Embodiment Concept – Philosophical Roots
Only a few decades have passed since roboticists took up the long-existing idea of philosophers that mental functions such as cognition have a foundation in the agent’s body and its interaction with the world (see Sect. 1). In view of current and future developments it may be worthwhile, or even desirable, to recall and consider related philosophical issues. Both the meaning of embodiment and that of humanoid robots are explicitly or implicitly related to fundamental philosophical conceptualizations, which reach back to ancient and to early modern debates. For instance, the concept of embodiment includes several understandings of what a body actually is and how it is linked to mental phenomena, the mind or soul. Humanoid robots are by definition characterized by human-like appearance or functional features. This includes the human embodiment, and with it the body-mind-problem. But several forms of anthropomorphization also shape the ways in which we humanize and embody robots.
2.1 “Embodiment”: Classical Meanings of ‘Body’ and the ‘Body-Mind-Problem’
As to terminologies, soma (ancient Greek) and corpus (Latin) are historically early terms that have led to our modern term body. A genuine German expression, with no direct correspondence in English, is Leib. It refers to a specific kind of living body, one that belongs to a concrete person or, at least, to a being with a soul. Therefore, the term Leib includes the meaning of the two modern English terms body and corpus (in German Körper), but sometimes also reaches beyond them. Discussing the Leib-Seele-Problem implies the relation between living body and soul. In contrast, the Körper-Geist-Problem focuses on the relation between a physical body and the separate mind (Borsche 1980; Mainzer 2010; Mittelstraß 2010; Wimmer 2010). Notably, the English term embodiment combines both Körper-Geist and Leib-Seele in the philosophical term ‘body-mind-problem’. Although its origins reach back to ancient philosophy and religious thinking, the body-mind-problem shapes debates to this day, e.g. in the fields of ‘philosophy of mind’, the cognitive sciences, the neurosciences, philosophy of technology, and robotics. Here, especially dualistic and monistic approaches as well as various kinds of causal relations (e.g. emergences) will be considered.
In early modern Cartesian and French materialist philosophy the body was conceived as a certain kind of machine that obeys the universal natural laws of bodily life. At that time, the expression Leib was also often used, initially mainly in the context of human subjective experience, before becoming scrutinized in various ways. Gottfried Wilhelm Leibniz defined Leib as a manifestation of human cognition, characteristic of the limited and finite epistemic perspective of human behaviour. Following Leibniz, Immanuel Kant interpreted Leib as a conjunction between nature and subjectivity that serves as a perspectival vehicle in human rationality and consciousness. There have also been alternative mechanistic and scientific-empirical concepts viewing Leib as a feature of human subjectivity, for instance by Christian Wolff (Kaulbach 1980). Later, in nineteenth-century materialist approaches, natural-science definitions prevailed, e.g. in the work of Büchner, Vogt or Haeckel (Specht 1980), while Feuerbach and Nietzsche can be seen as advocates of a genuine Leibphilosophie (philosophy of the lived body), which was developed further from anthropological (Helmuth Plessner) and phenomenological viewpoints (see also the elaboration of a systematic philosophy of Leib by Hermann Schmitz in the second half of the twentieth century; Wimmer 2010, pp. 500–502).
Exploring and defining the interrelations and the differentiation of Leib (body) versus soul belong to the oldest problems in philosophy, and they became a central issue with the rise of Cartesianism in early modernity. An important question concerns the role of the soul in the differentiation between animated and non-animated bodies (Specht 1980, pp. 185–186; Mittelstraß 2010, p. 525). René Descartes formulated a dualistic approach with a stringent distinction between matter (res extensa) and mind (res cogitans). Even today the cause-effect relationship between matter/body and mind is still controversially discussed, both in Cartesian and non-Cartesian approaches (Specht 1980, pp. 192–193; Mittelstraß 2010, p. 525). In Leibniz’ concept of prästabilierte Harmonie (pre-established harmony) the soul is interpreted as a principle of gestalt and process, closely related to ancient philosophical approaches of a forma substantialis. Due to godly programming, the realm of final causes and souls is synchronized with the realm of nature and its laws. Although body and mind are separated, they harmonically interrelate in Leibniz’ approach (Specht 1980, p. 195; Mittelstraß 2010, p. 525). In the current philosophy of mind and in the school of ordinary language philosophy, by contrast, the body-mind-problem is discussed in the tradition of Ludwig Wittgenstein and Gilbert Ryle. Here the Cartesian body-mind-dualism is, for methodical reasons, replaced by a language-critical approach in which mind and matter/body are not interpreted as objects, but as words with a concrete meaning in everyday language (Rentsch 1980; Mittelstraß 2010, p. 526).
- 1. a) Terminological clarification including reconstruction of implicit assumptions and pre-understanding; b) reconstruction of the methodical operation and of the applied concepts of monism and dualism, including related models of causality; c) ethical assessment: normative, epistemic and methodological values and their critique (philosophical perspective);
- 2. a robotic body is conceptualized as an environment-dependent feedback system built on causal levels of emergence and physical organization, including a sense of self-agency (neurological perspective, holistic monism: intimate embedding of cognition into the sensorimotor body, and intimate embedding of the sensorimotor body into its environment);
- 3. for epistemic reasons, processes of disembodiment and re-embodiment of cognitive functions are used in order to enable successful humanoid functionality (engineering perspective, functional dualism: applying a bionic approach, the human body is seen as the standard; for technical replication its cognitive sensorimotor processes are externalized – following the paradigm of analogue computing and electric circuits – and later internalized in a robotic body).

Transdisciplinary approach to a methodical and operational interpretation of the body-mind-problem in humanoid robotics
2.2 “Humanoid Robots”: Anthropomorphizing Robotic Embodiment
By definition, humanoid robots are characterized by a human-like appearance and/or functionality (Christaller et al. 2001, pp. 87–88; Decker 2010, p. 45; Hirukawa 2007, p. 1; Knoll and Christaller 2003, pp. 12–14; Küppers 2018, pp. 31–32). The classical definition understands anthropomorphism as the appearance of gods in human shape (ánthrōpos = human, morphé = form). In ancient Greek religion both the material form and the behaviour of gods were described in analogies to mankind (Lanczkowski 1971, p. 376; Hülser 2005, p. 157). For Immanuel Kant, symbolic anthropomorphism relates to the discourse form in which god is described using humanoid terms. Linguistic accounts of anthropomorphic wording need to be explicitly identified in order to enable methodologically adequate descriptions of the object under consideration (Schütte and Fabian 1971, p. 377; Hülser 2005, p. 158). Today, by using technical and natural-scientific means, we are often humanizing technical systems. As a result, anthropomorphization also plays a crucial role in humanoid robotics.
- (1) Anthropomorphizing linguistic embodiment I. Reminiscent of symbolic anthropomorphism in the Kantian meaning, current habits of wording play a crucial methodical role in technologies. Linguistically, one way to talk about humanoid robots is to address the resemblance of “humanoid” robots by describing them with terms that relate to humans’ everyday life – as if they were “smart”, “autonomous”, “intelligent” or had a “free will”. This form of anthropomorphism easily leads to categorical misjudgments because normative terms receive their meaning in interpersonal dialogues. Since robots are material artifacts, tools and other means for human ends, adequate wording for technical functions must fulfill linguistic criteria of technological means-end rationality (Janich 2006, 2012; Decker 2010, p. 42; Funk et al. 2018, p. 382).
- (2) Anthropomorphizing linguistic embodiment II. Not only talking about robots, but also talking with and through robots can be seen as a form of anthropomorphism (Funk et al. 2018, p. 374). We treat robots as if they were social actors because we talk to them. Historically, this is a very recent phenomenon, compared to the use of hand axes millions of years ago or even of classical (not self-driving) cars nowadays. The voice has been discovered as a human-machine interface, which underlines that verbal expression is no longer exclusive to human social interaction. With the voice interface, we are anthropomorphizing speech-operated systems like humanoid robots. Relations between physical embodiment and language have therefore become an important research topic in the philosophy of technology (Coeckelbergh and Funk 2018; Coeckelbergh 2017) and AI (Mainzer 2019).
- (3) Anthropomorphizing material embodiment. Complementary to the linguistic appearance is the material appearance: humanoid robots, especially androids, may look in an embodied way as if they were humans. This is the most obvious anthropomorphism. Masahiro Mori described a psychological “uncanny valley” effect: if robots are made so humanoid that they are erroneously taken for humans at first appearance, people may lose trust and experience an eerie feeling when they recognize the machine instead of a human person at second glance (the ‘uncanny valley’ of Mori 2012). Humanoid embodiment of robots might cause confusion and a feeling of uncertainty in human-machine interaction. On the other hand, humanoid appearance can also generate trust, when the identity as a robot is made explicit from the beginning of a confrontation (Funk et al. 2018, p. 378).
- (4) Anthropomorphizing functional embodiment. Still another form of anthropomorphism refers to the material functionality: robots that function as if they were humans. Important issues related to the operational and functional morality of robots have been discussed, for example, by Wallach and Allen (2009). Not to forget: robots are means to an end, and with an alternative morphology (e.g. non-bipedal), robots might function much better for some tasks (in terms of means-end rationality and technical efficiency) (Decker 2010; Funk et al. 2018, pp. 371–372, p. 383). Thus, setting up human-like functions as the aim for robots, in whatever physical shape, is another way of anthropomorphizing.
- (5) Anthropomorphizing epistemic embodiment. Epistemic anthropomorphization relates to the inquisitive struggle to understand what mankind actually is. Following Richard Feynman, engineers apply the principle: “What I cannot create, I do not understand.” Building human-like machines is also a certain way to learn what it means to be human, from an engineering point of view (Decker 2010, pp. 48–49; Funk et al. 2018, p. 372). Thus, by creating humanoid robots we may want to figure out the unique features of our human embodiment.
- (6) Anthropomorphizing sociocultural and ideological embodiment. Last but not least, human ideologies and conceptions may also be included in embodiment considerations (Decker 2010, pp. 42–48; Funk et al. 2018, p. 382). Humanoid robotics makes epistemic assumptions and world views explicit when these serve as mental and cultural projection surfaces, or as their social mirror (Funk 2014, pp. 70–71). With these often implicit and unintended projections we tend to anthropomorphize humanoid robotics even more.

Conceptualizing robot anthropomorphization
3 Robotic Embodiment Modes
There are several definitions and modes of embodiment that can be applied to robots. The list proposed by Ziemke (2003) distinguishes: (i) structural coupling, (ii) physical embodiment, (iii) organismoid embodiment, (iv) historical embodiment, (v) organismic embodiment, and (vi) social embodiment.
Structural coupling (i), defined as the presence of perturbatory channels between agent and environment, applies to a sensorized humanoid in contact with the environment. As a very general definition, it includes simulated agents. The definition of physical embodiment (ii) is in this respect stricter: it requires a physical instantiation of the agent and of its interaction with the physical world. From a theoretical control-system point of view, both (i) and (ii) can be seen as the interaction between dynamic systems. The experience provided by humanoid experiments in a human experimental set-up and by the use of model simulations, considered below, showed that it is difficult to fully replicate in simulated environments the complexity of real-world sensors, actuators and environments with all their implications.
The concept of organismoid embodiment (iii) refers to the way in which a body interacts with the environment, but is very restrictive in that it is limited to organism-like bodies. The humanoid morphology of a body standing on two legs implies that the control system should in some way consider the issues of human sensorimotor control (described later), such as the relationships between the physics of the body and space. For this reason, some authors claim that giving a robot a human-like morphology is the only way for the robot to learn to cope with the human world (e.g. Dreyfus 1996). This notion is similar to the concept behind the physical embodiment (ii) definition: it is hard to imagine that the complexity of human sensorimotor control can be generated without a human-like physical experience. Also, when one uses humanoids to study human sensorimotor control, one may want to understand in which sense the humanoid behaves human-like. There are considerable differences between human muscles and robot actuators, in terms of control, performance and kinematics (e.g., robot legs may always be kept bent to avoid a singular configuration, as in Ott et al. 2016). And posture control and balancing, as used in the human and robot experiments, typically demand only moderate joint torques, at least compared to other tasks occasionally performed by humans such as running or jumping. Performing clearly below the upper limits of the actuation implies that the behavior is not shaped by those limits, but rather by the control system and the body dynamics.
The other three forms of embodiment have less relevance for humanoid control in the present context. Historical embodiment (iv) refers to the result of the history of interaction between agent and environment and the adaptations thereof. The historical evolution of the system can be observed when integrating learning into the bio-inspired control, as in the experiments of Lippi (2018). These demonstrated that it may not always be obvious in such experiments whether the parameter adaptations associated with learning should be considered fundamentally different from those resulting from the perturbation of the internal states, at least when both arise almost simultaneously during interaction with the environment. Organismic embodiment (v) refers to biological agents and their evolution in the environment. Since the human-inspired robot control is designed as a model of a living agent that reflects an evolution in the environment, one may consider it, despite its fixed design, an ‘evolved’ agent. Lastly, social embodiment (vi) addresses the role of embodiment in social interactions. In fact, the control of poses and movements can be important from the social point of view, but this aspect has rarely been considered in humanoid robot sensorimotor experiments so far. On the other hand, especially when it comes to human-robot interaction and social robots that interact as service devices with human persons – e.g. in care applications – social embodiment receives a specific functional interest for the design of humanoids.

Efference copy (EC) mechanism. (a) The EC, a replica of the motor command, down-streams from memory a prediction of the sensory consequences of the agent’s own motor action for comparison with the actual sensory inflow (bottom). The result is used in sensorimotor control (full line) and perception (dashed) to distinguish between self-produced versus external sensory inflow from physical body impacts that, if uncompensated, disturb sensorimotor control. (b) Comparison mechanism. Sensory inflow from external impact (upward input) is time-delayed (Δt), and the detection threshold th shields the motor output from sensory noise in the resting state. Predicted sensory inflow (downward input) cancels the external disturbance signal (subtraction) before determining the output (Disturbance Compensation). When external and self-produced disturbances occur in overlap, the superposition property applies approximately (threshold is small). For perception and learning, the signal from subtraction c-b represents the self-produced disturbance and b the external disturbance (the noisy estimate a is dismissed). (Compare Mergner 2010)
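The comparison mechanism of panel (b) can be made concrete in a few lines of code. The following is a minimal illustrative sketch, not the original model: it assumes a single scalar disturbance channel, an arbitrary delay and threshold, and a prediction signal that is already delay-aligned with the sensory inflow; all names and values are our own.

```python
import numpy as np

def ec_comparator(sensory_inflow, predicted_inflow, delay_steps=3, threshold=0.05):
    """Toy efference-copy comparator (cf. panel b of the figure).

    sensory_inflow: measured disturbance signal containing self-produced
    and external parts; predicted_inflow: prediction of the self-produced
    part derived from the efference copy, assumed already delay-aligned.
    Returns the estimated external disturbance used for compensation.
    """
    inflow = np.asarray(sensory_inflow, dtype=float)
    n = len(inflow)
    # external impacts reach the comparator with a time delay (Delta-t)
    delayed = np.concatenate([np.zeros(delay_steps), inflow[:n - delay_steps]])
    # subtracting the prediction cancels the self-produced part
    residual = delayed - np.asarray(predicted_inflow, dtype=float)
    # the detection threshold shields the motor output from sensory noise at rest
    residual[np.abs(residual) < threshold] = 0.0
    return residual
```

Feeding in a self-produced oscillation plus an external step, the residual is silent while only the predicted self-produced inflow is present and reports the external disturbance once it occurs.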
In the next section, the above rather broad view on robotic embodiments will be narrowed down to the embodiment of human functions following the design perspective:
Anthropomorphizing material (3) and functional embodiment (4) (see Fig. 2) go hand in hand, from both engineering and neurological viewpoints, with (i) structural coupling, (ii) physical embodiment and (iii) organismoid embodiment. This will lead us to the concept that human-identified control mechanisms can be ‘re-embodied’ in robots for proof-of-principle and research purposes.
4 Embodiment of Human Sensorimotor Functions and Re-embodiment into Humanoid Robots
The concept of embodied cognition holds that cognitive functions are discernible in early stages of child development primarily in sensorimotor functions, before ontogenesis and the interaction with the world, including other agents, add more abstract functions such as mental reasoning (e.g. Piaget 1961; see Guidi and Morabito 2017). There exists considerable research on related embodiment concepts spread across various research fields. In the cognitive sciences, for example, the rather abstract ‘Sensorimotor Approach’ to perception aims to include expertise from several fields such as dynamical systems, psychology and neuroscience for testing operationally defined models that combine concepts of sensorimotor environment, coordination, and strategies (Buhrmann et al. 2013). In robotics, a special humanoid robot (iCub) has been designed with the aim of fostering research on how a small child ontogenetically develops cognitive and related sensorimotor functions (Metta et al. 2010).
The ‘human-to-humanoids (H2H)’ approach focuses on developing human-inspired controllers and, to this end, investigates the human motor system. The aim is to formally capture the dynamics and kinematics of human movements in a form that can be implemented in robots under optimality criteria, with consideration of noisy and time-delayed signals (Ivaldi et al. 2012). Pezzulo et al. (2011) presented an impressive list of problems to be solved when considering ‘the mechanics’ of embodiment. For example, researchers could simplify the complexity of the processing tasks by dismissing modalities such as abstraction, motivation or affect. Yet, there remain complications due to the large number of cognitive functions and their differentiation, furthermore difficulties in the separation of cognitive from perceptual constituents, effects of modifiers such as attention or automatization, and the transformation from modal (sensory) to amodal codes in information processing and storage. And, last but not least, cognitive abilities are not “given”, but learned in somatosensory activity. Debatably, the authors point out that a simulation model might prove the validity of a finally achieved concept, given that it can be used both for comparison with empirical data and as a working tool for abstraction, provided robustness and versatility are given when changing tasks.
Conceiving that such top-down approaches may be too complex to get a grip on the embodiment concept, an alternative would be to focus on bottom-up approaches that start, for example, with the agent’s morphology and materials (see Sect. 1). In fact, humanoid robots are often given some human-inspired body morphology (see above, (3) Anthropomorphizing material embodiment). But deriving far-reaching claims for cognitive functions from such approaches is questionable. An intermediate approach would be to focus on the bodily roots of cognitive functions in the sensorimotor control of humans, taking a neurological approach. The functionality may be represented and tested in computational models, with the drawback, however, that the models of agent and world properties may not fully match reality. Yet, one could add a further step to the evaluation by using the model to build a ‘production system’ of sensorimotor behaviour that contains important mechanisms of the considered cognitive function and using it for corresponding ‘proofs of principle’ in humanoid robots.
However, to give such an approach pervasive power from a neurological viewpoint, it should satisfy several demands. Importantly, the motor actions of the robot should be stable in the face of disturbances that challenge upright body posture. From the control viewpoint, this stability has to be maintained despite feedback time delays, noise, and inaccuracies in the system. Furthermore, an answer should be given to the question of how the mechanism deals with the many degrees of freedom (DoF) an agent usually uses in sensorimotor behaviours. Finally, the envisaged solution should account for both external perturbations, such as gravity or a push from another agent, and self-produced perturbations, such as an intersegmental coupling force (e.g. when a moving segment exerts a push-off force on its supporting base, another segment). Both the cognitive and the sensorimotor distinction between external disturbances, requiring ‘reactive’ postural adjustments, and self-produced disturbances, to be associated with ‘proactive’ adjustments, is highly relevant for learning action ownership. For the cognitive SSA, this task draws on high-level brain mechanisms that compare and weight the available evidence of ownership on the basis of experiences, expectations, and matches/mismatches among sensory cues (David et al. 2008).
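Why feedback time delays are such a demanding constraint can be illustrated with a toy simulation: a single-link inverted pendulum, roughly a human-sized body balancing about the ankle joints, controlled by PD feedback acting on time-delayed sensor readings. All parameters below are illustrative assumptions, not fitted to the human or robot data discussed in this chapter.

```python
import numpy as np
from collections import deque

def simulate_balance(delay_s, kp=1200.0, kd=300.0, dt=0.001, t_end=8.0):
    """Single-link inverted-pendulum stance control with feedback delay.

    Illustrative parameters: point mass m at centre-of-mass height h above
    the ankle, PD torque computed from delayed angle/velocity readings.
    Returns the final lean-angle magnitude (rad), capped at 1.0 ('fallen').
    """
    m, h, g = 70.0, 0.9, 9.81            # mass, CoM height, gravity (assumed)
    J = m * h ** 2                        # moment of inertia about the ankle
    d = int(round(delay_s / dt))          # feedback delay in simulation steps
    theta, omega = 0.02, 0.0              # small initial lean
    buf = deque([(theta, omega)] * (d + 1), maxlen=d + 1)
    for _ in range(int(t_end / dt)):
        th_d, om_d = buf[0]               # delayed sensory readings
        torque = -kp * th_d - kd * om_d   # PD 'ankle' torque
        # gravity torque destabilizes (m*g*h ~ 618 Nm/rad here), so the
        # proportional gain kp must exceed it for balance to be possible
        alpha = (m * g * h * np.sin(theta) + torque) / J
        omega += alpha * dt               # explicit Euler integration
        theta += omega * dt
        buf.append((theta, omega))
        if abs(theta) > 1.0:
            return 1.0                    # fallen over
    return abs(theta)
```

In this toy setting, the loop rides out a biologically plausible 50 ms delay, whereas at 300 ms the simulated body falls; the example is only meant to make the stability demand concrete, not to reproduce the published control model.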
Such comparison functions are found, in different versions, already in primitive animals (Crapse and Sommer 2008). For self-produced disturbances, they often include sensory prediction, which accounts for biological feedback time delays and thereby improves sensorimotor control stability. Since robotic embodiments of human cognitive functions are currently still speculative, we focus here on human sensorimotor functions, which use an SSA comparator mechanism (‘somatosensory SSA’). As will be described further below, its robotic embodiment required us to make ‘amendments’ to standard SSA models. Furthermore, the robot had to be equipped with human-inspired sensors, actuators and motor control systems. Only then did the ‘re-embodiment’ of the human control into humanoid robots allow us to perform ‘proof of principle’ experiments and Turing test-like human-robot comparisons.
To explain the procedure, we briefly describe the study of Hettich et al. (2014), which contained the SSA in its control (details in Mergner 2010). In a first step, human experimental data for a sensorimotor control task (balancing upright stance on a moving support base) were collected. From this, a formal description of the control was obtained, which reproduced the human data in model simulation. Then, the model was ‘re-embodied’ into a human-inspired robot, and the experiments were repeated on the same test bed. The approach provided a ‘real world robustness’ test for the model (under the premise that noise, inaccuracies and non-linearities such as dead bands in the robot are not fundamentally different from those in humans). And, in the aforementioned Turing-like test, the robot data compared favourably to the human data. On the way, however, several corrections had to be made, with mutual inspiration and benefits for both the neurological and the robotic approach (compare Mergner and Tahboub 2009).
As will be described below, an important finding in the modelling was that the human sensorimotor comparator mechanism may also lend itself as a building block for a perceptual SSA, which from the cognitive viewpoint may appear as an emergence.
5 Embodiment of the Sense of Self-Agency in Somatosensory Control
The human sensorimotor control is strongly shaped by the laws of physics governing the body and its interaction with the terrestrial environment. For example, gravity effects tend to perturb the joint torques the body produces for poses and movements and thereby disturb sensorimotor control. Disturbances that occur stereotypically in relation to external events or to self-generated actions tend to be learned together with their compensation by the postural control mechanisms, and then often remain subconscious. The task of the postural mechanisms is to ensure that voluntary poses and movements are executed as desired. If they are impaired in humans or are inadequately implemented in robots, severe motor deficits result.
When describing these mechanisms below, we make use of testable ‘signal flow diagram’ models.4 Considering the mechanisms underlying the somatosensory SSA, the signal flow is, apart from motor commands, mainly ‘bottom-up’, i.e. starting from the sensors, in distinction to the ‘top-down’ cognitive SSA and other high-level sensorimotor tasks such as movement planning and supervision. The described embodiment of the somatosensory SSA builds on identified human mechanisms and on their implementation in robots for proof of principle and human-robot comparisons (see above, Sect. 4). To this end, the original SSA model required major amendments and had to be fitted into a stable and conflict-free sensorimotor control.
5.1 The SSA Concept: Distinguishing Between External and Self-Produced Body Impacts
This distinction is basic for learning and controlling sensorimotor functions and behavior. Considerable work has been devoted to this task in human somatosensory research (e.g. Saradjian 2015) and cognitive research (David et al. 2008). Both research fields typically refer to the ‘efference copy’ (EC) concept of von Holst and Mittelstaedt (see von Holst 1954). It represents a comparator mechanism in the sense that (a) self-agency is assumed if the sensory inflow predicted for self-produced motor activity is congruent with the actual sensory inflow, while (b) non-predicted constituents are interpreted as stemming from external impacts. Neuropsychologists use it to explain the absence of self-tickling (Blakemore et al. 1998), while some neuroscientists with an engineering background conceive it as a feed-forward mechanism that helps to stabilize motor control in the face of time delays and noise and to improve motor functions such as motor learning and trajectory planning (Wolpert et al. 1998; for more abstract considerations, see Pezzulo 2008).
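The comparator logic of the EC concept can be sketched in a few lines of code. The following is a minimal illustration, not a model from the literature; the function name, the linear ‘forward model’ (a single gain), and the threshold value are assumptions made for the example.

```python
# Minimal sketch of the efference-copy (EC) comparator idea: the sensory
# inflow predicted from the motor command is subtracted from the actual
# inflow; a residual above a detection threshold is attributed to an
# external impact.  Names, linear forward model and threshold are
# illustrative assumptions.

def ec_comparator(motor_command: float, actual_inflow: float,
                  forward_gain: float = 1.0, threshold: float = 0.05):
    """Return (self_component, external_component) of the sensory inflow."""
    predicted = forward_gain * motor_command   # EC-based sensory prediction
    residual = actual_inflow - predicted       # non-predicted constituent
    if abs(residual) <= threshold:
        return actual_inflow, 0.0              # fully self-attributed
    return predicted, residual                 # external impact detected

# Self-produced movement with matching reafference: residual below threshold,
# so the inflow is self-attributed (case a).
print(ec_comparator(1.0, 1.02))
# Passive perturbation superimposed on the movement: the non-predicted part
# is flagged as external (case b).
print(ec_comparator(1.0, 1.50))
```

The two cases correspond to (a) and (b) above: congruent inflow yields self-agency, while the non-predicted constituent is interpreted as an external impact.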

EC in functional context. Signal flow in three behavioral scenarios where a targeting head rotation in space, HS, is disturbed by trunk rotations in space, TS (HMS and TMS: head and trunk motor systems with motor commands HSc and TSc, respectively; HT, head-to-trunk rotation). Scenario 1: Active TS rotation. Its efference copy tsEC exerts two effects via the prediction signal tspr(t): it diminishes the HSc signal (summing junction 1), and it cancels via tspr(t-Δ) the sensory signal ts(t-Δ) from the TS Disturbance Estimator. Scenario 2: Prediction of passive TS rotation. Fed downstream from memory, the prediction of external TS rotation (Pred. of ext. TS) may use the same mechanisms as in Scenario 1. Scenario 3: Passive TS rotation (‘ext(ernal) TS’); the perturbation of HS is estimated with time delay Δ (after summing junction 3) and, after passing a threshold (THR), this estimate diminishes HS using tsnp(t-Δ) (subscript np, not predicted). For simplicity, actions refer here to small and slow centric head and trunk rotations in the earth-horizontal plane, so that field force impacts, e.g. of gravity, can be neglected

Disturbance estimation and compensation (DEC) model (Mergner 2010). (a) DEC control module. The input ‘Desired Movement’ commands the controller (C) to provide the joint torque required to produce a desired movement or pose (in Plant). Biomechanical tissue properties (‘Biom.’) provide an immediate negative feedback (green) in addition to the proprioceptive short-latency feedback (‘SL Loop’). The resulting servomechanism achieves correspondence between actual and desired movements or poses. In addition, external or self-produced disturbances are estimated from several sensory sources (boxes a-c). The estimates (1–4; see text) command the servo via the LL (long latency) loops to provide the extra torque required for disturbance compensation. (b) Network of information exchange between DEC control modules. The example shows how vestibular signals from the head are used to infer the kinematic state of the feet and the foot support surface (details in Lippi and Mergner 2017)
- 1.
Combination of raw sensory signals into physical variable estimates. Estimating, for example, head linear acceleration in a subway setting requires fusion of vestibular otolith and canal organ signals (see Mergner et al. 2009).
- 2.
Combination of physical variable signals into disturbance estimates.5 In the example, combining the head translation estimate with proprioceptive signals for a coordinate transformation along the body axis from head to feet provides an estimate of the kinematic state of the foot support surface in the subway (Mergner 2010; compare Fig. 5b).
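The coordinate transformation of step 2 can be illustrated for the simplest possible case: a planar, small-angle rotation estimate passed down the body axis. The function name, the single-axis restriction, and the example angles are assumptions for illustration only; the actual DEC transformations cover translations and multiple DoFs.

```python
# Illustrative sketch of step 2: a head-in-space rotation estimate is
# transformed down the body axis using proprioceptive joint angles to yield
# an estimate of support-surface rotation (planar, small-angle case; names
# and values are illustrative assumptions, not the DEC implementation).

def support_surface_rotation(head_in_space: float,
                             joint_angles: list) -> float:
    """Head-in-space angle minus the proprioceptive head-to-support chain."""
    # body-internal rotation between head and support, summed link by link
    head_to_support = sum(joint_angles)
    return head_in_space - head_to_support

# The head rotated 5 deg in space; neck, hip and ankle proprioception
# together report 3 deg of body-internal rotation, so the support surface
# itself must have rotated about 2 deg.
print(support_surface_rotation(5.0, [1.0, 1.5, 0.5]))  # -> 2.0
```

The point of the example is that the module downstream receives one physical-variable estimate (support rotation) rather than the raw transducer signals that entered the chain.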
For a given DoF of the human skeletomotor system, four physical impacts may perturb the joint torque: (a) force fields such as gravity, (b) contact forces such as a push and (c) rotation and (d) translation of the supporting body link or the external support surface. Dealing with these estimates instead of many ‘raw’ sensory transducer signals unburdens the control modules (and facilitates their ‘re-embodiment’ in robots). Inter-connections between control modules provide the basis for intersegmental coordination (see below and Fig. 5b). Yet, combining the information across the many DoFs of the whole body (with >200 joints of the skeletomotor system) at higher brain centers for motor planning, supervision and prediction represents a complex task.
Learning and subconsciously dealing with disturbance compensation includes the impacts of the ‘invisible’ field forces such as gravity (recall the anecdote that it took an apple falling on his head to make Newton aware of gravity). In contrast, external mechanical perturbations such as a support surface translation typically have a representation in consciousness as a learned and predictable external-world event (although, physically, it is actually the body’s inertia that causes the disturbance here). And it is the learned experience that may allow us to interpret the visually or haptically perceived activities of other agents (relevant for HRI, for example). The strength of gravity, especially, tends to be underestimated. Theoretically, its strength may pose problems for control stability in the face of the biological feedback time delays. Alerted to this problem in model simulations, McIntyre and Bizzi (1993) called into question the belief of many physiologists that the muscle stretch reflex represents the equivalent of the PD (proportional-derivative) controller – a standard tool of engineers, appreciated for its simplicity and stability. However, when used in the disturbance estimation and compensation (DEC) sensorimotor control described below, model and robot data obtained with the PD controller compared favorably with human data.
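The stability problem posed by feedback time delays can be made concrete with a small numerical experiment: a linearized inverted pendulum (gravity term a·θ) balanced by a PD controller acting on delayed state information. All parameter values below are illustrative assumptions chosen for the demonstration, not fits to human data or to the simulations of McIntyre and Bizzi.

```python
# Numerical sketch of the delay-stability issue: a delayed PD controller
# balancing a linearized inverted pendulum,
#     theta'' = a*theta - kp*theta(t-d) - kd*theta'(t-d)
# Parameters (a, kp, kd, initial tilt) are illustrative assumptions.

def simulate(delay_s, a=9.0, kp=18.0, kd=6.0, dt=0.001, t_end=10.0):
    """Return |theta| at the end of the run (or at fall-over)."""
    steps = int(t_end / dt)
    lag = round(delay_s / dt)
    th = [0.05] * (lag + 1)      # angle history, starts tilted 0.05 rad
    thd = [0.0] * (lag + 1)      # angular velocity history
    for _ in range(steps):
        th_d, thd_d = th[-lag - 1], thd[-lag - 1]  # delayed feedback values
        acc = a * th[-1] - kp * th_d - kd * thd_d  # gravity minus PD torque
        thd.append(thd[-1] + acc * dt)             # semi-implicit Euler step
        th.append(th[-1] + thd[-1] * dt)
        if abs(th[-1]) > 1.0:    # fell over: stop early
            break
    return abs(th[-1])

print(simulate(0.1))   # short delay: settles near upright (small residual)
print(simulate(0.3))   # long delay: the oscillation grows, pendulum falls
```

With these gains the undelayed loop is comfortably stable, yet a delay of a few hundred milliseconds, i.e. in the range of biological long-latency feedback, is enough to destabilize it; this is the trade-off that, as described above, biology resolves by using low controller gains.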

Screenshot showing the robot performing squatting movements while standing on an unstable support and being perturbed by external pull stimuli (see https://www.youtube.com/watch?v=3ALCTMW3Ei4). (For further films, see Appendix in Mergner and Lippi 2018 and Ott et al. 2016)
The DEC modules are supported in their tasks by body material properties and body morphology that improve functional versatility and robustness. For example, movement of the upper body relative to the lower body in hip bending is used in reaching to enlarge the workspace of the hands (a versatility issue), while rapid hip bending, which exerts compensatory shear forces on the ground, is used as a ‘hip strategy’ for balancing (a robustness issue). The morphology is also relevant for HRI, in that an anthropomorphic design helps humanoid robots fit into the human living and working sphere and increases human acceptance, with exceptions (see above, ‘uncanny valley’). Further support comes from beneficial task interrelations as they have been established in evolution. The following example relates to the challenge posed to control stability by signal transport delays in biological systems. The biological solution is to use a low controller strength (gain), which prevents control instability but also comes with two advantageous properties: it makes the body mechanically compliant (nowadays an important issue in robotics) and the system more energy efficient.
6 Conclusions
The integration of a human-inspired sensorimotor control into humanoids is a powerful research paradigm in both neuroscience and robotics. Humanoid robots are used as physical models to make inferences on human sensorimotor control (Mergner 2010; Alexandrov et al. 2017; Lippi and Mergner 2017) and, vice versa, human-inspired concepts are used to address the complexity of a humanoid robot’s sensorimotor control (e.g. Ott et al. 2016; Hauser et al. 2011) – a trend that will likely continue in the future, since human performance is still superior to that of humanoids in many tasks (Ivaldi et al. 2012). The idea of building robotic devices on the basis of biological models, and of identified human sensorimotor and brain functions in particular, has a long tradition. A starting point was the ‘Cybernetics’ of Wiener (1948), and the concept has been refreshed every few decades since then (e.g. biomimetics and bionics – von Bertalanffy 1969; Brooks, Pfeifer – see Introduction; neurorobotics – Kawato and Gomi 1992; Giszter et al. 2001; Dario et al. 2005; Mergner and Tahboub 2009). The spectrum reaches from a focus on morphological and material aspects to attempts to simulate brain functions using spiking neurons (e.g. Falotico et al. 2017). And behind such ideas is often the ‘embodiment’ concept addressed in this chapter. Addressing such concepts with computer simulations alone appears to be of only limited value, since the real world is not only by far richer, more accurate and cheaper, but also authentic, for example in its dynamics and sometimes unpredictable feedback effects.
In both hardware and software approaches, roboticists and theoretical neuroscientists tend to follow standard protocols. Often used is a kind of staircase approach, in which one conceptually steps from the high (computational) via the middle (algorithmic) to the low (implementational) level (compare Marr 1982). The computational level is used to provide an abstract description of what is to be achieved, the algorithmic one to choose the strategy, and finally the implementational one to decide which tools to use and in which way – for example for a specific application of sensorimotor control. If such approaches additionally include narrowly defined optimality criteria, it may be difficult for the resulting agent to cover the wealth of sensorimotor challenges it faces in the world. Phylogeny typically provides rather broad criteria for human sensorimotor functionality – parsimony, robustness, and versatility – typically in combinations of all three (which is a core issue in holistic concepts, e.g. of Smuts 1927). An example of such a result of ‘biological design’ is the above-described interrelation that, starting from the need to deal with the long biological feedback time delays, evolution ascertained control stability by using a low control gain, which in turn comes with the benefits of compliant actuation and low energy costs.
It remains to be evaluated to what extent and in which tasks a computational ‘top-down’ approach can compete with a neurorobotics approach that proceeds from identified embodied human mechanisms and, in iterative steps of ‘re-embodying’ them into humanoid robots (as if repeating phylogenetic and ontogenetic developmental steps), gains benefits from combining neurological and robotic research – a holistic monism that goes hand in hand with a weak methodical dualism. If, in contrast, human cognitive functions were implemented into a robot’s ‘brain’ solely from a computational viewpoint, i.e. without sensorimotor embodiment and physical interaction with the world – a strong body-mind dualism – the result might in the end serve as a chess computer, but would likely be unable to develop a sense of self-agency (compare ‘Elephants don’t play chess’ of Brooks 1990).
The robotic ‘re-embodiment’ of human-derived sensorimotor functions resulting from such a combined neurological-robotic approach may not only provide improved parsimony, robustness, and versatility, but possibly also an increase in safety against unwanted emergent effects. In this respect, the approach could contribute to the functional resilience of robotic systems. To this end, it would be desirable to accompany the robot’s development pragmatically by alternating between simulations (provided they are able to cover complex and non-linear properties) and hardware tests for confirming interim results. In sensorimotor control, the most detrimental result would be control instability (with the risk of falls and self-destruction). In related cognitive functions, emergent malfunctions may cause distortion or disruption of top-down sensorimotor supervision, leaving the agent at best in a reflexive sensorimotor state.
However, inferences about the humanoid robot’s cognitive functions solely on the basis of external observation may be misleading (see also Funk 2014). When witnessing that a robot compensates the postural disturbance from a push against the body or a support surface tilt or translation in a ‘context-adequate’ way, we may be inclined to infer some cognitive ability, although the specificity of the reaction simply stems from the multisensory interactions underlying the disturbance estimations described for the DEC modules. Consider, furthermore, the very basic distinction between self-produced versus external disturbance for both the motor and the perceptual domains, as pointed out in Fig. 3b. A task of cognition (e.g. in terms of a cognitive SSA) can here be defined as combining the pieces of information from all affected control building blocks (DEC modules) across the body and bringing them into context with other current or stored information. From this viewpoint, the step from basic sensorimotor functions to corresponding cognitive functions may not be very large. Therefore, one could conceive of a cognitive SSA emerging from phylogenetic and ontogenetic robotic developments in evolutionary robotics. From here, the further emergence of a ‘sense of self-consciousness’ and the development of an ‘own will’ may still appear far off, yet possible. Such visions may be countered by the argument that an organic material basis would probably be required for this, since otherwise the consciousness of computer brains may not be comparable to that of the human brain – but such far-reaching considerations are still speculative. In this context it is worth mentioning that synthetic biology does operate with organic materials, currently mainly at the level of single-cell organisms.
But there exists the possibility that this branch of research, once it enters the stage of more complex forms of life, could also contribute to the development of a robotic ‘sense of self-consciousness’ on the basis of organic materials.
Notably, the computational capacity required for at least self-awareness need not be immense, considering that small animals such as some birds, with only a few grams of brain weight and no cerebral cortex structure, possess self-recognition in terms of an SSA when watching their own versus a foreign body and behavior in a mirror. As mentioned above, this ability is embedded in, and shaped by, the individual’s sensorimotor activities. Given that the somatosensory SSA is a building block of the cognitive SSA, it may turn out to be one of the building blocks of self-consciousness in terrestrial life. At least it appears unlikely that self-consciousness once came to earth from space (as suggested by Michelangelo’s fresco of ‘The Creation of Adam’). More likely, it is the result of some ‘emulsification’ of reasoning and learning arising with sensorimotor activity in the world – a man-made repetition therefore appears possible. The neurorobotics approach may be an important tool to address these interrelated sensorimotor, cognitive and evolutionary issues.
Robots, as parts of the environment, will not only influence the socio-cultural ontogenesis of future generations; in building and using them, we humans also create conditions for our own future evolution. Also, our attempts to embody human functions into humanoid robots touch, in turn, the core of our human self-conception and will further challenge philosophical anthropology as well as epistemology, methodology and ethics. An aim of this chapter is therefore also to foster critical reflection on these future developments and to plead for transdisciplinary approaches when trying to meet the complex challenges of current and future robotics technologies.