The mass [or collective mind] seems to function at an archaic [psychological] level. . . . The person has abdicated conscience to the external world.
—JONATHAN LEAR
Compassion is selective and often ultimately self-serving. . . . Man defending the honor or welfare of his ethnic group is man defending himself.
—EDWARD O. WILSON
I continue here to present a range of neurobiological and other evidence for the claim that the boundaries of the self extend way beyond the scope of the body and that our investment in others and in the world is finally what ethics is all about. Along with self-mapping, the discovery of the self beyond itself, invested in and spanning its worlds both social and natural, is what we must now realize is the source—or a major contributory source—of ethics. I argue here that it is this sense of self as spanning mind-body and world that is the origin and nature of our moral investment and agency in the world. We locate our basic biological sense of self-preservation and self-furthering in a self distributed beyond our skin into our environment, natural and human. This is why we care about the world and why it is the arena of our moral concern and of our ideals.1 In this chapter I review some of the mounting evidence that the scope of the self as moral agent, of who is performing a given moral action, can be distributed beyond the individual to groups, and even extend at times to whole contexts. The scope of the self as actor, its agency, can in certain circumstances be laid at the feet of social-cultural-historical systems, spanning time and place and even generations.
“The I that is We and the We that is I” is a phrase coined by the philosopher G. W. F. Hegel in the early nineteenth century.
The self has turned out to be permeable and relational (as well as self-promoting, self-protecting, and self-furthering) rather than closed, discrete, and playing out its own internal program upon the world stage. But we have not yet seen how permeable, how open, it can be and usually is. Body maps extend the “me” to include the hammer I use when I nail the picture on the wall or the car when I’m driving. Research reveals that the feeling I have that Tessie, my metallic light blue Acura, is an extension of my body when I drive in fact reflects the neural reality, for my body maps are extended to include the car’s proportions and motions within the bounds of my self that I feel and control. They are mapped within the “body mandala,” as the Blakeslees put it. There is a “tool-body unification” or extension. There is also a certain amount of space surrounding our body, “peripersonal space,” like a bubble around us, that is included in our neural self-maps.2 The expansion to include tools and other objects that we use to do things and carry out our aims, as well as all kinds of biographical and cultural and familial information, is now referred to in neuroscience as the extended self. We take so much in.
The philosopher Andy Clark, in his 2008 book Supersizing the Mind, has written about how the mind spills over into the world.3 When he argues that the mind is extended, he means that “at least some aspects of human cognition . . . [are] realized by the ongoing work of the body and/or the extraorganismic environment,” so that the “physical mechanisms of the mind . . . are not all in the head” or in our central nervous system.4 When we use a computer or a pad of paper, a calculator or our address book, our mind has distributed memory and even operations outside itself. This view of the matter radically complicates and reconfigures the nature of the relationship between mind and world, the neurophilosopher David Chalmers points out in his foreword to the book.5 Clark says that we have “a fundamentally misconceived vision . . . that depicts us as ‘locked-in’ agents—as beings whose minds and physical abilities are fixed quantities apt (at best) for mere support and scaffolding by their best tools and technologies.” He proposes instead that our “minds and bodies are essentially open to episodes of deep and transformative restructuring in which new equipment (both physical and ‘mental’) can become quite literally incorporated into the thinking and acting systems that we identify as our minds and bodies.”6 As an example, Clark mentions a robot arm, an arm that extends one’s reach and gets mapped into self-maps whose scope now includes its reach. Clark’s analysis supports what Antonio Damasio proposed about the incorporation of the extended biographical and cultural aspects of the self in a third level of self-mapping: becoming and being a self, on one hand, and responding to and incorporating multifaceted contexts, on the other, are one and the same ongoing process. Yet there is more. For we not only discover the world within us but also discover ourselves in the world, identifying ourselves with parts of it. The psychoanalysts call this projective identification. 
Philosophers since Spinoza have referred to the group mind. Psychologists have studied mass psychology. And neurophilosophers have begun to explore the phenomenon of distributed agency, a subject of action that is larger than the individual. The distributed self leaks beyond the boundaries of the skin and can even feel itself somewhere else entirely, outside the body.
Experimental psychologist Philippe Rochat designed and implemented a series of research studies on infants and children to investigate how and when they develop a sense of self. He has proposed that, developmentally, the levels of consciousness culminate in what he calls co-consciousness. Rochat defines co-consciousness as “knowledge of [one’s own] knowing,” the most complete form of self-reflexivity because it “is not just individual but collective.” This “metaknowledge is embodied in the group,” he says, and “will survive individuals.” It consists in “shared representations” resulting from “our co-conscious experience.” It is the melding or integration of first- and third-person perspectives on the self, Rochat says.7 If Rochat’s observations are correct, then the self is ultimately extended not only biographically and culturally and to its tools and projects but also, finally, into a “we”—a sharing of selves and ultimately a shared world. The self as fully self-aware includes perspectives that were initially “other”: third-person perspectives now taken in as one’s own have relocated the self outside itself and in another. In acknowledging them, owning them, and integrating them with my first-person perspective, I create a shared self, a self that is both mine and yours. The self in a sense “others” itself by casting itself into the world and finding itself there in the eyes of another. It returns to itself as including the other as integrated within the I. I discover myself in others’ eyes. I have a socially defined self resulting in a shared reality, a shared world.
The observation that the self cannot exist for long in a psychological vacuum is a finding that is not easily accepted. . . . This discovery heralds the end of another cherished illusion of Western man, namely the illusory goal of independence, self-sufficiency, and free autonomy. . . .
Thus self psychology strikes deeply at a politico-religious value system in which the self-made individual is the ideal. We now have a deeper appreciation of our inescapable embeddedness in our environment.
—ERNEST S. WOLF
At any given time the self is constructed by a relation to another person. “People have as many selves as they have significant relationships.”8 These selves emerge from the early relationships with parents, siblings, extended family members, and others who have had an impact on one’s life, and they profoundly affect motivation and emotions. Unconsciously perceived similarity triggers the relational patterns to take hold, without our conscious awareness or even the ability to control the process. So each of us has “an overall repertoire of relational selves—aspects of the self tied to a significant other.” Our sense of selfiness is distributed into the relationship and allowed to be co-defined with the other. The other and the relationship are necessary for the feeling of “me,” of this “me.” So what we have here is a notion of how our feelings about our self are co-constructed with other people and, via them, stretch outward to the world at large. We perceive the world as resembling one or another of these models of relationship time and again, thereby reawakening the “me” of that relation, of that context, and its attachments and aversions. Our selfiness has an ongoing vulnerability to the world’s feedback, and so it has a stake in the world. The evidence also appears to support claims of the psychoanalytic theory of self psychology. Developed by psychoanalyst Heinz Kohut, self psychology presciently hypothesized “selfobjects,” external significant persons, caregivers, that are not separate in the mind of the infant but are experienced as part of the self, functioning as “self-machinery” to “complete” the self. 
Kohut held that it is through “expanding its self-experience to include the whole surround” that an infant comes to be able to (1) feel itself confirmed as a self, (2) pursue its ambitions, and (3) pursue its ideals.9 We find the regulation, definition, and control of our selfiness, of our self-regard, outside ourselves in another—and our attachment to the world is thereby set in motion.10 In primitive versions (and some pathological ones) we merge with the group or the parent. Yet Kohut suggests that a mature form of discovering ourselves or even relocating ourselves in the world is the deflection of the sense of self away from body and mind and its projection into our creative works—art, music, writing. “The self-esteem regulation that relates to one’s own body and one’s own person is surrendered to the work,” he writes. “The self” in these cases, Kohut comments, “is now in the work.”11
My investigation contains hundreds of pages dealing with the psychology of the self—yet it never assigns an inflexible meaning to the term self, it never explains how the essence of the self should be defined. The self . . . is, like all reality—physical reality . . . or psychological reality . . . —not knowable in itself.
—HEINZ KOHUT
How radically our sense of self can overshoot the boundaries of the skin has been demonstrated by Olaf Blanke and Thomas Metzinger’s research on out-of-body experiences. Out-of-body illusions locate the feeling of self, of having a point of view, as coming from an illusory body outside one’s own actual physical body. Sometimes the illusion is that the sense of the location of one’s body comes from two places: both from one’s actual body and at the same time, or alternately, from the illusory body. These experiences also involve a feeling of being disembodied and seeing one’s body as if from the outside. The sense of where our body is and where its feelings come from can be projected onto parts of the environment and felt as coming from elsewhere, beyond our own skin.12 Earlier research into the neurological underpinnings of the global sense of self and self-ownership (the feeling we have of the body as a whole and that it is under our control) led the two to try to reproduce in the laboratory the out-of-body experiences that have been observed in people who suffer from certain neurological injuries. They found that the subjective states of the feeling of the body as outside itself and located somewhere else can be induced in experimental subjects in the laboratory. The scientific explanation of how these subjective experiences come about has something to tell us about the nature and boundaries of the self: about what a self is and what it is not, and what the feeling of self actually can tell us about ourselves—and what it can’t.
Blanke and Metzinger’s experiments exposed subjects to “conflicting multisensory bodily cues by means of mirrors, video technology or simple virtual reality devices.” These manipulated the sense of self-identity, locus of experience, and point of view so that they were experienced as displaced outside of the subject’s own body, thus reproducing the types of illusions brought on by specific neurological injuries. For example, subjects were hooked up to a video camera and their body image projected in front of them. When they saw themselves stroked on the virtual back as they were being stroked on their real back, they experienced themselves as located in the illusory body projected in front of them.13
Some out-of-body and displaced-body-part experiences can be attributed to injuries to the body-mapping capacities. The Blakeslees describe a patient who had a growing brain tumor and had lost the sense that she had a right arm and then a right leg. She felt as if they had flown off somewhere unless she looked at them, moved them, or had a heavy object touching them. Another patient, the victim of a stroke, didn’t have sensations in her left hand and claimed that when looking at her niece’s hand, she experienced her niece’s hand as her own. Her body map of her left hand, the Blakeslees point out, had been projected onto her niece’s body.14
In trance and trancelike experiences, and in Buddhist meditation, there is a weakening of the self-in-body feeling, which becomes diffused into the environment or beyond the skin. Research into this Buddhist and trance experience of loss of self is now showing what seems to be an attenuation of self-maps, the Blakeslees suggest.
When people enter deep meditation or trance, they say that their bodies and minds expand into space. Body awareness fades, and they are left with a unitary yet diffused and nonlocalized sense of themselves. Along with it come feelings of joy, clarity, and empathy. When Buddhist lamas meditate in brain scanners, activity in their parietal lobes plummets. It can’t be a coincidence that the dissolution of the bodily self accompanies the shutting down of the body and space maps that create it.15
Blanke and Metzinger’s experiments on out-of-body states and the Blakeslees’ accounts of certain impairments of stroke victims and of altered states of consciousness have revealed something important: the sense of self is not a simple direct readout of a homunculus inside us, as we tend to think. Our internal feelings are anything but indubitable, contrary to what Descartes insisted. They are hardly even reliable. Instead, our feelings of self, self-location, and the like are flexible, complex, and indirect outputs of various coordinated neurological systems. That we have a sense of self-identity and self-location and a personal point of view does not mean that the self that is pinpointed in these ways is a concrete thing whose internal state we experience and report on in any direct way. Our lack of accurate self-access is glaring.
The nature and scope of the self is turning out to be flexible and permeable, and even capable of being relocated. That the feeling of self can be extended beyond itself (and sometimes even detached and displaced to someplace beyond one’s own body) does not suggest the immortality of the soul, but instead quite the opposite: these discoveries lead us to question the “thinginess” of an interior self. The sense of global self arises, we now see, from various distributed neurological operations and systems that do not directly reveal an interior reality, but instead construct it. In other words, the self is not an interior person to which our inner consciousness gives us simple access. Instead, the feeling of self is a mental capacity that can be projected inward or even outward onto the world. It is a feeling of ownership, of selfiness, and it is malleable and expandable. That feeling can be attached to relations we enact in the world, claiming all kinds of things as our own and as who we are, seeing ourselves in worlds beyond our mere bodies and taking into ourselves identities we find ourselves enacting in the world. We mistake the feeling of self as our interior, a soul-thing, a solid bounded essence that is our true self that we alone know and disclose to the world. But that is a false picture. We are more like verbs than nouns; we make parts of the world feel like self, and we fill our feeling of self with our engagements in the world.
Let me be clear. In company with most of my fellow neuroscientists, I believe strongly that the brain does not have a signaling circuit dedicated to ethics. . . . What the brain has is a mechanism to make use of circuits that are already there, in order to disable the self-preference that is akin to our instinct for survival.
—DONALD W. PFAFF
From the uncanny out-of-body experiments of Blanke and Metzinger I turn here to some discoveries about the neurochemistry of the self-other boundary discussed by Donald W. Pfaff.16 Pfaff shows that the feeling of self and where we draw the line between self and other, self and world, depends upon particular neurochemicals. These chemicals that produce the self-other divide, he says, are sometimes turned off, and when they are we experience others as if they were ourselves. The neurochemistry that produces the feeling of self as extending into others, he argues, underlies ethics. He says the chemicals that turn off the feeling of a self-other boundary operate in fear and in love, in empathy and in aggression, and in other situations. These chemicals create a sense of shared experience and even of merger. Pfaff believes that this mechanism produces ethical action by creating empathic responses to others. It works by inducing a kind of “blurring and forgetting of information.”17 He theorizes that before people take any action, they represent to themselves mentally the action and its effects (a feed-forward loop). The would-be actor foresees the consequences to the recipient of the action. The next step, Pfaff argues, is the potentially ethical step: the actor can blur the distinction between self and other both cognitively and emotionally and experience the action he or she is about to perform as if he or she were about to receive it and not just commit it. Pfaff gives the example of a woman, Ms. Abbott, who is about to stab a certain man, Mr. Besser, in the stomach. If “instead of seeing the consequences of her act for Mr. Besser, with gruesome effects to his guts and blood, she loses the mental and emotional difference between his blood and guts and her own,” her final step, the decision of whether or not to knife the man, may be to hold back. 
She feels a “blurring of identity—a loss of individuality—the attacker temporarily puts herself in the other person’s place.” Ms. Abbott has an anticipatory sense of “shared fear” and as a result refrains from knifing Mr. Besser.18
What Pfaff proposes is happening is “a reduction in the operational efficiency of neural circuits that discriminate between self and not-self,” thereby blurring the difference between self and other. This “reduced image processing take[s] place in the cerebral cortex.” If the various kinds of self-mapping that represent to ourselves our position in space, what is happening in our musculoskeletal frame, our possible movements, our sense of being touched in a particular spot, and so on—generally, our moment-by-moment monitoring of ourselves—were toned down in any of various ways, the result would be a blurring of self and other, “allow[ing] us to run our sense of self together with our sense of another human being.”19 A general excitation of emotion merges the identities of two people by making the neuron assemblies that mark particular identities fire together. So several identities would be signaled at once. This is what Pfaff proposes is happening in ethical behavior. When the self-other distinction is shut off between, “say, myself and the target of my intended action,” the images merge. “The result is empathic behavior that obeys the Golden Rule.” The merging of images within me of myself and the person at whom my actions or feelings are directed also entails a flow of information within my brain from my more evolutionarily recent cerebral cortex to “the more primitive forebrain, connected with emotions and drives.”
Since the discovery of mirror neurons (discussed in detail later), the plausibility of the merger of the sense of self and other has been given a big boost. For it is the same neural circuitry that is operative both when we perform an action or feel an emotion and when we see someone else doing or feeling it. For example, “some of the same brain circuits for pain become activated whether the person is receiving the painful stimulus or observing another person receiving it.” Only the intensity differs, as does the exact location of the activation within the circuitry.20 We simulate others’ actions and feelings internally as if they were our own—and that’s the basis for recognizing what they are doing or feeling. This constant simulation of others within the self, taking place by recruiting our own mechanisms of perception, emotion, and action, makes the potential loss of the distinction between self and other an easy move. So Pfaff brings to bear evidence from the neurobiology labs that our feeling of self need not be, and at times is not, confined within our skin. We can distribute or project our sense of self and our selfiness—our self-protection and self-furthering—into parts of the world and into other people. Emotions, and the beliefs that underlie and drive them, can be contagious. We can identify with others and can use that ability to empathize—or not. We have, Pfaff tells us, “shared fates, shared fears.”21 Not surprisingly, we also can have shared love.
In social attachment generally, in sexual and parental love, and even in friendship, we are infused with neurochemical mechanisms that erase the borders between self and other. The hormone oxytocin, for example (which is involved in parenting behavior), “appears to contribute to the blurring of the ‘me-you’ distinction in a variety of social situations.” Parental love and especially motherly love “take the blurring of identity between two living beings to new heights,” he says. The upshot of all this is that the various mechanisms that induce sexual and parental attachment can be harnessed and recruited for all kinds of prosocial behaviors and attachments.22 A case in point is oxytocin, which in evolution began as a way to induce maternal care but is the major hormone of sociability in human beings. Oxytocin is widely operative in both male and female adults and especially in their bonding and increases feelings of trust, dampening the fear activation of the amygdala.23
Pfaff argues that there is a basic underlying neural mechanism operating in all the kinds of sociality that are now being investigated in neuroscience labs: it is the diminishment of the recognition of distinct identity. It is a surprising finding, he says.
Throughout the range of all possible social interactions, neuroscientists are beginning to piece together the influences on neural networks that regulate sex, social recognition, and sociability. . . . Wonderfully, . . . we do not need to posit increased levels of performance in these neural mechanisms, but instead can be confident that once we recognize another as a friendly, nonthreatening presence, then it is a decreased social recognition that leads someone to obey the Golden Rule. A person forgets the difference between himself and the other.24
A failure to blend self and other can now be seen as one of the component mechanisms of violence and aggression.25 But of course there is also the bonding of shared aggression in groups. “The violence committed by an individual may actually reflect some kind of bonding with other members of the group,” Pfaff says. “And bonding is a brain question.” He goes on to illustrate: “In gangs, aggressive acts might be both an issue of loyalty and a requirement for membership.” Again, the me-you, us-them boundary is pushed outward from the discrete self. Even in terrorism, Pfaff remarks, the explanation may reside more in the individual’s group identification than in his actual personal proneness to violence. This is an important and pertinent finding in this age of terrorist attacks on the United States and elsewhere. Groups can ensure loyalty through violence, and members may also be violent in response to their merger with the group, perceiving threats to the group as threats to themselves. “Even some terrorists,” Pfaff points out, “might be seen as normal people responding to threats against their cause or their group.”26 In sum, Pfaff’s theory is that altruism amounts to the sharing of identities, the lowering of the feeling of the boundary between self and other, the feeling of self (and selfiness) as it extends or is distributed into the other. That’s what Pfaff thinks is behind the action of a man, a Mr. Aubrey, who jumped onto the subway tracks in front of a train to rescue a stranger who had fallen in front of the train. At that moment, Pfaff says,
Mr. Aubrey’s brain must have instantly achieved an identity between his self-image and the image of the victim who fell in front of the subway train. This identification did not occur by some complex highly intellectual act—it came about by losing information, that is, by blurring the distinction between the two images. In addition, Mr. Aubrey was demonstrating the kind of prosocial caring feeling that (I hypothesize) normally develops from parental or familiar love.27
I think it would be better to jettison the term altruism and replace it with a better description of what’s going on: shared selves, a selfiness that loses the boundaries of the skin and is extended and distributed to others. The case of Mr. Aubrey may seem like quite an anomaly, but actually we live in a world in which people give up their lives for the larger community all the time: that’s war. We tend to think of war as a case of violence to be explained. But it’s just as much or more a case of self-sacrifice for the group, and hence the full merger of a self into the group. The young soldier willing to die, when we think about it, is an uncannier phenomenon in need of explanation than group outrage and violence. Both are products of the erasure of the self-other boundary or its displacement beyond the skin into us versus them. Mr. Aubrey is merely a surprising case of a phenomenon so familiar that we tend to fail to think of it as in need of explanation.
Mounting data from the neurosciences show that evil is rooted in the failure to see others as ourselves. In the case of genocides such as the Holocaust, it is a collective failure of society-wide proportions. The self-other boundary beyond the skin is of central importance to a rethinking of ethics, to a deeper understanding of both good and evil. And adding neural plasticity into the mix (which makes any belief and any norm possible) gives us the true depth and scope of the problem. But this is still not the whole story. There is yet more to learn about how and why selves merge, about the sharing of feelings and actions, attitudes and beliefs, leaders and ideals.
When monkeys’ personal space is invaded and threatening objects approach the face or body, neuroscientists have discovered that special “flinch cells” fire. It’s the neural system that keeps you from bumping into things and from falling off a cliff, its discoverer, Michael Graziano of Princeton University, remarks. So it is likely that there is an opposite mechanism that allows objects and people to come close, be within one’s personal space, and engage in joint action. The Blakeslees dub these speculative entities “hug cells.”28 Our self-mapping can extend to another’s peripersonal space, coming to include theirs and theirs ours. We unconsciously take account of our space bubble all the time when we move, reach for things, dance, do martial arts. We know where our hands can move and where there are objects in the way; where our feet are stationed on the floor and where the table leg intrudes. We not only feel where our body parts are but also what the space immediately enveloping them is like—and we feel that we own it as much as we feel that we own the space of our body parts and the actual space taken up by them. This personal space around each of us can merge with others’ personal space. This is what the Blakeslees refer to as “blended personal space,” a “we-centric” space. They call this an “envelope” that encompasses blended selves. Great examples of shared space, they say, would be holding your child on your lap, riding a horse, or making love. They all induce in us a sense of shared space. “It is likely, but not yet proved,” they say, “that your brain contains spatial mapping cells that specialize in ‘affiliative behavior,’ which is a clinical term for cooperation and intimacy.”
Researchers Greg J. Stephens, Lauren J. Silbert, and Uri Hasson used fMRI to study the spatiotemporal activity of the brains of people engaged in conversation. When the research subjects engaged in “natural communication,” the spatiotemporal brain activity of the listener was coordinated or “coupled” with the spatiotemporal activity of the speaker. When they failed to communicate, the “coupling vanished.” They found that generally the listener’s brain activity follows that of the speaker with a small delay, but sometimes the listener’s brain activity would anticipate the speaker’s. Using a quantitative measure of story comprehension, they also showed that the greater the anticipatory listener-speaker coupling, the greater the understanding. When the same speakers and listeners were not engaged in comprehensible conversation (the speaker started speaking in a language that the other did not understand), the coupling of brain activity patterns ceased. The more people communicate and understand each other, the more their brains simulate each other’s brain activities, and the more they meld in the moment, researchers concluded.29 So neither our minds nor our bodies are as discrete as we think they are. We are connected to each other on the inside. Our very souls or selves—our bodies and minds—are bound up with those of others, as the Bible says of the love of Jonathan and David. We are anything but Cartesian subjects, solipsistically isolated in our internal feelings, thoughts, actions, and even bodies.
Mirror neurons will do for psychology what DNA did for biology.
—V. S. RAMACHANDRAN
The discovery of mirror neurons begins with a delightful story. Giacomo Rizzolatti and the folks in his lab at the University of Parma were investigating how the neurons in the premotor cortex, which is located in the frontal lobe of the brain, operate in guiding movements, in grasping, in bringing some food to the mouth, and the like. They had recruited a monkey, into whose brain they implanted electrodes, in order to monitor where and how the planning and carrying out of goal-directed movements occurred. One day in the summer of 1991, a graduate student walked into the neuroscience lab where the monkey was sitting in its special chair, waiting. It was just after lunch and the student was holding an ice cream cone that he brought up to his mouth and licked. And lo and behold, the parts of the monkey’s brain devoted to hand-to-mouth motor movements became active. But the monkey’s own paw had not moved! As the monkey had watched the student bring the ice cream cone to his mouth to take a lick, its own motor map had simulated the same movement—but without moving! The neuroscientists demonstrated that the same simulation occurred with peanuts: the same group of cells fired both when the monkey itself picked up peanuts and when it merely watched a researcher pick them up, without moving its own arm. The neuroscientists demonstrated the phenomenon again and again. The brain’s “mirror system” automatically produces an internal replica of the perceived action as a kind of internal reenactment of both the action and its goal. 
A series of neuron recording experiments carried out in the 1990s showed “a particular set of neurons, activated during the execution of purposeful, goal-related hand actions, such as grasping, holding or manipulating objects, discharge also when the monkey [merely] observes similar hand actions performed by another individual.”30 This was the discovery of an “action/observation/execution matching system.”31 Scientists invented the term mirror neurons to describe this set of brain cells.
The mirror system enables not only joint action and agency but also a form of immediate understanding of other people’s actions, goals, and emotions. For we recognize both actions and emotions through the mirror system—so we are able to imitate them from the inside out.32 The human system “includes a rich repertoire of body actions” and operates at the preverbal level.33 It functions via automatic “direct matching” rather than via thinking and analysis.34 Giacomo Rizzolatti comments on his own discovery:
At the cortical level the motor system is not just involved with single movements but with actions. Think about it; the same is just as true for humans [as monkeys]: we very rarely move our arms, hands and mouth without a goal; there is usually an object to be reached, grasped, or bitten into.
These acts, insofar as they are goal-directed and not merely movements, provide the basis for our experience of our surroundings and endow objects with the immediate meaning they hold for us. The rigid divide between perceptive, motor, and cognitive processes is to a great extent artificial; not only does perception appear to be embedded in the dynamics of action, . . . but the acting brain is also and above all a brain that understands.35
The motor schema of the observer enacts, in an “as if” pattern, the motor schema of the actor. Neuroscientist Vittorio Gallese proposes “that this link [between the observed agent and the observer] is constituted by the embodiment of the intended goal, shared by the agent and the observer.” The observer’s motor system “resonates” with the system of the actor, and that creates a kind of empathy and, sometimes, contagious behavior.36 Gallese claims that action is primarily “relational.”37 So fundamentally we act together from joint agency rather than individually. The social scope of agency, the group as actor, is what’s normal, and individual agency is rarer and more of an achievement—the direct opposite of what we, at least in the West, tend to assume and take for granted.
An advanced kind of mirroring is involved in understanding without imitating, and it often occurs when actually performing the action wouldn’t be useful.38 The interior experience of understanding without imitation may be due to “super mirror neurons,” one of whose major roles may be to inhibit actual imitation by the “classical mirror neurons,” neuroscientist Marco Iacoboni suggests.39 That is to say, we reproduce the motivation of the acting other within ourselves without acting it out. In his analysis of the research, Gallese proposes that the mirror system is a vital contributor to empathy and social cognition. And there is now a great deal of evidence “suggesting a strong link between mirror neurons (or some general form of neural mirroring) and empathy.”40 The German word for empathy, Einfühlung, “feeling into,” is apt, for it was invented to describe aesthetic experience linking the observer with the work of art through a kind of internal simulation. Iacoboni describes how the empathic recognition of others’ emotions works: we unconsciously mimic or imitate internally another’s facial expression, and it is through that inner experience that we recognize what emotion that other person is having.
Iacoboni devised experiments to investigate if and how links between the mirror system and the emotion system are forged in the brain, and if they come to work together in empathy. He set up an experiment in which he monitored the brain activity of volunteers as they either watched or imitated pictures of faces exhibiting fear, sadness, anger, happiness, surprise, and disgust—the basic or primary emotions. Three brain areas—mirror neurons, the limbic system (controlling emotion), and the insula, a region that connects the other two—did in fact show activity that indicated their connection when the volunteers observed emotions and even more activity when they imitated those emotions. Because of the involvement of the limbic system, we can feel the emotions that others feel. Iacoboni reproduced the experiment to investigate the empathic sharing of pain. “Mirror neurons,” Iacoboni says, “typically fire for actions, not pain.” (Another type of cell in the cingulate cortex fires for pain; it does not have an action component.) Since “mirroring of emotions is mediated by action simulation,” the viewing by volunteers of filmed painful scenes (a needle going through a hand) elicited an inhibited motor response simulating the withdrawal of the hand from the needle, which is to say a mirror response; the greater the mirror response, the greater the empathy the observer felt. The conclusion was that “our brain produces a full simulation—even the motor component—of the observed painful experiences of other people.” Rather than a private experience, pain is a shared experience.41 Studies of children’s capacity for empathy correlated the scores of children from behavioral tests of empathy and interpersonal competence with brain activity measured by fMRI. The more empathic a child, the more his or her mirror neurons would fire while watching people expressing emotions. Social competence was also correlated with high empathy scores and high mirror neuron activity.
Vittorio Gallese believes that we ought to extend the concept of “empathy” to explain all the behaviors that enable us to establish a meaningful link between ourselves and others.42 In its basic form, he suggests, empathy means that “the other is experienced as another being like oneself through an appreciation of similarity.”43 Most important, Gallese believes that this mechanism expresses itself not only in mirrored actions but also in shared emotions, body schemas, or maps.44 Gallese remarks that “there is preliminary evidence that the same neural structures that are active during sensations and emotions are active also when the same sensations and emotions are to be detected in others.” So it is likely “that a whole range of different ‘mirror matching mechanisms’ may be present in our brain,” in addition to the action simulation architecture first discovered. In fact, he concludes, mirroring “is likely a basic organizational feature of our brain.”45
So even when we don’t explicitly act together in concert, we engage in automatic, unconscious, and unstoppable social behavior because our emotions and motivations are shared. It takes an extra inhibitory neural process to stop joint action and render it merely into joint perception, let alone individual autonomous action. Mirror neurons, along with the other neural evidence I have cited—self-maps, the extension and movability of the self-other boundary, and the like—provide further evidence that free will, even in the limited sense of actions stemming from autonomous individual reasoning, is fraught and highly unlikely. As Marco Iacoboni puts it:
According to free-will theorists, we are all rational, autonomous, and conscious decision makers. However, the data . . . ranging from the unconscious forms of imitation observed while people interact socially to the neurobiological mechanisms of mirroring that have their key neural elements in mirror neurons . . . suggest a level of uncontrolled biological automaticity that may undermine the classical view of autonomous decision making that is at the basis of free will.46
We are internally connected and wired together—for good and for bad. When people observe a behavior in others, the intensity of their internal mirror simulation is stronger if they have engaged in the behavior before. This is benignly and even beneficially true, for example, of dancers watching dance but ominously true of smokers watching others smoke, experiments have shown.47 Iacoboni hypothesizes that the level of activity in a person’s mirror system in a given context is an index of his or her “identification” and “affiliation with other people” in that context. Mirror neurons, he continues, “seem to create some form of ‘intimacy’ between self and other” that may also be relevant to the sense of belonging to or being affiliated with a specific social group whose members, we feel, are more similar to us than other people. The more people like each other, the more they imitate each other, studies have shown. Yet research also has consistently shown that “there is a much stronger [neural] discharge for actions of the self than for actions of others.” Mirror neurons fire for self and other but more strongly for self. We do know when we act, Iacoboni says. He hypothesizes that “mirror neurons in the infant brain are formed by the interactions between self and other.” The baby smiles, the parent smiles; next time the parent or somebody else smiles, “the neural activity associated with the motor plan for smiling is evoked in the baby’s brain, simulating a smile.” We use the same cells in developing a sense of self because they reflect back to us our own behavior. “In other people, we see ourselves with mirror neurons.” In self-recognition there are two selves: a perceiving self and a perceived self. The mirror neurons map the perceived one (in a picture, for example) onto the perceiver self. The perceived self is perceived as the “other” and is mapped onto (and simulated as) self—but it is already another self. 
So the other-self (in the picture) awakens a motor repertoire already belonging to the self. The firing of neurons for self is doubled, and the mirror activity for self is stronger than the mirror activity for other.48 This explains, too, why similarity and familiarity create higher mirror activity than the less similar and less familiar. Selfiness obtains even—and perhaps especially—in mirror neurons and in the sociality, empathy, and groupiness they spark and maintain. The selfier the other, the more it moves us.
The evidence from mirror neurons on shared emotions drives home the idea that personal individuality is an unusual achievement. Sociality is our immediate, default state. Moreover, we humans, not monkeys or apes, are the imitators par excellence. An unexpected and perhaps almost uncanny discovery about mirror neurons is that when it comes to goal-directed actions, other primates are better than we are at reproducing a goal (like finding a way to pick up a raisin), while we tend not merely to strive for the goal but also to imitate all the steps of a procedure, even the ones that do not contribute to the goal. Yet what this enables, I think, is the extraordinary retention and transmission of culture and tradition. In the end, the monkey is ever reinventing how to pick up a raisin, while we amass knowledge (albeit retaining the less than optimally functional along with the functional) and hence reclaim our inheritance and move on. Just think about what complete imitation must contribute to the acquisition and transmission of language—and of culture as well. Language and culture enable the joint agency of groups not just to be in the moment but to span time and place and even generation.
Our neural mirror systems ensure that we act more often than not as collaborative agents, as members of a social group, rather than independently and individually. This social or relational root of human acting helps to set the scope of our responsibility as individuals. We are not less responsible because we act jointly; instead, we are responsible individually for all our actions, both individual and collective. Unfortunately, the nature of our un(self-)conscious processes, as we saw in the previous chapter, tends to make us blind to and hence disavow responsibility for our joint agency within the group. Consequently, the central moral problem of human beings is not how to get individuals to care about each other and about the common welfare (and to use their alleged free will to choose to do the right thing), but rather the human tendency to fanatic belonging and loyalty to the group (whether family, tribe, political party, church, ideological comrades, etc.) and its unfortunate concomitant, the disavowal of personal responsibility for the group’s actions. Empathy and joint action, while sounding wonderfully moral, are a double-edged sword. All of this, down to its neurobiological basis, Spinoza intuited and anticipated. He knew all too well, in his own case and in his own flesh, what the basic moral failing of human beings was, and he hypothesized its origins. He also proposed the outlines of the moral solution—one solution for the social and political management of society as a whole and another for the few who could overcome social slavery, joint agency, and merger in the group, and transform themselves toward both personal freedom and universal love and the love of nature.
No one was used to thinking of health in terms of community. Wolf and Bruhn had to convince the medical establishment to think about health and heart attacks in an entirely new way: they had to get them to realize that they wouldn’t be able to understand why someone was healthy if all they did was think about an individual’s personal choices or actions in isolation. They had to look beyond the individual. They had to understand the culture he or she was part of, and who their friends and families were, and what town their families came from.
—MALCOLM GLADWELL
No one—not rock stars, not professional athletes, not software billionaires, and not even geniuses—ever makes it alone.
—MALCOLM GLADWELL
Against so much evidence—and against our own experience, if we think about it clearly—we nevertheless tend to attribute our choices, actions, and decisions to ourselves alone. This is especially true of our successes. We see ourselves as having pulled ourselves up by our bootstraps, and therefore assume that anyone else could have—and should have—done the same. We do not think of ourselves primarily as lucky but rather as talented, smart, hardworking. We may be all those, but we also owe a great deal—much more than most of us believe—to our social and other contexts. Our successes owe far more to others than we assume—and so, too, do our own and others’ failures. Some humility and gratitude are in order here, plus compassion for those who have not been so lucky.
In 2008 the science writer Malcolm Gladwell published a book, Outliers, about people who had achieved outstanding success. Gladwell uses well-known examples of extraordinarily successful people, from Bill Gates to the Beatles, from winning sports teams to Nobel Prize winners in medicine, and changes the way we think about them. Ordinarily we think of the extraordinarily successful and even the highly successful as geniuses, people quite unlike ourselves, who are blessed with some very unusual and special God-given talent, whether musical, athletic, mathematical, or intellectual. Gladwell says that almost all of us (and especially Americans) subscribe to a myth that extraordinary success is due to a special endowment of genius of one form or another. It is this myth that he sets out to shatter in Outliers. In case after case, Gladwell exposes the (necessary and sufficient) conditions of success as a confluence of unusually enabling (in other words, lucky) circumstances, on one hand, and outstanding personal devotion to the achievement of a specific goal (practice, practice, practice), on the other. It turns out that the folks at the top of their fields, irrespective of area of success, have a lot in common with the folks at the top of entirely different fields. Gladwell makes the case again and again for what this common factor is: a combination of a highly specific fortuitous context and roughly ten thousand hours of devoted hard work. So success is first and foremost a matter of context—often very specific and even quite temporary contextual factors. Second, it is a matter of a great deal of relevant hard work. More than anything else, he shows again and again, it is the constellation of particular features of a context that makes it possible for an individual to become successful at something that that context, at that specific time and place, fosters in some extraordinary way.
So it is not individuals alone who act but instead entire contexts that are operative in success stories.
Not surprisingly, it turns out that in success-producing contexts it is often a number of people—not just one lone supposed genius—who achieve outstanding success of the particular kind that that context fosters. So it is not the lone superachiever who is Gladwell’s outlier; the real outlier is a particular extraordinary context that provides the necessary conditions for outstanding success of a very specific kind, again and again.
In Outliers Gladwell uses case after case to show that we cannot understand why certain people become stars just by dissecting them in every possible way as individuals. “Personal explanations of success,” Gladwell argues, “don’t work.” Individuals’ success is not due principally to what they “are like,” but instead it is fundamentally a result of “where they are from.” Gladwell says that he wrote the book to convince us that explaining outstanding success by looking at people’s special personal qualities, talents, personalities, or intelligence simply doesn’t work. “People,” he says, “don’t rise from nothing.” And they don’t do it all by themselves, even if they look as if they did—and even if they, and we, would prefer to think it’s all about Horatio Algers, self-made successes. “But in fact,” Gladwell goes on, these outstanding successes “are invariably the beneficiaries of hidden advantages and extraordinary opportunities and cultural legacies.”49 His book, he says, using an apt simile, “is not a book about tall trees. It’s a book about forests.” It’s about how it’s the forest that makes the particular tallest trees in it possible. The tallest oak tree in a forest gets to be the tallest, ecologists point out, not because it had the genetically best acorn of all but rather because it grew in a spot where it got an unusually large amount of sunlight, the soil was particularly rich, it was not readily accessible to lumberjacks, it was not eaten by deer or rabbits early on, and the like. That’s the kind of ecological explanation that accounts for outstanding success in human endeavors as well, Gladwell argues, and he offers some fascinating examples—examples that defy our expectations.
I’ll give a few of Gladwell’s examples of famous people whose remarkable successes, when studied closely, turn out to be case studies of extremely special and sometimes unique opportunities that were taken and used to fullest advantage. And isn’t having a particular talent, one ripe for the unusual context, perhaps the luckiest advantage of all? Gladwell makes the case that in the end even talent is far more a matter of gaining expertise than of possessing some extraordinary innate property. Take the Beatles, for example, or Bill Gates. Gladwell shows that in both instances their phenomenal successes arose from “a combination of ability, opportunity, and utterly arbitrary advantage.” Gladwell says that when you look at them closely, “what truly distinguishes their histories is not their extraordinary talent but their extraordinary opportunities.”50 The Beatles had a lucky invitation to play in Hamburg, Germany, when they were still a struggling high school band. That came about because a German impresario who had gone to London to look for bands to invite to Hamburg had happened to meet an entrepreneur from Liverpool, so he invited bands from Liverpool to Hamburg instead. In addition, in that period the only venue for live bands in Hamburg was strip clubs, and in those clubs the bands had to play not just for a couple of hours at a stretch but for eight hours at a time. So the Beatles, because of this unusual confluence of circumstances, were forced to play for hundreds of hours. Gladwell quotes John Lennon on their time in Hamburg:
We got better and got more confident. We couldn’t help it with all the experience playing all night long. It was handy them being foreign. We had to try even harder, put our hearts and soul into it . . .
In Liverpool, we’d only ever done one-hour sessions, and we just used to do our best numbers. . . . In Hamburg, we had to play for eight hours, so we really had to find a new way of playing.51
The Beatles also recall that they often played seven days a week. Gladwell remarks that in about a year and a half of playing in Hamburg, the Beatles played a total of around 270 nights, and by 1964, when they had their burst of extraordinary success, they had already done approximately twelve hundred live performances—an astoundingly large number. They were extremely seasoned onstage. They came to Hamburg quite ordinary and left quite extraordinary. Gladwell concludes that “the Hamburg crucible is one of the things that set the Beatles apart.” Indeed, Beatles biographer Philip Norman comments that Hamburg “was the making of them.”52 Gladwell does not at all deny the Beatles’ musical gifts, especially the outstanding song-writing talents of Lennon and McCartney. “But what truly distinguishes their histories,” he remarks, “is not their extraordinary talent but their extraordinary opportunities.”
This is the case for Bill Gates as well. The standard story of Bill Gates goes something like this: “Brilliant, young math whiz discovers computer programming. Drops out of Harvard. Starts a little computer company called Microsoft with his friends. Through sheer brilliance and ambition and guts builds it into the giant of the software world.”53 In fact, Bill Gates came from a well-to-do and well-connected family in Seattle. He went to an elite private school where, by chance, a mothers’ fund-raising committee one year just happened to use some of the funds they had raised to put a computer into a basement room in the school. The school had started a computer club, an unusual thing for the late 1960s. And the computer wasn’t the punch-card kind but a link to the mainframe computer at the University of Washington. As an eighth grader, Bill Gates had access to a time-sharing system that enabled him to do real-time programming, Gladwell points out—an incredible, and incredibly unusual, opportunity. Not only that, the mother of another kid at the school was a founder of a company associated with the University of Washington that leased computer time-shares to companies. She suggested to the school that perhaps their computer club might want to test out some of the software her company was developing, and in exchange she would give them free computer time on the weekends at her company. The club started hanging around the computer lab at the University of Washington, and another lucky break gave the club’s members an opportunity to have free computer time in exchange for testing out software. Gladwell estimates that over a seven-month stretch in 1971 Bill Gates, then an eleventh grader, averaged eight hours a day, seven days a week, on a mainframe computer. Gates says he was obsessed.
He and a friend would even sneak out of the house in the middle of the night and go up to the University of Washington to use some computers—“steal some computer time,” as he himself put it—that were hooked up twenty-four hours a day but were less intensively used in the middle of the night.54 Then in his senior year in high school the technology company TRW needed computer programmers familiar with the mainframe that Gates and his high school computer club had been working on for years, and turned to the club to help them out. He spent his senior year at TRW doing the specialized programming in which he had already developed a rare expertise. And his supervisor at TRW was a whiz and taught him a tremendous amount. Gates reflects on his own opportunities, commenting that he “had a better exposure to software development at a young age than . . . anyone did in that period of time, and all because of an incredibly lucky series of events.”55
In 1975, the people who became the movers and shakers in the computer industry, like Bill Gates, had to be not too old and already settled down, nor too young and still in high school. So they had to be in their early to mid-twenties. Bill Gates was twenty; Paul Allen, who founded Microsoft with Gates, was twenty-two; the third in charge at Microsoft, Steve Ballmer, was nineteen. Steve Jobs, co-founder of Apple, was twenty, and all four founders of Sun Microsystems were between the ages of nineteen and twenty in 1975.56 There was a window of incredible opportunity for those who also had had the luck to be prepared to take advantage of it.
The same story holds for the great captains of industry of the nineteenth century. There was a nine-year window in the 1860s and 1870s when those who were just the right age could take advantage of the great industrialization of the American economy and the rise of the railroads. Of the seventy-five richest people in the whole history of the world, fourteen were Americans born during the nine-year period between 1831 and 1840.
Gladwell concludes:
There are very clear patterns here. . . . We pretend that success is exclusively a matter of individual merit. But . . . these are stories, instead, about people who were given a special opportunity to work really hard and seized it, and who happened to come of age at a time when that extraordinary effort was rewarded by the rest of society. Their success was not just of their own making. It was a product of the world in which they grew up.57
It is not only the outstandingly successful who owe so much to context. Intelligence generally is also far more about context and culture, family and heritage, than any inborn raw brainpower. The cultural component has been shown to be the most important factor in IQ scores, not inherited intelligence. Richard E. Nisbett’s exhaustive study Intelligence and How to Get It: Why Schools and Cultures Count leaves us in no doubt of that.58 Nisbett points out that studies have shown that Americans of East Asian background have a slightly lower IQ than Americans at large. Yet their achievements far outstrip not only their own IQs but those of other Americans. “Asian intellectual accomplishment,” Nisbett concludes, “is due more to sweat than to exceptional gray matter.”59 Nisbett is particularly concerned to dispel the myth that lower-income groups are underachievers due to genetically inherited lower IQ. Instead a careful review of the evidence from multiple studies shows that the IQ gap is the result of entrenched poverty and its accompanying social and cultural deficits rather than its cause. Nisbett argues that the evidence clearly exposes that “blacks and other ethnic groups have lower IQ and achievement for reasons that are entirely environmental. Most of the environmental factors relate to historical disadvantages but some have to do with social practices that can be changed,” he says. Moreover, “some cultural groups have distinct intellectual advantages. . . . These include people with East Asian origins and Ashkenazi Jews.”60
I’ll end this gesture at the vital importance of context with an account of the success of the Jewish Americans who hailed from European backgrounds. We Jews do not owe the preponderance of our relative success to being smarter, it turns out. In a twist of black humor, we are luckier—our relative success is due primarily to cultural and other contextual factors. How successful have Jews been in America? Nisbett points out that Jews of European descent are overrepresented, compared to their minuscule population in the world, by a factor of 50 to 1 in the percentage of Nobel Peace Prizes and by 200 to 1 in Nobel Prizes for economics. If we count as Jews those who have at least one Jewish parent, American Jews have received 40 percent of all Nobel Prizes in science awarded to Americans. If we count only those who have two Jewish parents, the figure falls to 27 percent. Yet Jews are only a little over 1 percent of the American population. Other science and math awards display similar percentages of Jewish winners. More than 30 percent of students in the Ivy League are Jewish, and the same percentage applies to faculty at elite American colleges and universities. Supreme Court law clerks are also Jewish by about the same percentage. Jews have been extraordinarily successful in all kinds of endeavors, including business, in which intelligence makes a big difference. While average Jewish IQ is the highest of any American ethnic group—it’s a little less than one standard deviation above the white American average, which means it averages about 110 to 115—Nisbett points out that that factor cannot alone account for the extraordinary success of American Jews, for the record of success outstrips the IQ advantage by leaps and bounds.61 Given the average IQ of American Jews, Nisbett says, the number of Jews in the genius range, with an IQ of 140 or over, would only be six times that of the standard American population. 
But the overrepresentation of Jews as winners of Nobel Prizes is at least 15 to 1 for those with two Jewish parents. In the Ivy League and as professors and law clerks, the IQ difference would predict an overrepresentation of 4 to 1, but the actual overrepresentation is again 15 to 1. What these statistics show is that Jews are substantially overachieving their IQs. Sephardic Jews, who are Jews of North Africa and Asia, have IQs that are on average the same as Americans at large. Yet they were the great achievers under Islam between the years 1150 and 1300 C.E., when 15 percent of all scientists were Jews. Nevertheless, Jews and Jewish achievement are not in any way unique, Nisbett points out, for regional differences in intellectual accomplishment, both worldwide and within the United States, are far greater than those between European Jews and other groups—consider the vast disparity in accomplishments in science, philosophy, and the arts between, for example, Texas and the Northeast. “The magnitude of the differences in intellectual accomplishment between Jews and non-Jews in the West,” Nisbett concludes, “pales beside all these national, ethnic, and regional differences.” Still, the Jewish difference is in need of explanation. Like Confucians, Jews place a strong emphasis on education; also similar to those in the Confucian milieu, Jews have very strong family ties, and family expectations of the individual are demanding and hard to resist. Achievements are seen to redound to the whole family and even the whole community. In sum, Nisbett says, “Jews place a high value on achievement, period.” And the emphasis on achievement is not only in academics and intellectual and cultural endeavors but in business and sports as well. Nevertheless, this explanation rests less on research than on anecdote. What is clear from the hard evidence, however, is that Jews achieve far more than their somewhat higher IQ averages would predict.
So the difference is a result of environmental, contextual factors—whatever they may be.62
Humans are collective thinkers, who rarely solve problems without input from the distributed cognitive systems of culture.
—MERLIN DONALD
Where do you stop, and where does the rest of the world begin? There is no reason to suppose that the critical boundary is found in our brains or our skin.
—ALVA NOË
I have been approaching the discovery of the self beyond itself and in the world from a number of angles. The evidence is building for the extension of our selves into our tools and computers and pencils, into robot arms and cell phones and cars, and for the distribution of our sense of self into shared environments and contexts, from culture and family to nation, from school and neighborhood to generation and church. Here we find the source of a sense of distributed agency: it can be the group, rather than the individual, who is performing an action or making a decision. We have reviewed evidence from widely different quarters—from psychological studies of infants in their development of co-consciousness, a shared world and a self co-constructed by self and environment; from studies of our un(self-)conscious thinking and feeling, which reveal that each of us has a number of implicit working self-concepts rooted in two-person repertoires that arise from our relationships with significant others (mother, father, siblings, and the like) in early childhood and triggered ever anew by the environment; from the surprising neurobiology of out-of-body experiences, which reveals that we can discover our feeling of self outside of our bodies and lodged in parts of the environment; from the neurochemistry of the self-other boundary, a boundary that breaks down and enables the other to feel like self in empathy and love but also in shared anger and fear; from the discovery of mirror neurons, which directly cause homologous brain cells to fire in mere observers of an action, creating a shared experience of actor and observer from the inside; and from the sociological analysis and meta-analysis of success and intelligence, whose findings identify the major causes of outstanding individual achievement as environmental, social, and cultural, rather than individual or genetic. Taken together, the evidence ought to begin to change where we look when we’re searching for ethics.
We should begin to look not inside the individual, as we have assumed, but rather outside, in the environment. Some philosophers and other theorists have begun to do just that.
There is a growing movement to rethink thinking, and the mind more generally, as embodied. Embodiment means that the mind is not a brain in a vat. The days when thinking was likened to a computer program are coming to an end. As the philosopher Alva Noë puts it, the standard view, not only in philosophy but in neuroscience, has been that "we are brains in vats on life support. Our skulls are the vats and our bodies the life-support system."63 But that standard view is turning out to be wrong. The old view assumed, among other things, that it made no difference whether thinking takes place in an embodied person, in a machine, or somewhere else. On that view, thinking was supposed to work the way a movie does when you see it in a theater, on TV, or on your computer: except for scale and clarity, the venue wouldn't much matter. The movie was the movie; different technologies merely delivered it. But that's not the way the mind is turning out to work. Instead, and contrary to decades of the dominance of the standard "movie" account in cognitive science, the ways that the brain is biologically, neurologically, and ecologically constructed are coming to be appreciated as supremely relevant to the content of the mind. The mind is not a computer running a discrete genetic or other kind of internally constructed program that would be the same on any type of hardware. That computer or media metaphor, a metaphor that has driven a great deal of research, is simply misguided when it comes to human thinking. For the body shapes the mind—and the content of the mind—in crucial respects rather than merely underlying it. Second, the new conception of the mind is that it is not only embodied but also embedded in its environment: in its contexts and situations and histories and communities of all kinds, social and cultural and linguistic and natural. As Noë puts it:
The limitations of the computer model of the mind are the limitations of any approach to mind that restricts itself to the internal states of individuals.64
The content of experience—what we experience—is the world; in the world’s absence we are deprived of content.65
Finally, in addition to the embodiment and embeddedness theses, there is the extendedness thesis. This is the claim that the mind is not confined to the skull. It means that “the boundaries of cognition extend beyond the boundaries of individual organisms.”66 Instead, both what’s in the mind and who’s doing the thinking and acting are “boundary crossing” and “world involving.”67
Our thinking and our acting are not separated, which is how we tend to think of them—as cognitive reflection on, and an internal picture (representation) of, a world that is separate from ourselves and upon which we take independent action. Instead, perception and cognition depend upon and are crucially constructed by the way we interact with the world. Cognition is now being shown to involve the sensorimotor brain, that is, “motor capacities, abilities, and habits.”68 This may occur both “online,” so to speak, and “offline.” What that means is that the bodily involvement or “embodiment” in question may consist at times only in the involvement of the sensorimotor areas of the brain but not necessarily the body proper; that is, it need not involve current perceptual or action input or output. The relevant point for us here is that action and cognition are, in some not yet fully delineated or completely understood ways, bundled together, causally interdependent, rather than discrete and independent processes. Our perception is interdependent with bodily motor relationships—which is to say perception is an interaction and what is perceived is the interaction, rather than a self-removed grasp of the external environment per se.
The theory of how perception is shaped by how we interact with things was first put forth in the 1960s by the psychologist James Jerome Gibson. Gibson theorized that we—and animals as well—do not perceive objects, or the environment more generally, objectively in terms of the shape and volume of objects. Instead, we perceive the environment and objects in terms of how we can interact with them; we see not objects per se but rather "affordances that make possible and facilitate certain actions." The Blakeslees explain what Gibson meant by "affordances" by suggesting that "handles afford grasping. Stairs afford stepping. Knobs afford turning. Hammers afford smashing." We perceive the world, according to Gibson, "through an automatic filter of affordances."69 As the Blakeslees put it:
Your perception of a scene is not just the sum of its geometry, spatial relations, light, shadow, and color. Perception streams not just through your eyes, ears, nose, and skin, but is automatically processed through your body mandala to render your perceptions in terms of their affordances. That is generally true of primates, whose body mandalas have grown so rich with hand and arm and fine manipulation mapping, and even more so for you, a human animal.70
If thinking and even our basic perceptions of the world around us are not separate and separable from acting, then moral agency cannot be, as we tend to assume, about understanding and assessing situations from a removed perspective and then rationally and independently choosing the right action. For on the new model, all three are bound together—perhaps in the way that emotions and cognition have been found to be bound together in neural packages and pathways; think of mirror neurons and how acting out scenarios within us is the basis of understanding others’ actions. Perception, emotion, action, cognition, empathic understanding of others—all seem to be integrated. What we believed to be discrete and bounded mental processes that “we” then in some sense preside over from above and bring together are turning out to be more intimately bound together from the start and all the way up the line. There is no independent “we” or “I” outside of these bundled perceptual, conceptual, affective, and enactive processes, no “I” who stands above them as if they belong to someone else or are distant parts of the world and looks down upon them and then decides or feels or chooses or acts. As Noë puts it, “Scientists seem to represent us as if we were strangers in a strange land. . . . Our relation to the world is not that of an interpreter. . . . Our relation to the world is not that of a creator. The world is bigger than we are; what we are able to do is to be open to it.”71
It’s not what is inside the head that is important, it’s what the head is inside of.
—JAMES JEROME GIBSON
Human decision making is most commonly a culturally determined process. . . . When the individual "makes" a decision, that decision has usually been made within a wider framework of distributed cognition, and in many instances, it is fair to ask whether the decision was really made by the distributed cognitive cultural system itself, with the individual reduced to a subsidiary role. . . . Distributed systems are able to change where in the system each component that influences a certain decision is located.
—MERLIN DONALD
Openness to the world would seem to be our fundamental posture. We are of the world and in it, engaged in and engaging the environment and context. The misleading but dominant metaphor of “seeing” as our basic relation to the world obscures this reality. Seeing places us too much on the outside looking in. Let’s replace sight with touch. If we think of ourselves as fundamentally touching and being touched, acting and being acted upon, and acting together with others, then we can grasp our fundamental openness. Each of us can come to be aware of ever larger contexts and environments in which we are embedded, as affecting and being affected. The local context we grasp is too narrow to contain or explain the scope of the openness of the self. The self we are and with which we can identify moves ever outward.72 We discover our thinking, emotion, and action as a product of the group and of ever-wider contexts and environments. We come to know ourselves by discovering ourselves beyond ourselves. The self is distributed.
The brain interacts with the world in ways that influence our perception itself. Pivotal findings of neuroscience, particularly those of Jaak Panksepp about the crucial role of action in perception,73 were anticipated by the philosopher Susan Hurley beginning in the 1990s, and especially in her first great breakthrough work, Consciousness in Action.74 Her insights go a long way toward explaining and establishing that the boundary between self and world is not set by our skin. Instead, patterns of interaction are what thinking is all about. Hurley argues that action is distributed among mind, body, and world rather than being attributable to the individual alone. And her conjecture has turned out to have a great deal of neurobiological evidence in support of it.
Hurley suggests that our standard (and generally unconscious) assumption that perception and action operate according to an input-output model is simply incorrect. This means that we do not simply have sensory faculties that bring us raw data from the world (input), which we make sense of according to some internal genetic or other program, and then act upon (output). We falsely presume that the mind is bounded in a way that separates it (and us) from the world, so that input and output are distinct processes. As Alva Noë puts it, "We are not world representers. . . . Our worlds are not confined to what is inside us, memorized, represented." Instead, "we live in extended worlds" that are "reachable" rather than "depicted."75 By trying to rid us of the presumption that we are "representers," Hurley is banishing our sense of ourselves as observers of the world rather than participants in it. Another way to put it is that we are not somehow separate from the world, our brains constructing a common intersubjective internal world by imposing standard patterns upon a chaos of disorganized perceptual data. In the standard Cartesian-Kantian story, these subjective and intersubjective constructions play out as films in our heads, and, as a result, they have a tenuous relation to the actual external world. We are locked in our heads. Hurley sets out to refute this standard view that the mind is an internal program playing out upon a world stage.
Part of the conceptual problem, she says, is that we tend to focus almost exclusively on the input-to-output direction, how the mind structures incoming percepts. Consequently, we tend to ignore the functions from output back to inputs, “the way environments, including linguistic environments, transform and reflect outputs from the human organism.” In other words, the world we encounter is not an unstructured arena of chaotic raw data but in fact is (pre-)structured by human practices, linguistic meanings, institutions, histories, cultures, and nature itself. So both directions are just as complex, Hurley remarks; not only that, but they “are causally continuous.” “To understand the mind’s place in the world,” she goes on, “we should study these complex dynamic processes as a system, not just the truncated internal portion of them.” Our place in the world is “a complex dynamic feedback system [that] includes not just functions from input to output, but also feedback functions from output to input, some internal to the organism [that is, from internal data about the body’s state back to the mind], others passing through the environment before returning.” The upshot of this approach, Hurley tells us, is that there are no “sharp causal boundaries either between mind and world or between perception and action.”
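Hurley's "feedback functions from output to input . . . passing through the environment before returning" can be sketched as a minimal closed loop. This toy model is my illustration, not Hurley's (the agent rule, the target value, and the update step are all arbitrary assumptions); it is meant only to show why truncating the loop at the organism's boundary misses what the system as a whole does:

```python
# Toy closed loop in Hurley's sense: the agent's output changes the
# environment, and the changed environment becomes the agent's next input.
def run_loop(steps, closed=True):
    env = 10.0            # an environmental quantity the agent senses
    history = []
    for _ in range(steps):
        percept = env                         # input: environment -> agent
        agent_output = 0.5 * (8.0 - percept)  # agent acts to nudge env toward 8
        if closed:
            env = env + agent_output          # output -> environment -> next input
        history.append(env)
    return history

closed = run_loop(20)
open_ = run_loop(20, closed=False)
# In the closed loop the environment settles near the agent's target;
# in the truncated loop, where output never feeds back, nothing changes.
print(round(closed[-1], 3), open_[-1])
```

The behavior (the environment settling at the target) belongs to neither the agent rule nor the environment alone; it is a property of the whole feedback circuit, which is Hurley's point about studying "these complex dynamic processes as a system."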
Hurley points out that we must not take at face value the notion that the mind provides a ubiquitous and knowable human structure for the raw environmental data coming in through the senses. That scenario, she says, is just as much based on unfounded faith—in this case, in the transparency and availability of the mind—as is the naive acceptance of the external world as being just as we perceive it. So the same kind of Cartesian-Kantian skepticism about the world ought to be directed at the mind itself. If we don’t take the world as given, why should we so take the mind? Why should we assume that the mind is subjectively available to us as providing reliable information or accurate readouts of our internal mental states? And why should we assume that the ways the mind perceives external data are ubiquitous across human beings? Why should we take our internal mental states naively, as we have learned not to do with our external senses?
Hurley also challenges the notion that the contents of our mind and the structuring of the mind are independent of the world, and the world we take in is independent of the mind. Her book is an extended argument for externalism, the view that the self is in the world and that self and environment are related in and as interacting open systems. What that means is that there is nothing that is either pure self or pure environment. Both are always interactively constituted. Hurley proposes that
the revolution that began with Kant’s arguments about perceptual experience should be carried through to agency. Action is no more pure output than perception is pure input. The whole of the Input-Output Picture should be rejected, not just half of it.
We get a new angle by making the ninety-degree shift: by making the focus of our scrutiny the perception/action cut rather than the mind/world cut.76
When we act, we create a relationship to the environmental context we inhabit, and that relationship both influences what we perceive, on one hand, and structures the mind's way of perceiving, on the other. For example, people who live in remote locations, build only round buildings, and hence have no experience of corners have difficulty seeing corners when they later encounter them, because their faculty of sight was not shaped in early life by interaction with right angles.
The self is not deeply hidden within, able to be discovered only by solipsistic introspection or surmised solely via indirect clues from others’ behavior. Instead, “the self is in the open, where it seems to be,” Hurley says. “It is a mistake to think that the processes in brains that make subjecthood and agenthood possible relocate subjecthood and agenthood internally. These processes make it possible for us familiar persons to be selves, embedded in the world, here where we seem to be. They don’t replace us with other hidden selves.”77 That is, it is a mistake to confuse the vehicle with the content. The fact that we have neural architecture that makes possible a sense of self does not mean that the content of that self is a product of that architecture alone.78 Nor are our minds completely determined by the world or totally passive to it, Hurley points out, for we can make mistakes about the way it is. So instead we can understand our internal neural architecture as making possible openness to the world and shaping by the world, along with its own shaping of the world. Person and world are relational, interactional, and also contextual.79
The themes of Hurley’s revised approach to the mind—decentralization, self-organizing systems, context dependence, feedback, emergence—have resonance in research programs in connectionism, dynamic systems theory, and artificial life. So they are not only good philosophy but also where neuropsychology is leading us.80 We might reflect that it’s taken this long to begin to banish Augustinianisms—which severed the human psyche from the body and from nature, and hence from both desire (from cognition, understood as “will,” from affect and emotion) and the world, natural and social—not just from Western cultural, philosophical, and of course religious presuppositions but also from the scientific ones that have driven research agendas. Hurley identified the problem but not its theological origins. She also pointed to its tyranny in ethical thinking when she wrote that “the Input-Output Picture that is here [in Consciousness in Action] criticized has had significant implicit influence in ethics.” She then raises a question: “What are the implications for ethics, and for social and political philosophy, if this [input-output] view cannot be justified?”81 She calls attention to the fact that the input-output view in ethics presupposes and bolsters the claim of free will, for it conceives us human beings as originating sources of causal chains—our minds impose an order upon the world. Hurley’s own view, in contrast, gives a death blow to free will since it envisions human beings as contextually embedded in natural and social causal networks and webs.82 Here we have come back full circle to Spinoza, who presciently anticipated the direction now being taken by cutting-edge contemporary philosophy of mind in response to the new brain sciences. 
And here we are rethinking agency and moral agency via systems theory, again with Spinoza as pioneer and guide, even though the formal discovery or invention of systems theory would not take place until more than three hundred years later.83
Spinoza anticipated externalism and rethought moral agency in terms of it. He of course didn’t call it that, but he envisioned nature as a network, a system of causes at all levels from culture to physics. Each person, animal, or thing was a location in the system of networks, a location that defined the point in the overall system that makes each thing what it is and serves as its definition and identity, an “essence,” a basic desire, that tries to maintain its integrity amidst change. A thing’s essence was not a static thing or content or a quasi-genetic program, Spinoza believed, but instead a “ratio,” which is to say it was itself also a system, an open system within open systems, and each system at every level strove to maintain its internal homeodynamic organization while being open to the larger systems, environments, that were its constitutive causes and to which it also contributed.
As a consequence, when it came to ethics, Spinoza conjectured that the move inward to discover the causes of how we each come to be this self and maintain ourselves as this self, and the move outward toward identification with the world, turn out not to be discrete processes but instead interwoven ones. For there is an irony at work here that gets us out of our solipsism indirectly, perhaps even surreptitiously: the process of filling the self with its unconscious content, bringing its un(self-)conscious and even nonconscious causes to light, entails connecting the self to all the contexts, environments, worlds, people, situations, culture and history and biography, and event and memory of which it is composed. So the self, in becoming a self and in becoming itself, acknowledges and becomes a more internally coherent, self-organizing internalization of its immediate world—Spinoza's name for that dynamic well-functioning of self as a coherent system is activity—and then of its more distant environments. The paradox is that to be truly yourself is to be your world, and ultimately the universe that created you. That was Spinoza's insight, long before neural self-maps ever were a gleam in any scientist's eye. To be this self was to be this point in the universe, Spinoza thought, and it took the whole universe up till now to produce any given "me." So to attain what Spinoza regarded as a state of personal autonomy or "freedom"—to achieve the spiritual and moral psychological aim of the Ethics—was to come to understand and own as self all that has come to make up this (biologically, psychologically, socially, culturally, historically, biographically, cosmologically, quantum mechanically, etc.) situated and constructed self.
The world thus is systematically introduced into the self as causes of the self and hence as self—but in the doing, the self now flips and sees itself in terms of its world, in terms of those parts of the world that appear now as personally constitutive. There is no limit to that centrifugal force. We are in principle at home in the universe, and our freedom lies in making that real to ourselves. The environment is not foreign but constitutive. So the irony of autonomy is that its achievement comes to fruition only in the embrace of the environment and of those things within the environment in which one now sees oneself, and progressively more so to infinity. To see aspects of the environment as self rather than only as other is to feel the world not as merely an external limit to the self but as constitutive of the self and finally as harboring the possibility of further self-formation in it and also of it. To include the world within the self (self-mapping) is to open the self to the world, as Antonio Damasio realized, and also, eventually, to extrude the self into our constitutive environments. It is to come to accept the self and embrace the world. It is to love the other as the self. Yet to embrace the world or aspects of it without the arduous path of self-discovery is to magically (or “imaginatively,” as Spinoza designated it) extrude and lose the self in the immediate environment, in a merger with the present situation and moment and world. A self that has not discovered its own unique environmental constitution is all too vulnerable to being filled by its immediate environment—or, if it rebels, by nothing much at all, or by chaotic impulses and wild shifts in identification and viewpoint. So the human moral danger is more often than not that of the fanaticism of the group mind. 
That marks the devolution of the systematicity of the self into its environment: the self relinquishes to the group its internal cohesive identity, its own "ratio" or "essence" or homeodynamic stability. That is the real moral danger, not the rare psychopath who has no stake in others or the world. Moreover, the danger of the individual merged into the group self and group mind plays itself out not only between external enemies but also, ominously, between subgroups hierarchically organized within a society.
Thus the group poses another danger as well: it is an internal danger rather than an external one. I am referring to the sacrifice of some component groups for the sake of others within a society. While subgroups are often designated as those to be sacrificed for the benefit of the whole group (the military, for example), it is also the case that certain subgroups are chosen to be sacrificed to other subgroups within the larger social group that is the society as a whole. The obvious example is slavery. The history of American slavery takes that form: the group as a whole includes both masters and slaves, but the hierarchical structure benefits only the elite group. We can think of the Holocaust in this way, too, for not only was it group against group (Germans against Jews) but also a hierarchical group’s war against a subgroup (Jews were Germans, too, after all) that those in power systematically denigrated and legally disenfranchised. The Milgram experiments can be interpreted along these lines as well. The experiments were set up so that those designated as the legitimate authorities and the top subgroup in the hierarchical order of subgroups—that is, the psychological researchers—were (or, actually, appeared to be) in effect sacrificing the lowest subgroup in the group hierarchy, the (mock) experimental subjects, on the altar of science. The middlemen were the subgroup of unwitting volunteers. They came into the social system and group structure as the means between the hierarchical elite and the lowest subgroup. It was they who were willing to sacrifice the subgroup for the seemingly legitimate purposes defined by the hierarchical elite.84 Seen from a wider perspective, the danger to individual members of the group, especially those who are part of subordinate subgroups, is as high as its danger to outside groups. Moreover, the danger from within is always present, whereas the dangers of war and genocide are more episodic. 
I am reminded of Spinoza's lament that human beings are far too prone to fight for their own slavery as if it were their salvation. His urgent cri de coeur was a plea to envision the moral route to human freedom. Malcolm Gladwell relates a news story that illustrates how common the danger to subgroups within the larger group can be even in normal peacetime situations. He reports that a series of Korean plane crashes occurred because junior pilots (a subordinate subgroup) were loath to criticize or even to call attention to the mistakes of senior pilots. The junior officers were disastrously engaged in what Gladwell says is technically called "mitigated speech," the kind of downplaying talk that people tend to use in deference to authority. As a result, hundreds of people died, including a number of pilots.85 The normative authority conferred upon those in a high subgroup position in the social hierarchy is a social urge so overwhelming that it has a tendency to trump life itself. If we are tempted to think that this is merely an ethnic or national issue, we should recall that the Milgram experiments had comparable results across cultures.
This story has a good ending and an instructive one. Korean Air recognized cultural deference to authority as a problem and called in David Greenberg of Delta Air Lines to help. Research has shown that the United States is low on a comparative scale of nations' relative deference to authority. Greenberg says that his team "took them out of their culture and re-normed them."86 The intervention was a success. What we should take away from this example is that intervention from the broader world and context can critique and revise internal group norms and the hierarchies among constituent subgroups of the larger group. It is just as much the internal environment that is in danger from a group as its external enemies—perhaps even more so, generally speaking. That was one lesson of the Stanford Prison Experiment, which also had its subgroup of internal victims rescued only because of an outside intervention. International human rights organizations attempt to intervene in this way across cultures. Intervention from a wider perspective and the world at large is a function of diversity and suggests how much it should be cherished, bolstered, and institutionalized. Diversity creates the possibility of groups, which are adaptive systems, opening themselves to ever-wider perspectives.
Human cognitive processes are inherently social, interactive, personal, biological, and neurological, which is to say that a variety of systems develop and depend on one another in complex ways.
—WILLIAM J. CLANCEY
The mind leaks out into the world, and cognitive activity is distributed across individuals and situations. This is not your grandmother’s metaphysics of mind: this is a brave new world. Why should anyone believe it? . . . One part of the answer lies in the promise of dynamical systems theory . . . as an approach to modeling cognition. . . . Insofar as the mind is a dynamical system, it is natural to think of it as extending not just into the body but also into the world. The result is a radical challenge to traditional ways of thinking about the mind, Cartesian internalism in particular.
—PHILIP ROBBINS AND MURAT AYDEDE
The insight that a person is an open system in relation to other open systems, natural and cultural, has begun to be rigorously articulated and theoretically worked out in the developing field of systems theory. Systems theory, which originated in the 1940s, derives its principles from physics, biology, and engineering, and was influenced by both cybernetics (the study of communication and control systems) and semantics (the study of meaning). Systems thinking seeks to encompass the viewpoints of the many different disciplinary approaches to the parts of the system being analyzed. So, "for example, when building a highway, one can consider it within a broader transportation system, an economic system, a city and regional plan, the environmental ecology, and so on." The different perspectives highlight different relationships, different parts, and different causal processes. The issue is how to understand a dynamic process at many levels of analysis and within a hierarchy of levels of analysis (the inorganic, the organic, the neurological, the symbolic, and the cultural, for example). So what are the identifiable features of complex systems? A complex system is defined as one "whose properties are not fully explained by linear interactions of component parts," that is, a system whose properties are not explainable merely by adding together the properties of its parts, and in which small changes can produce large effects. The whole is more than the sum of the parts.87 The systems perspective on human behavior is all about context; it takes into account "psychology, anthropology, sociology, ethology, biology, and neurology, and their specialized investigations of knowledge, language, and learning" in emphasizing the "contextual, dynamic, systemic, nonlocalized aspects of the mind, mental operations, identity, organizational behavior, and so on." The approach was well established in biology before it reached other disciplines.
Biologists realized that in order to come to understand the sustenance, development, and evolution of life, an organism could not be isolated from its environment nor a cell from the organism. “Systems thinking, involving notions of dynamic and emergent interactions, was necessary to relate the interactions of inherited phenotype, environmental factors, and the effect of learning.”88 The overall approach of systems thinking to human cognition and behavior is as follows:
An all-encompassing generalization is the perspective of complex systems. From an investigative standpoint, the one essential theoretical move is contextualization (perhaps stated as “antilocalization” . . .): we cannot locate meaning in the text, life in the cell, the person in the body, knowledge in the brain, a memory in a neuron. Rather, these are all active, dynamic processes, existing only in interactive behaviors of cultural, social, biological, and physical environment systems.
A self, according to this approach, is “self-organizing” and “unfolding” and always contextualized or “situated.”89
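The defining mark of a complex system noted above, that small changes can produce large effects, can be illustrated with a standard toy model. The logistic map is my choice of example, not one the text itself uses; it is the textbook minimal case of nonlinear dynamics in which two starting states differing by one part in a million soon diverge completely:

```python
# Illustration of "small changes can produce large effects": the logistic
# map x -> r * x * (1 - x), a minimal nonlinear dynamical system.
def logistic_trajectory(x0, r=3.9, steps=50):
    """Return the sequence of states starting from x0 (chaotic at r = 3.9)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)  # two initial states differing by
b = logistic_trajectory(0.200001)  # one part in a million
# Early on the trajectories are practically indistinguishable...
print(abs(a[1] - b[1]))
# ...but by the later steps they have fully decorrelated.
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))
```

No account that tracks the components "linearly," one at a time, predicts this behavior; it belongs to the system's dynamics as a whole, which is the antilocalization point of the passage above.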
Thinking about human behavior in terms of systems changes dramatically the way we conceive agency: what it means to act and even who is doing the acting. That is the conclusion of the cognitive neuroscientist Merlin Donald, who argues that although decision making "seems to be a very private thing: individualized, personal, and confined to the brain," when it is looked at from the standpoint of systems theory, we realize that culture is a major factor in how the brain self-organizes during development, both in its patterns of connectivity and in its large-scale functional architecture.90 So it is the system that makes the decision: "The mechanisms in such decisions must be regarded as hybrid systems in which both brain and culture play a role."91 Donald, of course, does not deny that decisions are made in individual brains. Nevertheless, he points out that "human brains . . . are closely interconnected with, and embedded in, the distributed networks of culture" that "define the decision-space."92
Neurobiological research is further elaborating how decision making can have a range of scopes, from a part of the brain to large groups of people. To discover who is actually acting in a given case, all the facts need to be taken into account and then analyzed from multiple standpoints, from the brain sciences to organizational behavior to culture and history, and so on. Only through this multidisciplinary approach can the attribution of agency and responsibility be accurately distributed across people and levels of organization and participation and authority. Christoph Engel and Wolf Singer raise the question of how many agents are involved in decision making, and offer this answer: “This perspective treats human decision makers as [possible] multiple agents [both internal and external].”93
Complex adaptive systems are quite different from most systems that have been studied scientifically. They exhibit coherence under change, via conditional action and anticipation, and they do so without central direction.
—JOHN H. HOLLAND
Complex adaptive systems [are] those that learn or evolve in the way that living systems do. A child learning a language, bacteria developing resistance to antibiotics, and the human scientific enterprise are all discussed as examples of complex adaptive systems.
—MURRAY GELL-MANN
The human person not only is an open system within others but also adapts. Complex adaptive systems are a special kind of system in which emergence and self-organization hold sway. In contrast, Murray Gell-Mann, Nobel laureate in physics and one of the founders of the science of complexity, points to galaxies and stars as examples of systems that are complex and evolve but are nonadaptive.94 Control in a complex adaptive system is decentralized and widely distributed rather than concentrated in a central authority. The patterns of activity arise or emerge from the interactions of the agents rather than from some overall plan. Nevertheless, there is dynamic stability: identifiable patterns that are neither utterly chaotic nor substantially fixed. These patterns evolve, changing over time as the system itself changes and evolves. It is the individual “agents” in the system that, from their location and environment, develop adaptive behavior.95 The system exhibits the same patterns as the whole at various scales within it. Learning is an important feature of complex adaptive systems even though no central consciousness is involved. And they are highly resilient. Ever-increasing diversity is an important feature of complex adaptive systems and crucial to their capacity to adapt and survive.
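The claim that coherent global patterns can emerge from purely local interaction, with no central controller, can be made concrete with a minimal sketch. The example below is my own illustration, not drawn from the sources cited here: each “agent” on a ring holds a number and repeatedly averages it with its two neighbors. No agent ever sees the whole system, yet the group converges to a shared value.

```python
def step(vals):
    """One round of purely local interaction: each agent replaces its
    value with the average of itself and its two ring neighbors."""
    n = len(vals)
    return [(vals[(i - 1) % n] + vals[i] + vals[(i + 1) % n]) / 3
            for i in range(n)]

# Ten agents with diverse starting "opinions": 0.0 through 9.0.
vals = [float(i) for i in range(10)]

for _ in range(500):
    vals = step(vals)

# The spread between the most extreme agents shrinks toward zero:
# global coherence emerges although no agent directed the outcome.
print(max(vals) - min(vals))
```

The same averaging scheme converges on any connected network; the point is only that order at the level of the whole arises from local rules, not from a plan held anywhere in the system.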
The hallmark of complex adaptive systems is perpetual novelty, according to John Holland. Diversity arises from how this kind of system recycles its resources.96 For example, imagine a network with an ore extractor, a steel producer, and a car manufacturer. If three-quarters of the steel from scrapped cars is recycled, going back to the steel producer and then to the manufacturer and coming out again as new cars, then the same input of ore supports more steel at each pass through the system. A tropical rain forest illustrates the point perhaps even more dramatically, and from this second example we can see how diversity enters in. The soil in a tropical rain forest is quite poor because minerals are constantly leached out of it and into the rivers by heavy rains. So tropical rain forests are terrible places for agriculture. Nevertheless, the rain forest is hugely rich both in numbers of living beings and in the diversity of species. Why and how is this so? Rain forests depend for their fecundity on the recycling of critical resources. The rain forest is not like a simple hierarchy in which the top predator consumes the resources. Instead, “cycle after cycle traps the resources so that they are used and reused before they finally make their way into the river system.” This recycling is so effective and creates such richness in the rain forest that ten thousand different insect species may inhabit a single tree! The recycling effect produces resources available for use in new environmental niches, and these niches are filled by increasingly diverse species: “each new adaptation opens the possibility for further interactions and new niches.”97 A complex adaptive system does not settle into locked-in patterns but keeps producing change and self-correction.
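Holland’s recycling arithmetic can be made explicit. If a fraction r of a resource is recaptured at each pass, one unit of fresh input supports 1 + r + r² + . . . = 1/(1 − r) units of total use. A brief sketch follows; the function name and numbers are my own, chosen for illustration:

```python
def total_throughput(fresh_input, recycle_fraction, passes=None):
    """Total resource use supported by `fresh_input` units of new material
    when `recycle_fraction` of the resource is recaptured at each pass.

    With unlimited passes the geometric series sums to
    fresh_input / (1 - recycle_fraction).
    """
    r = recycle_fraction
    if passes is None:
        return fresh_input / (1 - r)
    return fresh_input * sum(r**k for k in range(passes + 1))

# Three-quarters of the steel recycled: each ton of ore-derived steel
# supports four tons of steel moving through the system in the limit.
print(total_throughput(1.0, 0.75))
print(total_throughput(1.0, 0.75, passes=10))
```

Even a modest recycling fraction multiplies effective resources, which is why, on Holland’s account, recycling cycles in a rain forest can sustain so much more life than the poor soil alone would suggest.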
It is the particular niche that defines the kind of diversity that will arise. In evolution, this results in convergence. A particular niche produces species that exploit it in very specific ways, and if one such species disappears, another species appears, possibly entirely unrelated to the first, that nevertheless resembles it in the way it fits into the niche. Holland gives as examples the prehistoric ichthyosaur and the modern porpoise, two species in no way related to each other yet similar in their habits (their prey, for example) and their form. That the niche determines the kind of diversity that arises to occupy it is a stellar example of what is meant by externalism: the context determines the individual.
To understand what any item in this kind of system is, we need to look at its location, function, and interactions, rather than primarily inside it. What you get inside a complex adaptive system is what the outside, the environment, has made it—through either the “fast” externalism of changes in behavior or the “slow” externalism of evolutionary adaptation. While the quintessential example is an ecosystem, two other examples of the context defining the niche are New York City, which has thousands of different kinds of wholesale and retail businesses, and the mammalian brain, with its patterns of neurons organized into hierarchies and regions.98 The overall system and its dynamic relations define its component building blocks. The dynamism depends upon ongoing diversity.
Each of us humans functions in many different ways as a complex adaptive system. . . . When you are investing in a financial market, you and all the other investors are individual complex adaptive systems participating in a collective entity that is evolving through the efforts of all the component parts to improve their positions or at least survive economically. Such collective entities can be complex adaptive systems themselves. So can organized collective entities such as business firms or tribes. Humanity as a whole is not yet very well organized, but it already functions to a considerable extent as a complex adaptive system.
—MURRAY GELL-MANN
Murray Gell-Mann comments that the “human race” can be thought of as a complex adaptive system engaged in “evolving ways of living in greater harmony with itself and with the other organisms that share the planet Earth.” At different scales, “a society developing new customs” and even an individual “artist getting a creative idea” are complex adaptive systems both within and engaged with other complex adaptive systems.99 How can thinking in terms of complex adaptive systems help us rethink moral agency and come up with ways to make societies and all kinds of groups function more ethically? Crucially, complex adaptive systems theory suggests that to change the person we need to look at the system. Interventions in context and environment, rather than in the brain or mind of the person (for example, through drugs or the training of the individual will), seem to be the place to start.
Diversity is crucial, too. We need to think about diversity and its role in complex adaptive systems to ensure their ongoing vitality and continuing evolution. In social systems, diversity plays out as the introduction of diverse people, practices, and points of view that challenge and disrupt the stable social system, sparking a more complex and inclusive reordering and reintegration. It is a trade-off between closure to variation, with the resulting static internal coherence, on one hand, and on the other, an openness so great that the system cannot accommodate the differences fast enough through internal systemic reintegration. Spinoza advocated a systems ethics in which the self perpetually reorganizes at the brink or “edge” of chaos.100 This means that he advocated a personal maximal openness to others and to the world while retaining the capacity for dynamic self-organization. So at best, in this view, each of us ought to cultivate an openness to others that doesn’t overwhelm us but can be integrated into our sense of self and what we care about: through understanding, an increased capacity for empathic identification, standing in the other’s shoes, perspective taking, and openness to critique and self-critique. As a friend puts it, it’s not just about tolerating differences; it’s also about finding in oneself the capacity to enlarge one’s empathic acceptance. This involves the ability to learn from others, both about them and about one’s own self from another’s perspective. So ongoing diversity is necessary in order to overcome personal self-deception, dogmatism, and denial—our most ubiquitous and corruptive moral dangers. Professor Pat Longstaff of the Newhouse School at Syracuse University points out that diversity enhances the resilience of systems, so that diverse groups and institutions are more likely than homogeneous ones to survive under threat or in times of adversity.
We should now rethink the problem of selfiness from a systems perspective: it is the attempt to maintain a narrow systematicity and coherence that won’t allow in challenging data from others or even from the implicit un(self-)conscious meanings and intentions of our own actions. Selfiness tends toward the refusal to acknowledge that one is a part of larger cultural, social, and natural systems. It is the arrogance of the myth of self-creation, of free will. The overcoming of narrow selfiness of this kind in an expansive self-coherence that enlarges the self to include more of the world and others is a lofty ideal for the individual and a noble and difficult path. It is also a rare one, as Spinoza pointed out.101
On the societal level, more is called for. Protecting diversity, especially diverse opinions, is absolutely vital to the possibility of social moral agency, for it makes available critical perspectives that foster both whistle-blowing and creative thinking and solutions. Whistle-blowing and creative thinking enable social systems to adapt to changing environments instead of stagnating. Just think of oil companies or car companies that fail to adapt to new energy sources, or of large corporations that corrupt scientists and government agencies to the point where they deny global climate change. Our human survival, as well as our moral integrity, lies in the balance. Groups and individuals alike are the subjects of moral integrity or corruption. We’re all systems in need of the critique and larger perspectives that can come from the outside. We all need to foster moral life at the edge of chaos. We cannot allow self-systems and group-self-systems to push for a selfy (that is, self-serving and hypocritical) coherence that eliminates—and so allows us to deny—the challenge of external perspectives and critiques at the price of our moral integrity, the betterment of the world, the health of the natural environment, and even our survival as a species.
True, humanity never runs out of claims of what sets it apart, but it is a rare uniqueness claim that holds up for over a decade. This is why we don’t hear anymore that only humans make tools, imitate, think ahead, have culture, are self-aware, or adopt another’s point of view. If we consider our species without letting ourselves be blinded by the technical advances of the last few millennia, we see a creature of flesh and blood with a brain that, albeit three times larger than a chimpanzee’s, doesn’t contain any new parts. Even our vaunted prefrontal cortex turns out to be of typical size: recent neuron-counting techniques classify the human brain as a linearly scaled-up monkey brain. . . . No one doubts the superiority of our intellect, but we have no basic wants or needs that are not also present in our close relatives. I interact on a daily basis with monkeys and apes, which just like us strive for power, enjoy sex, want security and affection, kill over territory, and value trust and cooperation. Yes, we use cell phones and fly airplanes, but our psychological make-up remains that of a social primate.
—FRANS DE WAAL
We are social primates. We are defined by our niche and by our evolutionary inheritance. We are different in degree but not in kind from other primates in our brain and behavior, capacities and needs, and desires and tendencies. Aristotle was right: human beings are social animals. Cooperation is the norm, and it occurs on a grand scale in human beings. Human beings are “groupy” to an astounding degree, as well as selfy. From an evolutionary biological standpoint as well as from a systems theory standpoint, the group, not the individual, is often the significant actor. New thinking about evolution is now focusing not just on competition among individual members of a species as the driver of selective pressures, but crucially on group-against-group competition within species and between species. Some biologists even think cooperation is as important and basic an evolutionary principle as competition.
Evolutionary biologists Martin A. Nowak, Corina E. Tarnita, and Edward O. Wilson argue that kin selection, natural selection favoring close kin, is proving to be unfounded.102 What that means is that altruistic behavior in living beings (specifically, in insects of various kinds) is not proving to be tied to kinship. Instead, behavior benefiting the group rather than the singular individual is far more widespread, and its scope is broader than mere kinship relations. Altruism is not driven by an urge to pass on the genes of related individuals. Since the 1950s the standard theory has been that close kin are the most likely ones to give up their lives altruistically so that those who share their genes are able to survive and reproduce. The sparse evidence available at that time seemed to prove the case. Yet with much more evidence now accumulating, what was known early on has turned out to represent not the norm but the exception. It is now known that most eusocial (commonly called “altruistic”) behavior in insects, for example, neither takes place between kin nor works to the particular genetic advantage of kin. There is even evidence that genetic variation is being favored over genetic likeness: in certain types of ants, for example, it is colony-level selection that obtains, and it favors genetic variability rather than similarity.103 The authors continue: “Other selection forces working against the binding role of close pedigree kinship are the disruptive impact of nepotism within colonies, and the overall negative effects associated with inbreeding. Most of these countervailing forces act through group selection or, for eusocial insects in particular, through between colony selection.”104 So there is increasing evidence that evolutionary competition takes place not just between individuals but also between groups within colonies and between colonies.
Moreover, the authors note that “in some cases, social behaviour has been causally linked through all the levels of biological organization from molecule to ecosystem.”105
Having challenged kin selection as the primary explanation of how cooperation evolves, the authors put forth their own “alternative theory of eusocial evolution.”106 Animal groups assemble in all kinds of ways, the authors point out, from local nest and food sites to families staying together to flocks following a leader or even just local proximity. “What counts then,” they say, “is the cohesion and persistence of the group.”107 According to Martin Nowak, director of the Program for Evolutionary Dynamics at Harvard University, cooperation is as vital and ubiquitously operative an evolutionary principle as competition. Individuals themselves, after all, are the result of cooperation among cells and among organs, at every level. Evolution favored cooperation in order to create organisms.
The upshot of the evidence presented in this chapter and throughout the book is that sociality trumps individuality in human beings as a species. It is an idiosyncrasy of the particular religio-cultural trajectory of the Latin West that prosocial behavior, moral agency, came to be defined in terms of the individual standing beyond belonging—willing and choosing and deciding from a locus of self that could free itself from determination by group, context, world, and natural endowment and then act. An alternative understanding of what it means and takes to be moral seems to have appeared on the horizon: a conception of ethics in terms of shaping, improving, and enlarging a fundamental human natural sociality. Sociality is the default position. On this view, individuality—freedom in some new form—is a difficult and heady achievement, and one accomplished only through the embrace of broader belonging and wider critical perspectives.