ONE

Genealogy of the Cerebral Subject

What “Is” the Cerebral Subject?

It may well be that nobody believes they literally are their brain. But when influential people proclaim it, we must take them at their word. Together with the brain in a vat, brain transplantation is one of the favorite thought experiments of philosophers of personal identity (Ferret 1993).1 It is usual to observe that if the brain of A were transplanted into the body of B, then A would gain a new body, rather than B a new brain. Commenting on that commonplace, Michael Gazzaniga (2005, 31), a leading neuroscientist, serenely asserted: “This simple fact makes it clear that you are your brain.” Yet what we have here is neither a fact nor anything simple; it is a profession of faith. The neurophilosopher Paul Churchland “carries in his wallet a colour picture of his wife. Nothing surprising in it,” remarks the sociologist Bruno Latour, “except it is the colour scan of his wife’s brain! Not only that,” he continues, “but Paul insists adamantly that in a few years we will all be recognizing the inner shapes of the brain structure with a more loving gaze than noses, skins and eyes!” (Latour 2004, 224). Gazzaniga, Churchland, and many others who make similar claims express a widespread belief.2 So widespread indeed, that saying, as the New York Times cultural commentator David Brooks did in June 2013, that “the brain is not the mind” immediately generates a flutter of suspicion about a religious and antiquated—even reactionary—dualistic antineuroscience backlash as well as self-confident reassertions of the assumption that “the mind is what the brain does” (Brooks 2013, Marcus 2013, Waldman 2013). The examples could be multiplied.

What is at stake here? Neither science nor ascertainable facts but an idea of the human being, the anthropological figure of the cerebral subject—an “ideology” in the plain sense of a set of notions, beliefs, values, interests, and ideals. Like any ideology, this one offers varieties and internal debates and inspires practices that are not necessarily compatible. Yet there is unity in diversity, so that the cerebral subject allows for a fairly unequivocal characterization, and even for a sort of formula: “Person P is identical with person P* if and only if P and P* have one and the same functional brain” (Ferret 1993, 79).3 To have the same brain is to be the same person, and the brain is the only part of the body we need in order to be ourselves. As the philosopher Roland Puccetti (1969, 70) memorably put it: “Where goes a brain, there goes a person.” Puccetti was not saying that a person is his or her brain but that insofar as the brain is the physical basis of personhood, one cannot be separated from the other. The brain is the somatic limit of the self, so that, as regards the body they need to be persons, humans are specified by the property of “brainhood” (Vidal 2009a), that is, the property or quality of being, rather than simply having, a brain.

Now we must go beyond definitions and ask, first, whether there are any real, concrete cerebral subjects and, second, what magnitude (from hegemonic to inconsequential) the brainhood ideology may actually be said to have. As a first approximation, there is one answer to both questions, and it is: It depends. Yes, real people can see themselves as cerebral subjects and behave accordingly—but not necessarily all the time. The weight of the ideology depends on contexts and criteria.

The reason for thinking in terms of a “subject” is that views about what humans essentially are go hand in hand with concrete decisions about how to study them and how to treat them, and these decisions implicate processes of “subjectivation” (Foucault 1983; “subjectification” is sometimes also used). These are processes involved in the production of ways of being, in forms of reflexivity and “technologies of the self” (Foucault 1988); they make individuals what they are and contribute to shaping their behavior and experience. In our case, then, they are processes whereby people think of themselves and others as primarily determined by their brains—and act, feel, and believe accordingly.4 Individuation and subjectivation are rooted in sociohistorical contexts and, as we shall see, do not exclude the coexistence of different anthropological figures: cerebral selves, psychological selves, chemical selves, and others.

At the individual level, cerebral subject is not a label that can be permanently affixed to anyone but is rather a way of denoting notions and practices that may be operative in people’s lives some of the time. In practice, no one conception of the human is monolithic or hegemonic in a given culture, and persons are not one kind of subject alone. For example, the developmental biologist Scott F. Gilbert (1995) contrasted four biological views of the body/self—the neural, immunological, genetic, and phenotypic—and put them in correspondence with different models of the body politic and different views of science. He thus highlighted how political debates mirror disputes over which body, and consequently which self, are the true body and self. “Immune selfhood” has a very rich history of its own (Tauber 2012), but writing in the mid-1990s, Gilbert noted that the genetic self had been recently winning over the other selves. These may be theoretical constructs, but they have real consequences. Thus, as Gilbert points out, in controversies over abortion, the self may be defined genetically (by the fusion of nuclei at conception), neurally (by the onset of the electroencephalographic pattern or some other neurodevelopmental criterion), or immunologically (by the separation of mother and child at birth). In each case, when affected by concrete medical decisions, individuals accomplish the “self” whose definitional criteria were used to reach the decisions.

Thus, it makes sense to refer to a “genetic self” when people’s life and self-concept are largely defined by genetic conditions or by genetic testing, screening, and treatment (e.g., Peters, Djurdjinovic, and Baker 1999). Individuals are unlikely to reduce themselves and others to their genetic makeup. However, scientific authorities may suggest such a reduction in statements epitomizing beliefs that permeate a research field, inspire its quest, legitimize its promises, nourish expectations, and orient policy. This was the case when James D. Watson, the codiscoverer of the structure of DNA, uttered for Time an assertion that has been quoted hundreds of times: “We used to think our fate is in our stars. Today we know, in large measure, our fate is in our genes” (Jaroff 1989). The oracular claim was supposed to be universally valid, independently of particular individuals’ sense of self. By the time the Human Genome Project was completed in 2004, the gene had long been a cultural icon; the HGP itself participated in the hype that the sociologists of science Dorothy Nelkin and M. Susan Lindee (1995) called the “DNA Mystique”—one that involved a basic posture of genetic essentialism and offered an overly optimistic picture of the future clinical applications of genetic research (Hubbard and Wald 1993).

In spite of the increasing convergence of neuroscience and genomics, by the late 1990s the brain had largely supplanted the genome as the source of foundational explanations for human features and behaviors as well as the source of scientific hype. Such a shift may appear justified. Since the brain and the nervous system seem more directly relevant than genetics to many of the philosophical and ethical questions raised by the Western philosophical tradition, including issues of personal identity, they are more likely to be felt as constitutive of one’s self. Some occasions may prompt or sustain such a special relation. Thus, while people with genetic afflictions have been observed to “hiss and boo at pictures of genes or enzymes that cause these afflictions,” sufferers of mental illnesses react to brain images of patients diagnosed with depression or schizophrenia with “care and concern,” as if the image represented both the affliction and “the suffering of the afflicted” (Dumit 2003, 44–45).

As we shall see, such differences in attitude, as well as the precedence of brain over genes as far as human individuality is concerned, have deep roots in the history of notions of personal identity. Yet, again, this does not mean that brainhood is hegemonic. For example, on the basis of ethnographic research in a neuro-oncology clinic, the sociologist of science Sky Gross (2011) shows that while most brain tumor patients admit that the brain is the seat of “who they are,” they tend to consider it just another diseased organ. We must insist on this point, to which we return below, because there has been concern about the empirical accuracy and the interpretive traction of “totalising accounts of the neurological as determining subjectivity, as if the brain is the epicentre of personhood” (Pickersgill, Cunningham-Burley, and Martin 2011, 362).

Notions such as cerebral subject, brainhood, or neurochemical self are not meant to suggest that a neurobiological perspective dictates views of subjectivity always and absolutely but that, in some times and contexts, it effectively does, occasionally at a very large scale. The sociologist Nikolas Rose’s example for neurochemical selves is the well-documented fact that millions of people around the world have come to think about sadness “as a condition called ‘depression’ caused by a chemical imbalance in the brain and amenable to treatment by drugs that would ‘rebalance’ these chemicals” (Rose 2003, 46; see here Chapter 3). However, as with “genetic self,” it should be obvious that, in real life, everyday ontologies (in the loose sense of mainly implicit “theories about being”) coexist, both inside a society and within a single individual. We shift registers in our ways of acting, experiencing, and interacting as well as thinking and speaking about ourselves and others, and this is why psychotherapies and antidepressants can live happily together, if perhaps not “ever after.”

The coexistence of such ontologies and their related practices corresponds to what happens in the diachronic and historical dimension. When a phenomenon or area of knowledge is neurologized, it does not ipso facto cease to be what it previously or otherwise was. For example, in the neurobics industry examined below, “brain jogging” simply translates into training the mind, and the exercises proposed are basically the same as those long peddled to improve mental capacities. Nevertheless, when these exercises are relabeled neurobics, they realize the ideology of the cerebral subject. It may be a superficial instantiation of that ideology, where the neuro is no more than a marketing gimmick. That, however, does not alter the fact that what is sold and bought belongs to a neuro business based on people believing (or at least being told) that they are essentially their brains.

In a medical context, individuals may share a condition but not its interpretation. For example, in her study of bipolar disorder patients, the anthropologist Emily Martin (2009) describes the clash between a dominant reductionist model and the individuals who challenged the idea that neurobiology sufficed to explain their experience. Grassroots diversity thus coexists with a more homogeneous official discourse. As is well known, much of psychiatry, including scientists at the head of major national mental health agencies, asserts that there are no mental diseases, only brain diseases. Different consequences could follow—one being an emphasis on pharmacological medication and a restriction of access to psychotherapies, with a huge impact on people’s lives. A development such as the neurodiversity movement (Chapter 3 here) can only happen in a world where “mental disorders” have been redefined as “brain disorders that primarily affect emotion, higher cognition and executive function” (Hyman 2007, 725). In such a context, psychiatric patients are approached mainly as cerebral subjects, and this may contribute to modulating their self-understanding and how they live their lives.

However, the neuroscientific consensus does not automatically translate into public consent, and research confirms commonsense intuitions about the variety and coexistence of views and practices of the self. Emily Martin (2010, 367) noted that the uptake of brain-based explanations outside the neurosciences and in the wider public is “uneven” and that there is no full takeover by “a newly dominant paradigm.” Such heterogeneity exists side by side with the development of brain-centered interventions in medicine, in the workplace, and in schools—interventions that may take place independently of how particular individuals understand themselves.

The sociologist Martin Pickersgill and his colleagues (2011) investigated how people draw on neuroscience and neuro ideas to articulate self-understanding. Working with patients suffering from epilepsy, head injury, and dementia as well as with neuroscientists and other professional groups (teachers, counselors, clergy, and foster care workers), they showed that individuals turn their attention to (popular) neuroscience mainly after some kind of neurological event, for example, a brain hemorrhage. This contingent interest, however, does not imply attributing to neuroscience an absolute capacity to define or explain subjectivity. Overall, attitudes are governed by pragmatism and personal relevance; rather than altering notions and practices of the self, neuroscientific concepts “seemed to simply substantiate ideas already held by individuals.” The brain thus emerges “as an object of mundane significance,” which sometimes helps one understand oneself but is “often far from salient to subjective experience” (Pickersgill, Cunningham-Burley, and Martin 2011, 358, 361–362). Using online questionnaires with Dutch adults diagnosed with ADHD, the sociologists Christian Broer and Marjolijn Heerings (2013) also noticed that although those individuals were interested in neurobiological explanations, they did not reduce their condition to a brain phenomenon. In the framework of the Dutch tradition of public debate and dissent over mental health issues, neurobiology did not colonize subjectivity and was invoked in different ways: as explanation or excuse but also as opening the possibility of governing the self “in the name of the brain” (Rose and Abi-Rached 2013, 8). The same study also documented parallel discourses of self-regulation that did not rely on “brain talk” (Broer and Heerings 2013, 61). In Canada, adults diagnosed with major depression or bipolar disorder were asked about the potential role of neuroimages in stigma mitigation, moral explanations of mental illnesses, and the legitimation of psychiatric symptoms. The resulting interviews show the complex and ambivalent ways in which individuals integrate brain-based notions of mental disorders into their self-understanding; some embraced neurobiological explanations of their disorder yet struggled against pharmaceutical treatments (Buchman et al. 2013).

Studies with other populations produce similar results. Adolescents’ explanations of their own behaviors and mental health issues emphasize personal, familial, and social contexts, rarely incorporating the brain or biology (Choudhury, McKinney, and Merten 2012). This may be partly attributable to a lack of information. When informed, however, teens do not refuse to include biological factors in their understanding of adolescent behavior. Rather, confronted with an overwhelmingly negative view of the “teenage brain” as defined by the incapacity to exert control over high-risk pleasure-seeking behaviors or by a deficit in the synchronization of cognition and affect (e.g., Steinberg 2008), they call for neuroscience to contribute to a positive view of their stage of life and, in any case, do not generally see behavior in purely biological terms. In turn, on the basis of conversations with three groups (undiagnosed, diagnosed with ADHD and medicated, or diagnosed but not medicated), Ilina Singh (2013) described how children, including those in the latter two groups, did not subordinate their “I” to brain-based explanations but tended to depict the role of the brain in their lives in ways that emphasized personal agency. She thus confirmed that encounters with neuroscientific discourses or technologies do not necessarily cerebralize subjectivity. Similarly, fieldwork in a laboratory conducting fMRI research with children diagnosed with ADHD, learning disabilities, autism, and Tourette syndrome documented how subjects “appropriate lab-based descriptions of neurological difference to their own purposes, claiming a positive identity for themselves,” and how “the effects of laboratory research and the metaphors used to describe them may serve expansive purposes in the practices of those who see their subjectivity embedded in research findings” (Rapp 2011, 3, 22).

In a review published in 2013, Cliodhna O’Connor and Helene Joffe examined the empirical evidence for three frequent claims: that neuroscience fosters a conception of the self based in biology, that it promotes conceptions of individual fate as predetermined, and that it attenuates the stigma attached to particular social categories. They concluded that “claims that neuroscience will dramatically alter people’s relations with their selves, others and the world are overstated. In many cases, neuroscientific ideas have been assimilated in ways that perpetuate rather than challenge existing modes of understanding” (O’Connor and Joffe 2013, 262). Such bricolage will not surprise historians, who are used to the intertwining of continuities and discontinuities. Such findings are nonetheless valuable for deflating fantasies about the subjective impact of the neuro and thus for disrupting “over-theorised accounts of the impact of ideas about the brain on personhood” (Pickersgill, Cunningham-Burley, and Martin 2011, 362).

A lot of this sociological literature has referred to our ideas about brainhood and the cerebral subject. We are thankful for such references but must also point out some misconceptions. One of us (FV) has been described as “one of the most outspoken critics of a cultural hegemony of the ‘neuro’ ” (Besser 2013, 48). However, arguing that the neuroscientific level of explanation is not always the most appropriate or questioning claims that the neurosciences will radically alter our view of the human is not the same thing as maintaining that the neuro is hegemonic. Another misinterpretation concerns the level at which the neuro exerts its power. Notions of the self and identity are not limited to self-conceptions, which is what the sociological research we just mentioned is about. When, to give just one example, the director of the U.S. National Institute of Mental Health (NIMH) proclaims that illnesses categorized as “mental” or “behavioral” actually are brain disorders, that diagnoses should be aligned with neural systems, and that psychiatry must become a neuroscientific discipline (e.g., Insel 2012, Insel and Quirion 2005), his statements reflect a position that, regardless of its explicit incorporation into people’s self-concept, regulates public health policy and the allocation of resources. Whether individuals like it or not, NIMH considers them cerebral subjects, and that has a significant effect on their lives—and even more so since Thomas Insel, NIMH’s director for over a decade, became in 2015 head of the new life sciences unit of Alphabet, the company better known as Google (Regalado 2015). If that were not the case, there would be no debate around these issues.

Finally, because our focus is indeed on recent and contemporary contexts, our temporal perspective has been misapprehended. We grasp the scope of the confusion when we read that “the contemporary salience of the brain does not mark the emergence of new conceptions of personhood as ‘brainhood’ (as suggested by Vidal 2009[a])” (Rose and Abi-Rached 2013, 22). For the argument was, precisely, that the cerebral subject as an anthropological figure is not attributable to the contemporary prominence of the brain, nor is it anything “natural,” but exactly the other way around: the cerebral subject was enabled by an early modern reconceptualization of personal identity, independently of any naturalistic knowledge about the brain.5

The Cerebral Subject in the Longue Durée

Like all historical phenomena, the emergence of the cerebral subject is multilayered and overdetermined, and it involves different time scales. From the perspective of the recent past, the appearance of the “neurochemical self” has been considered “one element of a more widespread mutation in which we in the west, most especially in the United States, have come to understand our minds and selves in terms of our brains and bodies” (Rose 2004, 90). The conception of health and illness in terms of treatable bodily malfunctions is linked to a “more profound transformation in personhood,” whereby our sense of ourselves as psychological individuals is “supplemented or displaced” (109) by a tendency to redefine crucial aspects of the self in bodily terms. Such a turn toward “somatic individuality” constitutes “a shift in the presuppositions about human beings that are embedded in and underpin particular practices” in human genetics, molecular biology, and genetic medicine and biotechnology (Novas and Rose 2000, 485–486).

Parallel to the rise of somatic individuality, Nikolas Rose and Joelle Abi-Rached identify the emergence, in the 1960s, of a “neuromolecular gaze” resulting from the hybridization “of different styles of thought, practices and knowledges in the investigation of the brain, mind and behaviour and the introduction of a reductionist and predominantly molecular approach to the realm of the nervous system” (Abi-Rached and Rose 2010, 31). Such an “epistemological shift,” they suggest, “was accompanied by a shift in the mode of governance; with the state, the industry and the scientific community gathering around the same object of interest (‘the brain’) albeit with different aims, drives, expectations, and motivations” (26).

While such narratives depict the ascendancy of the brain and the scientific, political, and institutional contexts that have sustained it since the mid–twentieth century, they leave open the question: Why the brain? We have already suggested an apparently obvious answer, namely that the brain seems most directly relevant to many of the issues that, in the Western philosophical, moral, and political traditions, have been central for questions of personal identity. This answer, however, implies that scientific discoveries about the brain have inspired views about personhood and attributes to the advancement of science the choice of the brain as the organ of the soul/self. This is a widespread view. To give just one example, a distinguished specialist of medical humanities explains that

anatomical and physiological understandings of the structure and function of the brain have further established it as the “seat of the soul” because of an increased understanding of its cognitive powers. As an organ of reflection, meditation, and memory, the brain becomes synonymous with what defines the self through the existence of consciousness—the mind. (Dolan 2007, 2)

As historical narrative, this description is untenable. The brain did not become the seat of the soul because it was better known but because, at a certain point, the self was defined in terms of functions that were associated with processes located inside the head. Obviously we are not advancing a radical constructionist argument according to which the choice of the brain rather than the ankle as the organ of thought is purely “ideological.” After all, head injuries have long been linked to alterations of personality, cognition, and emotion. The point is rather that by naturalizing historically contingent definitions of self and personhood, the received accounts turn the metaphysical claim that “we are our brains” into a factual statement.

A longer-term perspective helps undo such an illusion. At the same time, it suggests that the preeminence of the brain is more deeply rooted and results from a more protracted history than is usually suggested.

To begin with, brainhood is rooted in a Western context, albeit now almost universally disseminated through the circulation of originally European forms of knowledge and systems of values. Let us look briefly at a major instance of such globalization: the definition of death according to brain-based criteria, which has been increasingly accepted since the late 1960s and predominates everywhere in clinical practice (De Grazia 2011). National legislation and medical guidelines generally allow cardiopulmonary criteria to apply but tend to define death on the basis of irreversible loss of brainstem or (more commonly) whole-brain function. Present controversies concern the coherence of the brain death concept, the extent of necessary neuronal damage, and (especially within the medical profession) the tests required to prove irreversibility (Bernat 2009, 2013). A survey of eighty countries published in 2002 documented the existence of practice guidelines for brain death in seventy countries but also considerable differences in diagnostic procedures (Wijdicks 2002); the variability persists and keeps prompting calls for an elusive worldwide consensus (Smith 2012).

Arguments from religious traditions modulate both attitudes and practices (Bernat 2005, Bülow et al. 2008). The brain death definition is officially accepted in the various Christian churches as well as in Judaism and Islam; some groups in all traditions oppose it. In Japan, where the 1997 Organ Transplant Law admits it, a significant proportion of people reject it and do not think that “the essence of humans lies in self-consciousness and rationality” (Morioka 2001, 44). Thus, as the medical anthropologist Margaret Lock (2002, 8) explains, in Japan “the cognitive status of the patient is of secondary importance to most people,” and even if an individual suffers from irreversible brain damage and loss of consciousness, many people do not recognize him or her as dead.

In Western medical and philosophical contexts there is an ongoing debate concerning persons who are in VS (vegetative state, now also called “unresponsive wakefulness syndrome”; Laureys et al. 2010). These persons have irreversibly lost the capacity for consciousness but retain some autonomic functions, such as unaided breathing. In the eyes of relatives and physicians, their ontological status is unclear—they seem neither distinctly alive nor unequivocally dead (Holland, Kitzinger, and Kitzinger 2014). While many people also tend to see early death as better than being in VS, positions “may hinge upon our tendency to see minds and bodies as distinct.… Advocates of terminating life support may frame vegetative patients as bodies, while those who advocate continued life support may highlight their mental capacities” (Gray et al. 2011). At a more philosophical level, it has been argued that the default position of not withdrawing artificial nutrition and hydration should be reversed: Insofar as there is no evidence that VS patients have a “compelling interest in being kept alive,” we “commit a worse violation of autonomy” by maintaining them alive than by not doing so (Constable 2012, 157, 160).

We mention these debates because they highlight two of the features that permeate the notion of personhood as brainhood: dualism (at least insofar as positions are framed according to a distinction of mind and body) and an emphasis on autonomy. But there are other instances, for example, the way in which the courts have treated conjoined twins as autonomous individuals competing for scarce resources (Barilan 2002, 2003). Discussions around brain death in the contexts in which it is accepted go in the same direction: the “higher-brain” death criterion has not been adopted as the legal standard anywhere, but the fact that it is theoretically envisaged underlines the kinds of features that are considered essential for personhood.

What counts for grasping the cultural significance of the cerebral subject is the fact that identifying the end of personhood with the loss of higher brain functions became imaginable. It implies that a state of the brain may define not only the end of a person’s life but also the beginning (Sass 1989). If neuromaturation could provide biomedical indicators of personhood, then, as human persons distinct from merely living organisms, we would exist essentially from “brain life” to “brain death” (Jones 1989, 1998). As is well known, the redefinition of death as “brain death” was prompted by advances in life-sustaining technology and related challenges in obtaining organs for transplantation. These issues were placed center stage in the 1968 Harvard Medical School landmark report that proposed “to define irreversible coma as a new criterion of death” (Beecher et al. 1968, 337). What marked the end of life was irreversible brain damage, a state of the body in which the patient’s heart continued to beat but he or she had suffered “permanent loss of intellect” (337).

In the Harvard report, intellect clearly stood for a complex of psychological features, such as memory, consciousness, and self-awareness, whose possession defines both our individual personal identity and human personhood in general. Despite the appellation “brain death,” it is the permanent cessation of those functions, not a state of the brain per se, that signals the end of the human being as a living person. To the extent that these features reside in or are a product of the brain, we may indeed be seen as “being our brains.” But the historical point is that personhood was not always reduced to those psychological features and that therefore, as long as personhood was not thus redescribed, it could not be conceived of in terms of brainhood. Anthropologists who study conceptions and practices related to the beginnings and ends of life make a similar point when they notice that “producing persons is an inherently social project” and that “personhood is not an innate or natural quality but a cultural attribute” (Kaufman and Morgan 2005, 320–321).

“Personhood as brainhood” was enabled by early modern systems of thought that conceptualized self and body in such a way that the body, while experientially significant, became ontologically derivative. Being an I or having a self was equated with memory, consciousness, and self-awareness. This is the “modern” self, and it is in the framework of its early development that the cerebral subject became the anthropological figure of modernity (Vidal 2009a).

Though a notoriously malleable concept, “modernity” is generally understood to include the rise, during the seventeenth century, of a new concept of selfhood—specifically, the notion of a “detached” and autonomous self, which has self-awareness as a constitutive property and is characterized by reflexivity, self-distancing, a sense of inwardness, a first-person standpoint, and disengagement from body and world (Taylor 1989). Related to this modern self is “possessive individualism,” a conception of the individual “as essentially the proprietor of his own person or capacities, owing nothing to society for them” (Macpherson 1962, 3). The British philosopher John Locke (1632–1704) provided its founding formula when, in the Second Treatise of Government (1690, §27), he wrote that “every Man has a Property in his own Person.”

Locke too, in a revolutionary move, reconceived “person” as a continuity of memory and consciousness. It followed that personhood could in principle be attached to any material substance. However, since memory and consciousness were associated with the contents of the head, the brain became the organ of the self or, more precisely, the only organ we need in order to be ourselves as persons. Such localization of personhood was independent of empirical knowledge of the brain and predated by over three centuries the emergence of the “somatic individuality” thought to supplement our sense of ourselves as psychological individuals. In short, as a view of the human being, the cerebral subject derives neither from neuroscientific progress nor from a late twentieth-century biopolitical mutation. Both are relevant, but, as far as their import for personhood is concerned, they are best understood in a long-term historical perspective. For only that perspective shows that, contrary to what neuroscientists often assert or imply, the conviction that “we are our brains” is neither a corollary of neuroscientific advances nor an empirical fact. Rather, even if some claim it is dictated by science, it is a philosophical or metaphysical position that depends on views about what it is to be a human person.

“From Nothing Else but the Brain Come Joys, Delights, and Sorrows”

Some timelines documenting awareness of the link between brain and self go back to the Edwin Smith surgical papyrus (dated ca. 1700 BCE but datable in part to 3000–2500 BCE), which includes reports about twenty-seven cases of head injuries.6 It is more common to trace it to Hippocrates in the fifth century BCE and then to the second-century Greek physician Galen of Pergamon. Such chronologies smooth out context, and the continuity they give to mind, soul, brain, and body masks significant transformations in the meanings of these terms and in the concepts and practices associated with them.

In the Aristotelian frameworks that largely dominated Western scholarly thought from the thirteenth to the seventeenth century, the soul was a principle of life, that which animated potentially live matter. In Aristotle’s analogy (De Anima 412a–413a), if the eye was an animal, then sight would be its soul: It would actualize the material eye’s potential to see, rendering it capable of fulfilling its intrinsic function. Soul was therefore responsible for the basic functions, faculties, or powers of living beings, known as nutritive or vegetative, perceptive or sensible, appetitive or desiderative, motor or locomotive, and rational or intellective (Michael 2000). Sometimes these faculties were attributed to different souls, and their possession defined a hierarchy: Human beings had all of them, nonhuman animals lacked a rational soul, and plants had only a vegetative soul. Yet all were “animals,” or ensouled bodies. That is why the word psychology, in use by 1590, originally designated the science of all living beings (Vidal 2011).

As the Aristotelian frameworks disintegrated in the seventeenth century, the soul ceased to be responsible for organic functions; most famously in the philosophy of René Descartes (1596–1650), it became equated with the mind. Even though this was a radical transformation of the concept of soul, the interaction of soul and body remained understood through the humoral theory derived from Galen (Temkin 1973). In Galenism, four bodily humors (blood, yellow bile, black bile, and phlegm) were made up of mixtures of the four elements (air, fire, earth, water) and shared in their basic qualities (warm and humid, warm and dry, cold and dry, cold and humid). The “temperaments,” or proportions and mixtures of the humors, dictated individual temperaments in the sense of “characters” (respectively the sanguine, choleric, melancholic, and phlegmatic). Physiology thus elucidated someone’s personality and aptitudes as well as soul-body interactions in general.

According to Galen, as the blood passed through various organs, it was transformed into increasingly subtle and thin fluids, or “spirits.” It first became a “natural spirit,” responsible for nutrition and growth. After combining with air in the lungs, it passed into the heart, where a portion was transformed into the “vital spirit” on which motor and life-sustaining functions depended. The final refinement took place in the cerebral ventricles, with the formation of the “animal spirits,” thus called because they sustained the sensitive and intellectual functions of the anima or soul. The qualities of these spirits, such as their temperature, humidity, or density, depended on those of the humors and determined in turn those of the mind. If a person’s blood was too cold, the animal spirits would also be cold, and the mental acts that relied on them would lack “heat” and be correspondingly weak and slow. Thus, it was not the brain and nerves but the humors via the animal spirits that held body and soul firmly together.

There is a myriad of early modern examples of such a psychophysiological theory. In English, virtually every page of Robert Burton’s famous Anatomy of Melancholy, first published in 1621, illustrates the claim that

as the body works upon the mind by his bad humours, troubling the spirits, sending gross fumes into the brain, and so per consequens disturbing the soul, and all the faculties of it … so on the other side, the mind most effectually works upon the body, producing by his passions and perturbations miraculous alterations, as melancholy, despair, cruel diseases, and sometimes death itself.7

The animal spirits, together with the rest of the humors, determine a person’s character and capacities. The same humoral determinism is the foundation of another late Renaissance bestseller, the Spanish physician Juan Huarte de San Juan’s Examen de ingenios para las ciencias, or (as the title of the second English translation puts it) The Tryal of Wits: Discovering the Great Difference of Wits Among Men, and What Sort of Learning Suits Best with Each Genius (Huarte 1698).

The book was first published in 1575, then censored and expurgated in subsequent editions; by the end of the seventeenth century its 1594 edition had been reprinted several times in Spain and variously translated into Latin, French, Italian, and English (followed by German in the eighteenth century). Huarte (1698, 92) reproduced Galen in explaining that, for the rational soul to perform its actions adequately, the brain needs a “good configuration” and “unity of parts,” its substance should “be composed of very fine and delicate Parts,” and neither should its heat exceed its coldness nor its moisture surpass its dryness. While attention was paid to the organ’s morphology, individual dispositions were dictated by its degree of heat, moisture, and dryness, by way of correspondences between humidity and memory, dryness and the understanding, and heat and the imagination. For example, “Old Men have a good Understanding, because they are very dry; and … they have no Memory, because they have no moisture” (146). Partly through their effect on brain substance, the bodily humors and their qualities were ultimately responsible for an individual’s “wits” and psychological features. The title of Galen’s treatise Quod Animi Mores Corporis Temperatura Sequantur (That the Traits of the Soul Follow the Temperaments of the Body) transparently expresses the doctrine.

The corresponding theory of mental functioning provides another instance of the predominance of fluids. The animal spirits were believed to reside in and move among the brain ventricles (cavities filled with cerebrospinal fluid), which were therefore considered the seat of mental faculties. From the front to the back of the head, these were the “common sense” where sensory information was collected, the imagination and fantasy, the judgment and intellect, and memory (Clarke and Dewhurst 1996, Harvey 1975, Kemp 1990). The brain was primarily a factory and storehouse of the animal spirits; Galen considered it the hegemonikon precisely because of the role of the ventricles in producing them (Rocca 2003). Yet, again, the ultimate causes of a person’s character and personality were to be found in the qualities of the animal spirits and the humors.

That what we call “mental faculties” somehow depends on what is inside the skull has presumably been intuited since the first member of the genus Homo hurt his or her head. That, however, does not amount to “knowledge about the brain,” nor does it make the humoral theory a direct predecessor or (as in Arikha 2007) an approximate equivalent of modern theories of enzymes or neurotransmitters. It is not difficult to find apparent continuities. Hippocrates often appears as the father of the idea that the brain is the organ of the mind. Yet, as Stanley Finger (2000, chap. 3) notes in his history of neuroscience “pioneers,” this tends to be done by taking out of context a few lines from Hippocratic treatises. In On the Sacred Disease, written around 400 BCE, we certainly find the oft-quoted sentence, “Men ought to know that from nothing else but the brain come joys, delights, laughter and sports, and sorrows, griefs, despondency, and lamentations” and the claim that “by the same organ we become mad and delirious.” Nevertheless, Hippocrates goes on to say that we endure these things when the brain “is more hot, more cold, more moist, or more dry than natural.” And he further explains that brain disorders arise from phlegm and bile (and therefore reflect the classic temperaments), such that “those who are mad from phlegm are quiet” and “those from bile are vociferous.”8

In sum, while it is true that behavior and psychological functions have long been associated with processes taking place inside the head, the philosophical and psychomedical traditions that remained dominant in Western learned cultures until about the end of the seventeenth century defined the human being as a composite of body and soul and made humors circulating within the body, rather than particular brain structures, responsible for the features of the individual self.

A Huron’s Soul and Montesquieu’s Brain

As mentioned, the breakdown of the Aristotelian frameworks in the seventeenth century entailed the reduction of soul to mind and its consequent localization in the brain. The so-called seat of the soul was not a physical place where the soul materially resided but the organ where it interacted with the body. Descartes, in several letters as well as in his Treatise of Man (written before 1637) and The Passions of the Soul (1649), speculated that the soul exerted its functions “immediately” at or through the pineal gland. His model was hydrostatic. When the soul desires something, it makes the pineal gland move in such a way that it displaces the animal spirits to obtain the required effect. Memory, for example, was explained by the flow of animal spirits through pores in the brain substance: The flow widens the pores, and the widened pores then function as memory traces that are activated when the pineal gland pushes the animal spirits through them.

In contrast to Descartes, the English anatomist and physician Thomas Willis (1621–1675) proposed a distributed localization of the faculties. Celebrated as the founder of modern neuroanatomy and clinical neuroscience, he provided seminal descriptions of many structures, notably the vasculature at the base of the brain, known as the circle of Willis, as well as the cranial nerves; he also described morphological abnormalities in pathological cases, for example, congenital mental retardation and unilateral paralysis (Molnár 2004, Rengachary et al. 2008). Postmortem study of brain lesions provoked by a loss of blood supply as well as comparisons between the cortex of humans and other animals led him to conclude that the cerebrum was the seat of the rational soul in humans and of the sensitive soul in animals.

Nevertheless, in his Oxford lectures of the 1660s on the anatomy, physiology, and pathology of the nervous system, Willis explained functions such as the will or memory by the circulation of the animal spirits in the cerebral convolutions. In The Anatomy of the Brain and Nerves, first published in Latin in 1664, he accounted for the difference in cerebral convolutions between humans and animals by “the dispensation of the animal Spirits.” His explanations combine a basically humoral physiology with a new emphasis on the “substance” of the brain:

For as the animal Spirits, for the various acts of Imagination and Memory, ought to be moved within certain and distinct limited or bounded places, and those motions to be often iterated or repeated through the same tracts or paths: for that reason, these manifold convolutions and infoldings of the brain are required for these divers manners of ordinations of the animal Spirits.… Hence these folds or rollings about are far more and greater in man than in any other living Creature, to wit, for the various and manifold actings of the superior Faculties.… Those Gyrations or Turnings about in four-footed beasts are fewer, and in some, as in a Cat, they are found to be in a certain figure and order: wherefore this Brute thinks on, or remembers scarce any thing but what the instincts and needs of Nature suggest. In the lesser four-footed beasts, also in Fowls and Fishes, the superficies of the brain being plain and even, wants all crankling and turning about: wherefore these sort of Animals comprehend or learn by imitation fewer things, and those almost only of one kind. (Willis 1681, chap. 9, 59–60)

The smoother the brain, the simpler the mind; amount and degree of convolution correlated with an organism’s degree of “perfection.” In a more convoluted cortical surface, the animal spirits circulated more freely and were less limited to one pathway; such a surface offered more spaces for the storage of mental representations and hence for learning. To the extent that they regulated the circulation of the spirits, structures and morphology gained precedence over the humors and their qualities and acquired more causal significance. Willis (1683, 209) thus remarked that “Stupidity is excited by the mere solitary fault of the Spirits” and that “the Brain it self is found to be first in fault.” He justified his opinion by describing abnormal features of the cerebral substance.

In short, it was possible to retain a Galenic physiology and still locate the seat of the soul in structures that had more consistency and materiality than the hollow reservoirs of the humors. The redefinition of soul as mind and the turn to “solidism” stimulated empirical research and a lively localization debate that lasted until the late eighteenth century. Unlike matter, the soul was defined as “simple” and indivisible. Many therefore believed that its seat must be a discrete point or area inside the brain where the nerves converged, and provided anatomical and clinical support for various localizations: the corpus callosum, the semioval center, the walls of the ventricles. Materialists, by contrast, considered the quest for the seat of the soul “one of the chimeras of ancient and modern philosophy” (D’Alembert 1767, 273).

This quest did not lead to any reliable anatomical conclusion. But neither did it weaken the connection between the self and the brain. In fact, it strengthened it, though not thanks to any empirical evidence. Despite considerable advances in brain and nerve anatomy during the seventeenth century, the first identifiable formulation of “brainhood” did not derive from neuroscientific discoveries but from a combination of Locke’s theory of personal identity and the corpuscular theory of matter. Not neuroscience but a mutation in the concept of person opened the way for anchoring the self in the brain.

On the one hand, corpuscularianism, the theory of matter associated with the Scientific Revolution of the seventeenth century, explained natural phenomena by the size, local motion, shape, and contrivance of microscopic corpuscles of matter (Eaton 2005). Differences among physical bodies no longer originated in the essential nature of their substance but in the “mechanical affections” of its component particles. Consequently, body A at time T1 did not have to be made of the same matter as body A at time T2 in order to be the same. Material continuity thus lost its earlier significance as a constitutive element of the identity and sameness of material bodies. This, as Locke realized, applied also to persons and to the very definition of personhood (Thiel 2011).

In a radical philosophical innovation introduced in the second edition of his Essay Concerning Human Understanding (1694, book 2, chap. 27), Locke separated substance and personal identity, the “man” and the person. The identity of the man, he wrote, consists in “a participation of the same continued life, in succession vitally united to the same organized body” (§6). The person, in contrast, is “a thinking being, that has reason and reflection, and can consider itself as itself, the same thinking thing, in different times and places” (§9). Thus, if the soul of a prince, containing the consciousness of the prince’s past life, were transferred into a cobbler’s soulless body, then the being who resembles the cobbler would in fact be the prince (§15). In Locke’s view, personal identity requires the capacity to recognize one’s actions and accept responsibility for them. In turn, this capacity implies a continuity of memory and consciousness, which the philosopher identified with “the sameness of a rational being.” It follows that “as far as this consciousness can be extended backwards to any past action or thought, so far reaches the identity of that person” (§9). In other words, personal identity depends exclusively on the “same consciousness that makes a man be himself to himself,” regardless of the substances to which it might be “annexed” (§10).

We just mentioned the cobbler and the prince, an example of Locke’s strategy of psychologizing personal identity with the help of thought experiments. Another such experiment concerns the little finger: If my consciousness is located in my little finger, and this finger is cut off my hand, then, Locke claimed, “it is evident the little finger would be the person, the same person; and self then would have nothing to do with the rest of the body” (§17). In short, bodies become things we have, not things we are; in turn, personal identity becomes purely psychological and distinct from bodily identity. Such a loss of body broke with the Christian tradition, which, founded on the doctrine of the Incarnation, insisted on the essential corporality of the self. Not surprisingly, some of the earliest objections to Locke’s theory of personal identity were formulated by divines defending the notion that resurrected persons must regain the same body they had on earth if they were to be the same persons they had been while alive.

Despite the depth of the theoretical rupture, disincarnation in practice could not be complete. Locke speculated about a conscious little finger or a cobbler’s body with a prince’s soul but knew that the nerves conveyed sensory information “to their Audience in the Brain, the mind’s Presence-room” (1694, 2.3.1). Some later authors were more explicit as to the brain’s role and emphasized the union of soul and brain as requirement for personal identity. Thus, in his Analytical Essay on the Faculties of the Soul (1760, §771), the Genevan naturalist and philosopher Charles Bonnet (1720–1793) wrote, “If a Huron’s soul could have inherited Montesquieu’s brain, Montesquieu would still create.” The native North American was an Enlightenment paradigm of the savage, yet if his soul were joined to Montesquieu’s brain, then one of the era’s greatest thinkers would, for intellectual purposes at least, be still alive. It did not matter that the soul and body were those of a “primitive,” provided the brain was the philosopher’s own.

In short, the conviction that the brain is the only organ indispensable for personal identity emerged independently of, or at most only marginally connected to, empirical neuroscientific advances. Bonnet’s 1760 statement about Montesquieu and the Huron declares exactly the same thing as Puccetti’s aphorism of 1969, “Where goes a brain, there goes a person,” or Gazzaniga’s confident assertion of 2005, “you are your brain.” A good number of twentieth- and twenty-first-century (neuro)scientists and (neuro)philosophers claim that their convictions about the self are based on neuroscientific data. That may be so for them personally. Historically, however, things happened the other way around: Brainhood predated reliable neuroscientific discoveries and has all the appearance of having been a motivating factor of brain research. As it advanced, this research legitimized and reinforced the brainhood ideology.

“Man Will Increasingly Become a Brain Animal”

Looking back to the early modern period reveals that the rise of the cerebral subject was not prompted by knowledge about the brain and that the neural turn of the late twentieth century is in fundamental respects neither a novelty nor the result of scientific progress. By the time Bonnet wrote his striking aphorism in 1760, “brainomania” (Rousseau 2007) had been developing for about a century. The early modern and Enlightenment “nervous wave” (170) housed the mind in the cerebrum; it placed the brain at the core of selfhood but never apart from soul and mind. Whether in a positive or negative, apologetic or offensive, Christian or atheistic vein, addressing the soul was a constitutive element of that early neural turn. When the soul later dropped out of the picture, it was not (as Francis Crick [1994], for one, suggested in The Astonishing Hypothesis) because brain research proved it did not exist.

On the contrary, in the eighteenth century, the psychological theories that gave most room to the brain and the nerves in explaining the mind were authored by convinced Christians, such as Bonnet and David Hartley (1705–1757), who proclaimed their belief in an immortal and immaterial soul. However, they insisted on discussing not the soul’s nature or its union with the body (which they of course assumed) but only the observable results of its “commerce” or interaction with the body. This interaction, they explained, took place in the brain and through the intermediary of the nerves, and it was precisely this that gave the nervous system its paramount significance (Vidal 2011). Rather than requiring a materialist stance, localizing mental contents or functions in the brain was compatible with the definition of the human as a composite of body and soul, matter and spirit, and had over materialism the advantage of accounting (though indeed mysteriously) for the unity of mental life (Kaitaro 2004).

Brain research was of course not foreign to such an intellectual configuration. To begin with, John Locke, himself a physician, attended Thomas Willis’s Oxford lectures, and it is largely through his notes that the lectures have been preserved (Dewhurst 1980).9 The immense neuroscientific progress that has taken place since then has variously strengthened the conviction that “we are our brains.” Yet it has not crucially modified its initial form. Replace soul by the functional equivalent of your choice, and you readily update Bonnet’s fantasy of 1760, that “if a Huron’s soul could have inherited Montesquieu’s brain, Montesquieu would still create.”

As far as the social and cultural role of brain research is concerned, nothing sounds more like statements by late twentieth-century advocates of the neuro than the prophecies of their late nineteenth-century predecessors (Meloni 2011). The main difference is that the former believe their prophecies rest on firmer foundations and are closer to being fulfilled. For example, in 1907, the Swiss psychiatrist, neuroanatomist, and social reformer Auguste Forel (1848–1931) characterized neurobiology as “a science of the human in man” and as “the basis of the object of the highest human knowledge which can be reached in the future” and depicted its growth as the condition for social progress (quoted in Hagner 2001, 553). Similarly, in 1912, the German neurologist Oskar Vogt (1870–1959) announced that “man will increasingly become a brain animal [Der Mensch wird immer mehr ein Hirntier werden]” and anticipated that “in our further development, the brain will play an increasingly significant role” (553–554). It would be invidious to single out here a few quotations from the neuroscientific literature since the mid-1990s for their similarity in content to these early proclamations. On the one hand, there are endless possibilities; on the other, numerous examples are to be found throughout this book and its bibliography. The point is that new neuroscientific data, theories, and techniques have allegedly substantiated but not crucially affected an ideology that in its modern form dates from the late seventeenth century. That is why the cultural history of the cerebral subject is largely independent of the history of brain science. This is particularly obvious in its early instances: it is clear that Bonnet’s aphorism about Montesquieu’s brain did not derive from neuroscientific investigation but from a conception of personhood.

Nineteenth-century brain scientists refined anatomical description and pursued functional localization as one of their main goals. The bond of brain to self and personhood was thereby confirmed but not reframed. Phrenology is a case in point (Clarke and Jacyna 1987, Renneville 2000, van Wyhe 2002). It also illustrates how psychological theory—in this case, one that emphasizes individual differences—orients discourses and research about the brain. Based on the theories of the Viennese physician Franz Joseph Gall (1758–1828), who called it “organology” and “doctrine of the skull” (Schädellehre), phrenology assumed that the brain is the organ of the mind; that the mind is composed of innate faculties; that each faculty, from amativeness and benevolence to secretiveness and wit, has its own brain “organ” (twenty-seven in Gall’s original scheme); that the size of each organ is proportional to the strength of the corresponding faculty and that the brain is shaped by their differential growth; and, finally, that since the skull owes its form to the underlying brain, its “bumps” reveal psychological aptitudes and tendencies. Phrenology and the accompanying practices of cranioscopy and cranial palpation remained hugely popular into the 1840s, and phrenological publications appeared steadily until after World War I.

Gall (1835, 1:55) noted that "as the organs and their localities can be determined by observation only, it is also necessary that the form of the head or cranium should represent, in most cases, the form of the brain, and should suggest various means to ascertain the fundamental qualities and faculties, and seat of their organs." The deductive form of his claim points to the lack of empirical connection between organology and brain research. Yet Gall, together with his disciple Johann-Caspar Spurzheim (1776–1832), carried out significant neuroanatomical investigations, innovated in dissection methods, contributed to demonstrations that the nerves stem from gray matter, and described the origins of several cranial nerves (Rawlings and Rossitch 1994, Simpson 2005). All of this, however, had no empirical connection to their phrenological localizations. After Gall and Spurzheim presented their neuroanatomical researches at the Institut de France in 1808, a committee discussed them in a report. Though notoriously ungenerous to the authors, its members, including such celebrities as the alienist Philippe Pinel and the naturalist Georges Cuvier, were right to observe that, even if the connection between the brain and psychological functions was undeniable, neuroanatomy had so far not contributed to elucidating it.

Spurzheim and Gall emphasized that physiology must be grounded on anatomy and that anatomy should lead to physiology. Several times, however, they declared that function is not directly observable or deducible from structure and that knowledge of the former precedes that of the latter—in the same way, they said, that we know the function of the eye before understanding its structure or learning anything about the optic nerve. “It is also without the assistance of anatomical dissection,” they wrote, “that we made most of our physiological discoveries; and those discoveries could persist for centuries before their concordance with the material organization of the brain is known” (Gall and Spurzheim 1809, 246). They admitted that their anatomical findings were inspired by their “physiological and pathological views,” including the fundamental assumption that moral and intellectual qualities are innate, and they added,

it is precisely the perfect concordance of mental phenomena with the material conditions of their existence that will guarantee for ever the duration of our anatomical and physiological doctrine.… It is one thing to say that the discovery of the brain’s functions was made independently from knowledge of its structure, another to claim that those functions do not have an immediate and necessary connection with its structure. (249–250)

As far as the distributed localization of mental faculties, inclinations, and personality features is concerned, their main general conclusion was that, since most brain structures are double, and since nerves neither originate in nor lead to the same point, “there is not, and there cannot be, a common center of all sensations, all thoughts and all desires.” It followed, in their view, that “the unity of the self will forever remain a mystery” (168).

Discussions around that "mystery"—its interpretation, mechanisms, and relationship to phenomenal consciousness—have not subsided (see, e.g., Metzinger 2009 or, for a larger audience, Ananthaswamy 2015). Beyond apparent mysteries, the persistent question is whether the gap between psychological and neuroscientific analyses and explanations is inherent to the problem at hand or a temporary state of science that can be superseded. As of 2015, the best theories about the brain and about some aspect of mind "do not seem to share any properties" (Phillips et al. 2015, 367), and while it is clear that psychological theories largely contribute to inflecting neuroscientific investigation, the extent to which brain research techniques such as neuroimaging can inform psychology remains debated (e.g., Coltheart 2013, Moran and Zaki 2013, Uttal 2015).10

Localization

Nineteenth-century experimental psychophysiology and pathological anatomy fueled the localization project and at the same time contributed to the demise of the phrenological enterprise. While phrenology correlated behavior or dispositions with cranial shape, the anatomo-clinical method searched for correlations between symptoms and brain lesions and was common to the partisans of discrete loci of mental faculties and those who insisted on the unity of intelligence and the integrated nature of brain action. The case of “Tan,” an aphasic patient studied in the late 1850s by the French anatomist and physical anthropologist Paul Pierre Broca (1824–1880), is paradigmatic of the anatomo-pathological method and of mid-nineteenth-century localization debates.

“Tan, tan,” accompanied by hand gestures, was Monsieur Leborgne’s response to all and any questions. His clinical history and the postmortem study of his brain led Broca to conclude that the faculty of articulate language was possibly located in the second or third frontal convolution. It was clear to him that the higher “brain faculties,” such as judgment, reflection, comparison, and abstraction, had their seat in the frontal lobes, whereas feelings, inclinations, and passions depended on the temporal, parietal, and occipital lobes. Broca (1861, 338) recognized “that the major areas of the mind correspond to major areas of the brain.” At the same time, he found that differences in the localization of lesions inducing loss of articulate language were incompatible with the phrenological système des bosses yet consistent with the “system of localizations by convolutions.”

Moreover, Broca’s demonstration of the unilateral localization of language (in the left hemisphere) opened the way to the formulation of promising new dichotomies (Harrington 1987, 1991). The right brain ended up associated with “animality,” femininity, and the emotions, the left with humanness, masculinity, and the “rational” faculties of will, intelligence, consciousness, and understanding. As we shall see below, hemispheric lateralization and dominance were to be assimilated into the discourses of “neuroascesis.” They inspired a vast personal development and self-help literature for cultivating the supposedly neglected right brain and even neuropolitical considerations about the catastrophic future of a society tyrannized by left-hemisphere values (Harrington and Oepen 1989).

For nineteenth-century British and German brain scientists, the method of correlating clinical and pathological phenomena was suspiciously reminiscent of the craniological approach (Young 1990). Few, however, would have denied that the extraordinary positive or negative qualities of geniuses, criminals, and the mentally ill were somehow inscribed in their brain’s fleshy substance. This brand of localizationism, with its galleries of exceptional individuals and its collections of preserved brains, matched the nineteenth-century development of anthropometry and the related elaboration of physiognomic, cranial, and bodily typologies; closely connected to craniometry, the measurement of differences in brain weight and size dates back to the early days of physical and racial anthropology and was a truly international fad (Hagner 2004, Podgorny 2005, Rafter 2008). By the late nineteenth century, cerebral localization, functional differentiation, and the correlation of site and effect, or structure and function, had become investigative principles.

Starting in the 1950s, cybernetics provided abstract models of brain neurophysiology; a decade later, artificial intelligence and cognitive science fostered the brain-as-computer paradigm (Pickering 2011). While circuit diagrams and flowcharts became tools for thinking about brain structure and function, the quest for localizationist explanation did not lose its appeal, even if it regained momentum only later. The saga of Albert Einstein's brain is extreme but emblematic. After the physicist's death in 1955, the pathologist Thomas Harvey cut his brain into 240 cube-shaped blocks from which microscopic slides were prepared; like relics of a medieval saint, some of these pieces and slides were sent over the years to devotees around the world. By the time of Einstein's death, the relic status of "elite brains" was nothing new. Investigations into the gross anatomy of geniuses' brains were under way by the mid–nineteenth century, and after Lenin's death in 1924, Oskar Vogt sliced his brain more finely than Harvey would slice Einstein's.

Three decades after Einstein's death, a contested but well-publicized histological analysis claimed that the left inferior parietal area of Einstein's brain contained more glial cells per neuron than the average (Diamond et al. 1985). A 1996 article described Einstein's cortex as thinner and more densely populated with neurons than control brains; a few years on, an equally disputed study stated that at the posterior end of the Sylvian fissure, Einstein's brain was 15 percent wider than that of controls (the parietal lobes were singled out for study because neuroimaging techniques had allegedly confirmed that these areas are responsible for mathematical reasoning as well as for visual and three-dimensional representation; Witelson et al. 1999). In the meantime (1994), the BBC produced Kevin Hull's hilarious documentary Einstein's Brain, about the Japanese Einstein worshipper Kenji Sugimoto's quest for a piece of the genius's brain.11

The saga continues: Newly discovered photographs of Einstein's entire brain prompted a revival of interest and led to detailed descriptions of the physicist's "extraordinary prefrontal cortex" (Falk, Lepore, and Noe 2012). On the same basis, a highly technical study of his corpus callosum found it to be thicker and to display enhanced connectivity, so that, it concluded, "Einstein's intellectual gifts were not only related to specializations of cortical folding and cytoarchitecture in certain brain regions, but also involved coordinated communication between the cerebral hemispheres" (Men et al. 2013, e7). These findings have been widely covered in the media and dozens of online sites; the Los Angeles Times celebrated the "wonder of connectedness" (M. Healy 2013), and New Scientist announced that a "new look at Einstein's brain pictures show his genius" (Carver 2012). No recent case illustrates more eloquently the persistence of hopes for reading mind from brain than these technologically updated revivals of the nineteenth- and early twentieth-century morphological approaches.

From nineteenth-century phrenologists palpating head bumps, through electroencephalography starting in the 1930s, and up to today’s brain scans, the hope of being able to read the mind and the self through brain recordings has not subsided (Borck 2005, Uttal 2003); the late twentieth-century comeback of the cerebral localization of mental aptitudes and inclinations “is due to a cohabitation of new visualization techniques with old psychological parameters” (Hagner and Borck 2001, 508). At the same time, these techniques confirm the anatomical, functional, and developmental evidence that the brain is neither a mosaic of minute sites nor a hard-wired collection of neuronal circuits but an array of interconnected and parallel networks, highly plastic and capable of developing and repairing itself.

Cognitive functions, in particular, turn out to be dispersed in various cortical areas, and the networks that represent them seem highly mobile, both functionally and anatomically. This does not invalidate complex forms of the localizationist approach (Zawidski and Bechtel 2005), which emphasize circuits and their “plasticity.” Since the 1990s, studies of how diverse activities, from taxi driving to meditating, correlate with anatomical changes in the brain as well as discoveries about the brain’s capacity for recovery, repair, and self-reprogramming after injury or amputation have turned neuroplasticity into a powerful motivator in rehabilitation and geriatric medicine and stimulated research on learning and cognition, aging and development, brain injury, addiction, and such brain-related disorders as Alzheimer’s, Parkinson’s, autism, and depression (e.g., Doidge 2015; Merzenich, Nahum, and van Vleet 2013; Merzenich, van Vleet, and Nahum 2014; Schwartz and Begley 2002; for discussions, see Choudhury and McKinney 2013; Droz 2011; Pickersgill, Martin, and Cunningham-Burley 2015; Rees 2010; Rose and Abi-Rached 2013).

Neuroplasticity has become a central neurocultural keyword not only inside but also outside the neurosciences. We shall see that it plays a role in "neuroarthistory"; in philosophy, it is one of the best allies of "neuropragmatism" (Solymosi and Shook 2014). In works for general audiences, such as Brave New Brain: Conquering Mental Illness in the Era of the Genome (2004) and The Creating Brain: The Neuroscience of Genius (2006), both by the neuroscientist Nancy Andreasen, neuroplasticity appears as the basis for creativity and therapy. According to the Canadian psychiatrist Norman Doidge (2007, xv) in his bestseller The Brain That Changes Itself, neuroplasticity is "one of the most extraordinary discoveries of the twentieth century." As "proof" that the mind indeed alters the brain, neuroplasticity substantiates convictions about the mind's power to bring about illness or cure (on whose history see Harrington 2008), which the same Doidge (2015) now markets as "neuroplastic healing."

In 2003, with wonderful irony, the conceptual artist Jonathon Keats copyrighted his brain as a sculpture created thought by thought (Singel 2003); the following year, a professional philosopher claimed, “Humans make their own brain, but they do not know that they make it” (Malabou 2008, 1) and repeatedly linked neuroplasticity to our “sculpting” our brains. As we show below, the “neurobics” industry, with its slogan “Change your brain, change your life,” has effectively incorporated the idea into its strategies for marketing brain fitness. The point here is not to scorn scientific accomplishments or deride therapeutic hopes but rather to highlight how the ideology of brainhood feeds on the most diverse pieces of evidence and the most varied beliefs.

In short, the claim that "the success of the scientific method partially replaced older notions of the soul or mind-body dualism with the doctrine that mind … is the brain's exclusive output" (Lepore 2001) is as commonplace as it is false. The substitution in question is rooted in developments that have nothing to do with brain science (though brain science subsequently reinforced it), and the absorptive capacity of the brainhood ideology derives precisely from its not being the result of neuroscientific progress. Indeed, as Cathy Gere (2011, 236–237) has noted, the cerebral subject

is not a historically contingent outcome of research into localization of brain function: it is the aim and object of the whole enterprise. Over the course of its one hundred and fifty year history, localization theory has consistently posited the cerebral subject as an a priori commitment: the question is not so much “can psychological phenomena be translated into the language of brain function?” but rather “where can we locate those functions that define human personhood in our neural topography?”

Neuroascesis: Health for the Cerebral Subject

Once these functions are localized—whether on solid or weak bases, in discrete spots or distributed across complex circuits—practical consequences rapidly follow. Genuine or spurious, knowledge about the brain has not only prompted further empirical, theoretical, and applied research but also given a new lease on life or new directions to more or less doubtful businesses. One such business is automated lie detection, which since the early 1900s has evolved from polygraphs measuring blood pressure, pulse, respiration, and skin conductivity to twenty-first-century “neurotechnologies of truth” such as brain fingerprinting and “No Lie MRI” (Pugliese 2010, chap. 5). Although calling these latter-day brain technologies respectively “neurognomics” and “digital phrenology” underlines the persistence of the belief that truth can be automatically read from outward bodily signs, the analogies to earlier techniques are no more than suggestive. Brain-based approaches embody an old goal but are in themselves a recent development—so much so, that in the early 1990s, the brain had not yet made it into the history of lie detection (Hanson 1992, Littlefield 2011).

At the level of practices that ask persons to treat themselves as cerebral subjects, the self-help advice industry provides a much stronger instance of continuity, accompanied by renewal via neuro discourses. Manufacturers of self-help products have been appealing to the brain for a long time, but two periods stand out: the second half of the nineteenth century and the decades since 1990. The 1960s, it is true, also witnessed the emergence of prescriptions upgrading the "mind-power" strain of self-help by way of rhetoric drawn from the cybernetic brain-as-computer model (McGee 2005, chap. 2). Yet it is mainly in the earlier and later periods that the brain itself was placed center stage. That is why we can speak of "neuroascesis." Insofar as ascesis refers to self-discipline, to the regulation of one's life for the sake of improvement, neuroascesis may designate the practices of the self aimed at the brain or pursued by way of behaviors purported to affect the brain directly. In neuroascesis, we benefit ourselves by acting on our brains. Obviously, everything we do has to do with them. But we are here talking about regimens and prescriptions that, even before the appearance in the 1990s of terms such as "neurobics" or "brain fitness," were advertised as having been specifically designed to enhance brain function.

Exercises for the Double Brain

A number of nineteenth-century authors considered that some mental pathologies were to be explained by the independent and disharmonic functioning of both “brains.” Before Broca’s discovery in the 1860s of the left-hemisphere location of language ability, it was indeed believed that the hemispheres were functionally identical and worked harmoniously together. The notion of a double brain without lateralization of function inspired explanations of mental illness and neuroascetic proposals for attaining brain health.

The Brighton clinician Arthur Wigan (1785–1847) provides a prominent example. His A New View of Insanity: The Duality of the Mind Proved by the Structure, Functions, and Diseases of the Brain and by the Phenomena of Mental Derangement, and Shewn to Be Essential to Moral Responsibility (1844) illustrates the idea, not uncommon in the British medical context at the time, that madness was attributable to the uncoordinated, asymmetrical functioning of the two “brains” (Clarke 1987). Wigan saw each hemisphere as a distinct organ, complete in itself, and therefore capable of exerting independent volitions. While the organism remained healthy, one of the brains exerted control over the other; in pathological conditions, each brain followed its own way and could oppose the other. Curing mental illness required “presenting motives of encouragement to the sound brain to exercise and strengthen its control over the unsound brain” (Wigan 1844, 22). Brainpower, according to Wigan, could be indefinitely potentiated through “exercise and moral cultivation.” By means of a “well-managed education,” it was possible to “establish and confirm the power of concentrating the energies of both brains on the same subject at the same time; that is, to make both cerebra carry on the same train of thought together” (22, 23).

The Duality of the Mind proposed a system of cerebral ascesis that emphasized the importance of exercising and cultivating the brain for augmenting its power. The tasks and abilities involved, requiring exercise, self-control, and dedication, were moral as much as pedagogical. The brain must be constantly attentive, always watchful, and one of the hemispheres should permanently fulfill the role of “sentinel” (52, 298); “self-indulgence,” “excess,” or a “neglected education” would make such cerebral pedagogy fail (207–208). Training and perfecting the brain were according to Wigan the “great duty of man” (295). Programs of cerebral self-improvement should be incorporated into the treatment of the mentally ill as well as into the legal and educational systems. In the latter, for instance, arithmetical calculations could contribute to the “education of the cerebral fibres”; such training would bring about a “real physical change” in the exercised brain parts and produce “alterations in the external form of the skull” (343–344).

On the Continent, a major figure of double-brain neuroascesis was Charles-Édouard Brown-Séquard (1817–1894), Claude Bernard’s successor at the Collège de France. While Wigan flourished before Broca’s discovery of cerebral asymmetry, Brown-Séquard wrote at a time when language ability had already been located in the left hemisphere (Aminoff 1993, Clarke 1987, Harrington 1987). That, however, did not prevent Brown-Séquard from becoming Wigan’s main advocate in the second half of the nineteenth century. He was especially interested in the possible application of Wigan’s theory for “educating” the cerebral hemispheres (Brown-Séquard 1874a, 1874b, 1890). He recognized hemispheric functional differences, but instead of considering them as innate and structural, he believed they were attributable to educational failures. “We find,” Brown-Séquard (1874b, 10) declared, “that it is owing to that defect in our education that one-half of our brain is developed for certain things, while the other half of the brain is developed for other things.”

So the issue was clear-cut: “If we have two brains, why not educate both of them?” (1). Indeed, “if children were thus trained, we would have a sturdier race, both mentally and physically” (Brown-Séquard 1874a, 333). Training the brain would not only improve its efficacy but also increase its size, since “every organ which is put into use for a certain function becomes developed” (Brown-Séquard 1874b, 15–16). The exercises proposed, primarily motor, were meant to affect each hemisphere by means of activities of the contralateral side of the body:

Try to make every child, as early as possible, exercise the two sides of the body equally—to make use of them alternately. One day or one week it would be one arm which would be employed for certain things, such as writing, cutting meat, or putting a fork or spoon into the mouth or in any of the other various duties in which both hands and the feet are employed. (Brown-Séquard 1874b, 20)

Brown-Séquard’s neuroeducational program, like some contemporary counterparts, anticipated the “ambidexterity movement” that would become popular in the early twentieth century.

In his 1900 New Methods in Education, James Liberty Tadd (1854–1917), the headmaster of the Philadelphia Public School of Industrial Art, proposed an ambidextrous training regimen that likewise valued hemispheric symmetry. He explained:

If I work with the right hand I use the left side of the brain. In truth, I exercise some special region or center of the brain and in every conscious movement I make and in every change of movement I bring into play some other center. If, by performing any such action with energy and precision, I aid in the development of the accordant center, I am improving the cerebral organism, building for myself a better and more symmetrical mental fabric. (Tadd 1900, 48)

Such a view of brain structure and function grounded an entire neuroascetic and neuroeducational perspective.

In 1903, John Jackson, a grammar school teacher in Belfast, founded the British Ambidextral Culture Society, whose goals he defined in Ambidexterity, or, Two-Handedness and Two-Brainedness: An Argument for Natural Development and Rational Education (1905). Here, Jackson blended Wigan, Brown-Séquard, and Tadd to elaborate a neuroeducational system that would improve the functioning of both hemispheres (Harrington 1987; Harris 1980, 1985). Future generations, he stated, quoting a member of the society, “must utilize to the utmost every cubical line of brain substance, and this can only be done by a system of education which enforces an equal pre-eminence to both sides of the brain in all intellectual operations” (Jackson 1905, 103–104). The implication was that we do not use all of our brains and that individual and social progress depends, at least in part, on no longer wasting our precious cerebral substance.

The mechanism was straightforward: while you exercise both hands, "the motor cells of the controlling side of the brain [will] be stimulated, strengthened, and developed" (84). As a result, brainpower will be doubled, and the brain will be able to perform independent activities simultaneously. "If required, one hand shall be writing an original letter and the other shall be playing the piano; one hand shall be engaged in writing phonography, and the other into making a pen-and-ink sketch" (225). Jackson even imagined that training both cerebral hemispheres would not only increase brainpower but also lead to the growth of new language centers in the right hemisphere, thus preventing aphasias and hemiplegias. In the following decade, several authors in the United Kingdom and France claimed to supply evidence in favor of ambidexterity as a treatment for aphasia and several kinds of brain damage. By the 1920s, the ambidextral perspective as a source of neuroeducational goals and practices had been marginalized (Harris 1980, 1985), but there was no dearth of neuro beliefs to nurture the neuroascetic imagination. Both a general notion that the brain is plastic and the myth of the underutilized brain proved remarkably widespread and durable (Boyd 2008).

Phrenological Discipline

Phrenological self-help emerged in parallel to the double-brain approach. Phrenologists speculated that brain "organs" functioned like a muscular system, so that the action of disturbed organs could be compensated for by the countervailing exercise of the healthy ones. They therefore came up with a neuroeducational program based on training, redirecting, and strengthening specific brain organs. Contrary to what the New York Times suggested in 2006, "brain calisthenics" was nothing new (Belluck 2006). For phrenologists as for latter-day promoters of "brain fitness," mental health consisted of exercising all organs daily; both inactivity and excessive exercise were considered unhealthy. The difference between the mid–nineteenth century and the early twenty-first is that phrenologists asked schools to encourage sobriety, moderation, chastity, and personal amelioration. With the help of phrenological self-discipline, individuals could cultivate and enhance virtues favored by Victorian society while strengthening their capacity to inhibit vices and pernicious inclinations.

Phrenology's social and individual moral significance derived from its perfect fit with the Victorian ideals of self-knowledge, self-control, and self-improvement (Cooter 1984, De Giustino 1975). It provided guidelines for how to lead one's life and offered a panacea for mental and physical ills. The brain emerged as the key to manifold questions, from personal talents and exercise to whom one should trust and whom not, how to raise one's children, how to go about sexual education, and how to choose one's spouse or even suitable servants. Every aspect of an individual's social and personal life could be phrenologically approached (Stern 1971).

Phrenology had a considerable impact on educational reform, particularly through the action of George Combe (1788–1858) and his brother Andrew (1797–1847). George was largely responsible for the transformation of phrenology into a scientifically respectable vehicle for ideas on social life and its organization (Cooter 1984, Van Wyhe 2004). For him, the cerebral organs had to be treated like muscles. The best way to increase their strength and energy was to train them regularly but judiciously, “according to the laws of their constitution”; as a result, “when the cerebral organs are agreeably affected, a benign and vivifying nervous influence pervades the frame, and all the functions of the body are performed with increased pleasure and success” (Combe 1828, 115, 117–118).

Exercise would also enlarge cerebral organs. Andrew Combe (1836–1837, 7) claimed that “even in mature age the size of the individual organs of the brain may be increased by adequate exercise of the corresponding faculties.” James Deville (1841), a well-known practical phrenologist, offered many examples of increases of up to half an inch in the size and the diameter of particular cerebral organs as a result of training. Phrenology therefore looked like an efficient philosophy of education, one based on the idea that the organs of the brain need as much training as those of the body and can be affected in targeted ways by physical exercise. This is the very premise of twenty-first-century “brain gyms,” whose pseudoneuroscientific bases have been debunked without apparent effect on their commercial success.12

Phrenology was credited with the power of contributing to general good health, and an avalanche of phreno-physiological literature sustained belief in such a power. This literature was at the same time moral. For example, by showing the noxious effects of alcohol and sexual depravity on the brain, it encouraged temperance and sexual moderation as rational prescriptions for a healthy life; the natural laws of health converged entirely with social norms, and the achievement of good health depended on following the organic laws that governed both body and brain (Cooter 1984, Van Wyhe 2004). These views inaugurated a recurrent neuroascetical motif and introduced a number of prescriptions that have close late twentieth-century analogs. Today's neuroascesis, like its phrenological ancestor, claims to pursue cerebral improvement. The practices it recommends include dieting, physical exercises, and a healthy life in the broadest sense—in short, they target the body as a whole and not the brain alone. Yet their proponents insist that it is the brain that undergoes training and is thereby enhanced. This constitutes a further point of similarity with the earlier proposals.

A major contribution toward the popularization of phrenological neuroascesis was made by Sylvester Graham (1794–1851), one of the founders of the natural food movement in the United States, who set city life and industrialization in opposition to the virtues of traditional agricultural (and vegetarian) life (Nissenbaum 1980, Sokolow 1983). In Graham's view, the improvement of individual health enhanced moral capacities and vice versa. The self-discipline and self-control required to lead a healthy life were seen as acts of moral excellence (Gusfield 1992); the moral and medical spheres went hand in hand, and both conveyed traces of more ancient wisdoms. For instance, the most popular American phrenologists, Lorenzo and Orson Fowler, took up Graham's conviction that phrenology opened the way to health reform (Fuller 1989, Stern 1971). In their teaching they phrenologically reformulated the old belief that a carnivorous diet fosters a carnivorous temperament, turning it into the idea that meat's stimulating power circulates through the nerves, inflames the lower regions of the brain, and strengthens the organs of "Combativeness" and "Destructiveness" (cited in Whorton 1982, 125).

One of Graham’s main followers was the physician and Seventh-Day Adventist John Harvey Kellogg (1853–1943), a prolific writer and inventor of corn flakes, who continued the Grahamites’ crusade for natural food and sexual purity (Carson 1957). In the chapter on “How to Keep the Brain and the Nerves Healthy” of his First Book in Physiology and Hygiene, Kellogg outlined a neuroascetic program, again aimed at training the brain as if it were a muscle. “We should exercise the Brain,” he wrote, and he explained:

What do we do when we want to strengthen our muscles? We make them work hard every day, do we not? The exercise makes them grow large and strong. It is just the same with our brains. If we study hard and learn our lessons well, then our brains grow strong and study becomes easy. But if we only half study and do not learn our lessons perfectly, then the study does not do our brains very much good. (Kellogg 1887, 203)

Brain gymnastics were to be supplemented by physical exercise, a balanced diet, and a sufficient amount of sleep; toxins, alcohol, and drugs were of course to be avoided. Children should not "eat freely of meat," which "excites the brain" and irritates the nerves, and they should avoid spicy food, which tends to "injure brain and nerves" (204). Psychological and moral habits also had to be disciplined. Becoming angry does the "brain and nerves great harm," and every child must refrain from swearing or using slang phrases, for "the brain after a while will make him swear or use bad words before he thinks" (205). These various facets of Kellogg's cerebral self-help program reappear literally in many of the neuroascetic manuals of the late twentieth century.

Cerebral Self-Help

The phreno-physiological wave was of paramount importance for the emergence of the self-help movement in the nineteenth century. The Combe brothers' emphasis on personal responsibility, both physical and cerebral, and on the role of education and self-control announced the movement's fundamental values (Van Wyhe 2004). Essential topics, such as rationalism, natural laws, education, health, hygiene, self-knowledge, and self-development, all contributed to the very concept of "self-help," as can be found, for example, in the 1859 bestseller of that title by the Scottish social reformer Samuel Smiles (1812–1904). As already mentioned, the emphasis on developing one's mental faculties through exercise belongs to the basic credo of phrenology as much as to Victorian morals.

By the end of the nineteenth century, the ethics of individual self-help and self-improvement had become more important to phrenologists than the dimension of social reform that had characterized the movement in earlier decades (Cooter 1984). Phrenology accompanied the growth of "self-healing" and other forms of alternative, heterodox popular medicine, and it sometimes combined forces with spiritualism and various forms of occultism. Especially in the United States, phrenology merged with interest in the paranormal. The "psychologization of esotericism" prepared some of the ground for the New Age movements of a century later (Hanegraaf 1998). Around 1890, the New Thought or Mind Cure crusade, which borrowed from Samuel Smiles's self-help outlook, generated dozens of books mingling metaphysical spirituality with self-help training programs (Braden 1963; Fuller 1982, 1989, 2001).

The quest for health and spiritual integration embodied in the "Mind-cure movement," as William James called it in The Varieties of Religious Experience, readily incorporated elements of neuroascesis. A major instance is to be found in the works of Warren Felt Evans (1817–1889), an American Methodist minister turned Swedenborgian. The basic idea of his doctrine was that illness originates in the mind because of false beliefs and can be overcome by way of openness to God. He developed it in books with such titles as Mental Cure (1869), Mental Medicine (1871), The Divine Law of Cure (1881), The Primitive Mind Cure (1885), and Esoteric Christianity and Mental Therapeutics (1886). Evans believed it possible to tap the healing resources of divine energy by getting in contact with the unconscious mind, whose healing power, he thought, corresponded to the kerygma, the preaching of the early Christian church. The principles of mind cure combined an idealistic tradition that referred back to the Hindu Vedas, according to which the only reality is thought itself; a Swedenborgian transcendentalist mysticism; elements of pantheism; and occult, gnostic-like interpretations of Christianity (Fuller 1989, 2001; Teahan 1979). Following in the footsteps of earlier mental healing and self-help systems, Evans made individuals responsible for their own physical and mental condition. For him, the only reason why external circumstances seem to exert an influence upon us is that we believe they do. Thought, he claimed, can change and shape any situation in the real world. And thought depends on the brain.

Evans picked up several gimmicks from phrenology and phrenomagnetism. For example, he claimed that touching the skull could increase the action of the underlying cerebral organ: “Touch the organ that you wish to excite, or any part of the brain whose activity you may desire to augment,” he wrote, “and silently will or suggest that they feel happy, or calm, or strong, or hopeful, as the case may require, and it will have its effect in inspiring the proper mental state” (Evans 1874, 74). His recommendation for those prone to despondency and despair was the following: “Let us fix the attention upon the part of the cerebrum which is the organ of hope and, if need be, place your finger upon it and a joyful sunshine will light your darkness” (75). The old healing touch magic combines here with the religious laying on of hands as the means to reach the patient’s innermost being. Over one hundred years later, one of the basic brain gym exercises still consists of laying one’s fingertips on the “positive points” above each eye, halfway between the hairline and the eyebrows, in order to “bring blood flow from the hypothalamus to the frontal lobes, where rational thought occurs” (Dennison, Dennison, and Teplitz 1994, 32). In the French-speaking world, the Coué method provides another instance of neuro varnish: According to the websites of some French coaching agencies, the self-improvement method by way of conscious autosuggestion promoted by the French pharmacist Emile Coué (1857–1926) is in fact a programmation positive du cerveau, a positive programming of the brain.

Contemporary Neurobics

Brain gymnastics is one of the many nineteenth-century neuroascetical ingredients that reappear in updated garb in contemporary cerebral self-help. But the continuity at the level of practices must not mask the difference in contexts. Sociologically, neuroascesis always involves the development of "objective selves," a process of "objective self-fashioning" (Dumit 2004) whereby individuals and categories of people are transformed through the assimilation and application of expert knowledge. However, today that process is bolstered by factors that at the time of phrenology or the New Thought movements were absent, weaker, or qualitatively different—among others, the role of the media, brain imaging techniques, the pursuit of a "strong" neuroscientific program, and an extremely assertive global pharmacological industry (Ehrenberg 2004, Healy 2002, Rose 2003). These factors have sustained the emergence of contexts where seeing oneself as a cerebral subject functions as a biosocial criterion of personal identity (see Chapter 3).

We have already noted the neurocultural significance of brain plasticity, which "neurobusiness" has been using for its own benefit (Wolbring 2007). The vague claims about the effect of mental and physical activity on the brain asserted in nineteenth-century regimens have been replaced by more precise information. For example, research has found that aerobic activities are beneficial beyond their well-known effects on the cardiovascular system and in cases of depression. The brains of rats that exercise have over twice as many new neurons and show more interconnections than the brains of sedentary rats (Brownlee 2006a, 2006b; Cotman and Berchtold 2002). It allegedly follows that physical exercise helps healthy brains function at an optimum level and may increase their performance and plasticity. Exercise, it has been claimed, may also delay the progression of Alzheimer's disease and the onset of Parkinson's. Similar effects have been observed in connection with diets low in saturated fatty acids and rich in omega-3 fatty acids. The basic idea is simple: As Carol E. Greenwood, a University of Toronto specialist in nutrition and aging, puts it, "by taking care of your body, your brain also benefits" (Brownlee 2006b). This statement illustrates a recurrent trope in the history of neuroascesis, namely the ontological subordination of the body to the brain, as if brain and body were actually separable. Exercise trains the body as a whole, but its real target is the brain; hence, for example, the redescription of healthy food as "powerful brain medicine" (see the discussion in D. Johnson 2008).

The Posit Science Corporation is a good example of how neuroascetic firms take cerebral plasticity as a point of departure. Posit explains that its goal is “to help people flourish throughout their lives,” and it adds: “We do this by providing effective, non-invasive tools that engage the brain’s natural plasticity into improving brain health.”13 Its “brain fitness program” focuses on increasing the speed, precision, and intensity with which the brain receives, registers, and remembers information. This program is a first step that can be followed by a more complete “brain gym” to train the totality of the motor and cognitive systems.

Posit's advertising resembles that of cosmetic products: neuroascesis promises to "rejuvenate" the brain's "natural" plasticity and postpone mental decline for as much as ten years. Not coincidentally, the elderly make up Posit Science Corporation's main target audience. The company does not advertise a fountain of youth but claims to supply "a part of the solution." More astutely, it predicts an increased "brainspan" or "cerebral longevity," something particularly valuable at a time of aging populations and rising life expectancy (Anonymous 2006). Like bodily fitness, cerebral fitness involves a moral dimension: Exercises are said to demand a great deal of discipline, willpower, and self-motivation, all of which are deemed indispensable for neuroascesis to "reverse the brain's aging process" (Olney 2006). Neuroplasticity research thus legitimates a market for brain gymnastics and cerebral self-help; some of it even comes directly out of Posit Science Corporation's own "Brain Plasticity Institute" (Merzenich, Nahum, and van Vleet 2013).

The neuroascesis market offers a vast range of products. Some are books by neuroscientists, cognitive psychologists, and well-known psychiatrists who explain recent neuroscientific advances while offering programs to enhance brainpower, prevent mental decay, and improve perception, short- and long-term memory, and logical, verbal, visual, and spatial abilities (Chafetz 1992, Goldberg 2001, Mark and Mark 1991, Winter and Winter 1987). "Brain training" programs thus sustain a multimillion-dollar industry whose efficacy remains unproven. Results from a six-week online study involving 11,430 participants who trained several times each week on specific cognitive tasks aimed at improving reasoning, memory, planning, visuospatial skills, and attention provided "no evidence to support the widely held belief that the regular use of computerised brain trainers improves general cognitive functioning in healthy participants beyond those tasks that are actually being trained" (Owen et al. 2010, 777).

Always on seemingly neuroscientific bases, other products, by psychological self-help authors converted to neuroascesis, lead their buyers to expect more: to identify hidden meanings in people's conversation, absorb facts "like a sponge" and reproduce them intact years later, read and understand any book in half an hour, or easily memorize facts, images, and even complete works. Among the authors of these products, those closest to a New Age imaginary also employ an apparently scientific vocabulary but promise to reach any possibly desired result. After all, some claim, on the basis of a crude oversimplification of quantum mechanics, that since reality is no more than an illusion created by our brains, "the universe is the mind and the mind is the universe" (Spotts and Atkins 1999, 80). The exercises they propose presume to allow the individual brain to connect to the forces of the universe and a superior intelligence, a Cosmic or Divine Mind. It is at once instructive, amusing, and alarming to see the extent to which this quack neuroascetic literature reproduces with an updated scientific-sounding vocabulary the main topics of older self-help literature.

Commonplaces in this framework are an emphasis on creativity as a means to engender reality, the idea of an "internal self" that can be cultivated by means of cerebral exercises, and the insistence upon autonomy, responsibility, and self-control not only of one's personal destiny but even of reality itself, all to be attained by means of brain practices. In cerebral self-help literature, the absolute irreducibility of the individual goes hand in hand with a belief in the reducibility of reality to the designs of thought. Finally, the essentially cerebral nature of the self renders other people as well as the social and cultural environment obsolete. The brain takes over, so that the old slogan "You are what your mind is" is replaced by the basic assumption of the neurocultural universe: "You are your brain."

As we have seen, the notion of a divided mind, embodied in a divided brain in conflict with itself, goes back to Wigan and others in the nineteenth century. After Broca, the left hemisphere came to be considered superior because it was seen as responsible for the intellectual, civilized activities predominant in white European males, while the right one was thought to dominate in women, criminals, Indians, blacks, madmen, and homosexuals (Harrington 1987). Cerebral self-help bestsellers reproduced and exploited the right-brain boom that emerged during the 1960s in the context of counterculture movements, but they also referred to the split-brain research that was emerging at the time, which could itself make room for Wigan (the neurophysiologist Joseph Bogen [1971, 1985] reprinted The Duality of the Mind and described his own position as "neowiganism"). The self-help market is full of titles relating the right hemisphere to the most varied phenomena, from the classification of artists, musicians, politicians, and dictators according to their cerebral "orientation" to tantric sexuality, mediumistic capacities, and other paranormal activities supposedly enabled by the right brain (Capacchione 2001, Ehrenwald 1984, Spotts and Atkins 1999, Wells 1989).

Since the late 1960s, several authors in the area of education have insisted on the countless advantages of a school that would focus on the right brain and have criticized traditional pedagogy for its emphasis on left-hemisphere capabilities (Edwards 1979, Gainer and Gainer 1977, Hunter 1976). Such proposals for a “hemispheric balance in the curriculum” that would avoid the didactic failures of left-brain educational programs hark back to nineteenth-century pedagogical crusades and revive many of the assumptions of Brown-Séquard in France and the Ambidextral Culture Society in the United Kingdom. For all their success among teachers, more recent brain-based ideas about teaching and learning are no less scientifically specious and no more relevant or effectual for their stated purposes than their predecessors (Becker 2006, Bowers 2016). Yet no amount of failure dampens the hopes of bringing about an “integrative framework” through “constructive interdisciplinary dialogue” (Busso and Pollack 2015).

Important as it might be to differentiate science from quackery, the genealogy of cerebral self-help brings to light the porosity of the distinction and the extent to which twentieth- and twenty-first-century neuroascesis reproduces with an updated appearance the commonplaces of much earlier self-help discourse. But there are some major differences. At the end of the nineteenth century, the aim of Kellogg's workout for the brain was to resist a weakening of the social fabric; the disorders of the physical, social, and political bodies were to be countered by neuroascetic practices. The brain fitness movement at the time wished to salvage an individual and collective moral order seen as eroded by the rise of industrial society and by the concomitant loss of traditional sources of authority and legitimacy (Gusfield 1992). In contrast, contemporary neuroascesis is not aimed at restoring or saving an allegedly endangered social order; rather, it instantiates the values of an individualistic somatic culture. Yet the spirit of neuroascetical prescriptions and practices remains largely the same now as it was then. In light of Foucault's (1986, 1990) depiction of technologies of the self in the transition from paganism to Christianity, the fact that contrasting goals and frameworks sustain similar practices does not come as a surprise.

As highlighted by the very ideas of “brain fitness” or “neurobics” present in so many titles since the 1990s, the muscular-fitness model offers another element of continuity between the nineteenth century and recent decades (Cohen and Goldsmith 2002; Dennison, Dennison, and Teplitz 1994; Mark and Mark 1991; Winter and Winter 1987). The brain is a muscle: “Just as weight lifting repetitions in the gym or jogging strengthen certain muscle groups, mental exercises appear to strengthen and enhance cognitive functions over time” (Tannen n.d.). It is common to praise the “mental weight lifting” one can do in the “Brain Gym” (CBS 2006). Train your “cerebral muscles” (Goldberg 2001, 255), but in such a way that you avoid “brain cramps” (Chafetz 1992, 72). Do regularly the “brain stretches” that will help you “burn some synaptic calories” and prevent you from becoming a “mental couch potato” (Parlette 1997, 16); this is a challenging goal because mental muscles enjoy television, a true “bubble-gum for the brain” (152–153). Most neurobics authors establish distinctions among levels of brain accomplishment or mental prowess, since “you do not have to attain the brain equivalents of Steffi Graf’s or Michael Jordan’s level of physical fitness to be quicker in conversation, better at solving problems, have richer memories, and livelier associations” (Chafetz 1992, 23). For “those of you who wish to exercise your brain systematically as an athlete would exercise various muscle groups,” manuals provide well-ordered cerebral training programs and recommend hiring a cerebral “marathon trainer” and keeping “brain workout diaries” (213–214). The vocabulary of bodily fitness is thus extrapolated to the brain itself. Causally and rhetorically, bodily and cerebral fitness go hand in hand.

But none of this can be explained by invoking neuroscientific advances, not even those connected to cerebral plasticity, which have come to play such a central role in contemporary neuroascetical discourse. Rather, the genealogy of neuroascesis is best seen as an episode in the development of views about the human as well as of forms of sociality and subjectivation that involve notions and practices of the self and its relationships with one's own body and other people. In short, neuroascetical practices are tools whereby persons constitute themselves as cerebral subjects, and that is why doing their genealogy amounts to throwing critical light on that particular form of being human.