“UP FOR GRABS”

We have just observed that literature and cinema perform the cerebral subject and the ideology of brainhood in ways that both assert and challenge them, and that they enact the radical difference between the claim that “we are our brains” and the fact that we cannot be without one. This raises the question of who has the authority to examine whether and how we “are our brains” and points to the human sciences in their interpretive, contextualizing, and historicizing dimensions.1 Yet in most recent and contemporary contexts, it is chiefly the neurosciences that have claimed, and generally obtained, that authority.

Of course, it is up to these sciences to inform us about the brain and the nervous system and to document how sociocultural processes are neurobiologically “implemented.” Beyond the widespread mirages of inter- and transdisciplinarity, collaborations between the neurosciences and other scientific endeavors increasingly bring to light the complexity of the brain’s interactions with internal and external milieus, both organic and social. But this is no reason to accept the ideology of brainhood. Presumably no one believes we could be brains in vats and at the same time fully human. However, as we have illustrated here, there are well-funded research programs and successful commercial enterprises based not on simply acknowledging that we need a brain to be what we are and do what we do but on claiming that we are essentially our brains and that there can be no valid knowledge of human phenomena unless one shows what happens in the brain when those phenomena take place.

In a longue durée perspective, such views were ultimately made possible by the historical transformations of debates about personhood within the Latin Christian tradition, and they followed from the early modern psychologization and partial disincarnation of the self. “Partial” because it was understood that humans, though no longer obliged to possess a complete body to be persons, could not be fully incorporeal; and since the brain was regarded as the organ of the mind, once personhood was redefined in purely psychological terms, human persons became essentially their brains. “Essentially” first qualified a material entity, the minimal body needed for personhood and personal identity. With time, however, it also came to signal the conviction that the ultimate level of explanation for the vast range of human behavior is neurobiological. Those who sincerely reject such reductionism but make more or less angelic calls for critical friendship between the human and the natural sciences in this domain tend to go along with it. This is not what they want, since (on the contrary) they emphasize full, contextualized embodiments, but it is a structural effect of their position, for they imply that the lack of collaborative goodwill lies with human sciences afraid of losing their prerogatives rather than willing to embrace the neural turn.

The opposite, however, is the case. While claiming to leave behind the supposed speculations of the human sciences, the neuro is moved by their basic agendas, even if it does not share their purposes and barely takes into account their concepts and empirical work. It is from the cultural and historical import of these agendas that its own attempts at cerebralizing the human may gain some luster. Obviously, all sciences are “human” insofar as they are our creation, and the adjective thus applies in a twofold manner to those that concern human beings. However, beyond objects and methods, what differentiates the natural from the human sciences (and here we include the humanities and social sciences) may still be captured by the distinction between a causal, ontologically or methodologically reductionist Erklären and an interpretive and historicizing Verstehen (on these matters see for example Smith 2007). Both are necessary, their divergences are not absolute, and conversations between them are possible and may sometimes be both desirable and rewarding. But so are disciplinary autonomy and the maintenance of boundaries when they contribute to a division of labor that helps make sense of the human world and experience. These considerations apply to the neuro itself, which the life sciences seem to perceive as the natural and in any case intrinsically justified consequence of neuroscientific progress but which the human sciences recognize as a historically rooted and contextually dependent cultural phenomenon.

Is it, however, the same phenomenon throughout? We know much more about the brain today than was known in the late seventeenth century. But our subject here is not the brain itself; it is an ideology whose origins can be traced to that period, together with its subsequent expansion and materializations. The neuro, as we explained, is not a single entity but the sum of those materializations. Nor does it relate in a straightforward manner to how the histories of the self and the body have intertwined since the mid–twentieth century. On the one hand, the neural turns and the late forms of the cerebral subject have been interpreted as the extension to the brain of a broader transformation in which bodies and selves have grown increasingly close and interdependent by way of genetics, molecular biology, and biomedical technologies. Although the neurobiological dimension is said to be essential for the resulting “somatic individuality,” the self and personhood do not emerge as essentially “neuro.” On the other hand, the body is seen as increasingly obsolete, as an archaic vessel to be technologically enhanced and then replaced as we move from a transhuman to a posthuman condition (Ortega 2014). Ultimately, personhood would be synthetically replicated and the cerebral subject realized as an in silico programmable and networked model of and for human properties (Stollfuß 2014).

The dialectic between the organic embodiment of the somatic individual and the relative dematerialization of the posthuman one (relative because even virtual bodies must run on some sort of matter) is taking place as we write, and much of it lies in futures envisioned as utopian or dystopian. Yet, for all its novelty, when looked at from the appropriate distance, it has the odd look of a familiar scene from the times when Charles Bonnet imagined that if a Huron’s soul could inherit Montesquieu’s brain, Montesquieu would keep on thinking.

As we saw, in connection precisely with Bonnet’s post-Lockean thought experiment, the cerebral subject is a product of history, not an organism identified in nature thanks to the advancement of science. That reason for refusing its naturalness is reinforced by how it functions as an anthropological figure. In contrast to the unequivocal quality it radiates within the framework of the neuro, its real impact is characterized by what we have termed “ambivalence.” On the one hand, we do not need to engage in systematic Foucauldianism to recognize in the neuro a universe that, like the “discourse” of The Archaeology of Knowledge, “is characterized not by privileged objects, but by the way in which it forms objects that are in fact highly dispersed,” as well as by a capacity to generate “mutually exclusive objects, without having to modify itself” (Foucault 1969, 49). Discourse in such a framework is not mere language but a “formation” involving objects, concepts, and practices as well as subjective positions and power relations. The neuro possesses the resilience of such a formation: its aptitude both to generate and to hold contradiction, to embrace incongruities without going to pieces. The ambivalences we have examined with regard to the ideology of brainhood are at the core of an eclectic, fragmented, yet robust system.

On the other hand (and that is part of the same phenomenon), when people can, they resist or adopt the neuro depending on local and fluctuating interests. In the neurodiversity movement, a fitting exemplar of its complex modus operandi, neuroscientific information is used to rethink disease as difference, yet the redefinition of mental illness as brain disorder justifies both resisting and advocating therapy for the diagnosed individuals. In the neurodisciplines of culture, the “neuro” is largely a matter of opportunity. Goals are defined first, ranging from the theoretical to the pragmatic (from, say, naturalizing art to getting funds); neuroscientific idioms and research then offer an advantageous way to pursue them. The protagonists of the neuro nonetheless present their choice as inherently necessary by virtue of what humans fundamentally are. The same applies to the global challenges of mental illness and the promises of neuroscience to uncover their ultimate causes. The fulfillment of the hopes raised in this domain would certainly be beneficial for humanity, but the promises conveying those hopes rest on a desire for cerebral causality that tends to exclude elements of context and relationality recognized as important for understanding and treating psychological distress.

All these fields share what we characterized as a “modern creed.” It is modern both because of its chronology (not thinkable before the late seventeenth century) and because it is an element of the psychological, philosophical, political, and scientific cosmologies usually labeled “modern.” It is a creed because it states basic beliefs that guide action. “Belief,” in our usage, neither stands as a term of abuse nor serves to place the neuro in a closed world of faith, opinion, and subjectivity; knowledge, after all, is a species of belief (according to a widespread view, justified true belief acquired by a method that is reliable in the relevant context). Rather, our emphasis is on the fact that, if it were formulated as a statement opening with Credo, with the words I believe, the set of basic shared beliefs of the neuro community would begin with the assertion that we are essentially our brains or (if the formulation were extreme) that “everything, absolutely everything, is in the brain.”2

We have argued here that such a creed neither historically derives from nor currently depends on neuroscientific knowledge (though it may feel that way for certain individuals and communities) but that it must be traced to early modern scientific and philosophical developments that transformed the notions of personhood and personal identity. Later research into the brain, up to the present day, has buttressed the “cerebralization” of personhood but (contrary to what is often claimed) cannot substantiate it either conceptually or empirically; neuroscience sustains and embodies various “neural turns,” but it cannot turn the view of humans as cerebral subjects into a piece of “natural knowledge” based on empirical evidence.

Brainhood, in short, is best understood as a historically contingent resource, born to uphold and make plausible a redefinition of personhood. The arts, by simultaneously asserting it and denying it, infuse it with ambiguity and inconclusiveness and display it for what it is: an ideology, a complex set of notions, beliefs, and ideals in whose making empirical knowledge about the brain plays a role at best comparable to that of a supporting actor. And this makes sense: As Louis Menand (2002) put it, “every aspect of life has a biological foundation in exactly the same sense, which is that unless it was biologically possible it wouldn’t exist. After that, it’s up for grabs.”