IT is a common observation that people harbor ambivalent feelings about science and technology, oscillating between deep suspicion and outsized confidence in these two fields. Strangely, these highly emotional, though at times rational, views ranging between technophobia and technophilia turn mostly to confidence when it comes to medical progress as well as body and mind enhancement. In the field of medicine, this can easily be understood as a basic activity of the human brain, which develops behaviors allowing the prevention of, and/or a fight against, suffering. Indeed, shamans and the ancestors of physicians are as old as humanity. It is more difficult to understand in the case of enhancement, since potential benefits must be more carefully analyzed and actual risks more precisely weighed, leading to a conservative attitude related to a “preservation instinct.” This should be particularly true when it comes to our brain. Indeed, the neurosciences not only open new avenues to alleviate neurological and psychiatric disorders—and some day to repair our nervous system—but also present targeted ways to control and enhance vegetative functions (wakefulness, sleep, appetite, sexuality) as well as mood and cognitive behaviors (from memory to ideation), already attracting strong adherence and a huge market (Farah et al. 2004).
The momentum is such that one may already consider the matter beyond discussion, since “you can’t stop progress.” On the one hand, we may consider this a new lifestyle, a general trend toward improved memory and comprehension obtained through so-called “smart pills,” rewards of technical progress. Physicians or governmental regulatory agencies should only be required to check the safety and efficacy of “treatments” that would increasingly be regarded as food supplements. If enhancement is a basic need, people should only be concerned about equal access to enhancement drugs and devices. On this view, cognitive enhancement is just another technical product on the market, consumer needs and satisfaction being the only goals to fulfill. On the other hand, currently available drugs change not only some quantitative aspects of neural activity, whose improvement poses no problem, but also the global internal economy of cognition. Are available “smart pills” controlling children’s behavior, or are they promoting immediate, emotional, pre-learned, skill-based and short-term rewarded strategies? The answers may be detrimental to long-term goals and social interactions, and consequently challenge our philosophy of human rights. Cognitive enhancers change not only the amplitude of a given brain capacity, e.g. memory, but also the balance between emotional and rational networks, and will thus change our relationship to others, essential to the building of our thought and social life. Furthermore, we are at risk of becoming addicted to our own brain enhancement, thus renouncing our free will, provided it exists. Indeed, because most of our brain activity is unconscious, some neuroscientists argue that there is no free will and that cognitive enhancement is a natural need of our brain.
I shall argue, on the contrary, that recent advances in neurosciences do not deny free will and that the real risk resides in a hypertrophy of the self, losing essential feedback from the eyes of the others. These fundamental changes should encourage us to understand the driving forces of our “neurotechnological gourmandize” and wonder if cognitive enhancement is not a mystification that covers up social pressure for enhanced productivity and behavior control.
For some time now, our nervous system has been formalized with the cybernetic scheme that considers it essentially a feedback adaptive system, sculpted by trial and error, experience, and repetition. Indeed, scientists learned from developmental neurobiology that, starting from a highly redundant and poorly interconnected cellular stock, our nervous activity at critical periods selects and stabilizes less than 50% of neurons. These survivors become highly branched and interconnected, a neuron from the cerebral cortex forming, for instance, an average of 50,000 synapses (zones of communication) with neighboring or distant cells. This architecture remains highly plastic, at least at the level of synapses, allowing for the remodeling of circuits and the acquisition of new behaviors and memories. Learning may influence functional brain morphology (Peretz et al. 2009). The initial theory of phrenology of Franz Joseph Gall, which compared the brain to a muscle, was rejected long ago. It interpreted observable skull deformations, such as bumps, as the result of underlying highly developed brain faculties such as maternal instinct or criminality. The “bump of mathematics” remains a popular metaphor. But the development of brain imaging, particularly functional magnetic resonance imaging (fMRI), has renewed a form of phrenology, since it is now clear that training, such as learning music and becoming a violin maestro, results in an enlargement of the brain areas dedicated to the learned tasks, e.g. the area of the motor cortex dedicated to the fingers of the left hand (Zarate and Zatorre 2008).
Neural systems also possess intrinsic, self-generated activities. On the one hand, we have fundamental and determined activities of a living brain, such as the basic 40Hz rhythm of the thalamus and the constraints and thresholds for generating an action potential. On the other hand, stochastic processes introduce a probabilistic view of how the central nervous system (CNS) works, perceives, and acts. Take the N-methyl-D-aspartate (NMDA) receptor (NMDAR) for glutamate, the main excitatory neurotransmitter of the adult brain. NMDAR is a complex of transmembrane protein subunits forming a channel through the plasma membrane that may, or may not, let calcium ions enter the cell and trigger multiple intracellular events (Tai et al. 2008). A naïve view of the synapse would be that the action potential arrives at the presynaptic terminal, triggers the release of glutamate that binds to NMDAR on the postsynaptic membrane, and results in the opening of the channel, calcium entry, and signaling to the postsynaptic element. But in our brain, the presence of glutamate is not enough to predict the opening probability of NMDAR. NMDAR opens and closes spontaneously, at random. Glutamate binding should increase the probability of the open state, but in basal conditions another ion, magnesium, occupies the pore of the channel and blocks it, so that glutamate does not change the probability of the open state. The synapse, however, is wrapped by glial cells, particularly astrocytes, which are able to release D-serine. In the presence of this gliotransmitter, the magnesium block is removed, and glutamate then significantly increases both the probability of opening and its duration. In addition, NMDAR interacts with several other proteins that modulate its activity. An example is its interaction with the calcium-dependent enzyme CaMKII, highly enriched in the postsynaptic element, which results in the addition of a phosphate group to NMDAR, thus changing its properties.
Recent mathematical models that try to integrate what neuroscientists know about NMDAR consider that it may exist in 21 different conformations, each more or less likely to stabilize in an open state. This means that an identical pulse of glutamate will have a very different outcome from one NMDAR to another, determined by the history of the synapse and of the circuit involved. And the stochastic opening of the channel at the precise moment when the pulse arrives introduces probability into the system. In other words, a given drug may enhance glutamate release without increasing synaptic transmission.
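As a rough intuition for this point, the gating behavior described above can be caricatured as a two-state Markov chain. The transition probabilities below are purely hypothetical illustrative values, not measured rate constants, and a real receptor has far more than two states; the sketch only shows how the same glutamate pulse yields very different open-state statistics depending on whether the magnesium block has been relieved.

```python
import random

def simulate_nmdar(n_steps, p_open, p_close, seed=0):
    """Toy two-state (closed/open) Markov model of channel gating.

    p_open:  per-step probability of a closed -> open transition
    p_close: per-step probability of an open -> closed transition
    Returns the fraction of time steps spent in the open state.
    """
    rng = random.Random(seed)
    is_open = False
    open_steps = 0
    for _ in range(n_steps):
        if is_open:
            if rng.random() < p_close:
                is_open = False
        elif rng.random() < p_open:
            is_open = True
        if is_open:
            open_steps += 1
    return open_steps / n_steps

# Hypothetical illustrative parameters (not measured values):
# spontaneous gating; glutamate under the Mg2+ block (opening rate
# unchanged); and glutamate after D-serine has relieved the block
# (opening rate raised, closing rate lowered -> longer openings).
baseline   = simulate_nmdar(100_000, p_open=0.01, p_close=0.20, seed=0)
mg_blocked = simulate_nmdar(100_000, p_open=0.01, p_close=0.20, seed=1)
unblocked  = simulate_nmdar(100_000, p_open=0.08, p_close=0.10, seed=2)
```

Under the block, glutamate leaves the open-state fraction essentially at its spontaneous level; once the block is relieved, the same stimulus produces a severalfold larger open fraction, which is the sense in which the history and modulatory state of the synapse, not the transmitter pulse alone, determine the outcome.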
These advances change our view of how our brain works. In the cybernetic model, the brain was a black box activated by external stimuli, such as perceptions, or by internal needs signaled by hormones. A given stimulus was associated with a coordinated reaction, more or less like a reflex. The classical illustrations are Pavlov’s experiments. Today’s model considers a self-generated activity modulated by life events. Most of our brain activities are anticipations, not reactions. This was recently illustrated by the analysis and formalization of changes of mind in decision making (Resulaj et al. 2009). The currently accepted “drift-diffusion” model considers that a decision is made when the accumulated noisy evidence (the decision variable) reaches a criterion level, the decision bound. This decision is followed by reinforcement as the subject looks over the results of the action. But this does not explain the changes of decision that one may make even in the absence of a new clue. Indeed, it appears that a decision is made some 400ms before its physical initiation (the latency period). During this time, the brain continues to work, to evaluate, and can even change its mind.
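The drift-diffusion account of such changes of mind can be sketched in a few lines. All parameters below (drift, noise, bound, number of post-decision steps standing in for the latency period) are hypothetical illustrative values; the point is only that if accumulation continues after the bound is crossed, occasional reversals arise with no new external clue.

```python
import random

def drift_diffusion_trial(drift, noise_sd, bound, post_steps, rng):
    """One trial: accumulate noisy evidence until |x| reaches the
    decision bound (initial commitment), then keep accumulating the
    evidence still in the processing pipeline during the latency
    period; a change of mind occurs if the accumulated evidence
    changes sign before the movement is physically initiated."""
    x = 0.0
    while abs(x) < bound:
        x += drift + rng.gauss(0.0, noise_sd)
    initial = 1 if x > 0 else -1
    for _ in range(post_steps):          # stand-in for the ~400 ms latency
        x += drift + rng.gauss(0.0, noise_sd)
    final = 1 if x > 0 else -1
    return initial, final

rng = random.Random(42)
# Weak positive drift: the "correct" answer is +1, but noise dominates.
trials = [drift_diffusion_trial(0.02, 1.0, 3.0, 40, rng)
          for _ in range(2000)]
changes = sum(1 for initial, final in trials
              if initial != final) / len(trials)
```

With these toy settings a noticeable fraction of trials reverse during the latency window, qualitatively matching the observation that the brain can change its mind after the decision variable first hits the bound.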
This latency is close to the one originally found in Libet’s famous experiment on voluntary action (Libet 1985). Using electrodes on the wrist and scalp to measure, respectively, the start of the action and the start of the readiness potential in the motor cortex, Libet asked people to report, by watching a revolving spot on a clock, when they decided to move. The brain activity began about 500ms before the person was aware of deciding to act. The conscious decision came far too late to be the cause of the action. This experiment and many confirmatory ones led some neuroscientists and moral philosophers to consider that free will is a fiction, or a self-generated illusion of the brain, because we are not aware of the unconscious reasons that drive our choices (Dennett 2003; Wegner 2004; Soon et al. 2008; Heisenberg 2009; Suhler and Churchland 2009). Conscious free will is a post hoc rationalization, since our brain makes essentially unconscious decisions, and even the few decisions that arise in our conscious field appear a few milliseconds to seconds after the real moment of our brain’s choice. Thus, it seems that consciousness is a brain function distinct from decision-making, and that a free decision does not depend on our being conscious of it. As stated by Martin Heisenberg, “Conscious awareness may help improve our behavior, but it does not necessarily do so and is not essential” (Heisenberg 2009).
Unexpectedly, this anticipatory feed-forward principle links our intentions to the conscious perception of our actions. This was recently illustrated for a simple hand movement. Taking advantage of the electrical stimulations performed in the course of brain-tumor neurosurgery to prevent lesions of major active cortical areas such as the motor cortex, it was reported that while direct stimulation of the motor cortex actually triggers a movement of the contralateral hand, the subject is not always conscious of this movement (Desmurget et al. 2009; Desmurget and Sirigu 2009). Conversely, the same stimulation applied to the parietal cortex area involved in the intention to move the hand triggers the feeling that the hand moves, even if it did not. Only prestimulation of the parietal/intentional area allows the conscious perception of the movement of the hand. We may formulate the hypothesis that future work will generalize this principle linking consciousness to the anticipation of the results of actions. One is aware of what one does if and only if it was one’s intention to do so. Conversely, one should neglect, or at least remain unconscious of, what one did with no intention of doing so. In other words, the neurophysiological basis of our conscious perception may give much more weight to internal rewards from expected results than from unexpected ones. We need an external witness to keep us from neglecting or rapidly forgetting unintended effects.
Placing intention at the genesis of conscious activities is of paramount importance, since a basic function of our brain is to be a detector of the intentions of others (Behrens et al. 2009). Our intentions to act make us aware of acting, and we also spend most of our time observing and guessing others’ intentions. This is true not only for other humans, or even our preferred pets, but for any scene we observe. Humans spontaneously imbue the world with social meaning. The amygdala, a key node in the network for interpreting social meaning, is a collection of nuclei in the temporal lobe (Kennedy et al. 2009). Its role in processing emotionally and socially relevant information was demonstrated a few years ago (Heberlein and Adolphs 2004). A film of animated shapes, two triangles, a big one and a small one, and one small circle, moving on the screen, sometimes colliding, sometimes disappearing at the borders of the screen, is normally seen as full of social content: a big bad triangle tries to eat the gentle small circle, and the courageous small triangle tries to protect it. In one case study of a patient with bilateral amygdala damage, this film was described in entirely asocial, geometric terms, despite otherwise normal visual perception (Heberlein and Adolphs 2004). As the amygdala is well known to be a major crossroads of emotion processing in our brain, this finding suggests that the human capacity for anthropomorphizing draws on some of the same neural systems as do emotional responses. It also suggests that a basic and unconscious function of our brain is to fill the physical world that surrounds us with anticipated human-like intentions, finely tuned by an emotionally driven flux.
Taken together, these data depict a brain that works mainly on self-generated, partly stochastic, anticipatory, and unconscious activities. Intention makes us consciously aware of the results of our actions. Crossroads such as the amygdala are essential for processing emotions and detecting the intentions of others. How can cognitive enhancers be selective with such highly integrated and overlapping activities?
Most technologies we use daily simultaneously satisfy our feed-forward and feedback working brain. The remote control of our television allows us to reach the channel we want, satisfying Hume’s proposal of learning by association. How do we know and trust that plucking a violin string will produce a sound? Because we learned that plucking the string was reproducibly associated with hearing a sound, and this built up our memory and rational networks. On the same basis, we spontaneously expect that the sun will rise tomorrow morning as it does every day. Night after night, getting the channel we want reinforces the idea that technology can be trusted, as long as the batteries work. However, this might not be true for a new technology. Our experience demonstrates that technology improves rapidly, and that the pace is ever quicker. Just take cell phones as an example. Not only do they allow us to work or reach friends at any time, they now guide us with GPS (global positioning system), deliver weather reports and news, and can take pictures or small movies. For an increasing number of people, cell phones have become almost another limb of the body, and any failure is felt with a moral distress similar to that of a real wound. It is possible to hypothesize that we can become addicted to personal technologies because our brain anthropomorphizes these objects using the same networks as it does for emotion. And so the question arises: are we still able to decide how we use them? This will be one of the main questions challenging any cognitive enhancing technology.
Indeed, we have to evaluate the multiple aspects of this growing addiction, which include personal/individual choices as well as social constraints. To continue with our cell phone example, it is easy and rather cheap to call a friend, but much more difficult to move from one service provider to another. Problems of reversibility will become major when implants do everything from enhancing, or potentially erasing, memory to accelerating ideation. Will this interfere with, or even prevent, our self-generated capacity to obey moral law, to stay with the classical definition of freedom, or at least autonomy (Kant)1?
At the end of 2008, several scientists, many of them also contributing to the present book, called for a responsible use of cognitive-enhancing drugs by healthy people (Greely et al. 2008). It was most likely the right time to take such a position—regardless of whether one views it as correct or erroneous—considering the rapidly growing demand. But we need to examine enhancement from different standpoints. On the one hand, on an empirical and practical basis, one may consider that enhancement is a cultural commonplace and that the brain needs to be optimized as any mechanical machine might be. On the other hand, one may wonder what kind of cognitive enhancement society will request, and even ask whether individuals really need it.
One good reason to oppose the use of cognitive enhancers is their risk of toxicity and/or their lack of efficacy. This could lead us to consider them as medical treatments, with the same evaluation procedures and regulatory approval requirements as for any medical care. However, approval as a treatment is no guarantee against abuse, as illustrated by the over-prescription of antidepressant and anxiolytic drugs. Furthermore, approval as a medical treatment may have several perverse effects. One is that the approval is taken as a certificate validating the efficacy of the drug as a cognitive enhancer. Another is that a medical treatment is designed for a disease. This may even create a vicious circle in which new diseases are invented for the sole purpose of using new drugs. The effect of the drug is then considered proof that the diagnosis was correct and that the disease is real. We can already observe a trend towards transforming an increasing range of conditions that were previously regarded as part of the normal human spectrum into pathologies treated by medicine.
A second reason to oppose the use of enhancers is that they are already overestimated as so-called “smart pills” supposed to “boost one’s intellectual creativity.” A century ago, such terms were used for the devastating alcohol absinthe, a basic “booster” of the genius of French poets such as Baudelaire, Verlaine, or Rimbaud, and of painters such as Van Gogh, which in fact drove them insane. Since the intention is to benefit from cognitive enhancement, people will anticipate a low individual risk, soon to be overcome by rapid technological progress. This will be supported by marketing strategies minimizing the real and/or potential risks and promoting the benefits.
Enhancement in itself is not a problem. Getting better is a basic root of human culture and an essential way to compensate for our weak nature. Enhancing our immune system through vaccination, for example, allowed a major decrease in child mortality. So enhancing our brain capabilities is not something to blame on the grounds that it would modify some untouchable natural property of human beings. Even the book of Deuteronomy says, “I gave you life and death; you shall choose life,” on which the Talmud comments that “human destiny is to finish the world.” Considering progress in our understanding of how the brain works and the availability of drugs or other processes to increase some of its abilities, why not use them? Indeed, for as long as humans have existed, education has always tried to enhance the cognitive abilities of students as well as scholars. Consequently, the question of enhancement using drugs or implants is frequently focused on fairness. The transhumanist philosopher Nick Bostrom describes it this way: “If school is to be regarded as a competition for grades, then enhancers would arguably be cheating if not everyone had access to enhancements or if they were against the official rules. If school is viewed as having primarily a social function, then enhancement might be irrelevant. But if school is seen as being significantly about the acquisition of information and learning, then cognitive enhancements may have a legitimate and useful role to play.”2
An initial problem with such an argument is that education deals with the fruit of a natural selection that took a rather long period to produce the human brain, this wonderful set of 200 billion interacting cells that we are just beginning to understand. Drugs and implants must be evaluated for more than their potential toxicity and their real efficacy. They may introduce drastic quantitative but also qualitative changes in the way our brain works. One apparent advantage, for example staying awake longer, may come at the cost of other cognitive impairments, sooner or later. Such a cost/benefit balance should be carefully evaluated to show evidence not only of some immediate benefits but also of short-term as well as long-term side effects. A major need is a detailed analysis of the impact of “smart pills” on the balance between emotionally driven and rationally grounded circuits, and on the importance given to immediate rewards over long-term goals.
A second problem is the confusion between some basic neurophysiological functions, being awake for example, which is easy to evaluate, and cognition, a matter much more of quality than quantity.
A third problem requires us to consider cognitive enhancement within the social context wherein it takes place. Taking “awake” drugs does not have the same meaning if you are a soldier on the battlefield, a skipper in a race, or a trader fighting for bonuses.
More is far from better when we consider cognition. A classical example is Shereshevsky, the mnemonist patient of the Russian neurologist Alexander Luria, a man gifted with an impressive capacity for memory but a depressed and inhibited capacity for focusing attention. This demonstrates that memory is much more than storage capacity. Cognition is not a simple aggregation of basic functions such that increasing one of them would improve the complete process. The success obtained by implants as substitutes for sense organs, which may lead to vision better than 10/10 or improved hearing, is not a good metaphor for cognition. In examining discussions of cognitive enhancers in society, one is struck by their overly simplistic view of a precise increase in the magnitude of a given activity.
Take the example of modafinil (Provigil® in the US), a wakefulness-promoting agent whose therapeutic effect was first discovered in France in the early 1990s. It is used as a treatment for the excessive somnolence observed in the rare genetic disease known as narcolepsy/cataplexy. Modafinil has consistently shown efficacy in measures of alertness in narcolepsy, as well as in shift-work sleep disorder (Czeisler et al. 2005). Studies in rodents indicate that modafinil can improve working memory as well as the processing of contextual cues, and that these effects may be augmented with sustained dosing regimens. In healthy humans, with or without sleep deprivation, working memory, recognition memory, and sustained attention are enhanced by modafinil. However, results also show that the magnitude of modafinil’s effects in healthy adults depends on underlying cognitive abilities. The mechanism of modafinil’s action is controversial. Studies employing pharmacological tools or genetic ablation of α-1B-adrenoreceptors in mice suggest that modafinil increases wakefulness by activating central noradrenergic transmission. Recent studies reported that modafinil changes the coupling between the brain stem locus coeruleus nucleus and the prefrontal cortex (Minzenberg et al. 2008). However, pharmacological elimination of the noradrenalin-transporter-bearing forebrain projections in mice does not influence the efficacy of modafinil’s action (Wisor and Eriksson 2005). By contrast, dopamine-dependent signaling is important for the wake-promoting action of modafinil. Indeed, modafinil has a direct agonist action on the D2 dopaminergic receptor (Korotkova et al. 2007). In addition, modafinil can increase serotonin release, decrease GABA release, and enhance glutamate release in various brain regions.
Therefore, even if modafinil possesses only minimal potential for abuse, it will affect cognition on a much more global scale than by just increasing the level of arousal or the efficiency of working memory.
Mimicking some aspects of dopamine D2 transmission, modafinil will affect some learning processes. Reward-based decisions are guided by reward expectations, and reward expectations are updated based on prediction errors. The processing of these errors involves dopaminergic neuromodulation in a central region of the brain, the striatum. Consequently, modafinil’s effect on the striatum will affect the balance between “Go” learning to make good choices and “NoGo” learning to avoid those that are less adaptive. In the prefrontal cortex, dopamine contributes to learning on a short-term scale by actively maintaining recent reinforcement experiences in a working-memory-like state. Dopamine also tunes some essential interactions between the prefrontal cortex and the striatum by weighing the importance that a particular prediction error has on updating the expectation for the next decision (Moustafa et al. 2008). Thus, modafinil will also affect the decision-making process in the long term. One may even hypothesize that modafinil or equivalent enhancers played a part in the 2008 financial crisis. The arguments are as follows: (1) Like each individual brain, the financial industry is essentially based on anticipation. (2) The herd behavior of financial-market agents illustrates our brains as intention detectors, since they buy or sell when they feel that others are going to buy or sell, rather than considering the real value of what they trade. (3) It is well known that a large portion of this population is in favor of arousal-enhancing drugs, including modafinil. What will happen when new molecules become available, such as oxytocin, recently demonstrated to increase trust in social relationships (Baumgartner et al. 2008)? Will some professionals feel that they have an obligation to use enhancers as proof of their dedication to their work?
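The “Go”/“NoGo” logic of prediction-error learning can be made concrete with a minimal two-armed bandit. All parameters here (payoff probabilities, learning rates, exploration rate) are hypothetical illustrative values, and the “Go-biased” learner is a crude stand-in for boosted dopaminergic transmission, not a model of modafinil’s pharmacology.

```python
import random

def run_bandit(alpha_gain, alpha_loss, n_trials=5000, seed=0):
    """Two-armed bandit learned from dopamine-like prediction errors.

    delta = reward - value is the prediction error; alpha_gain scales
    learning from positive errors ("Go"), alpha_loss from negative
    errors ("NoGo").  Returns the learned values and how often the
    richer arm A was chosen.
    """
    rng = random.Random(seed)
    p_reward = {"A": 0.8, "B": 0.2}      # hypothetical payoff probabilities
    value = {"A": 0.5, "B": 0.5}
    picks_a = 0
    for _ in range(n_trials):
        if rng.random() < 0.1:           # occasional exploration
            choice = rng.choice(["A", "B"])
        else:                            # otherwise pick the higher value
            choice = "A" if value["A"] >= value["B"] else "B"
        reward = 1.0 if rng.random() < p_reward[choice] else 0.0
        delta = reward - value[choice]   # prediction error
        alpha = alpha_gain if delta > 0 else alpha_loss
        value[choice] += alpha * delta
        picks_a += choice == "A"
    return value, picks_a / n_trials

# A balanced learner versus one whose learning from positive errors
# ("Go") is boosted relative to negative errors ("NoGo").
balanced_value, balanced_frac = run_bandit(alpha_gain=0.1, alpha_loss=0.1)
go_biased_value, go_biased_frac = run_bandit(alpha_gain=0.3, alpha_loss=0.05)
```

The balanced learner’s values converge near the true payoff probabilities, whereas the Go-biased learner systematically overvalues both options: a shift in the gain/loss learning balance changes not how much is learned but what the world appears to be worth, which is the sense in which a drug acting on this balance alters the internal economy of decision making.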
Since social relationships are based on the fact that individuals are responsible for their actions, are they still responsible for what they do if they consider cognitive enhancers as a duty?
Complicating the issues is the intrinsic heterogeneity of people. For example, studies coupling human genetic analyses and learning tasks recently showed that several independent dopaminergic mechanisms contribute to reward and avoidance learning in humans (Frank et al. 2007; Krugel et al. 2009). Individual genetic variations may influence the neurobiological mechanisms underlying the ability to rapidly and flexibly adapt decisions to changing reward contingencies. Thus, a molecule such as modafinil will have a very different effect from one individual to another, introducing the concept of “personalized enhancement” as a parallel to personalized medicine, but also complicating the real evaluation of what benefit a given individual can expect from cognitive enhancers.
Another example of the difficulty of determining what “enhancement” with cognitive-targeted drugs really means is illustrated by methylphenidate, effective in getting children to be quieter. Methylphenidate is the drug of reference for attention deficit hyperactivity disorder (ADHD), a psychiatric syndrome that remains highly controversial, some physicians considering that it was created (or at least that its diagnostic definition was greatly extended) to support extensive prescription of this drug. Indeed, the diagnosis opens the door not only to treatment but also to many parallel social benefits for the family. By contrast, the long-term benefits that diagnosed children get from the treatment, for instance better school results, are far from demonstrated (Gonon 2009). Thus, the treatment is effective on the treated child’s behavior, effective in bringing social support for parents, effective for a quiet classroom, but not effective as a cognitive enhancer for the treated individual. Here is another real dilemma: individual rights versus social pressure.
Who makes the decision to treat, and does it remain under the treated individual’s control? The Nuffield Council on Bioethics in 2006 considered that the prospect of cognitive enhancement for all might be positive in a certain way: “improving the general standards or abilities across the population might not be a problem if this was driven by the public interest.”3 Notice also that in medical ethics the so-called “public interest” is never considered more important than the choice of the person. It might become more and more difficult to differentiate this decision, even taken freely, from the need to fulfill standards of efficacy requested by society, for instance to enter and stay in a given school, team, or company. This makes cognitive enhancement improperly compared to doping in sport, because the latter is a way to obtain an advantage over competitors. At the start of enhancer availability, and possibly for quite some time, cognitive enhancement, like doping, may be a powerful means of discrimination between a caste of the “enhanced” and a mass of “regulars.” After a while, however, enhancement will become an obligation just to stay at the same level as one’s competitors.
Individual addiction and social pressure will make cognitive enhancement irreversible, or reversible only at a great risk of social exclusion or worse. A recent and dramatic illustration is the case of face transplants. People requesting such surgery do not suffer from a life-threatening disease but rather from a kind of social death associated with a scarred face, and are ready to risk their lives to be relieved of this disability. Since the grafted tissue comes from a stranger, it requires permanent treatment to block the immune system and prevent transplant rejection. The treatment is expensive, sometimes painful, and not fully effective. The first Chinese face-transplant patient stopped his immunosuppressive therapy after a year of treatment and rapidly died from rejection of the graft. This might become the archetype of risking one’s life to remain socialized.
Our post-industrial society requires more and more efficiency from each of us. Several paths could be taken to reach such a goal. One would be to overcome individual limitations through improved collaborative organization and the enhancement of non-competitive individual interactions. Instead, more and more is required from each individual, placing competition above any other motivating factor of success. Furthermore, competition is mostly evaluated on short-term quantitative criteria. The fact of establishing numbers, creating ranks, and classifying people within percentiles seems to have a kind of magic effect: it makes any evaluation credible. Hence the development of tests that can be translated into numbers. Such tests, when applied to cognition, preferentially explore certain aspects of brain function such as speed of reaction, memory, and pre-registered logical skills. These biases have been denounced for a century in connection with the abusive use of IQ tests. In fact, available cognitive enhancers increase precisely these brain processes, maybe because their efficacy was validated by these tests. In addition, this focuses the concept of cognitive enhancement on individual criteria, whereas no individual human brain is able to work alone and by itself: a human brain needs to interact with another human brain from its first day to its last.
This road has been paved by several recent philosophers, e.g., in France, Emmanuel Levinas (Humanisme de l’autre homme, 1972), Jean-Paul Sartre (L’existentialisme est un humanisme, 1946), Paul Ricoeur (Changeux and Ricoeur 1998), and Michel Foucault (Le courage de la vérité, 2009). In the introduction to his last series of lectures, delivered at the Collège de France in Paris in the first months of 1984,4 Michel Foucault considers that all his work was dedicated to studying how one is recognized as telling the truth. For years Foucault described the social practices and forms of “telling the truth” about figures of modern society, the mad or the delinquent for instance. During this period, he developed the concept of “biopower,” defined as a generalized constraint on our bodies but not yet on our minds. He was always studying the relationship between the subject and the truth through the discourse that tells the truth about the subject. But during his last ten years, he turned to the discourse of truth that the subject is able to deliver about himself. He developed a long analysis of “parrêsia,” the ancient practice, central to the Greeks and Romans, of searching within one’s thoughts and practicing the truth on oneself. In each case he studied, from Socrates to the present, there was always a need for another person to make it possible to “tell the truth about oneself.” From the last words of Socrates to Crito, to Catholic confessors, and more recently the expansion of medicine, psychiatry, and psychoanalysis, there is always the need of the other.
Cognitive enhancement may appear as an attempt to escape from this need for alterity, the boosted brain suddenly able to grasp the truth on its own, without any further external oversight. This is most likely a great delusion since, as already mentioned, a basic brain activity is the detection of the other’s intention, particularly through the eyes of the other. Interestingly, recognition of the other and the exploration of a face start with the eyes; by contrast, a lack of focus on the other’s eyes is a frequent disorder found in autism. This need for the other’s eyes is reminiscent of Emmanuel Levinas’s views: Levinas derives the primacy of his ethics from the experience of the face-to-face encounter with the other. The last words of Foucault’s final lecture concluded on the absolute need for an external view. Herein lies the main risk of an over-individualized society in which cognitive enhancement is seen as a way to escape from the eyes of the other, a phenomenon that would result, if we follow Levinas and Foucault, in a renunciation of telling the truth.
During the last two centuries, the concept of progress built up by Enlightenment thinkers, and initially defined by Condorcet as the road to human happiness (Esquisse d’un tableau historique des progrès de l’esprit humain, 1973), has become an autonomous social value with no other goal than its own technological development. We must reintroduce human goals and finality into the evaluation of enhancement.
There is clearly a convergence between humans’ natural appetite for enhancement and society’s needs. This convergence has led us to consider the human body, including the brain, as an object to be technically improved in all its functions, cognition among them. We may wonder what kind of human being will emerge from these technical manipulations. We must carefully evaluate the potential for modern slavery that enhancement techniques might create through addiction and irreversibility. We must also acknowledge that biology in general, and the brain in particular, is not just a huge set of simple mechanisms that bioengineers can easily fine-tune for maximum efficiency. In the realm of cognition, enhancement may mean a fundamental change in the way our brain processes information. We mentioned earlier that modafinil may interfere with the mechanisms underlying the ability to rapidly adapt one’s decisions when reward contingencies change. Such a modification in the way our brain operates may represent a real breakthrough, a surge towards a post-human creature. What this new form of humanity might resemble is simply impossible to imagine, since we still do not know what novel cognitive processes these “enhanced brains” will use, nor whether and how they will need interaction with the “other.” Consequently, it will be essential to evaluate how these individual performance enhancers influence the capacity for social interaction. The risk is that of creating isolated super-brains lost within a self-centered, self-organized virtual world, wherein the absence of the eyes of the other blurs the fundamental meaning of “telling the truth” about oneself.
I am particularly grateful to Lucile Chneiweiss, Judy Illes, and Jennifer Merchant for careful reading and fruitful suggestions on this manuscript.
Baumgartner, T., Heinrichs, M., Vonlanthen, A., Fischbacher, U., and Fehr, E. (2008). Oxytocin shapes the neural circuitry of trust and trust adaptation in humans. Neuron, 58, 639–50.
Behrens, T.E., Hunt, L.T., and Rushworth, M.F. (2009). The computation of social behavior. Science, 324, 1160–4.
Changeux, J.P. and Ricoeur, P. (1998). Ce qui nous fait penser. La nature et la règle. Paris: Editions Odile Jacob.
Czeisler, C.A., Walsh, J.K., Roth, T., et al. (2005). Modafinil for excessive sleepiness associated with shift-work sleep disorder. New England Journal of Medicine, 353, 476–86.
Dennett, D.C. (2003). The self as a responding-and responsible-artifact. Annals of the New York Academy of Sciences, 1001, 39–50.
Desmurget, M. and Sirigu, A. (2009). A parietal-premotor network for movement intention and motor awareness. Trends in Cognitive Sciences, 13, 411–19.
Desmurget, M., Reilly, K.T., Richard, N., Szathmari, A., Mottolese, C., and Sirigu, A. (2009). Movement intention after parietal cortex stimulation in humans. Science, 324, 811–13.
Farah, M.J., Illes, J., Cook-Deegan, R., et al. (2004). Neurocognitive enhancement: what can we do and what should we do? Nature Reviews Neuroscience, 5, 421–5.
Foucault, M. (2009). Le Courage de la vérité. Le gouvernement de soi et des autres II. Cours au Collège de France, 1984. Paris: Éditions du Seuil, coll. “Hautes Etudes.”
Frank, M.J., Moustafa, A.A., Haughey, H.M., Curran, T., and Hutchison, K.E. (2007). Genetic triple dissociation reveals multiple roles for dopamine in reinforcement learning. Proceedings of the National Academy of Sciences of the United States of America, 104, 16311–16.
Gonon, F. (2009). The dopaminergic hypothesis of attention-deficit/hyperactivity disorder needs re-examining. Trends in Neurosciences, 32, 2–8.
Greely, H., Sahakian, B., Harris, J., Kessler, R.C., Gazzaniga, M., Campbell, P., and Farah, M.J. (2008). Towards responsible use of cognitive-enhancing drugs by the healthy. Nature, 456, 702–5.
Heberlein, A.S. and Adolphs, R. (2004). Impaired spontaneous anthropomorphizing despite intact perception and social knowledge. Proceedings of the National Academy of Sciences of the United States of America, 101, 7487–91.
Heisenberg, M. (2009). Is free will an illusion? Nature, 459, 164–5.
Kennedy, D.P., Glascher, J., Tyszka, J.M., and Adolphs, R. (2009). Personal space regulation by the human amygdala. Nature Neuroscience, 12, 1226–7.
Korotkova, T.M., Klyuch, B.P., Ponomarenko, A.A., Lin, J.S., Haas, H.L., and Sergeeva, O.A. (2007). Modafinil inhibits rat midbrain dopaminergic neurons through D2-like receptors. Neuropharmacology, 52, 626–33.
Krugel, L.K., Biele, G., Mohr, P.N., Li, S.C., and Heekeren, H.R. (2009). Genetic variation in dopaminergic neuromodulation influences the ability to rapidly and flexibly adapt decisions. Proceedings of the National Academy of Sciences of the United States of America, 106, 17951–6.
Levinas, E. (1972). Humanisme de l’autre homme. Montpellier: Fata Morgana.
Libet, B. (1985). Subjective antedating of a sensory experience and mind–brain theories: reply to Honderich (1984). Journal of Theoretical Biology, 114, 563–70.
Minzenberg, M.J., Watrous, A.J., Yoon, J.H., Ursu, S., and Carter, C.S. (2008). Modafinil shifts human locus coeruleus to low-tonic, high-phasic activity during functional MRI. Science, 322, 1700–2.
Moustafa, A.A., Cohen, M.X., Sherman, S.J., and Frank, M.J. (2008). A role for dopamine in temporal decision making and reward maximization in parkinsonism. Journal of Neuroscience, 28, 12294–304.
Peretz, I., Gosselin, N., Belin, P., Zatorre, R.J., Plailly, J., and Tillmann, B. (2009). Music lexical networks: the cortical organization of music recognition. Annals of the New York Academy of Sciences, 1169, 256–65.
Resulaj, A., Kiani, R., Wolpert, D.M., and Shadlen, M.N. (2009). Changes of mind in decision-making. Nature, 461, 263–6.
Sartre, J.P. (1970). L’existentialisme est un humanisme. Paris: Nagel.
Soon, C.S., Brass, M., Heinze, H.J., and Haynes, J.D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11, 543–5.
Suhler, C.L. and Churchland, P.S. (2009). Control: conscious and otherwise. Trends in Cognitive Sciences, 13, 341–7.
Tai, C.Y., Kim, S.A., and Schuman, E.M. (2008). Cadherins and synaptic plasticity. Current Opinion in Cell Biology, 20, 567–75.
Wegner, D.M. (2004). Précis of The Illusion of Conscious Will. Behavioral and Brain Sciences, 27, 649–59; discussion 659–92.
Wisor, J.P. and Eriksson, K.S. (2005). Dopaminergic-adrenergic interactions in the wake promoting mechanism of modafinil. Neuroscience, 132, 1027–34.
Zarate, J.M. and Zatorre, R.J. (2008). Experience-dependent neural substrates involved in vocal pitch regulation during singing. Neuroimage, 40, 1871–87.