CHAPTER 21
WHY NEUROETHICISTS ARE NEEDED

RUTH FISCHBACH AND JANET MINDES

For all that scientists have studied it, the brain remains the most complex and mysterious human organ.

(Carey 2009a)

INTRODUCTION

The Court Will Now Call Its Expert Witness: The Brain

(Chen 2009)

Poor Children Likelier to Get Antipsychotics

(Wilson 2009)

Brain Researchers Open Door to Editing Memory

(Carey 2009a)

Surgery for Mental Ills Offers Both Hope and Risk

(Carey 2009b)

DISCOVERIES about the brain emerge almost daily; it is therefore hardly surprising that the relatively new field of neuroethics is burgeoning. Headlines like those above make us pause, and immediately bring to mind many issues that will need definition and resolution. In this context, who will be best suited to integrate knowledge from brain science and ethics? We contend that well-prepared neuroethicists are needed.

Medical ethics developed in the 18th century. In the mid-20th century, research ethics rose from the ashes of the Holocaust and with the recognition of many highly problematic medical and social science research protocols (e.g. Tuskegee Syphilis Study (Jones 1981); Milgram’s Obedience to Authority Experiment (Milgram 1974)). Bioethics has had a rapid rise in the 20th century in response to dramatic advances in the life sciences, public health, engineering, environmental sciences, and other developments. The subdisciplines of classical ethics continue to evolve in parallel, and remain robust. Their collective expertise enriches and protects society and patients, science and scientists. Recently, neuroethics has emerged as a subspecialty of both bioethics and neuroscience. There is no reason to believe that this new subdiscipline will be any less successful, needed, or enduring than its disciplinary ancestors.

Technological and biomedical advances relevant to the brain—brain surgery, brain imaging, brain stimulation technology, neuroengineering, nanotechnology, robotics, neurogenetics, and others—will continue to spawn many significant questions. Because the brain is both symbolically and biologically such a crucial organ, and persons with brain disorders are often particularly vulnerable individuals, neuroethical questions become exquisitely compelling.

However one views the mandate of neuroethics, overarching questions about defining the field and about how broad and diverse its scope should be must be addressed. In addition to monitoring brain sciences for ethical problems, and educating the public to be more neuroliterate, neuroethicists can work to constructively use the knowledge from neuroscience to benefit other disciplines. They also should speak out “against gee-whiz science that is merely fashionable, as well as against experimentation that exceeds the risk-benefit calculus” (Fischbach and Fischbach 2008).

In this chapter we will first review some of the definitions in circulation that reveal the varied perspectives and goals of the field of neuroethics. We follow this with our informal brief taxonomy of neuroethical questions. We will discuss in depth two specific contentious issues, one clinical and one social (psychosurgery and brain privacy), and show how neuroethicists can serve to inform and to protect. As neuroethicists will need education encompassing many domains, we describe the academic grounding and qualifications that should be required. We also consider the pivotal roles neuroethicists should play. We contribute views that may help define the field of neuroethics and give structure and relevance to its activities. In conclusion, we contend that neuroethicists are needed, now more than ever.

DEFINITIONS AND SCOPE OF THE FIELD

Multiple definitions of neuroethics and calls to action have been offered. At the seminal conference Mapping the Field, held in San Francisco in 2002, the late writer and journalist William Safire is credited with bringing the term “neuroethics” to wider attention, although it had been used earlier (see Cranford 1989; Pontius 1973, 1993). According to Safire:

Neuroethics is the examination of what is right and wrong, good and bad about the treatment of, perfection of, and unwelcome invasion of or worrisome manipulation of the human brain. (Safire 2002)

Another oft-quoted view is that of Gazzaniga who wrote:

“Neuroethics is more than just bioethics for the brain”…it is “the examination of how we want to deal with the social issues of disease, normality, mortality, lifestyle, and the philosophy of living informed by our understanding of underlying brain mechanisms” (Gazzaniga 2005).

Adapting Van Rensselaer Potter’s definition of bioethics, Illes offered a more general one: neuroethics is “a discipline that aligns the exploration and discovery of neurobiological knowledge with human value systems” (Illes 2006).

The goals of a 2005 conference, Hard Science, Hard Choices, held at the Library of Congress in Washington, DC, could be taken as a definition. Neuroethics promotes:

exploration of new and emerging technologies, exploration of ethical, social, economic, and legal implications of new technologies, implications for public policy, and facilitates scholarly networking, a key element in any emerging field (Fischbach 2006, p. xi).

In the book produced from that conference (Ackerman 2006), a call to action is tempered with a note of caution:

Our growing understanding of how the brain works and how we may manipulate, inquire into, or change it (both to treat its disorders and for nonmedical purposes) must now call forth our best efforts to seek ethical consensus while issues are taking shape – not after they have emerged as moral crises or controversies in the public arena. (Fischbach 2006, p. ix)

In 2002 Roskies described the scope of neuroethics in two divisions: the ethics of neuroscience and the neuroscience of ethics. The former addresses the ethics of the practice of neuroscience and:

the implications of our mechanistic understanding of brain function for society… integrating neuroscientific knowledge with ethical and social thought. The latter, the neuroscience of ethics, borrows from the field of neurophilosophy and examines the neurological foundations of moral cognition. (Roskies 2002, p. 21)

Federico, Lombera, and Illes (Chapter 22, this volume) argue that this two-division view of neuroethics is overly simplistic; the neuroscience of ethics should not be set apart from, or held to a higher standard than, neuroscience studies of other domains such as addiction or neurodegenerative diseases.

Racine asserted that:

the single most important integrative goal underlying neuroethics is a practical one: the need to improve patient care for specific patient populations. Hence, technological advances should always be discussed in the light of their potential contribution to the good of the patients and the public. (Racine 2008, p. 3)

He points out that one of the first times the term neuroethics was used was to urge clinicians to pay more attention to the needs of neurological and psychiatric patients, especially in order to protect them from possibly harmful new interventions (see Cranford 1989; Pontius 1973, 1993).

According to cognitive neuroscientists and neuroethics scholars at the University of Pennsylvania (2009a), attempts to define the scope of the field may be premature. Nonetheless they offer two new categories of neuroethical issues: those emerging from what we can do, and those emerging from what we know:

The “what we can do” problems: ethical problems raised by advances in functional neuroimaging, pharmaceutical enhancement of mood and related functions, cognitive enhancement, brain implants, and brain-machine interfaces.

The “what we know” problems: ethical problems raised by our growing understanding of the neural bases of behavior, concepts of personal responsibility, personality, consciousness, and states of spiritual transcendence.

Neuroethicists will consistently confront the technological imperative: if the technology exists, use it. But they will need to recall the bioethics mantra: it is not what you can do, it is what you should do.

These definitions and field-framing statements also indicate how diverse the issues and questions the field will address are. William Safire’s definition is pithy and quotable. It was offered as an alert to the most crucial and salient concerns in the early 2000s. Some neuroethical questions surely will not fit so neatly within this framework.

The present authors agree with Joseph Fins that not all neuroethical questions will be of the “sound the alarm” variety. He does caution against neuroethics having “the unintended consequence of squelching clinical progress for historically marginalized patients who might be helped by advances in neuroscience” (Fins 2003a; 2005).

Where Gazzaniga suggests a “philosophy of living informed by our understanding of underlying brain mechanisms”…i.e. “a brain-based philosophy of life” (Gazzaniga 2005), Fins finds this expansive stance “worrisome.” He finds advocacy for a brain-based “universal ethics” reminiscent of a theological construct; it could be seen as prescriptive, not unlike Delgado’s wish decades ago to use brain implants to “psychocivilize society” (Fins 2008, p. 38; see Delgado and Anshen 1969; Fins 2003b).

Others offer practical lists of neuroethical questions and concerns that may emerge in diverse arenas. At the 2005 Hard Science, Hard Choices Conference one of the present authors (R.F.) suggested that we “call forth our best efforts to seek ethical consensus while issues are taking shape; we can’t afford to wait.” Yet it is a given that reaching consensus takes time, that new discoveries and issues are a moving target, and that controversies will inevitably erupt in the public arena.

Schiavo as a cautionary tale

While controversies have their unproductive aspects, they often bring needed and urgent focus to important issues. A case in point is the wrenching Terri Schiavo tragedy, and specifically how the neurological meaning of severe brain injury and the persistent vegetative state “became central to the decade’s most convulsive bioethics debate” (Fins 2008) that occupied national attention in 2005 (see Annas 2005; Cassell 2005; Dresser 2005) (Figure 21.1). Despite the terrible emotional and monetary cost to the extended Schiavo family, the country as a whole learned much—lay people, healthcare providers, policy makers, the media, even the President of the USA and the Governor of Florida.

In the end, new neurological standards and methodologies that can help distinguish the persistent vegetative state (PVS) from the minimally conscious state (MCS) emerged “to avoid the diagnostic shortfalls that stem from clinical ignorance or ideological intent” (Fins 2008, p. 19; Hirsch 2005). In addition, brain injury leading to MCS, once considered untreatable, now is seen as having a variable prognosis and may respond to deep brain stimulation (Fins 2008; Schiff et al. 2007).

Heterogeneity of neuroethics today

While neuroscientists, neurologists, psychiatrists, and psychologists will publish about neuro-issues in their professional domains, neuroethicists have an important role to play in synthesizing the range of neuroethical scholarly production. Neurologists are unlikely to read most publications in psychiatric epidemiology, nor will psychiatrists often read a paper in an experimental psychology journal. Neuroethicists are likely to explore a wide variety of professional publications, and contribute to many. They now have opportunities to publish empirical research, reviews, and commentaries in their own professional periodical, the American Journal of Bioethics Neuroscience (AJOB Neuroscience). We also expect that neuroethicists will raise many new questions, prompting experts in diverse disciplines to comment further from their perspectives. Finally, in addition to monitoring brain sciences for possible ethical issues, neuroethicists can work constructively with other experts to benefit other disciplines (see Wolf 2008).


FIG. 21.1 Terri Schiavo’s brain. Left: CT scan of normal brain. Right: Schiavo’s 2002 CT scan provided by Ronald Cranford, showing loss of brain tissue. The black area is liquid; the small white mark in the right image is the thalamic stimulator implanted in her brain. http://en.wikipedia.org/wiki/Terri_Schiavo_case (accessed 22 December 2009).

To give the reader an appreciation of the heterogeneity of neuroethics today, which is not likely to abate, one has only to refer to a recent promotional brochure (fall 2009) for AJOB Neuroscience. The variety and scope of “Recent topics” that appeared in the journal is as impressive as it is exciting (Table 21.1).

Neuroethics as a field is rapidly becoming international (Lombera and Illes 2009). It has garnered significant public and political attention in the US, Japan, various European nations, and elsewhere, where work groups exist to study ethical issues of translating neuroscience technologies into clinical use. The more international neuroethics becomes, the more neuroethical issues may converge within the international scientific community; yet new cross-cultural neuroethical issues may emerge from very different cultures and faiths. Neuroethicists will be needed as valuable worldwide colleagues who together contribute to improved clinical care and scientific advancement beyond what could be achieved by countries working independently.

“Neurologisms”

With the advent of interest in the brain sciences, a new vocabulary of words and concepts has developed, built on the prefix “neuro-” (meaning neuroscience-based or neuroscience-informed). Appended to the names of other fields of study or practice, the prefix has given us terms such as neurotechnology, neuromarketing, neuroengineering, neuroeconomics, and neurolaw. Judy Illes coined the term “neurologisms” to describe these hybrids (Illes 2009).

Table 21.1 “Recent topics,” promotional brochure (Fall 2009) for AJOB Neuroscience


The prefix “neuro-” has become so ubiquitous that some, for example philosopher Roger Scruton, see the intellectual imperialism of “neuro-thugs” claiming chunks of new intellectual territory as though those fields had no history, analytical traditions, or tools of their own (Scruton 2009). Conversely, some find anything “neuro-” sexy and thus contribute to the proliferating literature. As the field of neuroethics evolves, it will be necessary to learn when the use of “neuro” is appropriate and productive, and when it is merely clever and fashionable. Nonetheless, when questions from other fields are approached with the tools of neuroscience, novel opportunities arise that may yield important new insights, justifying the faith many place in science, even if that faith at times is overhyped. Ideally these new neuro-hybrids will indeed become areas of legitimate and rigorous scientific discovery.

A related issue concerns the growing miscellaneousness of neuroethical publication. Susan Wolf cautions that her field, neurolaw, “seems increasingly about everything.” As with health law, “it becomes difficult to see what is distinctive about the field, what core challenges it poses, and how to systematize our thinking about the field to make progress” (Wolf 2008, p. 21). Similarly, burgeoning diversity lacking sufficient structure may be cause for concern if there is to be a recognized field of neuroethics that can develop meaningful content. Yet, valid neuroethical questions will arise in varied and unexpected places, and should be welcomed. Over time, organizing principles will emerge. Neuroethicists will be needed to serve as guides to this ever-growing literature and to give it form and definition.

D. Gareth Jones (2008) points out that there are good reasons to keep neuroethics very connected to its clinical roots. Neuroethical questions, however, do not always have primary clinical origins. Neuroethicists and others have been defining new questions at the intersection of brain sciences and many other fields. To develop optimally, neuroethics will need to respect, understand, and integrate the knowledge of other disciplines with an open mind.

Again, neuroethicists will contribute to the integration that will be needed. They also will be the ones to help determine when a course of action is “neuroethical.”

TAXONOMY OF NEUROETHICAL QUESTIONS

We offer here (see Table 21.2) a modest taxonomy for neuroethics that, for the authors of this chapter, helped create order among the myriad vexing questions with which neuroethicists grapple now and will grapple in the future. Complex and diverse neuroethical issues may migrate across categorical boundaries, and many topics could be examined under more than one category according to the specific question(s) posed. Our illustrative examples reflect our interests and scholarly work, and help us address the question, “Is it neuroethical?”

Below we expand on the categories in Table 21.2 and offer salient questions.

A Technologically-driven questions with wide implications for society relating primarily to bioethics
Brain stimulation

Brain stimulation modalities hold considerable promise in psychiatric treatment (see Pascual-Leone et al., Chapter 25, this volume). They range from long-used mainstream psychiatric therapies such as electroconvulsive therapy (ECT) to sophisticated devices used more widely clinically only in the last decade, such as deep brain stimulation (DBS) and transcranial magnetic stimulation (TMS). These and other modalities vary in how focal and/or invasive they are, and they have shown varying efficacy for different indications. Some are still experimental and/or approved by the Food and Drug Administration only for limited indications. Brain stimulators often are viewed as complementary to brain imaging in the kind of neural information they can provide, and are teaching us much about brain circuitry. As therapies, however, their use raises larger questions about our changing relationship to technologies affecting the brain.

Electroconvulsive therapy for decades has been an effective, even life-saving treatment for patients with severe depression and chronic medication resistance (Figure 21.2). ECT continues to be modified to improve efficacy and safety as well as to reduce distressing aspects of its administration (Prudic 2008; Sackeim et al. 2008). ECT is invasive because it involves global stimulation of the brain (cortical and subcortical) with high frequency alternating current, which must cause a seizure to have any chance of therapeutic efficacy.

• Despite its advantages, frightening images from older media continue to influence public perceptions of ECT’s invasiveness, causing many patients who could benefit to reject ECT (Morrison 2009; Payne and Prudic 2009a,b; Walter et al. 2002).

• There are valid fears concerning post-ECT memory loss (Payne and Prudic 2009a, b). People who justly fear memory loss from ECT should concomitantly fear remaining depressed, as chronic depression exacerbates memory problems.

• ECT is disproportionately prescribed for elderly depressed patients for many reasons: it is safer than antidepressants, as there are minimal cardiovascular side effects; ECT will not contribute to polypharmacy; chronic treatment resistance is common in this population; and, very importantly, ECT is most likely to rapidly improve disabling deep depression and suicidality (Rosenberg et al. 2009; Sherman 2009; Sackeim et al. 2007).

Table 21.2 Taxonomy of neuroethical questions


FIG. 21.2 Administration of electroconvulsive therapy (ECT). a) Right unilateral placement: to generate a seizure with right unilateral treatment, one electrode is placed on the crown of the head, the other on the right temple. Right unilateral treatment is typically associated with reduced memory side effects. b) Bilateral placement: bilateral treatment involves placing electrodes on both temples. This treatment may be associated with more acute memory side effects. Bilateral ECT is indicated for severe mental illnesses including depression with psychosis, manic episodes of bipolar disorder, psychosis related to schizophrenia and catatonia.

Q: How can neuroethicists help the public better understand ECT and mitigate fears, so patients and families can make more informed therapeutic choices, while also understanding that therapies seldom are perfect?

Deep brain stimulation involves implanting an electrode in the brain to stimulate subcortical areas (Figure 21.3). Current indications include treatment of movement disorders such as Parkinson’s disease (PD) (target: subthalamic nucleus of the basal ganglia) and treatment of refractory depression (target: experimental only—subgenual cingulate cortex). The electrodes cannot be explanted easily but can be turned off. Surgery usually is successful, but surgical complications and serious side effects can occur (Carey 2009b).


FIG. 21.3 Implantation of deep brain stimulation (DBS) electrode, to treat Parkinson’s disease (PD). http://commons.wikimedia.org/wiki/File:Parkinson_surgery.jpg (accessed 22 December 2009).

• DBS for PD can lead to significant cognitive and emotional impulsivity, even as it ameliorates the primary movement disorder (Halbig et al. 2009; Hardesty and Sackeim 2007; Witt et al. 2008). Given the complexity and plasticity of brain networks, the safety and efficacy of all approaches to brain stimulation must be evaluated carefully.

Q: Given that we only partially understand the current brain stimulation modalities used to treat neuropsychiatric conditions, is their cost/benefit ratio good enough compared to that of other therapies?

Q: How can neuroethicists promote advancing this promising science while cautioning about the “technological imperative” and the reality that brain stimulating modalities may result in serious complications?

Transcranial magnetic stimulation (Figure 21.4) for depression requires extensive clinician involvement and is very expensive (the cost at one major center is approximately $12,000 for a 6–8 week course of treatment 5 days/week). Apart from participation in clinical trials, it is affordable only to the affluent. Yet if its usefulness and safety hold up, TMS may become a treatment of choice for treatment-resistant patients (i.e. patients for whom multiple drugs and ECT have failed to lessen symptoms).

Q: If insurance companies or Medicare begin to cover TMS, can the expense be justified, compared to much less expensive pharmaceuticals or other therapies for depression?

Q: How will society manage questions of equitable treatment for promising but expensive therapy?


FIG. 21.4 (Also see Plate 8). Transcranial magnetic stimulation (TMS). Schematic image of electromagnetic action. TMS is a safe and non-invasive means of getting electrical energy across the insulating tissues of the head and into the brain. http://intra.ninds.nih.gov/Research.asp?People_ID=196 (accessed 1 January 2010).

B Clinically-driven questions relating primarily to medical ethics
Autism genetics

A deletion of a segment that includes 29 genes on the short arm of chromosome 16 occurs in approximately 1% of the autism population. This structural variant is highly penetrant: the chances are high (between 30% and 50%) that a person with this deletion will fall somewhere on the autism spectrum. Others with the same deletion may exhibit mental retardation or a different developmental disorder.

• A husband over 40 and his 38-year-old wife may choose in vitro fertilization and pre-implantation genetic screening to eliminate embryos with the 16p deletion.

Q: How can neuroethicists, working with a clinical team, assist couples making these extremely difficult choices?

C Applied philosophical, definitional, legal, cross-cultural, and psychosocial questions relating primarily to the meaning of personhood and symptomatology
Revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV)

In revising DSM-IV to produce DSM-V, a particularly contentious issue is the proposal to abandon the diagnosis of Asperger’s syndrome. Advocates for people with Asperger’s see many negatives in their being identified as autistic individuals rather than as a distinct category. Those diagnosed with Asperger’s often have a more favorable societal profile, are more likely to be able to work or function independently, and often possess unusual or creative abilities. They may not wish to be “cured” of these qualities, and most oppose research seeking the genetic risk factors that might lead to genetic screening or elimination of their special status.

Q: In what ways will the restructuring of DSM-IV (1994) into DSM-V (2013) affect diagnosis and treatment of autism spectrum disorders and many other mental health conditions? Will all changes be for the better?

Neuroenhancement

Neuroenhancement has been defined as “a pharmacological attempt to increase cognitive performance in healthy humans” (Normann and Berger 2008).

Q: Is non-medical neuroenhancement ever ethical and possibly appropriate? If yes, at what ages, and for what specific purposes? (Sahakian and Morein-Zamir 2010; Chneiweiss, Chapter 19 this volume)

Q: Legally and philosophically, how authentic are enhanced personality traits, and how legally autonomous, in various contexts, are people using such enhancements (Bublitz and Merkel 2009)?

D Ethically-driven questions relating primarily to protection of humans and animals in research
Human research: disclosure of incidental brain research findings

What should be done when an incidental finding of possible clinical relevance is revealed in a research study involving brain technology? Incidental findings that have not been validated generally are not disclosed; currently there is no consensus for appropriate action in response to medically significant findings (Illes 2008; Illes and Chin 2008; Wolf et al. 2008).

Q: How can research volunteers’ right to know or be reassured be balanced with other issues?

Q: How should neuroethicists contribute to re-framing research protocols and informed consent forms concerning disclosure of incidental findings, and how can they help educate participants?

Animal research: neuroethical care of research animals; creating chimeras in higher primates

Use of animals in research and questions of animal rights and welfare have become high profile concerns in recent years.

• Research into comparative human and animal brain function, consciousness, and physiological function reveals that mammals possess more advanced awareness and sensation than historically recognized.

Q: What are the most neuroethical, affordable standards for housing, care, pain mitigation, and legal protections that research animals are owed by science and society?

• While common in human-to-rodent models, the creation of chimeras (physically mixing cells from two different species) in a human-to-primate model has only recently begun to be explored.

Q: Is it neuroethical to create a chimera in higher primates by implanting human brain cells into their brains (see University of Minnesota, Center for Bioethics document on chimeras)?

E Ethically-driven and future-oriented questions concerning the unity of mind, brain, and body relating primarily to progressive and integrative medicine
Stress and health

Neuroscientists and proponents of complementary and alternative medicine have been documenting how mind, brain, and body are inextricably intertwined with respect to health and illness. This effort has provided new insights into how people respond to stress. Stress can be useful for creativity and even survival, but chronic, excessive stress, especially if present from an early age, can cause systemic over-activation of the hypothalamic/pituitary/adrenal axis (HPA axis) (McEwen 2009) and of key areas of the “social brain” (Urry et al. 2006). These patterns contribute to stress-related physical and psychological illness.

Q: How can neuroethicists contribute to more rapid integration of discoveries about stress reduction and emotional functioning from psychology as well as cognitive, affective and social neuroscience, to diversify primary and neuropsychiatric care for posttraumatic stress disorder (PTSD) and other ills?

Curbing medicalization creep

Our rapidly advancing biomedical science yields a constant outpouring of ever deeper and broader knowledge of psychiatric and neural phenomena, but are our psychiatric concepts overly “medicalized”? Medicalization redefines normal psychological (and other) life occurrences such as grief or normal cognitive aging as predominantly medical in nature. Medicalization has been called “disease-mongering,” and is linked as well to the excessive influence of pharmaceutical and other commercial medical interests (Conrad 2007; Horwitz and Wakefield 2007).

Q: How can neuroethicists help integrate new perspectives on the nature of the brain and behavior to redefine the role of pharmaceuticals in mental health and neurological care, i.e. optimizing their biological benefits while expanding and redefining our understanding of these illnesses and of neural wellness, and reducing the negatives of medicalization?

Neuroethicists, working in diverse settings, can be influential in addressing these and a myriad of other questions generated by all the branches of neuroscience, neurology, and psychiatry. We would now like to concentrate on two major examples that we feel illustrate the roles neuroethicists will need to play, working with other experts.

NEUROETHICISTS WILL PLAY A VITAL ROLE IN RESOLVING COMPLEX ISSUES

We present perspectives on two ethically complex issues: psychosurgery and brain privacy. We find these topics particularly compelling. They underscore why neuroethicists will play a vital role in exploring the ethical, legal, economic, social, and other medical and scientific implications of these advances.

The new psychosurgeries

A recent front-page headline in the New York Times read “Surgery for Mental Ills Offers Both Hope and Risk” (Carey 2009b). The article, one of a series on insights from the latest research, featured several surgical procedures that have been conducted over approximately the last 10 years on more than 500 people. While most procedures were performed as part of experimental studies, they were designed to treat a variety of problems including depression, anxiety, Tourette’s syndrome, and even obesity. “The great promise of neuroscience at the end of the last century was that it would revolutionize the treatment of psychiatric problems. But the first real application of advanced brain science is not novel at all. It is a precise, sophisticated version of an old and controversial approach: psychosurgery, in which doctors operate directly on the brain” (Carey 2009b).

Concerning the high-risk experimental procedures that patients are being put through, Paul Root Wolpe opined: “We have this idea—it’s almost a fetish—that progress is its own justification, that if something is promising, then how can we not rush to relieve suffering?” Wolpe went on to note that not so long ago, doctors considered the frontal lobotomy a major advance. But given the evidence of the thousands of patients left with irreversible brain damage, Wolpe concluded, “that’s why we have to move very cautiously” (Carey 2009b).

Historical background

Neurosurgical procedures often follow in the tradition of Dr. Wilder Penfield (Figure 21.5), who had a passionate desire to unlock the mysteries of the human brain (Lipsman and Bernstein, Chapter 24, this volume). During the 1930s, he operated in a careful and cautious manner, probing the exposed brain tissue of patients with severe epilepsy, which was felt to be incurable. With his patients awake and reporting their sensations, his search for the scarred tissue that caused the seizures also revealed many functions performed by unmapped regions of the brain. Perfecting his “Montreal Procedure,” Penfield is said to have “discovered the source of memory, tapped the reservoir of long forgotten sensations and emotions, and located the storehouse of dreams” (Penfield 1975). These findings revolutionized the study of higher brain function.


FIG. 21.5 Dr. Wilder Graves Penfield (1891–1976), 1934. http://en.wikipedia.org/wiki/Wilder_Penfield (accessed 22 December 2009)

Brain surgery continued to evolve but on a dark trajectory with the development of leucotomy and prefrontal lobotomy to treat severe behavior problems. Portuguese neuropsychiatrist Antonio Egas Moniz, who subsequently won the Nobel Prize in 1949 for his innovative work in neurosurgery, pioneered the procedure that was felt to be beneficial for people with otherwise untreatable psychoses. In this crude psychosurgical procedure, connections in the prefrontal cortex and underlying structures were severed, or the frontal cortical tissue was destroyed, the theory being that this led to the uncoupling of the brain’s emotional center and the seat of intellect (Lerner 2005).

Walter Freeman introduced the lobotomy to the US and became its greatest advocate. As he was a neurologist, he collaborated with neurosurgeon James Watts to refine Moniz’s technique (Figure 21.6). Together they developed a quick method, the so-called “ice-pick” lobotomy, performed for the first time in 1945. At this time, prior to the availability of Thorazine and other psychoactive drugs, lobotomy was used to reduce agitation and aggressive behaviors and to make patients easier to handle. Freeman performed ice-pick lobotomies on anyone referred to him and, during a career that lasted into the mid-1950s, he performed almost 3500 operations (he once lobotomized 25 women in a single day (McManamy 2009)).

The concern that troubles neuroethicists and others who currently grapple with issues raised by psychiatric neurosurgery is that these procedures pose high risks for patients, with only very minimal evidence of success to date. The procedures—cingulotomy, capsulotomy, DBS, and gamma knife surgery (targeted radiation)—are performed at only a limited number of medical centers and generally only as a last resort (Figure 21.7). While they bring relief to some, there remains much we need to learn about how exactly they work, specifically about the brain’s cellular and functional interconnections.


FIG. 21.6 Pre-frontal lobotomy. http://scienceblogs.com/neurophilosophy/2007/07/inventing_the_lobotomy.php (accessed 1 December 2009).

Many recognize the desperation of patients afflicted with disabling conditions that impact almost every aspect of their lives. Neuroethicists can educate the public, work with neurosurgeons to help patients assess the major benefits, risks, adverse events, and side effects, and remind patients that a surgery reported anecdotally to help others supposedly suffering from the same disorder may not be successful for them.

The Holy Grail in psychiatry, broadly speaking, would be to know much more about the true, multifaceted causes of every psychiatric illness, and therefore also to know the good predictors of response to specific interventions. Ideally, treatment could be individually tailored. Many months if not years of treatment failures, frustration, loss of quality of life if not tragedy, and emergence of treatment resistance could thus be averted. However, we are far from there. The latest reports on the new psychosurgeries demonstrate once again how little we know (Carey 2009b). Desperate patients will continue to consent to surgeries that may well be too risky in their current form. Yet it seems we have no alternative but to continue to gather more data as we seek the next advance. How can we reconcile this technological imperative with the bioethical mantra to promote the responsible treatment of patients?

Treatment advancement must be balanced with honest, open, and professionally accepted, even embraced discussions of risk and failure, more publication of negative results, more transparency in the dissemination of results of clinical trials, and recognition of the limitations of current knowledge. Such discussions should also promote more curiosity about what other models of mental ills, and other medical traditions or perspectives, might have to teach us.

Most of our knowledge about possible efficacy of treatments comes from clinical trials. In the case of surgical intervention in the brain, novel study designs are needed (e.g. on/off design) to determine the impact and safety of a given therapy for the individual patient. An ablation to treat obsessive-compulsive disorder (OCD) in some patients, or DBS, as in Helen Mayberg’s pioneering work with severely depressed patients (Lozano et al. 2008), may resolve psychiatric symptomatology, but also may lead to adverse sequelae.


FIG. 21.7 (Also see Plate 9). Psychiatric neurosurgery.

Patients may be driven by unrealistic expectations (Kravitz et al. 1996). Clinician scientists may be overwhelmingly idealistic and dedicated, but may be driven by a zeal to discover and succeed (El-Hai 2005). Companies with huge resources committed to clinical trials may be driven by the profit motive, hoping for a breakthrough drug, device, or procedure. Together these produce a perfect storm of competing interests. Neuroethicists will be needed to help resolve the dilemmas posed by the tension between the principle of respect for the autonomy of the desperate patient, anxious to access innovative therapies, and the principles of beneficence and non-maleficence, which require protecting that patient from harm caused by a high-risk, unproven procedure. Neuroethicists must elaborate strong and persuasive arguments concerning the merits and drawbacks of experimental procedures, so that any participation in clinical trials can be optimized to serve the greatest good for neurological and psychiatric patients. Neuroethicists must remain unbiased, and promote clinical equipoise. They must not be seen as barriers to advancing the frontiers of brain therapy. With appropriate safeguards in place, they even can be advocates for high-risk, potentially high-gain procedures.

Medical tourism is growing more popular as many desperate people consider American medicine at times too conservative and unwilling to take risks when they, the patients, seemingly have almost nothing to lose. In addition, excellent medicine at greatly reduced cost is available in countries such as India and Thailand. The same minimally invasive neurosurgical techniques used in the US, for example gamma-knife irradiation and DBS, are offered in India at a fraction of the US cost (PR Log 2009). On the other hand, if the irrational hope for a cure turns into a disabling disaster of its own, this only adds to the tragedy of such patients. It is often unclear whether anyone can, or should, be held responsible.

The complexity of addressing situations such as those created by the new psychosurgeries, rife as they are with inherently conflicting hopes and motives, and of working to make the public at large, and patient populations and advocacy groups in particular, sufficiently neuroliterate, suggests once more that neuroethicists are needed, now more than ever.

BRAIN PRIVACY

Brief history of brain privacy

Historically, the inner thoughts and personality traits of the vast majority of people were of little consequence in the wider sphere, unless they were accused of crimes or sedition, or served as inspiration for novelists. Yet awareness of personal and familial privacy is very old (Aries and Duby 1987). Recently, we have begun to think about “brain” privacy, whether concerning privacy of perceptions, memories, and thoughts, or privacy of health information pertaining to the status of the brain.

Aspects of personal privacy extended to privacy about health very early. The Hippocratic Oath reminds us that in Western culture, matters of health are not only private, between physician and patient, but considered distasteful to discuss beyond that relationship. “What I may see or hear in the course of the treatment or even outside of the treatment in regard to the life of men, which on no account one must spread abroad, I will keep to myself, holding such things shameful to be spoken about.” This standard clearly has been changing since the later 20th century, for mostly positive reasons, e.g. reduced shame about most health conditions, and increased discussion of health issues in all media. Concerns about privacy of electronic health records remain. Yet if worry about loss of some aspects of privacy has diminished, the ability to breach privacy of the brain and thoughts technologically has increased worries considerably.

Neuromarketing is one new concern. With the rise of mass production and then modern advertising in the later 19th century in the West, for the first time many people became major consumers of goods (Hudson 2008). Only then did the incentive to plumb material desires and buying habits arise. In the 21st century, commercial interests wish to find the means not only to influence but also to identify individuals’ thoughts, choices, attitudes, and preferences, for example by accessing their online behavior, or using brain imaging. Neuroeconomics investigations have been systematizing more of our knowledge of the brain basis of choice and reward (Astolfi et al. 2008; Walter et al. 2005). So-called neuromarketers (e.g. SalesBrain) now assert they know best how to market, based on the nature of the brain. Some say neural activation patterns from imaging can reliably indicate interest in a type or brand of product, or political preference (see Caldwell 2007; Iacoboni et al. 2007; Lee et al. 2007). While neuromarketing is not yet widespread, it is troubling. It could lead to “invasion of brain privacy,” and it involves distortion and potentially inappropriate commercialization of science.

Humans have always tried both to infer the thoughts of normal individuals from nonverbal cues, verbal content, and behavior, and to understand the abnormal thoughts of the psychologically ill. Yet until only recently, one’s inner life remained fundamentally private and largely inaccessible, often even to oneself. The rise of experimental and psychoanalytic psychology in the later 19th century brought about fundamental changes, and led to many historically new approaches to plumbing mental contents. These included everything from psychophysical (e.g. Wundt, Fechner) and reaction time (e.g. Donders) experiments to characterize structure and function of basic perceptual and cognitive processes (see Snodgrass et al. 1985), to structured methods of introspection (James 1890), and the rise of early modern psychiatric and psychoanalytic methods (e.g. Charcot, Freud).

The invention of projective psychological testing (Rorschach, Thematic Apperception Test (TAT), and others) in the early to mid-20th century also gave examiners the possibility of investigating the mind, identifying pathologies, and typing personalities. Leading clinicians refined these techniques, which still are in use (Rapaport, Gill and Shafer 1945/1978). In the popular imagination, projective tests often were seen as a means of spying into minds (Lemov 2009). Clinical psychological assessment became a high-priority activity of the Office of Strategic Services (OSS; forerunner of the CIA) during World War II, used to profile and recruit agents for intelligence work and to accomplish other wartime goals. Harvard psychologist Henry A. Murray, inventor of the TAT, guided many of these efforts, including profiling Adolf Hitler. Students in Murray’s Harvard classes also were psychologically profiled, as an academic requirement. It was only understood later that psychological self-revelation can have its costs. One student in particular was alleged to have been damaged by the experience—the young Theodore Kaczynski, who became the Unabomber (Chase 2000).

The business world now often uses projective tests (Wimmer and Dominick 2006). Clinicians still do, but these tests have been criticized as relying too heavily on examiners’ clinical judgment, and may lack reliability and validity (Cordon 2005). Yet despite the rise of biological psychiatry and brain imaging, the information these clinical tools can provide is strikingly different, and therefore not redundant.

Psychological and occupational assessment tools long used by potential employers (e.g. the Myers-Briggs Type Indicator (MBTI)) likely still can yield more useful information about a job candidate than a brain scan, at a much lower cost, for the vast majority of applicants and positions. We also are far from linking patterns of brain response to most job-relevant traits and behaviors. Canli (2006), however, showed that neuroimages can predict some kinds of behavior better than either self-reported measures or reaction time data. He believes that in the future, brain imaging data, combined with life history and genetic information, may predict aspects of behavior and personality very precisely. But can these predictors become inexpensive enough to use for such non-medical purposes, and will they have true practical value? SAT tests, a widely used and precisely calibrated measure of aspects of intelligence and academic performance, can be mass administered, as can some personality inventories. While SATs give college admissions offices useful data, they still can fail to predict who will be successful in college and life. Too many other factors can influence those outcomes.

We are judged primarily by what we do, not what we may privately think. People learn to restrain most harmful impulses, and can seek change through spiritual, psychotherapeutic, and other efforts. What, therefore, would be the utility of most hypothetical efforts to use brain technologies to spy on people’s inner life? This is likely to be unaffordable, impractical, and unproductive in most cases, except perhaps critical forensic ones where the legal right to examine a suspect or defendant might appropriately be obtained. An identified terrorist is one such example. Attempts to “spy” on mental or behavioral patterns using brain imaging in most contexts would be considered unethical, invasive, undemocratic, and in violation of constitutional rights to privacy as well as protection from unreasonable search and seizure.

The era of documentable intrusion into experienced perceptions if not thoughts, however, may have begun. Striking recent studies using sophisticated computer vision algorithms have shown that it is possible to reconstruct or “see” what a research subject has just seen (but not “thought”). This technology, thus far used to reconstruct visual cortex content of innocuous experimental scenes, could not be used to probe the brain for older visual memory content, or probe for abstract thought or silent language (Kay et al. 2008; Makni et al. 2008; Miyawaki et al. 2008; Naselaris et al. 2009). With advancing technology, however, hypothetically more personal or meaningful mental contents from a person’s past could be reconstructed and accessed.

Neuroethicists have begun to establish ethical guidelines in all these situations, and they will be needed to lead that effort.

Brain privacy in psychiatry

Brain privacy is a significant issue in psychiatry. Erick Cheung (2009) asserts that “neuroscience and technology also have important ethical consequences for the practice of psychiatry that have yet to be fully considered.” We agree. Neuroethicists will be needed to help psychiatry frame many neuroethical questions it may not always consider from its disciplinary point of view. Psychiatrists obtain a wide range of information and data about individuals to optimize treatment for often grave neuropsychiatric or psychological conditions. Electronic medical records gradually are becoming the norm: electronically stored brain scans, EEG (electroencephalogram) data, results of treatment with brain stimulation modalities (e.g. TMS, ECT, DBS), and psychological tests and other measures of brain functioning all will be information potentially available for misuse. Being labeled with a psychiatric diagnosis has always had major implications, including stigma.

Employment security and interpersonal bias are constant concerns for persons diagnosed with a psychiatric condition. Recall the leak concerning the depression and ECT of Senator Thomas Eagleton, running as vice-presidential candidate with George McGovern in 1972. The revelation ended his candidacy (Editorial, TIME Magazine 1972). Neuroethicists will be needed to work with psychiatrists, neurologists, and others to ensure that in seeking needed treatment, patients are not subjected to a less stringent standard of personal privacy than non-patients. If the use of brain scans becomes more common in employment screening, people with a history of psychiatric illness might well show atypical or clinically suggestive scan patterns, even if they are well and able to work.

A brain scan of a person with schizophrenia or severe depression typically will show hypofrontality (abnormally low activation in key areas of frontal cortex) (Figure 21.8). If a person in a research study shows hypofrontality on a scan as an incidental finding, but has not been diagnosed with an illness, what does this mean? What action if any should be taken, concerning the participant, and the data?


FIG. 21.8 (Also see Plate 10). Positron emission tomography (PET) image. Scan of a patient with schizophrenia. (Source: Andreas Meyer-Lindenberg, M.D., Ph.D., NIMH Clinical Brain Disorders Branch). http://www.nih.gov/news/pr/jan2002/nimh-28.htm (accessed 22 December 2009).

Scans could become a tool of discrimination, thwarting potential. Too little is known about the relationship of brain scans taken at a point in time to overt symptomatology and prognoses (Boyce 2009; Cheung 2009).

Consideration of ethical guidelines and possibly even laws to ensure brain privacy in research, clinical healthcare, and other contexts should start now. Twentieth- and 21st-century psychology and psychiatry have brought us much closer to reliably accessing and analyzing aspects of thought and personality, and yet what we still do not understand is vast.

Lie detection

The cost of brain imaging inevitably will fall in the years to come. Functional brain imaging (functional magnetic resonance imaging, fMRI) may become more commonplace and could be used beyond approved medical and research contexts. The visual format of imaging is powerful and can be interpreted too concretely or simplistically even among professionals (“new phrenology”); it can all the more easily be misunderstood by the public or in commercial contexts. Even today, amid much concern (Fischbach and Fischbach 2005; Greely and Illes 2007; see also Murphy and Greely, Chapter 38, this volume), some companies are marketing use of brain scans for lie detection and personnel evaluation (e.g. Cephos, No Lie MRI, Inc.). These companies, aggressively marketing their services to private companies and government agencies, in some cases involve brain scientists as scientific advisors or active partners. But does the science support this use? While the techniques of some companies extrapolate from current science, the present authors believe they can mislead the public and potential clients, exaggerating the valid information that can be extracted from individual as opposed to averaged brain scans. It seems likely that regulations regarding this application of brain imaging are needed; some authors have argued vehemently in favor of such action (Greely and Illes 2007). Who will monitor regulatory compliance by these companies? Neuroethicists are needed to address these issues.

In dystopian futuristic movie scenarios such as The Matrix series, an agent is able to coercively access the brain to manipulate consciousness, know what people are thinking, and probe memories of past events. This fictional scenario is not entirely beyond imagining. It is possible to further imagine a time when a legal instrument such as a “brain search warrant” might exist. Courtroom use of brain scans for lie detection is being hotly debated, as is use of the P300 component of the EEG brain wave to probe aspects of memory (e.g. Fox 2008; Iacono 2008; Meegan 2008).

Physiological measures such as fMRI scans, P300 EEG responses, galvanic skin response, analysis of minute details of facial expression (Ekman 2003, 2009), and others, all now tell us much about emotion and cognition, in specific contexts. All these kinds of evidence, however, still are deemed inadmissible in a court of law, as they cannot reliably document “the truth.”

Concern about brain privacy is warranted

Does neurotechnology encroach dangerously on the privacy of brain processes and the self today? Certainly entities such as health insurance companies, life insurance companies, and prospective employers might wish to use psychiatric or electronic health records to reveal the status of an individual’s brain, or how that brain would function in certain circumstances (Ackerman 2006, p. 29). Illes argues that brain data should have at least as much protection as genetic data, if not more; genetic data is another new area involving major privacy concerns (see the Genetic Information Non-Discrimination Act of 2007; Illes and Racine 2005). It is unclear whether this protection will eventually extend to neuroimaging as well, given how much less is known about the results and interpretation of brain imaging data.

Concern is warranted about the validity of human thought content extrapolated from brain imaging and other technologies, and about the uses to which this content may be put, uses that could be deemed invasive, harmful, and ethically inappropriate.

As neurotechnology advances and research results accumulate, it may seem we can draw conclusions about very specific thoughts or brain states; for the most part, however, this assumption would be wrong. Compelling images of the brain at work have enormous appeal in these uncertain times, yet several major technical problems must be overcome before neuroscience can help resolve the contentious and challenging problem of brain privacy (Fischbach and Fischbach 2005; Glenn 2005; Wolpe et al. 2005).

NEUROETHICISTS AND THEIR CONTRIBUTIONS

Neuroethicists are needed now more than ever: professionals qualified in bioethics, neuroscience, and allied areas. They will invest career time in analyzing neuroethical problems in depth and in educating others about neuroethical questions, integrating diverse expertise from many domains. Clinically, they will help find more neuroethical solutions to research issues and clinical questions relevant to brain function and health. Of vital importance, they will help the public and policy makers resolve neuroethical conundrums, and provide advocacy and safeguards. Neuroethicists also will play a critical role in guiding the media to avoid hype and hyperbole in their efforts to report promising research.

To date, neurologists, neuroscientists, psychiatrists, psychologists (clinical and research), philosophers, historians of science, legal ethicists, bioethicists, sociomedical scientists, epidemiologists, and others have been contributing to the field of neuroethics. To the extent that we now have “professional” neuroethicists, they likely come from these diverse ranks.

Some argue that neuroethics should stay close to its clinical roots (e.g. Jones 2008), or be a branch of bioethics dedicated to protecting our brains and minds (Safire 2002). Some ask for a broader approach, beyond “bioethics for the brain” (Gazzaniga 2005). The clinical perspective is absolutely necessary, but we argue it is not sufficient. While many neuroethical questions will remain close to clinical medicine and neuroscience research, some neuroethical issues—and doubtless many yet to be defined—extend well beyond narrow clinical ones. Many will have ethical, legal, economic, and social implications for every citizen. It is only appropriate that knowledge and experience from other disciplines, as relevant, be weighed along with medical knowledge.

We do not argue for a neuroethics cottage industry within every academic discipline. We must not forget, however, that many recommendations that may improve our healthcare system are derived from beyond medicine—from law, public policy, and public advocacy. The public increasingly needs guidance and needs to be educated about clinical and research issues concerning the brain; in a phrase, the public needs neuroliteracy (Illes et al. 2009). An informed voting public that can understand issues is a necessity in a complex society, one in which many funding and strategic priorities compete, and in which misinformation, often deliberate, is disseminated for political, economic, or other advantage. In addition, while a subset of society—those in medicine, neuroscience, and allied academic and clinical disciplines—develop and extend knowledge of the brain sciences, they could easily leave the non-specialist and lay public far behind. Given the likelihood that many neuroethical questions will be relevant to the wider society, in personal (medical) and general (policy, political) contexts, an ill-informed public will leave us lacking in sufficient, considered, and informed societal participation when we need it.

Educating neuroethicists

With complex issues emerging in the brain sciences, few would doubt that neuroscientists need at least some neuroethics training (Morein-Zamir and Sahakian 2009; Sahakian and Morein-Zamir 2009; Lombera et al. 2010). Non-neuroscientists whose work bears on the brain sciences concomitantly need basic grounding in neuroscience; the idea of a “neuroscience bootcamp” (University of Pennsylvania 2009b) is therefore timely. Academic and professional trainees need grounding in neuroethics where it is relevant to their work, and neuroethics curricula, expanded educational efforts, and outreach are needed. Neuroethicists will be needed to develop these curricula collaboratively. Because understanding the brain is central to so many clinical and research activities, much wider, deeper, more accurate, and more accessible knowledge about the brain and nervous system must become available to learners at every stage. In this task, Web-based distance learning programs can play a large role. Some programs are currently available, such as Neuroethics: Implications of Advances in Neuroscience, the Columbia University Center for Bioethics online course (2008) funded by the Dana Foundation, and the Neuroethics module available on Health Sciences Online (Lombera et al. 2010), a portal where health professionals in training and practice can access free, comprehensive, high-quality, current courses, references, and other learning resources to improve global health.

Given the need to educate neuroethicists, some universities have created Master’s programs in neuroethics. Interdisciplinary 1-year certificate programs in neuroethics could also be created, tailored for well-prepared professionals or academics who are non-specialists. These programs could provide modular course offerings to meet learners’ needs—in bioethics, neurohistory, neuroscience, law, public policy, and so on.

Neuroethicists will be needed to serve in diverse settings and capacities

In scholarly and professional contexts, neuroethics rightly has emerged as an interdisciplinary field. We feel that neuroethics, even while becoming an academic subspecialty, should not become overly self-referential. Questions neuroethicists ask should remain diverse and inherently linked to other relevant disciplinary knowledge. How exactly this will work in practice will unfold as the field continues to evolve.

In addition to scholarly work, we see a need for the neuroethicist to function in clinical settings in neurology, psychiatry, and allied clinical areas, to educate and to discuss difficult decisions with patients and family members, much as genetic counselors do. Families are likely to request the most current information and the chance to explore in depth the risks and benefits of cutting-edge options. It may reassure patients and families to know that they could consult the neuroethicist as well as the brain specialist. In the same way, Howard Gardner and others proposed that parents could consult neuroeducators, specialists in both education and neuroscience (Sheridan et al. 2007).

Neurological and psychiatric consultations at times may require that cross-cultural gaps and gaps of understanding about brain-related procedures and treatments be bridged. Physicians and physician/scientists are likely to have limited time for this; they could work with a consulting neuroethicist on their clinical team. In clinical settings, neuroethicists also could educate other professional personnel such as medical students, residents, nurses, social workers, clergy, and others about clinical neuroethical issues.

To educate members of varied disciplines in neuroethics, multiple options are available or should be created, such as specialty conferences or conference sections in neuroscience and ethics, continuing medical education (CME), lecture series, and tailored educational services for psychiatry, neurology, and other medical divisions.

Neuroethicists should be found not only at the hospital bedside but also squarely in the university classroom. They will be needed to create graduate and undergraduate courses so that students at any level, as they advance in their understanding of the brain sciences, can benefit from the perspectives of neuroethicists. Fortified by their historical knowledge, neuroethicists could forestall risky or unethical academic endeavors.

Some of the most notorious behavioral science experiments of the 20th century, studies that stretched the limits of the ethical conduct of research, took place in university settings. Stanley Milgram’s obedience to authority experiments, begun at Yale in 1963 (Milgram 1974) (Figure 21.9), and Philip Zimbardo’s simulated prison experiment, conducted at Stanford in 1971 (Zimbardo 2007) (Figure 21.10), are now seen as unacceptable. Nonetheless, they yielded findings that revealed great insights into human behavior. These controversial and indeed infamous studies help us today to understand the tragedies of the My Lai massacre as well as the torture and prisoner abuses at Abu Ghraib and Guantanamo Bay.

We strongly urge institutional review boards (IRBs) to routinely include a neuroethicist as a member. With experimental protocols that offer potential promise in the face of anticipated risk, a neuroethicist can be indispensable in assessing the risk-benefit calculus and guiding a robust informed consent process. Faced with protocols involving dual use, controversial brain surgery and imaging procedures, neurogenetic studies, or even issues suggesting threats to brain privacy or manipulation of public attitudes, a well-prepared neuroethicist can play a pivotal role in protecting research subjects. Many terminally ill patients will state that they have nothing to lose and should therefore be allowed to participate in extreme brain or nervous system experiments. It will fall to the neuroethicist to point out the pain and suffering that a protocol may entail, which could seem worse than death itself. Neuroethicists can serve to protect the rights and promote the welfare of those who volunteer to advance the frontiers of clinical neuroscience.

In legal contexts, neuroethicists might offer an addition or an alternative to psychiatrists or neurologists who have no specialty education in ethics yet are asked to testify in legal proceedings. Neuroethicists might be seen as more neutral and as better versed in points of ethics and relevant precedents.

FIG. 21.9 Obedience to authority experiment of Stanley Milgram. The experimenter (E) orders the teacher (T), the subject of the experiment, to give what the latter believes are painful electric shocks to a learner (L), who is actually an actor and confederate. The subject believes that for each wrong answer the learner receives an actual electric shock, though in reality there are no such punishments. Separated from the subject, the confederate operates a tape recorder integrated with the electroshock generator, which plays prerecorded sounds for each shock level.

Public and professional contributions of neuroethicists

Neuroethicists can serve as activists, contributing their knowledge as private and/or public citizens. Their unique knowledge is needed for policy and advocacy discussions in many contexts involving complex and contentious neuroethical issues. Educating the media is imperative to ensure public dissemination of bona fide neuroethical information rather than high-class hype.

Neuroethics expertise also will be needed in government and the private sector at many levels, e.g. federal research study sections at the National Institutes of Health (NIH), Centers for Disease Control and Prevention (CDC), and National Science Foundation (NSF); federal, state, and local policy commissions; in industry at pharmaceutical and device companies; and in private think tanks, foundations, and policy institutes (e.g. The Hastings Center, Dana Foundation, Alzheimer’s Disease Foundation, etc.).

FIG. 21.10 Stanford prison experiment of Philip Zimbardo.

Government and private funders will need guidance concerning neuroethical issues that will arise in the science they are evaluating for support. Having a consulting neuroethicist review proposals in advance can help avert potential ethical pitfalls in the research; neuroethically poor proposals would be flagged for revisions or rejected, and better ones strengthened.

Professional outreach and networking will be needed to obtain wider input, as new issues continue to arise that will need careful analysis from multiple perspectives. Who will mount and coordinate efforts to do so? Neuroethicists whose multidisciplinary education allows them to combine key types of expertise at a high level, and who can collaboratively tackle the epistemological and scientific challenges of today.

CONCLUSION

When we deal with brain science, we are dealing with the organ that makes us unique individuals, that gives us our personality, memories, emotions, dreams, creative abilities, and at times, our sinister selves. (Fischbach 2005, p. xiii)

In the age of neuro-everything, as we enter novel and daring territories, neuroethicists will be needed to formulate many essential questions and to provide much-needed guidance in addressing difficult and ethically challenging problems. Neuroethics can serve as the common ground for stakeholders, and neuroethicists can bring their broad body of knowledge and critical thinking skills to the table. Neuroethicists are needed as part of a large-scale effort to speed up the integration of diverse knowledge, much as translational medicine works to speed up the valid and safe application of bench discoveries of clinical relevance, bench to bedside. We contend that neuroethicists can guide the equivalent process, and help mitigate or resolve ethical problems in research and clinical treatment. Well-prepared neuroethicists, working within a thoroughly interdisciplinary context, are needed now more than ever.

REFERENCES

Ackerman, S.J. (2006). Hard Science, Hard Choices: Facts, Ethics, and Policies Guiding Brain Science Today. New York: Dana Press.

Annas, G.J. (2005). “Culture of life” politics at the bedside—The case of Terri Schiavo. New England Journal of Medicine, 352, 1710–15.

Aries, P. and Duby, G. (1987). A History of Private Life. Cambridge, MA: The Belknap Press of Harvard University Press.

Astolfi, L., De Vico Fallani, F., Cincotti, F., et al. (2008). Neural basis for brain responses to TV commercials: a high-resolution EEG study. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 16, 522–31.

Boyce, A.C. (2009). Neuroimaging in psychiatry: evaluating the ethical consequences for patient care. Bioethics, 23, 349–59.

Bublitz, J.C. and Merkel, R. (2009). Autonomy and authenticity of enhanced personality traits. Bioethics, 23, 360–74.

Caldwell, M. (2007). Careers in behavioral science. Neuromarketing careers. Science, 316, 1060–1.

Canli, T. (2006). When genes and brains unite: ethical implications of genomic neuroimaging. In J. Illes (ed.) Neuroethics: Defining the Issues in Theory, Practice, and Policy, p. 175. New York: Oxford University Press.

Carey, B. (2009a). Surgery for mental ills offers both hope and risk. New York Times, 26 November. Available at: http://www.nytimes.com/2009/11/27/health/research/27brain.html (accessed 16 December 2009).

Carey, B. (2009b). Brain researchers open door to editing memory. New York Times, 5 April. Available at: http://www.nytimes.com/2009/04/06/health/research/06brain.html (accessed 16 December 2009).

Cassell, E.J. (2005). The Schiavo case: A medical perspective. The Hastings Center Report, 35, 20–3.

Center for Bioethics, Columbia University. Neuroethics: Implications of Advances in Neuroscience. Available at: http://ccnmtl.columbia.edu/projects/neuroethics/index.html (accessed 16 December 2009).

Chase, A. (2000). Harvard and the making of the Unabomber. The Atlantic Monthly, 285, 41–65.

Chen, I. (2009). The court will now call its expert witness: the brain. The Stanford Report Online, 19 November. Available at: http://news.stanford.edu/news/2009/november16/greely-neurolaw-issues-111909.html (accessed 16 December 2009)

Cheung, E.H. (2009). A new ethics of psychiatry: neuroethics, neuroscience, and technology. Journal of Psychiatric Practice, 15, 391–401.

Chneiweiss, H. (2011). Does cognitive enhancement fit with the physiology of our cognition? In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp.00–00. Oxford: Oxford University Press.

Conrad, P. (2007). The Medicalization of Society: On the Transformation of Human Conditions into Medical Disorders. Baltimore, MD: Johns Hopkins University Press.

Cordón, L.A. (2005). Popular Psychology: An Encyclopedia, pp. 201–4. Westport, CT: Greenwood Press.

Cranford, R.E. (1989). The neurologist as ethics consultant and as a member of the institutional ethics committee. The neuroethicist. Neurological Clinics, 7, 697–713.

Delgado, J.M. and Anshen, R.N. (1969). Physical Control of the Mind: Toward a Psychocivilized Society. New York: Harper and Row.

Dimidjian, S. and Davis, K.J. (2009). Newer variations of cognitive-behavioral therapy: behavioral activation and mindfulness-based cognitive therapy. Current Psychiatry Reports, 11, 453–8.

Dresser, R. (2005). Schiavo’s legacy: The need for an objective standard. The Hastings Center Report, 35, 20–2.

Editorial (1972). The Campaign: McGovern’s First Crisis: The Eagleton Affair. TIME Magazine, 7 August. Available at: http://www.time.com/time/magazine/article/0,9171,879139,00.html (accessed 10 January 2010).

Ekman, P. (2003). Emotions Revealed: Recognizing Faces and Feelings to Improve Communication and Emotional Life. New York: Times Books.

Ekman, P. (2009). Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage. New York: W.W. Norton & Co.

El-Hai, J. (2005). The Lobotomist: A Maverick Medical Genius and His Tragic Quest to Rid the World of Mental Illness. Hoboken, NJ: John Wiley and Sons, Inc.

Federico, C.A., Lombera, S., and Illes, J. (2011). Intersecting complexities in neuroimaging and neuroethics. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp.00–00. Oxford: Oxford University Press.

Fins, J.J. (2003a). Constructing an ethical stereotaxy for severe brain injury: balancing risks, benefits and access. Nature Reviews Neuroscience, 4, 323–7.

Fins, J.J. (2003b). From psychosurgery to neuromodulation and palliation: history’s lessons for the ethical conduct and regulation of neuropsychiatric research. Neurosurgical Clinics of North America, 2, 303–19, ix-x.

Fins, J.J. (2005). The Orwellian threat to emerging neurodiagnostic technologies. American Journal of Bioethics, 5, 56–8.

Fins, J.J. (2008). Brain injury: The vegetative and minimally conscious states. In M. Crowley (ed.) From Birth to Death and Bench to Clinic: The Hastings Center Bioethics Briefing Book for Journalists, Policymakers, and Campaigns, pp. 15–20. Garrison, NY: The Hastings Center.

Fins, J.J. and Schiff, N.D. (2005). The afterlife of Terri Schiavo (in brief). The Hastings Center Report, 35, 8.

Fischbach, R.L. (2006). Foreword. In S.J. Ackerman (ed.) Hard Science, Hard Choices: Facts, Ethics, and Policies Guiding Brain Science Today, p. xi. New York: Dana Press.

Fischbach, R.L. and Fischbach, G.D. (2005). The brain doesn’t lie. American Journal of Bioethics, 5, 54–5.

Fischbach, R.L. and Fischbach, G.D. (2008). Neuroethicists needed now more than ever. American Journal of Bioethics Neuroscience, 8, 47–8.

Fox, D. (2008). Brain imaging and the Bill of Rights: Memory detection technologies and American criminal justice. American Journal of Bioethics Neuroscience, 8, 34–6.

Gazzaniga, M. (2005). The Ethical Brain. New York: The Dana Press.

Glenn, L.M. (2005). Keeping an open mind: What legal safeguards are needed? American Journal of Bioethics, 5, 60–1.

Greely, H.T. and Illes, J. (2007). Neuroscience-based lie detection: the urgent need for regulation. American Journal of Law and Medicine, 33, 377–431.

Halbig, T.D., Tse, W., Frisina, P.G., et al. (2009). Subthalamic deep brain stimulation and impulse control in Parkinson’s disease. European Journal of Neurology, 16, 493–7.

Hardesty D.E. and Sackeim H.A. (2007). Deep brain stimulation in movement and psychiatric disorders. Biological Psychiatry, 61, 831–5.

Hirsch, J. (2005). Functional neuroimaging during altered states of consciousness: how and what do we measure? Progress in Brain Research, 150, 25–43.

Horwitz, A. and Wakefield, J. (2007). The Loss of Sadness: How Psychiatry has Transformed Normal Sadness into Depressive Disorder. New York: Oxford University Press.

Hudson, R. (2008). Cultural political economy meets global production networks: a productive meeting? Journal of Economic Geography, 8, 421–40.

Iacoboni, M., Freedman, J., Kaplan, J., et al. (2007). This is your brain on politics. New York Times, 11 November. Available at: http://www.nytimes.com/2007/11/11/opinion/11freedman.html?scp=1&sq=This%20is%20your%20brain%20on%20politics&st=cse (accessed 30 December 2009).

Iacono, W.G. (2008). The forensic application of “brain fingerprinting”: Why scientists should encourage the use of P300 memory detection methods. American Journal of Bioethics Neuroscience, 8, 30–2.

Illes, J. (2006). Neuroethics, neurochallenges: A needs-based research agenda. Stanford Center for Biomedical Ethics. Available at: http://neuroethics.stanford.edu/documents/Illes.NeuroethicsSFN2006.pdf (accessed 30 December 2009).

Illes, J. (2008). Brain screening and incidental findings: flocking to folly? Lancet Neurology, 7, 23–4.

Illes, J. (2009). Neurologisms. American Journal of Bioethics, 9, 1.

Illes, J. and Chin, V.N. (2008). Bridging philosophical and practical implications of incidental findings in brain research. Journal of Law and Medical Ethics, 36, 298–304, 212.

Illes, J., and Racine, E. (2005). Imaging or imagining? A neuroethics challenge informed by genetics. American Journal of Bioethics, 5, 5–18.

Illes, J., Moser, M.A., McCormick, J.B., et al. (2010). Neurotalk: improving the communication of neuroscience research. Nature Reviews Neuroscience, 11, 61–9.

James, W. (1890). The Principles of Psychology. Available at http://psychclassics.yorku.ca/James/Principles/index.htm. (accessed 6 January, 2010).

Jones, D.G. (2008). Neuroethics: adrift from a clinical base. American Journal of Bioethics, 8, 49–50.

Jones, J.H. (1981). Bad Blood: The Tuskegee Syphilis Experiment. New York: Free Press.

Kay, K.N., Naselaris, T., Prenger, R.J., and Gallant, J.L. (2008). Identifying natural images from human brain activity. Nature, 452, 352–5.

Kravitz, R.L., Callahan, E.J., Paterniti, D., et al. (1996). Prevalence and sources of patients’ unmet expectations for care. Annals of Internal Medicine, 125, 730–7.

Lee, N., Broderick, A.J., and Chamberlain, L. (2007). What is “neuromarketing”? A discussion and agenda for future research. International Journal of Psychophysiology, 63, 199–204.

Lemov, R. (2009). Towards a data base of dreams: assembling an archive of elusive materials, c. 1947–61. History Workshop Journal, 67, 44–68.

Lerner, B.H. (2005). Last-ditch medical therapy—revisiting lobotomy. New England Journal of Medicine, 353, 119–21.

Lipsman, N. and Bernstein, M. (2011). Ethical issues in functional neurosurgery: Emerging applications and controversies. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp.00–00. Oxford: Oxford University Press.

Lisanby, S.H., Husain, M.M., Rosenquist, P.B., et al. (2009). Daily left prefrontal repetitive transcranial magnetic stimulation in the acute treatment of major depression: clinical predictors of outcome in a multisite, randomized controlled clinical trial. Neuropsychopharmacology, 34, 522–34.

Lombera, S. and Illes, J. (2009). The international dimensions of neuroethics. Developing World Bioethics, 9, 57–64.

Lombera, S. and Illes, J. Health Sciences Online Neuroethics Resources and References. Available at: http://neuroethicscanada.ca/National_Core_for_Neuroethics/Initiatives_files/NeuroethicsReferences%26Resources.pdf (accessed 14 January 2010).

Lombera, S., Fine, A., Grunau, R.E., et al. (2010). Ethics in neuroscience graduate training programs: views and models from Canada. Mind, Brain, and Education, 4, 20–7.

Lozano, A.M., Mayberg, H.S., Giacobbe, P., Hamani, C., Craddock, R.C., and Kennedy, S.H. (2008). Subcallosal cingulate gyrus deep brain stimulation for treatment-resistant depression. Biological Psychiatry, 64, 461–7.

Makni, S., Idier, J., Vincent, T., Thirion, B., Dehaene-Lambertz, G., and Ciuciu, P. (2008). A fully Bayesian approach to the parcel-based detection-estimation of brain activity in fMRI. Neuroimage, 41, 941–69.

McEwen, B.S. (2009). The brain is the central organ of stress and adaptation. NeuroImage, 47, 911–13.

McManamy, J. (2009). Father of the lobotomy. Thinking of giving someone a piece of your mind? Stay clear of Walter Freeman. McMan’s Depression and Bipolar Web. Available at: http://www.mcmanweb.com/lobotomy.html (updated 1 November 2009; accessed 11 January 2010).

Meegan, D.V. (2008). Neuroimaging techniques for memory detection: Scientific, ethical, and legal issues. American Journal of Bioethics Neuroscience, 8, 9–20.

Milgram, S. (1974). Obedience to Authority: An Experimental View. New York: Harper Collins.

Miyawaki, Y., Uchida, H., Yamashita, O., et al. (2008). Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron, 60, 915–29.

Morein-Zamir, S. and Sahakian, B.J. (2010). Neuroethics and public engagement training needed for neuroscientists. Trends in Cognitive Sciences, 16 November [Epub ahead of print].

Morein-Zamir, S. and Sahakian, B.J. (2011). Pharmaceutical cognitive enhancement. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 00–00. Oxford: Oxford University Press.

Morrison, L. (2009). ECT: shocked beyond belief. Australasian Psychiatry, 17, 164–7.

Murphy, E.R. and Greely, H.T. (2011). What will be the limits of neuroscience-based mindreading in the law? In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 00–00. Oxford: Oxford University Press.

Naselaris, T., Prenger, R.J., Kay K.N., Oliver, M., and Gallant, J.L. (2009). Bayesian reconstruction of natural images from human brain activity. Neuron, 63, 902–15.

National Human Genome Research Institute. Genetic Information Nondiscrimination Act of 2007. Available at: http://www.genome.gov/24519851 (accessed 16 December 2009).

Normann, C. and Berger, M. (2008). Neuroenhancement: status quo and perspectives. European Archives of Psychiatry and Clinical Neuroscience, 258, 110–14.

Pascual-Leone, A., Fregni, F., Steven, M.S., and Forrow, L. (2011). Noninvasive brain stimulation as a therapeutic and investigative tool: An ethical appraisal. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp.00–00. Oxford: Oxford University Press.

Payne, N.A., and Prudic, J. (2009a). Electroconvulsive therapy: part I: A perspective on the evolution and current practice of ECT. Journal of Psychiatric Practice, 15, 346–68.

Payne, N.A., and Prudic, J. (2009b). Electroconvulsive therapy: part II: A biopsychosocial perspective. Journal of Psychiatric Practice, 15, 369–90.

Penfield, W. (1975). The Mystery of the Mind: A Critical Study of Consciousness and the Human Brain. Princeton, NJ: Princeton University Press.

Pieri, E., and Levitt, M. (2008). Risky individuals and the politics of genetic research into aggressiveness and violence. Bioethics, 22, 509–18.

Pontius, A.A. (1973). Neuro-ethics of “walking” in the newborn. Perceptual and Motor Skills, 37, 235–45.

Pontius, A.A. (1993). Neuroethics vs. neurophysiologically and neuropsychologically uninformed influences in child-rearing, education, emerging hunter-gatherers, and artificial intelligence models of the brain. Psychological Reports, 72, 451–8.

PR Log (Press Release). (2009). Psychosurgery in India, now at a lower price. 18 August. Available at: http://www.prlog.org/10312883-psychosurgery-in-india-now-at-lower-price.html (accessed 14 January 2010).

Prudic, J. (2008). Strategies to minimize cognitive side effects with ECT: aspects of ECT technique. The Journal of ECT, 24, 46–51.

Racine, E. (2008). Comment on “Does it make sense to speak of neuroethics?” European Molecular Biology Organization Reports, 9, 2–4.

Rosenberg, O., Shoenfeld, N., Kotler, M., and Dannon, P.N. (2009). Mood disorders in elderly population: neurostimulative treatment possibilities. Recent Patent CNS Drug Discovery, 4, 149–59.

Roskies, A. (2002). Neuroethics for the new millennium. Neuron, 35, 21–3.

Rossi, S., Hallett, M., Rossini, P.M., Pascual-Leone, A., and the Safety of TMS Consensus Group (2009). Safety, ethical considerations, and application guidelines for the use of transcranial magnetic stimulation in clinical practice and research. Clinical Neurophysiology, 120, 2008–39.

Sackeim, H.A., Prudic, J., Fuller, R., et al. (2007). The cognitive effects of electroconvulsive therapy in community settings. Neuropsychopharmacology, 32, 244–54.

Sackeim, H.A., Prudic, J., Nobler, M.S., et al. (2008). Effects of pulse width and electrode placement on the efficacy and cognitive effects of electroconvulsive therapy. Brain Stimulation, 1, 71–83.

Safire, W. (2002). Neuroethics: Mapping the Field. Dana Foundation. Available at: http://www.dana.org/news/cerebrum/detail.aspx?id=2872 (accessed 30 December 2009).

Sahakian, B.J. and Morein-Zamir, S. (2009). Neuroscientists need neuroethics teaching. Science, 325, 147.

Schiff N.D., Giacino J.T., Kalmar K., et al. (2007). Behavioral improvements with thalamic stimulation after severe traumatic brain injury. Nature, 448, 600–3.

Scruton, R. (2009). Statement made at open session of Technology, Neuroscience and the Nature of Being: Toward a Common Morality Conference. United Nations.

Sheridan, K., Zinchenko, E., and Gardner, H. (2007). Neuroethics in education. In J. Illes (ed.) Neuroethics: Defining the Issues in Theory, Practice and Policy, pp. 266–75. Oxford: Oxford University Press.

Sherman, F.T. (2009). Life-saving treatment for depression in elderly. Always think of electroconvulsive therapy (ECT). Geriatrics, 64, 8, 12.

Snodgrass, J.G., Levy-Berger, G., and Haydon, M. (1985). Human experimental psychology. New York: Oxford University Press.

Urry, H.L., van Reekum, C.M., Johnstone, T., et al. (2006). Amygdala and ventromedial prefrontal cortex are inversely coupled during regulation of negative affect and predict the diurnal pattern of cortisol secretion among older adults. Journal of Neuroscience, 26, 4415–25.

University of Pennsylvania, Center for Neuroscience & Society. Overview of Neuroethics. Available at: http://neuroethics.upenn.edu/index.php/penn-neuroethics-briefing/over-view-of-neuroethics (accessed 16 December 2009)

University of Pennsylvania, Center for Neuroscience & Society. Neuroscience Bootcamp. Available at: http://www.neuroethics.upenn.edu/index.php/events/neuroscience-bootcamp (accessed 16 December 2009).

University of Minnesota, Center for Bioethics. Chimeras. http://www.ahc.umn.edu/img/assets/25857/chimeras.pdf (accessed 14 January 2010).

Walter, G., Fisher, K., and Harte, A. (2002). ECT in poetry. Journal of ECT, 18, 47–53.

Walter, H., Abler, B., Ciaramidaro, A., and Erk, S. (2005). Motivating forces of human actions. Neuroimaging reward and social interaction. Brain Research Bulletin, 67, 368–81.

Wilson, D. (2009). Poor children likelier to get antipsychotics. New York Times, 11 December. Available at: http://www.nytimes.com/2009/12/12/health/12medicaid.html?_r=2&tntemail0=y&emc=tnt&pagewanted=print (accessed 17 December 2009).

Wimmer, R.D. and Dominick, J.R. (2006). Mass media research: an introduction. Belmont, CA: Thomson Wadsworth.

Witt, K., Daniels, C., Reiff, J., et al. (2008). Neuropsychological and psychiatric changes after deep brain stimulation for Parkinson’s disease: a randomized, multicentre study. Lancet Neurology, 7, 605–14.

Wolf, S.W. (2008). Neurolaw: The big question. American Journal of Bioethics Neuroscience, 8, 21–2.

Wolf, S.W., Lawrenz, F.P., Nelson, C.A., et al. (2008). Managing incidental findings in human subjects research: analysis and recommendations. Journal of Law and Medical Ethics, 36, 219–48, 211.

Wolpe, P.R., Foster, K.R., Langleben, D.G. (2005). Emerging neurotechnologies for lie-detection: Promises and perils. American Journal of Bioethics, 5, 39–49.

Zimbardo, P. (2007). The Lucifer Effect: Understanding How Good People Turn Evil. New York: Random House.