CHAPTER 40
NEW DIRECTIONS IN NEUROSCIENCE POLICY

TENEILLE R. BROWN AND JENNIFER B. MCCORMICK

TO some, scientific discovery is an engine that fuels the intellect, career, and even, when friends are gracious, social conversations. But even if one is not a person whose life is steered by science, there probably still have been occasions to marvel at the profound depth of what it means to be human: our complex emotions, memories, and thoughts; our sense of ourselves; our decision-making capacity; and our consciousness. How is this all possible? To be sure, the brain is at the center of this inquiry. And as the interdisciplinary field of neuroscience focuses on the brain, it is in a unique position to answer questions that are exceptionally interesting, not just to those passionate about the science, but also to humanity generally. The problem, however, is that human interest in the brain is so great that social institutions risk prematurely demanding too much of neuroscience.

This chapter is intended to accomplish two goals: first, to describe some of the diverse areas where neuroscience findings have overlapped with policy and the law; and second, to provide concrete questions that policymakers (by which we mean judges and lawyers, interest groups, individual lobbyists, and legislators) should answer before relying on neuroscience research. The central thesis of this chapter is that neuroscience findings, particularly those that relate to complex human behavior, must be used with care and caution. Until they are thoroughly vetted through the scientific process, neuroscience findings must be interpreted narrowly and in context, or they risk being abused for political gain.

THE DEPTH OF NEUROETHICS

Before we branch out to the many domains where neuroscience and policy meet, let us first begin at the roots. Neuroethics examines the intersection between ethics and neuroscience, and as such it rests on a very rich intellectual base that for centuries has inquired into the ethical issues associated with mind and behavior (Illes and Bird 2006). Because philosophers have contemplated free will, identity, and moral decision making for thousands of years, one scholar cleverly described neuroethics as, in some ways, old wine in a new bottle (Moreno 2003). Even if the ethics of neuroscience does not present completely new questions, the ability to localize activity in the brain and chart relative differences in brain activity and metabolism places neuroscience in a special position among the life sciences. Localization might generate exciting, testable hypotheses that allow scientists to probe basic presumptions about the neuropsychology and biology of human decision making.

Thus, not only is there an ethics of neuroscience research, but as researchers elucidate how the human brain makes decisions, neuroscience findings might enlighten the views generally held on agency, consciousness and social thought. This reciprocal nature of the relationship between ethics and neuroscience may yield what the prominent cognitive neuroscientist Michael Gazzaniga has referred to as a brain-based philosophy of life (Gazzaniga 2005). Even if neuroscience does not inform the entire life philosophy of individuals and social institutions, at the very least neuroscience findings will both inform and question the foundations of legal and policy decisions.

In order to capitalize on neuroscience findings in an intelligent and ethical manner, debate and deliberation about the science and its uses must respect the stage of discovery from which the data came. For example, neuroscience findings could hit news desks and judicial chambers at varying points on the trajectory from preliminary finding to fully replicated population data. This is an obvious point generally, but it is one that is not fully appreciated by policymakers, and perhaps not even by scientists. One need only review headlines such as “This is your brain on politics” (Iacoboni et al. 2007) from The New York Times to see how preliminary research findings can be co-opted for political purposes.

Part of the explanation for the co-opting of neuroscience by policymakers may be simply that the two disciplines are beholden to different social norms for knowledge production. Scientists stake truth claims on empirical data generated to support or refute working research hypotheses. In science, there is no inherent value in protecting current belief based on data collected in the past; in fact, entire careers may be made by successfully challenging the orthodox view. Policymakers, on the other hand, must either dismiss the ideology, relevance, or facts of the status quo, or yield to it. In this way, precedent is a jealous mistress to policymakers, one that demands explicit genuflection. If policymakers decide to deviate from the path of those before them, they must explain this departure by relying on one of a handful of tools at their disposal.

One such tool is to distinguish a seemingly applicable law by arguing that the relevant details of a statute or case are not completely comparable and therefore the law should not apply in this instance. Policymakers can turn to non-binding authorities from other jurisdictions as persuasive authority, or they can argue that a law is unfair or does not maximize social utility. Only after availing themselves of these traditional swords do policymakers tend to rely on empirical data; it is often an instrument of last resort. Viewed this way, it becomes less surprising that policymakers may treat science as yet another argumentative tool in their arsenal, and once something becomes a tool, it ceases to exist in its own right and has value only by being leveraged. In this way, policymakers are the principals and science is their agent. But to scientists, the pursuit of knowledge is the principal, and they are merely the agents for discovery.

Scientists also have their tools. These consist largely of methods, materials, and equipment: statistical analyses, stem cells, chemicals, mice, DNA sequencers, and mass spectrometers. In its purest, ideal form, scientific discovery is internally agnostic to politics. We emphasize that this is the ideal; because scientists are social beings who ask the research questions and design the experiments, the ideal is arguably unattainable, a point of well-reasoned debate. Still, it is the cultural norm of respected scientists to attempt to remove personal bias from their methodology and data analysis. Precisely because of this pursuit of unbiased findings, science possesses the allure of resting objectively above politics. That view, however, would be naïve; even if the methodology of science ought to be agnostic, there are still limits on how the research question and findings can be applied, and on whether the applications are fair or useful.

Just as it might not be obvious to scientists that lawyers have a hierarchy of sources they rely upon in making arguments, it may not be obvious to lawyers that science comes in many shapes and stages. Some data stem from pilot projects with very small sample sizes, some involve unusual experimental conditions that do not mirror the real world, some confirm or challenge previous findings, some are done in rats and squid, some extend a finding to a new population, some cannot be published because they produce a negative result, and some emerge from longitudinal studies in large populations that may take decades to be completed. Unlike in the law, there is no set hierarchy for which type of research data are best. Of course it is critical to make that initial, potentially paradigm-changing finding; however, it is also quite important to see if a result is generalizable to different groups and is capable of replication across different laboratories. Independent validation allows for extrapolation of a finding to create targeted drugs, devices, or other treatment programs.

Even though the sciences do not privilege one type of finding over another, policymakers should still understand where the data fit on the methodological spectrum, as the social utility will depend greatly on how robust the findings are and how many times they have been replicated. Policymakers should also unpack what might appear to be banal details about sample size and base rates, statistical significance and external validity. These are concepts in which most law and policy professionals have virtually no training, even though they present considerable roadblocks to the effective social use of science. Absent some clarity on the methodological limitations, neuroscience data may be exploited by opportunistic players for political gain.
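To make the base-rate concern concrete, consider a minimal worked example. The sketch below is purely illustrative: the sensitivity, specificity, and prevalence figures are assumptions chosen for the sake of arithmetic, not numbers drawn from any study cited in this chapter. It shows how a detector that looks impressive in the laboratory can still mislabel most of the people it flags when the behavior being screened for is rare.

```python
# Hypothetical illustration of the base-rate problem in screening.
# All numbers below are assumptions chosen for illustration only.

sensitivity = 0.90   # assumed: P(test flags "lie" | person is lying)
specificity = 0.90   # assumed: P(test reads "truth" | person is truthful)
prevalence = 0.05    # assumed: only 5% of screened statements are lies

true_positive_rate = prevalence * sensitivity                 # liars correctly flagged
false_positive_rate = (1 - prevalence) * (1 - specificity)    # truthful people wrongly flagged

# Bayes' rule: probability that a flagged person is actually lying.
positive_predictive_value = true_positive_rate / (true_positive_rate + false_positive_rate)

print(f"P(actually lying | flagged as lying) = {positive_predictive_value:.2f}")
# With these assumed numbers the answer is about 0.32: roughly two out of
# every three people the test flags as liars are in fact telling the truth.
```

The point of the exercise is not the particular numbers but the structure of the inference: even a seemingly accurate test can be dominated by false positives when the base rate is low, which is exactly the kind of detail that tends to disappear when findings move from the laboratory into policy arguments.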

This co-opting of neuroscience is quite likely, given the history of its older sibling, behavioral genetics. Behavioral genetics arguments were also thought to carry an extraordinary amount of pedagogical and social weight, as the public craved genetic support for pre-existing theories about social behavior (Nelkin and Lindee 1995). Thus, individuals, and the public generally, readily consume stories about genes for adultery, genes for homosexuality, or genes for committing murder. Even though there might be considerable overlap in public appetite, thoughtful scholars have noted that the ethical issues raised by neuroscience are not identical to those raised by genetics and genomics, as “our genes are causally far removed from our behaviours” while our brains are not (Illes and Racine 2005; Roskies 2007, p. S54).

THE MAGNETISM OF NEUROSCIENCE FINDINGS

Neuroscience has the potential to energize arguments in ways that are not necessarily backed by data (Brown and Murphy 2010). This is true for many reasons. One reason is neuroessentialism, which captures the idea that we are our brains and our brains are us (Doucet 2007). Another is that humans identify and relate to their brains in ways that they do not with their prostates or adrenal glands. Humans also have a special affiliation with the brain because of what has been dubbed “neurorealism,” a term used to explain how coverage of brain imaging studies “can make a phenomenon uncritically real, objective or effective in the eyes of the public” (Racine et al. 2005, p. 3). In other words, seeing a picture of the brain somehow explains a finding and places it on a pedestal of truth, an authority that cannot be obtained by referencing the results of a genetic assay.

Perhaps because of this phenomenon, many individuals may believe they have the ability to interpret a functional neuroimage, even though the image is a statistical construct of the metabolic activity of the brain and not, as many people misunderstand, a snapshot of the brain at work. Despite being poorly understood, the novel methods behind neuroimaging and other relatively new technologies such as transcranial magnetic stimulation (which uses short magnetic pulses applied over the scalp to boost or disrupt local brain activity) have begun to revolutionize our understanding of the human brain and its function. Of course, imaging methods and stimulation devices stand on the shoulders of psychological giants; without solid psychological theories of behavior, cognition, and emotion, the meaning of “activated” neuronal networks would not be clear. “Activated” appears in quotes because, of course, the brain is always active, and brain images merely take advantage of relative differences in metabolism in specific regions of interest.

By contextualizing the findings and building on the interdisciplinary strengths in physics, statistics, biochemistry, neurology, and psychology, neuroscience findings and their importance are coming into sharper focus. Some of these discoveries are allowing neuroscientists to determine the underlying mechanisms for diseases that affect everything from movement to mood. In addition to the many existing targeted treatments, there are thousands more on the horizon, providing hope for a huge segment of the population afflicted with neurological or mental illness. These same discoveries, however, are also finding uses outside of the medical realm—some of these good, some perhaps questionable, and others simply ugly.

It is with this theme of contextualizing neuroscience findings and placing them on the spectrum of social utility that we situate our discussion. Next, we suggest that neuroscientists ought to take more responsibility for how their apolitical research is ultimately used, while at the same time legal professionals and policymakers ought to resist the temptation to ask more of the science than it can currently provide. We then suggest that a funding model for neuroethics research be developed that would enable the creation of a body of empirical data that can be used to inform thoughtful, fair, and efficient public policies. What we propose includes some of the features of the National Human Genome Research Institute’s Ethical, Legal and Social Implications (ELSI) program, but differs in some important respects. For example, funding responsibility for this type of research ought to belong to multiple governmental agencies and private foundations. Finally, we recommend a non-exhaustive list of questions that should be asked before going down the road of neuroscience-informed law or policy.

NEUROSCIENCE: THE GOOD, THE GRAY, AND THE UGLY

Given the multiple ways and contexts in which results of neuroscience can be applied, we propose that there are generally three domains into which these applications might be categorized: the Good, the Gray (or questionable), and the Ugly. The Good applications are those in which there is a fairly clear consensus that, on balance, the particular use of the finding delivers sufficient social and individual benefit when weighed against the corresponding costs. The goals outlined at the outset are being met, and there is general acceptance of the relative value of the outcomes. The Gray, on the other hand, represents those applications of neuroscience that might be questionable or controversial because of differences in religious values, cultural norms, or political motivations. These applications might also be questionable because, although the underlying finding was born of sound scientific principles, the particular use might exploit or over-extend the science. Finally, the Ugly are those applications that are politically and socially unacceptable to most if not all, or that involve some degree of government coercion or privacy violation that is not generally thought to be justified by the countervailing social benefit.

Virtually every finding in neuroscience has the prospect of being either socially destructive or socially beneficial, which is to say that the same research data could be used in ways that might be thought of as good, gray, or ugly. To some extent this echoes the US science policy discussions around dual use technologies, defense versus commercial use, and export controls (Neal et al. 2009, pp. 188–9, 321). The gray area is the most fertile ground for our discussion of neuroethics, as it presents the most challenging area for policymakers. If a scientific finding or technological development is generally considered good or bad, politicians know where to spend their political capital. But in the gray area, precisely because it is a gray area, stakeholders can highlight only the elements of the data that support their needs or can cover up interpretive limitations to let the seemingly objective science do the talking.

The likely outcome, then, is that when there is disagreement on values or social norms, science can be leaned on to arbitrate what is at its core a non-scientific problem.

Science in general and neuroscience in particular have often been summoned to answer moral dilemmas such as “which criminals are deserving of execution?” (Aronson 2007) and “how should teachers allocate resources between boy and girl students?” (Gurian and Stevens 2007). While these questions may be informed by neuroscientific findings related to cognitive development or effective learning strategies, data alone cannot provide authoritative answers. This is because most social policies are not driven by wholly empirical factors; they are also guided by our sense of justice, equality, and autonomy, among other values. Because policymakers can decide to ignore data that suggest a normatively unattractive outcome or that are merely inefficient to implement, legal and policy decisions typically rely on science only when it is convenient to do so, or as a method of last resort. There are many examples of areas where the scientific data are solid, and yet for policy reasons there is a decision to ignore what the data are saying (Loftus and Hoffman 1989; Faigman and Saks 2008). Perhaps it is simpler to defer to the wisdom of science to divine the answers, for example in determining punishment for wrongdoers and educational configurations for children, sidestepping the hard psychosocial questions about equality and liberty. This is not to say that findings from neuroscience research cannot be properly applied to a particular social question. Rather, the challenge is to know how to use the science, to recognize when the science is premature, and not to over-extend the data and blur the lines between the subjective and the objective. With neuroscience, the appearance of reductionist objectivity allows for an even greater disconnect between what the data can suggest and what they are being promoted to say (Roskies 2008). With that, we will begin our analysis of some key neuroscience findings and the ways in which they have been, or may be, applied in good, gray, and ugly ways.

Using preliminary data in laboratory settings and extrapolating to inferences about individuals: problems with external and internal validity in lie detection studies

Currently, jurors, parole boards, and civil commitment committees are using rough behavioral estimates to determine whether the person on the witness stand is telling the truth. These determinations can have significant consequences and can establish whether the person walks out to freedom or languishes in a prison cell for years. Typically, the parties perform this critical function in a very unscientific way—by looking to see whether the witness appears nervous, is making eye contact, is shifting in his seat, or if his story appears too rehearsed. Clearly, a cunning liar can easily manipulate the system.

In another setting, voters attempt to determine whether or not to trust a political candidate, based on her demeanor, behavior, and general comfort while speaking. Just as charismatic liars may be acquitted so too may disingenuous officials be elected into office, as the public believes them to be telling the truth in their promises that fall short of reality. These are just two examples within the realm of public policy for which it would be useful, to say the least, to have more reliable and valid forms of detecting individual lies.

A small number of researchers to date have had moderate success predicting who is concealing information during brain imaging studies. These findings are compromised by a significant problem, however: in many of the studies the subjects were not in fact choosing to lie. Instead, they were instructed by a research confederate to lie when asked by the investigator, so the studies measured compliance rather than deception. A recent study sought to measure the act of lying, rather than compliance with an instruction to lie. This study was conducted by Joshua Greene and Joseph Paxton using functional magnetic resonance imaging (fMRI). fMRI is a type of magnetic resonance imaging that measures the flow of oxygenated blood in the brain. Blood flow and metabolism serve as an indirect proxy for neuronal activity: in regions of the brain receiving a large amount of oxygenated blood, it is assumed that there is increased activity. However, the premise behind fMRI is currently being investigated further and challenged, as a surplus of oxygenated blood may reflect a number of phenomena other than increased neuronal firing. In any event, in the Greene study, subjects were not told that the study was about detecting deception. Rather, they were told that the phenomenon under investigation was clairvoyance: whether the subjects could predict the flip of a coin as heads or tails more than 50% of the time. Using this method, the researchers probed whether individuals presented with an opportunity for dishonest gain (correctly guessing the 50/50 heads-or-tails flip of a coin) exhibited greater cognitive control or, instead, little conflict or temptation to lie (Greene and Paxton 2009). They found that individuals who behaved honestly exhibited no increased activation in areas associated with control when choosing to behave honestly. By comparison, individuals who behaved dishonestly showed relative bilateral increases in the dorsolateral prefrontal cortex (DLPFC), a region associated with control, both when choosing to lie about their prediction and when they refrained from lying.
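The behavioural logic of this design can be illustrated with a short sketch. The code below is a hypothetical reconstruction for illustration only, not the authors’ analysis pipeline; the trial counts and threshold are assumptions. It shows how improbably high self-reported “prediction” accuracy can statistically flag likely dishonest gain at the level of a subject or group, even though no single trial proves a lie.

```python
# Hypothetical sketch of the coin-flip "opportunity for dishonest gain" logic.
# A subject who self-reports correct predictions far above the 50% chance rate
# is statistically likely to be over-claiming wins. Numbers are assumptions.

from scipy.stats import binomtest

n_trials = 100          # assumed number of prediction trials with opportunity to cheat
reported_correct = 72   # assumed number of self-reported "correct" predictions

# One-sided binomial test: how likely is it that an honest reporter (true
# accuracy 0.5) would report at least this many correct predictions by chance?
result = binomtest(reported_correct, n_trials, p=0.5, alternative="greater")

print(f"Reported accuracy: {reported_correct / n_trials:.2f}")
print(f"Probability of doing this well by chance: {result.pvalue:.2e}")
# A vanishingly small probability flags probable dishonesty overall, but it
# cannot identify which individual trials, if any, involved an actual lie.
```

Note that this inference works only at the aggregate level, which is one reason the group-level findings in this literature do not translate directly into courtroom claims about a single statement by a single person.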

In spite of the fascinating work that has been done, using functional imaging of the brain to detect deception in individuals in the courtroom is not appropriate at this time. There is currently insufficient evidence demonstrating fMRI to be valid or reliable for this purpose. Even if relative increases in certain brain regions are evident, this does not mean that an individual is lying. As the mapping of structure to function is not a one-to-one relationship, the relative increase in blood flow may be suggestive of thoughts of disgust, frustration, or anger, or indicate the individual is performing a mental calculation such as how much or little to disclose. Further, reduced relative activation in a particular region might result from habituation and expertise rather than deficiency. Essentially, the data from functional imaging could suggest a host of possibilities, only one of which may be lying (Brown and Murphy 2010). Moreover, some of the studies being used to support the use of brain-based deception detection in the courtroom were designed in a manner that has no direct correspondence to how the technology would be used for forensic purposes.

The popular press tends to overlook the limitations of the methods and to shine more light on the possibility of being able to predict whether someone is telling the truth (Sip et al. 2008). As a result, statements such as this are promulgated as fact: “areas of [the] brain associated with emotion, conflict, and cognitive control – the amygdala, rostral cingulate, caudate, and thalamus – were “hot” when I was lying but “cold” when I was telling the truth” (Silberman 2006). First, the amygdala appears to be involved in many experiences, including hunger, lust, and anger: identifying it as “hot” is almost absurd. The colors are chosen arbitrarily after the fact, and there is nothing to suggest that more activation means that one is lying, unless we know more about the psychological processes underlying the art of deception. Perhaps greater activation in one area is related to conflict management or impulse control, as it implicates the networks involved in these processes. Or, the relative increase in activation in one area might not capture the inhibition of neurons. Neuronal inhibition is as important to signal transmission in brain physiology as is neuronal activation, and the potential inability of fMRI to detect this aspect of brain function underscores further that merely saying an area is “hot” or “cold” is patently naïve. Further, the default networks of the brain are always active, and unless the entire brain is surveyed during every decision, the filters that are used to measure brain activation will sometimes focus the lens on one area at the expense of viewing another brain region. A more accurate description might be that the areas under investigation appeared to recruit more oxygenated blood than they normally would when the individual is telling the truth. Moreover, to tie this increased activation specifically to the act of lying is to use the imaging data to make a correlation that is not truly validated by the data. The same blurring occurs with the cingulate cortex and thalamus, regions of the brain that are likewise recruited in many daily tasks: “Look here, when you’re telling the truth, this area is asleep. But when you’re trying to deceive, the signals are loud and clear” (Silberman 2006). This remark is misleading, as there is never a time when regions of the brain are asleep; the brain is thankfully always active. But these statements by a respected researcher, in trying to convey lie detection results to a journalist, illustrate the deep misunderstanding about what neuroimaging findings can meaningfully demonstrate.

Two companies have been providing lie detection services on the commercial market, at various points claiming that their reports would be admissible in court regardless of the purpose for which they were being offered. The law, however, does not allow admissibility determinations to be made in a vacuum; the evidence being introduced must be relevant and probative for the purpose for which it is being used. Still, these companies have sought to market their products for legal applications. The methods are currently marginally better at confirming truth in compliant adults, so it appears that the chief audience is couples who are suspicious of adultery. This may be even more socially destructive than legal uses. Private individuals have no expert counsel to cross-examine the findings and challenge the weak methodology when confronted with “hard science” evidence that a spouse is not telling the truth about adulterous behavior. As such, these data have the potential to irresponsibly and permanently break up marriages and families, and shatter lives.

This entire discussion might seem like science fiction, but in fact in June of 2008 brain-based lie detection (or brain-based memory detection) was used in a criminal case in Pune, India, where 24-year-old Aditi Sharma was convicted of murdering her ex-fiancé, Udit Bharati. The state’s circumstantial evidence against Sharma was weak. This may be why the opinion relied heavily on a form of brain-based lie detection called the brain electrical oscillations signature (BEOS) test. The BEOS test relies on a brain response called the P300 wave, detected using an electroencephalograph (EEG). The P300 wave is an aggregate recording from many neurons, measured using electrodes applied to the scalp (Picton 1992). The P300 wave is useful for measuring cognitive decisions because the subject cannot consciously control whether it is triggered. While its neural substrates are still being determined, the P300 wave is often elicited in response to the subject making a novel, or odd, observation. This is the finding that has been leveraged for lie detection or “guilty knowledge” tests. The idea is that the presence or absence of a P300 wave suggests whether the subject has previously seen a particular item or heard a particular phrase.
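For readers unfamiliar with event-related potentials, the generic logic of a P300 “guilty knowledge” comparison can be sketched in a few lines of code. The example below uses synthetic data and assumed parameters; it is not the proprietary BEOS procedure and is offered only to show how stimulus-locked EEG epochs are averaged and compared between probe and irrelevant items.

```python
# Hypothetical sketch of a generic P300 comparison using synthetic EEG data.
# This is NOT the BEOS method; it only illustrates epoch averaging and a
# probe-versus-irrelevant amplitude comparison. All parameters are assumed.

import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_samples = 40, 300           # assumed: 40 trials/condition, 300 samples/epoch
t = np.linspace(-0.2, 1.0, n_samples)   # time in seconds relative to stimulus onset

def synthetic_epochs(p300_amplitude_uv):
    """Noisy single-trial epochs with an assumed P300-like bump near 300 ms."""
    p300 = p300_amplitude_uv * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    noise = rng.normal(0.0, 5.0, size=(n_epochs, n_samples))  # assumed 5 uV noise
    return p300 + noise

probe = synthetic_epochs(6.0)        # assumed response to a "recognized" probe item
irrelevant = synthetic_epochs(1.0)   # assumed response to an unfamiliar item

# Averaging across epochs suppresses noise and reveals the event-related potential.
probe_erp = probe.mean(axis=0)
irrelevant_erp = irrelevant.mean(axis=0)

window = (t > 0.25) & (t < 0.45)     # window around the expected P300 peak
print(f"Probe mean amplitude in window:      {probe_erp[window].mean():.2f} uV")
print(f"Irrelevant mean amplitude in window: {irrelevant_erp[window].mean():.2f} uV")
# A larger probe response suggests the item is familiar to the subject, but the
# comparison cannot say whether that familiarity comes from personal experience
# or merely from having heard or read about the item.
```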

The problem with using the P300 wave as evidence of whether someone committed a crime is that the signal cannot presently differentiate between experiential knowledge (i.e. knowledge from previous personal memory or activity) and content knowledge (i.e. familiarity based on exposure or non-personal experience). An example of the latter would be a defendant who is familiar with elements of a particular crime because she read about them in the newspaper or heard about them on the radio. In that case, the EEG results might not register the information as novel, but not because the subject personally engaged in the act being described to her. In Sharma’s case, forensic researchers placed 32 EEG electrodes on her head and read aloud their version of events, speaking in the first person (“I bought arsenic”; “I met Udit at McDonald’s”), along with seemingly neutral statements (like “the sky is blue”) to ostensibly distinguish between her personal memories and general facts. The state forensic scientist boldly asserted that the BEOS data were proof that Ms. Sharma committed the murder rather than merely having heard about it. The trial judge agreed. Based in large part on what appear to be findings from unverified and unvalidated BEOS technology, Sharma and her husband, Pravin Khandelwal, were sentenced to life in prison. Subsequently, both have been granted bail by the Bombay High Court. Pravin’s sentence was suspended on the grounds that there was no real evidence to tie him to the case as a conspirator. Sharma was released on the grounds that the evidence of her possessing the arsenic was not compelling, and indeed “the possibility of plantation [of arsenic] cannot be ruled out.” But as for the underlying methodological flaws in the BEOS technique, little is publicly known.

The ugly uses of brain-based lie detection are not unheard of in the US, either. In a juvenile sex-abuse and child protection case in San Diego, the guardian wanted to admit a report based on the results of functional neuroimaging performed by a San Diego-based company registered as No Lie MRI. This is the first known case in which a party attempted to introduce brain-based lie detection in the US, although the party ultimately agreed to withdraw the request to have the evidence heard. Presumably the scan and resulting report were going to show that the man accused of abuse was telling the truth when he denied sexually abusing a child (Washburn 2009).

Brain-based lie detection that draws inferences from research findings with limited external validity is an ugly use of the neuroscience, especially when the consequences could be imprisonment or freedom, life or death, or custody of a child or no custody. In other situations, however, there may be room for debate. Should employers be allowed to conduct brain-based lie detection tests prior to hiring someone, for example, for positions in which huge sums of money are handled? Would it be more appropriate to rely on brain-based lie detection as a crude and preliminary test of whether someone is lying to his parole board? Similarly, an elite preparatory school may find it beneficial to screen all applicants not just through test scores and letters of recommendation, but also with lie detection tests, with the goal of identifying students with a propensity to cheat and plagiarize. While some may believe such screening to be a clear social misuse of science and technology, others may not, suggesting that the ugliness of using brain-based lie detection in some social domains may not be clear-cut. Here, we suggest that the use of neuroscience falls into the gray area of our proposed spectrum of social utility.

Researchers are beginning to replicate these data and test them in settings that more closely resemble the real world (Kozel et al. 2009). As the external validity of the lie detection studies improves, the methodology moves one step closer to having a potentially valid social use (Kozel et al. 2004). Once the technology is deemed sufficiently reliable and valid for social uses, we must then engage in a normative discussion of whether each particular use is ethical, moral, and cost-justified.

The ethics of using neuroscience to prevent certain groups from being executed

In 2002, the Supreme Court of the US decided Atkins v. Virginia, which held that executions of mentally retarded criminals were “cruel and unusual punishments” prohibited by the Eighth Amendment of the US Constitution (Atkins v. Virginia 2002). Later, in Roper v. Simmons, the Court extended this reasoning to children, holding that the execution of individuals who were under 18 years of age at the time of their crimes is prohibited by the Eighth and Fourteenth Amendments (Roper v. Simmons 2005). The American Psychological Association (APA) submitted an amicus brief to the Court arguing, based on the neuroscience research of Jay Giedd and others (Johnson et al. 2009), that adolescents do not have the full capacity to control their impulses, as the prefrontal cortex does not fully develop until approximately 25 years of age. While the majority opinion did not specifically refer to the APA’s amicus brief, the brief might have helped trigger the reversal of prior case law. The dissent, however, did reference the APA brief. Justice Antonin Scalia argued that neuroscience evidence was used by the APA in a previous case to argue the opposite: namely, that a “rich body of research” showed that juveniles were mature enough to decide whether or not to obtain an abortion without parental involvement (Roper v. Simmons 2005, 618). This manner of reasoning highlights how various types of neuroscience data presented as evidence may be confounded, leading even sharp legal minds to inadvertently mistake the mental processes under investigation (Steinberg et al. 2009, p. 585). It is entirely possible that juveniles might be incapable of fully appreciating the consequences of their actions in moments of intense, and often unanticipated, emotion, while also having brains that have developed enough to make decisions about long-term consequences that have less to do with impulse.

The Eighth Amendment’s prohibition on cruel and unusual punishment is meant to respond to evolving standards of decency. As such, one might expect that new findings from neuroscience could be extended from Atkins to eliminate the death penalty in cases where the defendant suffers from a mental illness such as psychopathy or schizophrenia. That is, if neuroscience findings could provide a biomarker or biological basis for certain types of impulsive behavior, then arguments such as those made by the APA in Roper might be extended to other populations that demonstrate similar deficits in cognitive and emotional control. However, such neuroscience evidence could be a double-edged sword, the direction of the cut depending on the sympathy a particular population evokes. If the argument is that both psychopaths and schizophrenics may be similar to children and the mentally retarded in their difficulty controlling their impulses, then the evolving neuroscience research could instead be treated as an aggravating factor, leading to greater punishment and civil commitment, rather than less (Snead 2007).

Retrofitting neuroscience findings for education policy

Several labs have discovered sex differences in the human brain (Shaywitz 1995; Gur 1999). Some suggest that women have better language aptitudes than men, and men have greater ability to build systems (Baron-Cohen 2003). Many of these findings have been replicated and are taken as a given in the research community. Some of the findings are more fringe. Even so, there are respected researchers who have concluded that the brains of girls and boys develop along different time courses, and they have, on average, relative differences between them in function. Even though there might be solid population data on the differences between girls’ and boys’ brains and their development, the neuroscience of sex differences does not direct us as to how we ought to engage with the sexes and how we ought to choreograph classroom and other learning settings. It is therefore not a question of the strength of the science that makes this use ugly, but rather the way that the science is abused or misused to make policy arguments that may be socially destructive.

Social institutions rejoiced in the findings that, on average, the brains of females differ from the brains of males in some ways. Some journalists published op-eds that many, both women and men, would find offensive, asserting that women are the dumber sex and that this “is amply supported by neurological and standardized-testing evidence” (Allen 2008). It is quite tempting to use neuroscience findings in this manner, bending and twisting what the data actually say to make socially antiquated arguments (Weil 2008). To date there has not been any peer-reviewed study demonstrating a reliable measure of intelligence, defined broadly, that places men above women. Even so, inaccurate applications of neuroscience data find their way into the public domain to be absorbed and re-applied in ways that can be socially harmful to many and might seem to support antiquated social biases.

Some researchers, but mostly business people, have taken such findings on sex differences to argue for sweeping changes in the way we treat women and men, or boys and girls (Norfleet 2007). Specifically, two men, Leonard Sax and Michael Gurian, are using these findings to advocate for sex-segregated education in public schools (Gurian et al. 2001). Michael Gurian is a corporate consultant and a novelist, and Leonard Sax is a psychologist. These men rely completely on the scientific findings of other labs, as neither conducts his own research. They advocate for disparate treatment of boys and girls by teachers. Notably, a junior high student in Louisiana filed a motion for an injunction to prevent her school from adopting the sex-segregated teaching policy that local officials planned to implement. The student’s argument was that the sex segregation violated Title IX and the Equal Protection Clause of the Fourteenth Amendment. From her legal arguments, we learn of a real example in which policymakers in one district planned to use Sax’s viewpoint from his book Why Gender Matters to structure their sex-segregated curriculum. Below are some of the suggestions from Why Gender Matters, described in the court filing, that the Louisiana school planned to adopt:1

• Girls have more sensitive hearing than boys. Thus, teachers should not raise their voices at girls and must maintain quiet classrooms, as girls are easily distracted by noises. Conversely, teachers should yell at boys, because of their lack of hearing sensitivity.

• Because of biological differences in the brain, boys need to practice pursuing and killing prey, while girls need to practice taking care of babies. As a result, boys should be permitted to roughhouse during recess and to play contact sports, to learn the rules of aggression. Such play is more dangerous for girls, because girls do not know how to manage aggression.

• Having girls take off their shoes in class is a good way to keep stress from impairing girls’ performance.

• Girls need real-world applications to understand math, while boys understand and enjoy math theory. Girls understand number theory better when they can count flower petals or segments of artichokes, for instance, to make the theory concrete.

• Literature teachers should not ask boys about characters’ emotions, and should only focus on what the characters actually did. But teachers should focus on characters’ emotions in teaching literature to girls.

And from the teacher’s guide that comes with Michael Gurian’s book, Boys and Girls Learn Differently!, we learn that “[a]dolescent males receive surges of the hormone testosterone five to seven times a day; this can increase spatial skills, such as higher math. Increased estrogen during the menstrual cycle increases female performance in all skills, including spatials, so an adolescent girl may perform well on any test, including math, a few days per month.” Gurian also argues that teachers should give boys Nerf baseball bats so that they can release tension during class. The training sessions that Gurian conducts for teachers are teeming with brain scans and the window-dressings of neuroscience data. On his institute’s website, Gurian claims that “we in business tend to prefer science to art and anecdote, and many of popular culture’s suggestions regarding women and men have felt more like artful opinion than fact-based, empirical knowledge. Fortunately, now, things have changed. PET scans and MRIs of men and women’s brains are useful training tools. These are powerful and compelling, as well as easy to look at” (Lapidus and Martin 2008). In what is now perhaps a predictable story, entrepreneurs are leaning heavily on what appears to be incredibly objective neuroscience data to do exactly what they critique: reinforcing popular culture’s suggestions about the way we humans behave.

Sex segregation based on the current neuroscience data is an ugly use of science because it sidesteps an important question: how our public education system ought to be teaching our children. Even if differences in learning style could be demonstrated, does that argue for carving up the group so that students cannot learn strategies from each other? Policymakers have to start with articulating the goals of any education system and work backward from there. Is education meant to accommodate or to challenge? Should systems attempt to reach out to every child, or use crude proxies to reach most? By adopting sex-segregation policies, are policymakers acknowledging that they do not have the resources for teachers to tailor their approach to each child?

There are good data available showing that students learn differently based on a whole host of idiosyncratic factors, including the way they were raised, how often they read, how auditory they are, and how long they can sit still. Many of these variables do not sort neatly along sex lines. Sorting by sex based solely on findings from brain research appears to be an incredibly blunt tool for tailoring educational strategies. Such a policy takes into account only existing biological or physiological phenomena, which may or may not be broadly generalizable to every individual, and it certainly ignores environmental and social contexts.

There is potential for responsible and ethical applications of neuroscience research in education. In fact, in June 2009, the Society for Neuroscience sponsored a summit on this very topic and issued a report in which the participants outlined existing problematic uses of neuroscience in education, the need to move beyond them, and how to move forward (Neuroscience Research in Education Summit 2009). The broad goal of the summit, and of the initiatives likely to arise from it, is to determine how neuroscience research can inform educational strategies and to ensure its appropriate use in teaching paradigms.

Using demonstrated findings for use in a new population

Devices and drugs that have been demonstrated to be clinically effective in one population, and indeed quite beneficial in alleviating symptoms and allowing some normality in daily activities, are also being used in unverified, and thus potentially harmful, clinical contexts or for completely non-clinical (i.e. recreational) purposes. In some cases this is a good use of neuroscience, and in some cases it is gray. We will start with the development of deep brain stimulators for Parkinson’s patients and the desire by some to use them experimentally in people who are clinically depressed. We will then discuss a different type of transition in use, from clinical to recreational or self-improvement purposes.

Deep brain stimulators for Parkinson’s disease

Treatments for Parkinson’s disease have changed quite a bit over the last several decades, with targeted ablation surgeries largely replaced in the 1960s by therapeutic use of L-dopa, a drug that is converted to dopamine in the brain and thereby helps replenish dopamine levels. While patients initially respond quite well to L-dopa, eventually many stop responding and develop other movement complications that can be worse than those caused by Parkinson’s itself (Kleiner-Fishman et al. 2006). Because of these common complications, neurologists have returned to surgical therapies. One of the more recent treatments is deep brain stimulation, or DBS, which involves placing a medical device (called a “brain pacemaker”) in the brain (Laitinen et al. 1992). Electrodes are implanted in targeted regions of the brain (the most common target for Parkinson’s being the subthalamic nuclei) and deliver electrical impulses to modulate activity in these areas. An advantage of DBS over L-dopa is that rising and falling drug levels can lead to motor fluctuations, while the DBS pacemaker controls symptoms continuously. Several peer-reviewed studies have now found that DBS appears to be safe and effective in reducing Parkinson’s symptoms (Rodriguez-Oroz et al. 2005). However, before these studies were done in the late 1990s, clinicians were already experimenting with DBS (Kumar et al. 1998). Whether or not this was appropriate depends on the answers to a few questions. What was the likelihood and magnitude of harm to the patient? In how many patients had safety and efficacy been demonstrated? Were those subjects similar to the patient of the clinician considering DBS as a treatment? What were the patient’s other options? Had the safe and effective alternatives, like L-dopa, stopped working? How much does it cost?2 So long as the parkinsonian patient was competent and understood the risks and cost of DBS, the use of this device to treat Parkinson’s would likely be thought of as a “good” use of a neuroscience finding.

Given the success of DBS in treating parkinsonian patients, work is being done to determine whether there is a clinical application for individuals suffering from depression. Researchers have observed that DBS of the white matter tracts near the subgenual cingulate gyrus is associated with “a striking and sustained remission” of depression in four out of six individuals who received DBS as a treatment (Mayberg 2005). Given the highly experimental nature of the intervention and the small sample size, what should clinically depressed patients need to demonstrate before they are appropriate candidates for DBS? Use of DBS in clinically depressed patients has yet to become standard of care, and therefore a clinician’s decision to use it, without ample population data and thorough clinical testing, should be informed by the risks, reliability, and validity of DBS, along with the efficacy of the alternatives. Sometimes clinicians do not have all of these data, and thus their decision to use DBS to treat a patient with clinical depression might be questionable, or fall within the gray zone of our paradigm.

What is known is that the risks of DBS are significant, including infection in the brain, stroke, memory loss, personality and behavioral change, and even exaggerated depression. Given the breadth of medical options, it would appear that the clinical depression would have to be quite severe, resistant to talk therapy and to the many available medical interventions, for a patient to contemplate undergoing experimental DBS for depression (Glannon 2008; Wolpe et al. 2008). Unlike advanced-stage Parkinson’s, clinical depression for some is not a long-term condition and can sometimes be treated successfully, with symptoms subsiding without further medication. Even so, clinical depression can be incredibly debilitating. The clinician/researcher would need to be very careful to ensure that the depressed individual fully appreciates the risks of DBS and the relative benefit that it might or might not afford him.

The use of therapeutics for recreational cognitive augmentation

One “good” use of neuroscience would be when research findings lead to the development of a targeted delivery drug that operates on specific faulty mechanisms, completely correcting or alleviating debilitating symptoms. There are abundant examples of this with a variety of antidepressants for the treatment of depression and a variety of antipsychotics for the treatment of schizophrenia. While not taking care of all symptoms, these classes of drugs have certainly improved the quality of life and productivity for millions of individuals. Another example of “good” uses of neuroscience would be the development of certain drugs prescribed for fatigue or severe sleepiness. Modafinil is one such drug, which has been demonstrated to be safe and effective for the treatment of narcolepsy and excessive daytime sleepiness associated with sleep apnea or shift-work (Rammohan et al. 2002). Being alert and able to stay awake after a long night shift promotes public safety, health, and productivity. If the night-shift population were never able to function with a clear head we might have many more road accidents as truckers fall asleep at the wheel and more mistakes made by pilots, flight crews, and air controllers unable to stay awake through overseas flights. Or we might have surgeons with sleep apnea who were drowsy in the middle of a critical emergency heart surgery. Given that today’s society is dependent on a successfully operational 24/7 culture (which we recognize is a point of debate itself), these types of failures could have negative consequences for many. Until quite recently, modafinil was thought to have relatively few side effects or risks in this population (Broughton 1997). Given these factors, the use of modafinil in the approved population is probably a positive application of science.

An “ugly” use of modafinil may occur when a parent, with high expectations, gives her otherwise healthy kindergarten child modafinil (or methylphenidate or dextroamphetamine, two drugs prescribed for attention deficit disorder) without the child’s knowledge, so that the child will gain a competitive edge in school. The reasons this is an ugly use are that the child cannot voice her desires, the motivations are not clearly beneficent, and the long-term effects on the healthy pediatric brain are not well known. Another potentially ugly use would be one in which everyone in a particular city unknowingly, or without consent, received modafinil through the water supply for the sole purpose of enabling a fully operational 24/7 workforce. While these uses may seem ethical to some outlier policymakers, given the balance of social and individual risk and benefit, many other individuals would view such uses as socially unacceptable, largely because of the lack of full knowledge by, or consent of, the targeted individual or population. However, if the general public is comfortable restricting the autonomous choices of a particular group (such as sexually violent predators or criminal inmates generally), then the coerced use of the approved drug might appear less “ugly” and more subjective, and perhaps would fall into our gray zone of questionable use.

Any drug can have unintended long-term effects on the brain, but such effects are even more likely when the drug is being used by people who do not exhibit the symptoms for which the drug has been tested and approved. This is often thought of as an off-label use, and it would include the two examples of involuntary dosing of healthy children and entire cities discussed above. Off-label use of drugs or devices refers to a provider prescribing something for a purpose for which it has not been approved by the Food and Drug Administration (FDA). This is allowed because the FDA does not interfere with a physician’s independent practice of medicine and treatment choices; however, the drug or device company cannot market its product for an off-label use. There are examples in which drugs used for off-label purposes have been clinically tested for safety and effectiveness and professional organizations have recommended the practice: beta blockers for congestive heart failure and baby aspirin as prophylaxis against cardiovascular disease in certain sub-populations (Stafford 2008; Healy 2009). There are debates as to whether it is ethical to insist that individuals wait for the long, expensive, and arduous FDA approval process before they can access a drug that might provide some benefit. It is not our place here to engage in that discussion. Instead we want to point to a type of off-label use of drugs in which the purpose is primarily to augment normal function or behavior, not to remedy a health condition.

One particularly common example of this is the use of modafinil in “normals” who seek not to treat a sleepiness disorder but to enhance their cognitive performance. Some leaders in the field of neuroethics find this practice to be ethically acceptable, so long as it is done responsibly in consenting adults and concerns about unfair access are addressed (Greely 2008). Framed this way, the off-label use of this drug presents no new ethical hurdles, as many adults take drugs prescribed by their doctor for an off-label use. Even so, others worry about whether cognitive enhancement might unsettle what it means to be an authentic version of oneself, whether individuals ought to strive for perfection (Satel 2004), whether widespread off-label use of modafinil redefines socially acceptable behavioral and performance standards, and whether these off-label uses are fair, safe, or socially destructive. Either way, new data suggest that modafinil does significantly enhance performance on various memory, cognition, and motor tasks in healthy normals (Turner et al. 2003).

While it might be tempting for a healthy college student to take modafinil to achieve a perfect score on a test, there are some risks, however difficult to measure. First, very little is known about how these drugs affect the brain that is neither disordered nor diseased. A few studies have been done, and a recent one in rats suggested that, despite what was previously thought, modafinil might affect the dopaminergic system and become a potential drug of abuse (Jeffrey 2009; Volkow et al. 2009). Second, once the drug becomes widely used on college campuses and students trade their prescribed medication, two potentially damaging events occur: one individual does not receive the therapeutic she requires, and a second individual takes a substance with unknown effects on his brain chemistry and physiology. Similar concerns arise with methylphenidate and dextroamphetamine, because of their potential to temporarily improve cognitive function and provide a competitive edge in a very competitive environment. Third, the source of the drug obtained may be questionable; recognizing a potential specialized niche, Internet entrepreneurs can sell the drugs on the black market. If and how the drugs have been altered by these entrepreneurs is a legitimate concern. A black market likely exists for other FDA-approved drugs that can be used recreationally, such as diazepam, sildenafil, or acetaminophen with hydrocodone, creating the same concern we articulate for off-label use of modafinil. And indeed, these examples ought not, and cannot, be overlooked. Even so, we suggest that because a sense of competitiveness and a desire to remain on par with peers largely drive the misuse of substances like modafinil and methylphenidate, the population susceptible to questionable sources of drugs, and thus the associated risk, expands to include a group who might not otherwise take illegal or off-label drugs. Equality also seems to be more of a concern for drugs that enhance cognition: if individuals do not wish to put their health at risk by taking an off-label drug (from any source), are they placing themselves at an academic disadvantage? Is this a fair choice for adults, or one that is socially destructive?

Colleagues have presented cogent arguments for human enhancement using genetic technologies on the premise that there is equal access to the technologies (Caplan et al. 1999). The same could be said of pharmaceuticals that improve cognition. However, such universal equal access is an idealist’s dream, given the culture of competitiveness, the growing divide between the haves and have-nots, and the current inability to adequately meet basic human needs for all people. Arguments on the other side challenge this by referencing the status quo of unfairness in achievement (i.e. how is off-label use of modafinil any different from expensive SAT preparation courses?). But is it a defensible ethical argument to point to other types of inequity? If policymakers did not hold to the tenet of mitigating discrimination and inequity, each new form of discrimination might be embraced on the grounds that we could never eradicate implicit and deep-seated biases. The larger argument for off-label use of modafinil comes from asking whether the health and social risks are outweighed by the social and individual benefits (Farah 2005).

In a similar vein, the military is, and historically has been, very interested in neuroscience findings that might affect or improve cognitive function (Moreno 2001). Currently the Department of Defense and members of the intelligence community are interested in understanding the brain and potential biologics that would allow soldiers to remain more alert and functional even after extremely long periods without sleep. They are likewise interested in understanding how to increase soldiers’ resilience to pressures to divulge high-level security information and how to entice potential enemies to disclose information with minimal physical and psychological distress.

While such work may be beneficial in the context of the safety of a nation’s military personnel and the defense of the country’s citizens as a whole, this legitimization of enhancement could open the way for questionable uses outside the military context. Quite often, a technology developed for military purposes is found to have applications beyond defense and makes its way into the civilian population; hence the phrase “dual use” in the context of defense-sponsored research (Neal et al. 2009).

For example, there is currently much debate in the physician education community about the “right” number of work hours for medical residents. The long-held tradition has been that residents work shifts extending over a 24-hour period, usually with very little, if any, sleep. It has long been part of the training, and one “test” of becoming a physician. There have been recent efforts to reduce shift hours (ACGME 2002; Ulmer et al. 2008), but anecdotal evidence suggests that many residents are resistant to the change for various reasons (pride, fines imposed on departments, duty to their patients, etc.). These residents are a likely population of off-label users of any alertness-enhancing and cognition-enhancing drugs that the military might develop and use for national safety and security purposes. We would argue that whether such application is appropriate in either the physician-training or the military domain is questionable. While in the context of physician training it is important to acknowledge the complexity of the situation and examine it closely, we would suggest that focusing attention on policy issues such as the cost of medical education, fiscal structures that enable adequate hospital staffing, and culture change within the physician profession might also be in order.

Even if it is difficult to draw meaningful philosophical distinctions between drug-based cognitive enhancers and other forms of enhancement, it is important to note that drug-based enhancements do encourage the medicalization of normal behavior, in addition to encouraging yet another form of inequality in access to health resources. Medicalization is worrisome because it makes it more likely that the importance of behavioral and social interventions will be overlooked or hugely undervalued in favor of the quick fix offered by a pill.

But not all off-label use is sinister. Sometimes clinicians prescribe a treatment so experimental that it is really better thought of as an “n of 1” study, where their patient is the only subject and the data may or may not ever be published or recorded. Using research findings or off-label uses to inform clinical decision making is fairly common. Some clinicians might be irresponsible in their prescribing habits, but many are partnering with their patients to problem-solve in creative ways when options are running out and desperation sets in. While clinicians might need guidance as to which questions to ask in determining which experimental therapies to try, they likely have some rough sense of comparative efficacy and appropriate care. A recent commentary in the Archives of Internal Medicine presents an ethical and professional framework to guide physicians (and patients) through a practice that is inevitable (Largent et al. 2009).

SHARED RESPONSIBILITY FOR IMPLEMENTING ETHICAL NEUROSCIENCE POLICIES

The examples discussed in the previous section are meant to emphasize the intersection of neuroscience research with several social domains. Hopefully the previous section also pointed out the balancing act that is required: weighing the good against the bad and exploring the questionable uses. A societal goal for knowledge generated from scientific research is to facilitate, perhaps even maximize, its positive uses while simultaneously minimizing, and ideally avoiding, negative outcomes. Of course, embedded in this notion is the need for a discussion of what counts as positive and what counts as negative, and how each is determined. That discussion belongs in the public domain, by and among policymakers, scientists, and private individuals. Because of the prominence of the brain as an image and the powerful effects of neuroessentialism, it is important, in fact one could argue imperative, that the ethical and legal implications of the research be openly debated and that policy and social considerations be integrated into the neuroscience research process and training (Sahakian and Morein-Zamir 2009). While many would probably agree with these statements, the question to be answered is: who specifically has the responsibility to initiate these conversations and deliberations? Although there are individuals ideally positioned to help with the responsibility of balancing the positive and negative uses of neuroscience research, some are passing it off or “outsourcing” this responsibility. Even so, it must be mentioned that in the field of neuroethics there appears to be much more involvement by the scientific community, and interaction with lawyers, than in other areas of bioethics.

Scientists are in a prime position to contribute to the balancing we have described: they understand what is technologically possible at the moment and what might be feasible in the future (and how feasible). They are in a position to predict possible paths by which the knowledge they generate might be translated, and to pose questions about the social and policy implications should any of those paths be taken. What is more, as citizens, and while still motivated in part by self-interest, they have a general sense of what might be right or wrong in the social application of the knowledge, because they are closer to the precise findings, the methodological limitations of the research, and the specific research question. Of course, opportunistic scientists could over-extend their own work, and this no doubt occurs. Policymakers will have to learn how to weed these individuals out and communicate with the scientists who are generally well respected as being less biased in their interpretations. Neal Lane, former Presidential Science Advisor, is known for his use of the phrase “civic scientist.” In describing the civic scientist, Lane beseeches his fellow scientists to listen to the needs, expectations, hopes, and concerns of their fellow citizens, to give these consideration as they do their work, and to participate in public debates by contributing their reflections on the potential social and policy applications of the science.

Lane is not the first to make such pleas. Albert Einstein noted in a 1931 talk at Caltech that scientists of all disciplines should not get lost in their diagrams, equations, and models, but need to remember that “the concerns of mankind and his fate” ought to be a main impetus for their scientific endeavors. Our interpretation is that scientists have a responsibility to consider how their work may eventually be used externally, without internally manipulating methods in ways that bias the scientific inquiry itself. Scientific integrity and responsible research are more than refraining from fabrication, falsification, or plagiarism; they entail incorporating into the scientific method consideration of possible ethical and social implications and how these may affect public policies. Yet many scientists seem to be agnostic about their work in this context.

Legal professionals and policymakers are also in a prime position to contribute to this necessary balancing act. In day-to-day practice policymakers may not see this as part of their charge, but in all practicality it is, given their place in our social institutions. Once informed by neuroscientists, policymakers may then have a better appreciation of how neuroscience findings applied in a broader social context could or would influence public policy, incentives, and individual and community well-being. Unlike scientists, however, this group tends to fall short of recognizing that science outside the walls of the laboratory is no longer an independent and isolated domain of knowledge. They may overlook the fact that their use of scientific data clearly places the scientific findings into a social context. That is, those in the legal and policy communities can be “blinded by the science” and ignore questions of feasibility or ecological validity.

Indeed, scientific research is the pursuit of “truth,” but not Truth. Neuroscience research, like most scientific research, happens in a controlled and regulated setting. The truth that is found is the truth for that situation and that moment, and it cannot be forgotten that science is dynamic, not the simple linear model described by Vannevar Bush in Science – The Endless Frontier (5 July 1945). Several authors have suggested that science is more than a simple linear process (Stokes 1997; Neal et al. 2009, figure 1.3). The dynamic model proposed by Neal, Smith, and McCormick suggests that, regardless of the stage neuroscience is at, it can inform the fundamental questions we ask, the potential ways in which we apply the knowledge, and its eventual public consumption. We take this one step further to suggest that, at any point within the dynamic process of scientific discovery, questions about how the science might influence or impact social institutions and public policies can, and should, legitimately be asked by scientists themselves, by law- and policymakers, and by the public.

PROPOSED FUNDING MODEL

Early in the development of the National Institutes of Health (NIH) Human Genome Project, organizers realized that the new genomic discoveries might present unique questions that should be addressed simultaneously with the research (Meslin et al. 1997). Thus the Ethical, Legal, and Social Implications (ELSI) program was born, which generated an impressive amount of scholarship on areas of research priority such as privacy, fairness, professional education, and clinical integration (Fisher 2005). While the researchers funded through the ELSI program have helped considerably to guide the conduct of ethics research and the public understanding of the genome, the project has also been criticized for lacking tangible policy deliverables. Without weighing in on that particular discussion, which ultimately depends on how one defines and measures policy success, it does seem that the model suffered from a few structural issues. First, in creating the ELSI program, and in particular the Centers of Excellence in ELSI Research (CEERs), a tight-knit group of researchers was formed who could speak to each other and develop non-overlapping areas of expertise. At the same time, housing the ELSI program within the National Human Genome Research Institute (NHGRI) encouraged an insular dynamic in the genetics and ethics community, ossifying the potential range of research topics and narrowing the number of different perspectives that were heard. As in many domains of research, investigators need to “brand” themselves and their research agendas in order to receive large-scale grants. Once this branding takes place, what happens is, in effect, a market capture. What also happens is shoe-horning, where researchers see every additional discovery as necessitating the same types of questions they have asked before, even if more appropriate questions should be asked first. Having virtually one funding source encourages this type of market capture and internal politicization of research ideas. Separately, projects that are publicly funded are limited in the policy suggestions they can make. As noted scholar Hank Greely has pointed out while suggesting both public and private funding of neuroethics research, “there are some inherent constraints on what government-funded ELSI-type programs can do. They are limited in the issues they can consider and the things they can say” (Greely 2002).

Given the lessons learned from the ground-breaking ELSI program, and given the broad array of interdisciplinary and highly philosophical issues raised by neuroscience, researchers examining the intersection of neuroscience with ethics, law, policy, and society should pursue multiple avenues of funding. In addition to public funding through the NIH, partnerships with private foundations should be sought. An example of private foundation support is the MacArthur Foundation’s Law and Neuroscience Project, which has already funded small workshops on law and neuroscience, white papers and research projects on brain-based lie detection, an empirical review of cases involving neuroimaging in California, demonstration projects on implicit racial bias in jurors, and imaging studies examining how individuals make decisions about punishment. If the MacArthur Law and Neuroscience Project continues beyond the initially funded 3 years, it is intended to fund larger research projects that could not otherwise be funded through the NIH or the National Science Foundation (NSF). Given the moderate amounts of funding so far, the project has produced some impressive results. Another private option may be the Greenwall Foundation, an independent non-profit with a rich history of providing grants for the arts, humanities, and bioethics (Otten 1999). Its board of directors has recently embarked on developing a plan to strengthen scholarship in bioethics, which the Foundation sees as instrumental in the face of coming challenges. One piece of this plan might be an explicit effort to fund research at the intersection of ethics, policy, and neuroscience.

Another way of injecting ethics into basic neuroscience research is to approach journals such as Neuron, Cognitive Science, Nature Neuroscience, the Journal of Neuroscience, and others, and request that they implement standards requiring peer-reviewed articles to include a sentence or two on the potential ethical and societal implications of the research. This would not be very thorough, but it would be a good way of highlighting potential ethical concerns by the people who are most familiar with the methodological limitations of the data and their external validity. To be sure, the statements made would probably be fairly speculative, but the same critique was made of requiring scholars to disclose potential conflicts of interest, and that requirement has nonetheless served as a signal to researchers that disclosure is important. While this technique would not resolve any of the social issues, it would at least require bench scientists to think about the potential impact their science might have on society, and it would alert policy researchers to potential applications, or the lack thereof, that might otherwise go unnoticed. For this to be reliable and not self-aggrandizing, the peer-review process would have to apply to this section, peer-review panels would need to include ethical and social review, and robust conflict-of-interest disclosure would be required.

This of course raises the question of whether neuroscientists are aware of the ethical, social, and policy implications of their research. While studies indicate that some scientists think about the ethical and social implications of their research (McCormick et al. 2009), these same studies suggest that a significant number give little attention to such issues for various reasons, including a perceived lack of relevance and a simple lack of awareness (McCormick et al. in preparation). Others have called for increased attention to public engagement and communication in neuroscience training programs (Illes et al. 2010), and we extend that call to include both formal and informal training venues for discussion of potential ethical and legal implications and of how to include policy and social considerations throughout the research process.

Another suggestion for injecting ethics into neuroscience research would be for the field to create professional norms and internal reputational sanctions that discourage researchers from dismissing the social and ethical implications of their research. Much has been written on norm creation and internalization in psychology, sociology, and economics, and we do not have space to delve into that rich literature here. Suffice it to say that there are points of social intersection where the field could create norms. One opportunity would be to host conferences at which one day’s plenary or panel sessions discuss ways of raising potential ethical and social implications of neuroscience findings. Meetings could also ensure that neuroethics and science-and-society posters are not relegated to the far corners of the conference center, where only those actively seeking them venture. The Society for Neuroscience could incorporate more than a single panel session on social issues over the course of its 4-day annual meeting. These are just a few suggestions, but the idea is that, from within the neuroscience community, institutional norms could be created to signal to researchers that the ethical implications of research are not reserved for separate “neuroethics” conferences. While neuroethics need not be a primary focus for basic science researchers, they should at least have some sense of the neuroethics discussion and contribute to it in some way.

In addition to internal norm-structuring, political groups should encourage multi-institute funding of ELSI research across all NIH institutes and centers. What a lasting legacy the current NIH Director would have were he to adapt the ELSI program, which he oversaw as Director of NHGRI, across the Institutes. In this specific case, rather than boot-strapping onto the existing ELSI program by framing research as neuroscience plus genetics, other institutes engaged in neuroscience-related research (NIMH, NIDA, NIAAA, NINDS, etc.) should be approached politically to set aside money for research on the ethical and legal implications of neuroscience. Such a budget allocation could require that some small percentage of each institute’s research budget address the policy and social issues arising from the laboratory science it supports. Outside the NIH, the Department of Defense and the Department of Education might also be approached to fund neuroethics research, given the obvious military and educational uses (and abuses) of neuroscience research. While this would require considerable political motivation and currency, it is not insurmountable.

SUGGESTING A FRAMEWORK FOR POLICYMAKERS

The aim of this chapter has been to inject a little humility into the way neuroscience findings are used by policymakers. Hopefully, by walking through various uses of neuroscience (some good, most gray, and some ugly), the chapter demonstrates the extreme diversity of neuroscience data and the varying levels of ripeness for specific policy uses. Because neuroscience data will have different value depending on the application, articulated below are ten related and non-exhaustive factors that policymakers should consider before injecting a normative or policy debate with neuroscience data.

Questions one should ask before infusing policy or law with neuroscience data

• What is the probability of harm to the person whose neuroscience data is being analyzed? To the community?

• What is the magnitude of that harm to the individual? To the community of which the individual is a member?

• How reliable are the data? Have they been sufficiently replicated in the correct population, or is this a novel first-time finding?

• Have the data been peer-reviewed thoroughly?

• How valid are the data? Does the research protocol model the relevant real life setting at all, or is the experimental design unrealistic for social use?

• Do we lack knowledge about differences between people on the given variable that might have no effect on their social functioning (i.e. normal individual differences without functional deficit)?

• What is the positive predictive value of the finding as a “biomarker” for some trait? Do scientists know the base rate of the trait in the population? What is the risk of false negatives and false positives, and what is a socially justified amount of risk? (This will depend greatly on the context; there might be a higher tolerance for false positives in an employment screening policy than during the guilt phase of a capital trial, where someone stands to lose her life. See the worked illustration following this list.)

• How valid and reliable is the status quo alternative? Is the devil that is known (i.e. polygraphy) definitely worse than the devil that is not known (i.e. fMRI-based lie detection), or are we assuming the new technique has predictive power because it looks so objective and fancy?

• What is the probability of social harm (i.e. considering elements of distributive justice, civil rights, equality, efficient allocation of resources, etc.)? Does this use encourage discrimination of groups, stigmatization, lack of equitable access, racism, sexism?

• What is the magnitude of social harm? Is it very likely that the harm could be mitigated in some way, or be offset by the social or individual benefit?
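As a purely hypothetical illustration of why the base rate matters (the numbers below are assumed for the sake of the arithmetic and are not drawn from any study), suppose a putative brain-based “biomarker” identified a trait with 90% sensitivity and 95% specificity, and the trait occurred in 2% of the screened population. The positive predictive value would then be

$$
\mathrm{PPV} = \frac{0.90 \times 0.02}{0.90 \times 0.02 + (1 - 0.95) \times 0.98} \approx 0.27,
$$

so roughly three out of every four people flagged by the test would be false positives, despite the test’s seemingly strong accuracy figures. Whether such an error rate is tolerable is precisely the kind of context-dependent judgment the questions above are meant to force.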

This list is not meant to be exhaustive. But perhaps it will serve as a starting point for those interested in using neuroscience data in policy arguments. If nothing else, we hope it highlights the fact that neuroscience data come in all shapes, sizes, and levels of validity, and that policymakers, members of the lay public, and scientists must be careful not to retrofit a single finding to make arguments they are predisposed to want to make. Neuroscience might be able to inform analyses focused on the “what” of social policy and law. Once sufficiently robust and tailored to this purpose, the findings can help describe empirically what is happening in various social scenarios, which in turn may facilitate better policy responses to how humans behave individually and socially. But despite its allure, neuroscience cannot presently answer questions about whether societies ought to have the social goals they have, or about when and how they ought to deploy scientific findings to support or challenge those goals.

REFERENCES

Accreditation Council for Graduate Medical Education (ACGME) (2002). Report of the ACGME work group on resident duty hours, 11 June 2002. Available at: http://www.acgme.org/acWebsite/dutyHours/dh_wkgroupreport611.pdf (accessed 22 November 2009).

Allen, C. (2008). We scream, we swoon. How dumb can we get? The Washington Post, 2 March 2008. Available at: http://www.washingtonpost.com/wp-dyn/content/article/2008/02/29/AR2008022902992_pf.html (accessed 22 November 2009).

Aronson, J.D. (2007). Brain imaging, culpability and the juvenile death penalty. Psychology, Public Policy, and Law, 13, 115–42.

Atkins v. Virginia (2002). 536 U.S. 304

Baron-Cohen, S. (2003). The Essential Difference: The Truth about the Male and Female Brain. New York: Basic Books.

Broughton, R.J. (1997). Randomized, double-blind, placebo-controlled crossover trial of modafinil in the treatment of excessive daytime sleepiness in narcolepsy. Neurology, 49, 444–51.

Brown, T. and Murphy, E.R. (2010). Through a scanner darkly: functional imaging as evidence of mental state. Stanford Law Review, 62, 1119–208.

Bush, V. (1945). Science – The Endless Frontier. Washington, DC: US Government Printing Office.

Caplan, A. (2002). No brainer: can we cope with the ethical ramifications of new knowledge of the human brain? In S.J. Marcus (ed.) Neuroethics: Mapping the Field, pp.95–106 [conference proceedings]. New York: Dana Foundation.

Caplan, A., McGee, G., and Magnus, D. (1999). What is immoral about eugenics? British Medical Journal, 319, 1284.

Doucet, H. (2007). Anthropological challenges raised by neuroscience. Cambridge Quarterly of Healthcare Ethics, 16, 219–26.

Faigman, D. and Saks, M. (2008). Failed forensics: how forensic science lost its way and how it might yet find it. Annual Review of Law and Social Science, 4, 149–71.

Farah, M. (2005). Neuroethics: the practical and the philosophical. Trends in Cognitive Sciences, 9, 34–40.

Fisher, E. (2005). Lessons learned from the Ethical, Legal and Social Implications program (ELSI): Planning societal implications research for the National Nanotechnology Program. Technology in Society, 27, 321–8.

Kleiner-Fishman, G., Herzog, J., Fisman, D.N., et al. (2006). Subthalamic nucleus deep brain stimulation: summary and meta-analysis of outcomes. Movement Disorders, 21, S290–S304.

Gazzaniga, M. (2005). The Ethical Brain. New York: Harper Perennial.

Glannon, W. (2008). Deep-brain stimulation for depression. HEC Forum, 20, 325–35.

Greely, H. (2002). Response. In S.J. Marcus (ed.) Neuroethics: Mapping the Field, pp.116–17 [conference proceedings]. New York: Dana Foundation.

Greely, H., Sahakian, B., Harris, J., et al. (2008). Towards responsible use of cognitive-enhancing drugs by the healthy. Nature, 456, 702–5.

Greene, J. and Paxton, J. (2009). Patterns of neural activity associated with honest and dishonest moral decisions. Proceedings of the National Academy of Sciences, 106, 12506–11.

Gur, R. (1999). Sex differences in brain gray and white matter in healthy young adults: correlations with cognitive performance. The Journal of Neuroscience, 19, 4065–72.

Gurian, M., Henley, P., and Trueman, T. (2001). Boys and Girls Learn Differently: A Guide for Teachers and Parents, 1st edn. New York: Jossey-Bass.

Gurian, M. and Stevens, K. (2007). The Minds of Boys: Saving our Sons from Falling Behind in School and Life. New York: Jossey-Bass.

Healy, M. (2009). Prescribing drugs ‘off-label’: an ethical prescription. Los Angeles Times, 26 October 2009. Available at: http://latimesblogs.latimes.com/booster_shots/2009/10/prescribing-drugs-offlabel-an-ethical-prescription.html (accessed 21 November 2009).

Iacoboni, M., Freedman, J., Kaplan, J., et al. (2007). This is your brain on politics. New York Times, 11 November 2007. Available at: http://www.nytimes.com/2007/11/11/opinion/11freedman.html?_r=1&adxnnl=1&adxnnlx=1258902014-Fi2WBZbnXja1YPfaIlgkSA (accessed 22 November 2009).

Illes, J. and Racine, E. (2005). Imaging or imagining? A neuroethics challenge informed by genetics. American Journal of Bioethics, 5, 5–18.

Illes, J. and Bird, S. (2006). Neuroethics: a modern context for ethics in neuroscience. Trends in Neurosciences, 29, 511–17.

Illes, J., Moser, M.A., McCormick, J.B., et al. (2010). NeuroTalk: improving the communication of neuroscience. Nature Reviews Neuroscience, 11, 61–9.

Jeffrey, S. (2009). Study flags potential for abuse and dependence with modafinil. Medscape Medical News, 20 March 2009. Available at: http://www.medscape.com/viewarticle/589934_print (accessed 21 November 2009)

Kozel, F.A., Padgett, T.M. and George, M.S. (2004). A replication study of the neural correlates of deception. Behavioral Neuroscience, 118, 852–6.

Kozel, F.A., Johnson, K.A., Grenesko, E.L., et al. (2009). Functional MRI detection of deception after committing a mock sabotage crime. Journal of Forensic Sciences, 54, 220–31.

Kumar, R., Lozano, A.M., Kim, Y.J., et al. (1998). Double-blind evaluation of subthalamic nucleus deep brain stimulation in advanced Parkinson’s disease. Neurology, 51, 850–5.

Laitinen, L.V., Bergenheim, A.T., and Hariz, M.I. (1992). Leksell’s posteroventral pallidotomy in the treatment of Parkinson’s disease. Journal of Neurosurgery, 76, 53–61.

Lapidus, L. and Martin, E. (2008). Antiquated gender stereotypes underlie radical experiments in sex-segregated education. ACLU Blog of Rights, 3 March 2008. Available at: http://www.aclu.org/blog/womens-rights/antiquated-gender-stereotypes-underlie-radical-experiments-sex-segregated-educati (accessed 21 November 2009).

Largent, E.A., Miller, F.G. and Pearson S.D. (2009). Going off-label without venturing off-course: evidence and ethical off-label prescribing. Archives of Internal Medicine, 169, 1745–7.

Loftus, E. and Hoffman, H.G. (1989). Misinformation & memory: the creation of new memories. Journal of Experimental Psychology, 118, 100–4.

Mayberg, H. (2005). Deep brain stimulation for treatment-resistant depression. Neuron, 45, 651–60.

McCormick, J.B., Boyce, A.M. and Cho, M.K. (2009). Biomedical scientists’ perceptions of ethical and social implications: is there a role for research ethics consultation? PLoS ONE, 4, e4659.

McCormick, J.B., Boyce, A.M., Ladd, J.M., and Cho, M.K. (in preparation). Barriers to considering ethical and societal implications of research: perceptions of biomedical scientists.

Meslin, E., Thomson, E. and Boyer, J. (1997). Bioethics inside the beltway: The Ethical, Legal, and Social Implications Research Program at the National Human Genome Research Institute. Kennedy Institute of Ethics Journal, 7.3, 291–8.

Moreno, J. (2001). Undue Risk: secret state experiments on humans. New York: Routledge.

Moreno, J. (2003). Neuroethics: an agenda for neuroscience and society. Nature Reviews Neuroscience, 4, 149–53.

Neal, H.A., Smith, T.L. and McCormick, J.B. (2009). Beyond Sputnik: American science policy in the 21st century. Ann Arbor, MI: University of Michigan Press.

Nelkin, D. and Lindee, S. (1995). The DNA Mystique: The Gene as a Cultural Icon. New York: WH Freeman & Co.

Neuroscience Research in Education Summit: The promise of interdisciplinary partnerships between brain sciences and education, 22–24 June 2009, Society for Neuroscience

Norfleet, J.A. (2007). Teaching the Male Brain: How Boys Think, Feel, and Learn in School. Thousand Oaks, CA: Corwin Press.

Otten, A.L. (1999). The Greenwall Foundation: A Story of a Work in Progress. New York: The Greenwall Foundation.

Picton, T. (1992). The P300 wave of the human event-related potential. Journal of Clinical Neurophysiology, 9, 456–79.

Racine, E., Bar-Ilan, O. and Illes, J. (2005). fMRI in the public eye. Nature Reviews Neuroscience, 6, 159–64.

Rammohan, K.W., Rosenberg, J.H., Lynn, D.J., et al. (2002). Efficacy and safety of modafinil (Provigil®) for the treatment of fatigue in multiple sclerosis: a two centre phase 2 study. Journal of Neurology, Neurosurgery, and Psychiatry 72, 179–183. See also http://www.provigil.com/index.php?t=pat&p=home for additional information about the drug from its seller.

Rodriguez-Oroz, M.C., Obeso, J.A., Lang, A.E., et al. (2005). Bilateral deep brain stimulation in Parkinson’s disease: a multicentre study with 4 years follow-up. Brain, 128, 2240–9.

Roper v. Simmons (2005). 543 U.S. 551.

Roskies, A. (2007). Neuroethics beyond genethics. EMBO Report, 8, S52–6.

Roskies, A. (2008). Neuroimaging and inferential distance. Neuroethics, 1, 19.

Sahakian, B. and Morein-Zamir S. (2009). Neuroscientists need neuroethics teaching. Science, 325, 147.

Sandel, M.J. (2004). The case against perfection. The Atlantic Online, April 2004. Available at: http://www.theatlantic.com/doc/print/200404/sandel (accessed 22 November 2009).

Shaywitz, B. (1995). Sex differences in the functional organization of the brain for language. Nature, 373, 607–9.

Silberman, S. (2006). Don’t even think about lying: how brain scans are reinventing the science of lie detection. Wired Magazine, January 2006. Available at: http://www.wired.com/wired/archive/14.01/lying.html (accessed 22 November 2009)

Sip, K.E., Roepstorff, A., McGregor W., and Frith, C.D. (2008). Detecting deception: the scope and Limits. Trends in Cognitive Science, 12, 48–53.

Snead, O.C. (2007). Neuroimaging and the complexity of capital punishment. New York University Law Review, 82, 1265–339.

Stafford, R.S. (2008). Regulating off-label drug use—rethinking the role of the FDA. New England Journal of Medicine, 358, 1427–9.

Steinberg, L. (2009). Are adolescents less mature than adults? Minors’ access to abortion, the juvenile death penalty, and the alleged APA “flip-flop.” American Psychologist, 64, 583–94.

Stokes, D.E. (1997). Pasteur’s quadrant: basic science and technological innovation. Washington, DC: Brookings Institution.

Turner, D.C., Robbins, T.W., Clark, L., et al. (2003). Cognitive enhancing effects of modafinil in healthy volunteers. Psychopharmacology, 165, 260–9.

Ulmer, C., Wolman, D.M. and Johns, M.M.E. (2008). Resident Duty Hours: Enhancing Sleep, Supervision, and Safety. Washington, DC: National Academies Press.

Volkow, N.D., Fowler, J.S., Logan, J., et al. (2009). Effects of modafinil on dopamine and dopamine transporters in the male human brain: clinical implications. Journal of the American Medical Association, 301, 1148–54.

Washburn, D. (2009). Can this machine prove if you’re lying? VoiceOfSanDiego.org, 1 April 2009. Available at: http://www.voiceofsandiego.org/articles/2009/04/02/science/953mri040109.txt (accessed 21 November 2009).

Weil, E. (2008). Teaching boys and girls separately. New York Times Magazine, 2 March 2008.

Wolpe, P., Ford, P. and Harhay, M. (2008). Ethical issues in deep brain stimulation. Neurological Disease and Therapy, 91, 323–38.