The first Superman got his powers from brain enhancement. Before there was Kal-El, the strange visitor from another planet who arrived on Earth with abilities far beyond those of mortal men, there was Bill Dunn, a destitute earthling who became transhuman by consuming a man-made drug. During the depths of the Great Depression, Dunn is plucked from a breadline by an unscrupulous chemist named Smalley. Lured by the promise of a square meal, the vagabond goes off to the scientist’s house but finds his drink laced with a psychoactive potion Smalley has recently developed. Dunn becomes dizzy and delirious, but soon recovers and realizes that he has gained supernatural powers of telepathy and clairvoyance. “I am a virtual sponge that absorbs every secret ever created,” the newly minted Superman declares. “Every science is known to me and the most abstruse questions are mere child’s play to my staggering intellect. I am a veritable God!”
The Dunn-Superman learns how to take advantage of his new talents, but many of his schemes come at the expense of others. He injects into people’s minds the desire to donate their money to him. A drugstore clerk gives him ten dollars without question, and a rich tycoon later cuts Dunn a check for $40,000 (equivalent to $700,000 today) without even having met him. Given his ability to see into the future, the Superman also turns out to be a very successful investor. As he becomes more confident in his powers, however, he also grows more destructive. He slays Smalley and tries to set off global conflict by sparking a diplomatic showdown at a fictionalized League of Nations. He is on the brink of murdering an unfortunate journalist sent to investigate him when the potion suddenly starts to wear off. Dunn metamorphoses back into the wretched hobo he started as, and his last prophetic vision is that of himself back in the breadline.
The strange story of Bill Dunn appeared in 1933 as a nine-page illustrated magazine piece entitled “The Reign of the Superman.” It was composed and self-published by two high school students named Jerome Siegel and Joe Shuster. Two years later, the pair reworked their initial Superman into the Man of Steel we now know and love; they sold the concept to Detective Comics in 1938, and the rest is history. As the superhero from Krypton went on to fame and glory, the more mundane prototype was forgotten. But the two teenagers’ early tale of an ordinary man made extraordinary by brain technology is one that now profoundly resonates with the hopes and fears of our later age—an age in which futuristic technology for modifying and manipulating the human nervous system is increasingly finding its way into reality.
Among the wonders of today’s neurotechnology are pills for making people smarter, devices that remotely monitor or stimulate the nervous system, and genetic techniques that could reshape the structure of the brain itself. One can easily imagine some of these tools in the arsenal of a comic book hero or villain, and Siegel and Shuster’s story gives us a taste of how things can begin to go awry; the hazards of human experimentation and the unscrupulous exploitation of technology for personal gain and injury to others are just some of the dangers. In the real world, we must think carefully through how any new neurotechnology can be safely and ethically applied. We must also decide what future technologies we should work toward and to what end. Should we strive to create genuine supermen or guard ourselves against their emergence?
In this chapter we will consider how the cerebral mystique and its idealization of the brain affect thinking about neurotechnology. We will see how the mystique adds to the allure of artificial brain interventions but also fosters artificial distinctions between technologies that act directly and indirectly on the brain. A more down-to-earth view of the brain and its relationship to the body and environment might degrade these distinctions and change the way we approach neurotechnology and its development; just as importantly, it could prompt us to look more closely at social issues around some of the less ostensibly neural technologies that manipulate our minds.
Perhaps nothing better exemplifies the promises and perils of neurotechnology than the concept of hacking the brain, a meme that in recent years has proliferated across the popular media. Aspirations and anxieties around this idea reflect the prevalent but questionable notion that purposefully changing people’s brains could be a pertinent way to change their lives. In a 2015 Atlantic article that takes “Hacking the Brain” as its title, journalist Maria Konnikova ties the phrase explicitly to the futuristic goal of enhancing intelligence. Other writers emphasize efforts in the here and now to alter human behavior by manipulating the brain with electrical or magnetic stimulation. Numerous talks in the trendy TED lecture series reference some form of brain or mind hacking, from neurosurgeon Andres Lozano’s presentation on how hacking the brain can make you healthier to the talk by magician Keith Barry, whom we should “think of… as a hacker of the human brain.” The tone of these reports is typically exuberant—“Need more proof that the future is here?” asks the TED website. But some adopt a more apprehensive angle. Konnikova, for instance, wonders whether brain hacking could also lead to a “dystopia where an individual’s fate is determined wholly by his or her access to cognition-enhancing technology” or where “some Big Brother–like figure could gain control of our minds.”
Hacking is a living word with connotations that mirror this ambiguity. For me, the primary associations are with machetes, cleavers, scythes, and their use in such contexts as butcher shops, jungle forays, and the Rwandan genocide. For the students I teach at MIT, the dominant definition is a digital one, however. At a university where five of the ten most popular classes are in computer programming, hacking refers most often to the subversive but generally harmless pastime in which engineering aficionados breach and alter computer security systems, software, or electronic hardware. MIT is also famous for “hacks”—technically sophisticated pranks played by students to amaze others on and sometimes beyond campus. In this vein, MIT hackers once transported a police car to the top of the school’s Great Dome; they also stole an iconic cannon from rival university Caltech and reinstalled it at MIT. These different senses of hacking appear at first to be only loosely related, but they share an association with invasion and indelicacy. For example, although hacking the iPhone operating system does not literally split anything open, it does involve forcing one’s way into a previously forbidden space in the software of the device. Although the prankster’s hacking is not as brutal as the slashing of a slaughtered carcass, it is often done without subtlety, using whatever means are available.
Most people probably think of hacking the brain as something closest to the digital definition—it connotes breaking into the brain and manipulating it, typically by connecting it to artificial gadgetry like electrodes or fancy scanners. The rationale for brain hacking therefore benefits from the ubiquitous brain-as-computer analogy we saw in Chapter 2. Hacking the brain can seem glamorous, perhaps because it combines technological sophistication and edginess in the manner of MIT-style pranks, but the reality is often somewhat gruesome as well. This is because brain hacking almost invariably involves some kind of attack, either through the physical violence of surgery and disruption to biological tissue or through less damaging methods like fMRI that nevertheless intrude into the brain’s private places. Hacking the brain is not necessarily a good thing to do.
The most common contexts for brain manipulations are medical. For over a hundred years, physicians have used what is known as resective neurosurgery—the slicing apart of cerebral structures—to treat a variety of neurological and neuropsychiatric diseases, as well as brain cancers. The most infamous resective technique was the prefrontal lobotomy, a now-extinct treatment for schizophrenia introduced in the 1930s by the Portuguese neurologist António Egas Moniz. The lobotomy involved cutting the white matter of the brain’s frontal lobes, which severed neural connections between these regions and the rest of the cerebral cortex. This process could in some cases reduce symptoms of psychosis, but only at considerable risk to the person undergoing the operation. In one variant, the surgeon would hammer a long metal needle through the back of the patient’s eye socket and then swipe the device sideways to carve across deep brain structures—a true brain hack if ever there were one (see Figure 14). Approximately 5 percent of lobotomy patients died during the surgery, more than one in ten developed postoperative convulsions, and many of the other survivors became impassive or catatonic. Lobotomies were nevertheless performed on thousands of patients up until the late 1960s, including celebrities such as John F. Kennedy’s sister Rosemary and Argentina’s first lady Eva “Evita” Perón.
Figure 14. Hacking the brain using old and new technology: (top left) diagram of the transorbital lobotomy procedure developed by surgeon Walter Freeman (W. Freeman, “Transorbital leucotomy: The deep frontal cut,” Proceedings of the Royal Society of Medicine 41, 1 Suppl [1949]: 8–12, copyright © 1949 by The Royal Society of Medicine; reprinted by permission of SAGE Publications, Ltd.); (top right) Freeman-style lobotomy instruments (Wellcome Library, London); (bottom left) Cathy Hutchinson controlling a prosthetic arm using her brain-machine interface; (bottom right) close-up view of the BrainGate electrode array implanted in Hutchinson’s brain. Bottom images courtesy of braingate.org.
Although the lobotomy fell out of favor half a century ago, closely related forms of surgical brain hacking remain in wide use today. Most prominently, resective procedures are performed each year on hundreds of epileptic patients whose seizures cannot be controlled with drugs. In cases where seizure onset is linked to a focus of pathological activity in specific brain areas, doctors can try to limit the frequency or severity of attacks by destroying the focus or cutting around it. Although fewer than 10 percent of subjects experience significant complications from modern epilepsy surgery, there have been spectacular setbacks in the past. Most famous was the case of Henry Molaison, whose left and right hippocampal brain regions were removed during epilepsy surgery in 1953, leaving him unable to form new long-term memories for the rest of his life. Molaison’s experience led scientists to new insights about the role of the hippocampus in memory formation but also underscored the dangers inherent in such invasive techniques.
Modern forms of medically sanctioned hacking complement the neurosurgeon’s knife with more nuanced approaches. A technique called deep brain stimulation (DBS) has become one of the most widely used; it treats movement disorders like Parkinson’s disease, along with psychiatric conditions such as obsessive-compulsive disorder, and it has now been applied in over one hundred thousand patients. DBS involves insertion of electrodes into the brain through small holes drilled into the skull. Each electrode is connected via subcutaneous leads to an implanted control module about the size of a cookie; at regular intervals, the module sends tiny pulses of electrical current through the wiring, delivering little kicks of energy to neurons near the electrode tip. Like resective surgery, DBS treatment is thought to act primarily by inactivating tissue in the neighborhood of the intervention, but it is reversible and can be adjusted as necessary. More experimental brain-hacking techniques use electrodes both to stimulate and to record signals from patients’ brains. The resulting information can be used to control DBS-style treatments in real time. Brain recordings can also help paralyzed patients interact with prostheses or other external devices, via what are known as brain-machine interfaces (BMIs). In an amazing demonstration of this technology, neuroscientists John Donoghue, Leigh Hochberg, and their colleagues implanted an array of ninety-six microelectrodes into the cerebral cortex of a paralyzed woman named Cathy Hutchinson. Using the BMI, Hutchinson gained the ability to control a robotic arm with her thoughts; she was able to serve herself a drink for the first time since suffering a devastating stroke fifteen years earlier (see Figure 14).
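For readers curious about the computational side, the sketch below shows, in simplified Python form, the kind of linear calibration step a BMI decoder might perform. The ninety-six-channel count echoes Hutchinson’s implant, but everything else (the tuning model, the noise level, the least-squares fit) is an illustrative assumption; clinical systems such as BrainGate rely on considerably more sophisticated decoding.

```python
import numpy as np

# Minimal sketch of BMI decoder calibration on simulated data. The
# 96-channel count matches a BrainGate-style array; the linear tuning
# model and noise level are invented for illustration.
rng = np.random.default_rng(0)
n_channels, n_bins = 96, 2000

# Assume each channel's firing rate is linearly tuned to the intended
# two-dimensional hand velocity, plus noise.
tuning = rng.normal(size=(n_channels, 2))
velocity = rng.normal(size=(n_bins, 2))          # intended movements
rates = velocity @ tuning.T + 0.5 * rng.normal(size=(n_bins, n_channels))

# Calibration: fit a linear decoder (rates -> velocity) by least squares.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Online use: translate a fresh vector of firing rates into a movement
# command, e.g., the x/y velocity of a robotic arm.
probe_rates = velocity[:1] @ tuning.T            # noiseless test pattern
command = probe_rates @ decoder
print(command, "vs intended", velocity[:1])
```

In real use the decoded commands are noisy and drift over time, which is one reason such interfaces need frequent recalibration.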
Breakthroughs such as Hutchinson’s BMI ignite the imagination and drive much of the fascination with brain hacking. Controlling a mechanical device using neural activity alone sounds almost superheroic, like Wonder Woman’s ability to fly an invisible airplane using only her mind. Could it be that these and other amazing powers are just around the corner for you and me as well? Studies performed outside the therapeutic realm have added fuel to this fire. In one example, researchers at the University of Washington used scalp electrode recordings (EEG) to control a device called a transcranial magnetic stimulator (TMS), which uses spatially targeted magnetic effects to inactivate brain areas just under the skull. Attaching the EEG and TMS hardware to two separate subjects isolated from each other in different rooms made it possible for the person wearing the EEG to remotely perturb brain activity in the other participant, demonstrating an extremely crude form of the kind of brain-to-brain communication used by such fictional species as Star Trek’s Talosians. In another well-publicized case, neuroscientists at the University of California, Berkeley used a computational algorithm to reconstruct imagery from fMRI scans of a subject who was watching a video. The fMRI-based reconstruction looked like a smeared-out version of the original video, inspiring speculation that such methods could be used for a rudimentary form of mind reading. “Like computers, human brains may be vulnerable to hackers,” proclaimed a news article reporting the work.
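The Berkeley result, too, rests on a conceptually simple idea: learn a mapping between stimuli and measured brain responses, then invert it. Here is a toy version with simulated voxels standing in for fMRI data; the linear encoding model and ridge-regression decoder are stand-in assumptions, far cruder than the actual study’s machinery.

```python
import numpy as np

# Toy image reconstruction from simulated "voxel" responses. Each voxel
# is modeled as a noisy linear measurement of the stimulus pixels; the
# real study used far richer encoding models fit to natural movies.
rng = np.random.default_rng(1)
n_pixels, n_voxels, n_train = 64, 200, 500        # an 8x8 "frame"

encoding = rng.normal(size=(n_voxels, n_pixels))  # voxel receptive fields
train_images = rng.normal(size=(n_train, n_pixels))
train_resp = train_images @ encoding.T + rng.normal(size=(n_train, n_voxels))

# Fit a linear decoder (voxels -> pixels) with an L2 (ridge) penalty.
lam = 10.0
W = np.linalg.solve(train_resp.T @ train_resp + lam * np.eye(n_voxels),
                    train_resp.T @ train_images)

# Reconstruct an unseen frame from its simulated brain response.
test_image = rng.normal(size=n_pixels)
test_resp = encoding @ test_image + rng.normal(size=n_voxels)
reconstruction = test_resp @ W
print("pixel correlation:", np.corrcoef(test_image, reconstruction)[0, 1])
```

The smeared look of the published reconstructions reflects how much harder the real problem is: noisy measurements, coarse voxels, and a brain that is anything but linear.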
A brand of self-styled technological prophets foretells the extension of today’s brain hacking into still more fantastic innovations that have yet to be realized. “Twenty years from now, we’ll have nanobots [that will] go into our brain through the capillaries and basically connect our neocortex to a synthetic neocortex in the cloud providing an extension of our neocortex,” predicts the author and engineer Raymond Kurzweil. Kurzweil believes that a consequent merging of human and artificial intelligence will radically change the human condition through a synthesis he and others refer to as the singularity. Taking a similar tack, physicist and science popularizer Michio Kaku writes that “one day, scientists might construct an ‘Internet of the mind,’ or a brain-net, where thoughts and emotions are sent electronically around the world.” “Even dreams will be videotaped and then ‘brain-mailed’ across the Internet,” Kaku adds, perhaps evoking the Berkeley imagery reconstruction study. Although many are skeptical of such predictions, conjectures like Kurzweil’s and Kaku’s garner substantial attention.
The futuristic potential of hacking the brain also influences the US military. For better or worse, the defense establishment’s interest goes well beyond the humane goal of rehabilitating wounded soldiers. The Defense Advanced Research Projects Agency (DARPA), which funds some of the military’s most cutting-edge projects, aims in part to leverage neuroscience to “optimize human aptitude and performance” on the battlefield. Another major thrust involves “understanding and improving interfaces between the biological and physical world to enable seamless hybrid systems.” Dispelling any doubt about what such systems could be used for, a team of DARPA engineers connected a quadriplegic patient named Jan Scheuermann to a BMI much like Cathy Hutchinson’s; after demonstrating Scheuermann’s ability to guide a robotic arm, the engineers had her take mental control of a simulated F-35, the Defense Department’s most advanced warplane. DARPA director Arati Prabhakar presented results from this real-life Wonder Woman at a 2015 conference on the Future of War. “We can now see the future where we can free the brain from the limitations of the human body,” Prabhakar announced proudly to the audience.
The idea that hacking the brain will free it from the body’s confines reflects much of the fascination with neurotechnology. But it is also an idea that springs from the cerebral mystique and carries with it three fallacies rooted in problems I have discussed throughout this book. First is the concept that brain and body are separable to begin with, a notion that illustrates the extent to which the brain has become a stand-in for the dualist’s disembodied soul. We saw in Chapter 5 that this is not only a philosophical error but a biological one, in conflict with the fact that many features of human behavior depend critically on reciprocal interactions between brain and body. The second fallacy lies in the sense that the brain is inherently stronger or less limited than the body. Chapter 2 criticized the discourse that depicts the brain and body as working by different principles, with the brain more abstract and inorganic in its modes of operation. In fact, the biological substrates of brain and body have qualitatively similar weaknesses, such as their limited endurance and capacity, as well as their susceptibility to infection, injury, and decay.
The third fallacy is the view that hacking the brain is a good way to break free of any limitations at all. In practice, no existing device comes close to accomplishing this. Although some of the recent triumphs of human neurotechnology are breathtaking, all are limited to some extent by the physical as well as metaphorical violence of hacking. Even noninvasive brain manipulation with TMS is said to feel like a woodpecker pecking at your head, and the crude telepathy it supports seems like a poor substitute for good old-fashioned speech. Meanwhile, more meaningful neural interventions require risky brain surgery that few subjects would undergo without dire need. To patients with severe disabilities, the benefits of these technologies are merely restorative, and at most partially so. It is only against a backdrop of devastating dysfunction that a person gaining the ability to control a prosthetic arm or benefit from DBS sounds like a success story, and any healthy teenager with a joystick could fly the simulated F-35 better than DARPA’s Wonder Woman. It is certainly worthwhile to keep improving the technology for rehabilitating patients with brain injuries and diseases, but the potential for using such devices to hack additional abilities into or out of healthy brains seems remote, unappetizing, and possibly dangerous. Nevertheless, neurotechnological visions fueled by the cerebral mystique retain a special place in the fantasies of those who think about humanity’s evolution as a species, as we shall see.
The US presidential election of 2016 featured a famously strange roster of candidates, but few were more unusual than Zoltan Istvan. As the founder and candidate of the first political party associated with the so-called transhumanist movement, Istvan sought to represent “a growing group consisting of futurists, life extensionists, biohackers, technologists, singularitarians, cryonicists, techno-optimists, and many other scientific-minded people” who support the agenda of overcoming death and embracing radical technological change. “Who doesn’t want to have their lives made better through science and technology?” asks Istvan. The Transhumanist Party did not officially make it onto any of the state ballots, but Istvan’s unlikely bid nevertheless garnered news coverage from mainstream media outlets, an endorsement from Robert F. Kennedy III, and over twenty thousand followers on Twitter.
Istvan is a former journalist whose square jaw and fit physique belie his allegiance to some firmly geeky causes. The aspiring politician made his first media statement with a 2013 novel entitled The Transhumanist Wager, which tells the story of a philosopher-king named Jethro Knights, who leads the world into an era of peace, technophilia, and extremely long life spans. Knights, like his real-life philosophical predecessor Immanuel Kant, has a categorical imperative—a golden rule that instructs him and his fellow transhumanists to “safeguard [their] own existence above all else.” They wager that there is no afterlife and therefore decide that they must do everything they can to achieve immortality. Istvan’s hero comes to believe that “to combine brain neurons to the hardwiring of computers in order to download human consciousness [is] the most sensible and important direction for the immortality quest.” In the idyllic society he builds, everyone walks around with a computer chip in their head that allows them to communicate in a flash with other people or devices. “To stay youthful, healthy, and competitive with one another,” Istvan writes, “people spent money on functionally upgrading their bodies and the efficacy of their brains, and not so much on their wardrobes, cars, and other material possessions.” Smartphones and computers are integrated into the brain’s neural networks, allowing everyone to remain “always connected, always learning, and always evolving.”
Neurotechnologies like the ones Istvan writes of are woven deeply into the fabric of transhumanism, showing how idealization of the brain can shape a far-reaching vision for the future, in which enhancement of individual cognitive abilities appears to be the paramount goal. The transhumanist muse Robert Anton Wilson wrote over thirty years ago of a coming “Intelligence Intensification” that will “expand consciousness and sensitivity to signals and information.” Wilson reasoned that “Intelligence Intensification is attainable, because modern advances in neuroscience are showing us how to alter any imprinted, conditioned or learned reflex that previously restricted us.” The translation of neuroscience knowledge into the engineered brain implants and interfaces of the future was foretold in the late 1980s by Fereidoun M. Esfandiary, an Iranian Olympic basketball player who became the world’s first self-declared transhumanist and changed his name to FM-2030.
In the imaginations of people like FM-2030, futuristic tools for brain interfacing could include not only souped-up BMIs reminiscent of the Matrix movies and Star Trek’s Borg, but also the nanometer-scale robots called nanobots, which we saw mentioned above. Nanobots would be small enough to swim through the body and communicate with individual brain cells. Although some nanotechnology experts challenge the physical feasibility of recognizable robots at this scale, Ray Kurzweil and others seem to remain staunch believers in the power of such gadgets. Even Nicholas Negroponte, an MIT computer scientist and former director of the school’s prestigious Media Lab, once advertised the potential of neural nanobots, explaining that “in theory you could load Shakespeare into your bloodstream and as the little robots get to the various parts of the brain they deposit little pieces of Shakespeare or the little pieces of French if you want to learn how to speak French.” Of course, such takeovers of the mind could turn out to be anything but beneficial. A digital video series called H+ dramatized such a scenario: a computer virus infects injected nanoscale brain implants and incapacitates all of the modified humans who harbor them.
For many transhumanists, as with the fictional Jethro Knights, the road to immortality also runs through the brain. A popularly imagined strategy for achieving indefinite life involves the hypothetical procedure of uploading all the content from a person’s brain and then downloading it back into a new body or possibly a simulated environment. Here again, the brain functions like a soul, self-contained and separable from the body. “The upload is the posthuman,” explains the transhumanist thought leader Natasha Vita-More. “It’s the copying and transfer of the brain, your cognitive properties, onto a non-biological system, which could be a computer system.… So you would be within a whole different universe of computational matter. And that would be a very beautiful simulated environment.” To achieve the uploading, many seem to place their faith in the exhaustive anatomical analysis of brain tissue called connectomics, even though current methods have nowhere near the required throughput to scan an entire human brain, let alone to simulate the biology it represents. To while away the time as technology catches up to their aspirations, some transhumanists therefore turn to cryonics as a means to preserve their brains after bodily death. In Chapter 5, we encountered the Alcor Life Extension Foundation, which offers to freeze clients’ brains and store them indefinitely for a fee of $80,000. Alcor is run by Vita-More’s husband, Max More, a transhumanist philosopher who plans to have his own brain frozen when he dies. One of Alcor’s early clients was the trailblazing FM-2030, who failed to achieve his own dreams of immortality and sadly died of pancreatic cancer at the age of sixty-nine. His frozen head has now been steeping for over fifteen years in a vat of liquid nitrogen at Alcor’s headquarters in Scottsdale, Arizona.
The quest for cognitive improvement and immortality through brain technology represents the cerebral mystique and the denial of our biological nature at their most extreme. In its capacity as the gateway to a higher plane of human existence, the brain attains the status of a religious entity. Achieving life enhancements and extensions through neurotechnology implies not only the equation of each person to his or her brain but also a solipsistic focus on manipulating the person’s existence by manipulating that one brain alone. This mission largely ignores the interrelated, socially and environmentally dependent texture of human mental life, and trivializes the problems of human society by focusing on individual existential concerns that generally only interest the well-to-do. “Even in cases where social benefits [of transhumanist enhancement] are brought up, these are seen rather as the result of cumulative individualistic interventions,” writes ethicist Laura Cabrera. Indeed, it is hard to see places for collective values such as equality, empathy, and altruism in a transhumanist culture where people strive to safeguard their own individual existence—and in particular their brain’s existence—above all else.
This agenda is not as far from the mainstream as it might first appear. Although transhumanists may come across as a fringe element, there is plenty of contact between this group and various professional communities. Recent transhumanist conferences have featured leading neuroscientists, and several academic biologists are ostensibly working toward transhumanist aims. Defense organizations like DARPA are heavily influenced by transhumanist ideas about brain technology, as we have seen. In the arena of big business there is likewise evidence that transhumanist goals are gaining traction, as major Silicon Valley entities increasingly invest in anti-aging research. Even outside these power centers, a substantial portion of ordinary people could be attracted to transhumanist promises of living longer and becoming smarter. Some transhumanist ends are after all not so different from the objectives of modern medicine and education, broadly speaking. But how do the movement’s proposed means for achieving human enhancement compare with more conventional alternatives?
There are many reasons for taking issue with the transhumanists’ interest in advancing the human race through purposefully engineering the body, as opposed to letting natural selection run its course. Cultural taboos against “playing God” with the human form abound, but an even more general objection springs from the principle of unintended consequences, which warns us that tinkering in abrupt ways with time-tested physiological processes honed by evolution could go awry. There is the possibility of side effects both in individuals and in groups that have been altered with technologies like brain implants, genetic engineering, or life-extending drugs. A world in which nobody dies could also have severe problems unrelated to the methods by which immortality was achieved. Unless births are severely regulated, power and resources might eventually need to be split among trillions of transhumans, and conflicts would arise in the struggle for Lebensraum. The loss of generational turnover could also deal a great blow to human culture and innovation. The historian of science Thomas Kuhn famously observed that new scientific theories tend to catch on only when recalcitrant “old believers” literally die out. As in science, fresh ideas, fresh ambitions, and fresh faces are desirable in many walks of life. Would we want to jeopardize our potential for creative advancement by crystallizing our society into a fixed state?
Although the transhumanist goal of making everyone smarter seems uncontroversial enough, even this could have downsides, at least from an evolutionary standpoint. Among today’s humans there is considerable evidence for a negative correlation between education and fertility. Could it be that transhumans with enhanced cognition will be too busy being brilliant to reproduce? Looking around the planet, one sees quickly that the most abundant and arguably successful organisms are not necessarily those with the greatest intelligence. Beetles, for instance, have been around on earth for approximately one hundred times longer than we have; they account for 25 percent of known species, with a global population likely well in excess of ten trillion. The great biologist J. B. S. Haldane once remarked that God seems to have “an inordinate fondness for beetles.”
But it is transhumanism’s inordinate fondness for brain technology that flavors its peculiar outlook for humankind and reveals the biases that arise from idealizing the brain. While you or I might imagine a future bristling with sleek innovations that surround and empower us, the typical transhumanist wants technology that physically penetrates us, and in particular our heads. It is as if the gadgets we use would be somehow intrinsically stronger or better if they were directly attached to the brain. We could get information from the internet directly into our brains without reading. We could drive our cars simply by thinking, without having to move our hands. We could communicate without having to strain our vocal cords or pump our lungs. In each case, electronic technologies like neural nanobots or brain chips obviate the need for biological components that are thought to do little more than get in the way. In these transhumanist visions of neurotechnology, the brain is carried forward into a digital universe, while the rest of the body is largely left behind.
But why should the technology that benefits our cognition and control directly tap into the brain? The approach seems to be motivated by a desire to get straight to the essence of the person, following the neuroessentialist mantra that “we are our brains.” From a practical standpoint, this seems excessively restrictive, however. “Most of the benefits you could imagine achieving through [brain implants] you could achieve by having the same device outside of you and then using your natural interfaces like your eyeballs, which can project a hundred million bits per second straight into your brain,” says philosopher Nick Bostrom of Oxford’s Future of Humanity Institute. Suppose, for instance, that you want to give someone the mental impression of a beetle; you have the choice between showing them a photograph and stimulating cells in the brain directly so as to produce the same imagery. Both routes would by definition result in exactly the same pattern of brain activity corresponding to the perception of a beetle, but there is no doubt that simply presenting the picture would be far easier. By contrast, “writing” the beetle percept directly into the brain would require invasive manipulation, as well as a degree of knowledge about brain function that far surpasses what we have at present. Even if sufficient understanding were available, trying to circumvent the biological input and output pathways that surround the brain would require improving on mechanisms that have served humanity well for millions of years.
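Bostrom’s hundred-million figure is easy to sanity-check with rough numbers. The back-of-envelope below uses assumed values (about a million optic nerve fibers per eye, on the order of a hundred bits per second per fiber, and a generous per-electrode rate for an implant) and should be read as an order-of-magnitude argument, not a measurement.

```python
# Rough bandwidth comparison behind Bostrom's point. All figures are
# order-of-magnitude assumptions, not measurements.
optic_nerve_fibers = 1_000_000   # retinal ganglion cell axons per eye
bits_per_fiber = 100             # generous per-fiber information rate (bits/s)
eye_bandwidth = optic_nerve_fibers * bits_per_fiber      # ~1e8 bits/s

electrodes = 96                  # BrainGate-style implanted array
bits_per_electrode = 100         # optimistic decoded rate (bits/s)
implant_bandwidth = electrodes * bits_per_electrode      # ~1e4 bits/s

print(f"eye:     {eye_bandwidth:.0e} bits/s")
print(f"implant: {implant_bandwidth:.0e} bits/s")
print(f"the eye wins by roughly {eye_bandwidth // implant_bandwidth:,}x")
```

On these assumptions the eyeball outperforms the implant by some four orders of magnitude, which is the quantitative heart of Bostrom’s argument.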
Familiar real-world examples back up the idea that technology can substantially enhance human mental performance without directly contacting our brains. Over four thousand years ago, the Sumerians of southern Mesopotamia invented an abacus that may have been the world’s first computing machine. With devices like this, users could quickly manipulate larger numbers than possible using short-term memory and endogenous thought processes alone. The greatest cognitive aid to humankind was probably the invention of writing systems in societies such as the ancient Near East, Shang dynasty China, and pre-Columbian Mesoamerica. In some sense, the strength of the written message arose precisely from the fact that the brain is out of the equation once its thought content is recorded; this makes written dispatches more reliable than missives remembered in a human messenger’s mind. Philosopher Andy Clark argues that external artifacts like abacuses and written records form part of the “extended mind” of the person who uses them, just as purely brain-based processes or functions involving neural implants would. The mind and self are “best regarded as an extended system, a coupling of biological organism and external resources” that need not lie within the skin, Clark wrote in an influential 1998 essay with David Chalmers.
There could be costs to bringing external cognitive resources too close to our brains, even if the logistical barriers against doing so were surmounted. Speaking personally, for instance, I can say that my smartphone has become an indispensable prosthetic aid to my cognition and communication, but one I have no desire to see hardwired into my brain—it’s intrusive enough already. Similarly, the computers I use for my scientific research are wonderful number-crunching machines that help me and my colleagues solve our problems in the lab, but would do us no further good if they were embedded in our skulls. If we interfaced such computers directly to our brains, we might find ourselves constantly distracted by them, and our computations on the other end could wind up being disrupted by needless neural input. A different form of cognitive aid I and many Bostonians dream of is something that would make people better drivers, but again the best solution probably lies outside our brains. In this case, industry seems to be coalescing around a strategy that is almost completely divorced from human cognition: have the cars drive themselves.
Practices in the therapeutic domain also demonstrate how neurotechnology that works around the brain might be preferable to technology that works within it. In 1968, a swashbuckling Royal Air Force veteran and physiologist named Giles Skey Brindley implanted the visual cortex of a blind patient with an array of eighty brain stimulation electrodes. Passing microcurrents of electricity through the electrodes caused the patient to experience visual sensations called phosphenes—similar to the spots you can sometimes see after rubbing your eyes. The locations of the phosphenes depended on which electrode was stimulated, indicating that a rudimentary form of spatial acuity could be restored by stimulating different electrode combinations. Writing triumphantly of this success, Brindley and his coauthor suggested that the approach would one day enable the blind to “read print or handwriting, perhaps at speeds comparable with those habitual among sighted people.” In the ensuing years, however, Brindley became best known not for restoring vision to the blind but for figuring out how to achieve a chemically induced erection; in one instance, he supposedly dropped his pants in front of an audience at a major conference and made his point in the flesh. Meanwhile, Brindley’s idea of using brain implants for visual transduction was largely supplanted by the competing strategy of using similar electrode-based arrays farther away from the brain, in the eye itself. Retinal prosthetics often work better not only because of their relative ease of implantation but because they make better use of the body’s natural processes for admitting visual information to the brain. Similar advantages have made cochlear implants, rather than auditory cortex brain implants, the dominant therapy for treatable deafness.
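To see why Brindley’s group was optimistic about reading print, it helps to picture each electrode as a pixel. The toy sketch below maps a letter onto a small electrode grid; the four-by-five layout and the tidy one-electrode-one-phosphene correspondence are inventions for illustration, since the actual eighty-electrode implant yielded a far less regular map.

```python
# Toy phosphene rendering: assume each electrode in a small grid evokes
# a spot of light at a fixed position in the visual field, so driving a
# chosen subset traces out a crude glyph. The grid size and the regular
# electrode-to-phosphene mapping are invented for illustration.
LETTER_L = [
    "X....",
    "X....",
    "X....",
    "XXXXX",
]

# Electrodes to stimulate, as (row, column) grid positions.
active = {
    (r, c)
    for r, line in enumerate(LETTER_L)
    for c, ch in enumerate(line)
    if ch == "X"
}

# Render the pattern of phosphenes the patient would perceive.
for r in range(len(LETTER_L)):
    print("".join("*" if (r, c) in active else "." for c in range(5)))
```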
Peripheral neurotechnology may also provide an especially promising route for improving human movement and motor function. Researchers have already discovered brain-independent ways to restore movement and control to patients who have lost limbs. Using a technique called targeted muscle reinnervation, surgeons can reconnect peripheral nerves from a patient’s missing limb to new muscle groups that in turn can control a prosthesis. In 2015, a fifty-nine-year-old man named Les Baugh was given the chance to undergo this procedure at the Johns Hopkins University in Baltimore. Baugh had lost both of his arms in an electrical accident as a teenager. After the neural remapping, he was fitted with two cybernetic arms that mounted over his shoulder stumps and moved on command from his reinnervated chest and shoulder muscles. Baugh was able to learn how to control the limbs after only ten days of training, performing such feats as stacking blocks and drinking from a cup. Unlike “locked-in” patients such as Cathy Hutchinson and Jan Scheuermann, who had lost all ability to communicate with the rest of their bodies, Baugh did not need to control his prostheses via a direct connection to his brain. His brain was barely less involved in controlling his artificial limb movement, however. Motor output from Baugh’s brain triggered the reinnervated muscles that directly piloted the limbs, but this process exploited the brain’s embodiment in a broader biological milieu, rather than trying to get around it.
To go beyond rehabilitation and enhance the capabilities of healthy individuals, a related approach involves fitting subjects with a so-called powered exoskeleton; the exoskeleton gives added strength and rigidity to its wearer via a system of braces and actuators, enhancing the wearer’s ability to perform physically demanding tasks. In Marvel’s Iron Man comics, the fictional Tony Stark controls a powerful exoskeleton using impulses from his brain, but real-life experimental exoskeletons take their input from their wearers’ bodies. For instance, the HAL-5 exoskeleton manufactured by the Japanese company Cyberdyne is controlled largely via a set of skin surface electrodes that read impulses from the wearer’s musculature and interpret them as movement commands to control the power suit. The suit looks a bit like Star Wars stormtrooper armor and enables people of average strength to lift objects as heavy as 150 pounds with little effort.
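A skeleton of that control loop is easy to write down. The sketch below rectifies and smooths a simulated surface-EMG trace into an effort envelope, then scales it into an assist command; the filter window, threshold, and gain are invented values, and Cyberdyne’s actual HAL control scheme is of course proprietary.

```python
import numpy as np

def emg_to_torque(raw_emg, fs=1000, window_s=0.1, gain=5.0, threshold=0.05):
    """Map a raw surface-EMG trace (volts) to an assist torque (N*m).

    Rectify, smooth into an "effort" envelope, subtract a resting-noise
    threshold, and scale. All parameters are illustrative assumptions.
    """
    rectified = np.abs(raw_emg)                        # full-wave rectification
    window = int(fs * window_s)
    envelope = np.convolve(rectified, np.ones(window) / window, mode="same")
    effort = np.clip(envelope - threshold, 0.0, None)  # ignore resting noise
    return gain * effort                               # proportional assistance

# Two seconds of simulated EMG: baseline noise plus a one-second burst
# of 80 Hz activity standing in for a muscle contraction.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(2)
raw = 0.02 * rng.normal(size=t.size)
raw[500:1500] += 0.3 * np.sin(2 * np.pi * 80 * t[500:1500])

torque = emg_to_torque(raw, fs=fs)
print(f"peak assist torque: {torque.max():.2f} N*m")
```

The design point worth noticing is that nothing here touches the nervous system directly: the wearer’s ordinary motor output is read from the skin and amplified downstream.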
The closest thing to a superman we could produce today might sport one of Cyberdyne’s exoskeletons while enjoying extraordinary communications and computational capabilities afforded by portable or wearable electronic devices. If he saw through walls, it would be because he could pilot a remote-controlled, camera-bearing drone. If he could sense bodies in the dark, it would be because he wore infrared glasses. If he possessed a supercar or superplane, chances are that the vehicle would be super mainly because of its own autonomous control mechanisms, as opposed to a connection to its owner’s gray matter. Our modern superman would be a testament to the embodied brain, an individual whose nervous system transduces an extended array of inputs into actions enhanced by peripheral aids, noninvasively interfaced to distributed elements of his natural human physiology. This hero provides a counterpoint to transhumanist visions in which the hacked brain itself is the secret to transcending humankind’s limitations. Trying to improve the brain with invasive neurotechnology makes most sense in an imaginary world where the brain is differentiated from its surroundings, solitary, self-sufficient, and soul-like. If one accepts that the brain is a biological organ that functions integrally with the body and environment, then neurotechnological enhancements to human capability no longer need be bound to the brain.
The seeming remoteness of futuristic conceptions of cognitive enhancement does not keep people from worrying about their implications already. In a 2004 essay, the political scientist Francis Fukuyama labeled transhumanism one of “the world’s most dangerous ideas” because of the potential threat of transhumanist-style intelligence improvements to conceptions of human equality. “If we start transforming ourselves into something superior, what rights will these enhanced creatures claim, and what rights will they possess when compared to those left behind?” Fukuyama asked. “If some move ahead, can anyone afford not to follow?… Add in the implications for citizens of the world’s poorest countries—for whom biotechnology’s marvels likely will be out of reach—and the threat to the idea of equality becomes even more menacing.”
Concerns like Fukuyama’s might be more relevant to today’s society than we realize. Although the implants and nanobots of transhumanist fantasy may never really come to be, another class of intelligence enhancement is already here. So-called nootropic substances—from the Greek words meaning “mind-bending”—are consumable chemicals thought to be capable of improving concentration, memory, and other aspects of cognition. Nootropics are similar in spirit to the magic potion that gave Superman Bill Dunn his powers, albeit not as magical in actuality. The most ubiquitous examples are relatively mild naturally occurring stimulants like nicotine or caffeine, the humble “cognitive enhancer” we considered briefly in Chapter 5. Nootropics also include dietary supplements like omega-3 fatty acids, which are thought to foster positive mood, or the racetams, which may modulate activity of key neurotransmitters in the brain. The most powerful nootropic drugs are well-characterized prescription stimulants like amphetamine and methylphenidate, marketed respectively under the names Adderall and Ritalin as treatments for attention-deficit hyperactivity disorder (ADHD), as well as powerful sleep suppressants like modafinil, which is used in both medical and military contexts to promote alertness.
Although heavy-hitting prescription nootropics in the United States are legal only for approved therapeutic uses, they are widely abused, particularly by students seeking an edge in their studies. A common pattern is for students to obtain substances from acquaintances who have legitimate prescriptions, and then to use the drugs nonmedically as aids to binge studying. A 2005 survey of more than one hundred four-year colleges in the United States found that an average of 7 percent of students illegally used prescription stimulants, with rates of up to 25 percent at some individual colleges. Several studies have questioned whether prescription-strength nootropics truly boost academic performance, but their prevalence on campuses shows that there is considerable belief in their efficacy. Students who use prescription nootropics illicitly must value the potential rewards offered by these drugs enough to override concern about the possible criminal penalties for being caught.
Nonprescription nootropics are currently perfectly legal, but they are serious business in their own way. Silicon Valley start-ups like Nootrobox and truBrain have raised millions of dollars in investment capital to market products formulated from supposed nootropic ingredients. Merchandise like theirs appears to have a faddish following among the so-called biohackers who want the benefits of prescription cognitive enhancement drugs without the accompanying regulatory hassles. Nootrobox, for example, sells “stacks” of compounded over-the-counter nootropics, ranging from chewable coffee drops to capsules that combine ingredients from the Indian pennywort, the western roseroot, and various vitamins and neurotransmitter analogs. Each ingredient is “generally recognized as safe” by the US Food and Drug Administration, but demonstrations of efficacy are also generally minimal; the company is currently trying to prove the effectiveness of its products through a clinical trial.
Whether you are a struggling student thinking about how to get ahold of prescription study aids or an ambitious entrepreneur wondering whether to spend $100 per month for an extra edge from commercial smart pills, you may already feel some of Fukuyama’s questions beginning to apply. If you are competing with nootropic-using peers, can you afford not to follow suit, given the cutthroat nature of the work environment? It is an anxiety that nootropic marketers consciously play on. “If you aren’t taking Alpha BRAIN, you are playing at a disadvantage,” warns the website of a company called Onnit, pushing its herbal brain-boosting supplement. Many of the competitors themselves agree. Explains businessman and self-help author Tim Ferriss, “Just like an Olympic athlete who’s willing to do almost anything, even if it shortens your life by five years, to get a gold medal, you’re going to think about what pills and potions you can take.” It is exactly this mindset that leads many to dystopian premonitions. “All this may be leading to a kind of society I’m not sure I want to live in,” laments New Yorker staff writer Margaret Talbot, “a society where we’re even more overworked and driven by technology than we already are, and where we have to take drugs to keep up; a society where we give children academic steroids along with their daily vitamins.”
Such fears about the encroachment of neurotechnology are the flip side of enthusiasm for neurotechnology’s benefits. But both assessments stem from an inflated sense of the brain’s significance compared with body and environment, and both may be mistaken in ways that apply also to other inflated concerns in our culture. You might, for instance, have heard the cliché that “one man’s terrorist is another man’s freedom fighter.” The saying highlights the subjectivity with which people tend to use the terrorist moniker to vilify their enemies, plus the fact that there is usually someone around who takes a positive view of whatever nefarious cause a given terrorist espouses. Meanwhile, despite the fact that terrorism rises near the top of voters’ concerns in most public opinion polls, New York Times columnist Nicholas Kristof points out that in recent years far more Americans have drowned in bathtubs than have been killed by terrorists. Regardless of one’s feelings about individual groups, terrorism as a phenomenon seems to get more attention than its impact on society should merit.
Brain technology, with both its advocates and its detractors, fits a similar pattern. We noted the anomalous zeal among transhumanists and others for technological approaches that interact physically with the brain, even though more peripheral technologies can achieve superior results without the danger and complexity associated with direct brain interventions. Similarly, foreboding about the antisocial effects of neurotechnology such as nootropic drugs may reflect an artificial and counterproductive distinction between cognition-enhancing strategies that act within brains as opposed to around them. If one is troubled, as Fukuyama is, by how neurotechnology stands to increase human inequality and promote a hypercompetitive society, one should direct equal concern toward the many activities that influence brains less directly but produce equivalent consequences. To perceive an unnatural threat from technologies that interact with the brain is no more rational than to display unnatural optimism about them. Neurotechnology may be neither a terrorist nor a freedom fighter, but just one of the many facts of life whose influences are complex and context-dependent.
In fact, ethicists who have considered the impact of nootropic use are generally quick to point out the continuum of related phenomena in which “smart drugs” sit. In a 2008 commentary in the journal Nature on responsible use of cognitive-enhancing drugs, a group of experts led by the director of Stanford University’s Neuroscience and Society Program, Henry Greely, draw parallels between nootropic use and improvements like education, nutrition, exercise, and sleep, all of which affect brain function. They argue that “cognitive-enhancing drugs seem morally equivalent to other, more familiar, enhancements” and that “a proper societal response will involve making enhancements available while managing their risks.” Another analysis of ethical issues associated with cognitive enhancement commissioned by the British Medical Association makes some of the same comparisons. Putting the debate over artificial cognitive enhancements in context, the BMA panel also emphasizes that “we need to remember that a wide range of social factors directly or indirectly affect health, welfare and social success.” “Merely to focus on one, such as an individual’s cognitive abilities,” they argue, “ignores the fact that many different social determinants affect individuals’ ability to thrive physically and psychologically and to succeed socially.”
From cradle to grave, the world treats people and their brains very differently. Some babies are born with biological determinants that predispose them to academic achievement, possibly because of differences in the quality of their attention, endurance, memory, or speed. Perhaps more importantly, some babies are also born to pushy parents who give them books instead of toys for their birthdays and start them in afterschool programs from the moment they can talk. Wealth disparities contribute to cognitive enhancement in multiple ways, from parents’ capacity to spend time helping their children over educational hurdles to their ability to afford benefits like computers or private lessons. The culture of a household and its broader social context play an immense role beyond schooling per se, influencing facets of life like emotional well-being, ambition, and health. When kids grow up and leave the family, their childhood engrams stay with them, as most certainly do the broader biases introduced by their socioeconomic origins. And make no mistake about it: each socially determined factor affects the brain just as surely as a genetic contribution or a nootropic drug might. The brain is plastic and can be changed by a huge range of inputs. Education and values become imprinted into the brain like other memories, and they influence future behavior as a result. Economic and social security affect stress levels and the body-wide physiological pathways that go with them. For all of these reasons, it seems unlikely that currently available neurotechnologies could significantly worsen what is already an extraordinarily uneven playing field for society’s diverse team of eight billion nervous systems.
This is not to say that questions about how to regulate nootropics should not be asked. Particularly given the prevalence of illegal prescription drug use in the United States, as well as limited evidence concerning safety and efficacy of over-the-counter nootropic supplements, some degree of further analysis and regulatory action is certainly called for. But if a goal of governing nootropics and other cognition-enhancing neurotechnologies is to ensure that they do not contribute to the injustices that critics like Fukuyama fear, then it seems at least as worthwhile to think about how to compensate better for the unfair allocation of “soft neurotechnologies,” such as pushy parents and competitive communities, that already promote inequality in more powerful ways than any form of drug or device-based brain hacking is likely to.
It is no coincidence that the Titan Prometheus became the patron deity of both hacking and human enhancement, and his story also symbolizes some of the points I have tried to make in this chapter. According to ancient Greek mythology, Prometheus molded the first human from clay and then empowered him by giving him fire stolen from Mount Olympus in violation of Zeus’s wishes. As punishment for crossing the king of the gods, Prometheus was condemned to spend eternity chained to a rock, beset by a ravenous eagle that devoured part of his liver every day, until the hero Hercules freed him. “Prometheus stole fire from the gods on behalf of mankind. That’s all some youthful hacker outlaws today need know to inspire them to adopt Prometheus as their icon,” explains technology commentator Ken Goffman. We can see aspects of the Prometheus legend in the stories of people like Edward Snowden, who shed light on the National Security Agency’s inner workings and was thereupon forced into exile, or Aaron Swartz, the hacktivist whose campaign for open access to online resources led to his arrest and subsequent suicide. That Prometheus himself was released from his rock can be seen as a vindication of his inventiveness and an acceptance of his technology. But the creator’s release also gives him wider latitude to invent and perhaps be judged again.
I have argued that both hopes and fears about neurotechnology should be unshackled from the brain as Prometheus was from his rock. Because the brain is just a prism through which myriad internal and external influences refract, our aspirations will often be addressed more easily by modifying the influences than by manipulating the prism itself. By uncoupling from the brain our visions for the future of the mind, we vastly expand the scope for new technology development. Similarly, by broadening our concerns about the unwelcome effects of cognition-altering technology from those that work directly on the brain, we might gain renewed motivation to address existing discrepancies in education and culture. Such inequalities affect our brains as definitively as any pill or implant would, and they pervade our society already.
The cerebral mystique constrains our thinking about neurotechnology in much the same way it restricts views about mental illness and the individual’s place in society, as discussed in Chapters 7 and 8. In each case, the mystique fosters a tendency to analyze people’s problems in terms of their brains alone. To questions about what makes us do what we do, what makes us experience mental pathologies, or what could improve our cognitive abilities, the cerebral mystique offers one answer: the brain. But the fundamental lesson of neuroscience is that the brain is a biotic organ, embedded in a continuum of natural causes and connections that together contribute to our biological minds. This means that the brain cannot be all there is. To any question about altering or explaining human behavior there are actually many answers, at levels embracing not only the brain but the body and environment it resides in. In an age in which self-absorption and self-centeredness have reached epidemic proportions and the socially minded values of previous generations are on the wane, the message that you are not only your brain may be one of the most important lessons science has to teach us. Accepting this message involves rejecting myths about the special soul-like qualities of the brain and understanding how the brain is physiologically coupled to its surroundings. It is only by doing so that we can truly grapple with our place as biological beings in a universe of interrelationships.