CHAPTER 8

Where Do We Go from Here? (And Where Have We Been?)

Similar techniques helped crack the code used in the visual thalamus and early visual cortical areas, as the onion layers of the brain began to be peeled back, one by one. A complete cellular-based working model of how the mouse moves through a maze in response to what it sees, together with the ontology of the approximately one thousand different cell types that make up the brain, was achieved in the mid 2020s. The sense of touch, hearing, and smell were decrypted a few years later.

FROM AN IMAGINED CONVERSATION WITH A TRAVELER FROM THE FUTURE AS RECALLED BY CHRISTOF KOCH AND GARY MARCUS IN “NEUROSCIENCE IN 2064: A LOOK AT THE LAST CENTURY,” IN The Future of the Brain

Is the study of Einstein’s brain relevant to twenty-first century neuroscience, or is it a “period piece” showcasing the limitations of the clinicopathologic methods of nineteenth- and twentieth-century brain science? Can we expect a few dozen photographs, a couple of thousand microscope slides and 240 blocks (if we could find them all) of brain tissue soaked in formalin for more than sixty years to inform us about the genius of Albert Einstein? Or as I attempt to extract living thought from dead neural tissue, am I like Samuel Taylor Coleridge’s guide, who “points with his finger to a heap of stones, and tells the traveler, ‘That is Babylon, or Persepolis’ ”?1

My starting points are the polar opposite definitions of brain as pure anatomy and mind as pure function. In a living person, there is likely some overlap of these presumably conjoined concepts. And then we are again brought face-to-face with Chalmers’s importunate “hard problem” and the quandary of determining how much the brain and the mind overlap.2 If this particular exercise in neuroscience is to prevail as a credible endeavor, at some point we must take the bull by the horns and posit as a first principle that Einstein’s genius was sufficiently different from the human “norm”—whatever that may be—to be reflected in the physical architecture of his brain. I simply cannot state the materialist stance of brain-mind equivalence with greater clarity. I scrupulously avoid qualified phraseology such as “brain enables mind,” which leaves the door open just a crack for notions of nonphysical mind-stuff to slip in. Any dualist who has trudged through the previous seven chapters hoping in vain for a clear Mason-Dixon Line separating the mind and the brain may depart now. If it’s any consolation to the Cartesians snapping this volume shut, the mind of man should be regarded as no less miraculous despite its physical trappings.

Why Einstein’s brain? If we are looking for a brain functioning at the highest levels of human thought, the brain of a world-class physicist is not a bad place to begin. C. P. Snow avers that there may be even further intellectual gradations when considering the cohort of all physicists: “First of all, great theoreticians are even rarer animals than great experimentalists. That kind of conceptual skill is one of the most uncommon of all human gifts.”3 Also, the availability of “specimens” played a determining role in our particular line of research; there are no brains of Newton, Galileo, Darwin, et cetera, to be had for neuroanatomical study.

The technology (photographic cortical macroanalysis, systematic brain dissection, and cortical histology with selective staining) available for our study4 of Einstein’s brain has existed for anywhere from one century (Cajal’s microscopy), to nearly two centuries (Daguerre’s photography), to nearly five centuries (Vesalius’s detailed engravings of the brain dissection in De Humani Corporis Fabrica5). A neuroscientist working in his lab a century ago would not be discomfited by our approach. Before we relegate old-school brain science to a quaint footnote in the history of the study of the human nervous system and proceed to bleeding edge and Big Science neuroscience (with an estimated price tag of $5 billion through 2025 for the U.S. BRAIN [Brain Research through Advancing Innovative Neurotechnologies] Initiative),6 a backward glance at the powerful insights generated by past studies of “paradigm-shifting brains” will provide perspective on the lessons of Einstein’s brain.

I will propose that the brains that most profoundly changed our shared way of viewing brain function and anatomy were all damaged. Lesion-induced loss of function has been the mainstay of brain localization of function for over 150 years (or maybe thirty-seven centuries if we include the Sixteenth/Seventeenth Dynasty hieroglyphic accounts of penetrating head wounds in The Edwin Smith Surgical Papyrus).7 Research methods underwent a sea change with the advent of electrical and chemical (strychnine) stimulation of the cortex and microelectrode recordings of receptive fields of neurons in the visual cortex in the early to mid-twentieth century. Beginning in the 1970s with positron emission tomography (the PET scan), functional neuroimaging has become the dominant (read: “grant getting”) methodology for large-scale (as opposed to cellular/molecular) neuroscience. Part of the uphill climb in presenting the undamaged brain of Einstein as a persuasive neuroanatomic avatar for genius is that we are most accustomed to gleaning our cerebral structure-function insights from wounded brains selectively “assaulted” by the surgeon’s scalpel, encephalitis, shrapnel, epilepsy, “accidents of nature,” and even an iron tamping bar.

Our nascent understanding of frontal lobe function began on September 13, 1848, when Phineas Gage, a twenty-five-year-old railroad construction foreman, began tamping a blasting hole filled with explosive powder, which inadvertently had not been covered with a cushioning layer of sand. Following a loud detonation, the 109-centimeter-long iron tamping bar “enters Gage’s left cheek, pierces the base of the skull, traverses the front of his brain, and exits at high speed through the top of his head. The rod has landed more than a hundred feet away, covered in blood and brains.”8 Miraculously, Gage survived nearly twelve more years, and despite his gruesome brain injury he never demonstrated any paralysis or speech impairment. Strikingly, this “most efficient and capable” young man became “irreverent and capricious,” expressed himself with “abundant profanity,” and “had taken leave of his sense of responsibility.”9 Lacking an autopsy (although he examined the exhumed skull), Gage’s physician, John Harlow, could not definitively uncover the missing link between the frontal lobe and behavior. One hundred forty-six years later Hanna Damasio reconstructed the events on that fateful day in 1848 and found that Gage had sustained an injury to the ventromedial region (above the eye sockets and close to the space separating the cerebral hemispheres) of both frontal lobes. What we now know (and Harlow didn’t) is that Gage’s neuroanatomical trauma fits the pattern of other patients with frontal lobe pathology whose “ability to make rational decisions in personal and social matters is invariably compromised and so is their processing of emotion.” We now fathom that somehow the frontal lobes envelop many of the defining capacities of the human condition and that the journey to our still incomplete understanding of the expanse of cortex directly behind our foreheads began with Phineas Gage’s “Horrible Accident.”10

Among the obstacles facing John Harlow and the elucidation of Gage’s frontal lobe function in the neuroscientific arena of the mid-nineteenth century was Gage’s dearth of clear-cut neurologic signs and symptoms, such as hemiparesis (weakness on one side of the body) or language disturbance … and no brain to examine. (Gage had been dead for five years when Harlow requested his exhumation.) No such barriers impeded the French physician, anatomist, and anthropologist Paul Broca in 1861 when he demonstrated the uncut brain of Louis Victor Leborgne at a meeting of the French Society of Anthropology. Leborgne had developed seizures in his youth, language loss (aphasia) at age thirty, and paralysis (hemiplegia) on his right side by his forties. His intelligence and understanding of speech were unimpaired; however, his verbal output consisted of the monosyllable, “tan,” and an occasional curse when he was exasperated. (Aphasic patients may retain emotional speech when they are unable to retrieve the rest of their vocabulary.) When the brain of Tan (as he came to be called) was examined, a fluid-filled depression the size of a chicken egg with surrounding brain softening was found in the third frontal convolution of the left frontal lobe. Modern research has attributed Tan’s brain damage to a focal inflammation of the brain termed Rasmussen’s encephalitis.11 Others (Marc Dax, for one) had previously reported in 1836 the association of language with the brain’s left hemisphere, but as is frequently the case with eponymous fame, the laurels of scientific discovery went to a latecomer, and Broca’s aphasia became part of our clinical lexicon. Broca’s aphasia is notable for the loss of expressive language. In Tan’s case it was a profound loss, while in milder cases the patient’s verbal output may lack articles (such as a, an, the) and conjunctions (such as and, or, but), resulting in so-called telegraphic speech. Neurologists recognize Broca’s aphasia as a nonfluent speech disorder, and this contrasts with fluent aphasia, in which the patient retains flowing and at times nonsensical (word salad) speech. Additionally, fluent aphasics may be unable to comprehend spoken or written language—the hallmark of a receptive aphasia. In 1874 Carl Wernicke described this other great disorder of human language in patients with left temporal lobe lesions.

But it all began with Tan. By the latter half of the nineteenth century, the brain of Tan had laid the foundation for all subsequent work on the neuroanatomy of language with Broca’s convincing demonstration that the expression of language was linked to the frontal lobe and the left cerebral hemisphere, which is dominant for language in over 90 percent of individuals.

Like Tan, Henry Molaison began having seizures when he was young, possibly due to a childhood concussion. Although his language and motor functions were unimpaired, his epilepsy relentlessly progressed, and by the time he was twenty-seven, he was having several daily seizures, both the generalized type in which he would fall to the ground, and the nonconvulsive type in which he became unresponsive, staring vacantly into space. The seizures were not controlled despite the combined use of the most powerful anticonvulsant medications in the 1950s—diphenylhydantoin, phenobarbital, mephenytoin, and trimethadione.12

With his ability to earn a livelihood and maintain an independent existence jeopardized by his worsening epilepsy, Henry Molaison sought a desperate cure. On August 25, 1953, lacking a clear-cut cortical focus for seizures on electroencephalography (EEG), Dr. William Beecher Scoville performed a bilateral medial temporal lobectomy in the hope that the source(s) of Molaison’s seizures were present (but electroencephalographically silent) in his temporal lobes. Although Scoville’s frankly experimental operation greatly reduced but did not completely eliminate Molaison’s seizures, the benefit came at an unimaginable price: “The defining deficit in H.M.’s case was an inability to form new declarative memories.” (Declarative, or explicit, memory is our recall for facts and events. This differs from implicit memory, which is called upon when we ride a bicycle.) Molaison could “carry on a conversation proficiently but several minutes later would be unable to remember having had the exchange or the person with whom he spoke.”13

Until his death in 2008, Henry Gustav Molaison’s identity and name were closely held secrets, and he was known only as H. M. in dozens of research papers and book chapters. At postmortem serial sectioning of his brain, 2,401 digital anatomical images revealed that Scoville had almost completely removed the entorhinal cortex (EC) of the medial portion of both temporal lobes. The EC is memory’s “gateway to the hippocampus for the inflow of information from the cerebral cortex and subcortical nuclei.” Jacopo Annese, the neuroscientist who sectioned the brain, observed that “during life, H.M. was the best-known and possibly the most studied patient in neuroscience.”14 His brain established beyond reasonable doubt that bilateral resection of the hippocampus, its input, or neighboring medial temporal lobe structures can halt the consolidation and storage of explicit information in long-term memory.15

It took over half a century to link the name of a real person (Henry Molaison) to the brain of an amnesiac but, for better or worse, there have been other iconically localizing brains of anonymous patients known only to posterity by their initials. This was the case when Canadian neurosurgeon Wilder Penfield (1891–1976) explored other functions of the temporal lobe. His electrical stimulation studies of motor and sensory cortex during epilepsy surgery produced the classic “homunculus” (little man) maps that correlated parts of the human body to discrete areas of cortex. (See Figure 3.1.) Penfield performed unilateral (as opposed to Scoville’s bilateral) temporal lobectomies to treat some of his epileptic patients at the Montreal Neurological Institute. As he attempted to localize an area of epileptogenic cortex, Penfield would perform a craniotomy and use bipolar electrodes to briefly (for milliseconds) apply two to four volts of electricity to exposed cortex. The patient, under local anesthesia, would verbally report the sensations or movements elicited by the electrical stimulation. In 1936 Penfield operated on J. V., a fourteen-year-old girl with seizures, and she reported dread-imbued flashbacks of a menacing man when the lateral portion of her right temporal lobe was stimulated. Moving the electrodes a few centimeters forward along her temporal cortex summoned auditory (not visual) hallucinations of accusatory family voices. When the electrodes were shifted posteriorly to the adjacent border of her right occipital lobe, she saw simple photopsias (“stars”) in her left field of vision, rather than the complex, almost cinematic, visions emanating from her previously damaged temporal lobe. In this remarkable case, Penfield glimpsed the temporal lobe’s capacity to recreate traumatic memories and to generate auditory and visual “psychical” hallucinations of immense sensory and emotional complexity.16 Although excitatory stimulation of the temporal lobes, whether by epilepsy or electrodes, is a well-known cause of hallucinations, Penfield’s pathfinding work explains only a small percentage of the hallucinations that I encounter in clinical practice. We still have a long way to go in fleshing out our understanding of the hallucinations that occur with schizophrenia, visual loss (Bonnet’s syndrome), and psychedelic drugs.17

We do not know the identity of J. V., who remains cloaked in the anonymity of Penfield’s charts of 1,132 patients who went to the Montreal Neurological Institute to undergo operations for epilepsy from 1934 to 1960.18 Penfield’s cortical cartography was based on a series of patients that could likely never be assembled today. In the 1930s the indication for seizure surgery was epilepsy that could not be controlled by the era’s limited pharmacopoeia of phenobarbital and bromides. Even the subsequent mainstay of anticonvulsant medications, Dilantin (diphenylhydantoin), did not come upon the scene until 1938. In contrast, more than twenty approved anticonvulsant medications and four neuromodulation devices, which electrically stimulate cranial nerves, “deep” thalamic nuclei, and “eloquent” nonresectable cortex, are currently available to treat seizures.19 Progress in pharmacology over nine decades, electrical neuromodulation, and more sophisticated EEGs have obviated much of the need for seizure surgery (and intraoperative cortical mapping). This is not to deny the present-day need for cortical mapping using a single electrode or electrode grid and the utility of seizure surgery such as temporal lobectomy, cortical excision, or sectioning the corpus callosum in patients with intractable epilepsy. It is simply to point out that the clinical window of opportunity for a golden age of brain exploration opened by Penfield’s desperate cure for epilepsy has subsequently greatly narrowed.

For the last anonymous patient, Private F., the term unknown soldier might be more appropriate. Private F. was a British soldier on the western front when he sustained a through-and-through bullet wound to the back of his skull (occiput) on July 11, 1915. At this point in World War I, Gordon Morgan Holmes (1876–1965) was serving as a consultant neurologist to the British Expeditionary Force, and Private F. was brought to the field hospital in France where Holmes was stationed. Holmes, destined to become a Fellow of the Royal Society in 1933 (and one of the last practicing clinicians to gain the distinction) was intensely interested in the cerebral representation of vision. The concept that portions of each retina mapped anatomically onto specific regions of the visual cortex in the occipital lobes had been proposed in the late nineteenth century, but the precise cortical location of macular vision subserving the central visual field remained elusive.

As you read this text, you are relying on a small and circumscribed portion of your retina known as the macula lutea due to the yellowish tint seen during ophthalmoscopy performed by your eye doctor. The anatomy of the macula enables our highest-resolution vision (for fine print), and its brain connections were sorted out by Gordon Holmes (and his study of Private F.) during the Great War. Tragically, it took the high-muzzle-velocity gunshot wounds of World War I (and the Russo-Japanese War during 1904–1905) to create the straight entry-to-exit wounds that extirpated portions of soldiers’ visual cortices with almost surgical precision. In the relatively primitive conditions of a field hospital such as Boulogne, where ten physicians would care for nine hundred acutely wounded soldiers, Holmes would treat and examine the neurologically wounded. At night, wrapped in his British “warm,” he would pore over his case notes and formal visual fields measured by bringing a small hand perimeter to patient cots or bedsides. From 1914 to 1918, Holmes assessed the visual field defects of several hundred men with gunshot wounds of the occipital lobes. Lacking the resources of modern neuroimaging (CT scans lay sixty years in the future), Holmes relied on the primitive technology (from our modern perspective) of stereoscopic x-ray examinations of the skull showing the bony defects created by the trajectories of bullets and shrapnel to gauge the areas of damaged brain.

The Swedish neurologist Salomon Henschen (1847–1930) had found that the visual cortex of one hemisphere subserved vision in the contralateral hemifields of both eyes; for example, a patient with a tumor of the left occipital lobe will lose peripheral vision on the right in both eyes (in neurologic argot, a right homonymous hemianopsia). Although Henschen got the cortical representation of peripheral vision right, he went off the tracks when he localized central, or foveal, vision to the anterior or medial portion of primary (calcarine) visual cortex. And this is where Private F. provided the crucial finding that foveal vision resided in the “poles” (most rearward projecting tips) of the occipital lobes. His skull x-ray delineated a “large flake” of the inner table of the skull displaced against the poles of the occipital lobes. Visual fields demonstrated “a large absolute central scotoma” (blind spot) in each eye. Holmes concluded that macular vision was lost when the tips of the occipital lobes were “bruised” as the depressed skull fracture created by the bullet wound exerted pressure on them.20 Private F. and hundreds of other unknown soldiers enabled Holmes to infer “that each point of the retina is sharply represented in a corresponding point of the visual cortex”21 and to establish, once and for all, the anatomical locus of macular vision. Holmes modestly contended that his map of the cortical representation of vision was a “schema” or a “diagram [that] does not claim to be in any respect accurate.”22 Holmes, “indubitably an Irishman … in complexion, physique and predominantly temperament,”23 was being overly cautious, and a century after its publication I still rely upon his “schema” of vision when evaluating a patient with abnormal visual fields. (Although Holmes’s “schema” needs a slight tweak to align with later findings of greater so-called cortical magnification of central vision,24 it really has withstood the test of time.)

While alive, these five patients taught Drs. Harlow, Broca, Scoville, Penfield, and Holmes fundamental lessons about the organization of the human brain that had never been made clear or, at best, had been dimly recognized in the 1.9 million years of human sentiency since Homo erectus. All were aware and cooperative and coping as best they could with the loss of part of the neurological repertoire they had been born with. To a greater or lesser extent, this is how clinical neurology is taught (or at least how I learned the craft). For example, if the ulnar nerve is compressed, the patient has difficulty spreading her fingers apart, and if the fusiform gyrus of the occipital lobe is damaged by a cerebrovascular accident, the patient may not recognize familiar faces (prosopagnosia). This is classic lesion-based or clinicopathologic neurology in which the neurologist relies on the patient’s account of her symptoms (the clinical history), examines the patient for signs of impaired neurological function, and may demonstrate altered neuroanatomy with tests ranging from neuroimaging (CT, MR, or angiography), to tissue biopsy, to (in cases of a fatal outcome) autopsy. The clinical history is paramount. I teach medical students that if they have ten minutes to evaluate a patient, allocate nine minutes to taking a history, which will point to the specific part of the neurologic exam they should perform to arrive at an accurate diagnosis.

Unfortunately, the methods of clinical neurology do not work so well with Einstein, who was not seeking a neurologic diagnosis and whose brain did not demonstrate any lesions after multiple reviews by neuropathologists and anatomists (including our close analysis of postmortem brain photographs). For Einstein, in the stead of a chief complaint and first-person patient history, we must content ourselves with his body of scientific work in physics. It is incorporated into his voluminous papers and less plentiful writings about his examined life and personal epistemology that are scattered in sources such as his autobiographical notes25 and his letter to Jacques Hadamard citing the elements of “muscular type” in his thought.26 Moreover, can we somehow correlate Einstein’s achievements and personal testimony with the one-off gross anatomical arrangement of his 1,230-gram brain circa 1955? I confess that despite my years of study with gifted, canny, and at times visionary neurologists, neuroanatomists, neuropathologists, and neurosurgeons who spend inordinate amounts of time with the human brain as they hold, dissect, repair, medicate, and peer at it through a microscope and record its electrical activity, I must have missed the lecture on “How to Identify the Brain of a Genius.” Nor, for the record, am I prepared to give that lecture, but is our neuroanatomical study a good, bad, or indifferent place to begin? This begs the question: “Does neuroanatomy as observed by the human eye have more to teach us about brain function in the twenty-first century?”

The anomalous cortical anatomy of Einstein’s brain was a surprise both for us and the world at large. Dean Falk strongly believed that a closer look at Einstein’s cortex was warranted, and when Harvey’s “lost” photographs were unearthed, her expertise and unique skill set as a paleoneurologist led her to discover something new and exciting that had been hidden in plain sight since 1955. Putting our discovery aside, the prevailing belief in biology is that the gross anatomical features of the human body have been completely mapped. Reflecting that notion, the medical school I work at, Rutgers-Robert Wood Johnson Medical School, has not had a Department of Anatomy among its twenty-one departments and three institutes for well over a decade! (However, there are a few “stealth” anatomists still teaching first-year medical students under the aegis of the Department of Neuroscience and Cell Biology, but the stand-alone Department of Anatomy is going the way of the dodo and the passenger pigeon.) Before we shut the door on research into neuroanatomy, the discovery of lymphatic vessels within the brain’s coverings in 2015 bears mentioning. The body’s lymphatic drainage system was discovered in the mid-seventeenth century, and as a medical student I was taught that the brain uniquely lacks lymphatics. This anatomical “truth” was relegated to oblivion when Louveau and colleagues at the University of Virginia found brain lymphatic vessels acting as conduits for fluid and immune cells from the cerebrospinal fluid to deep cervical lymph nodes.27 The lymphatic drainage vessels nestled in the dural sinuses of the brain had eluded anatomists’ prying eyes from the time of Vesalius, nearly five hundred years ago.

Before dismissing “observational” normal neuroanatomy (such as Louveau’s lymphatics), which is subtler and doesn’t grab the scientific headlines as readily as its close relative, lesion-based neurology, let’s contemplate a few structures on the brain’s surface (or just below it) that have informed us about the brain’s function. Some have been known for centuries and others since 2015.

In the nineteenth century after examining the diseased brains of their patients, both Paul Broca and Carl Wernicke came to grips with the critical role played by the brain’s left hemisphere in the expression and comprehension of language. A century later Norman Geschwind, possibly the most gifted behavioral neurologist of my generation, wondered if normal brains displayed anatomical evidence for language function. He questioned the received wisdom that “there were no significant anatomical asymmetries between the hemispheres and that the cause of cerebral dominance [for language] would have to be sought in purely physiological or in subtle anatomical differences between the two sides.”28 What he found was “a highly significant difference between the left and right hemispheres in an area known to be of significance in language functions” in the course of postmortem examinations of one hundred adult human brains free of significant pathology. A single, sweeping cut of whole brain from front to back revealed that the planum temporale, a wedge-shaped portion of temporal lobe within the Sylvian fissure and bordered in front by Heschl’s gyrus, was demonstrably larger in subjects’ left hemispheres. This expanse of left temporal cortex encompassed Wernicke’s area, which is critical for understanding language. After noting the absence of such asymmetry in anthropoid apes and its presence in the endocranial cast of a Neanderthal man, Geschwind speculated that the inequality of the right and left planum temporale was “the first solid piece of evidence as to the evolution of changes in the brain responsible for language.”29

In 1968 asymmetry of the planum temporale was seen best with an axial (front to back) slice (literally, a slice … neuropathologists carry a large knife for that exact purpose when students and residents attend a teaching exercise known as brain cutting). The asymmetry of the temporal lobes can now be assessed in living patients’ MRI or CT scans but in 1968 Geschwind relied on postmortem dissection. In contrast, Dean Falk’s discovery of Einstein’s cortical knob (and its signature inverted omega shape) required a thoughtful inspection of Harvey’s photographs (and not a brain knife).30 The cortical knob is an example of observational surface neuroanatomy par excellence. Originally, it was declared to be a new anatomical “landmark” for the purposes of identifying the brain’s precentral gyrus, which controls movements.31 The knob itself proved to be a region of variant motor cortex anatomy specializing in hand movements, but it was not until later that it was found to be significantly associated with musicians, whose brains are endowed with larger right cortical knobs in string players and larger left cortical knobs in keyboard players.32 The original description of the cortical knob was based on functional magnetic resonance imaging (fMRI) of eleven subjects and not actual brains.33 After reviewing five of Harvey’s photographs reproduced in Witelson’s Lancet article,34 Dean Falk recognized that Einstein’s brain had a cortical knob on the right, and based on Bangert’s study of musicians,35 she made the first connection between Einstein’s variant precentral gyrus anatomy and his violin proficiency.36 As I write this in 2017, if you google cortical knob, most of the pictures are either MRI or fMRI images. Actual photographs of brains with cortical knobs are few and far between, but of those, the Google search algorithm strongly favors Harvey’s pictures of Einstein’s autopsied brain to epitomize the cortical knob. It goes without saying that in this age of rampant technology, most people learn about the easily observable macroanatomy of the cortical knob from the technical wizardry of nuclear magnetic resonance imaging rather than a simple snapshot of postmortem brain.

On February 2, 1776, Francesco Gennari, a medical student at the University of Parma, also sliced an ice-hardened brain from front to back. (Don’t assume that ice hardening is a quaint and antiquated technique … just ask any surgeon who takes frozen sections of tissue in the operating room.) Gennari stared intently at the cut surface of the occipital lobe and spied a whitish line paralleling the sinuous course of the interhemispheric portion of both occipital lobes (Figure 8.1).37 This was the debut of cerebral architectonics—the study of regional differences in cortical structure. As it turns out, Gennari’s “stripe” was composed of myelinated nerve fibers (which would appear white on gross inspection) located in the fourth of the six layers of the occipital cortex. Most important (and something Gennari could not have known), his line demarcates the primary visual (or calcarine) cortex. It would not be until 1892 that Henschen would deduce that the occipital cortex encompassing Gennari’s line was “no less than the primary visual center of the brain.”38

Figure 8.1. Francesco Gennari’s illustration of the prominent white band seen in the cerebral cortex of both occipital lobes (“D” and “F” in the lower portion of the plate) which would later be found to demarcate the primary visual cortex. (Mitchell Glickstein and Giacomo Rizzolatti, “Francesco Gennari and the Structure of the Cerebral Cortex,” Trends in Neurosciences 7, no. 12 [1984]: 464–467.)

Our understanding of occipital lobe neuroanatomy visible to the naked eye, and subsequently of its function, was built incrementally upon the work of Gennari, then Henschen, and then Holmes, among others. (Actually, many others. My apologies to the legacies of the numerous students of the eye and brain throughout neurohistory.) Before we get too comfortable with the occipital lobe and the neurology of vision, consider Maller et al.’s discovery that the occipital lobe “bends” in patients with depression.39 Eschewing a brain knife in the twenty-first century, they used MRI software to “slice” the brains of fifty-one patients with major depressive disorders and found that eighteen had a curving occipital lobe that wrapped around its fellow in the opposite hemisphere, while only six out of forty-eight controls displayed such a curvature. In contradistinction to Witelson’s earlier conclusion, we found that Einstein’s brain was not symmetrical, and Harvey’s photographs confirmed the presence of wider left occipital and right frontal lobes.40 Given her extensive experience with endocasts of hominid skulls, Dean Falk judged Einstein’s brain asymmetry to be an example of the most typical petalia (the protrusion of one hemisphere relative to another) pattern in humans. According to Maller, the increased lobar length and width of petalias differ from occipital bending, which is characterized by one occipital lobe (usually the left) crossing the midline and “warping” the interhemispheric fissure. He went on to speculate that “incomplete neural pruning” in depressed patients diminished the space available for brain growth, and because intracranial volume peaks at around age seven, “the brain may become squashed and forced to ‘wrap’ around the other occipital lobe.”41 This new revelation of naked eye neuroanatomy is both puzzling and emblematic of the complex multipotentiality of the brain. If you are looking for a common foundation for emotion and vision in the occipital lobe, keep looking. The larger left occipital lobe notwithstanding, there is no evidence that Einstein suffered from clinical depression. Although he had met and corresponded with Freud, Einstein was uninterested in psychotherapy or delving into the subconscious and declared, “I should like very much to remain in the darkness of not having been analyzed.”42 Maller would contend that “occipital asymmetry and occipital bending are separate phenomena,” and therefore, the particular anatomy of Einstein’s left occipital lobe cannot be regarded as a surrogate for depression. Although we can correlate neuroanatomical findings with some aspects of behavior, we are unable to prove causation (to the surprise of no one, including the ghost of David Hume). Why does a deformation of the occipital lobe “cause” depression, or why does a one-centimeter shortening of the paracingulate gyrus in the medial prefrontal cortex increase the likelihood of hallucinations in schizophrenics?43

The answers are unknown at present, but the observations are thought-provoking. Unlike Broca, we will not have to await a present-day Tan’s death to ascertain the site and extent of the brain lesion. Neuroimaging and its refinements (e.g., increase the strength of the MR magnetic field from 1.5 to 3.0 tesla and more detailed brain images result) provide details of in vivo brain anatomy that were unimaginable when I was a neurology resident in the 1970s, and the University of Virginia School of Medicine did not have a CT scan!

Even the dominance of genetics may come up short when we scrutinize cerebral architecture. Portions of the genome are partially informative as to the sequence of embryologic steps and building-block proteins used to construct and deconstruct (by means of apoptosis) a brain. As I have previously remarked, twenty thousand or so genes cannot reasonably be expected to contain enough information to completely hardwire the thousands of connections per neuron in a brain with eighty-five billion neurons. To bring this point home, consider the brain surface anatomy of identical (monozygotic) twins. After studying the MR brain imaging of twenty pairs of monozygotic twins, a research team at Heinrich Heine University in Germany found that the gyral and sulcal patterns in all the twin pairs were dissimilar. In other words, despite their identical genomes, each member of the twin pair had a grossly different neuroanatomy, leading the investigators to surmise “that the development of the convolutions of the brain is strongly influenced by nongenetic factors—that is, by environment, experience, or chance.”44 This is not to downplay the ascendancy of genomics in human biology, particularly since the watershed achievement of mapping out the entire sequence of human DNA in the first few years of the twenty-first century. For example, rare familial disorders of language have been linked to mutations of the FOXP2 gene.45 Although we know that this particular DNA sequence on chromosome seven is critical for language, we are in the dark regarding the details of neuroanatomy and neurophysiology that connect the dots from gene to behavior (in this case, language). We know some tantalizing generalities about how FOXP2 gene mutations lead to abnormal branching and length of axons and dendrites, but we are a long way from grasping (or imaging) how a normal FOXP2 gene fabricates a normal Broca’s area in a human frontal lobe.
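
The mismatch described in the preceding paragraph can be made concrete with back-of-the-envelope arithmetic. The sketch below is illustrative only: the neuron count and "thousands of connections per neuron" come from the text, the genome size is the familiar three-billion-base-pair figure, and the two-bits-per-base-pair capacity is a deliberately generous upper bound.

```python
# Rough, illustrative arithmetic: genome information capacity vs. synaptic connections.
# All figures are approximate estimates, not measurements from any cited study.

GENOME_BASE_PAIRS = 3_000_000_000   # ~3 billion base pairs in the human genome
BITS_PER_BASE_PAIR = 2              # generous upper bound: 2 bits per base pair (A/C/G/T)
NEURONS = 85_000_000_000            # ~85 billion neurons (the text's figure)
SYNAPSES_PER_NEURON = 1_000         # "thousands of connections per neuron" (low end)

genome_bits = GENOME_BASE_PAIRS * BITS_PER_BASE_PAIR
total_synapses = NEURONS * SYNAPSES_PER_NEURON

print(f"Genome capacity:     ~{genome_bits:.1e} bits")
print(f"Synapses to specify: ~{total_synapses:.1e}")
print(f"Bits per synapse:    ~{genome_bits / total_synapses:.5f}")
```

Even on these conservative assumptions, the genome could devote less than a ten-thousandth of a bit to any individual connection, which is the quantitative sense in which the wiring diagram must be left to development, experience, and chance.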

My analysis has concentrated on the lessons of gross neuroanatomy and neuropathology because in the case of Harvey’s photographs that is the vein we have mined. (I will briefly revisit the prospect of a re-examination of the microscopic neuroanatomy preserved and stained in Harvey’s slides before we bid adieu to Einstein.) As the sun sets on the ancien régime of the clinicoanatomic study of the brain, consider what twenty-first century neuroscience might do if a new Einstein were proclaimed to be walking in our midst. I will gladly entertain nominations for the intellectual cynosure of our time (aka, the smartest guy in the room, albeit from a global perspective) from the floor. How about Stephen Hawking and the explanation of black holes? Or Andrew Wiles, who formulated the proof of Fermat’s last theorem, which had remained unresolved for 358 years? And self-proclamations aside, let’s please agree to definitely not nominate Kanye West; both he and his publicists might profit from reading my chapter on genius.

Assuming we have identified and agreed upon the Genius of Our Age, what technology should we bring to bear on the study of his/her brain? Before we can embark on our latter-day Einstein research project, it’s de rigueur to craft a grant proposal including a scientific rationale and a budget estimating the project’s direct costs for materials, labor, construction, et cetera (and more importantly, the additional indirect costs, which are levied at 20 to 85 percent of the amount of, for instance, a National Institutes of Health grant). The indirect costs pay for the overhead expenses, such as the lighting and heating bills for the lab, and bring a lupine smile to the university administrator contemplating the manna about to rain down from grant-giving heaven. Just how much is the financial bite to “look under the hood” of brains of geniuses (or otherwise)? How about if we had $4.5 billion to effect “a comprehensive, mechanistic understanding of mental function”?46 Well, U.S. taxpayer, we do! It’s called the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, and it was launched by President Obama in April 2013. Its overarching goal “is to map the circuits of the brain, measure the fluctuating patterns of electrical and chemical activity flowing within those circuits, and understand how their interplay creates our unique cognitive and behavioral capabilities.”47 Aspiring to be the neurobiology equivalent of the Apollo man-on-the-moon missions or the Human Genome Project, the BRAIN Initiative is facing some very formidable obstacles in attempting to image the brain’s circuitry. Numbered among these are the immense (and uncatalogued) diversity of neurons and glia and the need to image their electrical and chemical activity within a time frame of milliseconds. How do you microscopically image a single cortical neuron and all its synapses that extend for over a meter and span nearly the entire brain volume? Given the present trade-off between imaging large volumes and achieving fine-grained resolution, we lack the capability to depict large volumes of neural tissue at a synaptic level. Viewing the all-important point of communication between neurons, the synapse, is limited to electron microscopy (EM) of tiny volumes of brain tissue, on the order of one-tenth of a cubic micron. At the scale accessible to EM, “a full cubic millimeter of brain volume resolved to the level of seeing every synapse would require many months or even years to image and far longer to analyze.”48

Facing these challenges, the BRAIN Initiative has charted a course for investigative neuroscience for the twenty-first century and has bold-faced the following high priorities:

  1. Discovering diversity with a census of neuronal and glial cell types.

  2. Maps at multiple scales: generate circuit diagrams that vary in resolution from synapses to the whole brain.

  3. The brain in action: produce a dynamic picture of the functioning brain.

  4. Demonstrating causality: link brain activity to behavior by directly activating and inhibiting populations of neurons.

  5. Identifying fundamental principles: produce conceptual foundations for understanding the biological basis of mental processes through the development of new theoretical and data analysis tools.

  6. Advancing human neuroscience, particularly as it pertains to understanding the human brain and treating its disorders.

  7. Integrate new technological and conceptual approaches produced in goals 1–6 to discover how dynamic patterns of neural activity are transformed into cognition, emotion, perception, and action in health and disease.49

Drs. Bargmann and Newsome and thirteen other neuroscience luminaries have provided a 146-page road map to “a single, integrated science of cells, circuits, brain, and behavior.” If we arrive at a better generic understanding of the brain, we will also likely gain insight into the brain of a genius … and just maybe the next Einstein when he comes rolling along. One problem with road maps in general is their inaccuracy where stretches of highway are under construction (and detours are rife). For the BRAIN Initiative to succeed, we’ve got until 2025 for a whole lot of “road building”: innovative brain technology (e.g., when do we get a “macro-micro” neuroimaging apparatus working in real time?) must be devised, and staggering amounts of data must be acquired and analyzed.

As we place our bets on the BRAIN Initiative’s chances of crossing the finish line to attain a “single, integrated” neuroscience, I will review several of the most promising technologies (functional neuroimaging, mapping the connectome, electrical recording/stimulation of neurons, and artificial intelligence simulation of the brain) for our rendezvous with a future Einstein. Some are already here, all are being refined, and some are “not ready for prime time.”

Functional MRI (fMRI) “is currently the best tool we have for gaining insights into brain function and formulating interesting and eventually testable hypotheses.”50 Advocates will declare that “[f]MRI enables us to see what is happening in a brain while a subject is thinking, and, under certain conditions, even to partially see into the contents of those thoughts.”51 However, detractors will dismiss fMRI as tantamount to magnetic phrenology. Like it or not, our newly minted Einstein will undoubtedly undergo an fMRI. Why? Aside from the cool color pictures of parts of the brain “lighting up” during a mental task, fMRI is based on the physiologic bedrock of the 127-year-old principle of neurovascular coupling.52 In essence, Sir Charles Sherrington, who incidentally coined the term synapse, postulated that the brain’s vascular supply “can be varied locally in correspondence with local variations of functional activity,”53 and the upshot is that if a particular portion of the brain, cortical module, or a group of neurons is activated by a motor, sensory, or cognitive task, the metabolic demands of the neurons will increase, and the neuronal blood supply must be augmented in response. Fast-forward to 1991, and the cover of Science is an MRI image of the visual cortex lighting up in a subject undergoing photic stimulation (Figure 8.2).54 We now know that the increased delivery of oxygenated (diamagnetic) hemoglobin to stimulated neurons lowers the local concentration of paramagnetic deoxyhemoglobin; with less magnetic disturbance of the water molecules in the surrounding tissue, the MR signal strengthens and becomes increasingly visible to the detectors in the MRI scanner. For instance, if a subject undergoing fMRI speaks (or even thinks) of words, Broca’s area of her language-dominant frontal lobe will produce an increased signal that appears on the MRI image. So when we encounter our living and breathing Einstein 2.0, we need only to inveigle him or her to undergo an fMRI, think profound thoughts, and voilà!—we have a picture of the “genius circuits” of the human brain. Or do we?

As a point of comparison, when I look at a cerebral angiogram with the column of intra-arterial contrast interrupted by a blood clot completely occluding the middle cerebral artery in a patient with cerebrovascular disease, I can really see, via old-fashioned analogue methodology, that a particular area of the brain is at risk for infarction of neural tissue (in other words, a stroke). If I still have doubts about actual brain tissue damage due to mitigating factors, such as collateral circulation, I can obtain a diffusion-weighted MRI, which will detect a focal area of increased brain signal due to the decreased Brownian motion (diffusion) of water “in the dying region deprived of blood.”55 Although diffusion-weighted MRI and fMRI both examine aspects of blood flow in the brain, the imaging line of demarcation between living and infarcted brain tissue is a whole lot clearer in the former than in fMRI, which is not used to diagnose stroke but rather to detect very weak differences in blood flow, on the order of 1 percent, between activated and nonactivated areas in a living brain. Given the random background activity fluctuations of up to 3 percent, extracting signal from noise is a major problem that is relegated to software. The stunning fMRI images depend on statistically based algorithmic detection and would be invisible to a mad scientist inspecting the subject’s brain through a portion of skull replaced with a plexiglass window. So the brain activity we “see” on fMRI is what the software package shows us, and one recent study of three commonly used software packages has shown that the detection of activated areas of the brain lighting up the scan can be overestimated (with a false-positive rate of up to 70 percent).56
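
The hazard of delegating signal extraction to software can be illustrated with a toy simulation of my own devising (none of the numbers come from the cited study): test tens of thousands of voxels that contain nothing but random noise, and an uncorrected statistical threshold will still declare hundreds of them “active.”

```python
# Toy illustration of the multiple-comparisons problem in voxel-wise fMRI analysis.
# Every "voxel" here is pure noise, yet an uncorrected threshold flags many as active.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_task, n_rest = 50_000, 20, 20               # arbitrary illustrative sizes

task = rng.normal(0.0, 1.0, size=(n_voxels, n_task))    # no true activation anywhere
rest = rng.normal(0.0, 1.0, size=(n_voxels, n_rest))

t_stat, p_val = stats.ttest_ind(task, rest, axis=1)     # one t-test per voxel
false_hits = int(np.sum(p_val < 0.01))                  # uncorrected p < 0.01

print(f"'Activated' voxels among {n_voxels} noise-only voxels: {false_hits}")
print(f"Expected from chance alone at p < 0.01: ~{int(0.01 * n_voxels)}")
```

Guarding against this requires correcting for the sheer number of comparisons; the 70 percent false-positive figure cited above reflects how badly things can go when that correction is miscalibrated.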

Figure 8.2. Functional magnetic resonance (MR) imaging of vision. This image of primary visual cortex was obtained during MR imaging with the intravenous contrast agent, gadolinium, while volunteers viewed a flashing checkerboard pattern. As demarcated by areas of increased signal midline at the hemispheric posterior poles, there was a 32 ± 10 percent increase of cerebral blood volume in the primary visual cortex during visual stimulation. Blood oxygen level–dependent (BOLD) MR imaging subsequently supplanted MR with gadolinium as the functional MR technique of choice. (American Association for the Advancement of Science “Cover,” in Science, November 1, 1991.)

Assuming that the area of increased hemodynamic perfusion of brain tissue is a bona fide signal and not some random noise, what does it tell us about the underlying neurophysiology? Is the fMRI identifying a population of neurons that are repeatedly and rapidly depolarizing with excitatory effect on other synaptically linked neurons in a circuit? Not necessarily. The increased metabolic demands of inhibitory neurons will produce a signal on fMRI that “may potentially confuse excitation and inhibition.”57 It’s as if functional neuroimaging can’t tell the difference between an on switch and an off switch!

Leaving the complexities of hardware and software aside, the test subject during an fMRI brain-mapping experiment faces some unique cognitive challenges. The linchpin of effective mapping is correlating changes in fMRI activation with changes in mental or sensorimotor activity. The tricky part is to establish a baseline mental state and then direct the subject to perform an assigned mental task, such as thinking of words, to assess the activation of Broca’s area in the frontal lobe. Before starting a period of word generation, the subject is asked “to think of nothing” to create a cognitive baseline. A little introspection will surely persuade you that allowing your mind to go completely blank might be possible for adepts of Zen koans with years of meditative training but most definitely is not possible for what most fMRI studies deem to be a normal human subject; that is, a cash-strapped college sophomore. Cerebral electrical activity never completely ceases (except in extraordinary neurophysiological circumstances, such as brain death or near-lethal barbiturate overdose, which display electrocerebral silence during an EEG). And this is to say nothing of ongoing unconscious brain activity, which is inaccessible to the subject’s awareness. Making your mental landscape a complete tabula rasa is probably not an option for most of us. Assuming the brain works linearly by piling a new cognitive task atop an underlying mental state, we are compelled to regard fMRI activation as a quantitative change in ongoing brain metabolism rather than a qualitative change from no activity to some localized activity.58
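
The subtraction logic at issue can be reduced to a single line of arithmetic. The numbers below are invented purely for illustration; the point is only that a task “activation” is the small difference between two nonzero states, never a jump from true silence to activity.

```python
# Minimal sketch of the subtraction logic behind task-based fMRI (invented values).
baseline_signal = 1000.0   # arbitrary units: the brain is never "off" at baseline
task_signal = 1012.0       # during word generation the local signal rises slightly

percent_change = 100.0 * (task_signal - baseline_signal) / baseline_signal
print(f"Apparent activation: {percent_change:.1f}% above an already-active baseline")
```

A change on the order of 1 percent riding atop a large, fluctuating baseline is why the statistical machinery discussed above carries so much of the interpretive weight.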

As neuroscientist (and volunteer subject) Stanislas Dehaene silently performs serial subtractions inside the claustrophobia-inducing confines of an MRI “tube,” the bilateral (L > R) inferior parietal lobules and prefrontal cortices of his brain light up.59 Does this pedestrian exercise in basic math have the same neural underpinnings as any virtuoso mathematical manipulations (akin to tensor calculus of a century ago) that our latter-day Einstein carries out? Something will surely light up on functional neuroimaging as Einstein redux thinks in eleven dimensions, but is it the “genius math center”? As a hidebound localization-inclined neurologist, I feel my pulse quicken at the prospect of an fMRI X marking the “math spot” on the cerebral map. My clinical intuition is bolstered as I recall my patients with Gerstmann’s syndrome,60 whose left parietal lobe strokes cause a quartet of deficits, including the inability to: (1) write, (2) distinguish right from left, and (3) identify fingers (as in “Show me your left ring finger”). Number four is dyscalculia—the patient can’t carry out simple mathematical operations.61 By the compelling logic of lesion-based analysis the left parietal lobe is our math center. Right?

Dear Reader, don’t buy it! No cortical module is an island. A (maybe the) defining hallmark of brain architecture is connectivity. That is not to gainsay the crucial role played by both parietal lobes in mathematical operations but “only in combining the capacities of several million neurons, spread out in distributed cortical and subcortical networks, does the brain attain its impressive computational power.”62 M.-Marsel Mesulam, who learned his behavioral neurology as one of Norman Geschwind’s residents at Boston City Hospital, has put “distributed networks” nicely into perspective: “The structural foundations of cognitive and behavioral domains take the form of partially overlapping, large-scale networks organized around reciprocally interconnected epicenters.”63 He has identified at least five large-scale networks subserving spatial attention, language, memory-emotion, executive function-comportment, and face-and-object identification.

How can we reconcile focal activations summoned forth on fMRI by arithmetic with large distributed neural networks? It seems that the widely distributed networks have relatively discrete areas of maximal activity (and vulnerability), which are eagerly sought out by a neurologist in the process of diagnosing a circumscribed structural brain lesion. My therapeutic options for “repairing” a damaged distributed neural network are limited, but if I can find a tumor or subdural hematoma overlying the left parietal lobe, neurosurgical excision of the tumor or drainage of the hematoma through a burr hole may dramatically improve the patient’s Gerstmann syndrome by restoring function at the “neural bottleneck” located at the dominant hemisphere’s parietal lobe. Other such network bottlenecks well known to clinicians are Wernicke’s area for receptive language function, the hippocampus-entorhinal complex for explicit memory, and the amygdala for emotion.64

Current functional neuroimaging (including positron emission tomography [PET], to which I have given scant attention) may be coming up short by providing a tip-of-the-iceberg perspective of the function of brain networks. Additionally, fMRI’s perspective on cubes (voxels) of brain tissue reaches levels of significance as determined by software (and not the human eye).65 The statistical thresholds of that software must be thoughtfully monitored to avoid “voodoo correlations” like the identification of a “cluster of activated voxels in the brain of a dead fish [salmon] that had been ‘asked’ to perform a social-perspective-taking task while lying inside a functional MRI scanner.”66 That said, I can assure you that the royal road for studying the next Einstein’s brain will be functional neuroimaging and not dissection with a brain knife or a microtome. If the BRAIN Initiative’s priority list can serve as a crystal ball for advances in MRI in the first half of the twenty-first century, our “genius study team” will obtain higher-resolution images by increasing the strength of the magnetic fields generated by the superconducting magnets in the MRI scanners. With greater field strength, the present two-millimeter resolution could be improved to less than one millimeter—the spatial level of cortical columns and laminae (layers). Not confined to the detection of strokes alone (vide supra), the finer-grained resolution of diffusion-weighted MRI (DW-MRI), which exploits the physico-anatomical characteristic of water molecules diffusing more rapidly lengthwise along axons, will refine maps of white matter connectivity. The extent of the “unseen underwater iceberg” of network distribution alluded to earlier can be elucidated further by the ongoing use of fMRI to detect task-initiated cerebral activation and the resting state fMRI (rfMRI). The latter reveals that “functionally-related areas that are co-activated during performance of a task also exhibit correlated spontaneous fluctuations when subjects are simply ‘resting’ in the MR scanner.”67
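
The resting-state observation in the last sentence amounts to computing correlations between spontaneous signal fluctuations in different regions. The sketch below is a made-up toy model, not an analysis of real data: two simulated “regions” share a slow fluctuation plus independent noise, and their time series correlate even though no task is ever performed. The time-series length, frequencies, and noise levels are all arbitrary assumptions.

```python
# Illustrative toy model of resting-state functional connectivity (rfMRI).
# Two regions share a slow spontaneous fluctuation, so their signals correlate "at rest."
import numpy as np

rng = np.random.default_rng(1)
n_volumes = 300                                        # e.g., 300 volumes of a resting scan
t = np.arange(n_volumes)

shared = np.sin(2 * np.pi * t / 50.0)                  # slow shared fluctuation (arbitrary)
region_a = shared + rng.normal(0, 0.8, n_volumes)      # region A: shared signal + noise
region_b = shared + rng.normal(0, 0.8, n_volumes)      # region B: shared signal + noise
unrelated = rng.normal(0, 1.0, n_volumes)              # a region with no shared signal

print("A-B correlation:        ", round(np.corrcoef(region_a, region_b)[0, 1], 2))
print("A-unrelated correlation:", round(np.corrcoef(region_a, unrelated)[0, 1], 2))
```

Maps built from such pairwise correlations are what rfMRI studies report as “functional connectivity.”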

Last, even the highest-resolution fMRI with graphic patterns of activation in the most beguiling colors is only as good as the neuropsychological task set forth for the subject. The neuropsychological paradigm could not be more effective in its straightforward simplicity, as in the case in which the subject is asked to tap his left ring finger (and his right precentral gyrus lights up). However, our intrepid neuropsychologist, armed with the best fMRI that NIH—read taxpayer—money can buy, will face a formidable task in charting the wellsprings of unconscious creativity. How did Albert Einstein recognize that a man falling from a roof would reveal “the deep connection between gravity and accelerated motion”68 and ultimately lead to the theory of general relativity? Is there a creative process residing in the unconscious mind or ineffable “protothought” that is beyond the reach of current functional neuroimaging? In the words of Wittgenstein, “Whereof one cannot speak, thereof one must be silent.”69

While fMRI and PET “give insights into the location of functionally defined cortical fields, tractography goes beyond this to reveal how such fields are connected.”70 Since 1985 we have been able to obtain images of the orientation of myelinated fibers connecting neurons in the living brain through the evolution of DW-MRI into diffusion tensor (DT) tractography “in which white matter tracts are reconstructed in three dimensions” based on water’s six- to eightfold higher anisotropic diffusion rate in a direction parallel (as opposed to perpendicular) to the pathways of myelinated axons. With the advent of DT imaging, we began to explore the connectome, which “is the totality of connections between the neurons in a nervous system.” Princeton’s Sebastian Seung has offered a sweeping and profound corollary: “Minds differ because connectomes differ.”71 Actually, the connectome can be approached from two different perspectives, macro- and micro-, and Seung has trained his sights on neuronal connectomes, which I would consider to be the microconnectome.
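
The “six- to eightfold higher anisotropic diffusion rate” can be expressed with the standard fractional anisotropy (FA) statistic computed from the three eigenvalues of each voxel’s diffusion tensor; FA runs from 0 (diffusion equal in all directions) toward 1 (diffusion confined to a single axis). The eigenvalues in the sketch below are invented to mimic a roughly sevenfold along-versus-across difference; they are illustrative, not drawn from any dataset in the text.

```python
# Fractional anisotropy (FA) from the three eigenvalues of a diffusion tensor.
# Eigenvalues are invented: diffusion ~7x faster along the fiber axis than across it.
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """Standard FA formula: 0 = isotropic diffusion, approaching 1 = strongly directional."""
    lam = np.array([l1, l2, l3], dtype=float)
    return float(np.sqrt(1.5 * np.sum((lam - lam.mean()) ** 2) / np.sum(lam ** 2)))

# Coherent white matter bundle: fast along the axons, slow across them
print("Fiber bundle FA:    ", round(fractional_anisotropy(1.4, 0.2, 0.2), 2))    # ~0.84
# Gray matter or CSF-like voxel: nearly equal diffusion in every direction
print("Isotropic tissue FA:", round(fractional_anisotropy(0.8, 0.75, 0.78), 2))  # ~0.03
```

Tractography algorithms chain together the principal diffusion directions of voxels with high FA to reconstruct the white matter pathways described above.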

Clinical neurologists of my ilk have learned to understand many neurobehavioral and language disorders as disconnection syndromes framed by the nineteenth-century “diagram makers” such as Wernicke and Dejerine. The seminal research on disconnection was forgotten after WWI but was revived in Geschwind’s 1965 classic paper “Disconnexion Syndromes in Animals and Man.”72 Although the term connectome would not be coined until 2005,73 Geschwind and his intellectual predecessors were nevertheless studying its diseases and lesions. However, until the last decade of the twentieth century, death was a prerequisite for visualizing the white matter connectivity of the human brain. The pathologist could see and feel the softening (encephalomalacia) or hardening (sclerosis—as in multiple sclerosis) of the brain lesions that disrupted neural circuitry and caused scarcely believable behavioral syndromes, such as being able to write but not read (alexia without agraphia). Pathologists also used microscopy to visualize the pathways of dying neurons and axons marked by retrograde and anterograde (Wallerian) degeneration. Mapping connectivity with white matter tract degeneration is a postmortem technique. Radioactive neuronal tracers, such as tritiated proline-fucose, injected into the eyes of macaque monkeys are transported along the axonal and transsynaptic highways from the retina to the optic nerve to the lateral geniculate body to the striate (visual) cortex. This technique of tracing the visual pathways led to David Hubel and Torsten Wiesel’s 1981 Nobel Prize but would be an unethical way to explore the connectome in living and breathing humans.

With the advent of DT imaging, mapping the connectome of living brains (known as hodology, from the Greek hodos, “path” or “road”) became reality. We have learned “that every area of the neocortex is linked with other cortical and subcortical areas by pathways grouped into five fiber bundles.”74 These bundles of axons include (1) association fibers running to ipsilateral cortical areas; (2) corticostriatal fibers linking cortex and the basal ganglia deep in the cerebral hemispheres; (3) commissural fibers that cross to the opposite hemisphere, with the corpus callosum being the largest; (4) corticothalamic fibers projecting to the thalamic nuclei, which are other deep hemispheric masses of gray matter; and (5) corticopontine fibers passing to the pons (Latin for “bridge”), which is the portion of the brain stem linking the midbrain to the lower brain stem (medulla oblongata).75 Every neurologist has to be familiar with diseases, such as multiple sclerosis, that target the central white matter exclusively, and the evolving hodologic maps go a long way toward clarifying why a demyelinating lesion at a particular white matter location causes a specific neurologic sign or symptom. Moreover, DT images of the connectome illustrate distributed neural circuits in vivo with startling clarity unimaginable to neuroscientists a mere generation ago. Is there a downside?

I may be looking a neuroimaging gift horse in the mouth, but if we are to progress in studying the connectome, current limitations (and future goals) can’t be swept under the rug. DT tractography is an anatomical exercise that cannot inform us whether a synapse is inhibitory or excitatory, nor can it indicate the direction in which biological “electricity” (the propagating action potential) flows along bundles of axons. When white matter pathways turn sharply, as in Wilbrand’s knee in the optic chiasm, or crossed fibers (decussations) mingle with uncrossed fibers, the resolving power of DT tractography may not be up to the task of delineating the actual neuroanatomy. And DT imaging is blind to the anatomical details of the synapse. MRI resolves one-cubic-millimeter voxels of brain tissue, but to “see” synapses EM must resolve tissue volumes that are a trillionfold smaller (cubes less than one hundred nanometers on a side!).76 The upshot is that currently with DT tractography “there is no way of being sure that diffusion pathways synapse in the grey matter at all.”77
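For readers who want to see the scale gap spelled out, here is a back-of-the-envelope sketch comparing a one-cubic-millimeter MRI voxel with a synapse-scale volume roughly one hundred nanometers on a side (the synapse-scale figure is the round number used above, not a measured quantity):

```python
# Rough scale comparison: a 1 mm^3 MRI voxel versus a synapse-scale
# volume ~100 nm on a side (illustrative round numbers only).
mri_voxel_nm3 = 1_000_000 ** 3      # 1 mm = 1,000,000 nm, so 1 mm^3 in nm^3
synapse_scale_nm3 = 100 ** 3        # a cube 100 nm on a side

ratio = mri_voxel_nm3 / synapse_scale_nm3
print(f"MRI voxel volume:     {mri_voxel_nm3:.1e} nm^3")
print(f"Synapse-scale volume: {synapse_scale_nm3:.1e} nm^3")
print(f"Ratio:                {ratio:.0e}  (about a trillionfold)")
```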

In the words of the self-confessed “neuronal chauvinist” Sebastian Seung, “We need to see neurons to find regional connectomes.”78 The intrepid souls who wish to obtain absolutely faithful representations of the microconnectome parcellate the neural tissue into one-cubic-millimeter blocks reconstructed from 33,333 sections cut to a thickness of thirty nanometers by microtomes or focused ion beams. A far cry from Cajal’s tracings of Golgi stained neurons, axons, and dendrites by light microscopy over a century ago, connectomics is arguably the most demanding investigation (sequencing of the three billion base-pairs of the human genome not excepted) of biologic structure ever attempted (Figures 8.3 and 8.4). Harvard’s Jeffrey Lichtman has upped the ante on Koch’s estimate of one hundred thousand cells per cubic millimeter of primate cortex79 and has conceded that “the actual number of different objects and their synaptic interconnections in a volume of brain tissue is unknown and, at the moment, even difficult to estimate or bound.”80 Each thirty-nanometer slice is teeming with axons, segments of myelin sheaths bounded by nodes of Ranvier, dendrites, synapses, synaptic vesicles, mitochondria, glia, et cetera. Despite the staggering complexity of the anatomy crammed into a very tiny space, acquiring microscopic images of the neural cross-sections is not the hardest part. The going gets tough with “labelling and tracing each neuronal process as it wends its way through the stack of images.”81
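The sectioning arithmetic, and a rough sense of the resulting image data, can be worked out in a few lines. In this sketch, the block thickness and section thickness are the figures quoted above; the four-nanometer pixel size and one byte per pixel in the second half are my own illustrative assumptions, not numbers from the text:

```python
# How many 30 nm sections does it take to get through a 1 mm block?
block_thickness_nm = 1_000_000       # 1 mm expressed in nanometers
section_thickness_nm = 30            # thickness of each ultrathin section

sections = block_thickness_nm / section_thickness_nm
print(f"Sections per millimeter of tissue: {sections:,.0f}")   # ~33,333

# A very rough data-volume estimate, assuming EM images at 4 nm per pixel
# and one byte per pixel (both assumptions are mine, for illustration only).
pixels_per_section = (block_thickness_nm / 4) ** 2
total_bytes = pixels_per_section * sections
print(f"Approximate raw image data for one cubic millimeter: {total_bytes:.1e} bytes")
```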

Figure 8.3. The connectome as depicted in 1909. Cajal’s 1909 drawing of a microscopic section of mammalian cortex used the Golgi technique to selectively stain a random sample of pyramidal neurons (b and c) and their connecting axons and dendrites. (Santiago Ramón y Cajal, Histologie du système nerveux de l’homme et des vertébrés [Paris: A. Maloine, 1909].)

The only connectome completely mapped to date is to be found in the previously cited roundworm Caenorhabditis elegans, a one-millimeter hermaphroditic soil-dweller that has neither a respiratory nor a circulatory system. C. elegans has 302 neurons, and it took South African biologist and 2002 Nobel Prize–winner Sydney Brenner and his team over a dozen years to analyze the electron microscope images of fifty-nanometer worm slices “to sort out which synapses belong to which neurons.”82 With the 1986 publication of its complete 340-page connectomic road map (“The Structure of the Nervous System of the Nematode Caenorhabditis elegans”83), C. elegans joined the group of workhorse lower organisms—fruit fly (genetics), squid (axonal action potential), and sea hare (synaptic basis of learning)—that have just the right anatomy and physiology to catapult scientific research forward.

Figure 8.4. The connectome as depicted in 2011. Stacked images from Daniel Berger’s 2011 electron microscopy of layer five of mouse sensorimotor cortex reconstruct a 6 × 6 × 7.5 micron volume composed of neurons, glia, axons, dendrites, and subcellular organelles such as mitochondria. (Daniel Berger, “Stack of Tissue Images Ready for Construction,” Connectome, 2011. http://connectomethebook.com/?portfolio=atum-cortex-reconstructions.)

If it took a dozen years to complete the wiring diagram for a creature that has a paltry 302 neurons, the prospects for mapping larger nervous systems, including the eighty-five-billion-neuron human brain, become formidable (if not impossible). If done manually, plotting out the connectome “would take a trained technician a million working years for each cubic millimetre of brain; luckily computer vision and machine-learning algorithms speed things up.”84 Additionally, neuroscientists, such as Seung, and computer scientists, such as Zoran Popovic, have crowdsourced connectome science by creating videogames in which gamers with no particular neuroscience background compete at reconstructing three-dimensional images of neurons and their connections. Using a mouse to click onto an errant axon pictured in an Allen Institute for Brain Science electron micrograph of human or murine brain, the citizen scientists have increased “the number of neuron reconstructions from 2.33 a week that a team of professional analysts were doing on their own, to 8.3 reconstructions a week.”85 Faster reconstructions aside, “in connectomics, the size of the input set is at the high end of the big data range, and possibly among the largest data sets ever acquired” (my italics).86

Simply put, “neuroscientists cannot claim to understand brains as long as the network level of brain organization is uncharted.”87 On the road to the connectome, we may encounter unimagined structures, such as the “crown-of-thorns” neuron that wraps around the entire circumference of the mouse’s brain. Could yet-to-be-discovered human analogs of this unprecedentedly long class of neuron, extending from the small, thin sheet of neurons in the claustrum, coordinate “inputs and outputs across the brain to create consciousness”?88

Fundamental questions about the finest-grained large-scale anatomical study ever proposed remain unanswered. Does neuroanatomy tell us anything about neurophysiology? It is believed that “the structural wiring details [of neural circuits] per se are insufficient to derive the firing patterns” of neurons. Even the pure morphology of neurons has its limits: neurons that look alike may belong to different molecular classes that are apparent only on immunofluorescent staining with EM.89 Moreover, the “moving parts” of vesicle release of neurotransmitters at the synapse would elude connectomic electron micrographs. The image of a vesicle being released (exocytosed) from the presynaptic membrane would be similar to a freeze-action photo of a curveball leaving a pitcher’s fingertips. The photo will not tell you if the pitch will be a ball or a strike, and EM cannot predict what the neurotransmitter contents of the vesicles will do at the postsynaptic receptors. “There is the possibility that the wiring is normal but the receptors are not.”90 The sheer magnitude of connectomics and its daunting prospects may compel us to trim our sails and “consider the possibility of reconstructions of neuronal substructures as opposed to whole brains and hope that testing these substructures will reveal enough modularity and regularity to allow deductions of interesting general organizational principles and overall function.”91

Although Thomas Harvey’s dissection of Einstein’s brain preceded the conception of the connectome by fifty years, the applicability of this neuroscientific approach is readily apparent when we review Dr. Weiwei Men’s research on Einstein’s corpus callosum, as discussed in chapter 5. The corpus callosum, the largest commissure in the human brain, is effectively a white matter bridge connecting the right and left cerebral hemispheres. Dr. Men measured the midline structures exposed by Harvey’s photographs of Einstein’s bisected brain and established that Einstein’s corpus callosum was significantly larger than that of both young and old controls.92 This must be regarded as proof positive that Einstein’s connectome (like his cortex) was exceptional. Can the same assertion be made regarding Einstein’s microscopic connectome? Not yet. Of the two-thousand-plus microscope slides sectioned by Thomas Harvey and Marta Keller in 1955, many (the precise number is unknown) were processed with Weigert stain, which reveals myelinated axons under light microscopy. Despite the plentiful availability of slides that displayed the myelinated connections of Einstein’s brain, white matter was largely ignored in the five peer-reviewed studies (published between 1985 and 2006) of Einstein’s neural microanatomy that focused on cell counts and glial morphology (see chapter 4). Although David LaBerge (in chapter 1) has speculated about the possible beneficial role in cognition played by longer apical dendrites in layer five of the neocortex,93 Einstein’s microconnectome remains uncharted territory.

If our hypothetical twenty-first-century Einstein should permit neuroscientists to get “up close and personal,” even dyed-in-the-wool connectomists concede that “the ideal brain-imaging technology would provide both a complete map of the activity of all neurons and synapses in real time during normal behaviors. Even better would be to do this in a human being who can report on their thoughts while behaving. Unfortunately we are a long way from such technologies.”94

When a clinical neurologist ponders a disturbance of the “activity of all neurons and synapses,” her/his test of choice is the EEG. In a routine EEG, sixteen to twenty-five (actually, twenty-three at our hospital) silver, tin, steel, or gold electrodes coated with silver chloride are placed on the scalp. They do not record the electrical activity of eighty-five billion neurons; what the montages of electrodes detect is “an attenuated [by layers of scalp, skull, and meninges] measure of the extracellular current flow from the summated activity of many neurons.”95 Each electrode detects the activity of neurons populating approximately six square centimeters of underlying cortex; synaptic activity (measured in microvolts) of the pyramidal neurons is a principal source of EEG activity. The normal EEG has been studied since Hans Berger’s recording of “brain waves” with a string galvanometer in 1924. The frequency of the waves ranges from one-half to thirty hertz (Hz, or cycles per second) and for clinical purposes is divided into delta (one-half to four hertz), theta (four to seven hertz), alpha (eight to thirteen hertz), and beta (thirteen to thirty hertz) rhythms. The summation of the neuronal synaptic currents accessible to surface electrodes provides no insight into thought content. We do know that alpha rhythms are associated with “relaxed wakefulness” and that lower-amplitude beta activity may be elicited by “intense mental activity.” A slowed frequency in the delta range occurs when we fall into deep sleep or lapse into coma. Lest you assume that the slower EEG frequencies are an exclusive marker for impaired cognition, I occasionally encounter patients who make no meaningful response to external stimuli (coma) and whose EEG records a predominant rhythm of eight to twelve hertz. This is a so-called “alpha coma,” which can be seen in patients with brain stem lesions or diffuse hypoxic brain damage. Electroencephalography is an invaluable diagnostic tool for determining if a patient has epilepsy or certain types of altered mental status. For example, it’s great for detecting cognitive dysfunction associated with liver failure (hepatic encephalopathy) but is insensitive to the mental ravages of Alzheimer disease or schizophrenia.
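As a minimal illustration of the clinical bands listed above, here are a few lines mapping a dominant frequency to its conventional rhythm name (the handling of exact boundary values, and of the gap between seven and eight hertz, is a simplification of mine):

```python
def eeg_band(frequency_hz: float) -> str:
    """Map a dominant EEG frequency (Hz) to its conventional clinical band.

    Band limits follow the ranges quoted in the text; values between
    seven and eight hertz are grouped with theta here for simplicity.
    """
    if 0.5 <= frequency_hz < 4:
        return "delta"
    elif 4 <= frequency_hz < 8:
        return "theta"
    elif 8 <= frequency_hz <= 13:
        return "alpha"
    elif 13 < frequency_hz <= 30:
        return "beta"
    return "outside the conventional 0.5-30 Hz clinical range"

# Examples drawn from the text
print(eeg_band(10))   # alpha -- relaxed wakefulness, or an "alpha coma"
print(eeg_band(2))    # delta -- deep sleep or coma
```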

Although over ninety years of electroencephalographic research has taught us that brain “rhythms do not equal reasoning,” the possibility of a genius signature on EEG recordings was taken very seriously when Einstein, his Institute for Advanced Study colleague John von Neumann, and Norbert Wiener, the author of Cybernetics, underwent EEGs sometime in late 1950.96 It has long been known that alpha rhythm can be “blocked” or desynchronized when the subject focuses his or her attention. The eminent neurosurgeon Wilder Penfield related an account that “Einstein was found to show a fairly continuous alpha rhythm while carrying out rather intricate mathematical operations, which, however, were fairly automatic for him. Suddenly his alpha waves dropped out and he appeared restless. When asked if there was anything wrong, he replied that he had found a mistake in the calculations he had made the day before.”97 The blocking of the alpha rhythm reflected Einstein’s “concentration of attention” rather than his thought content per se. Nevertheless, Alejandro P. Arellano, the National Institute of Mental Health investigator who recorded Einstein’s, von Neumann’s, and Wiener’s brain wave tracings, found different distributions of alpha and theta rhythms during “intense mental work,” such as thinking about relativity during Einstein’s session. Arellano theorized that the changing distribution and synchronicity of these rhythms comprised different “scanning mechanisms” that could facilitate “brightness and originality, creative and abstractive thinking.”98 This “scanning” theory of higher mental function remains mostly metaphorical (and unconfirmed).

If routine EEGs cannot deduce mental content, does finer-grained micro-EEG technology hold promise? Rather than the relatively crude recording of the summated activity of millions of neurons in the six square centimeters of cortex underlying each scalp electrode, the Brain Activity Map Project aspires “to record every action potential from every neuron within a circuit.”99 Is it possible to parse the eight-to-twelve-hertz alpha waves detected on routine EEG into millions of measurements of their constituent neuronal and synaptic electrical events? More likely, a breakthrough technology will be required to attain the critical goal of creating an accurate map of the brain’s functions, which depend “on rapid reversals of membrane potential, known as action potentials, to transmit signals between one part of a neuron and another distant part and … [are contingent] on smaller, slower changes in [membrane] potential at sites of synaptic contact (i.e. synaptic potentials) to mediate the exchange of information between one cell and the next.”100 A current mapping technology is a four-by-four-millimeter grid of one hundred silicon electrodes that is implanted permanently in the cortex of rats, monkeys, and humans (for neural prosthetics trials). These grids can measure the extracellular action potentials of tens to hundreds of individual neurons.101 The anticipated advances in penetrating electrode design have led the neuroscientists of the Brain Activity Map project to predict the capability of monitoring hundreds of thousands of neurons by the mid-2020s. And don’t forget … electric current can flow in two directions, allowing electrodes both to record and to “influence the activity of every neuron individually in these circuits, because testing function requires intervention.”102 One caveat: as the Brain Activity Map initiative brings new understanding to circuit neuroscience, it aspires to develop “novel devices and strategies for fine control brain stimulation,”103 which is great when treating the “diseased circuits” of Parkinson’s disease but worrisome if it lifts the lid on the Pandora’s box of mind control. The latter option is not usually spelled out in NIH grant proposals (but could be a real selling point for Defense Advanced Research Projects Agency [DARPA] funding).

If “shocking” neurons with invasive electrodes seems regrettably crude and destructive, the adroit manipulation of neuronal circuitry with light became part of the neuroscience tool kit in 2005 with the arrival of optogenetics.104 Microbial opsins are light-sensitive proteins that can control ion channels in cell membranes. If an opsin gene is inserted (transfected) into a neuron, light can turn the neuron on or off with millisecond precision by opening (or closing) ion gates and triggering (or suppressing) the neuronal depolarization that underlies the action potentials by which nerve cells communicate. Stanford’s Karl Deisseroth, who introduced channelrhodopsin-2 into mammalian neurons, has also combined fiber optics and electrodes (optrodes) to measure the electrical activity of neural circuits turned on by exposing “opsin-ized” neurons to light.

Recording neural electricity at the microlevel of individual opsin-bearing neurons, with every action potential spike measured in millivolts, will create an unprecedented data deluge, and if we are to sift any meaning from a petabyte (a million gigabytes) of data, we will become increasingly dependent on computer analysis and on conclusions drawn from the machine-learning recognition of data patterns. “When they [computers] come up with these conclusions, we have no idea how, we just know the general process.” Venki Ramakrishnan, a Nobelist and a student of the deep biological structure of the ribosome, has encapsulated our growing dilemma: “So we’re in a situation where we’re asking, how do we understand results that come from this [computer] analysis? This is going to happen more and more as datasets get bigger.”105 Data sets of this magnitude may be like Jorge Luis Borges’s apocryphal “Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forbears had been, saw that the vast Map was Useless.”106 The neuroscientist’s invaluable standby, an intuitive grasp of the phenomena at hand (corn geneticist Barbara McClintock’s “feeling for the organism”), will be beset by the vast complexity of the neural data generated by brain activity mapping, connectomics, and fMRI. By itself the acquisition of unimaginably immense data will not solve the “problem” of the human brain. Cognitive scientist and veteran subject of 84 fMRIs Russell Poldrack contends that “we know the entire connectivity of C. elegans … but we still don’t understand how C. elegans does what it does.” Data alone does not assure understanding, and “people are coming to realize that theory is important.”107 The most basic questions remain to be answered: “We need to know what level of cellular activity produces thought. ‘Does it take 1000 cells? 10 million? 100 million?’ ”108

Every era has drawn on its own complex technological zeitgeist to devise a working model of the brain. Candidate technologies have included Descartes’s clock, the loom, the telephone exchange, a chemical plant, a radar scanner, a hologram, and currently the computer.109 Setting aside the massive programmatic studies underway on the brain’s biologic structure and function, can we combine our exponentially accruing knowledge of human neurobiology with the twenty-first-century dominance and ubiquity of the computer to begin to understand (and presume to recreate) the brain of an Einstein? The audacious proposal to build and program a machine capable of replicating any human’s (let alone Einstein’s) mind is the ultimate quest of “strong” artificial intelligence (AI). Strong AI proposes that the brain is a very clever computer that runs a program to produce the mind. Simply put, computers can think … or, at least, proponents of AI believe that they can. In 1936, newly arrived in Princeton (and about four blocks from Einstein’s house), Alan Turing was going over the proofs of his “On Computable Numbers, with an Application to the Entscheidungsproblem”110 and ushering in the ur-computer, the universal Turing machine.111 Turing’s “thirty-five pages would lead the way from logic to machines.”112 By 1950 Turing confided that another fifty years in the future “one will be able to speak of machines thinking without expecting to be contradicted”; however, in 1950 he considered the question “Can machines think?” to be “too meaningless to deserve discussion.” Rather than attempt to define the terms machine and think, he replaced the question of machine intelligence with the questions posed by “the imitation game,”113 which came to be known as the Turing Test.114 The players in the game were a man, a woman, and an “interrogator” in a separate room. By asking questions and receiving responses via teletype (to preclude auditory cues), the interrogator had to judge which subject was the man and which was the woman. Turing then changed the rules of the game when “a machine [a digital computer] takes the part of A [a man],” and the interrogator’s charge became to distinguish the remaining human from the computer. Although Turing’s strong AI prophecy was that in a half-century’s time the interrogator would have a 30 percent or more chance of confusing man and machine after posing questions for five minutes, he had an inkling that “machines carry out something which ought to be described as thinking but which is very different from what a man does” (my italics).115

Let’s fast-forward to 1980 and find out how very different machine thinking can be. Philosopher John Searle at Berkeley proposed a thought experiment in which you are locked in a room with “baskets full of Chinese symbols” (ideograms). You do not speak Chinese. You are given a rule book in English that specifies the manipulation of the Chinese symbols “purely formally, in terms of their syntax, not their semantics.” Unbeknownst to you some of the Chinese symbols passed into your room by people outside are “questions,” and the rulebook allows you to select symbols that are “answers to the questions.” In time you get so good at symbol manipulation that the answers you pass out of the room are “indistinguishable from those of a native Chinese speaker.” You have become adroit at formal syntactical manipulations of Chinese symbols without learning or understanding a single word of Chinese! Now imagine that the sole denizen of the Chinese room is a digital computer. “All the computer has, as you have, is a formal program for manipulating uninterpreted Chinese symbols.” Searle has driven home his compelling argument against the strong AI position “with a very simple logical truth, namely, syntax alone is not sufficient for semantics, and digital computers insofar as they are computers have, by definition, a syntax alone.” Furthermore, he concluded that “no computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.”116

The cyberneticist and the philosopher have expressed diverging views on the limits of machine intelligence and whether the brain is a computer. Inherent in Turing’s and Searle’s dialogues across decades is the common ground of defining the limits of the computer-like qualities of the human brain. How so? Superficially, the appearance of three pounds of biological wetware chockablock full of neurons, glia, vessels filled with blood, and cerebrospinal fluid is strikingly different from that of a box filled with silicon chips that interfaces with its user through a video display terminal, a keyboard, and a mouse. (Supercomputers such as the Cray Titan at Oak Ridge National Laboratory are a little bigger; Titan fills roughly a basketball court with two hundred cabinets.)

Appearances can be deceiving, but when did we begin to bracket brains and computers together? The design for the world’s first general-purpose computer, the Analytical Engine, was drafted in 1837 by Charles Babbage, the then Lucasian Professor of Mathematics at Cambridge. This mechanical marvel would likely have been steam-powered and considerably heavier than its fifteen-ton predecessor (Difference Engine No. 1). It was never built, and posterity has no record of Babbage’s musings over mind and machine. Neither Babbage nor his cyberneticist descendant, Alan Turing, was a biologist, and we must look to a savant of the life sciences for an initial scholarly foray into the computability of the brain. That multifaceted scientist/clinician was Warren Sturgis McCulloch (1898–1969), who trained in 1928 as a neurologist at Bellevue, then in 1932 as a psychiatrist at the Rockland State Hospital for the Insane, and then in 1934 as a Sterling Fellow in neurophysiology at Yale, which was the epicenter for American brain science (see chapter 3). In this era before rigid distinctions were imposed by medical specialty board certification, McCulloch would best be designated as a neuropsychiatrist, but his “avowed intent of learning enough physiology of man to understand how brains work”117 would tear down the interdisciplinary barriers between cybernetics and neuroscience. Working in Dusser de Barenne’s lab at Yale, he became adept at cortical localization using strychnine neuronal stimulation. When he moved to the Illinois Neuropsychiatric Institute in 1941, McCulloch’s expertise in neurophysiology and exposure to mathematical biology were foundational in envisioning the all-or-none properties of neuronal discharges as equivalent to the propositional logic of Boolean functions (AND [conjunction], OR [disjunction], and NOT [negation]).118 In 1943 his collaboration with Walter Pitts brought forth the landmark paper “A Logical Calculus of the Ideas Immanent in Nervous Activity.”119 They sought nothing less than to bring “work on mathematical logic and problems in computability to bear on our understanding of the brain.” This paper may have heralded the onslaught of the “brain-as-a-computer” metaphor, and McCulloch later conceded that he and Pitts had been inspired by “Turing’s idea of a ‘logical machine.’ ”120 From his study of functional cortical organization, McCulloch regarded the nervous system as “a net of neurons, each having a soma and an axon … [and] adjunctions or synapses.” Launched by McCulloch’s tenet that the brain was built of discrete anatomical/functional units known as neurons, the ensuing decades would propose vacuum tubes and, later, transistors as the equivalent of neurons for computer “brains.”
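To make the McCulloch-Pitts equivalence concrete, here is a minimal sketch (modern Python, not their 1943 notation) of an all-or-none threshold unit whose weights and thresholds realize the Boolean AND, OR, and NOT mentioned above; the particular weights and thresholds are illustrative choices, not values taken from their paper:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """All-or-none threshold unit: fire (1) if the weighted sum of the
    inputs reaches the threshold, otherwise stay silent (0)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Boolean logic realized by threshold units (illustrative parameters)
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)
NOT = lambda a:    mcculloch_pitts([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```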

Setting wiring diagrams or neural nets aside for the moment, if we hope to “build” a brain of normal human or even Einsteinian capacity we need to know if our synthetic neuronal building blocks are the same as the ones that Mother Nature provides. It does not require a graduate degree in solid-state physics or cellular biology to apprehend that transistors, with their tripartite base, collector, and emitter fashioned out of silicon and tweaked with electron-donor dopants, are at best ersatz neurons and very different from the real thing. The profound difference in the functional repertoire of transistors and neurons accounts for the radical difference in the architecture of these man-made artifacts and living cells. Transistors are semiconductors that can switch or amplify signals. They do not create their own energy or repair, modify, and maintain their physical structure. Neurons do. They carry around a full complement of nuclear DNA (and, although most mature neurons no longer divide, they arise from dividing precursors), and they produce their own energy currency, ATP, in their mitochondria. Neurons manufacture proteins in their ribosomes to repair and modify their structure, which in the case of spinal anterior horn cells can extend a meter or more, requiring an elaborate axonal transport system to ferry proteins from cell body to distant synapse. “If the dream of a bacterium is to become two bacteria,”121 the dream of a neuron is to synapse with another neuron, and this requires very “untransistor-like” capacities, such as elongating and branching dendrites and axons or the learning-induced remodeling of synapses. Neurons collaborate with other cells (oligodendroglia) to acquire myelin insulation wrapped around axons to ensure the rapid conduction of action potentials. Neurons even have their very own cytoskeletons, necessary for structural support, intracellular transport, and the motility of their growing processes.

Are you convinced of the irreconcilable differences between inert silicon chips (integrated circuits with numerous tiny transistors) and living carbon cells? I am … but I fear that I’m swimming against the riptide of strong AI. Even Christof Koch, a spokesman for the neurobiology side of the equation, endorsed the proposition that “synapses are analogous to transistors.”122 The neuroscientist Gary Marcus has not foreseen “any in-principle limitation” to machine thinking and has averred, “Carbon isn’t magical, and I suspect silicon will do just fine.”123 Notwithstanding the assertions of Koch and Marcus that silicon and carbon-based chemistries are both plausible options for fabricating brains, evolution clearly did not favor silicon as a platform for biochemistry. Astrophysicist Neil deGrasse Tyson has explained that carbon and silicon atoms “have similar outer orbital structures of their electrons.” “Why not imagine silicon-based life [and brains]? Nothing stopping you in principle. But in practice, carbon is about ten times more abundant in the universe than silicon. Also, silicon molecules tend to stay tightly bound, making them unwilling players in the world of experimental chemistry that is life.”124 Despite the visions of sci-fi authors and aspirations of strong AI, biological variation and natural selection over the unfathomable epochs of deep time (to be discussed later) have not brought forth a silicon neuron. Nature does not exploit every opportunity and different outcomes might result if we could start over from the point when life emerged on earth 3.7 billion years ago, but if the lessons of evolutionary history are to be heeded, a silicon nervous system identical to the human brain never was and never will be.

Let me climb down from my soapbox and assume that we can conflate neurons and transistors. If so, how many transistors are required to meet the specifications of an Einstein brain? If we allot eighty-five billion neurons to a “base model” human brain, that milestone has already been surpassed in silico by IBM’s TrueNorth project, whose large-scale simulations have encompassed 530 billion neurons. And yet, although supercomputers can defeat chess grandmasters and prevail over Jeopardy! champions, “things every dummy can do, like recognizing objects or picking them up, are much harder.”125 It appears that despite the advent of an infernal machine simulating 530 billion ersatz neurons, the “singularity”—a kind of cybernetic End of Days in which AI surpasses human intelligence and takes over—has yet to transpire (and may never). One critical difference between brain and computer may lie at the level of synaptic connections. “The typical gate of a transistor in the central processing unit is connected to a mere handful of other gates, whereas a single cortical neuron is linked to tens of thousands of other neurons.”126 Moreover, Cambridge physiologist Dennis Bray has been led to speculate that “in an artificial machine, we must consider that something is missing from the canonical microchip.”127

Duly noted, but hope springs eternal in the hearts of the strong AI congregation. If we are ever to reach the Holy Grail of a bona fide thinking machine, New York University’s Gary Marcus has envisioned three routes to success: (1) “Bigger and faster machines,” (2) Better “learning algorithms” and “Bigger Data,” and (3) “Understand what it is that evolution did in the construction of the human brain.”128

Q. Bigger and faster machines?

A. I suppose that if TrueNorth’s 530 billion surrogate neurons are insufficient, we can hope that Moore’s law (which states that the number of transistors on a microchip will roughly double every two years) will lead to even bigger supercomputers, but microchips may “hit the wall” when the miniaturization of circuit features “get to the 2–3 nanometre limit,” which is about ten atoms across.129
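As a toy illustration of the doubling rule in the parenthesis above (the starting transistor count and the twenty-year span are arbitrary choices of mine, and the two-year doubling period is a rule of thumb, not a physical law):

```python
# Moore's law as a rule of thumb: transistor counts double roughly every two years.
def transistors_after(years: float, starting_count: float,
                      doubling_period_years: float = 2.0) -> float:
    return starting_count * 2 ** (years / doubling_period_years)

# Illustrative only: starting from an arbitrary 10 billion transistors,
# two decades of doubling yields roughly a thousandfold increase.
print(f"{transistors_after(20, 10e9):.2e}")   # ~1.0e13
```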

Q. Better learning algorithms?

A. Marcus’s query presupposes a machine that gets smarter as it accrues experience, also known as learning, and AI based on deep learning has largely supplanted “Good Old-Fashioned AI” (GOFAI), which was based on symbolic representations and required top-down programming. Machines that can learn on the fly, without hand-coded rules, accomplish this very human skill set with interconnected layers of silicon stand-ins for neurons. These neural networks (nets) can modify the connections between their layers with backpropagation as they detect error signals. Learning takes place as neural nets adjust their own connection strengths “to find the link between input and output—cause and effect—in situations where the relationship is complex or unclear.”130 As far as we know in neurobiology, the final common pathway in the brain’s modification of its connections is increased synaptic growth or neurotransmitter release,131 not a purely informational change in a neural code. “Unfortunately, such [neural] networks are also as opaque as the brain. Instead of storing what they have learned in a neat block of digital memory, they diffuse the information in a way that is exceedingly difficult to decipher.”132
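The backpropagation idea mentioned above can be sketched as a toy example: a tiny two-layer network repeatedly nudges its connection weights against an error signal until it reproduces the XOR function, a classic test that no single-layer network can learn. The layer sizes, learning rate, and number of training steps are illustrative choices of mine, not anything taken from the text, and nothing here is claimed to be how the brain does it:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: the classic toy problem that a single-layer network cannot learn
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny two-layer network: 2 inputs -> 4 hidden units -> 1 output
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 1.0

for step in range(20_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagate the error signal through the layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust the "synaptic" weights against the error gradient
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

# With most random initializations the outputs approach [0, 1, 1, 0]
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
```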

Or we could be barking up the wrong binary code “tree” in the search for the brain’s real and true algorithm. With its all-or-none action potentials, neuronal intercommunication has been equated, for better or worse, with the binary code of classical digital computation based on bits that are either zero or one. In contrast, quantum computing uses qubits, which can represent zero and one at the same time (superposition). A quantum algorithm exploits superposition to scan multiple data sets at the same time rather than in the serial sequencing of classical computation.133 Does the brain operate along the lines of quantum computing, which holds the promise of unprecedented speed and capacity to process huge amounts of information? Does quantum computing in its infancy offer a bold insight into our neural reality or, like Descartes’s clock,134 is it merely the latest technocultural metaphor for brain physiology?135
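For readers unfamiliar with the jargon, a qubit can be written as a pair of complex amplitudes whose squared magnitudes give the probabilities of measuring zero or one. A minimal sketch using ordinary array arithmetic (this merely simulates the bookkeeping of a single qubit, not an actual quantum computer):

```python
import numpy as np

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; measurement yields 0 or 1 with those probabilities.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# An equal superposition of zero and one
superposition = (zero + one) / np.sqrt(2)

probabilities = np.abs(superposition) ** 2
print(probabilities)   # [0.5 0.5] -- "zero and one at the same time"
```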

Q. How did evolution construct the brain?

A. As Daniel Dennett and Richard Dawkins take great pains to explain, we must appreciate the cosmic paradox that the most complex organ extant was created by the “blind, uncomprehending, and purposeless processes” of genetic variation and natural selection.136 In essence, our intelligence arose from Unintelligent Design. (I speak as a practitioner of the field of applied biology known as medicine and a student of the nuts and bolts of Darwinian evolution as it applies to the brain. I make no attempt to address the profound questions posed by religion with scientific answers nor vice versa; in the words of paleontologist Stephen Jay Gould, these domains of human knowledge and faith are separate, nonoverlapping magisteria137 that, for what it’s worth, I believe can coexist to the benefit of thoughtful and devout people.)

In stark contrast, the digital computer is an example of Intelligent Design par excellence, and so right out of the starting blocks, we must recognize that machine intelligence and biological intelligence got to where they are via very different routes (top-down technical design vs. bottom-up evolution) and time frames (roughly eighty years since Turing vs. 3.7 billion years since the appearance of life on earth). The brain-computer analogy comes up short again when we acknowledge the generalizations that “brains are parallel (they execute many millions of ‘computations’ simultaneously, spread over the whole fabric of the brain); computers are serial (they execute one single simple command after another, a serial single-file stream of computations that makes up for its narrowness by its blinding speed).”138 These apparent discrepancies between biologic and machine intelligences present formidable obstacles to the AI modeling of brain functions. In the words of George Box: “Essentially all models are wrong but some are useful.”139

If we turn to evolutionary biology for answers, according to Dennett we are engaged in an exercise in “reverse engineering.”140 A major prerequisite for reverse engineering the final product of brain evolution is a detailed grasp of correlative human neuroanatomy, by which I mean our knowledge of the function of a given neural structure. For example, as best we can experimentally surmise, a creature with a single type of retinal photoreceptor will not see colors. Dogs have two kinds of retinal cones and can see colors (surprise! I too used to think that Lassie was colorblind), albeit not as well as humans with three kinds of cones. Freud said, “Anatomy is Destiny,” and in this particular case he got it right.141 However, the human brain is a little more complex than the canine retina, and our present knowledge of human neuroanatomy may not suffice for reverse engineering. In 1993 Francis Crick decried the “backwardness of human neuroanatomy” as compared to the detailed maps of macaque brains142 (please don’t ask what has to be done to a macaque and his brain to obtain that map, but in the interests of full disclosure the procedure goes along the lines of: (1) open macaque’s skull, (2) inject horseradish peroxidase tracer into brain, (3) kill the animal after a few days, and (4) examine sections of the brain with a microscope to determine what axonal pathways were traversed by the tracer).143

Twenty-four years later, we still have a long way to go in our efforts to anatomize the human brain, as implied by this chapter’s review of the ongoing efforts of the BRAIN Initiative, fMRI, connectomics, the Brain Activity Map Project, et cetera. We find ourselves faced with the dilemma voiced by Francis Collins, who led the Human Genome Project (and is no stranger to complex biological problems). He observed that when looking at pictures of the connectome, “It’d be like, you know, taking your laptop and prying the top off and staring at the parts inside, you’d be able to say, yeah, this is connected to that, but you wouldn’t know how it worked.”144

Gary Marcus’s bet is that we need to look at the evolution of the brain to “solve” strong AI, and I enthusiastically concur that to best understand the brain, you need to start by studying the brain. What else would you expect a neurologist to think? But before you depart from the ranks of the cybernetics “true believers” and embrace evolution as the royal road to understanding the brain, please heed biologist Leslie Orgel’s Second Rule: “Evolution is cleverer than you are.”145

To bring the curtain down on my chapter-opening interrogatory—“Where do we go from here?”—we need two things: a rearview mirror and a crystal ball that can foretell the discoveries of aspirational neuroscience.

In the mirror we see that our rediscovery of the “lost” photographs of Einstein’s brain raised many questions … some profound and some of mainly historical interest. Only further study of the brains of off-the-charts geniuses will affirm whether our careful examination was the last gasp of clinicopathologic correlation (soon to be forgotten in the wake of contemporary neuroscience replete with functional neuroimaging and connectomics) or a Rosetta stone for the neural underpinnings of genius (if such exist). Posterity will not afford us the opportunity for anatomical study of the brains of Newton, Galileo, Picasso, Clerk Maxwell, Darwin, Shakespeare, or Da Vinci to test the hypothesis that a common and distinctive neural thread runs throughout genius. And, as I edit this, the death of Stephen Hawking on March 14, 2018 (the one-hundred-thirty-ninth anniversary of Einstein’s birth!) beckons further study of an elite brain, the cortical changes of his amyotrophic lateral sclerosis of fifty-five years’ duration notwithstanding. Regrettably (for the hypothesis), the most intensively studied brain in the last hundred years has been that of Vladimir Lenin (see chapter 3). The resulting thirty-thousand-plus microscope slides led the Vogts to attribute Lenin’s political genius to numerous “jumbo” pyramidal neurons observed in cortical layer three—a finding (and conclusion) that Wilder Penfield questioned. Before dismissing the exceptional-neuron-as-a-hallmark-of-intelligence theory out of hand, consider the very distinctive Von Economo neurons, which are large spindle-shaped nerve cells found in the large “intelligent” mammalian brains of humans, great apes, elephants, and cetaceans.146

It is a cruel irony that as I conclude my account of finding Thomas Harvey’s lost cache of photographs and microscope slides of Einstein’s brain, I must own up to the unfortunate occurrence that sometime after May 15, 2000, the remaining 180 or so celloidin-embedded brain blocks disappeared! The last color photographs of the two glass specimen jars containing the gauze-shrouded tissue blocks were taken by me (see Figure 1.4) on that spring day in 2000 at the Medical Center of Princeton and are readily found on the Internet and in textbooks. In December 2011 I was reminded and reassured that Thomas Harvey had left “part of the brain” to Princeton.147 However, as I write this on a beautiful spring day six years later, those blocks are officially AWOL. A spokeswoman for the Medical Center of Princeton at Plainsboro (the latest iteration of Princeton Hospital, where Thomas Harvey undertook Einstein’s autopsy) informed me that the “Princeton HealthCare System [PHCS] does not have the legal right to the brain, nor do we have possession of it. It is our unconfirmed belief that Dr. Krauss does.”148 PHCS had previously closed the door on further inquiries regarding the whereabouts of Einstein’s brain with the declaration that Elliot Krauss, Princeton’s chief of pathology (see chapter 1) “does not wish to be interviewed.”149 However, in the first week of February 2018, Dr. Krauss broke with precedent and met with Japanese broadcast media. Lest you get your hopes up, he declined to be filmed and would not divulge the location of the brain blocks.150

If the motherlode of Einstein’s brain tissue has been lost or declared off-limits by the Medical Center of Princeton, the legacy of Einstein’s brain will recede from view that much more quickly. But what of the tissue that we have preserved and made available for scholarly inquiry? A large portion of Harvey’s microscope slides remains available, both mounted on glass and in digital form. The new neural science of connectomics may help avert the last rites being given to Thomas Harvey’s meticulously sectioned and stained microscope slides. Histologic studies to date have been confined to counts of the density of neurons and glial cells. No investigator has published studies of the axonal anatomy visualized on the Weigert-stained slides. By repurposing/reprocessing the slides for scanning EM, could the greater detail of connectomic-level imaging be brought to bear on slides that Harvey intended for light microscopy?

Shifting my gaze from the rearview mirror to the crystal ball holding the neuroscientific hopes and dreams summoned forth by Einstein’s brain, three aspirational questions emerge from the mists:

Q. Can we “grow” another Einstein?

A. When I published my first paper on Einstein in 2001, it was mistakenly believed that I had access to the same brain blocks that I had photographed151 and could provide tissue samples. The interested parties were either molecular biologists skilled at PCR (polymerase chain reaction) or would-be employers of such scientists for the sole purpose of sequencing Einstein’s genome. With a complete Einsteinian DNA sequence in hand, the prospect of cloning would exert a strong attraction. Recalling the thermodynamics of DNA, I quickly disabused them of their cloning notion. As discussed in chapter 2, the room-temperature formaldehyde that Harvey used to perfuse Einstein’s brain on April 18, 1955, led to the irreversible denaturation of his DNA into fragments one hundred to two hundred base-pairs in length.152 At present we lack the technology to reassemble those fragments into the continuous three-billion base-pair sequence of Einstein’s genome. We don’t even have a complete list of which genes interspersed among those three billion base-pairs subserve intelligence. Intelligence is a classic polygenic human trait, and a recent study of 78,308 individuals found fifty-two genes (forty new and twelve confirmed) influencing human intelligence.153 Further data-mining will inevitably identify more intelligence-linked genes. With sizeable gaps in the genetic blueprint for intelligence (aka nature; the fifty-two genes account for less than 5 percent of the variance in intelligence), and with the immense variability of the environment’s impact on the brain’s wiring (aka nurture), cloning another Einstein remains a distant (if not unrealizable) prospect.
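A rough count conveys the scale of the reassembly problem (the fragment length here is simply the midpoint of the one-hundred-to-two-hundred-base-pair range quoted above):

```python
# Rough scale of the genome-reassembly problem described above
genome_length_bp = 3_000_000_000     # ~3 billion base pairs
fragment_length_bp = 150             # midpoint of the 100-200 bp range

fragments = genome_length_bp / fragment_length_bp
print(f"Approximate number of fragments to reassemble: {fragments:,.0f}")  # ~20 million
```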

Q. Can we “build” (program) another Einstein?

A. As outlined in my earlier discussion of strong AI, we have no definitive evidence that the brain operates in a manner analogous to serial, parallel, or quantum computing … or whether the brain’s cognitive architecture is sui generis. I will piggyback my opinion onto Gary Marcus’s “bold claim” that “I don’t think we will ever understand [or build] the brain until we understand what kind of computer it is.”154 At chapter’s end, I will propose a biological justification for the elusive nature of attaining Marcus’s sine qua non of understanding the “kind of computer” that is operative during human cognition.

Q. How about enhancing human intelligence to approach Einsteinian levels?

A. We’re probably not going to reach that goal by following my parents’ admonitions, such as studying hard, getting plenty of sleep, and not watching television or reading comic books. Parents of the current generation of digital natives probably have formulated their own updated rules, such as “no computer gaming and try to express your thoughts in more than 280 characters” (sorry, Twitter).

An interesting and germane development has not unexpectedly been brought to you by modern medicine. No longer content with “treating disease” (“therapy”), my colleagues in neurology are branching into “improving normal abilities” (“enhancement”).155 Do we have an Einstein pill? Not yet … but we’re working on it. Need to be on point at that morning meeting after four hours of sleep? How about modafinil? Can’t concentrate? Try methylphenidate. Experiencing memory loss associated with early Alzheimer disease? Cholinesterase inhibitors are just the ticket. Anxious? Selective serotonin reuptake inhibitors (SSRIs) can come to the rescue.

No big surprises are found here. Doctors treat neurologic or psychiatric disease with appropriate pharmacotherapy. In the latter part of my clinical career, there has been mission creep from therapeutic indications to the demand for quality-of-life enhancement. Patients with normal intellect, memory, and no sleep deprivation want to think and remember better (or have the option to maintain peak performance on four hours of sleep nightly). Do these drugs really work for normal people, or is this a pharmacologic Faustian bargain that exposes the patient/client/lifestyle perfectionist (you choose) to side effects and no benefit? Maybe. Scientific uncertainty aside, this is a place where cultural expectations and medicine increasingly collide.

Is there anything else on the cognitive enhancement menu besides drugs? In a word: electricity. Walt Whitman may have presciently sung about “the body electric” in 1855,156 nearly a century before Hodgkin and Huxley157 explained the ionic basis of the axonal action potential, but recently the brain’s electricity has been annexed from the provinces of poetry and physiology to become the realm of do-it-yourself cognitive enhancement. On eBay for $39.95, you can purchase a transcranial direct current stimulation (tDCS) kit containing two scalp electrodes and a two-milliamp power supply, and then you’re all set to apply low-intensity direct current to your brain. Simply strap the saline-soaked sponges containing the electrodes to your scalp and turn on the current, usually for about twenty minutes. Remember to allow at least forty-eight hours between sessions. Depending on the polarity of stimulation, the resting neuronal membrane potential is nudged toward hyperpolarization (making the neurons refractory to firing) or toward depolarization (bringing them closer to the threshold for firing action potentials). Will tDCS make you into another Einstein or at least a little smarter? It may help with depression, but the cognitive benefits are unproven. Due to the brain’s connectivity, tDCS may affect parts of the brain distant from the cortex underlying the electrodes, and “changes initiated during stimulation can be longlasting and even self-perpetuating.”158

As opposed to the scattershot effects of tDCS on the underlying cortex, could the greater precision of stimulating electrodes applied to circumscribed areas of the cortex elicit profound thoughts worthy of an Einstein? In the heyday of cortical mapping in the 1930s and 1940s, Wilder Penfield applied from one half to five volts of electrical stimulation through bipolar electrodes, with the points three millimeters apart. From the temporal lobes, the electrodes summoned forth quasi-cinematic fragments of the patients’ memories, and from the anterior bank of the precentral gyrus, movements of the opposite side of the face and body were evoked. In contrast, Penfield’s electrodes failed to summon forth language when current was applied to Broca’s area (for expressive speech). The anterior frontal lobes, the seats of higher intellect, planning, and initiative, were “usually resistant to electrical stimulation,” which was turned off when it became painful.159 The higher realm of thought, with its foundational basis in language, has eluded the “interrogation” of eloquent cortex by electrode, and the prospect of evoking words and thoughts befitting an Einstein with external application of electricity remains a long way from realization.

I have enumerated the formidable, if not insuperable, barriers to mapping both anatomically and physiologically the brain(s) of Einstein(s) past and future or to creating proxy Einsteins through genetic, cybernetic, and electric technology. Is it within the powers of the human brain to understand itself on the deepest levels, or will our very own thought processes remain perpetually beyond our ken? Have I been exploring an absurd proposition akin to asking a dog to understand calculus? When asked about the impact of the French Revolution nearly two centuries earlier, Zhou Enlai is (very) apocryphally said to have responded, “It may be too early to say.”

I lay the profound difficulty of the “problem” of Einstein’s (or any) brain squarely at the feet of the unimaginably powerful conjunction of natural selection and deep time. Deep time is an essential part of the intellectual tool kit of geologists, and acquiring this sensibility is a hard-won perspective for nongeologists and for those of us working with biological phenomena. In Basin and Range, John McPhee conceded that we may be able to measure deep time, but “the human mind may not have evolved enough to be able to comprehend deep time.” “A sense of geologic time is the most important thing to suggest to the nongeologist: the slow rate of geologic processes, centimetres per year, with huge effects, if continued for enough years.”160 If the seafloor spreads four centimeters per year for eighty million years (or roughly the duration of the Cretaceous period), the sea bottom will have expanded to separate continental coastlines by nearly two thousand miles! (I apologize if my linear dead-reckoning runs roughshod over the complexity of plate tectonics.)

Now forget about magma migrating up through the oceanic crust and consider the rate of spontaneous mutations occurring throughout the primordial biomass, commencing with the appearance of life on earth. Assume that significant mutations for structural proteins or enzymes transpire at a particular chronologic frequency and multiply that mutation rate by deep time. The rate of mutation (or variation) of DNA/RNA varies according to the organism and the size of its genome. Fruit flies, our old friend the roundworm (C. elegans), and mice have higher mutation rates per base pair for specific chromosomal loci than humans.161 For the human genome it is estimated that each zygote (fertilized egg) has sixty-four new mutations! Not all mutations change the structure/physiology (phenotype) of people, and of those that do, some are adaptive, some are harmful, and some are neutral, with no observable impact. It is mind-boggling to consider the structural impact on the evolving brain that is orchestrated by the cumulative genomic change accruing over countless eons. At the same time, don’t assume that the rate of genetic change is so gradual and incremental that it can be viewed only from the perspective of deep time and must be imperceptible on an everyday timescale. In some circumstances phenotypic evolution can be directly observed over periods as short as two years (in the case of the climate-induced changes in the beak size of ground finches in the Galapagos Islands, for example).162 If measurable change can be observed in as little as two years, imagine what can transpire on the scale of deep time!

The biological deep time required to evolve nervous systems up to the current top-of-the-line human brain began 3.4 to 3.8 billion years ago with the debut of bacteria. About two billion years ago, the first complex cells with a true nucleus (eukaryotes) appeared, and 1.4 billion years later, complex multicellular eukaryotes came on the scene.163 Is it any wonder that over the inconceivable expanse of deep time countless generations of all living organisms reproducing with differing mutation rates would produce innumerable incremental changes (with the occasional macromutation or “hopeful monster” thrown in for good measure) to lead to the most complex biological structure to date (as far as the fossil record permits us to know)?
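For readers who want to check the arithmetic, here is the seafloor-spreading calculation, followed by the same style of multiplication applied to mutation; the sixty-four mutations per zygote is the figure quoted above, while the twenty-five-year generation time is an illustrative assumption of mine:

```python
# Seafloor spreading: centimeters per year multiplied by deep time
cm_per_year = 4
years = 80_000_000                        # roughly the Cretaceous period
km = cm_per_year * years / 100 / 1000     # convert cm -> m -> km
print(f"Spreading over 80 million years: {km:,.0f} km (~{km * 0.62:,.0f} miles)")

# The same multiplication applied to mutation, using ~64 new mutations
# per human zygote and an assumed 25-year generation time (illustrative).
new_mutations_per_zygote = 64
generation_years = 25
generations_per_million_years = 1_000_000 / generation_years
print(f"Generations per million years: {generations_per_million_years:,.0f}")
print(f"New mutations per lineage per million years: "
      f"{new_mutations_per_zygote * generations_per_million_years:,.0f}")
```
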
For neuroanatomists the brain is the mother-of-all palimpsests teeming with shortcuts (U fibers), detours (optic nerve crossover at the optic chiasm), circuits with no clear beginning or end (feedforward/feedbackward of the afferent visual system), blind alleys (rods and cones located furthest from the image focused on the retinal surface), workarounds (thoracic motor neurons can be relied upon to control breathing muscles when the cervical spinal cord is damaged), redundant systems (two motor systems—pyramidal and extrapyramidal), dead-ends (when more than 90 percent of cells in the substantia nigra die off, we can’t regrow them, and Parkinson’s disease results), add-ons (the phylogenetically “new” color-sensitive parvocellular visual system added to the “old” black-and-white sensitive magnocellular system), waves of neuronal migration, both chemical and electrical synapses, myelinated and unmyelinated axons, et cetera.

The irresistible force of natural selection imposed on genetic variation produced by copying errors, DNA damage, and recombination over the course of deep time could be headed in one direction only—complexity. In the words of Gary Marcus: “From the 100 + cortical areas in the human brain, with vast numbers of apparently orderly connections between them, to the hundreds of neuronal types, to the enormous amount of molecular complexity within individual cells and synapses, the dominant theme of the brain is not simplicity (as so many computational neuroscientists seem to hope) but complexity.”164 Seeking to bring order out of the chaos of this daunting complexity are two seminal concepts of the “young sciences” of neurology and cybernetics—the neuron doctrine (1906) and Turing’s universal machine (1936).

Am I proposing that our analytical concepts must be as venerable as the phenomena studied? Of course not, but I am sanguine that as a field of scientific inquiry matures, the accretion of knowledge at least demarcates more clearly the known, the partly speculative, and the truly unknown. The platform of a mature science may provide the launch site for a true breakthrough. Although scientists knew about x-ray crystallography and deoxyribonucleic acid before 1953, biology was never the same after Watson and Crick unveiled the double helix. At this point in the race (or maybe ultramarathon) to understand the brain, we are just beginning to hit our stride out of the starting blocks.

We may be interested in some brains more than others, but I’m afraid that further study of Einstein’s brain may well be an off ramp, diverging from the superhighway of twenty-first century neuroscience. There will likely never be an Institute of Einstein Brain Studies that will underwrite research addressing the “obvious” questions. Was Einstein’s connectome unique? Is the cortical morphology of all geniuses exceptional? Will neuroanatomists ever encounter a doppelganger (or phenocopy) of Einstein’s brain that in life was the seat of nongenius intellect?

The questions may remain unanswered, but our fascination with Einstein’s achievements and their connection to his brain remains undiminished for over six decades after Thomas Harvey presumed to grasp the brain in his hands, plunge it into formalin, and begin the Promethean quest for physical traces of unparalleled genius in 1,230 grams of neural tissue.