2

What Is Schizophrenia?

What if we are all potential schizophrenics? What if our ancestors were schizophrenic as a matter of course?

What if schizophrenia were the foundational state of human consciousness?

What if vestiges of this preconscious state remain embedded in the human brain, in all newborns’ brains—dormant but viable, awaiting a collision with some random circumstance to be hurt into poetry—Auden’s phrase, from his elegy for Yeats—if only the dark poetry of destruction and self-destruction? Or, perhaps equally disturbing, what if that spurred state gave us the luminous poetry of art? Or the poetry of God?

Such questions can seem outlandish, yet they have been posed by serious, scholarly men and women in modern times in attempts to answer an unanswerable question: What is schizophrenia?

So little is known about schizophrenia that neuropsychiatrists and researchers hesitate to offer a definitive theory of causation. Of its origins and causes, the writer and professor of psychology Richard Noll has suggested that “contemporary readers would do well to be humbled by our current state of scientific knowledge.” He points out that more than thirty thousand articles on the disease were published between 1998 and 2007, and that the output since then has increased to about five thousand articles per year. This illness shares with cancer, its partner in catastrophic affliction, an almost otherworldly imperviousness to definitive understanding and cure.

Neuropsychiatrists and allied professionals have only recently moved toward agreement on several fundamental likelihoods. Among them:

What we call “schizophrenia” is not a single disease, or a “categorical illness,” but a rare clustering of several distinct malfunctions in the brain.

These malfunctions are genetic in nature, yet in a far more complex way than direct genetic inheritances like hair or eye color.

These genetic malfunctions are unlikely to produce schizophrenia in an individual unless they are stimulated by environmental conditions. By far the most potent environmental factor is stress, especially during gestation in the womb, early childhood, and adolescence—stages in which the brain is continually reshaping itself, and thus vulnerable to disruption. Stress can take the form of a person’s enduring sustained anger, fear, or anxiety, or a combination of these. Stress works its damage by prompting an oversupply of cortisol, the normally life-sustaining “stress hormone” that helps convert stored energy into glucose in liver and muscle tissue. Yet when stress is prolonged and cortisol is summoned again and again, the hormone can transform itself into “Public Enemy Number One,” as one health advocate put it. The steroid swells to flood levels and triggers weight gain, high blood pressure, heart disease, damage to the immune system, and an overflow of cholesterol. Stress is a likely trigger for schizophrenia.

Many scientists believe that stress is especially destructive during the natural adolescent process of “pruning”—a critical and necessary period of cell destruction that can leave the prefrontal cortex open to disruption. I will explain pruning in greater detail later.

Scientists generally agree that the disease produces three sets of symptoms: positive, negative, and cognitive. Positive symptoms of schizophrenia are the most dramatic. They beckon the sufferer into an imaginary world, a world of shapes and presences and, most commonly, voices. Some people with schizophrenia can construct those voices and hallucinations into an alternate identity that either speaks to them or that they inhabit, as when they come to believe themselves a great leader from history, or even a god. In extreme cases, the patient acts out these delusions, sometimes with violent, deadly, and self-destructive results.

Negative symptoms encompass a range of responses that manifest as generalized withdrawal. They can take the form of decreased motivation, cauterized emotions, a passive turning away from friends, and listlessness—symptoms distinct from those of clinical depression. Cognitive symptoms can include a loss of memory, a lack of focus on what is happening or being said, and a diminished ability to process information and take useful action based on it.

Despite such increasingly authoritative theories, I cannot put aside my layman’s fascination with a book that appeared at the dawn of neuropsychiatric discovery, an era whose findings would strongly interrogate the book’s assumptions. Despite its perceived obsolescence, it remains a book that, as many of its critics concede, offers richly provocative speculation on the origin of madness within its larger theme: an exploration into what the author calls “the consciousness of consciousness.”

The Origin of Consciousness in the Breakdown of the Bicameral Mind was written by the late psychologist Julian Jaynes and published in 1976. Despite the portentous tone of the title, it is an unusually audacious, original, and eloquently written speculation on why and how human beings think, especially about themselves. It probes the question of why people sometimes hallucinate images, hear disembodied voices, express fantastical thoughts, and behave in ways that make no sense to “ordinary” people.

Jaynes, the son of a Unitarian minister, drew upon the extraclinical viewpoints he gathered as a playwright and actor before he turned to psychology. He proposed that until as recently as three thousand years ago, human beings were not “conscious” in the way that consciousness is understood today. That is, they were not conscious of being conscious, with all the introspection that state implies. We were instead a largely instinctual species, according to Jaynes, subject to “the authority of sound.” We believed and obeyed without skepticism the seemingly autonomous voices that came into our thoughts.

To oversimplify Jaynes’s theory (and oversimplification is virtually the only way to discuss his arguments without quoting the 469-page book entire), three millennia ago the two halves of our brain, though connected by millions of fibers, functioned almost autonomously in a sharp division of labor. The left hemisphere contained (as it still does) three “speech areas” that enabled our understanding of language. This left half was what Jaynes called the “man (human) part.” The right hemisphere was the repository of something far more complex: the seedbed, perhaps, of mysticism and religion. Here lay Jaynes’s “god” part: the received sounds, most importantly human speech, actual and imagined. The bicameral mind did not distinguish between the actual and the imagined. Remembered voices bore the same authenticity as the voices of other people in the moment. They often were admonitory—the voices of the father, the village elder—and thus commanded obedience. Hence: voices as gods.

The halves of the bicameral brain functioned almost independently through the epochs of subsistence agriculture and nomadic exploring, epochs of scattered populations and relatively little social complexity. It was the increase in population density and intense social interactions—divisions of labor, inventions, warfare—that obliged the brain to evolve into self-awareness, and to recognize internal thoughts as internal thoughts, not messages from on high. Yet the voices of our thoughts still echoed in us as if they came from other entities. In fact, Jaynes argues, in our own times, even everyday voices command us to a kind of obedience. To understand someone speaking to us, “we have to become the other person; or rather, we let him become part of us for a brief second. We suspend our own identities.” What also has survived even in our evolved time is “a vestigial godlike function in the right hemisphere,” according to Jaynes. “If [my] model is correct,” he writes, “there might be some residual indication, no matter how small, of the ancient divine function of the right hemisphere.”1 Later in the book he writes: “What we now call schizophrenia… begins in human history as a relationship to the divine, and only around 400 B.C.…”2

The nascent shift in awareness from the bicameral to the integrated brain, Jaynes believes, can be located in certain distinctions between the two great epics commonly attributed to Homer, the Iliad and the Odyssey. The two poems—passed along orally at first—were composed probably a century apart between 750 and 650 BC. (Some scholars place the creations farther back.) As the science writer Veronique Greenwood explained in a profile of Jaynes, the characters in the Iliad have no ability to look inside themselves: “They do only what is suggested by the gods.” But in the Odyssey, “the characters are capable of something like interior thought… The modern mind, with its internal narrative and longing for direction from a higher power, appears.”3

Jaynes’s assertions have struck neurologically trained readers as eccentric or, at best, fatally compromised by his unawareness of the discoveries just then commencing. For instance, he writes, “Whatever brain areas are utilized, it is absolutely certain that such voices do exist,” adding that they are experienced exactly like actual sound. “They are heard by many completely normal people to varying degrees.” He seems suspiciously sure of himself at times. The bicameral voices of antiquity, he averred, “were in quality very much like auditory hallucinations in contemporary people.”4 Yet readers of Jaynes who have lived out some of these assertions remain a bit more open-minded. I have experienced such voices on perhaps half a dozen very brief occasions that I can only dimly recall. I do remember that they sounded real. Most of these, I think, occurred while I was slipping into sleep or emerging from it, but they were not dreams. I cannot, however, vouch for being what Jaynes called “completely normal.”

A century after naming this multiheaded beast, science is beginning to understand the biological mechanisms underlying the symptoms of schizophrenia and the psychosocial factors that influence their expression. Yet a vast and tragic gulf still separates scientific understanding from the incomprehension of people in general, including relatives of the afflicted, taxpayers, and the chain-link network of law enforcement, the courts, and jails. This mass public confusion has resulted in the waste of uncounted millions of dollars, some of it vaporized through lost economic production, but more of it expended on maintaining punitive institutions such as jails, which have become the country’s largest de facto mental institutions and which specialize, however unwittingly, in making an inmate’s mental illness worse. Enlightened systems of care would cost Americans far less than thoughtless incarceration and the resultant recidivism among those who must struggle to manage their actions. America, it seems, is not yet ready for enlightenment.

The cost to America’s human treasury—the miasma of disabling personal agony, bewilderment, and social ostracism felt by a victim and his or her loved ones—is beyond any system of counting. Yet the abstraction of “human treasury” tends to distract one from contemplating the ruined uniqueness and hopes of individual lives.

What if you raised a child who grew up sunny, loved, and loving, perhaps unaccountably talented, a source of family joy, only to watch that child slowly transform in adolescence into a mysterious stranger, shorn of affect, dull of gaze, unresponsive to communication—and perhaps worse?

What if you grew to understand that this stranger was indeed communicating—but with no one whom you could see or hear?

What if you were forced to commence the lifelong process of reckoning with the likelihood that this child you thought you knew might persist in living, yet would never really return?

What if this transformation deepened and grew malign? What if this offspring of yours believed that you meant him harm—or intended to harm you? Or wanted to harm others? Or herself?

What if all of this misfortune were compounded by primal fear and judgmental withdrawal on the part of friends and even relatives? What if you picked up on gossip, as you surely would, among these friends, and relatives, and casual acquaintances, that your child’s madness was just an extension of his unhappiness, or weak character—or your own failures as a parent?

As if the symptoms of schizophrenia were not devastating enough in themselves, nature has added a cruel joke, a seemingly valueless yet powerful barrier between the sufferer and the professionals reaching out to help. The cruel joke is called anosognosia. Anosognosia, a term from the Greek connoting a blockage of insight into one’s self (literally, “without knowledge of disease”), is the false conviction within a person that nothing is wrong with his mind. It stems from a physiological by-product of psychosis, and accompanies about 50 percent of schizophrenia cases and 40 percent of bipolar cases. Anosognosia disrupts the parietal lobe’s capacity to interpret sensory information from around the body. It may also be present in victims of strokes or of physical trauma to the brain.

Kevin would never admit that he suffered from mental illness. The closest term that he would tolerate was “a condition.” As his illness deepened, so did his anosognosia. At the outset of his treatment, he consented to a prescribed regimen of oral medications. But nearly three years later—when the end was near—he decided that he did not need them, and he calmly informed us that he was going to stop taking them. We could not budge him from this insistence. Predictably, his refusal to take his pills led to another break, which led to another of his several hospitalizations. After that, he agreed to resume his regimen of meds, or so we’d thought. But after his death we discovered some of these drugs concealed or scattered in our basement.

It was not until nearly a decade after Kevin’s death that my family learned—far too late—that our son’s suicidal impulses might have been suppressed by a well-established, but underused, alternative method of administering antipsychotic medications. This is the so-called “depot” method—“depot” from the French sense of “place of deposit.” More recently the concept has been rebranded as LAIs, for “long-acting injectables.”

“Depot” was introduced in the late 1960s precisely to neutralize anosognosia. It involves a periodic injection of the prescribed medication performed by a clinician, rather than a self-administered daily oral dosage. (Not all antipsychotic meds, particularly among the “second generation” ones that will be examined in chapter 15, can be transferred from the oral to the depot method.) The deposit usually goes into the muscle tissue of the patient’s buttocks. The density of the muscle tissue ensures that the injected substance will flow into the patient’s system gradually, and in consistent quantities, over a period of time—usually about a month. The clinician serves as an outside monitor who keeps track of the patient’s cooperation. Oral meds, on the other hand, must be taken daily, and the responsibility usually falls on the patient. And therein lies the biggest rub.

Among the most notoriously feckless and forgetful populations in the world is that of mid-adolescents. Stir in with those traits the twin poisons of schizophrenia and anosognosia, and you have what seems to be a near-guarantee of catastrophe.

The biggest question in weighing “depot” against oral dosage is whether that intuitive response is valid: whether “depots” actually reduce incidences of relapse into psychosis, as they were designed to do.

Advocates of the depot method tend to feel certain that the answer is yes. In 2007, two British investigators, Maxine Patel of the Institute of Psychiatry in London and Mark Taylor, lead clinician at Springpark Centre, Glasgow, made a flat assertion: “Depots overcome overt, covert, and unintentional nonadherence” to patients’ medical regimens.5 Statistical studies, however, have been less categorical. In 2011, a team of investigators reviewed ten recent studies of 1,700 participants and concluded: “Depot antipsychotic drugs significantly reduced relapse,” pointing to “relative and absolute risk reductions” of 30 and 10 percent.6 Another review of studies pointed out that most schizophrenia patients stopped taking their meds at some point—75 percent after only two years—and asserted that no strong evidence pointed to a decisive benefit of the depot method.7
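A brief aside on those two figures, since relative and absolute risk reduction are easy to conflate. Here is a minimal worked example using hypothetical round relapse rates (the symbols p_oral and p_depot and their values are my illustrative assumptions, not numbers reported in the review), chosen only so that the arithmetic reproduces the 30 and 10 percent just cited:

% Illustrative only: p_oral = 0.33 and p_depot = 0.23 are assumed round
% relapse rates, not data taken from the 2011 review.
\[
\mathrm{ARR} = p_{\mathrm{oral}} - p_{\mathrm{depot}} = 0.33 - 0.23 = 0.10 \quad \text{(an absolute reduction of 10 percent)}
\]
\[
\mathrm{RRR} = \frac{p_{\mathrm{oral}} - p_{\mathrm{depot}}}{p_{\mathrm{oral}}} = \frac{0.10}{0.33} \approx 0.30 \quad \text{(a relative reduction of 30 percent)}
\]

The same data, in other words, can honestly be described as a ten-point drop or as a 30 percent improvement; neither description is wrong, but the second sounds more impressive, which is worth keeping in mind when reading advocacy on either side.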

As for the Powers family, we endured our own “study,” one we would have given anything to avoid. Our younger son, Kevin, was prescribed oral antipsychotics by a series of psychiatrists over three years; he loathed taking them yet pretended not to; he went off them, eventually hiding the ones he assured us he’d taken; and he destroyed himself in the midst of a psychotic break. Our older son, Dean, some years and some resistance later, accepted the shrewdly constructed arguments of his psychiatrist, agreed to “depot” injections of Haldol, and has lived, and has improved.

In the decade or so before the end of the nineteenth century, mind science was still struggling to free itself from the last tangles of superstition, supernatural explanation, and the use of metaphor as a means of explaining madness. The young Viennese thinker Sigmund Freud was building upon innovative work by his European contemporaries to give humankind a sophisticated if fanciful conceptual scaffold on which to build rational understanding of the irrational. Freud classified aspects and functions of the mind, introducing such new terms as “the unconscious,” “libido,” “repression,” “denial,” “catharsis,” “parapraxis,” “transference,” “dream interpretation”; and “the id,” “the ego,” and “the superego.” He hypnotized his patients and got them to recall things they didn’t know they remembered. Exploring these recovered memories often relieved those patients of their symptoms of depression, hysteria, or compulsive behavior.

Freud’s methods seemed to work, at least for a small and select circle of patients. He was thought a revolutionary; he had created something where nothing had existed before, a sweeping model of the mind, built upon internally consistent components. No one had ever previously attempted a unified theory of human thought and its intricacies. Yet the changing times were pleading for such a theory; the Western world was hurtling from rural to urban, religious to secular, uncritical assumption to sharp analytic skepticism. “Progress,” founded on questioning the verities of received thought and wisdom, was becoming the new secular god. People were losing patience with the blandishments that until the late nineteenth century fed the hunger for comprehending the mind’s mysteries: blandishments such as demons, “humours,” planetary gravity, “energy,” Scrooge’s undigested bit of beef. All these faded before Freud’s dense nosologies.

The problem remained that Freud’s constructs were metaphors. They were intended to alleviate compulsive preoccupations of the mind—neuroses—via painstakingly coaxing memories from the patient: “psychoanalysis.” Freud’s categories did not describe anything physical, visible, tactile. Nothing he said was verifiable via the scientific method. His scaffolding, if powerfully persuasive, was a scaffolding of words, employed to extract words.

Much of what we believe about “the mind” we must express through metaphor. The mind is itself a metaphor. The brain, where the corporeal answers resided, lay inaccessible, surrounded by a hard layer of protective bone, the skull. Unlike blood or bodily tissue, it could not be extracted for inspection without killing the patient. Or without waiting until the patient was dead.

The only way into the brain, for millennia, was via autopsy. As the practice of dissecting dead bodies grew in sophistication, it proved invaluable in advancing medical knowledge, as when the German pathologist Rudolf Ludwig Carl Virchow used it to discover an abnormally large quantity of white blood cells in the body of a fifty-year-old woman; he called this condition Leukämie.

Inevitably, autopsy led doctors to explore the most complex organ in the body. Leading the way was Emil Kraepelin, the first psychiatrist to apply empirical science to the study of the brain. Almost no laypeople in America today would recognize his name, yet many in the profession still consider him, rather than Freud (whose theories Kraepelin detested), to be the father of modern psychiatry.

Born in 1856, the same year as Freud, Kraepelin made his mark early, in 1883, when at age twenty-seven he published his foundational Compendium der Psychiatrie, arguing his case for the organic causes of mental illness and setting out his groundbreaking systems of classification. He charged headlong at the established masters of “mind-cure.” His great text dismissed the idea that a given illness could be inferred from a single symptom. Instead, an illness was constituted of particular, observable combinations of symptoms that became evident as it progressed and pointed to its specific nature. Kraepelin drew this conclusion from poring over thousands of medical case studies. As for psychiatry, he concluded that it should be joined to medical science, given that severe mental illness was the result of flawed biological processes in the brain—measurable through the steady deterioration of the patient’s thought processes and behavior in ways unprompted by external experiences. From his studies in biology he assembled his pioneering classifications of mental affliction into essentially two overall categories. One he called the exogenous, which stemmed from the reversals of fortune that life can bring to people, of the kinds that many patients brought to Freud. These patients, Kraepelin agreed, could be treated. He assigned this category the umbrella designation manic-depressive. The second classification was incalculably more foreboding, and what it described set Kraepelin apart from previous psychiatric thought, even, and especially, from his great contemporary and bête noire, Sigmund Freud. This was the endogenous, arising within the physical brain itself. Here lay the zone of organic brain damage (unacknowledged by the Viennese master) caused by flawed organic patterns and deteriorating tissue.

The organic deterioration Kraepelin postulated was, and is, associated with extreme old age. It is called dementia. But Kraepelin noticed from the case studies that a significant number of patients had begun showing symptoms of dementia in their teens and early twenties. Something was going wrong with the brains of these young people, and it was going wrong decades ahead of normal expectations. What could it be?

In 1896, Kraepelin introduced a term to define his answer to that question: “dementia praecox.” (That same year Freud coined “psychoanalysis.”) The Latin praecox connotes “precocious” or “premature.” Kraepelin had no hesitation about defining this form of illness as biological in origin, the result, perhaps, of “toxic” secretions from the sex glands (take that, Sigmund!) or the intestines.

Other psychiatrists soon gravitated to the biological point of view. In 1903, a thirty-nine-year-old doctor left his practice in Frankfurt to join Kraepelin as an assistant at the Munich medical school. Alois Alzheimer had admired the Munich doctor’s work from afar, and he saw that it converged with some of his own. Alzheimer was then treating a fifty-one-year-old woman who was beset by delusions, hallucinations, paranoia, and bursts of violent behavior—classic symptoms of senile dementia, for which she was, statistically, too young. Alzheimer had never seen a case like this. When the woman died in 1906 at age fifty-five, Alzheimer obtained consent from her family to examine her brain. What he found in his autopsy—peering at thinly sliced slivers of cortex matter—were deposits of plaque and decayed strands of nerve cells: common flaws in the brain of, say, a ninety-year-old, but virtually unheard of in someone so young.

Alzheimer’s disease,* as it became known, is similar to senile dementia, though its onset before age sixty-five was and remains extremely rare. In a grossly unfair example of “unintended consequences,” its frequency has greatly expanded exactly because of medical progress, which has eliminated or deferred the onset of many lethal diseases, thus allowing people to live longer, and thus bringing more and more of them into the age range for this excruciating, slowly progressing affliction. And it remains irreversible.

The scientific value of Kraepelin’s and Alzheimer’s discoveries is immense. They decisively ratified the existence of physical causes of mental illness, and thus legitimized decades of development of neurotechnology—culminating, thus far at least, in CAT and PET scans, EEGs, MRIs, and MEGs—methods, all, of providing information about what is happening in the living brain, whether the presence of tumors, evidence of epilepsy, or—eventually—the glimmerings of the causes of schizophrenia.

A figure even more important than Alzheimer in building upon Kraepelin’s ideas (often, paradoxically, by taking issue with them) was the Swiss psychiatrist Eugen Bleuler. Bleuler was born in 1857, a year after Kraepelin and Freud. Unlike the emotionally remote Kraepelin with his dependence upon case studies, Bleuler plunged directly into the lives of his psychiatric patients. Not only did he conduct analytical interviews with them, partly rebridging the gap that Kraepelin had opened with Freud, but he socialized with them, accompanied them on hikes, arranged theater productions in which he acted alongside them, and sometimes supervised their financial interests.8 These intimate encounters allowed Bleuler to gather analytic insights unavailable through the filter of abstract assessment. He came to believe that mental illnesses covered an even wider and more complex range than the types Kraepelin had laid out—that they could include not just organic but environmental components, such as the ravages of abuse, traumatic shock, and stress, events that roughly corresponded to Kraepelin’s “exogenous” category.

Here was the essence of what would become the “spectrum” concept of differentiated mental disorders, and with ever-improving diagnostic technology, its principles were to light the way for tremendous gains in brain science.* But Bleuler left himself open to years of second-guessing, not least because of a single word choice. Perhaps a little dazzled by the promise of his own refinements, he persuaded his colleagues to drop “dementia praecox” from the lexicon and replace it with a word of his own. Getting rid of Kraepelin’s term was not a bad idea in itself: Bleuler had grown convinced that it was not dementia that psychiatrists were seeing in young patients, but something else, something a great deal more complicated. In 1908 he summed up this something else with the term “schizophrenia.” This was not a wise choice. The word, drawn from the Greek, literally means “a splitting of the mind.” Its ambiguity practically pleaded for misinterpretation, especially among laypeople. By “split,” Bleuler meant a “loosening of associations”—the associations being the physical conductors that unify thoughts and behavior. (In important ways, this term anticipates Julian Jaynes’s formulations.) In other words, psychosis did not always arise from “dementia,” or decaying cells and tissue. In people who were not decrepit, it usually was triggered by a cluster of genetic flaws, often hereditary.

Yet “a loosening of associations” was for many just a short step from “split personality,” which itself was but a short step from the old, largely discarded ghost of demonic possession. The times were rife with images of it, especially from the Gothic precincts of the British Isles. The Scotsman Robert Louis Stevenson published The Strange Case of Dr Jekyll and Mr Hyde in 1886. In 1897, the Irish novelist Bram Stoker published Dracula. Even Freud, from far-off Vienna, joined the fun—albeit unintentionally—with his 1918 paper on the patient he called “the Wolf Man.” The Wolf Man was not in fact a shape-shifter; he was a depressive Russian aristocrat who, while very young, had witnessed his parents having sex, started having dreams of trees filled with white wolves, and from then on never felt really great, though he lived until 1979. It may not be entirely insignificant that the parents were using a sex technique that involved canine resonances.

Bleuler accepted Freud’s emphasis on the unconscious, his use of hypnotism, and his strong diagnostic interest in hysteria. But he held back from accepting psychoanalysis as the cure-all for mental illness. He posited that the affliction, or afflictions, presented both basic and accessory symptoms. Basic symptoms, probably passed along through heredity, involved the deterioration and ultimate breakdown of the ability to think. Accessory symptoms were manifestations of that breakdown: delusions and hallucinations. The fundamentals of his formulation remain diagnostically useful today.

Both Kraepelin and Bleuler (despite their differences) seem to have possessed great gifts, including extraordinary intuition. As has been remarked, decades would have to pass before neurotechnology came along to ratify and fine-tune their pioneering proposals about physical contributors—lesions, in a word—to madness. The basics of brain formation, so inaccessible to their generation, are by now familiar to most educated people. Still, a brief summary here may help the reader comprehend the inception of schizophrenia as we presently understand it, following Kraepelin’s and Bleuler’s beacon.

The prefrontal cortex is a complex, fragile region of the brain. In its healthy state, it directs human impulses toward rational choices and away from destructive or self-destructive behavior. It allows us to deal with the present moment while storing plans for the future. Yet as the newest part of the brain to develop in human evolution, the prefrontal cortex is also the region that takes the longest time to reach maturity, or maximum operating efficiency. It will not be fully functional until the person is past the age of twenty.

This out-of-sync progress ranks among the most profound natural misfortunes of humanity. For while the prefrontal cortex is taking its time, other powerful components of the human-in-progress have raced across the finish line and function without the cortex’s restraints. A young adult with a still-developing prefrontal cortex will have reached physical maturity, which of course means the capacity to reproduce and the strong hormonal drive to do so. The hormone testosterone emerges and unleashes aggressive urges. Given the formative turmoil of the prefrontal cortex, emotional behavior is under the inadequate jurisdiction of the amygdala, a small and primitive region near the center of the brain. The amygdala is about reactions—impulses—rather than rational thought, the great boon of the prefrontal cortex.

The activity holding back the prefrontal cortex’s final maturation is a kind of neurological housecleaning. In its final development stage, the cortex must actually lose some of its prefrontal “gray matter,” the clusters of nerve-cell bodies that formed transmission routes during infancy and childhood. This “synaptic pruning” peaks in late adolescence and is necessary for a regrouping of the cortical connections and routes that will orchestrate brain functions for the rest of a person’s life, at least until old-age decay sets in.

It is typically during this period that a schizophrenia-inducing gene cluster, if present, will activate. The reasons that this does or does not happen remain elusive. In 1983, University of California professor of psychiatry Irwin Feinberg suggested that schizophrenia could be triggered by “excessive” pruning of the cortical synapses, especially if it is accompanied by a reciprocal failure to prune certain subcortical structures. Over the years Feinberg’s hypothesis became the basis of productive refinements. In 2011, psychiatric researchers Gabor Faludi and Karoly Mirnics published a review that cited sixty references to Feinberg’s “radical new theory,” as they called it, and endorsed the growing consensus that schizophrenia is “a mental disorder with a complex etiology that arises as an interaction between genetic and environmental factors.”

As should be evident by now, the most implacable barrier to conquering schizophrenia (besides public apathy and governmental disinvolvement) has been the almost inexhaustible complexity of the brain, with its billions of electrochemically stimulated neurons and its labyrinth of interconnecting conduits—one hundred thousand miles of axons in each human being, linked across as many as one quadrillion (fifteen zeros) synapses, the submicroscopic junctions between neurons.

We can focus on just one example of these complexities and the conceptual halls of mirrors they can produce. The MRI, which affords a “look” through the brain’s gnarly protective barriers via magnetism and radio waves, has generated a wealth of new peripheral understanding—and new scientific debate—around mental illness. Scanning the brain can illuminate structural abnormalities—disconnections, say, in the pathways through which brain chemicals flow. These chemicals include the widely versatile neurotransmitter dopamine, which regulates cognition, motor control, and emotional functions. They include another vital neurotransmitter, serotonin, a mainstay of the central nervous system that helps govern social behavior, memory, and sexual function. MRIs are also helpful in tracking the entire trajectory of synapse and circuit formation, and the disruptions along the way that can damage the wiring of the brain.

Yet our illuminations often lead only to new questions. There are simply too many variables. In September 2013, the National Institutes of Health announced a list of nine neuroscience goals, in response to President Obama’s BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies), which by that point had drawn public and private pledges of more than $300 million toward developing new tools and research on schizophrenia. The research psychologist Gary Marcus of New York University, analyzing these goals in the New Yorker, noted that the report itself acknowledged the core challenge: that “brains—even small ones—are vastly complex.”

When sophisticated tools and techniques do manage to lead neuroscientists to evidence of abnormality in the brain, they are often stymied by yet another of the disease’s exasperating enigmas: Is the “abnormality” they are looking at a result of schizophrenia? Or is it a cause?

This quandary is well expressed in an essay titled “The Aetiology of Schizophrenia”:

That structural brain abnormalities exist in schizophrenia is generally accepted to be established… beyond dispute. However, the meaning of these abnormalities, in understanding the pathogenesis of the illness, is far less clear. Questions remain as to whether structural abnormalities predispose to the development of schizophrenia, [or] whether acute schizophrenic psychosis can actually damage the brain, causing altered structure… The presence of structural brain abnormalities in [unaffected] relatives of patients with schizophrenia suggests that “schizophrenia genes” are likely to be involved in (abnormal) brain development, but that the expression of the structural brain correlate of the genes is not enough, in itself, to “cause” schizophrenia.10

The genes that underpin schizophrenia may have been favored by natural selection, according to a survey of human and primate genetic sequences. The discovery suggests that genes linked to the debilitating brain condition conferred some advantage that allowed them to persist in the population—although it is far from clear what this advantage might have been.11

Despite these complexities, the centrality of congenital factors to the disease was resoundingly ratified in 2014 by the Schizophrenia Working Group of the Psychiatric Genomics Consortium, a collaboration among some three hundred scientists from thirty-five countries. After examining the genomes of some 37,000 people with schizophrenia and comparing them with those of more than 113,000 healthy subjects, the group claimed to have identified an astounding 128 gene variants connected with schizophrenia. These variants occupied 108 locations on the genome, most of which had never before been associated with the affliction.12

It’s true that contemporary research has unlocked many secrets about how the brain works. Advances have been spectacular in neuropsychology (briefly, the study of how the brain enables reasoning and behavior, and how and why those capacities become impaired); in technology (such as magnetic resonance imaging, or MRI, to facilitate the study of the brain); and in the development of “psychotropic” medications, from the anxiolytic Valium to the antipsychotic Haldol and beyond.

These techniques and findings have helped alleviate a range of relatively minor discontents. They have also shed light on electrical and chemical impulses as they move through the brain; on the nature of receptors; and on the functions of the cerebrum, cerebellum, diencephalon, and brain stem. Yet, when applied to the task of conquering the most feared and devastating mental disorder of them all, our cutting-edge tools have scarcely begun to cut the edge.