In the game of life, you are dealt genes. You can’t change your genome; it’s the hand you must play. The genomic worldview is pessimistic, constrained on all sides. In contrast, your connectome changes throughout life, and you have some control over that process. The connectome bears an optimistic message of possibility and potential. Or does it? How much can we really change ourselves?
The Serenity Prayer, quoted at the beginning of Chapter 2, echoes the sentiments of an older rhyme:
For every ailment under the sun
There is a remedy, or there is none;
If there be one, try to find it;
If there be none, never mind it.
That kind of mixed message is also on display in the self-help section of your local bookstore. Browse for a few minutes and you’ll come across many books that don’t tell you how to change; instead, they teach resignation. If you’re persuaded that you can’t possibly change your spouse, you may stop nagging and learn to be happy with your marriage. If you believe that your weight is genetically determined, you may cease dieting and enjoy eating once again. On the other end of the spectrum, diet books like I Can Make You Thin and Master Your Metabolism are titled to inspire optimism about losing weight. In his guide to self-help books, What You Can Change and What You Can’t, the psychologist Martin Seligman lays out the empirical evidence for pessimism. Only 5 or 10 percent of people actually achieve long-term weight loss by dieting. That’s a depressingly low number.
So is change really possible? The twin studies showed that genes may influence human behavior but do not completely determine it. Nevertheless, another type of determinism has emerged, this one based on the brain, and almost as pessimistic. “Johnny’s just that way—he’s wired differently,” you hear people say. Such connectome determinism denies the possibility of significant personal change after childhood. The idea is that connectomes may start out malleable but become fixed by adulthood, in line with the old Jesuit saying, “Give me the child until he is seven and I’ll give you the man.”
The most obvious implication of connectome determinism is that changing people should be easiest in the first years of life. The construction of a brain is a long and complex process. Surely it’s more effective to intervene during the early stages of construction, rather than later on. While a house is being built, it’s relatively easy to deviate from the architect’s original blueprint. But as anyone who has remodeled a house knows, it’s much harder to make major changes after the house is finished. If you’ve tried to learn a foreign language as an adult, you may have found it a struggle. Even if you were successful, you probably didn’t end up sounding like a native speaker. Since children seem to learn second languages effortlessly, their brains appear to be more malleable. But does this idea really generalize to mental abilities other than language?
In 1997, then–First Lady Hillary Clinton hosted a conference at the White House entitled “What New Research on the Brain Tells Us about Our Youngest Children.” Enthusiasts of the “zero-to-three movement” gathered to hear claims that neuroscience had proven the effectiveness of intervening during the first three years of life. At the conference was the actor and director Rob Reiner, who started the I Am Your Child Foundation, also in 1997. He was beginning to create a series of educational videos for parents about the principles of childrearing. The inaugural title was “The First Years Last Forever,” which sounded ominously deterministic.
Actually, neuroscience has been unable to confirm or deny such claims, because it’s been difficult to identify exactly what changes in the brain cause learning. Could the zero-to-three movement base its claims of determinism on the neo-phrenological theory that learning is caused by synapse creation? (Let’s ignore the considerable evidence against this theory, for the sake of argument.) The answer would be yes if synapse creation were impossible in adults. But William Greenough and other researchers showed that connection number still increases even when adult rats are placed in enriched cages. The rate was slower than in young rats, but still substantial. And remember the MRI studies of the cortex in people learning to juggle? Thickening occurred in the elderly as well as young adults. Finally, watching synapses through a microscope has shown that reconnection still continues in the brains of adult rats, as mentioned previously. Neuroscientists have not demonstrated a drop in reconnection with age as dramatic as the decrease in language-learning ability. Therefore, the first form of connectome determinism, “reconnection denial,” does not seem tenable.
A second form has emerged, however: “rewiring denial.” The “wires” of the brain are laid down in early life, as neurons extend axons and dendrites. Retraction of branches also occurs during development. Using microscopy, researchers have been able to capture videos of these remarkable processes. Often the tip of an axon makes a synapse onto a dendrite, gripping it as if the synapse were a hand. The creation of such a synapse appears to stimulate the axon to grow further, but if the synapse is eliminated, the axon loses its hold and retracts. In general, it seems that axonal branches can’t be stable unless they make synapses. Although growth and retraction are highly dynamic in the young brain, rewiring deniers believe that they grind to a halt in the adult brain. New synapses can still reconnect the existing wires in new ways, and existing synapses can be reweighted by changing their strengths, but the wires themselves are fixed.
Rewiring is hotly debated because of its suspected role in remapping, the dramatic changes in function observed after brain injury or amputation. To understand the importance of rewiring, we need to revisit a more fundamental question: What defines the function of a brain region?
The whole notion of a brain region with a well-defined function implicitly depends on an empirical fact. Through measurements of neural spiking, it has been shown that neurons near each other in the brain (neighboring cell bodies) tend to have similar functions. One can imagine a different kind of brain in which neurons are chaotically scattered without any regard for their functions. It wouldn’t make sense to divide such a brain into regions.
But why do the neurons in a region have similar functions? One reason is that most connections in the brain are between nearby neurons. This means neurons in a region “listen” mainly to each other, so we’d expect them to have similar functions, much as we’d expect less diversity of opinions among a group of people who mainly keep to themselves. This is part of the story, but not all of it.
The brain also contains some connections between distant neurons. In effect, neurons in the same region “listen” to neurons in other regions as well as each other. Couldn’t these faraway sources of input lead to diversity? Indeed they could if they were distributed all over the brain, but in fact they are typically confined to a limited number of regions. Returning to the social analogy, you could imagine a brain region as a group of people who listen to the outside world a bit, but only by reading the same newspapers and watching the same television shows. These external influences are so narrow that they don’t lead to diversity either.
Why are long-range connections constrained in this way? The answer has to do with the organization of brain wiring. Most pairs of regions lack axons running between them, so their neurons have no way of connecting with each other. In other words, any given region is wired to a limited set of source and target regions. This set has been called a “connectional fingerprint,” as it appears to be unique for each region. The fingerprint is often highly informative about the region’s function. For example, the reason that Brodmann area 3 mediates bodily sensations, a function I mentioned earlier, is that this area is wired to pathways bringing touch, temperature, and pain signals from the spinal cord. Similarly, the reason that Brodmann area 4 controls movements of the body is that this area sends many axons to the spinal cord, which in turn is wired to the muscles of the body.
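To make the idea of a connectional fingerprint a bit more concrete, here is a minimal sketch in Python. The particular input and output regions listed for Brodmann areas 3 and 4, and the overlap measure used to compare them, are illustrative assumptions rather than anatomical claims from this chapter; the point is only that a region’s fingerprint can be written down as its set of wired-up partners, and that different fingerprints go with different functions.

```python
# Minimal sketch: a "connectional fingerprint" as the set of regions an area
# receives input from and sends output to. Region names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fingerprint:
    inputs: frozenset   # regions that send axons into this area (assumed)
    outputs: frozenset  # regions this area sends axons to (assumed)

# Brodmann area 3 (bodily sensation): wired to pathways from the spinal cord.
area3 = Fingerprint(
    inputs=frozenset({"spinal_cord_touch_pathways", "thalamus"}),
    outputs=frozenset({"area_4", "area_1"}),
)

# Brodmann area 4 (movement): sends many axons toward the spinal cord.
area4 = Fingerprint(
    inputs=frozenset({"area_3", "premotor_cortex"}),
    outputs=frozenset({"spinal_cord_motor_pathways"}),
)

def fingerprint_overlap(a: Fingerprint, b: Fingerprint) -> float:
    """Crude Jaccard overlap of two fingerprints: how much of the wiring
    (and hence, plausibly, the function) two regions share."""
    a_all = a.inputs | a.outputs
    b_all = b.inputs | b.outputs
    return len(a_all & b_all) / len(a_all | b_all)

print(fingerprint_overlap(area3, area4))  # low overlap -> different functions
```

On this toy picture, knowing a region’s fingerprint is a strong hint about its function, which is exactly the intuition behind the rewiring experiments described next.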
These examples suggest that a region’s function depends greatly on its wiring with other regions. If that’s true, altering the wiring could change the function. Remarkably, this principle has been demonstrated by “rewiring” a nominally auditory area of the cortex to serve the function of vision. The first step was taken in 1973 by Gerald Schneider, who discovered an ingenious method to reroute axons growing in the brains of newborn hamsters. By damaging certain brain regions, he diverted retinal axons from their normal target in a visual pathway to an alternative destination in an auditory pathway. This had the effect of sending visual signals to a cortical area that is normally auditory.
The functional consequences of this rewiring were investigated in the 1990s by Mriganka Sur and his collaborators. After repeating Schneider’s procedure in ferrets, they showed that neurons in the auditory cortex now responded to visual stimulation. Furthermore, the ferrets could still see even after the visual cortex was disabled, presumably by using their auditory cortex. Both pieces of evidence implied that the auditory cortex had changed its function to be visual. Similar “cross-modal” plasticity has also been observed in humans. For example, in those who are blind from an early age, the visual cortex is activated when they read Braille with their fingertips.
Such findings are consistent with Lashley’s doctrine of equipotentiality, but they suggest an important qualification: A cortical area indeed has the potential to learn any function, but only if the necessary wiring with other brain regions exists. If every area in the cortex were wired to every other area (and to all other regions outside the cortex), then equipotentiality might hold without any provisos. Wouldn’t the brain be far more versatile and resilient if its wiring were “all to all”? Maybe so, but it would also swell to gigantic proportions. All those wires take up space and consume energy. The brain has evidently evolved to economize, which is why the wiring between regions is selective.
The Schneider and Sur experiments induced young brains to wire up differently. What about the adult brain? If the wiring between regions becomes fixed in adulthood, that would constrain the potential for change. Conversely, if the adult brain could rewire, it would have more potential to recover from injury or disease. This is why researchers so badly want to know whether rewiring is possible in adulthood, and why they are searching for therapies that promote it.
In 1970, a thirteen-year-old girl came to the attention of social workers in Los Angeles. She was mute, disturbed, and severely underdeveloped. Genie (a pseudonym) had been a victim of terrible abuse. She had spent her entire life in isolation, tied up or otherwise confined to a single room by her father. Her case aroused great public attention and sympathy. Doctors and researchers hoped that she could recover from her traumatic childhood, and they resolved to help her learn language and other social behaviors.
Coincidentally, 1970 also saw the premiere of François Truffaut’s film L’Enfant Sauvage, about the Wild Boy of Aveyron. Victor was discovered around 1800 wandering naked and alone in the woods of France. Efforts were made to “civilize” him, but he never learned to speak more than a few words. History has recorded other examples of so-called feral children, who grew up lacking exposure to human love and affection. No feral child was ever able to learn language.
Cases like Victor’s suggested the existence of a critical period for the learning of language and social behaviors. Deprived of the opportunity to learn during the critical period, feral children could not learn these behaviors later on. In metaphorical terms, the door to learning hangs open during the critical period; then it swings shut and locks. While this interpretation is plausible, too little is known about feral children for it to be scientifically rigorous.
When Genie was found, researchers hoped that her case might overturn the theory of the critical period. They resolved to study Genie and rehabilitate her at the same time. She made some encouraging progress in learning language, but eventually funding for the research dried up. Then Genie’s life took a tragic turn as she passed through a series of foster homes and seemed to regress.
Around the time the research ended, scientific papers reported that Genie was still learning new words but was struggling with syntax. According to later popular accounts, the researchers became discouraged, predicting that she would never learn real sentence structure. Whether Genie would have progressed further will never be known. She provided some evidence for a critical period in language learning, but it is difficult to draw firm scientific conclusions, however heartbreaking and gripping her case may be.
Optometrists encounter less harrowing forms of deprivation all the time. Weak vision in one eye often goes unnoticed if the other eye provides clear sight. Wearing eyeglasses or having a cataract removed easily corrects the problem with the eye. Nevertheless, the patient may still not see clearly with the corrected eye, or be stereo-blind, because there is still something wrong with the brain. (At a movie theater you’ve probably tried 3D glasses, which give a sensation of depth by presenting slightly different images to the two eyes. Those who can’t perceive 3D in this way are said to be stereo-blind.) The condition, known as amblyopia to specialists, is nicknamed “lazy eye,” but the disorder involves the brain as well as the eye.
Amblyopia suggests that we are not simply born with the ability to see; we must also learn from experience, and there is a critical period for this process. If the brain is deprived of normal visual stimulation from one eye during this limited time window, it does not develop normally. The effect is irreversible in adulthood. Children, however, recover normal vision if amblyopia is detected and treated early; their brains are still malleable. On the flip side, if an adult develops poor vision in a single eye, it has no lasting effect on the brain. Correcting the eye produces full recovery.
Amblyopia seems to bear out the claim made in the title of Rob Reiner’s video, “The First Years Last Forever.” Early intervention is crucial, as the zero-to-three movement contends. Amblyopia treatments suggest that the brain becomes less malleable after the critical period. But can that be shown directly by neuroscience? How exactly do poor vision and corrected vision change the brain during the critical period, and why don’t these changes happen later on?
In the 1960s and 1970s David Hubel and Torsten Wiesel investigated these questions with experiments on kittens. To simulate amblyopia, they occluded vision in one eye, a condition they called “monocular deprivation.” Several months later they removed the occlusion and tested visual capability. The kittens could not see well with the previously deprived eye, much like human patients with amblyopia. To find out what had changed in the brain, Hubel and Wiesel recorded spikes from neurons in Brodmann area 17. Since this cortical area is important for vision, it’s also known as primary visual cortex or V1. They measured the responsiveness of each neuron to visual stimulation of the left eye alone, and of the right eye alone. Few neurons responded to stimulation of the previously deprived eye.
The functions of V1 neurons had been altered by monocular deprivation. Could this have been caused by a connectome change? That’s a good guess if we believe the connectionist mantra that the function of a neuron is chiefly defined by its connections with other neurons. In the 1990s Antonella Antonini and Michael Stryker provided evidence pointing to the rewiring of axons bringing visual information into V1. Each incoming axon is monocular, meaning that it carries signals from just one eye. Depriving one eye caused its axons to retract dramatically, and the other eye’s axons to grow. In effect, rewiring eliminated pathways from the deprived eye to V1, and created new pathways from the other eye to V1. This plausibly explained why Hubel and Wiesel had observed few V1 neurons responsive to the previously deprived eye.
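As a rough illustration of what “responsiveness to each eye” could look like at the level of connections, here is a toy sketch. The index, the weights, and the numbers are invented for illustration; they are not Hubel and Wiesel’s measurements or Antonini and Stryker’s data, only a cartoon of the logic that deprived-eye inputs shrink while open-eye inputs grow.

```python
# Toy sketch: ocular dominance of a V1 neuron as the balance of synaptic
# input it receives from the two eyes. All numbers are made up.

def ocular_dominance(w_deprived: float, w_open: float) -> float:
    """Index from -1 (driven only by the deprived eye) to +1 (only the open eye)."""
    return (w_open - w_deprived) / (w_open + w_deprived)

# Before deprivation: roughly balanced input from the two eyes.
w_deprived, w_open = 1.0, 1.2
print(ocular_dominance(w_deprived, w_open))   # near 0: binocular neuron

# Monocular deprivation: the deprived eye's axons retract (weights shrink)
# while the open eye's axons grow (weights increase), as in the rewiring
# Antonini and Stryker described.
w_deprived *= 0.1
w_open *= 1.5
print(ocular_dominance(w_deprived, w_open))   # near +1: dominated by the open eye
```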
Rewiring of V1 was important because it identified a connectome change that could be the cause of learning. Since rewiring both created and eliminated synapses and pathways, it served as another counterexample to the neo-phrenological idea that learning is simply the creation of synapses.
Antonini and Stryker were also able to address another question: Why is the brain less malleable after the critical period? Hubel and Wiesel had shown that monocular deprivation induced V1 changes in young kittens but not in adults. Once induced, the changes were reversible while the kittens were young, but became irreversible in adulthood. Antonini and Stryker explained this by showing that monocular deprivation in adulthood did not rewire V1. Furthermore, rewiring induced during the critical period was reversible if monocular deprivation was ended early, but not if it was ended late.
Antonini and Stryker’s research would seem to support the case for early intervention, as recommended by the zero-to-three movement. But an important pitfall of this argument has been pointed out by William Greenough, who discovered the increases in neural connections produced by environmental enrichment in rat brains. Amblyopia, like Genie’s lonely upbringing, involves deprivation of normal experience, and it suggests that there is a critical period during which deprivation does lasting harm. Does it necessarily follow that there is also a critical period for enriching childhood with special experiences?
Greenough and his colleagues say it doesn’t. Since experiences like visual stimulation and exposure to language were normally available to all children throughout human history, brain development “expects” to encounter them, and has evolved to rely heavily upon them. On the other hand, experiences like reading books were not available to our ancient ancestors. Brain development could not have evolved to depend upon them. That’s why adults can still learn to read, even if they did not have the opportunity in childhood.
What the zero-to-three movement really needs is an example of a critical period for learning from altered experience—an example that goes beyond mere deprivation. One such experiment was pioneered in 1897 by the American psychologist George Stratton. He fastened a homemade telescope to his face and placed opaque materials around the eyepiece so that no other light could enter his eyes. The telescope was designed not to magnify images but to invert them. It turned the world upside down and also reversed left and right like a mirror. Stratton heroically wore the telescope twelve hours a day and blindfolded himself when he took it off.
As you can well imagine, Stratton was extremely disoriented at first, even nauseated. His vision conflicted with his movements. If he tried to reach for an object to his side, he would use the wrong hand and have to correct himself; even the simple act of pouring milk into a glass was exhausting. His vision also conflicted with his hearing: “As I sat in the garden, a friend who was talking with me began to throw some pebbles into the distance to one side of my actual view. The sound of the stones striking the ground came, oddly enough, from the opposite direction from that in which I had seen them pass out of my sight, and from which I involuntarily expected to catch the sound.” But by the time Stratton ended the experiment, on the eighth day, he was moving with greater ease, and his vision and hearing had harmonized: “The fire, for instance, sputtered where I saw it. The tapping of my pencil on the arm of my chair seemed without question to issue from the visible pencil.”
What Stratton had discovered is that the brain can recalibrate vision, hearing, and movement to resolve conflicts between them. Eye surgeons have encountered a similar recalibration in patients with strabismus. This condition, more commonly known as “crossed eyes,” is sometimes corrected by surgery on eye muscles to rotate the eye. Turning the eye in this manner changes the vision of the patients, effectively rotating the world around them. The rotation is revealed by a simple experiment, in which patients are requested to point in the direction of a visual target while not being allowed to see their pointing arm. They consistently point to one side of the target, because their movements now conflict with their changed vision. But if they are tested again a few days after surgery, the pointing errors are reduced, showing that the brain is recalibrating.
What happens in the brains of patients as they adapt to strabismus surgery? Starting in the 1980s, Eric Knudsen and his collaborators addressed this question with experiments on barn owls. They used special eyeglasses that rotated the world 23 degrees to the right by bending light rays to one side. This mimicked the rotation of the visual world produced by strabismus surgery. (In fact, similar eyeglasses are sometimes used as a treatment for severe strabismus.) Owls raised with these eyeglasses behaved in a way that looked skewed to an observer. If they heard a sound, they turned their heads to the right of the source. This skewed behavior enabled them to look at the source, as it compensated for the rotation caused by the eyeglasses.
To study the neural basis of this behavioral change, Knudsen and his collaborators examined the inferior colliculus. This part of the brain is important for computing the direction of a sound based on comparing signals from the left and right ears. Much as there is a map of the body in Brodmann areas 3 and 4 (the sensory and motor “homunculi”), there is a map of the external world in the inferior colliculus. By recording spikes from neurons in this structure, Knudsen and his collaborators showed that the inferior colliculus map was displaced in a direction consistent with the skewed-looking behavior. They also showed that incoming axons shifted over in the map, suggesting that remapping had been caused by rewiring.
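Here is a back-of-the-envelope sketch of the computation attributed to the inferior colliculus: estimating where a sound came from using the tiny difference in arrival time between the two ears, and how a learned 23-degree offset would bring the auditory map back into register with the prism-shifted visual world. The head width, the simple sine formula, and the idea of adding a fixed learned shift are assumptions chosen for illustration, not owl physiology or Knudsen’s model.

```python
# Sketch: sound direction from interaural time difference (ITD), and the
# 23-degree remapping that prism rearing would require. Constants are assumed.
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_WIDTH = 0.05        # m, assumed distance between the ears

def interaural_time_difference(azimuth_deg: float) -> float:
    """Arrival-time difference (seconds) for a source at the given azimuth."""
    return (HEAD_WIDTH / SPEED_OF_SOUND) * math.sin(math.radians(azimuth_deg))

def estimated_azimuth(itd: float) -> float:
    """Invert the formula: the direction implied by a given time difference."""
    return math.degrees(math.asin(itd * SPEED_OF_SOUND / HEAD_WIDTH))

source_azimuth = 10.0
itd = interaural_time_difference(source_azimuth)

# Normal map: the ITD is read out as the true direction.
print(estimated_azimuth(itd))                   # ~10 degrees

# After rearing with the prisms, the map is displaced so that the same ITD is
# read out 23 degrees to the side, matching where the owl *sees* the source.
LEARNED_SHIFT = 23.0
print(estimated_azimuth(itd) + LEARNED_SHIFT)   # ~33 degrees
```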
Knudsen and his collaborators further demonstrated a critical period for learning by applying and removing the eyeglasses at different ages. Placing the eyeglasses on adult owls raised normally did not produce a change in looking behavior. If young owls were raised with eyeglasses, the effects were reversible if the eyeglasses were removed early but not if removed in adulthood.
Based on the examples of the inferior colliculus and V1, rewiring denial might seem justified for the adult brain. This might explain why adults have greater difficulty adapting to change. I mentioned in Chapter 2 that adults do not recover from hemispherectomy as well as children do. More generally, the Kennard Principle states that the earlier the brain damage, the greater the recovery of function. This principle has been criticized as simplistic, since exceptions are well known, but it has some element of truth. It follows from rewiring denial, because rewiring is an important mechanism for remapping.
At the same time, the doctrine of rewiring denial is still under attack. Researchers using microscopes to monitor axons over long time periods in living brains have shown that new branches can grow in adults. The experiments are controversial, but there is a growing consensus that at least short growths are possible, though long extensions might not be. Some suspect that such rewiring is responsible for the cortical remapping that accompanies phantom limbs, although there is still little conclusive evidence.
Other researchers are challenging the concept of the critical period, saying that the effects of early deprivation may be more reversible than was previously thought. The conventional wisdom has been that it’s impossible to acquire stereo vision in adulthood. In her book Fixing My Gaze, the neuroscientist Susan Barry relates how she acquired some stereo vision in her forties, after a lifetime of stereo blindness caused by childhood strabismus. She was able to do this by subjecting herself to a special regimen that trained her vision.
Barry’s success suggests that the effects of critical-period experience are only difficult, not impossible, to reverse. Antonini and Stryker seemed to demonstrate convincingly that V1 lost its potential for change in adulthood, because rewiring ceased. This seemingly open-and-shut case has recently been challenged by the discovery of several treatments that restore plasticity to adult V1. Researchers have employed four weeks of the antidepressant medication fluoxetine (better known by the trade name Prozac), pretreatment with ten days of darkness, or simple environmental enrichment in the style of Rosenzweig. These treatments appear to extend the critical period into adulthood or eliminate it altogether.
Knudsen and his collaborators initially emphasized the failure of adult owls to adjust to rotation of the visual world. But later experiments sent a more optimistic message. The owls wore a sequence of eyeglasses, each of which rotated the world by a progressively larger angle. Step by step, the adult owls eventually adapted to the same 23-degree rotation that young owls could handle in one giant leap. The finding supported the general idea that adults can learn as much as juveniles, if training is done correctly.
Optimism about adult brain plasticity is currently in vogue. In the 1990s the zero-to-three movement contrasted the rigidity of the adult brain with the flexibility of the infant brain. Now the pendulum has swung to the other extreme. In his book The Brain That Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science, Norman Doidge tells inspiring stories about adults who have managed amazing recoveries from neurological problems. He argues that the brain is exceedingly plastic, much more than neuroscientists and physicians ever thought.
Of course the truth lies somewhere in between. It’s incorrect to flatly deny the possibility of adult rewiring, but such denials might hold water if they were qualified with conditions. For example, they could be restricted to specific types of branches growing from certain neurons toward others, or from certain regions to others. And it’s simplistic to regard rewiring as a single phenomenon. Rewiring actually encompasses a large number of processes involved in the growth and retraction of neurites. A more refined denial of rewiring might focus on just one of the processes included in this catchall term.
Since denials are conditional rather than absolute, they might be sidestepped by the right kind of training program, as Knudsen showed. And it appears that brain injury facilitates rewiring by releasing axonal growth mechanisms that are normally suppressed by certain molecules. Future drug therapies might target these molecules, enabling the brain to rewire in ways that are not currently possible.
Because of our crude experimental techniques, only drastic kinds of rewiring have been detectable. That’s why neuroscientists resorted to the rather extreme experiences of monocular deprivation and Stratton-type eyeglasses. The still-invisible, subtler kinds of rewiring could well be important for more normal types of learning. Simply by providing a clearer picture of the phenomenon, connectomics is bound to aid research in this area.
In 1999 a bitter fight erupted between two neuroscientists. In one corner stood the defending champion, Pasko Rakic of Yale University. Starting in the 1970s, his famous papers had firmly established a dogma: No new neurons are added to the mammalian brain after birth, or at least after puberty. The upstart was Elizabeth Gould of Princeton University, who had astounded her colleagues by reporting new neurons in the neocortex of adult monkeys. (Most of the cerebral cortex consists of neocortex, the part mapped by Brodmann.) Her discovery was hailed by the New York Times as the “most startling” of the decade.
It’s not hard to understand why this face-off between two professors ended up on the front page. It’s amazing when the body repairs itself. Skin wounds heal, leaving only a scar. Of all the internal organs, the liver is the champion at self-repair, growing back even if two-thirds of it has been removed. If the adult neocortex could add new neurons, that would mean the brain has more capacity to heal itself than anyone expected.
In the end, neither contender could be declared the undisputed champion. The “no new neurons” dogma prevailed in the neocortex. However, Rakic himself was forced to concede that neurons are continually added to two regions of the adult brain, the hippocampus and the olfactory bulb. (The olfactory bulb is for the nose what the retina is for the eye, and the hippocampus is a major non-neocortical part of the cortex.)
Since new neurons normally appear in these two regions, even in the absence of injury, they presumably aren’t for healing. Perhaps they enhance learning potential, much as new synapses were hypothesized to increase memory capacity by adding potential to learn new associations. The hippocampus belongs to the medial temporal lobe, in which the Jennifer Aniston neuron was found. Some researchers believe that the hippocampus serves as the “gateway” to memory; they theorize that it stores information first and later transfers it to other regions like the neocortex. If this is the case, the hippocampus might need to be extremely plastic, and new neurons would endow it with extra plasticity. Similarly, the olfactory bulb might use new neurons to help store memories of smells.
According to neural Darwinism, synapse elimination works in tandem with creation to store memories. Likewise, we’d expect the creation of neurons to be accompanied by a parallel process of elimination. This pattern holds true for many types of cells, which die throughout the body during development. Such death is said to be “programmed,” because it resembles suicide. Cells naturally contain self-destruct mechanisms and can initiate them when triggered by the appropriate stimuli.
You might think that your hand grew fingers by adding cells. No—actually, cell death etched away at your embryonic hand to create spaces between your fingers. If this process fails to happen properly, a baby is born with fingers fused together, a minor birth defect that can be corrected by surgery. So cell death acts like a sculptor, chiseling away material rather than adding it.
This is the case for the brain as well as the body. Roughly as many of your neurons died as survived while you floated in the womb. It may seem wasteful to create so many neurons and then kill them off. But if “survival of the fittest” was an effective way of dealing with synapses, it might also work well for neurons. Perhaps the developing nervous system refines itself through survival of neurons that make the “right” connections, coupled with elimination of those that don’t. This Darwinian interpretation has been proposed not only for development but also for creation and elimination of neurons in adulthood, which I’ll call regeneration.
If regeneration is so great for learning, why doesn’t the neocortex do it? Perhaps this structure needs more stability to retain what has already been learned, and must settle for less plasticity in order to achieve that. But Gould’s report of new neocortical neurons is not alone in the literature; similar studies have been published sporadically since the 1960s. Perhaps these scattered papers contain some grain of truth that’s contrary to the current thinking among neuroscientists.
We could resolve the controversy by hypothesizing that the degree of neocortical plasticity depends on the nature of the animal’s environment. Plasticity might well plummet in captivity, for confinement in small cages must be dull compared with life in the wild, and presumably demands little learning. The brain could respond by minimizing the creation of neurons, and most of those created might not survive elimination for long. In this scenario, new neurons indeed exist, but in small and fluctuating numbers that are hard to see, which would explain why researchers are split. It’s entirely possible that more natural living conditions would foster learning and plasticity, and new neurons would become more numerous.
You might not be convinced by this speculation, but it illustrates a general moral of the Rakic–Gould story: We should be cautious about blanket denials of regeneration, rewiring, or other types of connectome change. A denial has to be accompanied by qualifications if it’s to be taken seriously. Furthermore, the denial may well cease to be valid under some other conditions.
As neuroscientists have learned more about regeneration, simply counting the number of new neurons has become too crude. We’d like to know why certain neurons survive while others are eliminated. In the Darwinian theory, the survivors are the ones that manage to integrate into the network of old neurons by making the right connections. But we have little idea what “right” means, and there is little prospect of finding out unless we can see connections. That’s why connectomics will be important for figuring out whether and how regeneration serves learning.
I’ve talked about four types of connectome change—reweighting, reconnection, rewiring, and regeneration. The four R’s play a large role in improving “normal” brains and healing diseased or injured ones. Realizing the full potential of the four R’s is arguably the most important goal of neuroscience. Denials of one or more of them were the basis of past claims of connectome determinism. We now know that such claims are too simplistic to be true, unless they come with qualifications.
Furthermore, the potential of the four R’s is not fixed. Earlier I mentioned that the brain can increase axonal growth after injury. In addition, damage to the neocortex is known to attract newly born neurons, which migrate into the zone of injury and become another exception to the “no new neurons” rule. These effects of injury are mediated by molecules that are currently being researched. In principle we should be able to promote the four R’s through artificial means, by manipulating such molecules. That’s the way genes exert their influence on connectomes, and future drugs will do the same. But the four R’s are also guided by experiences, so finer control will be achieved by supplementing molecular manipulations with training regimens.
This agenda for a neuroscience of change sounds exciting, but will it really put us on the right track? It rests on certain important assumptions that are plausible but still largely unverified. Most crucially, is it true that changing minds is ultimately about changing connectomes? That’s the obvious implication of theories that reduce perception, thought, and other mental phenomena to patterns of spiking generated by patterns of neural connections. Testing these theories would tell us whether connectionism really makes sense. It’s a fact that the four R’s of connectome change exist in the brain, but right now we can only speculate about how they’re involved in learning. In the Darwinian view, synapses, branches, and neurons are created to endow the brain with new potential to learn. Some of this potential is actualized by Hebbian strengthening, which enables certain synapses, branches, and neurons to survive. The rest are eliminated to clear away unused potential. Without careful scrutiny of these theories, it’s unlikely that we’ll be able to harness the power of the four R’s effectively.
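The Darwinian picture sketched in the previous paragraph can be caricatured in a few lines of code: synapses are created at random (new potential), those linking co-active neurons are strengthened in Hebbian fashion (potential actualized), and the weak ones are eliminated (unused potential cleared away). Every rule and constant below is an illustrative assumption, not a model drawn from the text; the sketch only shows how the three processes could work in tandem to store a simple “memory.”

```python
# Toy sketch of creation + Hebbian strengthening + elimination. All rules
# and numbers are assumptions for illustration only.
import random
random.seed(0)

N = 20                # neurons
CREATE_PER_STEP = 10  # candidate synapses created per step ("new potential")
PRUNE_BELOW = 0.2     # synapses weaker than this are eliminated

# Pretend two groups of neurons fire together (the "memory" to be stored).
def correlated(i: int, j: int) -> bool:
    return (i < N // 2) == (j < N // 2)

weights = {}  # (pre, post) -> synaptic strength

for step in range(50):
    # 1. Creation: new synapses appear at random with a small initial weight.
    for _ in range(CREATE_PER_STEP):
        pre, post = random.randrange(N), random.randrange(N)
        if pre != post:
            weights.setdefault((pre, post), 0.1)

    # 2. Hebbian reweighting: synapses between co-active neurons strengthen;
    #    the rest decay.
    for (pre, post), w in weights.items():
        weights[(pre, post)] = min(1.0, w + 0.1) if correlated(pre, post) else w * 0.8

    # 3. Elimination: weak synapses are removed ("unused potential is cleared").
    weights = {k: w for k, w in weights.items() if w >= PRUNE_BELOW}

surviving = sum(correlated(pre, post) for pre, post in weights)
print(f"{len(weights)} synapses survive; {surviving} link co-active neurons")
```

Run for a few dozen steps, essentially all surviving synapses end up connecting the co-active group, which is the cartoon version of “survival of the fittest” applied to connections.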
To critically examine the ideas of connectionism, we must subject them to empirical investigation. Neuroscientists have danced around this challenge for over a century without having truly taken it on. The problem is that the doctrine’s central quantity—the connectome—has been unobservable. It has been difficult or impossible to study the connections between neurons, because the methods of neuroanatomy have only been up to the coarser task of mapping the connections between brain regions.
We’re getting there—but we have to speed up the process radically. It took over a dozen years to find the connectome of the worm C. elegans, and finding connectomes in brains more like our own is of course much more difficult. In the next part of this book I’ll explore the advanced technologies being invented for finding connectomes and consider how they’ll be deployed in the new science of connectomics.