8 How the Brain Works

Ever since Alcmaeon of Croton (fifth century BC) and Hippocrates (fourth century BC) recognized the brain as the seat of intelligence, humanity has been interested in understanding how it works. Despite this early start, it took more than two millennia for it to become common knowledge that intelligence and memory reside in the brain. For many centuries it was believed that intelligence resided in the heart; even today, we say that something that was memorized was “learned by heart.”

The latest estimates put the total number of cells in an average human body at 37 trillion (Bianconi et al. 2013) and the total number of cells in the brain at roughly 86 billion (Azevedo et al. 2009), which means that less than 0.5 percent of our cells are in the brain. However, these cells are at the center of who we are. They define our mind, and when they go they take with them our memories, our personality, and our very essence.

Most of us are strongly attached to our bodies, or to specific physical characteristics of them. Yet, despite that attachment, if given the choice most of us would probably prefer to donate a brain rather than to receive one, if a transplant of this organ could be performed—something that may actually be attempted in years to come.

Our knowledge of the structures of neurons and other cells in the brain is relatively recent. Although the microscope was invented around the end of the sixteenth century, it took many centuries for microscopes to be successfully applied in the observation of individual brain cells, because individual neurons are more difficult to see than many other types of cells.

Neurons are small, their bodies ranging from a few microns to about 100 microns across—comparable in size to red blood cells and spermatozoa. What makes neurons difficult to see is the fact that they are intertwined in a dense mass, making it hard to discern individual cells. To borrow an analogy used by Sebastian Seung in his illuminating 2012 book Connectome, the neurons in the brain—a mass of intermixed cell bodies, axons, dendritic trees, and other support cells—look like a plate of cooked spaghetti. You can point a microscope at the surface of a section of brain tissue, but it isn’t possible to discriminate individual cells, because all you see is a tangled mess of cell components.

A significant advance came in 1873, when Camillo Golgi discovered a method for staining brain tissue that would stain only a small fraction of the neurons, making them visible among the mass of other neurons. We still don’t know why the Golgi stain makes only a small percentage of the neurons visible, but it made it possible to distinguish, under a microscope, individual neurons from the mass of biological tissue surrounding them. Santiago Ramón y Cajal (1904) used the Golgi stain and a microscope to establish beyond doubt that nerve cells are independent of one another, to identify many types of neurons, and to map large parts of the brain. Ramón y Cajal published, along with his results, many illustrations of neurons that remain useful to this day. One of them is shown here as figure 8.1.


Figure 8.1 A drawing of Purkinje cells (A) and granule cells (B) in the cerebellum of a pigeon by Santiago Ramón y Cajal.

The identification of neurons and their ramifications represented the first significant advance in our understanding of the brain. Ramón y Cajal put it this way in his 1901 book Recuerdos de mi vida: “I expressed the surprise which I experienced upon seeing with my own eyes the wonderful revelatory powers of the chrome-silver reaction and the absence of any excitement in the scientific world aroused by its discovery.”

Golgi and Ramón y Cajal shared a Nobel Prize for their contributions, although they never agreed on the way nerve cells interact. Golgi believed that neurons connected with one another, forming a sort of super-cell; Ramón y Cajal believed that different neurons touched but remained separate, communicating through some as yet unknown mechanism. Ramón y Cajal was eventually proved right in the 1920s, when Otto Loewi and Henry Dale demonstrated that neurotransmitters are involved in passing signals between neurons. However, the final evidence had to wait until 1954, when George Palade, George Bennett, and Eduardo de Robertis used the recently invented electron microscope to reveal the structure of synapses. We now know that there are chemical synapses and electrical synapses, the chemical ones being much more numerous.

To understand how the brain works, it is important to understand how the individual neurons work, and how they are organized into the amazingly complex system we call the brain.

How Neurons Work

The many different types of neurons can be classified by morphology, by function, or by location. Golgi grouped neurons into two types: those with long axons used to move signals over long distances (type I) and those with short axons (type II). The simplest morphology of type I neurons, of which spinal motor neurons are a good example, consists of a cell body called the soma and a long thin axon covered by a myelin sheath. The sheath helps in signal propagation. Branching out from the cell body is a dendritic tree that receives signals from other neurons.

A generic neuron, depicted schematically in figure 8.2a, has three main parts: the dendrites, the soma, and the axon. Figure 8.2b depicts a real neuron from a mouse brain, located in layer 4 of the primary visual area (whose behavior is briefly described later in this chapter). A typical neuron receives inputs in the dendritic tree; it then combines, in some complex way, all the input contributions in the soma and, if it receives sufficient input, sends an output through the axon. This output takes the form of the firing of the neuron. Neuron firing is a phenomenon generated by the cell membrane, which has specific electrical characteristics that we inherited from the primitive organisms known as choanoflagellates (mentioned in chapter 6).


Figure 8.2 (a) A schematic diagram of a neuron by Nicolas Rougier (2007), available at Wikimedia Commons. (b) A real neuron from layer 4 of the primary visual area of a mouse brain, reconstructed and made available by the Allen Institute for Brain Science. Little circles at the end of dendrites mark locations where the reconstruction stopped, while the larger circle marks the location of the soma.

Neurons, like other cells, consist of a complex array of cellular machinery surrounded by a lipid membrane. Inside the membrane, neurons have much of the same machinery that other cells have, swimming in a salt-water solution: many different types of cytoplasmic organelles (including mitochondria) and a nucleus, in which DNA replication and RNA synthesis take place. However, since their job is to carry information from one place to another in the form of nerve impulses, nerve cells have a different, highly specialized membrane. The membrane of a nerve cell is a complex structure containing many proteins that enable or block the passage of various substances. Of particular interest in the present context are pores, ion channels, and ion pumps of different sizes and shapes. Channels can open or close, and therefore they control the rate at which substances cross the membrane. Most ion channels are permeable to only one type of ion. Ion channels allow ions to move in the direction of the concentration gradient, from regions of high concentration to regions of low concentration. Ion pumps are membrane proteins that actively pump ions into or out of the cell, usually using cellular energy obtained from adenosine 5´-triphosphate (ATP) to move the ions against their concentration gradient. Some ion channels—those known as voltage dependent—have a permeability that is influenced by the voltage difference across the membrane (known as the membrane potential).

The membrane potential, determined by the concentrations of charged ions inside and outside the nerve cell, typically ranges from 40 to 80 millivolts (mV), the outside being more positive. The electric potential inside the cell differs from that outside because the concentrations of ions in the salt-water solution that bathes the cells vary across the membrane. Some ions flow through the channels, in one direction or the other, while others are pumped by the ion pumps. At rest, there is a voltage difference of roughly 70 mV between the two sides of the membrane. Many ions have different concentrations inside and outside the cell: the potassium ion K+ has a higher concentration inside than outside, while the sodium ion Na+ and the chloride ion Cl− have higher concentrations outside than inside. These differences in the concentrations of charged ions create the membrane potential.
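The equilibrium potential that a single ion species would impose on the membrane can be computed with the Nernst equation, which balances diffusion down the concentration gradient against the electric field. A minimal sketch in Python, using typical textbook concentrations (the values in millimoles per liter are illustrative assumptions, not measurements):

```python
import math

def nernst_mv(c_out, c_in, z=1, temp_k=310.15):
    """Equilibrium potential in millivolts for an ion of valence z,
    given its concentrations outside and inside the cell (same units)."""
    R = 8.314      # gas constant, J/(mol K)
    F = 96485.0    # Faraday constant, C/mol
    return 1000.0 * (R * temp_k) / (z * F) * math.log(c_out / c_in)

# Typical mammalian concentrations (mM) -- illustrative values
E_K = nernst_mv(5.0, 140.0)     # K+ more concentrated inside -> negative potential
E_Na = nernst_mv(145.0, 12.0)   # Na+ more concentrated outside -> positive potential
print(f"E_K  = {E_K:.1f} mV")   # about -89 mV
print(f"E_Na = {E_Na:.1f} mV")  # about +67 mV
```

The resting membrane potential sits between these two equilibrium values, closer to that of K+ because the resting membrane is far more permeable to potassium than to sodium.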

The existence of the neuron membrane, and the characteristics of the channels and pumps, lead to a behavior of the neurons that can be well modeled by an electrical circuit. With these general concepts in mind, we can now try to understand how a detailed electrical model of a neuron can be used to simulate its behavior. Let us first consider an electrical model of a section of the membrane.

For the purposes of signal transmission, the important part of a neuron’s behavior is the opening and closing of the channels and pumps that transport ions across the membrane. It is possible to derive a detailed electrical model for a small section of the membrane, and to interconnect these sections to obtain a working model of the whole neuron. The model for a segment of passive membrane can therefore be a simple electrical circuit, such as the one illustrated in figure 8.3. Attentive readers will notice many resemblances between this circuit and the circuit that was used in figure 3.2 to illustrate Maxwell’s equations.


Figure 8.3 An electrical diagram of a simplified model of a patch of neuron membrane.

In figure 8.3 the voltage sources V_K and V_Na stand for the equilibrium potentials created by the differences in ion concentrations (only K+ and Na+ are considered in this example). The conductances G_K and G_Na represent the combined effect of all open channels permeable to each ion. The capacitor models the membrane capacitance. It can be assumed, without loss of modeling power, that the outside of the neuron is at zero potential. The passive membrane of a neuron is therefore represented by a large number of circuits like this one, each corresponding to one patch of the membrane, interconnected by conductances in the direction of signal transmission. The two voltage sources that model the sodium and potassium ion concentrations can be replaced by an electrically equivalent voltage source and conductance, called the Thévenin equivalent; the resulting circuit is illustrated in figure 8.4.
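After the Thévenin reduction, a single passive patch behaves like a leaky RC circuit: the membrane capacitance charges toward the equivalent equilibrium potential at a rate set by the total conductance. A small simulation sketch (the parameter values are illustrative assumptions, not measurements):

```python
# Euler integration of one passive membrane patch: C dV/dt = -G*(V - E) + I
C = 1.0     # membrane capacitance, uF/cm^2
G = 0.1     # total (Thevenin-equivalent) conductance, mS/cm^2
E = -70.0   # Thevenin-equivalent equilibrium potential, mV
I = 1.0     # injected current, uA/cm^2
dt = 0.01   # time step, ms

V = E                              # start at rest
for _ in range(int(200 / dt)):     # simulate 200 ms
    V += dt * (-G * (V - E) + I) / C

print(f"steady-state V = {V:.2f} mV")  # approaches E + I/G = -60 mV
```

The membrane time constant is C/G (10 ms here), so 200 ms is more than enough for the patch to settle at its new steady state; a passive patch never spikes, it only charges and discharges exponentially.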


Figure 8.4 Simplified models of patches of neuron membrane interconnected by axial conductances.

Alan Hodgkin and Andrew Huxley studied extensively the electrical behavior of the neuron membrane in the presence of the many different types of ion pumps and channels present in real neurons. The behavior of an active membrane is much more complex than that of the passive model shown, the added complexity being due to the time-dependent and voltage-dependent variation of the conductances. However, extensive studies conducted by Hodgkin, Huxley, and the many researchers who followed them have told us in great detail how neuron membranes work.

In their experiments, Hodgkin and Huxley used the giant axon of the squid to obtain and validate models of the ionic mechanisms of action potentials. That axon, which is up to a millimeter in diameter, controls part of the squid’s water-jet propulsion system. Its size makes it amenable to experiments and measurements that would otherwise be very difficult. The Hodgkin-Huxley model describes in detail the behavior of time-dependent and voltage-dependent conductances that obey specific equations. The model for an active segment of the membrane then becomes a generalization of the one shown in figure 8.4, with conductances that vary with time and voltage. For each conductance, there is a specific equation that specifies its value as a function of time and of the voltage across it. With this model, Hodgkin and Huxley (1952) were able to simulate the generation of action potentials in the squid giant axon and to accurately predict effects not explicitly included in the model. In particular, they predicted the axonal propagation speed with good accuracy, using parameters obtained experimentally. Generalizations of this model for particular types of neurons were later obtained through the work of many other researchers.
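The standard Hodgkin-Huxley equations for a single patch of squid-axon membrane are compact enough to simulate directly. The sketch below uses the classic published parameters (in the modern convention with a resting potential near −65 mV) and simple forward-Euler integration; it is an illustration, not a research-grade integrator:

```python
import math

def hh_simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Simulate a Hodgkin-Huxley membrane patch; returns the voltage trace (mV)."""
    # Classic squid-axon parameters (conductances in mS/cm^2, potentials in mV)
    C, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387

    # Voltage-dependent opening/closing rates of the gating variables
    a_n = lambda v: 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
    b_n = lambda v: 0.125 * math.exp(-(v + 65) / 80)
    a_m = lambda v: 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
    b_m = lambda v: 4.0 * math.exp(-(v + 65) / 18)
    a_h = lambda v: 0.07 * math.exp(-(v + 65) / 20)
    b_h = lambda v: 1.0 / (1 + math.exp(-(v + 35) / 10))

    v = -65.0  # resting potential
    # start the gating variables at their resting steady-state values
    n = a_n(v) / (a_n(v) + b_n(v))
    m = a_m(v) / (a_m(v) + b_m(v))
    h = a_h(v) / (a_h(v) + b_h(v))

    trace = []
    for _ in range(int(t_max / dt)):
        i_ion = (g_na * m**3 * h * (v - e_na)   # sodium current
                 + g_k * n**4 * (v - e_k)       # potassium current
                 + g_l * (v - e_l))             # leak current
        v += dt * (i_ext - i_ion) / C
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        trace.append(v)
    return trace

trace = hh_simulate()
print(f"peak membrane potential: {max(trace):.1f} mV")  # spikes overshoot 0 mV
```

With a sustained injected current of 10 µA/cm² the simulated patch fires repetitively, reproducing the spiking behavior described later in this chapter.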

Figure 8.5 illustrates the components of the model for an active segment of a membrane from a pyramidal neuron in the human brain. Each conductance shown corresponds to a particular ion conductance mechanism in the membrane. If we plug in the detailed time and voltage dependences of each of the parameters involved in this circuit, and interconnect many of these circuits, one for each patch of the membrane, we obtain an electrical model for a complete neuron.


Figure 8.5 A complete electrical diagram of Hodgkin and Huxley’s model of a patch of neuron membrane of a pyramidal neuron.

When a neuron isn’t receiving input from other neurons, the value of the membrane potential is stable and the neuron doesn’t fire. Firing occurs only when the neuron receives input from other neurons through the synapses that interconnect the neurons, usually connecting the axon of the pre-synaptic neuron to a dendrite in the post-synaptic neuron. When enough neurons with their axons connected to the dendritic tree of this neuron are firing, electrical activity in the pre-synaptic neurons is converted, by the activity of the synapses, into an electrical response that results in an increase (hyperpolarization) or a decrease (depolarization) of the membrane potential in the receiving neuron.

If one section of the membrane in the receiving neuron—most commonly a section at the base of the axon (Stuart et al. 1997)—is sufficiently depolarized, one observes, both in computer simulations and in real neurons, that the membrane depolarizes further, which leads to a rapid self-sustained decrease of the membrane potential, called an action potential. This corresponds to the well-known spiking behavior of neurons: when a neuron spikes, the membrane potential depolarizes suddenly, recovering a value closer to its resting state within a few milliseconds. Figure 8.6 shows the resulting voltages across the membrane (obtained using a simulator).


Figure 8.6 A simulation of the electrical behavior of a pyramidal neuron.

When the membrane voltage reaches a certain level, it undergoes a regenerative process that generates a spike. After firing, a neuron has a refractory period, during which it isn’t able to generate other spikes even if excited. The length of this period varies widely from neuron to neuron. The spiking depicted in figure 8.6 corresponds to the evolution in time of the voltage in one section of the neuron membrane. However, since this section of the membrane is connected to adjacent sections, its depolarization leads to depolarization in the adjacent sections, which causes the spike to propagate along the axon until it reaches the synaptic connections in the axon terminals. The characteristics of active membranes are such that, once an impulse is initiated, it propagates at a speed that depends only on the geometry of the axon. In fact, once the impulse is initiated, its effects are essentially independent of the waveform in the soma or in the dendritic tree where it started. It is, in a way, a digital signal, either present or absent. The frequency, and perhaps the precise timing, of the spikes are used to encode the information transmitted between neurons.
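The all-or-none character of spikes and the existence of a refractory period motivate much simpler phenomenological models, such as the leaky integrate-and-fire neuron, in which the spike rate (rather than the spike shape) carries the information. A toy sketch, with parameter values that are assumptions chosen for illustration:

```python
def lif_spike_count(i_input, t_max=1000.0, dt=0.1):
    """Count the spikes of a leaky integrate-and-fire neuron over t_max ms."""
    tau = 20.0          # membrane time constant, ms
    v_rest = -65.0      # resting potential, mV
    v_thresh = -50.0    # firing threshold, mV
    v_reset = -65.0     # reset potential after a spike, mV
    t_ref = 5.0         # absolute refractory period, ms

    v, refractory_left, spikes = v_rest, 0.0, 0
    for _ in range(int(t_max / dt)):
        if refractory_left > 0:          # no integration while refractory
            refractory_left -= dt
            continue
        v += dt * (-(v - v_rest) + i_input) / tau
        if v >= v_thresh:                # threshold crossed: emit a spike
            spikes += 1
            v = v_reset
            refractory_left = t_ref      # enforce the refractory period
    return spikes

low, high = lif_spike_count(20.0), lif_spike_count(40.0)
print(low, high)  # the firing rate grows with input strength
```

Note that however strong the input, the refractory period caps the firing rate (here at 1000/5 = 200 spikes per second), one reason spike rates in real neurons saturate.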

When the spike reaches a chemical synapse connecting to another neuron, it forces the release of neurotransmitter molecules from the synaptic vesicles in the pre-synaptic membrane. These molecules attach to receptors in the post-synaptic membrane and change the state of ion channels in that membrane. The resulting changes in ion fluxes depolarize (in excitatory connections) or hyperpolarize (in inhibitory connections) the membrane of the receiving neuron. These changes in the voltage drop across the membrane are known, respectively, as excitatory post-synaptic potentials (EPSPs) and inhibitory post-synaptic potentials (IPSPs). In chemical synapses, the most common kind, the membranes of the pre-synaptic and post-synaptic neurons are separated by a gap of 20 to 40 nanometers.

The brain also contains electrical synapses. An electrical synapse contains channels that cross the membranes of both neurons and allow ions to flow directly from one neuron to the next, thereby transmitting the signal: when the membrane potential of the pre-synaptic neuron changes, ions move through these channels. Electrical synapses conduct nerve impulses faster than chemical synapses, but they provide no electrical gain; the signal in the post-synaptic neuron is therefore an attenuated version of the signal in the originating neuron.

Our current knowledge of the detailed workings of the neuron membrane enables us to simulate, with great accuracy, the behavior of single neurons or of networks of neurons. Such simulation requires detailed information about the structure and electrical properties of each neuron and about how the neurons are interconnected, including the specific characteristics of each synapse. A number of projects (including some described in the next chapter, such as the Allen Brain Atlas) aim at developing accurate models for neurons in complex brains. This is done by choosing specific neurons and measuring, in great detail, their electrical response to a number of different stimuli. These stimuli are injected into the neuron using very thin electrical probes. The responses obtained can then be used to create and tune very precise electrical models of neurons.

Neurons of more complex organisms are much smaller and more diverse than the neurons Hodgkin and Huxley studied, and are interconnected in a very complex network with many billions of neurons and many trillions of synapses. Understanding the detailed organization of this complex network (perhaps the most complex task ever undertaken) could lead to fundamental changes in medicine, in technology, and in society. This objective is being pursued in many, many ways.

The Brain’s Structure and Organization

A modern human brain has nearly 100 billion neurons, each of them making connections, through synapses, with many other neurons. The total number of synapses in a human brain is estimated to be between 10¹⁴ and 10¹⁵, which gives an average number of synapses per neuron between 1,000 and 10,000. Some neurons, however, have many more than 10,000 synapses.
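The average quoted above is just the ratio of the two estimates (taking 10¹¹ neurons for simplicity):

```python
neurons = 1e11                      # roughly 100 billion neurons
for synapses in (1e14, 1e15):       # the two ends of the synapse estimate
    per_neuron = synapses / neurons
    print(f"{synapses:.0e} synapses -> {per_neuron:,.0f} per neuron on average")
# 1e+14 synapses -> 1,000 per neuron on average
# 1e+15 synapses -> 10,000 per neuron on average
```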

Neuroanatomy has enabled us to identify the general functions and characteristics of many different areas of the brain. The different characteristics of gray matter (composed mainly of neuron bodies) and white matter (composed mainly of neuron axons) have been known for centuries. Some parts of the brain (including the cortex) are composed mostly of gray matter; others (including the corpus callosum, a structure that interconnects the two hemispheres) are composed mostly of white matter. However, exactly how the brain’s various areas work remains largely a mystery, although some areas are better understood than others. The brain is usually considered to be divided into three main parts: the forebrain, the midbrain, and the hindbrain, each of them subdivided in a number of areas. Each area has been associated with a number of functions involved in the behavior of the body.

The cerebrum (a part of the forebrain) is the largest part of the human brain, and is commonly associated with higher brain functions, including memory, problem solving, thinking, and feeling. It also controls movement. In general, the closer an area is to the sensory inputs, the better it is understood. The cortex, the largest part of the cerebrum, is of special interest, because it is involved in higher reasoning and in the functions we associate with cognition and intelligence. It is believed to be more flexible and adjustable than the more primitive parts of the brain. The cortex is a layer of neural tissue, between 2 and 4 millimeters thick, that covers most of the brain. The cortex is folded in order to increase the amount of cortex surface area that can fit into the volume available within the skull. The pattern of folds is similar in different individuals but shows many small variations.

The anatomist Korbinian Brodmann defined and numbered brain cortex areas mostly on the basis of the cellular composition of the tissues observed with a microscope. (See Brodmann 1909.) On the basis of his systematic analysis of the microscopic features of the cortex of humans and several other species, Brodmann mapped the cortex into 52 areas. (See figure 8.7.) The results Brodmann published in 1909 remain in use today as a general map of the cortex.


Figure 8.7 A diagram of the Brodmann areas reprinted from Ranson and Saunders 1920. (a) Lateral surface. (b) Medial surface.

Brodmann’s map of the human cortex remains the most widely known and frequently cited, although many later studies have proposed alternative and more detailed maps. Brodmann’s map has been discussed, debated, and refined for more than a hundred years. Many of the 52 areas Brodmann defined on the basis of their neuronal organization have since been found to be closely related to specific cortical functions. For example, areas 1–3 constitute the primary somatosensory cortex, areas 41 and 42 correspond closely to the primary auditory cortex, and area 17 is the primary visual cortex. Some Brodmann areas exist only in non-human primates. The terminology of Brodmann areas has been used extensively in studies of the brain employing many different technologies, including electrode implantation and various imaging methods.

Studies based on imaging technologies have shown that different areas of the brain become active when the brain executes specific tasks. Detailed “atlases” of the brain based on the results of these studies (Mazziotta et al. 2001; Heckemann et al. 2006) can be used to understand how functions are distributed in the brain. In general, a particular area may be active when the brain executes a number of different tasks, such as sensory processing, language use, or muscle control.

A particularly well-researched area is the visual cortex. Area 17 has been studied so extensively that it provides a good illustration of how the brain works. The primary visual area (V1) of the cerebral cortex, which in primates coincides with Brodmann area 17, performs the first stage of cortical processing of visual information. It is involved in early visual processing, in the detection of patterns, in the perception of contours, in the tracking of motion, and in many other functions. Extensive research on this area has given us a reasonably good understanding of the way it operates: it processes information received from the retina and transforms it into higher-level features, such as edges, contours, and line movement. Its output is then fed to the higher visual areas V2 and V3.

The detailed workings of the retina (an extremely complex system in itself) have been studied extensively, and the flow of signals from the retina to the visual cortex is reasonably well understood. In the retina, receptors (cones and rods) detect incoming photons and perform signal processing, the main purpose of which is to detect center-surround features (a dark center in a light surround, or the opposite). Retinal ganglion cells, sensitive to these center-surround features, send nerve impulses through the optic nerve to the lateral geniculate nucleus (LGN), a small, ovoid part of the brain that acts, in effect, as a relay center. The LGN performs some signal processing on these inputs to obtain three-dimensional information, and then sends the processed signals to the visual cortex and perhaps to other cortical areas, as illustrated in figure 8.8.
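Center-surround detection of the kind performed by retinal ganglion cells is often modeled as a difference of Gaussians: a narrow excitatory center minus a broader inhibitory surround. A small sketch (the kernel size and the two widths are assumptions chosen for illustration):

```python
import numpy as np

def dog_kernel(size=9, sigma_center=1.0, sigma_surround=2.5):
    """Difference-of-Gaussians kernel: excitatory center, inhibitory surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2))
    surround = np.exp(-r2 / (2 * sigma_surround**2))
    # normalize each lobe, then subtract: the kernel sums to zero,
    # so uniform illumination produces no response
    return center / center.sum() - surround / surround.sum()

kernel = dog_kernel()

uniform = np.ones((9, 9))                    # featureless patch of light
spot = np.zeros((9, 9)); spot[4, 4] = 1.0    # bright spot on a dark surround

resp_uniform = float((kernel * uniform).sum())
resp_spot = float((kernel * spot).sum())
print(f"uniform: {resp_uniform:.4f}, spot: {resp_spot:.4f}")
```

The model cell ignores uniform light but responds strongly to a light center on a dark surround; swapping the sign of the kernel gives the opposite (off-center) cell type.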


Figure 8.8 Neural pathways involved in the first phases of image processing by the brain.

Electrode recording from the cortex of living mammals, pioneered by David Hubel and Torsten Wiesel (1962, 1968), has enabled us to understand how cells in the primary visual cortex process the information coming from the lateral geniculate nucleus. Studies have shown that the primary visual cortex consists mainly of cells responsive to simple and complex features in the input.

The groundbreaking work of Hubel and Wiesel, described beautifully in Hubel’s 1988 book Eye, Brain, and Vision, advanced our understanding of the way we perceive the world by extracting relevant features from the images formed on the retina. These features become more and more complex as the signals travel deeper into the visual system. Ocular dominance columns, for example, are groups of neurons, organized in stripes across the surface of the primary visual cortex, that respond preferentially to input from either the left eye or the right eye. These columns span multiple cortical layers and detect different features; the particular features detected vary continuously across the surface of the cortex, in complex patterns called orientation columns. Simple cells, as they are known, detect the presence of a line in a particular part of the retina—either a dark line surrounded by lighter areas or, the opposite, a light line surrounded by darker areas. Complex cells perform the next steps in the analysis: they respond to a properly oriented line that sweeps across the receptive field, unlike simple cells, which respond only to a stationary line critically positioned in one particular area of their receptive field. Complex cells exhibiting many different functions have been found in the cortex. Some respond more strongly when longer edges move across their receptive field, others are sensitive to line endings, and others are sensitive to different combinations of signals coming from simple cells.
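A simple cell of the kind just described can be caricatured as a small linear filter: an excitatory stripe flanked by inhibitory stripes, responding strongly to a line at its preferred orientation and position and weakly otherwise. A toy sketch (not a biophysical model; the receptive-field weights are assumptions chosen for illustration):

```python
import numpy as np

# A model "simple cell" tuned to a vertical light line: an excitatory center
# column with inhibitory flanking columns (each row sums to zero)
receptive_field = np.array([[-0.5, 1.0, -0.5]] * 5)

def response(stimulus):
    """Rectified linear response of the model simple cell."""
    return max(0.0, float((receptive_field * stimulus).sum()))

vertical_bar = np.zeros((5, 3)); vertical_bar[:, 1] = 1.0      # preferred stimulus
horizontal_bar = np.zeros((5, 3)); horizontal_bar[2, :] = 1.0  # wrong orientation

print(response(vertical_bar), response(horizontal_bar))
```

A vertical bar lines up with the excitatory stripe and drives the cell hard, while a horizontal bar cancels itself against the inhibitory flanks; this is the essence of the orientation selectivity Hubel and Wiesel measured.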

The architecture of the primary visual cortex derived from the experiments mentioned above and from other experiments has given us a reasonably good understanding of the way signals flow in this area of the brain. Incoming neurons from the lateral geniculate nucleus enter mainly in layer 4 of the cortex, relaying information to cells with center-surround receptive fields and to simple cells. Layers 2, 3, 5, and 6 consist mainly of complex cells that receive signals from simple cells in the different sub-layers of layer 4.

The workings of other areas are not yet understood as well as those of area V1. Despite the extensive brain research of recent decades, only fairly general knowledge of the roles of most of the brain’s areas, and of how they operate, has been obtained so far. Among the impediments to a more detailed understanding are the complexity of the areas deeper in the signal-processing pipeline, the lack of a principled approach to explaining how the brain works, and the limitations of the mechanisms currently available for obtaining information about the detailed behavior of brain cells.

We know, however, that it is not correct to view the brain as a system that simply processes sensory inputs and converts them into actions (Sporns 2011). Even when not processing inputs, the brain exhibits spontaneous activity, generating what are usually called brain waves. These waves, which occur even during resting states, have different names, depending on the frequency of the electromagnetic signals they generate: alpha (frequencies in the range 8–13 Hz), beta (13–35 Hz), gamma (35–100 Hz), theta (3–8 Hz), and delta (0.5–3 Hz). The ranges are indicative and not uniquely defined, but different brain waves have been associated with different types of brain states, such as deep sleep, meditation, and conscious thought. But despite extensive research on the roles of spontaneous neural activity and of the resulting brain waves, we still know very little about the roles this activity plays (Raichle 2009). What we know is that complex patterns of neural activity are constantly active in the brain, even during deep sleep, and that they play important but mostly unknown roles in cognition. These patterns of activity result from the oscillations of large groups of neurons, interconnected into complex feedback loops, at many different scales, which range from neighboring neuron connections to long-range interconnections between distant areas of the brain.
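Using the indicative ranges above, mapping a dominant oscillation frequency to its conventional band name is straightforward; the boundaries below simply follow the approximate values quoted in the text:

```python
# Indicative frequency bands (Hz), as quoted above; the boundaries are
# approximate and not uniquely defined in the literature
BANDS = [("delta", 0.5, 3), ("theta", 3, 8), ("alpha", 8, 13),
         ("beta", 13, 35), ("gamma", 35, 100)]

def band_name(freq_hz):
    """Return the conventional brain-wave band name for a frequency in Hz."""
    for name, low, high in BANDS:
        if low <= freq_hz < high:
            return name
    return "outside the usual ranges"

for f in (2, 5, 10, 20, 40):
    print(f"{f} Hz -> {band_name(f)}")
```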

A complete understanding of the function and the behavior of each part of the brain and of each group of neurons will probably remain outside the range of the possible for many years. Each individual brain is different, and it is likely that only general rules about the brain’s organization and its functioning will be common to different brains. At present, trying to understand the general mechanisms brains use to organize themselves is an important objective for brain sciences, even in the absence of a better understanding of the detailed functions of different areas of the brain.

Brain Development

When we speak of understanding the brain, we must keep in mind that a complete understanding of the detailed workings of a particular brain will probably never be within the reach of a single human mind. In the same way, no human mind can have a complete understanding of the detailed workings of a present-day computer, bit by bit. However, as our knowledge advances, it may become possible to obtain a clear understanding of the general mechanisms used by brains to organize themselves and to become what they are, in the same way that we have a general understanding of the architecture and mechanisms used by computers.

This parallel between understanding the brain and understanding a computer is useful. Even though no one can keep in mind the detailed behavior of a present-day computer, humans have designed computers, and therefore it is fair to say that humans understand computers. Human understanding of computers doesn’t correspond to detailed knowledge, on the part of any individual or any group, of the voltage and current in each single transistor. However, humans understand the general principles used to design a computer, the way each part is interconnected with other parts, the behavior of each different part, and how these parts, working together, perform the tasks they were designed to perform. With the brain, things are much more complex. Even though there are general principles governing the brain’s organization, most of them are not yet well understood. However, we know enough to believe that the brain’s structure and organization result from a combination of genetic encoding and brain plasticity.

It is obvious that genetic encoding specifies how the human brain is constructed of cells built from proteins, metabolites, water, and other constituents. Many genes in the human genome encode various properties of the cells in the human brain and (very, very indirectly) the structure of the human brain. However, the genetic information doesn’t specify the details of each particular neuron and each particular connection between neurons. The genetic makeup of each organism controls the way brain cells are created and duplicated, and controls the large-scale architecture of the brain, as well as the architecture of the other components of the body. However, in humans and other higher organisms the genetic makeup doesn’t control the specific connections neurons make with one another, nor does it control the activity patterns that characterize the brain’s operation.

Extensive studies on brain development in model organisms and in humans are aimed at increasing our understanding of the cellular and molecular mechanisms that control the way nervous systems are created during embryonic development. Researchers working in developmental biology have used different model organisms, including the ones referred to in chapter 7, to study how brains develop and self-organize. They have found that, through chemical gradients and other mechanisms, genetic information directs the neurons to grow their axons from one area of the brain to other areas. However, the detailed pattern of connections that is established between neurons is too complex to be encoded uniquely by the genes, and too dynamic to be statically defined by genetic factors. What is encoded in the genome is a set of recipes for how to make neurons, how to control the division of neuron cells and the subsequent multiplication of neurons, how to direct the extension of the neuron axons and dendritic trees, and how to control the establishment of connections with other neurons.

Current research aims at understanding the developmental processes that control, among other things, the creation and differentiation of neurons, the migration of new neurons from their birthplace to their final positions, the growth and guidance of axonal growth cones, the creation of synapses between these axons and their post-synaptic partners, and the pruning of many branches that takes place later. Many of these processes are controlled in a general way by the genetic makeup of the organism and are independent of the specific activities of particular neuron cells.

The formation of the human brain begins with the neural tube. It forms, during the third week of gestation, from the neural progenitor cells located in a structure called the neural plate (Stiles and Jernigan 2010). By the end of the eighth week of gestation, a number of brain regions have been formed and different patterns of brain tissue begin to appear.

The billions of neurons that constitute a human brain result from the reproduction of neural progenitor cells, which divide to form new cells. Neurons are mature cells and do not divide further to give rise to other neurons. Neural progenitor cells, however, can divide many times, each division generating two identical neural progenitor cells capable of further division. These cells then produce either neurons or glial cells, according to the biochemical and genetic regulation signals they receive. Glial cells (also called glia or neuroglia) are non-neuronal cells that maintain the chemical equilibrium in the brain and provide physical support for neurons. They are also involved in the generation of myelin (an insulator that surrounds some axons, increasing their transmission speed and preventing signal loss).

In humans, the generation of new neurons in the cortex is complete around the sixteenth week of gestation (Clancy, Darlington, and Finlay 2001). After they are generated, neurons differentiate into different types. Neuron production takes place mainly in an area that will later become the ventricular zone. The neurons produced in that region migrate into the developing neocortex and into other areas. The various mechanisms neurons use to migrate have been studied extensively, but the process is extremely complex and is only partially understood (Kasthuri et al. 2015; Edmondson and Hatten 1987).

Once a neuron has reached its target region, it develops axons and dendrites in order to establish connections with other neurons. Neurons create dense arbors of dendrites in their immediate vicinity and extend their axons through growth cones, guided by chemical gradients that steer the growth cones toward their intended targets. Because some of the molecules used to guide the growth of axons are attractive and others are repulsive, the growth cones perceive a complex set of orientation cues. Once the axons reach their target zone, they establish synapses with dendritic trees in the area.

A significant fraction of the neurons that develop during this process, and a significant fraction of the connections they establish, disappear in the next stages (pre-natal and post-natal) of the brain’s development. Many neurons die, and even in those that survive many connections are pruned and removed. Initial connection patterns in the developing brain involve many more synapses than remain after the brain reaches its more stable state, in late childhood. Overall, the total number of established synapses may be cut by half relative to the peak value it reached in early childhood.

The processes of neuron migration, growth-cone extension, and pruning are controlled, in large part, by genetic and biochemical factors. However, this information isn’t sufficient to encode the patterns in a fully formed brain. Brain plasticity also plays a very significant role. The detailed pattern of connections in a fully formed brain results in large part from activity-dependent mechanisms in which the detailed activity patterns of the neurons, resulting from sensory experience and from spontaneous neuron activation, control the formation of new synapses and the pruning of existing ones.

Plasticity, Learning, and Memory

The human brain is plastic (that is, able to change) not only during its development but throughout a person’s life. Plasticity is what gives a normal brain the ability to learn and to modify its behavior as new experiences occur. Though plasticity is strongest during childhood, it remains a fundamental and significant property of the brain throughout a person’s life.

The brain’s plasticity comes into play every time we see or hear something new, every time we make a new memory, and every time we think. Even though we don’t have a complete understanding of the processes that create memories, there is significant evidence that long-term memories are stored in the connection patterns of the brain. Short-term memories are likely to be related to specific patterns of activity in the brain, but those patterns are also related to changes (perhaps short-lived changes) in connectivity patterns, which means that brain plasticity is active every second of our lives.

The connections between the neurons are dynamic, and they change as a consequence of synaptic plasticity. Synaptic plasticity is responsible not only for the refinement of newly established neural circuits in early infancy, but also for the imprinting of memories later in life and for almost all the mechanisms that are related to learning and adaptability. The general principles and rules that control the plasticity of synapses (and, therefore, of the brain) are only partially understood.

Santiago Ramón y Cajal was probably the first to suggest that there was a learning mechanism that didn’t require the creation of new neurons. In his 1894 Croonian Lecture to the Royal Society, he proposed that memories might be formed by changing the strengths of the connections between existing neurons.

Donald Hebb, in 1949, followed up on Ramón y Cajal’s ideas and proposed that neurons might grow new synapses or undergo metabolic changes that enhance their ability to exchange information. He proposed two simple principles, which have since been found to hold in many cases. The first states that the repeated and simultaneous activation of two neurons leads to a reinforcement of the connections between them. The second states that, if two neurons are repeatedly active sequentially, then the connections from the first to the second become strengthened (Hebb 1949). This reinforcement of the connections between two neurons that fire in a correlated way came to be known as Hebb’s Rule. Hebb’s original proposal was presented as follows in his 1949 book The Organization of Behavior:

Let us assume that the persistence or repetition of a reverberatory activity (or “trace”) tends to induce lasting cellular changes that add to its stability. … When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.

Today these principles are often rephrased to mean that changes in the efficacy of synaptic transmission result from correlations in the firing activity of pre-synaptic and post-synaptic neurons, leading to the well-known statement “Neurons that fire together wire together.” This formulation is more general than Hebb’s original rule, which implies that a neuron that contributes to the firing of another neuron has to be active slightly before the other neuron. This idea of correlation-based learning, now generally called Hebbian learning, probably plays a significant role in the plasticity of synapses.
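The correlation-based character of Hebbian learning can be sketched as a toy rate-based simulation (a minimal illustration, not a model from the literature; the population sizes, random activities, and learning rate below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Firing rates of two small populations over 1,000 time steps.
# pre[t, i]  : activity of pre-synaptic neuron i at step t
# post[t, j] : activity of post-synaptic neuron j at step t
pre = rng.random((1000, 4))
post = rng.random((1000, 3))

eta = 0.01                 # learning rate
w = np.zeros((3, 4))       # synaptic weights, one row per post-synaptic neuron

for t in range(pre.shape[0]):
    # Hebb's Rule: strengthen w[j, i] in proportion to the joint
    # activity of post-synaptic neuron j and pre-synaptic neuron i.
    w += eta * np.outer(post[t], pre[t])
```

Left unchecked, a purely Hebbian update can only grow the weights, which is one reason computational models usually complement it with decay or normalization terms.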

Hubel and Wiesel studied the self-organization of the visual cortex by performing experiments with cats and other mammals that were deprived of vision in one eye before the circuits in the visual cortex had had time to develop (Hubel and Wiesel 1962, 1968; Hubel 1988). In cats deprived of the use of one eye, the columns in the primary visual cortex rearranged themselves: the columns driven by the active eye took over the areas that normally would have received input from the deprived eye. Their results showed that the development of cortical structures that process images (e.g., simple cells and complex cells) depends on visual input. Other experiments performed with more specific forms of visual deprivation confirmed those findings. In one such experiment, raising cats from birth with one eye able to view only horizontal lines and the other eye able to view only vertical lines led to a corresponding arrangement of the ocular dominance columns in the visual cortex (Hirsch and Spinelli 1970; Blakemore and Cooper 1970). The receptive fields of cells in the visual cortex were oriented horizontally or vertically depending on which eye they were sensitive to, and no cells sensitive to oblique lines were found. Thus, there is conclusive evidence that the complex arrangements found in the visual cortex are attributable in large part to activity-dependent plasticity that, in the case of the aforementioned experiment, is active only during early infancy. Many other results concerning the visual cortex and areas of the cortex dedicated to other senses have confirmed Hubel and Wiesel’s discoveries. However, the mechanisms that underlie this plasticity are still poorly understood and remain subjects of research.

A number of different phenomena are believed to support synaptic plasticity. Long-term potentiation (LTP), the most prominent of those phenomena, has been studied extensively and is closely related to Hebb’s Rule. The term LTP refers to a long-term increase in the strength of synapses in response to specific patterns of activity involving the pre-synaptic and post-synaptic neurons. The opposite of LTP is called long-term depression (LTD). LTP, discovered in the rabbit hippocampus by Terje Lømo (1966), is believed to be among the cellular mechanisms that underlie learning and memory (Bliss and Collingridge 1993; Bliss and Lømo 1973).

Long-term potentiation occurs in a number of brain tissues when adequate stimuli are present, but it has been studied most extensively in the hippocampus of many mammals, including humans (Cooke and Bliss 2006). LTP is expressed as a persistent increase in the neural response in a pathway when neural stimuli with the right properties and appropriate strength and duration are present. It has been shown that the existence of LTP and the creation of memories are correlated, and that chemical changes that block LTP also block the creation of memories. This result provides convincing evidence that long-term potentiation is at least one of the mechanisms, if not the most important one, involved in the formation of long-term memories.

Another phenomenon that may play a significant role in synaptic plasticity is called neural back-propagation. Despite its name, the phenomenon is only vaguely related to the back-propagation algorithm that was mentioned in chapter 5. In neural back-propagation, the action potential that (in most cases) originates at the base of the axon also creates an action potential that goes back, although with decreased intensity, into the dendritic arbor (Stuart et al. 1997). Some researchers believe that this simple process can be used in a manner similar to the back-propagation algorithm, used in multi-layer perceptrons, to back-propagate an error signal; however, not enough evidence of that mechanism has been uncovered so far.
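For contrast with the biological phenomenon, the algorithmic back-propagation mentioned above (and in chapter 5) can be sketched with a tiny two-layer perceptron learning the XOR function. Everything here (the network size, learning rate, and number of iterations) is an arbitrary illustrative choice, not a model of any biological circuit:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8))   # input -> hidden weights
W2 = rng.normal(0.0, 1.0, (8, 1))   # hidden -> output weights
eta = 0.5                           # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: the error computed at the output is propagated
    # back through W2, assigning each hidden unit a share of the blame.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= eta * h.T @ d_out
    W1 -= eta * X.T @ d_h
```

The contrast with neural back-propagation is that here the backward-traveling signal is an explicit error computed at the output layer, whereas the biological phenomenon is an attenuated copy of the action potential itself.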

Synaptic plasticity is not the only mechanism underlying brain plasticity. Although the creation, reinforcement, and destruction of synapses are probably the most important such mechanisms, others are also likely to be involved. Adult nerve cells do not reproduce, so no new neurons are created in adults (except in the hippocampus, where some stem cells can divide to generate new neurons); brain plasticity therefore isn’t likely to come about through the most obvious mechanism, the creation of new neurons. However, mechanisms whereby existing neurons migrate or grow new extensions (axons or dendritic trees) have been discovered, and they may account for a significant part of adult brain plasticity.

Further research on the mechanisms involved in brain plasticity and on the mechanisms involved in the brain’s development will eventually lead to a much clearer understanding of how the brain organizes itself. Understanding the principles behind the brain’s development will require the creation and simulation of much better models of both the activity-independent mechanisms that genes use to control the brain’s formation and the activity-dependent mechanisms that give it plasticity.

To get a better grasp of the current status of brain science, it is interesting to look at the projects under way in this area and at the technologies used to look deep inside the brain. Those are the topics of the next chapter.