3    The Classical Neuron

I met a traveller from an antique land

Who said: “Two vast and trunkless legs of stone

Stand in the desert. Near them on the sand,

Half sunk, a shattered visage lies, whose frown

And wrinkled lip and sneer of cold command

Tell that its sculptor well those passions read

Which yet survive, stamped on these lifeless things,

The hand that mocked them and the heart that fed.”

—Percy Bysshe Shelley (1792–1822), “Ozymandias”

When anatomists began to study the fine structure of the brain, they saw that the neurons seemed to be connected by a dense mesh of fibers: thin fibers, axons, often traveled for long distances, while thicker fibers, dendrites, stayed close to the cell body and often branched extensively. By the end of the nineteenth century, Ramón y Cajal had shown that the neurons are not connected directly to each other but are physically separated.1 Accordingly, for information to leave one neuron and be received by another, a message had to be released from the one, which was then recognized by the other. Although the fibers of different neurons did not form a continuous network, the axons often came very close to cell bodies and dendrites. By the early twentieth century it was recognized that these contacts included sites that we now know as synapses.2 Somehow, it seemed, messages crossed from neuron to neuron at these synapses.

The advent of electron microscopy made it possible to see things invisible to the light microscope, and it became clear that the synapse has many specialized features that, under close interrogation, eventually yielded their identities.

We now know that, at a synapse, the axon terminal is filled with many small vesicles that contain a chemical neurotransmitter, often the excitatory neurotransmitter glutamate or the inhibitory neurotransmitter GABA (gamma-aminobutyric acid). These vesicles can be released by electrical signals that open pores in the terminal membrane, allowing calcium ions to enter. This results in one or more vesicles fusing with the terminal membrane and emptying their contents into the synaptic cleft, the narrow gap between the terminal and the dendrite (figure 3.1).

Figure 3.1

The classical neuron.

The other side of the synapse includes a complex structure, the postsynaptic density, which organizes the dendrite’s response to chemical signals. This contains receptors for neurotransmitters, and it regulates their availability at the postsynaptic membrane. Receptors are proteins with a particular shape that allows certain other molecules to bind to them and thereby trigger some kind of signal. Some receptors are ion channels. When a neurotransmitter binds to one of these, the ion channel opens and a current flows into or out of the dendrite—these receptors convert chemical signals into electrical signals.

As in most cells, the cell body of a neuron contains the nucleus (which contains the cell’s DNA), the rough endoplasmic reticulum (where peptides are assembled), and the Golgi apparatus (which packages peptides into vesicles). The fluid inside a neuron, as in all cells, is not the same as the fluid outside; it contains more potassium and less sodium, because the cell has pumps that import potassium while expelling sodium. Both sodium and potassium ions are positively charged, and, because the pumps expel more sodium than they import potassium, the neuron (like all cells) is electrically “polarized”—electrodes inside and outside it will measure a difference in voltage, with the inside of the cell negative relative to the outside; this difference is the cell’s membrane potential. The normal “resting” membrane potential of a neuron is usually at about −70 mV relative to the outside of the cell. These differences between the inside and outside of a neuron are critical to understanding how neurotransmitters work. When glutamate binds to a receptor on the surface of a neuron, the shape of that receptor changes, and a pore opens to allow sodium ions through. Because the neuron is negatively polarized and its interior is low in sodium, a current carried by sodium ions will enter, raising (“depolarizing”) the membrane potential by a small amount for a few milliseconds.
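
To put rough numbers on these gradients, the sketch below applies the standard Goldman-Hodgkin-Katz voltage equation with typical textbook concentrations and permeabilities; the equation and the particular values are illustrative assumptions, not figures from this chapter, but they show how an interior rich in potassium and poor in sodium, behind a membrane far more permeable to potassium than to sodium, yields a resting potential in the region of −70 mV.

```python
# A sketch of how the ionic gradients described above set the resting
# potential, using the standard Goldman-Hodgkin-Katz voltage equation.
# Concentrations (mM) and relative permeabilities are typical textbook
# values, not figures taken from this chapter.
import math

R, T, F = 8.314, 310.0, 96485.0        # gas constant, body temperature (K), Faraday

ions = {                               # (outside, inside, relative permeability)
    "K":  (5.0, 140.0, 1.00),          # potassium: concentrated inside
    "Na": (145.0, 12.0, 0.05),         # sodium: concentrated outside, low permeability
    "Cl": (110.0, 10.0, 0.45),         # chloride: negative, so its terms are swapped
}

(k_out, k_in, p_k), (na_out, na_in, p_na), (cl_out, cl_in, p_cl) = (
    ions["K"], ions["Na"], ions["Cl"])

numerator = p_k * k_out + p_na * na_out + p_cl * cl_in
denominator = p_k * k_in + p_na * na_in + p_cl * cl_out
v_rest = 1000.0 * (R * T / F) * math.log(numerator / denominator)   # in mV

print(f"predicted resting potential: {v_rest:.0f} mV")   # about -65 mV, near the -70 mV above
```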

A typical neuron receives several thousand synaptic inputs from hundreds of neurons. The dendrites “integrate” these. If enough excitatory synapses are activated, the membrane potential will be depolarized beyond a critical point—the spike threshold. One site in the neuron, usually close to the cell body, contains abundant voltage-sensitive sodium channels, and when the neuron is depolarized enough, these start to open. As sodium enters, it depolarizes the neuron further, more channels open, and more sodium enters. The result is a voltage “spike”—an action potential. As this spike depolarizes the neuron, voltage-sensitive potassium channels open, potassium leaves the neuron, and the neuron repolarizes: in about a millisecond, the spike is over at its site of initiation. However, the spike travels down the axon as an electrical wave, on and on until it reaches the axon terminals. In 1963 Alan Hodgkin and Andrew Huxley were awarded a Nobel Prize for their role in understanding the mechanisms involved, and for producing a mathematical model that encapsulates that understanding.3
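
The Hodgkin-Huxley model itself is compact enough to simulate directly. The sketch below is a minimal single-compartment implementation using the classic squid-axon parameters (standard textbook values, not code from this book): a brief current pulse depolarizes the membrane, sodium channels open regeneratively, the spike fires, and the potassium current repolarizes the cell within a millisecond or two.

```python
# A minimal single-compartment Hodgkin-Huxley simulation with the classic
# squid-axon parameters (standard textbook values). A brief current pulse
# depolarizes the membrane past threshold: sodium channels open, the spike
# fires, and potassium channels repolarize the cell.
import math

C_M = 1.0                                  # membrane capacitance (uF/cm^2)
G_NA, G_K, G_L = 120.0, 36.0, 0.3          # peak conductances (mS/cm^2)
E_NA, E_K, E_L = 50.0, -77.0, -54.4        # reversal potentials (mV)

def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

dt = 0.01                                  # time step (ms)
v, m, h, n = -65.0, 0.05, 0.6, 0.32        # approximate resting state
peak = v
for step in range(int(50.0 / dt)):         # simulate 50 ms
    t = step * dt
    i_ext = 20.0 if 5.0 <= t < 7.0 else 0.0      # brief depolarizing pulse (uA/cm^2)
    i_na = G_NA * m**3 * h * (v - E_NA)          # fast sodium current
    i_k = G_K * n**4 * (v - E_K)                 # delayed-rectifier potassium current
    i_l = G_L * (v - E_L)                        # leak current
    v += dt * (i_ext - i_na - i_k - i_l) / C_M   # membrane equation (Euler step)
    m += dt * (a_m(v) * (1 - m) - b_m(v) * m)    # gating variables relax toward
    h += dt * (a_h(v) * (1 - h) - b_h(v) * h)    # their voltage-dependent targets
    n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
    peak = max(peak, v)

print(f"peak of the action potential: {peak:.0f} mV")   # roughly +40 mV
```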

The information continuously received by a neuron is integrated across time (the effects of each input last a few milliseconds) and space (inputs arrive at different places on the dendrites and cell body). When the integrated signal exceeds the spike threshold, a spike is generated that travels down the axon: when it reaches the axon terminals, the spike will open calcium channels and the calcium entry will cause a neurotransmitter to be released. In this way, the chemical information received by a neuron is transformed into electrical signals, which are themselves resolved into a pattern of spikes that travel down the axon to generate another chemical signal that will be transmitted to other neurons. Thus, in a sense, spikes “encode” the information received by a neuron—but neurons talk to each other not by spikes themselves, but by chemical signals released by the spikes (figure 3.2).
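
A drastically simplified “integrate-and-fire” caricature captures just this logic of summation and threshold. In the toy sketch below (parameters invented for illustration, not taken from the chapter), many excitatory and inhibitory inputs arrive at random, each nudging the membrane potential up or down; their effects leak away over a few milliseconds, and whenever the running sum crosses the spike threshold a spike is recorded and the potential is reset.

```python
# Toy "integrate-and-fire" neuron: inputs from many synapses are summed across
# time and space, and a spike is fired whenever the running total crosses the
# spike threshold. All parameters are illustrative, not taken from the chapter.
import random

random.seed(1)
V_REST, V_THRESH, V_RESET = -70.0, -55.0, -70.0    # mV
TAU, DT = 10.0, 0.1                                # membrane time constant, step (ms)
EPSP_SIZE = 0.5                                    # depolarization per excitatory input (mV)
IPSP_SIZE = 0.5                                    # hyperpolarization per inhibitory input (mV)
N_EXC, N_INH = 800, 200                            # numbers of input synapses
P_INPUT = 5.0 * DT / 1000.0                        # each input fires at ~5 spikes/s

v, spikes = V_REST, []
for step in range(int(500.0 / DT)):                # half a second of bombardment
    n_exc = sum(random.random() < P_INPUT for _ in range(N_EXC))
    n_inh = sum(random.random() < P_INPUT for _ in range(N_INH))
    # leaky integration: each input's effect decays away over a few milliseconds
    v += DT * (-(v - V_REST) / TAU) + n_exc * EPSP_SIZE - n_inh * IPSP_SIZE
    if v >= V_THRESH:                              # threshold crossed: fire and reset
        spikes.append(step * DT)
        v = V_RESET

print(f"fired {len(spikes)} spikes in 0.5 s of synaptic input")
```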

Figure 3.2

Generating spikes. Neurons are electrically polarized—at rest, the electrical potential inside a neuron is about 60–70 mV negative with respect to the outside. Neurotransmitters disturb this state by opening ion channels in the membrane, allowing brief currents to flow into or out of the neuron. The inhibitory transmitter GABA opens chloride channels—these cause negatively charged chloride ions to enter the cell (chloride is much more abundant in the extracellular fluid), and this current hyperpolarizes the neuron, causing an IPSP (an inhibitory postsynaptic potential). Conversely, the excitatory neurotransmitter glutamate causes sodium channels to open, and this causes the positively charged sodium ions to enter the neuron, depolarizing it and causing an EPSP (an excitatory postsynaptic potential). If a flurry of EPSPs causes a depolarization that exceeds the neuron’s spike threshold, the neuron will fire a spike (an action potential). After a spike, neurons are typically inexcitable for a period because of a short hyperpolarizing afterpotential, and this limits how fast a neuron can fire. Trains of spikes can cause complex long-lasting changes in excitability—depolarizing afterpotentials or hyperpolarizing afterpotentials and combinations of these. These cause neurons to discharge in particular patterns of spikes. These properties vary considerably between different neuronal types.

For neuronal networks to become efficient at the tasks they execute, the synapses must “learn” from experience.4 If a task requires one neuron to cause another to fire a spike, then, for that task to be fulfilled more efficiently, experience must strengthen the synapse between the two neurons. This can be achieved either by the release of more neurotransmitter from the axon terminals or by an increase in the sensitivity of the dendrite. Both of these types of synaptic plasticity are common, as are many others. Synapses are not fixed forever: the neuronal circuits in our brain are continually being restructured throughout our lives. At every moment some synapses are being pruned while others are proliferating: yesterday’s brain was not quite today’s.

Experience-dependent learning requires that some signal causes certain synapses to be strengthened. Many “retrograde” signals pass back from the dendrite to the axon terminal. One is nitric oxide; produced in many neurons when they are activated, this gas diffuses back into impinging axon terminals. Other retrograde signals include adenosine, prostaglandins, neurosteroids, and many neuropeptides. If a signal from one neuron is immediately followed by a spike in a neuron that it contacts, then a retrograde signal might strengthen the synapses between them, and this can make particular neuronal circuits more efficient. This is how, for neuronal networks, practice makes perfect. Conversely, if spike activity in two neurons is consistently uncorrelated then the synapses between them can become weakened or eliminated. Other “reward signals” from more distant neurons may signal that the flow of information through a network has been productive, and can reinforce all of the synapses in the chains that have been recently active. One of these signals is the neurotransmitter dopamine, released in the “reward circuits” of the brain, about which I will say more later.
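
A toy version of this correlation-based rule (invented parameters, not a model from the chapter) makes the logic concrete: a synaptic weight is nudged up whenever a presynaptic input is immediately followed by a postsynaptic spike, nudged down when activity in the two cells is uncorrelated, and kept within bounds.

```python
# Toy Hebbian plasticity: a synapse is strengthened when presynaptic input is
# immediately followed by a postsynaptic spike, and weakened when activity in
# the two neurons is uncorrelated. Rule and parameters are illustrative only.
import random

random.seed(0)
LEARN_UP, LEARN_DOWN = 0.05, 0.01       # potentiation / depression step sizes
W_MAX = 1.0

def simulate(correlation, steps=2000):
    """Return the final weight when the postsynaptic neuron fires with the
    given probability right after a presynaptic spike (and otherwise fires
    only at a low background rate)."""
    w = 0.5
    for _ in range(steps):
        pre = random.random() < 0.2                      # presynaptic spike?
        post = random.random() < (correlation if pre else 0.05)
        if pre and post:
            w = min(W_MAX, w + LEARN_UP)                 # "practice makes perfect"
        elif pre or post:
            w = max(0.0, w - LEARN_DOWN)                 # uncorrelated: weaken
    return w

print(f"correlated pair:   w = {simulate(0.8):.2f}")     # climbs toward W_MAX
print(f"uncorrelated pair: w = {simulate(0.05):.2f}")    # decays toward zero
```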

This sketch of the brain, so thinly summarizing a century of endeavor, presents the brain as a computational structure whose power resides in its scale (the numbers of neurons and synapses), in its complexity (the connections that each neuron makes to other neurons), in the ability of neurons to compute rapidly (in milliseconds), and in the ability of neuronal networks to modify their connections in the light of experience.

By the end of the twentieth century an important refinement in our understanding of neurons had become accepted. The spike activity of neurons does not simply reflect the information received: it is also determined by their intrinsic properties, properties that vary from neuron to neuron.5,6 Some neurons generate spikes regularly without any synaptic input: in these, synaptic activity does not determine the exact timing of spikes but modulates the average frequency of their discharge. Other neurons show prolonged activity-dependent changes in excitability. Some become more excitable the more they are activated, so excitatory inputs trigger not single spikes but bursts of spikes; some show regular short bursts, others activity that waxes and wanes. Still others show long bursts separated by long silences. Sometimes the bursts begin with a peak of intensity, sometimes they end on a peak: the varieties seem endless.
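
The first of these intrinsic properties, regular firing without input, is easy to caricature. In the toy sketch below (invented parameters, not a model from the chapter), a constant intrinsic “pacemaker” drive makes the cell fire rhythmically on its own, and adding excitatory input raises the rate of an ongoing rhythm rather than dictating the timing of individual spikes.

```python
# Toy pacemaker neuron: a constant intrinsic drive makes the cell fire
# regularly with no synaptic input at all; synaptic input changes the rate
# but not the fact of firing. Parameters are invented for illustration.
def firing_rate(synaptic_drive=0.0, duration_ms=1000.0):
    V_REST, V_THRESH, V_RESET, TAU, DT = -70.0, -55.0, -70.0, 10.0, 0.1
    PACEMAKER_DRIVE = 2.0                      # intrinsic depolarizing drive (mV/ms)
    v, spikes = V_REST, 0
    for _ in range(int(duration_ms / DT)):
        v += DT * (-(v - V_REST) / TAU + PACEMAKER_DRIVE + synaptic_drive)
        if v >= V_THRESH:                      # threshold reached: fire and reset
            spikes += 1
            v = V_RESET
    return spikes / (duration_ms / 1000.0)     # spikes per second

print(f"no synaptic input:     {firing_rate(0.0):.0f} spikes/s")
print(f"with excitatory input: {firing_rate(0.5):.0f} spikes/s")
```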

Spike activity also depends on the mini-networks that neurons form with close neighbors. One common network motif comprises an excitatory neuron connected to an inhibitory interneuron that projects back to it; this can convert a continuous input into a transient output or into a sequence of bursts. Thus neurons, through their intrinsic properties and by the mini-networks that they form, transform the information they receive in diverse ways.
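
A two-unit rate model (again a sketch with invented parameters, not anything from the chapter) illustrates the motif: a sustained input drives the excitatory unit, the inhibitory interneuron builds up more slowly and feeds back, and the excitatory unit’s output becomes a transient peak that sags toward a much lower sustained level.

```python
# Toy rate model of the motif described above: an excitatory unit (E) drives
# an inhibitory interneuron (I) that projects back onto it. A step of constant
# input produces a transient peak in E rather than a sustained plateau.
# All parameters are invented for illustration.
TAU_E, TAU_I = 5.0, 50.0            # time constants (ms): inhibition builds up slowly
W_EI, W_IE = 1.0, 2.0               # strength of E->I and I->E connections
DT, DRIVE = 0.1, 10.0               # time step (ms); constant input to E from t = 0

e, i = 0.0, 0.0
trace = []
for _ in range(int(500 / DT)):
    e += DT * (-e + max(0.0, DRIVE - W_IE * i)) / TAU_E   # E: fast, driven, inhibited by I
    i += DT * (-i + W_EI * e) / TAU_I                     # I: slow, driven by E
    trace.append(e)

print(f"peak response: {max(trace):.1f}   sustained response: {trace[-1]:.1f}")
```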

In 1959, Jerry Lettvin and colleagues published “What the Frog’s Eye Tells the Frog’s Brain” in the Proceedings of the Institute of Radio Engineers.7 This paper became a classic both for the insight it provided and for its prose. Here is an extract from the introduction:

The frog does not seem to see or, at any rate, is not concerned with the detail of stationary parts of the world around him. He will starve to death surrounded by food if it is not moving. His choice of food is determined only by size and movement. He will leap to capture any object the size of an insect or worm, providing it moves like one. He can be fooled easily not only by a bit of dangled meat but by any moving small object. His sex life is conducted by sound and touch. His choice of paths in escaping enemies does not seem to be governed by anything more devious than leaping to where it is darker. Since he is equally at home in water and on land, why should it matter where he lights after jumping or what particular direction he takes?

This paper showed that the outputs of the frog retina express the visual image in terms of (1) local sharp edges and contrast; (2) the curvature of the edge of a dark object; (3) the movement of edges; and (4) the local dimmings produced by movement. In the retina, information processing involves discarding everything irrelevant to the two things that matter most: whether there is food to be had or danger to be avoided. This parsimony does not reflect anything simple or primitive about the frog’s eye: the frog retina contains about a million receptor cells and about 4 million neurons, and frogs have been molded and tempered by the fires of fortune and natural selection for the same millennia that mammals have, and are better at being frogs than any mammal could be.

The processing involves resolving edges and changes. For a neuron, resolving an edge involves no more than comparing its activity with that of its immediate neighbors. Resolving changes is even more straightforward: neurons generally are good at responding to changes; maintaining a steady signal is harder, because when a neuron is excited by a constant stimulus, its activity generally declines. There can be many reasons for this: in the case of the photoreceptors in retinas, adaptation is a result of bleaching of the photopigment and is unavoidable.
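
Both operations reduce to simple comparisons. In the toy sketch below, which illustrates the arithmetic rather than the frog’s actual circuitry, “edges” are found by comparing each point of a one-dimensional image with its immediate neighbor, and “movement” by comparing each frame with the previous one; everything uniform and unchanging disappears from the output.

```python
# Toy illustration of the two retinal operations described above: detecting
# edges by comparing neighbors, and detecting change by comparing successive
# frames. Not frog circuitry, just the arithmetic of the two comparisons.

frame_1 = [0, 0, 0, 9, 9, 9, 9, 0, 0, 0]   # a bright "object" on a dark ground
frame_2 = [0, 0, 0, 0, 9, 9, 9, 9, 0, 0]   # the same object, shifted one step

def edges(frame):
    """Spatial comparison: each point minus its left-hand neighbor."""
    return [frame[i] - frame[i - 1] for i in range(1, len(frame))]

def change(new, old):
    """Temporal comparison: each point minus its value in the previous frame."""
    return [n - o for n, o in zip(new, old)]

print("edges in frame 1:", edges(frame_1))            # nonzero only at the object's borders
print("movement signal: ", change(frame_2, frame_1))  # nonzero only where the edge moved
```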

Is our vision like that of the frog? Richard Dawkins, in Unweaving the Rainbow (1998), conceived that our retina, like that of the frog, only reports changes in what we see and that our brains reconstruct images by keeping track of those changes. It is true that the rods and cones in our retinas adapt rapidly to sustained activation, so any image that is fixed on any part of the retina will fade. However, when we “fixate” on some object of interest, the stability of our gaze is an illusion: our eyes are never still. “Fixational eye movements”—microsaccades, ocular drifts, and ocular microtremor—mean that whenever we look steadily at something, the retinal image is constantly “jittering,” however still we perceive it to be. The fixational eye movements “work around” the problem of photobleaching to maintain a constant image of a constant scene. When lovers gaze into each other’s eyes what they see is not an illusion, though what they read there might be. However, that constant image is encoded not in a stable collage of active retinal neurons, but in a constantly flickering and jittering map.

The properties of each neuron determine whether it will adapt to a sustained stimulus or maintain a constant response.7 These properties determine the pattern of spikes that a neuron generates—the particular sequence of spikes that a neuron fires in response to a given stimulus. The sequence matters for many reasons, but especially because spikes that are clustered together release more neurotransmitter than the same number of spikes generated sparsely, and because the effects of packets of neurotransmitter released in quick succession can summate to give a stronger signal to the next neuron in the chain. Thus mechanisms that govern the generation of patterns of spiking activity are of major importance.

The first insight came from Herbert Gasser and Joseph Erlanger, who in 1944 were awarded the Nobel Prize in Physiology or Medicine for showing that spikes in peripheral nerves were followed by slow “afterpotentials” that produced prolonged changes in excitability.8 They saw that these properties would also be present at the site of spike initiation, implying that neurons might retain some “memory” of recent spike activity that would affect how they responded to synaptic input. We now know that, in different neurons, spike activity generates different sequences of afterpotentials that can generate complex patterns of spiking. Hyperpolarizing afterpotentials make neurons less likely to fire again after they have been active, while depolarizing afterpotentials make them more likely to fire again. In some neurons, a flurry of inputs that lasts just a second might trigger a burst of spikes that lasts longer than a minute. This might be useful, as a way of holding for a while the memory of a transient event, but when a neuron responds to an input by a stereotyped burst of spikes, any information that was present in the fine structure of the input is irretrievably lost.

Whenever a signal passes from one neuron to another, information is lost. A sensory cell might respond to a stimulus by a fluctuating electrical signal that reflects the sensory signal, but when that signal is transformed into a train of spikes, the details of the fluctuations are lost. When the train reaches a synapse, it is converted to another fluctuating chemical signal, and this has an inconsistent relationship to the train of spikes that evoked it: it depends on the availability of vesicles for release, which fluctuates stochastically. In the postsynaptic cell, that chemical signal activates receptors and is transformed again into a fluctuating electrical signal with further loss of fidelity. That new signal, along with signals from thousands of other synapses, triggers a new train of spikes—and the patterning of that new train is influenced by the intrinsic properties of the neuron. Some neurons generate bursts: short fast bursts, long slow bursts, complex fractured bursts—bursts occur in different neurons with a seemingly endless variety. Other neurons that have a “pacemaker potential” fire with a metronomic regularity; many others fire seemingly at random. The spiking activity of neurons in the brain has an inconsistent, gossamer-like association with information in the sense in which we would normally understand it.
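
The unreliability of this relay is easy to caricature. In the toy sketch below (invented numbers, not a model from the chapter), the same train of spikes is delivered twice to a synapse with a small, slowly replenished pool of vesicles, each released with some probability; the two “postsynaptic” signals differ from trial to trial even though the input is identical.

```python
# Toy stochastic synapse: the same train of spikes is delivered twice to a
# terminal with a small, slowly refilled pool of vesicles, each released with
# some probability per spike. The "chemical" output differs between trials
# even though the input is identical. Numbers are invented for illustration.
import random

SPIKE_TIMES = [5, 10, 15, 20, 40, 45, 50, 80]   # input spike train (ms), fixed
RELEASE_PROB = 0.3                              # chance each available vesicle is released
POOL_SIZE = 10                                  # readily releasable vesicles
REFILL_RATE = 0.1                               # vesicles recovered per millisecond

def deliver(train, seed):
    random.seed(seed)
    pool, released, last_t = float(POOL_SIZE), [], 0
    for t in train:
        pool = min(POOL_SIZE, pool + REFILL_RATE * (t - last_t))        # slow refilling
        n = sum(random.random() < RELEASE_PROB for _ in range(int(pool)))
        pool -= n                                                       # pool is depleted
        released.append(n)
        last_t = t
    return released

print("trial 1, vesicles per spike:", deliver(SPIKE_TIMES, seed=1))
print("trial 2, vesicles per spike:", deliver(SPIKE_TIMES, seed=2))
```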

If connections between neurons were all purposeful and all activity were meaningful, then this vast network would indeed have a massive capacity for processing information. The human brain has at least 80 billion neurons, each of which makes, on average, 10,000 connections with other neurons—8 × 10¹⁴ points of information transfer. Many neurons can fire 200 spikes per second, although few do so except rarely and briefly. But if each neuron fired a spike every 5 milliseconds, the rate of information processing might seem to be between 10¹⁶ and 10¹⁷ calculations per second.
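
Spelling out that back-of-envelope arithmetic, using only the figures quoted above:

```python
# The chapter's back-of-envelope estimate, spelled out with its own figures.
neurons = 80e9                     # at least 80 billion neurons
connections_per_neuron = 10_000    # average connections made by each neuron
max_rate = 200                     # spikes per second (one spike every 5 ms)

synapses = neurons * connections_per_neuron
events_per_second = synapses * max_rate

print(f"points of information transfer: {synapses:.0e}")            # 8e+14
print(f"events per second at the maximal rate: {events_per_second:.1e}")
# about 1.6e+17 at the maximal rate; the lower end of the range quoted above
# corresponds to neurons firing far less often than 200 spikes per second.
```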

At the rate at which the power of computers is increasing, desktop computers might soon do better than this. Gordon Moore pointed out in 1965 that, every year since the integrated circuit was invented, the number of transistors per square inch had doubled, and he predicted that this trend would continue—a prediction that became known as Moore’s law.9 In 1965, chips had about 60 transistors; by 2014, the 15-core Xeon Ivy Bridge-EX had more than 4 billion. Computational capacity is measured in FLOPS—floating-point operations per second. In 1997, 10⁹ FLOPS cost about $42,000; by 2012 this had fallen to about 6 cents, and, in 2016, a supercomputer in China became the fastest in the world at 10¹⁷ FLOPS. If costs continue to fall and power continues to increase at present rates, $1,000 will buy 2 × 10¹⁷ calculations per second by 2023, and desktop computers will apparently have a computational capacity equivalent to that of the human brain.
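
For the cost trend, the figures quoted above already imply a strikingly short halving time; the sketch below is simply arithmetic on those numbers, not an independent estimate.

```python
# Arithmetic on the cost figures quoted above: how quickly did the price of
# 1e9 FLOPS fall between 1997 and 2012?
import math

cost_1997, cost_2012 = 42_000.0, 0.06      # dollars per 1e9 FLOPS
years = 2012 - 1997

factor = cost_1997 / cost_2012             # total fall in price
halvings = math.log2(factor)               # number of successive halvings
print(f"price fell by a factor of about {factor:,.0f}")
print(f"one halving roughly every {years / halvings:.1f} years")
```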

However, this calculation of the brain’s computational power is misleading. The 10,000 outputs of a neuron are not independent: they all carry the same information, and they don’t go to 10,000 different neurons but usually to just a few hundred at most. Synapses are unreliable, neurons are typically noisy and erratic, and most neurons never fire at anything like 200 spikes per second—indeed, many are not capable of doing so. The brain also has extensive redundancy: Parkinson’s disease has few symptoms until about 85% of the neurons in the substantia nigra have degenerated. Spiking activity in the brain has nothing like the computational capacity claimed for it.

The brain determines how our bodies will respond to changes in our environment, and all of its neurons contribute, to some extent, to the generation of all behaviors. Exactly how they contribute depends on their intrinsic properties, which are heterogeneous, and on their connections with other neurons, which are part rule-governed, part molded by the contingencies of experience, and part accidental. The agents within our brain are messy and unreliable products of genetics, epigenetics, accident, and chance. This makes any analogy between digital computers and the brain misleading.

The era of digital computers might soon be closing. Transistors are built from thin layers of silicon that form logic gates by allowing a current to pass when a small control voltage is applied but not otherwise. If the layers are too thin, “electron tunneling” will occur: a current will erratically find its way through even without any applied voltage. We are fast approaching this hard limit to miniaturization.

On the horizon, quantum computing promises vast increases in computing power. Whereas conventional logic gates have two states, “true” or “false,” quantum devices have many possible states, and, because they are built on a molecular scale, the density of packing of devices is practically unlimited. Quantum devices will be noisy and unreliable, and new computer architectures must be developed that are self-organizing, fault-tolerant, and self-correcting, and that function robustly in the presence of noise. New software must find ways of coping with uncertainty in the operation of individual elements. These are characteristics of the brain, which should be a source of inspiration for new architectures and systems: while computers might soon be powerful enough to help us understand the brain, we might have to understand the brain better before we can program them. This prospect challenges the pretensions of neuroscience: do we understand much, or do we just know a lot? Is our understanding of the brain anything more than a lazy borrowing from a (possibly mistaken) understanding of how digital computers process information?

The classical view is a monumental edifice, a monumental achievement. Artificial neural networks, constructed by analogy with this classical view,10 drove progress in artificial intelligence and machine learning with wide applications in data analysis and robotics. Such networks are magnificent calculators, exquisite rational agents. But where in this edifice are our passions—where is the “heart” of the brain?

Shelley’s poem “Ozymandias” ends,

And on the pedestal these words appear:

“My name is Ozymandias, king of kings:

Look on my works, ye Mighty, and despair!”

Nothing beside remains. Round the decay

Of that colossal wreck, boundless and bare,

The lone and level sands stretch far away.

Notes