Neurons transmit information by firing action potentials and connect to other neurons via synapses. As we have just seen, an action potential is a short change in the membrane voltage of the cell (usually a millisecond or so in duration). If we placed a wire or other conducting medium near that neuron when it fired its action potential, and attached a sensitive enough instrument to that wire, the action potential would be detectable as a quick voltage spike. There are many technologies that allow us to see signals that reflect this activity. These technologies allow us to (in effect) read the animal’s mind: to ask what it is thinking about, to see what it sees, and to hear what it hears. We will also see technologies that allow us to manipulate (at a limited level) those physical signals and change what is in the animal’s mind, what it sees, and what it hears.
Each neuron fires spikes carrying information about some aspect of the world: some sensory signal, some motor action to be taken, some internal process, or some combination of all three. One of the key questions in behavioral neuroscience is: What information does a neuron’s spiking pattern encode?
We generally answer this question by relating the neural firing of a cell to some behavioral variable through a tuning curve. Classically, a tuning curve defines the likelihood that you will see spikes from a given neuron when an event occurs. For example, an auditory cell in the primary auditory cortex may be tuned to a specific frequency of sound. If you hear a tone at that frequency, that cell will fire spikes. Other cells will be tuned to other aspects of that sound, such as its timbre, or even to the specific song if you recognize it. A visual cell in the primary visual cortex may be tuned to a spot of light, but other cells in deeper visual cortices (sometimes called “higher-order cortices” because their representations require more processing) are tuned to faces, trees, or other objects.
It is important to note that a cell’s tuning curve does not explain why the cell is tuned to that information in the world, only that the cell’s firing is correlated with it. Visual cells are tuned to visual signals because light impacts the retina, which fires the retinal ganglion cells, which transmit information to the thalamus, which then sends signals to the visual cortex, making the spikes fired by the neuron in visual cortex correlated with the visual input. Primary motor cortex cells are tuned to specific muscle movements because they send their output to motor-controlling neurons in the spinal cord, so their firing influences the muscle and their spikes are correlated with its movements.
Many neuroscientists will say that tuning curves can only be applied to behavioral variables that directly relate to the immediate world of the animal: sensory signals (following a flash of light, a specific sound) or motor actions (preceding the movement of a specific muscle). But the brain is a physical object. When you think of your grandmother, some cells are firing that represent the information about your grandmother. If you think about what she looked like the last time you saw her, cells are firing that represent information about her face. If you think about her voice, cells are firing that represent information about what she sounded like.
Much of the older popular neuroscience literature talked about the “grandmother cell,” implying that there was a single cell in the brain that fired to thoughts of your grandmother and only to your grandmother. As we will see later, this hypothesis is incorrect. Representations in the brain are distributed. Some cells will fire more to women than to men, other cells to old people more than young, and still other cells to specific aspects of your grandmother such as her voice or her face, and it is that set of cells that represents the concept of your grandmother, not any one of those cells.
To understand the concept of a distributed representation, imagine turning on the television to find it showing a football game. You want to know which teams are playing because you want to decide if it’s a game you want to watch, but the camera is just passing over the audience. (To make this thought-experiment work, you’ll need to ignore the fact that modern television sports programs have little text-boxes in the corner of the screen that tell you who’s playing and what the score is.) From the clothes of the fans, you could probably make a very good guess about the two teams. Looking at any one fan won’t tell you. Maybe that person just happens to like that team. Fans might not be wearing the team jersey, but they’ll likely be wearing clothes with the team colors. If you notice that most of the fans are wearing a specific team’s colors, that’s a pretty good indication of which teams are playing. You’ll probably notice that there are two teams’ jerseys and colors that are much more commonly represented. The fans are representing information about which teams are playing. In fact, you can probably determine which team is doing well at a given moment by the expressions on the faces of one team’s fans versus another. The fans are a distributed representation of the two teams—you can learn a lot about the game just by watching the fans.
Mathematically, a cell can represent any variable as long as its spiking is reliably related to that variable. For example, cells in the rodent hippocampus are tuned to the location of the animal.1 As the animal runs around an environment, a given place cell will fire spikes in a preferred location (the place field of the cell). This is not a sensory response, nor is it a motor initiation signal; it is actually a representation of location.A
Tuning curves often entail firing strongly to a single preferred stimulus, with firing that falls off as the stimulus differs from that preferred stimulus. For example, auditory neurons in the primary auditory cortex have a preferred frequency at which they fire the most spikes; as the presented sound frequency moves away from that preferred frequency in either direction, the number of spikes that the cell fires decreases.2 Each auditory neuron has a different preferred frequency. When you hear a note, each cell will fire spikes reflecting the harmonics of that note. Place cells in the hippocampus have a preferred location. At the center of that place field, the cell fires the most spikes most reliably; at increasing distances from that place field, the cell fires fewer and fewer spikes, until it becomes quiet. Each place cell is tuned to a different location in space, and to different locations in different environments. The representation of a position in the environment entails a distribution of cells firing, some (with preferred locations near the position) firing a lot and some (with preferred locations far from the position) firing only a little.
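To make the shape of such a tuning curve concrete, here is a minimal sketch in Python of a bell-shaped tuning curve; the preferred frequency, tuning width, and peak firing rate are illustrative assumptions, not measured values.

```python
# A minimal sketch (illustrative, not from the text): expected firing rate
# falls off as the stimulus moves away from the cell's preferred value.
import numpy as np

def tuning_curve(stimulus, preferred, width=1.0, peak_rate=20.0):
    """Expected firing rate (spikes/s) for a given stimulus value."""
    return peak_rate * np.exp(-0.5 * ((stimulus - preferred) / width) ** 2)

# An "auditory" cell preferring 440 Hz, with tuning expressed on a
# log-frequency axis (all numbers here are made up for illustration).
frequencies = np.array([220.0, 330.0, 440.0, 660.0, 880.0])
print(tuning_curve(np.log2(frequencies), np.log2(440.0), width=0.5))
# Rates peak at 440 Hz and fall off toward neighboring frequencies.
```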
However, tuning curves don’t have to have a single preferred stimulus.3 All that matters is that the spiking of the cell be reliably informative about the behavioral variable. Because information is about defining groups, this is equivalent to saying that the behavioral variable must be reliably informative about the spiking times of the cell.4
This means that we can determine information about the animal’s representation of the world from the firing of cells. Let’s go back to place cells. Each place cell is tuned to a different location in space. This means that we can invert the tuning curve—if we know what location is represented by that place cell, then when that place cell fires spikes, we can infer that the animal is representing information about that location. However, a typical place field is pretty large. (In rats, place fields can range in size from 20 centimeters to meters in length.5) Even more importantly, the relationship between firing spikes and the location of the animal is not 100% reliable. Sometimes a cell fires spikes when the animal is outside the cell’s place field; sometimes the cell doesn’t fire when the animal is in the place field. But if we have two cells with overlapping place fields, we can be more accurate, because we can determine whether the first cell is firing but not the second, whether the second cell is firing but not the first, or whether both are firing. Current technologies allow us to record as many as several hundred cells simultaneously from awake, behaving animals. By integrating information from hundreds of cells, we can decode location quite accurately.6 From several hundred cells, we can typically decode location to an accuracy of better than 1 centimeter! We can even determine when the cells are representing a location different from the animal’s current location;7 we can determine what location the animal is imagining or thinking about. We can read the rat’s mind.
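Here is a minimal sketch of the kind of population decoding described above, under two common simplifying assumptions (Gaussian place fields and independent Poisson spiking); the track length, number of cells, field widths, and firing rates are all made up for illustration.

```python
# A minimal sketch of decoding position from a population of "place cells",
# assuming known Gaussian tuning curves and Poisson spiking.
import numpy as np

n_cells, n_positions = 50, 100
positions = np.linspace(0.0, 1.0, n_positions)      # a 1-m track, discretized
centers = np.linspace(0.0, 1.0, n_cells)            # each cell's place-field center
rates = 15.0 * np.exp(-0.5 * ((positions[None, :] - centers[:, None]) / 0.05) ** 2) + 0.1
# rates[i, x] = expected firing rate of cell i when the animal is at position x

def decode(spike_counts, rates, dt=0.25):
    """Maximum-likelihood position estimate from one window of spike counts."""
    expected = rates * dt                            # expected counts per cell per position
    # log P(counts | position) under independent Poisson firing (constants dropped)
    log_like = (spike_counts[:, None] * np.log(expected) - expected).sum(axis=0)
    return np.argmax(log_like)

# Simulate one 250-ms window with the animal at position index 60, then decode it.
rng = np.random.default_rng(0)
true_idx = 60
counts = rng.poisson(rates[:, true_idx] * 0.25)
print(positions[decode(counts, rates)], "vs true", positions[true_idx])
```

With more cells and narrower fields, the estimate sharpens, which is the intuition behind decoding location to centimeter accuracy from hundreds of recorded neurons.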
Since we’ve already talked about recording electrical signals and identifying neural firing patterns from them, a good next signal to discuss is the local field potential (LFP). If one takes the same electrical signal appearing on the electrodes and listens to the low-frequency components instead of the high-frequency components, one can record LFPs.B Currently, no one is completely sure how LFPs are generated, but they seem to be related to the slow electrical currents arising from synapses.8
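As a rough illustration of that filtering step, here is a minimal sketch (with an assumed sampling rate and assumed cutoff frequencies) showing how the same raw voltage trace can be split into a low-frequency LFP band and a high-frequency spike band.

```python
# A minimal sketch: the same raw electrode signal yields spike waveforms when
# high-pass filtered and the LFP when low-pass filtered. Cutoffs are typical
# values, assumed for illustration.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 30000.0                          # sampling rate of the raw recording (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)
raw = np.random.default_rng(0).normal(size=t.size)   # stand-in for a raw voltage trace

def bandpass(x, lo, hi, fs, order=3):
    sos = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, x)

lfp = bandpass(raw, 1.0, 300.0, fs)            # keep low frequencies -> local field potential
spike_band = bandpass(raw, 600.0, 6000.0, fs)  # keep high frequencies -> spike waveforms
```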
In many brain structures, like the hippocampus and the basal ganglia, nearby cells do not tend to process the same information. In these structures, LFP signals do not carry information about the task at hand, but they do carry information about how the brain is processing that task.9 These LFPs tend to indicate the state of the system and most likely reflect different processing regimes. For example, when you are attentive, actively listening to someone talking to you or actively trying to understand new information you are reading about, the LFP in your hippocampus shows a 7 to 10 Hz, almost sinusoidal rhythm, called “rhythmic slow activity” (RSA) or theta (for the frequency band it covers). In contrast, if you are fading out, not really listening, losing your attention, nodding off, the LFP in your hippocampus shows a broader-spectrum, arrhythmic activity called “large amplitude irregular activity” (LIA).10
In contrast, in the mammalian neocortex, nearby cells do process similar information.11 (One of the most remarkable things about the mammalian cortex is that although the brain itself is unquestionably three-dimensional in its connectivity and its structure, the cortex itself is more of a sheet than a three-dimensional structure. The six layers of cortex form a thin sheet that changes tremendously in area but not much in depth between mammals. Evolution seems to be taking a specific template and copying it in two dimensions to make larger and larger cortices.12) Because the processing units in cortex are arranged in small 0.3-millimeter-wide columns, where all the cells in a column are processing similar things, there is evidence that LFPs in the cortex can carry signals about the information in that column.13
Imagine you are trying to understand the game of American football but you can’t see the game. All you have is a microphone, which you could lower to listen to a single person. Perhaps you get to listen to the quarterback or the coach and learn a lot about the game. This is like single-cell recording, where you find that the signal on your microphone tells you a lot about the behavioral variable (the game). On the other hand, you might end up listening to the drunk fan in the third row, or the businessman discussing stock sales, and not find much relationship to what is going on in the game. If, however, you pull your microphone up and listen to the roar of the crowd, you can also learn about the game. If you hear the crowd chanting “Defense! Defense!” or the crowd being really quiet, you can determine pretty accurately which team is on offense when. You can learn when a really good (or really bad) play happens just by listening to the volume of the crowd. That’s like LFP in the hippocampus or the basal ganglia. If all the fans from one team are sitting together and all the fans from the other team are sitting together, you might even be able to tell which team is winning by the volume of the shouts of each group. That’s like LFP in the cortex.
Many people are familiar with a form of LFP recorded outside the skull: the electroencephalogram (EEG).14 EEG electrodes are placed around the scalp and record the same sorts of waves as LFPs. Just as with LFPs, EEGs are primarily used to categorize brain states. Many people, for example, are familiar with slow-wave and REM sleep, which can be identified by EEG oscillations. Different moods and attention states can also be measured by EEG changes. For example, calm, relaxed states are often identified with EEG rhythms (arising from the cortex) in the 8 to 12 Hz range (confusingly called “alpha waves” in the EEG literature and “theta waves” in the LFP literature).
People can learn to control their EEG rhythms.15 There are now games available that use EEG signals to control balls or lights. Touted as allowing one to “control the ball with your mind,” these games are measuring EEG signals from a headset and using the ratio of the power in different frequency regimes (usually alpha [8–12 Hz] to beta [12–30 Hz]) to change the speed of a fan that blows a ball up or down.16
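Here is a minimal sketch of that power-ratio computation, assuming a single EEG channel sampled at 256 Hz; the band limits come from the ranges mentioned above, but the channel, sampling rate, and how the ratio maps onto fan speed are assumptions.

```python
# A minimal sketch of the alpha/beta power ratio described above, using
# Welch's method to estimate band power from one EEG channel.
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))
    band = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[band], freqs[band])

def alpha_beta_ratio(eeg, fs=256.0):
    alpha = band_power(eeg, fs, 8.0, 12.0)     # "calm, relaxed" band
    beta = band_power(eeg, fs, 12.0, 30.0)     # "active" band
    return alpha / beta                        # the game maps this ratio onto fan speed

# e.g., with 4 seconds of noise standing in for a real recording:
rng = np.random.default_rng(0)
print(alpha_beta_ratio(rng.normal(size=1024), fs=256.0))
```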
When EEG signals are measured in response to discrete events, rather than by looking at the ongoing frequencies of the oscillation, they are sometimes referred to as event-related potentials (ERPs).17 They occur when a single event drives enough activity in the brain to be detected as a sharp change in the EEG.
One of the most common ERP signals studied is the error-related negativity (ERN), in which EEG recorded from the frontal part of the scalp changes in response to making an error.18 Interestingly, this activity occurs approximately 100 milliseconds after one starts the movement, which can be long before the movement is complete. (The ERN seems to reflect that “oops, that’s wrong” signal we sometimes feel when we react too quickly.) We now know that the ERN arises from signals in the anterior cingulate cortex, which is involved in monitoring actions and allowing one to take a different action than one usually might. (See, for example, the discussion of overcoming a phobia and facing one’s fears in our discussion of self-control in Chapter 15.)
The primary information-measuring experiments in this book come from animal experiments because we have the technology to record from single neurons and neuronal ensembles in animals.C Such technology works in humans as well, but it requires implanting wire electrodes into the brain; therefore, it is done only as part of clinical treatment.
Sometimes scientists are allowed to piggyback onto the clinical implantation and (with the permission of the subject and the oversight of a review board) record from electrodes implanted for other reasons. Two examples where such electrodes are routinely implanted are for the determination of the initiation sites of epileptic seizures and for deep-brain stimulation for the treatment of Parkinson’s disease.
Briefly, epilepsy entails neurons going haywire. The first treatment attempted is generally antiseizure drugs, but that works for only a limited portion of the epileptic population.19 If pharmacological treatments don’t work, an alternate option is to literally burn away the part of the brain going haywire. Of course, damaging the wrong part of the brain is extremely problematic (we have seen some tragic examples in this book), and physicians are very careful to ensure that they target only the starting point of the seizures and that the part they are going to remove is not involved in critical intellectual faculties. To do this, they implant electrodes and record single neurons and LFPs as the subject does tasks.20 Since having the electrodes in one’s brain is not a painful process and recording from neurons does not damage the brain, many patients are happy to do a few extra cognitive tasks and to allow scientists to record from their brains during the procedure.
These recordings, however, can be taken in only very limited situations (during the rare times when a patient has a clinical reason for having electrodes implanted) and in patients with dysfunctional brains (that’s why they’re getting the clinical treatment). To study normal function in nonpatients, we need a technology capable of recording neural signals from awake subjects. We’ve already discussed EEG. There is a related measurement technology called magnetoencephalography (MEG), which works like EEG but uses the magnetic rather than the electric component of the brain’s electromagnetic signals. Although MEG is less distorted by the skull than EEG, both MEG and EEG suffer from a limited ability to localize the signal. These technologies can target “frontal cortex” versus “visual cortex” (which is in the far back of the brain), but they can’t tell you what is going on in the shape-recognition component of the visual cortex, or the striatum of the basal ganglia, or the hippocampus.
If you’ve been following neuroscience over the past few decades in the popular press, you are probably wondering why I haven’t brought in functional magnetic resonance imaging (fMRI) until now. fMRI has become beloved by the popular press because it allows insight into the brains of normal human subjects. It creates beautiful pictures. It can be run on normal people performing complex, human tasks, such as language and cognition. But what’s really important to understand about fMRI (and its cousins, such as positron emission tomography [PET]) is that it measures blood flow, not neural activity.21
fMRI measures the magnetic spin of hemoglobin molecules in the blood. Mammalian blood carries oxygen to the tissues by binding it to the hemoglobin molecule. Hemoglobin with bound oxygen is redder than unbound hemoglobin; it also has a different resonance to magnetic changes. An fMRI machine is a very big, very strong set of magnets. By changing the magnetic field in specific ways, it can detect the level of oxygen in the blood.
Modern high-field magnet fMRI machines can measure the oxygenation level in approximately a one-millimeter cube of tissue. While a 1 mm × 1 mm × 1 mm voxel (volume-pixel) sounds small, individual neurons are typically 50 micrometers in diameter (20 times smaller in each dimension, or 8000 times smaller by volume). For computational reasons, fMRI experiments often use much larger voxels, even as large as 3 or 4 mm on a side. (PET requires an even larger voxel size.) That means that each voxel could contain a hundred thousand neurons. Even allowing that neurons are only a small portion of the brain tissue (most of which contains support structures, like capillaries, glial support cells, and the axons and dendrites of the neurons), that voxel still contains tens of thousands of neurons.
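Working through the arithmetic above, using only the numbers given in the text:

```python
# Voxel-versus-neuron arithmetic from the paragraph above.
voxel_mm = 1.0                                   # edge length of a small fMRI voxel (mm)
neuron_um = 50.0                                 # rough neuron diameter (micrometers)

linear_ratio = (voxel_mm * 1000.0) / neuron_um   # 20: neurons are 20x smaller per dimension
volume_ratio = linear_ratio ** 3                 # 8000: and 8000x smaller by volume
print(linear_ratio, volume_ratio)                # 20.0 8000.0

# A 3-mm voxel is 3**3 = 27 times larger still, which is how a single voxel
# comes to span the tens of thousands of neurons mentioned above.
```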
So what does blood oxygenation have to do with neural activity? At this point, we know they are related, but not exactly how. This is still an area of active investigation in neuroscience.22 We know that when an area is “neurally active” (for example, your primary visual cortex is more active when you are looking at a flashing visual scene, like a movie, than when you are closing your eyes and listening to an auditory scene, say listening to music), the cells are using glucose, drawing extra energy from the body.23;D The overworked neurons send a signal to the capillaries to allow more blood to flow. The neurons don’t actually need more oxygen, so although more blood is flowing, the same amount of oxygen is pulled out, and the blood flowing past becomes less deoxygenated, which is what fMRI detects. These changes in blood flow are called the hemodynamic response, which occurs over the course of about 4 to 5 seconds. (Remember that a single neural spike is 1 millisecond—that is, 1/1000th of a second!) Typical human reaction time to a surprising stimulus is only a few hundred milliseconds.E fMRI researchers have developed ways to infer “neural activity” from the hemodynamic response, essentially by relating the blood flow to events that occurred 4 to 5 seconds before the signal was detected, but even that can’t reveal what single neurons are doing or the specific timing of their spikes.
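Here is a minimal sketch of that inference step, assuming a standard gamma-shaped hemodynamic response that peaks roughly 5 seconds after a neural event; the event times and the exact response shape are illustrative assumptions.

```python
# A minimal sketch: assume a slow hemodynamic response shape, convolve it with
# hypothesized neural event times, and compare the prediction with the
# measured fMRI signal.
import numpy as np
from scipy.stats import gamma

dt = 0.1                                             # time step in seconds
t = np.arange(0, 30, dt)
hrf = gamma.pdf(t, a=6) - gamma.pdf(t, a=16) / 6.0   # peaks ~5 s after the event
hrf /= hrf.max()

events = np.zeros(600)                               # 60 s of hypothetical "neural" events
events[[50, 200, 210, 400]] = 1.0                    # event onsets (illustrative)

predicted_bold = np.convolve(events, hrf)[: events.size]
# A researcher would then regress the measured voxel signal against
# predicted_bold (plus nuisance terms) to infer when that voxel was "active".
```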
It’s not clear at this point what is causing the neurons to spend all that energy.25 This means that fMRI can tell you which parts of the brain are working hard, but it can’t tell you how the neurons in that part of the brain are processing information.
All of the technologies described above are correlational. They ask to what extent two signals change together: a neural signal (firing of neurons, LFPs, EEG, blood flow in the brain) and a behavioral variable (the location of the animal, a sensory signal, a motor action). Logically, correlation is not causation. Just because two things co-occur does not mean that one has caused the other. Correlation is necessary (though not sufficient) for causation, and, in practice, it’s a pretty good indicator of causation. Nevertheless, scientists in general, and medical scientists especially, are deeply concerned with causation because they do not want to make the mistake of thinking that two things that co-occur are causing one another. They have thus developed several technologies that allow control of the system.
The oldest method of manipulating the system was to damage it physically, to lesion part of it. If that part is critical for accomplishing some task, then the animal (human or otherwise) will be impaired at that task. Obviously, one cannot go in and lesion human tissues for experimental purposes. However, the brain is a particularly sensitive biological organ, requiring constant blood flow. Cutting off the blood (or even just the oxygen) supply for even a few minutes can lead to neuronal death, leaving the person with an unfortunate brain lesion.26 This is what happens during a stroke, for example.F
Because of the way that the blood vessels in the brain branch, a common stroke occurs above the right parietal cortex (just above and behind your right ear).28 When this part of the brain is damaged, people have trouble recognizing the left side of visual objects.G This inability to recognize objects on one side of the visual field is called hemispatial neglect because it entails the inability to process (it neglects) half of (hemi) the spatial visual field. Patients with hemispatial neglect will draw only the right side of a clock, and will see the clock at all only if it is on the right side of their visual field.
Hemispatial neglect even occurs in imagined spaces. In a classic experiment, Edoardo Bisiach and Claudio Luzzatti asked hemispatial neglect patients to imagine the Piazza del Duomo in Milan (the patients’ native city).30 They asked the patients to imagine standing on the south end of the Piazza and to name the buildings in the square. The patients each listed only the buildings on the east side (their right). When they were asked to imagine standing on the north end and to name the buildings, they named only those on the west side (their right). When pressed, they said that was all they could imagine, even though they had named the other buildings only a few minutes earlier! Visual imagination uses the same perceptual systems that we use in seeing.31
Other common strokes notoriously affect certain motor and language areas, leading to an inability to produce or to understand language. Because different strokes affect the ability to comprehend or to produce language, even as early as the 1860s and 1870s, Paul Broca (1824–1880) and Carl Wernicke (1848–1905) were able to separately identify the locations in the human brain that control speech production (now called Broca’s area) and speech comprehension (now called Wernicke’s area).32
Brain lesions can also arise from medical procedures, some intentional, others not. For example, epilepsy that cannot be controlled by pharmacological means (with drugs) can still sometimes be controlled by taking out the focus or starting site.33 Epilepsy is like an avalanche or a wildfire: it has a starting point, from which it spreads. If that starting point is consistent from seizure to seizure within a patient and can be found, it can be removed. Of course, removing a part of the brain damages its ability to process information, but sometimes a balance has to be found between living with epilepsy and living with a damaged brain. Hippocampal resection (removal) in epileptic patients (such as H.M.) was critical to the discovery that there are at least two separate decision-making systems (Deliberative, dependent on hippocampal function, and Procedural, independent of hippocampal function).34
The problem with inferring function from lesion studies is that solving tasks requires lots of components. Let’s go back to the thermostat analogy in Chapter 2—there are lots of lesions that could be made to the thermostat that would make your house cold. The furnace could be broken. If it’s a gas furnace, the pilot light could be out or the gas turned off. The temperature sensor could be broken. All of these would cause the furnace not to be on and the house to be cold, but we wouldn’t want to conclude that the temperature sensor and the pilot light were doing the same thing.
The other problem with lesions is that of compensation. As biological organisms, we have backup systems. (During most of evolution, there was no repair shop to take ourselves to if something broke down, so animals that evolved backup systems were more likely to survive and procreate than those without.) If an animal has had enough time to recover from the lesion, it may have found new ways to do something and may be able to solve the same tasks in different ways from how an intact animal would solve them.
To prevent compensation, and since one cannot ethically control the lesions one sees in humans (the only human lesions one is going to see are caused by natural phenomena), scientists have developed several techniques to inactivate parts of the brain reliably and temporarily. The most common technique used in animals is pharmacological, in which a very small amount of a chemical that temporarily affects the neurons is infused into an area of the brain. For example, lidocaine blocks a certain kind of sodium ion channel, which prevents neurons from firing. When the dentist injects lidocaine under the gum, it prevents the nerves there from communicating with the brain, leaving your mouth numb to any pain that would be caused by the subsequent procedure.
In humans, one does not want to inject chemicals into the brains of experimental subjects, but a technology called transcranial magnetic stimulation (TMS) has recently been developed.35 TMS creates a quickly changing magnetic field that can stimulate, and thereby disrupt, neural firing in a targeted area of cortex. The effect is disruptive but transient, wearing off after a few minutes.H
Because the neuronal processes (such as the synaptic channels that listen to specific neurotransmitters) depend on the genetic structure of the cell on an ongoing basis, changing that genetic structure changes how the cell processes information. New technologies are now available that allow direct manipulation of that genetic structure.37;I If the DNA is changed (say to include the information to build a new, different kind of ion channel), then the cell will follow that DNA.
We now know that DNA is more like a programming language, with “if–then” statements and other sorts of local control (as in, “if protein X is present, then also generate protein Y”). This means that modern genetic manipulations can express new proteins (new ion channels, new processes) so that they appear only in a limited subset of cells, such as only in the CA3 neurons of the hippocampus; or so that they appear only in the presence of an extrinsic chemical, such as the antibiotic doxycycline; or so that they appear only at a certain time, such as only in cells that are active (firing a lot of spikes) during the task.39 These manipulations produce changes in the abilities of animals to solve tasks, which tells us about how the system physically works.
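To make the programming-language analogy concrete, here is a loose sketch (an analogy only, not real molecular biology) of the three kinds of conditions just described, expressed as ordinary code; the cell-type name, the drug, and the activity threshold are all illustrative.

```python
# A loose programming analogy for conditional gene expression: a transgene
# built only in one cell type, only when doxycycline is present, and only in
# recently active cells. All names and numbers are illustrative stand-ins.
from dataclasses import dataclass

ACTIVITY_THRESHOLD = 5.0   # spikes/s; an arbitrary stand-in for "active during the task"

@dataclass
class Cell:
    cell_type: str
    doxycycline_present: bool
    recent_spike_rate: float

def expresses_transgene(cell: Cell) -> bool:
    if cell.cell_type != "hippocampal_CA3":
        return False        # a promoter restricts expression to one cell type
    if not cell.doxycycline_present:
        return False        # expression gated by an extrinsic chemical
    if cell.recent_spike_rate < ACTIVITY_THRESHOLD:
        return False        # expression gated by recent activity
    return True             # only then does the cell build the new protein

print(expresses_transgene(Cell("hippocampal_CA3", True, 12.0)))   # True
```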
In the past few years, researchers have developed a new technology in which a light-sensitive ion channel is genetically inserted into an animal’s neurons.40 These channels produce voltage changes in response to certain frequencies (colors) of light. This means that if a light is shined on cells with these channels in them, the cells can be induced to fire spikes or prevented from firing spikes. Because these ion channels can be placed in specific cells in specific brain areas, this technique gives remarkable control over the neural machinery. This burgeoning field, called optogenetics, has the potential to revolutionize our understanding of the actual mechanisms of brain function.
Of course, cellular systems are also electrical, and one can also pass current into the system to manipulate it. For example, a common treatment for Parkinson’s disease is deep-brain stimulation, in which an electrode is placed into a structure (commonly the subthalamic nucleus, a part of the basal ganglia). Regular stimulation from this electrode (at between 50 and 200 Hz, depending on what works for the patient) somehow resets the subthalamic nucleus, allowing it to process information correctly again.41 Although deep-brain stimulation works wonders in some patients, we don’t know why; we don’t know what the mechanism is. There are lots of theories, including that it prevents neural firing, thus shutting down a dysfunctional structure; that it resets neural oscillations that have become pathological; and that it makes it easier (or harder) for axons that pass by the structure to transmit information.42
Stimulation can also be used to manipulate information being represented within the brain.J In the mid-1990s, William Newsome and his colleagues were able to change decisions made by a monkey watching a visual stimulus by stimulating certain parts of the visual cortex. The monkey had to decide if a set of dots on a screen was moving in one direction or another. By stimulating the part of the brain that represented visual motion, they could change the monkey’s decision, presumably by changing its perception.43
Actually, stimulation to write information into the brain has been used in humans for over 40 years. Cochlear implants work by laying electrical contacts along the inner ear.44 An external microphone translates the sound into the appropriate electrical signal that the ear cells expect. The electrical contacts stimulate the cells, allowing the brain to hear sounds.
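Here is a minimal sketch of the general idea, assuming a simple eight-channel filter bank: sound is split into frequency bands, and each band’s energy sets the stimulation level of the electrode at the matching place along the cochlea. Real implant processing strategies are considerably more sophisticated; the sampling rate, band edges, and channel count here are assumptions.

```python
# A minimal sketch of mapping sound onto per-electrode stimulation levels.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000.0
band_edges = np.logspace(np.log10(200), np.log10(7000), 9)   # 8 channels, low to high

def electrode_levels(audio):
    """Per-channel stimulation levels (0-1) from a short audio frame."""
    levels = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
        envelope = np.abs(sosfilt(sos, audio)).mean()         # crude band envelope
        levels.append(envelope)
    levels = np.array(levels)
    return levels / (levels.max() + 1e-12)                    # normalize to a 0-1 range

# e.g., a 440-Hz tone should drive mainly the low-frequency electrodes.
t = np.arange(0, 0.02, 1 / fs)
print(electrode_levels(np.sin(2 * np.pi * 440 * t)).round(2))
```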
Several laboratories have attempted to use retinal implants to restore vision in blind patients.45 As with cochlear implants, images captured by an external camera are translated into electrical signals that are sent to stimulating electrodes in the retina. Early clinical trials are now under way. One of the major complications of both the cochlear and retinal implants is that the stimulation has to match the neural representations that the brain expects. By understanding the principles of representation, we’ve been able to begin the process of cybernetics, actually recovering lost function.
• Andrea M. Green and John F. Kalaska (2011). Learning to move machines with the mind. Trends in Neurosciences, 34, 61–75.
• Adam Johnson, André A. Fenton, Cliff Kentros, and A. David Redish (2009). Looking for cognition in the structure in the noise. Trends in Cognitive Sciences, 13, 55–64.
• Frederick Rieke, David Warland, Rob de Ruyter van Steveninck, and William Bialek (1996). Spikes. Cambridge, MA: MIT Press.