7 Can Neuroscience Tell Us What Talleyrand Meant?
Charles Maurice de Talleyrand-Périgord was a diplomat who successively served Louis XVI, revolutionary France, then Napoléon, the restored Bourbon kings Louis XVIII and Charles X, and finally Louis-Philippe. Only the subtlest and most devious of foreign ministers could have survived, let alone thrived, over the fifty years he was at the pinnacle of European diplomacy. He was so devious that when he died, the Austrian foreign minister, Count Metternich, is famously (but falsely) reputed to have said, “I wonder what he meant by that?”
The point of the anecdote is obvious. We’re all forever trying to read other people’s beliefs and desires from their actions. This is much harder in some cases than in others. All his life, Talleyrand tried to make it impossible to figure out what he really wanted and what he thought might be the best way to attain it. Everything he did seemed to be a ploy or to reflect some stratagem no one could fathom.
The job of narrative history is to uncover the motives and designs that explain human actions. The persistent fascination with what Talleyrand did mean, and not just what he could have meant, by the many stratagems he seemed endlessly to deploy, has made him a repeated subject of the historian’s and especially the biographer’s art. The dozens of biographies written about Talleyrand include one by First Lord of the Admiralty Duff Cooper (Cooper, 2001 [1932]), an important British cabinet minister before World War Two, and another by Crane Brinton (Brinton, 1963 [1936]), the great Harvard historian who shaped a generation of scholars. Cooper and Brinton had, between them, much to figure out about their clever quarry: why a bishop would leave the Church to support the French Revolution, why Napoléon’s foreign minister would betray him to the emperor’s Austrian and Russian opponents, why Talleyrand was allowed to remain at the French court even after Napoléon discovered his treachery, how he managed to shift allegiance to the restored Bourbon king, Louis XVIII, and, finally, how he managed to subvert the Congress of Vienna, restoring defeated France to the first rank of European powers.1 This last achievement drew the attention of a young American scholar, Henry Kissinger.
Applying himself to the matter, Kissinger came to believe he could discern what Talleyrand had in mind, what his beliefs and desires were, at least the ones that explained his decisions and choices at the Congress of Vienna (Kissinger, 1956, 1957). He wrote his doctoral dissertation and many books on the subject. In his conduct of American foreign policy under Presidents Nixon and Ford, Kissinger would be guided by what he thought he’d learned about Talleyrand’s (and Metternich’s) beliefs and desires, employing the theory of mind. (We’ll consider the lessons Kissinger drew from his historical narrative of Talleyrand’s achievements at the Congress of Vienna in chapter 11.)
But how could Kissinger have been so sure he knew what Talleyrand was thinking? All he had to go on was what Talleyrand did and wrote (which included a vast preserved correspondence)—and these are the very things that beliefs and desires are supposed to explain. How can historians avoid the dangers of circular reasoning, which deludes them into thinking they understand, but which leads them to write what amounts to historical fiction, stories that do no more than scratch the itch of our curiosity?
You’d think at least one of the possible stories of what Talleyrand believed and desired must be right. Perhaps Kissinger and, indeed, every other historian until now got Talleyrand wrong. But, as we came to see in chapter 6, for any of these stories to be right, there has to be a fact of the matter about what Talleyrand thought. Without it, narrative history can’t explain the actions of anyone. Without it, narrative history is not just wrong; it’s impossible.
The point here is not that narrative history is fallible. We know that applying the theory of mind to the available evidence is never enough to be completely confident that our historical or biographical explanations are right. That’s one reason why these explanations constantly get rewritten. The available evidence doesn’t rule out alternative hypotheses about the beliefs and desires that drive the actions we want to explain. The evidence we need to do that is supposed to be in people’s minds, and we just can’t get into their minds. All that is true enough. But the problem is much deeper.
For narrative history to even stand a chance of truly explaining people’s actions, we have to be confident that there really are beliefs and desires in people’s minds and that they really do cause them to act. Chapter 6 showed that our own conscious experience of our thoughts provides us no assurance that there is a fact of the matter about what we believe and desire, and thus no basis for confidence that there is such a fact of the matter about what other people believe and desire as well. If consciousness can’t provide the assurance that our minds work the way the theory of mind tells us they do, what can?
There is one, and only one, recourse, one path out of this impasse. We know that the specific beliefs and actual desires about which narrative history can only hypothesize are to be found in people’s minds. Unless the mind is a distinct entity from the brain, surely, the beliefs and the desires that bring about behavior are somehow inscribed in the brain, in the neural circuits of the cerebral cortex.
If the mind and the brain are in fact not distinct but one and the same thing, then our confidence in the theory of mind as the right theory to explain human thoughts and actions gives us the very tools we need to solve narrative history’s problem, to figure out, at least in principle, what people think. Of course, an “in principle” solution is all we’ll get for the foreseeable future, at least until we can read people’s thoughts from their brains in real time. But because, in most cases of historical inquiry, there will be no pressing need to read the exact beliefs and desires that drive an action, a method of doing so will provide, at least in principle, the ground rules for settling historical disputes. And it will ensure that there is a fact of the matter, a right answer to the explanatory questions narrative history seeks to answer, even if we don’t yet have the means to nail that answer down exactly. The only way to vindicate narrative history’s recourse to the theory of mind is through neuroscience.
The theory of mind’s relevance to neuroscience has been evident to neuroscientists almost since the emergence of their discipline in the nineteenth century. Along with everyone else, neuroscientists have embraced the theory of mind, at least as a first approximation to explaining human thoughts and actions. The theory of mind told them that, to discover how beliefs and desires worked together to generate the body’s actions—speech, movements, activities, responses to stimuli—they first had to find where in the brain the beliefs and desires were lodged. The theory of mind established both the initial research program of neuroscientists and the basis for their hopes to vindicate the theory with the successes they anticipated in pursuing that program.
In this chapter, we’ll try to find that vindication. We won’t look to neuroscience to tell historians exactly what was in Talleyrand’s mind. That would, of course, require a time machine and mind-reading equipment, neither of which exists (nor, in the case of the time machine, is ever likely to). What narrative history needs is some assurance that when people act, there really is a desire box and a belief box somehow and somewhere in their brains that brings about their actions. Neuroscientists would vindicate narrative history’s explanations if they showed that clear signs of beliefs and desires could be detected in the brain and that beliefs and desires worked roughly the way the theory of mind and thus also narrative history said they did. That may sound easy, until we realize exactly what task neuroscientists had set for themselves. It was showing how the neurons in brain tissue could inscribe, store, and transmit thoughts—beliefs and desires—about the world, statements with content, how neural circuits could represent the ways things are arranged and the ways we want things to be arranged. The task for neuroscientists was thus to show how and why the theory of mind was true, or at least true to a first approximation.
Expecting the theory of mind to drive the research program of neuroscience exemplifies a familiar recipe for success in science. Consider the history of genetics: Gregor Mendel noticed some regularities in the heredity of obvious traits of pea plants—their height, the color of their seeds, whether the seeds were wrinkled or smooth. In 1866, he framed a theory that consisted of two “laws” to explain his observations (roughly the laws of independent assortment and of segregation). The theory hypothesized the existence of unobservable entities (the word “genes” would not be coined to describe them for another forty years or so). Accepting Mendel’s theory motivated scientists to launch a research program to identify specific genes for specific traits, to locate these genes in the body, to uncover their structure and composition, their mode of action, how changes in them affect hereditary traits, and so on. The research program driven by Mendel’s theory resulted in many Nobel Prizes for the twentieth-century scientists who answered these questions.
Neuroscientists should expect the theory of mind to play the same role in their research program: through the mechanism sketched out in the chapter 3 diagram (figure 3.2), the theory advances hypotheses about the decision-making process that brings about people’s actions. It falls to the neuroscientists to fill in the details of this process.
Here again is that diagram (figure 7.1).

“Boxology” of the theory of mind sets out marching orders for the research program of neuroscience. From Nichols et al., 1996, fig. 1.
Thus neuroscience is tasked by the theory of mind to try to uncover the neural mechanisms that constitute each of the boxes—square and hexagonal—in the figure 7.1 diagram. And then it has to elucidate the mechanisms that underlie each of the arrows in the diagram as well. Naturally, when neuroscientists begin to do so, they might have to change the configuration of the diagram considerably, adding new boxes and new arrows, dividing boxes, or even eliminating boxes altogether in favor of a flowchart quite different from the diagram they start with. In doing so, however, neuroscientists would still be following the lead of the twentieth-century molecular biologists who wrought radical changes to Mendel’s theory of heredity, even though the result was recognizably a development and enrichment of the theory he propounded in 1866.
Neuroscientists’ exploration of how the brain worked began to take serious scientific shape at the end of the nineteenth century. In addition to the initial guidance the theory of mind provided them, neuroscientists had guidance from case studies of patients with mental illnesses or conditions where the theory of mind appeared to break down, where patients behaved in ways contrary to what the theory led neuroscientists to expect. Starting with such behaviors, the neurologists among them sought the lesions, injuries, and defects in the brain that caused them. Adopting a top-down strategy,2 they watched how patients with mental illnesses or conditions responded to stimuli and then sought the brain injuries that were impairing the patients’ reasoning, that were giving rise to their mistaken beliefs and pathological desires, and that, separately or together, were producing “deviant” behaviors. Psychoanalysis of all types revealed the neurologists’ reliance on the theory of mind when they began to theorize about unconscious beliefs and desires. Sigmund Freud was clear in his insistence that such theorizing could only be vindicated by locating these unconscious beliefs and desires in the brain (Kitcher, 1992). Freud’s confidence in the relevance of neuroscience to understanding the mind was increasingly vindicated in the course of the twentieth century. Starting with the lesions uncovered mainly in autopsies of cadavers, neuroscientists moved on, first to electroencephalography (EEG), then to neuroimaging (MRI and fMRI), and later to transcranial magnetic stimulation (TMS) of distinct parts of the brain. Over the past century, they learned a great deal about the specific location of various sensory, cognitive, and affective mental phenomena and thus also about how the theory of mind appeared to be hardwired into the temporoparietal junction, the dorsolateral prefrontal cortex, the inferior frontal gyrus, and the ventromedial prefrontal cortex (as we saw in chapter 5).
But they needed to locate something like beliefs and desires in people’s brains and to figure out how the neural circuits that carried these beliefs and desires caused people’s behaviors. Two questions immediately arose. What should neuroscientists look for? And what if neuroscientists couldn’t locate something like beliefs and desires in the brain?
The answer to the second question is obvious: if they couldn’t locate something somewhere in the brain corresponding at least roughly to the boxes and arrows in the figure 7.1 diagram, the theory of mind would be in big trouble. (Just how big that trouble might be is the subject of chapter 8.) To start with, if neuroscientists failed to vindicate the theory of mind as even a first approximation to the truth about how the brain worked, they’d either have to give up the theory of mind altogether or give up the notion that the mind was, even to a first approximation, the brain. Many nonscientists might not be troubled by this second possibility. But most scientists would be deeply troubled. What’s worse, wedded as it was to the theory of mind, narrative history would have to admit that its explanations required the operation of nonphysical, nonmaterial processes that couldn’t be scientifically studied at all.
The first question, “What should neuroscientists look for?” is one explicitly addressed by the theory of mind. It tells them to find where in the brain beliefs and desires are inscribed, stored, and transmitted to guide behavior. In other words, they have to find where the neural circuits that carry the beliefs and desires are located—which networks of neurons, anatomical regions, or modules of the brain encode them. Then it tells them to explain, for example, how the neural circuit that carries the belief that Paris is the capital of France differs from the neural circuit that carries the belief that Berlin is the capital of Germany; and to explain what differences each neural circuit makes to the behavior of a person who holds one of these beliefs when the neural circuit is paired with the neural circuit that carries the desire to go to the French capital versus the neural circuit that carries the desire to go to the German capital.
For the moment, let’s just focus on beliefs, starting with a relatively simple one, the belief people have that Paris is the capital of France. Once neuroscientists figure out how the brain stored that belief, they’d begin to get a handle on how the brain stored more complicated thoughts, like the belief that a war in the Balkans would break up the Triple Alliance.
What the neuroscientists are trying to narrow down, locate, and identify are the neural circuits somewhere in one or perhaps many different parts of the brain that represent the fact that Paris is the capital of France, have the statement to that effect as their content, and are about Paris. Why? Because that’s what beliefs are—representations of how the world is or isn’t arranged; they do that by containing statements about things, in this case about Paris. If something is not a representation, then it’s not a belief. So, pursuing the research program dictated by the theory of mind, neuroscientists need to look for how the gray matter represents things.
In particular, what they are looking for in our example is a network of connected neurons that “contain” the statement that Paris is the capital of France, a network organized in such a way that it is “about” at least three distinct things: (1) “about” Paris; (2) “about” its being a capital; and (3) “about” France. This would be a network of neurons—perhaps hundreds of thousands or even millions of them—wired up distinctively and differently from another network of neurons, say, one that “contains” the statement that Berlin is the capital of Germany.
Before the neuroscientists even begin, however, we can rule out some alternative ways in which a network of neurons could represent. Could the network represent Paris the same way a postcard or a painting does? A picture postcard and a painting are about Paris and contain information about Paris presumably because they look like some part of Paris. They represent by being “representational,” that is, there is some one-to-one relationship between the lines, planes, curves, and colors on the picture postcard or painting and the features of the part of Paris they depict. Of course, Paris is three-dimensional, made of asphalt and marble, bricks and glass, and the postcard and painting aren’t. Still, the ratio between the size of images in the postcard or realistic painting and the size of the things in the actual place from the photographer’s or painter’s perspective would fall within a fairly narrow range of values. That’s enough for a sort of “scale model” representation in this case.
Could the neural circuit in a person’s brain that represented the fact that Paris is the capital of France be about Paris because its “wiring diagram” looked like Paris did in the picture postcard or painting? Obviously not. The synaptic connections in the belief neural circuit that represented the fact that Paris was the capital of France couldn’t look anything like Paris because there’d just be no way they could physically “look like” that fact. Nothing in the brain could look like a particular fact in the way representational pictures look like particular things. There are indefinitely many facts about Paris we could come to learn from looking at a photo of it. Which of these facts, once in our minds, would the photo look like? None of them? All of them? If there is a belief neural circuit that stored a fact about Paris, it couldn’t do it the way a photo stored facts, even if a neural circuit could look like Paris.
Well, why couldn’t a neural circuit record the fact that Paris is the capital of France the way that the printed letters on a page do? Suppose somewhere in the brain there was a neural circuit that was the brain’s way of “writing down” and “storing” the fact that Paris is the capital of France. Such a neural circuit would be about Paris.
Here an argument from chapters 3 and 6 becomes crucial again. Recall our discussion of what makes the red octagons of street signs into representations of the command to stop. There is nothing intrinsically “stop-ish” about the shape or color or combination of shape and color of these signs. The red octagons represent the command to stop because something inside our minds/brains interprets them that way. For a Paris neural circuit to represent the fact that Paris is the capital of France in the same way (i.e., symbolizing the fact), there would have to be something else in our brain, some additional neural circuit to interpret the Paris neural circuit as doing so. And this further interpreting device in the brain would itself have to represent both (1) the Paris neural circuit and (2) Paris itself and to interpret the first as representing the second, just as our minds/brains do when they interpret the red octagon of a street sign as representing the command to stop. The second neural circuit that interpreted the first would have to be both about Paris and about the first neural circuit. But this would double our problem instead of solving it. We would have gone from the problem of how one neural circuit could be about Paris to the problem of how a second neural circuit could be both about Paris and about the first neural circuit.
An analogy might make this problem clearer. Suppose you were trying to get to Paris, but you got lost. Then you saw a rectangle with some white marks on black (figure 7.2).

Road sign showing direction to Paris.
How would you know that the white marks in the rectangle represent the fact that Paris was to the right of where you were now? Well, your brain immediately and unconsciously would interpret the configuration of white and black on the rectangle as being about the place you wanted to go. But to interpret the white marks on black as representing Paris, you’d need to have stored information in your brain that already represented Paris, and then to compare that stored information with the sign’s white marks on black. The sign’s white marks on black would mean the direction to Paris only because your brain recognized the marks as being about Paris. You’d have a thought about the sign, that it was about Paris. Knowing that the sign gave the direction to Paris would require that there be two representations in your brain—one of the sign and another of the city that the sign pointed you toward. There would be two instances of aboutness in your thoughts: one about the sign, and another about where it was pointing.
Now suppose the neural circuit that represented your belief that Paris was to the right did so in the same way the octagonal road sign represents the command to stop. Or suppose there was a microscopic version of the road sign in a neural circuit somewhere in your brain. For it to represent the direction to Paris, there would have to be another representational neural circuit elsewhere in your brain to interpret the microscopic road sign. A microscopic road sign in a neural circuit of the brain, no matter what “letters” it was written in, would require that there be another neural circuit to read it, one representing both the rectangular road sign in the first neural circuit and what the sign itself represented—Paris. This would begin a regress, requiring a third neural circuit to explain how the second circuit represented what it read off the microscopic sign in the first circuit, and so on ad infinitum.
But sentences aren’t the only things that convey information. Maps represent the way things are arranged, too. Could a neural circuit in your brain represent the fact that Paris is the capital of France the way that a map does?
Consider the big five-pointed star in the circle next to the word “PARIS” in capital letters on the map of France in figure 7.3; it indicates that Paris is the capital of France. But how exactly does it do that? Well, most maps come with a “key” or “legend”—a list of symbols usually on the bottom left side of the map. If this were a typical map of Europe, in its legend or key, the words “national capital” would be written in capital letters of the same font as “PARIS” and next to it a five-pointed star in a circle. And you’d know the conventions of map displays well enough to use that information to interpret this particular map.

Map of France represents that Paris is the national capital.
So, if your original neural network represented the fact that Paris is the capital of France the way a map does, there would have to be something somewhere else in your brain like a “key” or “legend”—a list of symbols together with some writing that guided the interpretation of the symbols.
Do the neurons of your brain work like that? Well, they could, but only if there were some other neural network in your brain that played the role of the key. And there would have to be still another neural network that could interpret the key or legend. But one part of your brain having to interpret another part is just the problem we were trying to avoid. We’re back to the original problem we thought the map metaphor might solve: how do neural circuits in the brain represent—in this case, how do they interpret the neurons that make up the key as symbols meaning, for example, “National capitals are indicated by a five-pointed star in a circle”?
We’ve excluded three amateurish hypotheses about how neural circuits might carry information. We went over their mistakes carefully to be on our guard, as neuroscientists have to be on theirs, against making the same mistakes in considering other, more sophisticated, better-informed hypotheses about how beliefs might be composed of neural circuits.
One obvious thing about beliefs that helps neuroscientists in their research program is that they are held in memory. Most of our beliefs are not present before our minds in consciousness. Thus, for example, we believe that Paris is the capital of France even when we’re not rehearsing that thought in our consciousness. Knowing that, neuroscientists had a good place to look for beliefs: the part of the brain that stores memories, in particular, whatever part stored what cognitive neuroscientists call “declarative” or “explicit” memories—those we describe in sentences we believe to be true.
Here some lessons from clinical medicine became important. A male patient most often referred to only as “H.M.” (perhaps the most famous patient in neuroscience) suffered from severe epilepsy. To control it, he was subjected in the 1950s to surgical removal of his hippocampus and several other nearby structures of the temporal lobe, including the entorhinal cortices. (Much of H.M.’s history and the findings that work with him generated are recorded in Dittrich, 2016.)
As a result of the removal of these parts of his brain (figure 7.4), H.M. immediately lost the ability to remember almost all new information, though he retained other abilities, among them the ability to learn new motor skills. This enabled neuroscientists to identify the crucial role of the hippocampus in memory formation. Thus began continuing research into understanding how regions of the hippocampus create, store, and deploy “explicit memories,” ones we usually describe as “beliefs expressed in statements” such as the belief that Paris is the capital of France. The most obvious question facing the researchers was, Exactly how are explicit memories encoded and first stored in the hippocampus and surrounding temporal lobe (figure 7.5)?

H.M.’s brain compared with a normal brain, showing area excised, including the hippocampus and entorhinal cortex. From https://

The hippocampus, where information is first recorded and stored. From https://
Given what was known about the human brain and its composition in the 1950s, when the surgery on H.M. was performed, that question was also the least answerable one facing neuroscientists. So vast, complicated, and inaccessible to experimenters were the neural networks of the brain that no one really had the slightest idea how to answer this most obvious question. But now, sixty years and three Nobel Prizes later, we’re beginning to, and in detail.
The human brain comprises 86 billion neurons, each linked to 10,000 or so other neurons. Almost all of these neurons seem mainly to do one special thing, and do it continually: firing in sync, over and over again, in a vast number of input/output circuits, they move discrete electrical charges, “action potentials,” to other neurons. They do this largely by moving a few different kinds of neurotransmitters—small charged molecules that can diffuse rapidly across the gaps (called “synapses”) between neurons. The only neurons that don’t work in almost exactly this way are the ones that respond directly to sensory inputs and the ones connected to muscle fibers, although even these connect to other neurons in the same way that most neurons do. When sufficiently many electrical signals are sent from one neuron to another over a short enough period, the neurons build new synapses that reach out to others (Kandel, 2000). The mechanism by which this happens is well understood: the increase in neurotransmitter movement sets off a chain of events inside the neuron that reaches back to certain genes in its nucleus and switches them on. These genes start to produce new proteins that build new synaptic connections, which make electrical signal transmission easier. This vast number of input/output circuits—including larger input/output circuits composed of wired-up sets of smaller ones—are all pretty much the same in their molecular neurobiology, and they carry all the information the brain stores. If that’s all the neural circuits ever do—fire in sync, over and over again—then how they encode, store, and express beliefs stored as explicit memories must lie in the details of their firing patterns.
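The “fire enough, build new synapses” mechanism just described can be caricatured in a few lines of code. This is a minimal sketch, not a neural model: the firing threshold and the strength increment are invented illustrative numbers, chosen only to show the qualitative pattern of sustained signaling switching on synapse-building.

```python
# Toy sketch of activity-dependent synaptic strengthening.
# All parameters are illustrative assumptions, not measured values.

def simulate_synapse(n_paired_firings, threshold=5, boost=0.2):
    """Return a synapse's transmission strength after repeated coincident firing.

    Below `threshold` firings, strength stays at baseline; once sufficiently
    many signals arrive in a short period, each further coincident firing
    stands in for gene activation building a bit more synaptic connection.
    """
    strength = 1.0  # baseline transmission efficiency
    for firing in range(n_paired_firings):
        if firing >= threshold:   # enough repeated signaling has occurred:
            strength += boost     # genes switched on, new synapse built
    return strength

print(simulate_synapse(3))    # too few firings: strength stays at baseline
print(simulate_synapse(20))   # sustained firing: a potentiated synapse
```

The point of the sketch is only that nothing in the updating rule is "about" anything; the circuit just gets easier to drive the more it is driven.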
We are going to dive into the details. Like all science, the details will be difficult to absorb and to keep in mind. That’s because science isn’t stories. But the details turn out to be of the greatest importance for the prospect of grounding the narrative explanations of history and biography. In fact, they will unravel any confidence we might have had that history is more than just a collection of engrossing but utterly fictional stories. That is why we need to go into the details. It’s only by seeing how the brain really works, and what it means for the theory of mind, that we can come to grips with the problems that confront history’s claim to explanatory knowledge.
Learning what H.M.’s symptoms revealed, the neuroscientist Eric Kandel set out to answer the question of how neurons store memories. His work was rewarded with the Nobel Prize in 2000. This work was extended, deepened, and developed by the discoveries, first, of John O’Keefe and, then, of May-Britt Moser and Edvard Moser, for which all three won the 2014 Nobel Prize. Between them, Kandel, O’Keefe, and the Mosers revealed exactly how the brain encodes the information that the theory of mind says is contained in our beliefs. What Kandel discovered was very troubling for the theory of mind, as we’ll see. And then matters were made worse for the theory by what O’Keefe and the Mosers revealed.
Kandel knew from the beginning there was no point in trying to figure out how neural circuits in the human brain stored beliefs in the form of memories. So he and his lab decided to start small, on a simple model system: the brain of the sea slug Aplysia californica (figure 7.6), which has a small number of very large neurons. Since the sea slug’s brain doesn’t store explicit beliefs in its memory, it wasn’t at all clear that this research would have anything to say about how the human brain stored them. But it did, as Kandel’s Nobel Prize shows.

Aplysia californica, the sea slug, whose brain served as Kandel’s model system. From https://
Even though sea slugs don’t acquire new beliefs that their brains can store as explicit memories, they can learn new behaviors by classical conditioning and “remember” them, at least for a while. Recall how Ivan Pavlov conditioned dogs to salivate at the sound of a bell when it had previously been rung at mealtimes. The sea slug can be conditioned too: touch its front and it won’t do anything much. Give it an electrical shock and it will shrink back. Touch its front while giving it an electrical shock enough times and eventually it will shrink back when you touch its front without the shock. Kandel found that, through conditioning, it could learn a response and remember it, at least for a while. He called what the sea slug learned an “implicit memory” because the sea slug had acquired a new ability, a new disposition or capacity to respond to stimuli. Because its brain had a small number of large neurons, Kandel was able to identify exactly which neurons were involved in storing the implicit memory and exactly how they did it—which anatomical changes, driven by which somatic genes producing which particular proteins, resulted in new neural circuits that encoded the newly learned behavior (Kandel, 2000).
Kandel also found that implicit memories in the sea slug came in two versions: short- and long-term, depending on the number of training trials to which its neural circuits were exposed. A little conditioning—a few front touches plus shock associations—produced a short-term implicit memory, one that wore off after a short time. More conditioning produced an implicit memory that lasted longer. Kandel was able to exploit advances in neurogenomics—the use of gene-knockout and gene-silencing techniques in the study of neurons—to show that the difference between short- and long-term implicit memory was the result of switching on the somatic genes in the neurons that build new synapses.
The difference between short- and long-term implicit memories in the sea slug was reflected in a fairly obvious anatomical difference: a short-term implicit memory appeared to be a matter of establishing temporary bonding relationships between molecules in the synapses, bonds that degraded quickly, whereas a long-term implicit memory appeared to be a matter of building more new synapses between the neurons. The former produced “short-term potentiation” or STP; the latter, “long-term potentiation” or LTP. (“Potentiation” is neurospeak for an increase in signal transmission at the synapses.)
Short-term implicit learning occurred when a few touches plus shocks provoked a chain of molecular events that briefly modified the number and shape of neurotransmitter molecules in existing synapses that already linked neurons. The modification of the neural pathway to respond to touch stimulus alone lasted only a short time, as the neurotransmitter molecules diffused and degraded.
Learning to respond that way for a long time—long-term implicit memory through the process of long-term potentiation or LTP—occurs when touches and shocks are paired many more times. The repeated pairings at first produce short-term implicit memory, but keeping this stimulation up produces more neurotransmitter molecules, which then diffuse back from the synapses to the neurons’ nuclei, where they switch on somatic genes to produce the building blocks of new synapses. These new synaptic connections work in the same way that the smaller number of synaptic connections laid down for short-term implicit memory do, but their larger number means that the learned response will continue even if a significant number of the synaptic connections degrade, as they do over time.
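The logic of the two mechanisms can be sketched as a toy model: a fast-decaying short-term component from a few pairings, plus a durable long-term component that appears only after many pairings (standing in for the growth of new synapses). All the numbers and the threshold here are illustrative assumptions, not measured biology from Kandel’s experiments.

```python
# Toy model of Aplysia conditioning. A few touch+shock pairings produce only
# a short-term boost that decays quickly (STP); many pairings cross a
# threshold that adds a persistent component (LTP, standing in for new
# synapse growth). Every constant below is made up for illustration.

def train(pairings, ltp_threshold=10):
    """Return (stp, ltp) synaptic-strength components right after training."""
    stp = min(pairings * 0.1, 1.0)                   # saturating short-term boost
    ltp = 0.5 if pairings >= ltp_threshold else 0.0  # "new synapses" built
    return stp, ltp

def strength_after(pairings, hours):
    """Synaptic strength some hours after training ends."""
    stp, ltp = train(pairings)
    stp *= 0.5 ** hours      # short-term component halves every hour
    return stp + ltp         # long-term component persists

# A little conditioning: the learned response fades within a day.
weak = strength_after(pairings=3, hours=24)
# A lot of conditioning: the response persists after the STP has decayed.
strong = strength_after(pairings=20, hours=24)
```

The point the sketch makes is the one in the text: the short- and long-term memories differ in mechanism (a decaying modification versus a durable structural change), and only repetition beyond some level recruits the second mechanism.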
Once Kandel and his team had revealed the mechanism of implicit memory in the sea slug, they showed it was pretty much the same mechanism—same neurotransmitters, same somatic genes, same synapses or synaptic connections—that produced implicit memory in other species, such as the roundworm Caenorhabditis elegans and the fruit fly Drosophila melanogaster.
What they had discovered in the sea slug was nothing less than Pavlov’s classical conditioning mechanism. But what does the classical conditioning of sea slug neurons have to do with beliefs, like “Paris is the capital of France,” being stored by human neurons? Everything, it turns out. When we acquire new beliefs and store them in memory, the neurons in our brains do exactly the same thing that the neurons in the sea slug’s brain do when it acquires and stores new behaviors—only with a lot more LTP and lots more neurons growing new synapses.
Our explicit memories are composed and stored in the hippocampus (figure 7.4, right), then moved to (neuroscientists would say “consolidated in”) the gray matter of the neocortex that immediately surrounds the hippocampus. Now, these two brain structures are completely absent in the sea slug, roundworm, and fruit fly. Nevertheless Kandel and his colleagues were able to show that the same molecular mechanisms and the same somatic genes that build new synaptic connections responsible for acquiring and storing implicit long-term memories in the sea slug, roundworm, and fruit fly are also responsible for acquiring and storing explicit long-term memories in vertebrates, mammals, primates—and us (Kitamura et al., 2015). It turns out that our acquisition and storage of explicit memories—beliefs—are just long-term potentiation or LTP.
But, wait, just because the lowest-level neural processes appear to be the same across the phylogenetic spectrum from roundworms, fruit flies, and sea slugs to humans is no reason to say that there’s no difference between, say, how the brains of sea slugs acquire and store long-term memories and how ours do. Surely, the vastly greater number of neurons in our brains than in sea slugs’ makes a significant difference in itself? Surely, there are other significant differences between our and sea slugs’ brains: how neurons are wired together, for example, and whether higher-level structures, anatomically distinct parts, specialized regions, dedicated areas, and modules are present or absent in the brains. Surely, all the beliefs acquired and stored as long-term memories can’t just be chalked up to LTP in both sea slug and human brains? This is an objection that readers are likely to raise repeatedly in what follows. We’ll only be able to address it after reviewing other findings by neuroscientists. So, bear with me for the moment (we’ll come back to this argument at the end of chapter 9).
In Kandel’s original experiments, rats were motivated by fear of drowning to acquire an explicit belief: that a hidden platform was at a certain location in the deep-water pool in which they were placed (figure 7.7).

Morris water maze, used by Kandel to uncover explicit memory mechanisms in the rat. From https://
With careful study of the experimental rats’ hippocampi (the rat brain has both neocortex and hippocampus) before, during, and after the rats learned the location of the hidden platform and acquired the belief about its location, Kandel and his team showed that long-term memory of the platform’s location relies on the same process (long-term potentiation or LTP) in the rat, right down to the neurotransmitters and the somatic genes, that it does in the sea slug. They were then able to show that LTP of explicit memories works the same way in humans—same new synapses, grown by the same process of somatic gene regulation in the same part of the brain, the hippocampus (Bailey, Bartsch, and Kandel, 1996).3
Other researchers would find that, once acquired by LTP in the hippocampus, explicit memories are moved to (consolidated in) information storage circuits in the neocortex—the visual, auditory, and parietal cortices—by the same molecular biology of LTP. They were able to show that it works by the same molecular modifications of neural circuits in the human brain that Kandel’s team discovered in the sea slug brain when it acquired long-term implicit memories—long-term dispositions to respond to stimuli (Kitamura et al., 2015; we’ll go more deeply into the relevant details, and why they’re relevant, later in the chapter).
The details of the neural connections involved in long-term storage of implicit memories (abilities and dispositions to behave) differ only by number from the details of the neural connections involved in long-term storage of explicit memories (beliefs). The difference in number is great, however: in the sea slug, the number of neurons that have to be wired up to store a bit of conditioned behavior might be a few hundred (after all, its brain has only 18,000 neurons in all). In the human brain, the simplest stored belief is going to involve hundreds of thousands of neurons. But whether a few hundred or even a million neurons, what’s going on in the brain of a human is the same thing that’s going on in the brain of a sea slug; there’s just much, much more of it.
Actually, the discovery that the sea slug’s neurons and ours do exactly the same thing should come as no surprise. It vindicates an old maxim that goes back to the eighteenth century: “Natura non facit saltum,” literally, “Nature does not make a jump.” Substantively, differences in the biological domain are matters of degree, not of kind. What Kandel and his colleagues discovered is that abilities stored as explicit memories, in the rat brain at least, are just a much greater number of the same sort of abilities stored as implicit memories in the sea slug brain. They showed that neural circuits didn’t store information by representing it, being about it, having it as their content. They showed that storing explicit memories was a matter of neurons being connected to one another to produce certain kinds of results, events inside the brain, in the neural networks, and eventually in behavior that other animals, like humans, could detect. Convincingly demonstrating that what Kandel and his colleagues had reported was in fact true took considerable work, work that earned three other neuroscientists, John O’Keefe and the team of May-Britt and Edvard Moser, a Nobel Prize in 2014.
By themselves, of course, Kandel’s findings would probably not convince you of much. The similarities his team found at the level of individual neurons may not be the whole story, or even much of it. When a difference in degree is as great as the difference between LTP in a brain of 18,000 neurons and LTP in a brain of 87 billion, the result might well be a difference in kind. Experimenting on rats and sea slugs won’t tell us, nor did it tell neuroscience researchers, how the human brain stores beliefs that can be expressed in sentences like “Paris is the capital of France.”
But, not being able to directly experiment on humans, what could neuroscientists who were curious about these matters do? What they needed was, first, animals with brains sufficiently similar to ours that could be experimented on in large numbers and without raising ethical qualms. Sufficient similarity would have to be a matter of having brains with the same parts arranged in the same way, but with a lot fewer neurons, of course, since no nonhuman animals have brains even close to as big as ours except apes, and ethical concerns rule out experimenting on them almost as strongly as they rule out experimenting on humans. The obvious candidate animals were still the ones Kandel employed, rats. Whether the rat brain was sufficiently similar to ours to be a good “model system” couldn’t be decided in advance. It would be decided by whether results of experiments on the rat’s brain enabled researchers to make precise and reliable predictions about human brains and how they worked. So far, the overwhelming evidence amassed by neuroscientists is that the similarities are great enough at every level of organization to make the rat brain a good, though not perfect, “model system” for the human brain (figure 7.8, plate 2) in the areas of cell physiology and, to a large extent, gross anatomy and physiology, as well as in the areas of clinical medicine, pharmacology, psychopharmacology, and clinical psychology. The great thing about experimental neuroscience is that hypotheses researchers frame about the human brain on the basis of rat studies can be tested, at least in principle.

Relevant parts of the rat and the human brain—the hippocampus and the entorhinal cortex. From https://
There is, however, an obvious problem in using the rat brain to figure out how the human brain acquires and stores beliefs. It’s much harder to figure out how the rat brain acquires and stores rat beliefs than how the human brain acquires and stores human beliefs, since we don’t really know exactly what rats believe to begin with. We can tell one another what we believe in spoken language. Rats can’t, at least not in any language we can understand. So, the first problem neuroscientists faced was to find a set of sentences in English or any other spoken language that described some statements that rats, by their behaviors, clearly appeared to believe and then to search for where in the rat brain these beliefs were acquired and stored, and for how they were acquired, stored, and used. But in order to have much bearing on how our brains acquire, store, and use beliefs, the rat beliefs would have to be about statements of the same kind we could and do believe ourselves. Researchers would have to find beliefs that rats clearly appeared to have that are sufficiently like our belief that Paris is the capital of France, for example, so that they could with confidence infer from how the rat brain carries and uses its beliefs to how our brains carry and use the belief about Paris.
Are there sentences in English (or Norwegian, as we’ll see) that express statements that experimenters could confidently attribute as believed by both rats and humans? If there are, experimenters could try to figure out how the rat brain stores and uses these beliefs and then ask whether the human brain does it the same way. What experimenters need to do is find unambiguous rat beliefs that they could reliably read from rat behaviors and then look for where and how these beliefs are stored in the rat brain. Reading rat beliefs from rat behaviors is really not so different from how we read other people’s beliefs from their behaviors, mostly from what they say or write. But saying and writing are still behaviors, and inferring from them exactly what people believe is often pretty dicey, even when they’re sincere and speak our language. Sometimes it’s better to ignore what people say and instead watch what they do. In some respects, then, what rats do—their nonverbal behaviors—may be as good a guide to what rats believe as what we say or write is to what we believe—or even a better guide.
Here are some good candidates for unambiguous beliefs we and rats share that are sufficiently like the belief that Paris is the capital of France: beliefs about our or their current location, about the path we or they took to get there, about the direction from which we or they traveled and how fast or slow, the obstacles we or they had to circumvent, about where food, water, warmth, electrical shocks are, about what choices we or they face in the near future when we or they search for food, water, warmth, and so on. These are shared beliefs that experimenters can pretty safely read from rats’ behaviors, especially after they have trained up the rats in their labs.
Figuring out how the rat brain encodes, stores, and uses beliefs from rat behaviors won an Irish American New Yorker transplanted to Britain, John O’Keefe, and two Norwegians, May-Britt and Edvard Moser, a Nobel Prize in 2014, fifteen years after Eric Kandel won his. What O’Keefe and the Mosers discovered was how the rat brain and the human brain store beliefs. And, as we’ll see, what they also showed was that nothing in the rat’s brain or in ours works anything like the way the theory of mind says beliefs and desires work—as representations with content expressed in statements about things.
O’Keefe and his coworkers were focused on the hippocampus because, like Kandel, they knew from amnesia studies in humans (especially in patient H.M.) that it forms and initially stores beliefs we often express as explicit memories. What they discovered were the first indications of exactly how the neural circuits in the hippocampus do this. Actually, it was by accident that O’Keefe uncovered what he called “place cells” in the hippocampus. These cells are the exact location in the brain where information about the current position and trajectory of the body is encoded (O’Keefe, 2014). Twenty years later, the Mosers discovered cells nearby that store information about the geography of local environments, their boundaries, and the body’s orientation and speed. They called these cells “grid cells” for reasons that will become apparent. The Mosers also figured out how the information that grid cells store is fed to the place cells (Moser, E., 2014; Moser, M.-B., 2014). Within a few years after their work became known, the role of these cells in recording other nongeographical information was vastly expanded (Manns and Eichenbaum, 2009). Eventually, the techniques O’Keefe and the Mosers employed were used to identify other neural circuits dedicated to recording the full range of information about the rat’s past, present, and future environment—its beliefs about its world. Making our way through some of these discoveries, we’ll see how differently the rat brain and the human brain work from the way the theory of mind requires them to.
It will be important for the lessons to be extracted from this research that we begin by explicitly adopting the perspective of the experimenter and not the subject. We need to learn what they discovered about the hippocampus and the entorhinal cortex next to it (figure 7.9), and then ask how these two brain regions work together to store and use information.

Schematic of rat hippocampus and entorhinal cortex. From https://
Wire up individual neurons in a rat’s brain and put it in a square or rectangular or round box. Electrodes positioned over different neurons in the medial entorhinal cortex will register a neuron firing whenever the rat passes a particular spot in the box (figure 7.10). Give each neuron that fires a label—a number, a letter, whatever. Let a rat walk around long enough and you can identify where it is in the box just by seeing which neuron in its brain fires. Mark the floor of the box with the number of the neuron that fires when the rat passes that spot.

Experimental setup. As the rat moves around the cage, electrodes in its brain record when particular neurons fire; locations in the space are color coded to the neurons that fire many more times than other neurons when the rat passes over those particular locations. The experimenter can read the rat’s location off the neurons firing strongest. From http://
What you get when you run this experiment is a remarkable grid across the whole floor of the cage, not an x,y square grid, but a grid composed of hexagonal regions that divide up the space (figure 7.11).

Place cells firing in the hippocampus give the rat’s location in the experimental box. Grid cells firing in the medial entorhinal cortex give the geography of the box: each dot represents the firing of a single neuron. Each neuron fires preferentially when the rat is at or very near a specific location in the box. From http://
Do the same with a bigger box, recording from a different set of neurons in the same part of the medial entorhinal cortex: they fire in the same patterns of triangles that form larger hexagons. In fact, there are at least four sets of neurons that fire in hexagonal patterns for larger and larger boxes. All these neurons are located along the edge of the medial entorhinal cortex next to the hippocampus. As you record neurons’ firing from the top to the bottom of the medial entorhinal cortex, there is an increase in the number of neurons that fire for bigger hexagons. But ones that record smaller hexagons continue to be distributed among them. In this way, the experimenters were able to locate distinct clusters of neurons along the edge of the entorhinal cortex nearest the hippocampus that fired for different-sized boxes.
We need to be clear here. The dots that mark the firing of particular neurons when the rat crossed over particular spots in the boxes marked the locations of those spots; it was these locations which formed hexagonal patterns. The neurons that fired weren’t wired together into anything like hexagons. There was no one-to-one “mapping” from locations of spots in the boxes to locations of neurons in the medial entorhinal cortex of the rat’s brain that would enable experimenters to “read” the shape of the space between locations of spots in the boxes from the shape of the space between the neurons that fired when the rat was at these different spots in the boxes. The experimenters found that the neurons that fired were pretty much spread out over the back of the medial entorhinal cortex, and, though called “grid neurons,” these neurons weren’t in a grid themselves.
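The experimenter’s procedure can be sketched as a simple lookup built during observation. The neuron labels and coordinates below are invented for illustration; the point is that the decoder is an arbitrary dictionary the experimenter compiles, and nothing about the neurons themselves is arranged like the space they fire for.

```python
# Sketch of how an experimenter "reads" the rat's location from recorded
# firing: during exploration, pair each neuron with the spot where it fires
# most strongly; afterward, invert that pairing to decode position. The
# neuron IDs and grid coordinates are hypothetical.

# Observation phase: (neuron_id, location) pairs logged as the rat roams.
observations = [
    ("n7",  (0, 0)), ("n3", (0, 1)), ("n12", (1, 0)),
    ("n7",  (0, 0)), ("n9", (1, 1)), ("n3",  (0, 1)),
]

# The experimenter's lookup: each neuron's preferred firing location.
# Note the keys carry no geometric information at all -- "n7" being near
# "n9" in the brain tells you nothing about (0, 0) being near (1, 1).
firing_map = {}
for neuron, loc in observations:
    firing_map[neuron] = loc

def decode(neuron_id):
    """Infer the rat's current location from the neuron firing strongest."""
    return firing_map[neuron_id]

print(decode("n9"))  # the experimenter reads off (1, 1)
```

The map lives in the experimenter’s notebook, not in the rat: the dictionary is an interpretation imposed from outside, which is exactly why the firing of grid cells is not a “map” for the rat itself.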
Experimenters can draw a scaled map, an accurate representation of the rat’s play space, just by watching which neurons fire in the rat’s medial entorhinal cortex; conversely, they can predict which neurons will fire just by watching where the rat is in its play space. It would be easy to mistake such a map of the rat’s play space, made by reading the recorded firing of grid cells, for a map the rat or its brain might make for itself. Indeed, this is the very mistake O’Keefe made about his own earlier discovery of the place cells that connected to the grid cells. The mistake, though relatively harmless to his research progress, seriously obscures how the brain works. In his 2014 Nobel Lecture, he quoted from an earlier paper: “These findings suggest that the hippocampus provides the rest of the brain with a spatial reference map” (O’Keefe and Dostrovsky, 1971, p. 174, as qtd. in O’Keefe, 2014; emphasis added). O’Keefe went on to say that his discovery of place cells vindicated the theory, first proposed by behavioral psychologist Edward C. Tolman (Tolman, 1948), that “animals found their way around environments by creating internal representations, which were more complicated and interesting than the simple associations between stimuli and responses beloved of the behaviorists of the Hullian persuasion” (O’Keefe, 2014, p. 278; O’Keefe is referring to the American psychologist Clark Leonard Hull). There’s a clear tipoff that what O’Keefe discovered couldn’t be right. No neuroscientist has ever uncovered some other set of neurons anywhere in “the rest of” the rat brain that actually treated the firing of grid or place cells as a representation, a set of symbols arranged to correspond, by some interpretation, to reality. And no neuroscientist has even looked for the “key” or “legend” that decoded this “spatial reference map” for the rat because that’s not how grid or place cells work. They aren’t representations at all, at least not to the rat.
As we’ll see, the emerging indications of how these cells work don’t vindicate O’Keefe’s map metaphor. For that’s what it is, a metaphor, harmless for understanding the neurology that concerned O’Keefe, but seriously misleading for everyone else.
That the place cells don’t represent location for the rat or its brain is of the first importance. So far as the rat is concerned, the place cells are not about its location, don’t contain any “readings” about where it is, don’t mean “now at location x,y in the box.” If they did, O’Keefe, the Mosers, and Kandel would have to start looking for some other part of the rat’s brain that interpreted the place cells as being about location, containing readings, meaning some statement expressed in the hippocampus by a rat brain’s thinking. So how does what happens in the place cells and elsewhere in the rat’s brain control its behaviors?
To answer that question, we’ll have to dive even more deeply into neuroscientists’ findings about what is happening in the grid and place cells, what information the grid cells are sending to the place cells and the place cells to the neocortex, and how the rat brain uses that information to guide the rat’s behaviors. We’ll need to review some of what neuroscientists have learned about the electrochemistry of the neural circuits the Mosers discovered. That will make plain that nothing happens in the rat brain the way that the theory of mind requires.
But why should these matters be of the slightest interest to the historian or to anyone else interested in Talleyrand’s biography? Well, to begin with, the implications of the neuroscientists’ discoveries about the rat’s brain are not limited to geographical beliefs. The place cells aren’t just cells for places. So far as neuroscientists can see, these cells record a vast range of “associations”: much of what the rat learns and remembers about all aspects of its environment (Manns and Eichenbaum, 2006). Second, neuroscientists have lavished all this research on rats because they recognize that most of their findings apply to humans, too. There is no evidence against and a lot of evidence for the conclusion that our much bigger brains are doing just more of the same things rats’ brains are doing (Kitamura et al., 2015). Finally, what neuroscientists are learning is how our brains decide, how they choose our courses of action, in the light of our environmental circumstances and previous experiences (Yu and Frank, 2015). Surely, getting a handle on that matter should be of the greatest importance in vindicating the narrative historian’s task.
In the course of fifty years at the pinnacle of French politics, Talleyrand had to make myriad critical decisions that mattered to his success, indeed his survival, and the fate of a dozen European regimes for and against whom he was working. Just to take one example, in 1807, he had to decide whether to betray his emperor, Napoléon, by entering into intrigues with the Russian tsar and the Austrian court to undermine him. Why did he make those decisions? Employing the same theory of mind, for two centuries, famous biographers have wrestled with this question, without resolving it. What was his motive? Was it venality?—Talleyrand took bribes. Calculation?—he was a survivor. Patriotism?—he claimed always to serve France first. Animus?—Napoléon had shamed him before the imperial court.
Any of these motives would make a compelling story. But do any of them trace what was actually going through Talleyrand’s mind?
Notes
1. Taking at face value what Talleyrand wrote down in his letters and diplomatic aides-mémoire, Cooper and Brinton didn’t disagree about much, but on what was going on in Talleyrand’s mind, they certainly diverged, at least in some instances. In 1809, Talleyrand allowed himself to be seen in Paris with an old enemy, Napoléon’s Minister of Police Joseph Fouché. Brinton writes: “Clearly two such Machiavellian characters had not come together for mere love.… Was it to restore the Directory? The Bourbons? Was it to put Murat on the throne? It is more likely that both Talleyrand and Fouché were afraid that [Napoléon’s war in Spain] … might end in complete disaster, and that they wished to seem to have deserted the Emperor in time to act as king makers” (Brinton, 1963 [1936], p. 153; emphasis added). Based on the same sources, Cooper comes to quite a different conclusion about what was going on in Talleyrand’s mind, writing that Talleyrand “knew his words and deeds [being seen in private conversation with Fouché] would be reported and that Napoléon could put only the worst interpretation on them. The explanation can only be that it was his policy at the time to form the nucleus of an open opposition which … might thus become strong enough without overthrowing Napoléon, to exercise so powerful an influence as to compel him to alter his policy in the direction in which all moderate men desired” (Cooper, 2001 [1932], p. 184; emphasis added). Who’s right? Cooper? Brinton? Neither? There has to be a fact of the matter here, doesn’t there, even if we can’t ever establish what it was?
2. Although the study of cell physiology might, in principle, have been a productive bottom-up strategy for neuroscience’s research program—to first identify the individual nerve cells (neurons), their networks, and the electrochemistry of their activity as these related to theory of mind—in practice, it was not; hardly any neuroscientists adopted it. With good reason: there are some 87 billion neurons in the human brain, and the number involved in even the simplest mental activity is astronomical.
3. Bailey, Bartsch, and Kandel were also able to show that explicit memory in the rat works exactly the same way that implicit memory does in the sea slug Aplysia, with both short- and long-term potentiation (STP and LTP; Bailey, Bartsch, and Kandel, 1996).