long-term memory
Rule #6
Remember to repeat.
FOR MANY YEARS, TEXTBOOKS described the birth of a memory using a metaphor involving cranky dockworkers, a large bookstore, and a small loading dock. An event to be processed into memory was likened to somebody dropping off a load of books onto the dock. If a dockworker hauled the load into the vast bookstore, it became stored for a lifetime. Because the loading dock was small, only a few loads could be processed at any one time. If someone dumped a new load of books on the dock before the previous ones were removed, the cranky workers simply pushed the old ones over the side.
Nobody uses this metaphor anymore, and there are ample reasons to wish it good riddance. Short-term memory is a much more active, much less sequential, far more complex process than the metaphor suggests. We now suspect that short-term memory is actually a collection of temporary memory capacities. Each capacity specializes in processing a specific type of information. Each operates in a parallel fashion with the others. To reflect this multifaceted talent, short-term memory is now called working memory. Perhaps the best way to explain working memory is to describe it in action.
I can think of no better illustration than the professional chess world’s first real rock star: Miguel Najdorf. Rarely was a man more at ease with his greatness than Najdorf. He was a short, dapper fellow gifted with a truly enormous voice, and he had an annoying tendency to poll members of his audience on how they thought he was doing. In 1939, Najdorf traveled to a competition in Buenos Aires with the Polish national team. Two weeks later, Germany invaded his home country of Poland. Unable to return, Najdorf rode out the Holocaust tucked safely inside Argentina. He lost his parents, four brothers, and his wife to the concentration camps. In hopes that any surviving family might read about it and contact him, he once played 45 games of chess simultaneously as a publicity stunt. He won 39 of those games, drew 4, and lost 2. While that is amazing in its own right, the truly phenomenal part is that he played all 45 games, through all 11 hours, blindfolded.
You did not read that wrong. Najdorf never physically saw any of the chessboards or pieces; he played each game in his mind. From the verbal information he received with each move to his visualizations of each board, several components of working memory were operating simultaneously in Najdorf’s mind. Those components allowed him to function in his profession, just as they allow you and me to function in ours (though perhaps with a slightly different efficiency).
Working memory is now known to be a busy, temporary workspace, a desktop the brain uses to process newly acquired information. The man most associated with characterizing it is Alan Baddeley, a British scientist who looks unnervingly like the angel Clarence Oddbody in the movie It’s a Wonderful Life. Baddeley is most famous for describing working memory as a three-component model: auditory, visual, and executive.
The first component allows us to retain some auditory information; it is assigned to input that is linguistic. Baddeley called it a phonological loop. Najdorf was able to use this component because his opponents were forced to declare their moves verbally.
The second component allows us to retain some visual information; this memory register is assigned to any images and spatial input the brain encounters. Baddeley called it a visuo-spatial sketchpad. Najdorf would have used it as he visualized each game.
The third component is a controlling function called the central executive, which keeps track of all activities throughout working memory. Najdorf used this ability to separate one game from another.
In later publications, Baddeley proposed a fourth component, called the episodic buffer, assigned to any stories a person might hear. This buffer has not been investigated extensively. Regardless of the number of parallel systems ultimately discovered, researchers agree that all share two important characteristics: All have a limited capacity, and all have a limited duration. If the information is not transformed into a more durable form, it will soon disappear. As you recall, our friend Ebbinghaus was the first to demonstrate the existence of two types of memory systems, a short form and a long form. He further demonstrated that repetition could convert one into the other under certain conditions. The process of converting short-term memory traces to longer, sturdier forms is called consolidation.
consolidation
At first, a memory trace is flexible, labile, subject to amendment, and at great risk for extinction. Most of the inputs we encounter in a given day fall into this category. But some memories stick with us. Initially fragile, these memories strengthen with time and become remarkably persistent. They eventually reach a state where they appear to be infinitely retrievable and resistant to amendment. As we shall see, however, they may not be as stable as we think. Nonetheless, we call these forms long-term memories.
Like working memory, there appear to be different forms of long-term memory, most of which interact with one another. Unlike working memory, however, there is not as much agreement as to what those forms are. Most researchers believe there are semantic memory systems, which tend to remember things like your Aunt Martha’s favorite dress or your weight in high school. Most also believe there is episodic memory, in charge of remembering “episodes” of past experiences, complete with characters, plots, and time stamps (like your 25th high school reunion). One of its subsets is autobiographical memory, which features a familiar protagonist: you. We used to think that consolidation, the mechanism that guides this transformation into stability, affected only newly acquired memories. Once the memory hardened, it never returned to its initial fragile condition. We don’t think that anymore.
Consider what happened one evening while I was watching a TV documentary with my then 6-year-old son. It was about dog shows. When the camera focused on a German shepherd with a black muzzle, an event that occurred when I was about his age came flooding back to my awareness.
In 1960, our backyard neighbor owned a dog he neglected to feed every Saturday. In response to hunger cues, the dog bounded over our fence precisely at 8 a.m. every Saturday, ran toward our metal garbage cans, tipped out the contents, and began a morning repast. My dad got sick of this dog and decided one Friday night to electrify the can in such fashion that the dog would get shocked if his wet little nose so much as brushed against it. Next morning, my dad awakened our entire family early to observe his “hot dog” show. To Dad’s disappointment, the dog didn’t jump over the fence until about 8:30, and he didn’t come to eat. Instead he came to mark his territory, which he did at several points around our backyard. As the dog moved closer to the can, my dad started to smile, and when the dog lifted his leg to mark our garbage can, my dad exclaimed, “Yes!” You don’t have to know the concentration of electrolytes in mammalian urine to know that when the dog marked his territory on our garbage can, he also completed a mighty circuit. His cranial neurons ablaze, his reproductive future suddenly in serious question, the dog howled, bounding back to his owner. The dog never set foot in our backyard again; in fact, he never came within 100 yards of our house. Our neighbor’s dog was a German shepherd with a distinct black muzzle, just like the one in the television show I was now watching. I had not thought of the incident in years.
What physically happened to my dog memory when summoned back to awareness? There is increasing evidence that when previously consolidated memories are recalled from long-term storage into consciousness, they revert to their previously labile, unstable natures. Acting as if newly minted into working memory, these memories may need to become reprocessed if they are to remain in a durable form. That means the hot dog story is forced to restart the consolidation process all over again, every time it is retrieved. This process is formally termed reconsolidation. These data have a number of scientists questioning the entire notion of stability in human memory. If consolidation is not a sequential one-time event but one that occurs repeatedly every time a memory trace is reactivated, it means permanent storage exists in our brains only for those memories we choose not to recall! Oh, good grief. Does this mean that we can never be aware of something permanent in our lives? Some scientists think this is so. And if it is true, the case I am about to make for repetition in learning is ridiculously important.
retrieval
Like many radical university professors, our retrieval systems are powerful enough to alter our conceptions of the past while offering nothing substantial to replace them. Exactly how that happens is an important but missing piece of our puzzle. Still, researchers have organized the mechanisms of retrieval into two general models. One passively imagines libraries. The other aggressively imagines crime scenes.
In the library model, memories are stored in our heads the same way books are stored in a library. Retrieval begins with a command to browse through the stacks and select a specific volume. Once selected, the contents of the volume are brought into conscious awareness, and the memory is retrieved. This tame process is sometimes called reproductive retrieval.
The other model imagines our memories to be more like a large collection of crime scenes, complete with their own Sherlock Holmes. Retrieval begins by summoning the detective to a particular crime scene, a scene which invariably consists of a fragmentary memory. Upon arrival, Mr. Holmes examines the partial evidence available. Based on inference and guesswork, the detective then invents a reconstruction of what was actually stored. In this model, retrieval is not the passive examination of a fully reproduced, vividly detailed book. Rather, retrieval is an active investigative effort to recreate the facts based on fragmented data.
Which is correct? The surprising answer is both. Ancient philosophers and modern scientists agree that we have different types of retrieval systems. Which one we use may depend upon the type of information being sought, and how much time has passed since the initial memory was formed. This unusual fact requires some explanation.
mind the gap
At relatively early periods post-learning (say minutes to hours to days), retrieval systems allow us to reproduce a fairly specific and detailed account of a given memory. This might be likened to the library model. But as time goes by, we switch to a style more reminiscent of the Sherlock Holmes model. The reason is that the passage of time inexorably leads to a weakening of events and facts that were once clear and chock-full of specifics. In an attempt to fill in missing gaps, the brain is forced to rely on partial fragments, inferences, outright guesswork, and often (most disturbingly) other memories not related to the actual event. It is truly reconstructive in nature, much like a detective with a slippery imagination. This is all in the service of creating a coherent story, which, reality notwithstanding, brains like to do. So, over time, the brain’s many retrieval systems seem to undergo a gradual switch from specific and detailed reproductions to this more general and abstracted recall.
Pretend you are a freshman in high school and know a psychiatrist named Daniel Offer. Taking out a questionnaire, Dr. Dan asks you to answer some questions that are really none of his business: Was religion helpful to you growing up? Did you receive physical punishment as discipline? Did your parents encourage you to be active in sports? And so on. Now pretend it is 34 years later. Dr. Dan tracks you down, gives you the same questionnaire, and asks you to fill it out. Unbeknownst to you, he still has the answers you gave in high school, and he is out to compare your answers. How well do you do? In a word, horribly. The memories people encode as adolescents bear very little resemblance to the ones they recall as adults, as Dr. Dan, who had the patience to actually do this experiment, found out. Take the physical punishment question. Though only a third of the adults recalled any physical punishment, such as spanking, Dr. Dan found that almost 90 percent of those same people, as adolescents, had answered the question in the affirmative. These data are just one demonstration of the powerful inaccuracy of the Sherlock Holmes style of retrieval.
This idea that the brain might cheerily insert false information to make a coherent story underscores its admirable desire to create organization out of a bewildering and confusing world. The brain constantly receives new inputs and needs to store some of them in the same head already occupied by previous experiences. It makes sense of its world by trying to connect new information to previously encountered information, which means that new information routinely resculpts previously existing representations and sends the re-created whole back for new storage. What does this mean? Merely that present knowledge can bleed into past memories and become intertwined with them as if they were encountered together. Does that give you only an approximate view of reality? You bet it does. This tendency, by the way, can drive the criminal-justice system crazy.
repetition
Given this predilection for generalizing, is there any hope of creating reliable long-term memories? As the Brain Rule cheerily suggests, the answer is yes. Memory may not be fixed at the moment of learning, but repetition, doled out in specifically timed intervals, is the fixative. Given its potential relevance to business and education, it is high time we talked about it.
Here’s a test that involves the phonological loop of working memory. Gaze at the following list of characters for about 30 seconds, then cover it up before you read the next paragraph.
3 $ 8 ? A % 9
Can you recall the characters in the list without looking at them? Were you able to do this without internally rehearsing them? Don’t be alarmed if you can’t. The typical human brain can hold about seven pieces of information for less than 30 seconds! If something does not happen in that short stretch of time, the information becomes lost. If you want to extend the 30 seconds to, say, a few minutes, or even an hour or two, you will need to consistently re-expose yourself to the information. This type of repetition is sometimes called maintenance rehearsal. We now know that maintenance rehearsal is mostly good for keeping things in working memory—that is, for a short period of time. We also know there is a better way to push information into long-term memory. To describe it, I would like to relate the first time I ever saw somebody die.
Actually, I saw eight people die. Son of a career Air Force official, I was very used to seeing military airplanes in the sky. But I looked up one afternoon to see a cargo plane do something I had never seen before or since. It was falling from the sky, locked in a dead man’s spiral. It hit the ground maybe 500 feet from where I stood, and I felt both the shock wave and the heat of the explosion.
There are two things I could have done with this information. I could have kept it entirely to myself, or I could have told the world. I chose the latter. After immediately rushing home to tell my parents, I called some of my friends. We met for sodas and began talking about what had just happened. The sounds of the engine cutting out. Our surprise. Our fear. As horrible as the accident was, we talked about it so much in the next week that the subject got tiresome. One of my teachers actually forbade us from bringing it up during class time, threatening to make T-shirts saying, “You’ve done enough talking.”
Why do I still remember the details of this story? T-shirt threat notwithstanding, my eagerness to yap about the experience provided the key ingredient. The gabfest after the accident forced a consistent re-exposure to the basic facts, followed by a detailed elaboration of our impressions. The phenomenon is called elaborative rehearsal, and it’s the type of repetition shown to be most effective for the most robust retrieval. A great deal of research shows that thinking or talking about an event immediately after it has occurred enhances memory for that event, even when accounting for differences in type of memory. This tendency is of enormous importance to law-enforcement professionals. It is one of the reasons why it is so critical to have a witness recall information as soon as is humanly possible after a crime.
Ebbinghaus showed the power of repetition in exhaustive detail more than a century ago. He even created “forgetting curves,” which showed that a great deal of memory loss occurs in the first hour or two after initial exposure. He demonstrated that this loss could be lessened by deliberate repetitions. This notion of timing in the midst of re-exposure is so critical, I am going to explore it in three ways.
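Ebbinghaus’s curves are commonly formalized as exponential decay, R = e^(-t/s), where t is elapsed time and s is the strength of the memory. Here is a minimal sketch in Python under that textbook simplification; the strength values are arbitrary, chosen only to show the shape of the effect, not taken from Ebbinghaus’s data.

```python
import math

def retention(hours, strength):
    """Toy forgetting curve: R = e^(-t/s), with s the memory's strength."""
    return math.exp(-hours / strength)

# A weak single exposure versus a memory strengthened by spaced repetition.
for label, s in [("single exposure", 2.0), ("after spaced repetitions", 50.0)]:
    print(f"{label:>25}: 1 h -> {retention(1, s):.0%}, "
          f"24 h -> {retention(24, s):.0%}")
```

With the weak memory, most of the loss happens within the first hours, exactly the steep early drop Ebbinghaus measured; boosting s flattens the curve.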
space out the input
Much like concrete, memory takes an almost ridiculous amount of time to settle into its permanent form. And while it is busy hardening, human memory is maddeningly subject to amendment. This probably occurs because newly encoded information can reshape and wear away previously existing traces. Such interference is especially true when learning is supplied in consecutive, uninterrupted glops, much like what happens in most boardrooms and schoolrooms. The probability of confusion is increased when content is delivered in unstoppable, unrepeated waves, poured into students as if they were wooden forms.
But there is happy news. Such interference does not occur if the information is delivered in deliberately spaced repetition cycles. Indeed, repeated exposure to information in specifically timed intervals provides the most powerful way to fix memory into the brain. Why does this occur? When the electrical representations of information to be learned are built up slowly over many repetitions, the neural networks recruited for storage gradually remodel the overall representation and do not interfere with networks previously recruited to store similarly learned information. This idea suggests that continuous repetition cycles create experiences capable of adding to the knowledge base, rather than interfering with the resident tenants.
There is an area of the brain that always becomes active when a vivid memory is being retrieved. The area is within the left inferior prefrontal cortex. The activity of this area, captured by an fMRI (that’s “functional magnetic resonance imaging”) machine during learning, predicts whether something that was stored is being recalled in crystal-clear detail. This activity is so reliable that if scientists want to know if you are retrieving something in a robust manner, they don’t have to ask you. They can simply look in their machine and see what your left inferior prefrontal cortex is doing.
With this fact in mind, scientist Anthony Wagner designed an experiment in which two groups of students were required to memorize a list of words. The first group was shown the words via massed repetition, reminiscent of students cramming for an exam. The second group was shown the words in spaced intervals over a longer period of time, no cramming allowed. In terms of accurate retrieval, the first group fared much worse than the second; its activity in the left inferior prefrontal cortex was greatly reduced. These results led Harvard psychology professor Dan Schacter to say: “If you have only one week to study for a final, and only 10 times when you can hit the subject, it is better to space out the 10 repetitions during the week than to squeeze them all together.”
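Schacter’s advice is easy to mechanize. The sketch below simply spreads 10 study sessions evenly across the week before an exam; the dates are hypothetical placeholders, and real spaced-repetition schemes often use expanding rather than uniform intervals.

```python
from datetime import datetime

def spaced_schedule(start, exam, repetitions):
    """Spread a fixed number of study sessions evenly between start and exam."""
    step = (exam - start) / (repetitions - 1)
    return [start + i * step for i in range(repetitions)]

start = datetime(2024, 5, 1, 9, 0)  # hypothetical study week
exam = datetime(2024, 5, 8, 9, 0)
for session in spaced_schedule(start, exam, repetitions=10):
    print(session.strftime("%a %b %d, %H:%M"))
```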
Taken together, these findings make the relationship between repetition and memory clear. Deliberately re-expose yourself to the information if you want to retrieve it later. Deliberately re-expose yourself to the information more elaborately if you want the retrieval to be of higher quality. Deliberately re-expose yourself to the information more elaborately, and in fixed, spaced intervals, if you want the retrieval to be the most vivid it can be. Learning occurs best when new information is incorporated gradually into the memory store rather than when it is jammed in all at once. So why don’t we use such models in our classrooms and boardrooms? Partly it’s because educators and business people don’t regularly read the Journal of Neuroscience. And partly it’s because the people who do aren’t yet sure which time intervals supply all the magic. Not that timing issues aren’t a powerful research focus. In fact, we can divide consolidation into two categories based on timing: fast and slow. To explain how timing issues figure into memory formation, I want to stop for a moment and tell you about how I met my wife.
sparking interest
I was dating somebody else when I first met Kari—and so was she. But I did not forget Kari. She is a physically beautiful, talented, Emmy-nominated composer, and one of the nicest people I have ever met. When we both became “available” six months later, I immediately asked her out. We had a great time, and I began thinking about her more and more. Turns out she was feeling the same. I asked her out again, and soon we were seeing each other regularly. After two months, it got so that every time we met, my heart would pound, my stomach would flip-flop, and I’d get sweaty palms. Eventually I didn’t even have to see her to raise my pulse. Just a picture would do, or a whiff of her perfume, or … just music! Even a fleeting thought was enough to send me into hours of rapture. I knew I was falling in love.
What was happening to effect such change? With increased exposure to this wonderful woman, I became increasingly sensitive to her presence, needing increasingly smaller “input” cues (perfume, for heaven’s sake?) to elicit increasingly stronger “output” responses. The effect has been long-lasting, with a tenure of almost three decades. Leaving the whys of the heart to poets and psychiatrists, the idea that increasingly limited exposures can result in increasingly stronger responses lies at the heart of how neurons learn things. Only it’s not called romance; it’s called long-term potentiation.
To describe LTP, we need to leave the high-altitude world of behavioral research and drop down to the more intimate world of cellular and molecular research. Let’s suppose you and I are looking at a laboratory Petri dish where two hippocampal neurons happily reside in close synaptic association. I will call the presynaptic neuron the “teacher” and the postsynaptic neuron the “student.” The goal of the teacher neuron is to pass on information, electrical in nature, to the student cell. Let’s give the teacher neuron some stimulus that inspires the cell to crack off an electrical signal to its student. For a short period of time, the student becomes stimulated and fires excitedly in response. The synaptic interaction between the two is said to be temporarily “strengthened.” This phenomenon is termed early LTP.
Unfortunately, the excitement lasts only for an hour or two. If the student neuron does not get the same information from the teacher within about 90 minutes, the student neuron’s level of excitement will vanish. The cell will literally reset itself to zero and act as if nothing happened, ready for any other signal that might come its way.
Early LTP is at obvious cross-purposes with the goals of the teacher neuron and, of course, with real teachers everywhere. How does one get that initial excitement to become permanent? Is there a way to transform a student’s short-lived response into a long-lived one?
You bet there is: The information must be repeated after a period of time has elapsed. If the signal is given only once by the cellular teacher, the excitement will be experienced by the cellular student only transiently. But if the information is repeatedly pulsed in discretely timed intervals (the timing for cells in a dish is about 10 minutes between pulses, done a total of three times), the relationship between the teacher neuron and the student neuron begins to change. Much like my relationship with Kari after a few dates, increasingly smaller and smaller inputs from the teacher are required to elicit increasingly stronger and stronger outputs from the student. This response is termed “late LTP.” Even in this tiny, isolated world of two neurons, timed repetition is deeply involved in whether or not learning will occur.
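That cellular arithmetic can be captured in a toy model. The sketch below is my own illustration, not a biophysical simulation: a single pulse produces only early LTP, which resets after about 90 minutes, while three pulses spaced 10 minutes apart tip the synapse into durable late LTP.

```python
EARLY_LTP_LIFETIME = 90   # minutes before an unreinforced synapse resets
PULSES_FOR_LATE_LTP = 3   # spaced pulses needed to make the change durable

class Synapse:
    """Toy teacher-student synapse tracking early vs. late LTP."""

    def __init__(self):
        self.pulse_times = []
        self.late_ltp = False  # once True, the strengthening is permanent

    def pulse(self, minute):
        # Pulses older than the early-LTP window have already "reset to zero."
        self.pulse_times = [t for t in self.pulse_times
                            if minute - t < EARLY_LTP_LIFETIME]
        self.pulse_times.append(minute)
        if len(self.pulse_times) >= PULSES_FOR_LATE_LTP:
            self.late_ltp = True

    def strengthened(self, minute):
        if self.late_ltp:
            return True  # late LTP endures
        return any(minute - t < EARLY_LTP_LIFETIME for t in self.pulse_times)

lone = Synapse()
lone.pulse(0)
print(lone.strengthened(120))  # False: early LTP has faded

spaced = Synapse()
for t in (0, 10, 20):
    spaced.pulse(t)
print(spaced.strengthened(120))  # True: late LTP persists
```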
The interval required for synaptic consolidation is measured in minutes and hours, which is why it is called fast consolidation. But don’t let the brevity of this interval fool you into discounting its importance. Any manipulation—behavioral, pharmacological, or genetic—that interferes with any part of this developing relationship will block memory formation in total.
Such data provide rock-solid evidence that repetition is critical in learning—at least, if you are talking about two neurons in a dish. How about between two people in a classroom? The comparatively simple world of the cell is very different from the complex world of the brain. It is not unusual for a single neuron to have hundreds of synaptic connections with other neurons.
This leads us to a type of consolidation measured in decidedly longer terms, and to stronger end-use implications. It is sometimes called “system consolidation,” sometimes “slow consolidation.” As we shall see, slow is probably the better term.
a chatty marriage
Nuclear annihilation is a good way to illustrate the differences between synaptic and system consolidation. On August 22, 1968, the Cold War got hot. I was studying history in junior high at the time, living with my Air Force-tethered family at an air base in central Germany, unhappily near ground zero if the atomics were ever to fly in the European theater.
If you could have visited my history class, you wouldn’t have liked it. For all the wonderfully intense subject matter—Napoleonic Wars!—the class was taught in a monotonic fashion by a French national, a teacher who really didn’t want to be there. And it didn’t help my concentration to be preoccupied with the events of the previous day. August 21, 1968, was the morning when a combined contingent of Soviet and Warsaw Pact armies invaded what used to be Czechoslovakia. Our air base went on high alert, and my dad, a member of the U.S. Air Force, had left the evening before. Ominously, he had not yet come home.
The instructor pointed to a large and beautiful painting of the Battle of Austerlitz on the wall, tediously discussing the early wars of Napoleon. I suddenly heard her angry voice say, “Do I need to ask zees twice?” Jolted out of my worried distraction, I turned around to find her looming over my desk. She cleared her throat. “I said, ‘Who were Napoleon’s enemies in zees battle?’” I suddenly realized she had been talking to me, and I blurted out the first words that came to my addled mind. “The Warsaw Pact armies! No? Wait! I mean the Soviet Union!” Fortunately, the teacher had a sense of humor and some understanding about the day. As the class erupted with laughter, she quickly thawed, tapped my shoulder, and walked back to her desk, shaking her head. “Zee enemies were a coalition of Russian and Austrian armies.” She paused. “And Napoleon cleaned their clocks.”
Many memory systems are involved in helping me to retrieve this humiliating memory, now almost four decades old. I want to use some of its semantic details to describe the timing properties of system consolidation.
Like Austerlitz, our neurological tale involves several armies of nerves. The first army is the cortex, that wafer-thin layer of nerves that blankets a brain the way an atmosphere blankets a battlefield. The second is a bit of a tongue twister, the medial temporal lobe. It houses another familiar old soldier, the oft-mentioned hippocampus. Crown jewel of the limbic system, the hippocampus helps shape the long-term character of many types of memory. That other teacher-student relationship we were discussing, the one made of neurons, takes place in the hippocampus.
How the cortex and the medial temporal lobe are cabled together tells the story of long-term memory formation. Neurons spring from the cortex and snake their way over to the lobe, allowing the hippocampus to listen in on what the cortex is receiving. Wires also erupt from the lobe and wriggle their way back to the cortex, returning the eavesdropping favor. This loop allows the hippocampus to issue orders to previously stimulated cortical regions while simultaneously gleaning information from them. It also allows us to form memories, and it played a large role in my ability to recount this story to you.
The end result of their association is the creation of long-term memories. How they work to provide stable memories is not well understood, even after three decades of research. We do know something about the characteristics of their communication:
1) Sensory information comes into the hippocampus from the cortex, and memories form in the cortex by way of the reverse connections.
2) Their electrical marriage starts out being amazingly chatty. Long after the initial stimulus has exited, the hippocampus and the relevant cortical neurons are still yapping about it. Even when I went to bed that night, the hippocampus was busy feeding signals about Austerlitz back to the cortex, replaying the memory over and over again while I slept. This offline processing provides an almost absurdly powerful reason to advocate for regular sleep. The importance of sleep to learning is described in detail in Chapter 7.
3) While these regions are actively engaged, any memory they mediate is labile and subject to amendment. But it doesn’t stay that way.
4) After an elapsed period of time, the hippocampus will let go of the cortex, effectively terminating the relationship. This will leave only the cortex holding the memory of the event. But there’s an important caveat. The hippocampus will file for cellular divorce only if the cortical memory has first become fully consolidated: only if the memory has changed from transient and amendable to durable and fixed. The process is at the heart of system consolidation, and it involves a complex reorganization of the brain regions supporting a particular memory trace.
So how long does it take for a piece of information, once recruited for long-term storage, to become completely stable? Another way of asking the question is: How long does it take before the hippocampus lets go of its cortical relationship? Hours? Days? Months? The answer surprises nearly everybody who hears it for the first time. The answer is: It can take years.
memories on the move
Remember H.M., the fellow whose hippocampus was surgically removed, and along with it the ability to encode new information? H.M. could meet you twice in two hours, with absolutely no recollection of the first meeting. This inability to encode information for long-term storage is called anterograde amnesia. Turns out this famous patient also had retrograde amnesia, a loss of memory of the past. You could ask H.M. about an event that occurred three years before his surgery. No memory. Seven years before his surgery. No memory. If that’s all you knew about H.M., you might conclude that his hippocampal loss created a complete memory meltdown. But that’s where you’d be wrong. If you asked H.M. about the distant past, say early childhood, he would display a perfectly normal recollection, just as you and I might. He could remember his family, where he lived, details of various events, and so on. This is a conversation with a researcher who studied him for many years:
Researcher: Can you remember any particular event that was special—like a holiday, Christmas, birthday, Easter?
(Now remember, this is a fellow who cannot ever remember meeting this researcher before this interview, though the researcher has worked with him for decades.)
H.M.: There I have an argument with myself about Christmas time.
Researcher: What about Christmas?
H.M.: Well, ’cause my daddy was from the South, and he didn’t celebrate down there like they do up here—in the North. Like they don’t have the trees or anything like that. And uh, but he came North even though he was born down Louisiana. And I know the name of the town he was born in.
If H.M. can recall certain details about his distant past, there must be a point where memory loss began. Where was it? Close analysis revealed that his memory doesn’t start to sputter until you get to about the 11th year before his surgery. If you were to graph his memory, the curve would start out very high and then, 11 years before his surgery, drop to near zero, where it would remain forever.
What does that mean? If the hippocampus were involved in all memory abilities, its complete removal should destroy all memory abilities—wipe the memory clean. But it doesn’t. The hippocampus is relevant to memory formation for more than a decade after the event was recruited for long-term storage. After that, the memory somehow makes it to another region, one not affected by H.M.’s brain losses, and as a result, H.M. can retrieve it. H.M., and patients like him, tell us the hippocampus holds on to a newly formed memory trace for years. Not days. Not months. Years. Even a decade or more. System consolidation, that process of transforming a labile memory into a durable one, can take years to complete. During that time, the memory is not stable.
There are, of course, many questions to ask about this process. Where does the memory go during those intervening years? Joseph LeDoux has coined the term “nomadic memory” to illustrate memory’s lengthy sojourn through the brain’s neural wilderness. But that does not answer the question. Currently nobody knows where it goes, or even if it goes. Another question: Why does the hippocampus eventually throw in the towel with its cortical relationships, after spending years nurturing them? Where is the final resting place of the memory once it has fully consolidated? At least in response to that last question, the answer is a bit clearer. The final resting place for the memory is a region that will be very familiar to movie buffs, especially if you like films such as The Wizard of Oz, The Time Machine, and the original Planet of the Apes.
Planet of the Apes was released in 1968, the same year as the Soviet invasion, and appropriately dealt with apocalyptic themes. The main character, a spaceman played by Charlton Heston, had crash-landed onto a planet ruled by apes. At the end of the movie, having escaped a gang of malevolent simians, Heston walks along a beach in the film’s last frames. Suddenly, he sees something off camera of such significance that it makes him drop to his knees. He screams, “You finally did it. God damn you all to hell!” and pounds his fists into the surf, sobbing.
As the camera pulls back from Heston, you see the outline of a vaguely familiar sculpture. Eventually the Statue of Liberty is revealed, half buried in the sand, and then it hits you why Heston is screaming. After this long cinematic journey, he wasn’t adventuring on foreign soil. Heston never left Earth. His ending place was the same as his starting place, and the only difference was the timeline. His ship had “crashed” at a point in the far future, a post-apocalyptic Earth now ruled by apes. The first time I encountered data concerning the final resting place of a fully consolidated memory, I immediately thought of this movie.
You recall that the hippocampus is wired to receive information from the cortex as well as return information to it. Declarative memories appear to be terminally stored in the same cortical systems involved in the initial processing of the stimulus. In other words, the final resting place is also the region that served as the initial starting place. The only separation is time, not location. These data have a great deal to say not only about storage but also about recall. Retrieval for a fully mature memory trace 10 years later may simply be an attempt to reconstruct the initial moments of learning, when the memory was only a few milliseconds old! So, the current model looks something like this:
1) Long-term memories arise from accumulations of synaptic changes in the cortex, the result of multiple reinstatements of the memory.
2) These reinstatements are directed by the hippocampus, perhaps for years.
3) Eventually the memory becomes independent of the medial temporal lobe, and this newer, more stable memory trace is permanently stored in the cortex.
4) Retrieval mechanisms may reconstruct the original pattern of neurons initially recruited during the first moments of learning.
forgetting
Solomon Shereshevskii, a Russian journalist born in 1886, seemed to have a virtually unlimited memory capacity, both for storage and for retrieval. Scientists would give him a list of things to memorize, usually combinations of numbers and letters, and then test his recall. As long as he was allowed 3 or 4 seconds to “visualize” (his words) each item, he could repeat the lists back perfectly, even if the lists had more than 70 elements. He could also repeat the list backward.
In one experiment, a researcher exposed Shereshevskii to a complex formula of letters and numbers containing about 30 items. After a single retrieval test (which Shereshevskii accomplished flawlessly), the researcher put the list in a box and waited 15 years. The scientist then took out the list, found Shereshevskii, and asked him to repeat the formula. Without hesitation, he reproduced the list on the spot, again without error. Shereshevskii’s memory of everything he encountered was so clear, so detailed, so unending that he lost the ability to organize it into meaningful patterns. Like living in a permanent snowstorm, he saw much of his life as blinding flakes of unrelated sensory information. He couldn’t see the “big picture,” meaning he couldn’t focus on commonalities between related experiences and discover larger, repeating patterns. Poems, carrying their typical heavy load of metaphor and simile, were incomprehensible to him. In fact, he probably could not make sense of the sentence you just read. Shereshevskii couldn’t forget, and it affected the way he functioned.
The last step in declarative processing is forgetting. The reason forgetting plays a vital role in our ability to function is deceptively simple. Forgetting allows us to prioritize events. Those events that are irrelevant to our survival will take up wasteful cognitive space if we assign them the same priority as events critical to our survival. So we don’t. We insult them by making them less stable. We forget them.
There appear to be many types of forgetting, categories cleverly enumerated by Dan Schacter, the father of research on the phenomenon, in his book The Seven Sins of Memory. Tip-of-the-tongue issues, absent-mindedness, blocking habits, misattribution, biases, suggestibility—the list reads like a cognitive Chamber of Horrors for students and business professionals alike. Regardless of the type of forgetting, they all have one thing in common. They allow us to drop pieces of information in favor of others. In so doing, forgetting helped us to conquer the Earth.
ideas
How can we use all of this information to conquer the classroom? The boardroom? Exploring the timing of information re-exposure is one obvious arena where researchers and practitioners might do productive work together. For example, we have no idea what this means for marketing. How often must you repeat the message before people buy a product? What determines whether they still remember it six months later, or a year later?
Minutes and hours
The day of a typical high-school student is segmented into five or six 50-minute periods, consisting of unrepeated (and unrelenting) streams of information. Using as a framework the timing requirements suggested by working memory, how would you change this five-period fire hose? What you’d come up with might be the strangest classroom experience in the world. Here’s my fantasy:
In the school of the future, lessons are divided into 25-minute modules, cyclically repeated throughout the day. Subject A is taught for 25 minutes, constituting the first exposure. Ninety minutes later, the 25-minute content of Subject A is repeated, and then a third time. All classes are segmented and interleaved in such a fashion. Because these repetition schedules slow down the amount of information capable of being addressed per unit of time, the school year is extended into the summer.
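As a thought experiment, the timetable is simple to generate. In this sketch the subject names, the 8:00 start, and the 15-minute breaks between cycles (which make each re-exposure land roughly 90 minutes after the last) are all placeholders of my own invention.

```python
from datetime import datetime, timedelta

MODULE = timedelta(minutes=25)
BREAK = timedelta(minutes=15)  # padding so each subject repeats ~90 min later
subjects = ["Subject A", "Subject B", "Subject C"]

clock = datetime(2024, 9, 2, 8, 0)  # hypothetical school day
for cycle in range(3):  # three exposures per subject
    for subject in subjects:
        print(clock.strftime("%H:%M"), f"{subject} (exposure {cycle + 1})")
        clock += MODULE
    clock += BREAK
```

Run it and Subject A lands at 8:00, 9:30, and 11:00: the first exposure plus two re-exposures, each about 90 minutes apart.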
Days and weeks
We know from Anthony Wagner’s work that multiple reinstatements provide demonstrable benefit over periods of days and even weeks.
In the future school, every third or fourth day would be reserved for reviewing the facts delivered in the previous 72 to 96 hours. During these “review holidays,” previous information would be presented in compressed fashion. Students would have a chance to inspect the notes they took during the initial exposures, comparing them with what the teacher was saying in the review. This would result in a greater elaboration of the information, and it would help the teachers deliver accurate information. A formalized exercise in error-checking soon would become a regular and positive part of both the teacher and student learning experiences.
It is quite possible that such models would eradicate the need for homework. At its best, homework serves only to force the student to repeat content. If that repetition were supplied during the course of the day, there might be little need for further re-exposure at night. Not that the repetition homework provides is unimportant; in the future school, homework itself may simply be unnecessary.
Could models like these actually work? Deliberately spaced repetitions have not been tested rigorously in the real world, so there are lots of questions. Do you really need three separate repetitions per subject per day to accrue a positive outcome? Do all subjects need such repetition? Might such interleaved vigor hurt learning, with constant repetitions beginning to interfere with one another as the day wore on? Do you really need review holidays, and if so, do you need them every three to four days? We don’t know.
Years and years
Today, students are expected to know certain things by certain grades. Curiously absent from this model is how durable that learning remains after the student completes the grade. Given that system consolidation can take years, might the idea of grade-level expectations need amending? Perhaps learning in the long view should be thought of the same way one thinks of immune booster shots, with critical pieces of information being repeated on a yearly or semi-yearly basis.
In my fantasy class, this is exactly what happens. Repetitions begin with a consistent and rigorous review of multiplication tables, fractions, and decimals. These basics, first learned in the third grade, receive six-month and yearly review sessions through sixth grade. As mathematical competencies increase in sophistication, the review content is changed to reflect greater understanding. But the cycles are still in place. In my fantasy, these consistent repetition disciplines, stretched out over long periods of time, create enormous benefits for every academic subject, especially foreign languages.
You’ve probably heard that many corporations, especially in technical fields, are disappointed by the quality of the American undergraduates they hire. They have to spend money retraining many of their newest employees in certain basic skills that they often think should have been covered in college. One of my business fantasies would partner engineering firms with colleges of engineering. It involves shoring up this deficit by instituting post-graduate repetition experiences. These reinstatement exercises would be instituted the week after graduation and continue through the first year of employment. The goal? To review every important technical subject relevant to the employee’s new job. Research would establish not only the choice of topics to be reviewed but also the optimal spacing of the repetition.
My fantasy shares the teaching load between firm members and the academic community, extending the idea of a bachelor’s degree into the workplace. This hybridization aligns business professionals with researchers, ensuring that companies have exposure to the latest advances in their fields (and informing researchers on the latest practical day-to-day issues business professionals face). In my fantasy, the program becomes so popular that the more experienced engineers also begin attending these refresher courses, inadvertently rubbing shoulders with younger generations. The old guard is surprised by how much they have forgotten, and how much the review and cross-hybridization, both with research professionals and younger students, aid their own job performance.
I wish I could tell you this all would work, but instead all I can say is that memory is not fixed at the moment of learning, and repetition provides the fixative.
Summary
Rule #6
Remember to repeat.
• Most memories disappear within minutes, but those that survive the fragile period strengthen with time.
• Long-term memories are formed in a two-way conversation between the hippocampus and the cortex, until the hippocampus breaks the connection and the memory is fixed in the cortex—which can take years.
• Our brains give us only an approximate view of reality, because they mix new knowledge with past memories and store them together as one.
• The way to make long-term memory more reliable is to incorporate new information gradually and repeat it in timed intervals.