memory

Brain Rule #7

Repeat to remember.


IT IS THE ULTIMATE intellectual flattery to be born with a mind so amazing that brain scientists voluntarily devote their careers to studying it. Two such minds emerged in the past century, and their remarkable brains provide much insight into human memory.

The first mind belongs to Kim Peek. He was born in 1951 with not one hint of his future intellectual greatness. He had an enlarged head, no corpus callosum, and a damaged cerebellum. He could not walk until age 4, and he could get catastrophically upset when he didn’t understand something, which was often. Diagnosing him in childhood as mentally disabled, his doctors wanted to place him in a mental institution. That didn’t happen, mostly because of the nurturing efforts of Peek’s father, who recognized that his son also had some very special intellectual gifts. One of those gifts was memory: Peek’s was one of the most prodigious ever recorded. He could read two pages at the same time, one with each eye, comprehending and remembering perfectly everything the pages contained. Forever.

Though publicity shy, Peek’s dad once granted writer Barry Morrow an interview with his son. They met at a library, where Peek demonstrated to Morrow a familiarity with literally every book in the building. He then started quoting ridiculous—and highly accurate—amounts of sports trivia. After a long discussion about the histories of United States wars (Revolutionary to Vietnam), Morrow had heard enough. He decided right then and there to write a screenplay about this man. Which he did: the Oscar-winning film Rain Man.

What was going on in the uneven brain of Kim Peek? Did his mind belong in a cognitive freak show, or was it only an extreme example of normal human learning? Clearly he had an extraordinary ability to remember facts. But something very important was occurring in the first few moments Peek’s brain was exposed to information, and it’s not so very different from what happens to the rest of us.

The first few moments of learning give us the ability to remember something. The brain has different types of memory systems, many operating in a semiautonomous fashion, and we know the most about declarative memory. Declarative memory involves something you can declare, such as “The sky is blue.” It unfolds in four steps: encoding, storing, retrieving, and forgetting. This chapter is about the first step. In fact, it is about the first few seconds of the first step. They are crucial in determining whether something that is initially perceived will also be remembered.

Why we have memory

We’re not born knowing everything we need to know about the world. We must learn it through firsthand experience or secondhand teaching. Memory provides a big survival advantage. It allows us to remember where food grows and where threats lurk. For creatures as physically weak as humans (compare your fingernail with the claw of even a house cat, and weep with envy), not allowing experience to shape our brains would have meant almost certain death in the rough-and-tumble world of the savannah.

But memory is more than a Darwinian chess piece. Most researchers agree that its broad influence on our brains is what truly makes us consciously aware. The names and faces of our loved ones, our own personal tastes, and especially our awareness of those names and faces and tastes, are maintained through memory. We don’t go to sleep and then, upon awakening, have to spend a week relearning the entire world. Memory does this for us. Even the single most distinctive talent of human cognition, the ability to write and speak in a language, exists because of active remembering. Memory, it seems, makes us not only durable but also human.

Types of memory

The type of memory Kim Peek was demonstrating so well is called declarative memory. You use it when you need to remember your Social Security number. Your retrieval commands might include things like visualizing the last time you saw the card, or remembering the last time you wrote down the number. And then you can state the number.

Here’s how we know there’s a second type of memory: Go ahead and remember how to ride a bike. Same process? Hardly. You do not call up a protocol list detailing where you put your foot, how to create the correct angle for your back, where your thumbs are supposed to be. The contrast proves an interesting point: One does not recall how to ride a bike in the same way one recalls nine numbers in a certain order. The ability to ride a bike seems quite independent from any conscious recollection of the skill. You were consciously aware when remembering your Social Security number, but not when remembering how to ride a bike. So declarative memories are those that can be experienced in our conscious awareness, such as a list of numbers, and nondeclarative memories are those that cannot be experienced in our conscious awareness, such as the motor skills necessary to ride a bike.

We also have both short-term and long-term forms of memory. A 19th-century German researcher, Hermann Ebbinghaus, was the first to show this. He performed the first real science-based inquiry into human memory—and he did the whole thing with his own brain. Born in 1850, Ebbinghaus as a young man looked like a cross between Santa Claus and John Lennon, with his bushy brown beard and round glasses. He designed a series of experiments with which a toddler might feel at ease: He made up lists of nonsense words, 2,300 of them, each a three-letter consonant-vowel-consonant construction, such as TAZ, LEF, REN, ZUG. He then spent years trying to memorize lists of these words in varying combinations and of varying lengths. With the tenacity of a Prussian infantryman (which, for a short time, he was), Ebbinghaus recorded his successes and failures. He uncovered many important things about human learning during this journey. He showed that memories have different life spans: Some hang around for only a few minutes, then vanish; others persist for days or months, even for a lifetime. He uncovered one of the most depressing facts in all of education: People usually forget 90 percent of what they learn in a class within 30 days, and the majority of this forgetting occurs within the first few hours after class. Ebbinghaus also showed that one could increase the life span of a memory by repeating the information in timed intervals, something we’ll return to later in this chapter.
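
Ebbinghaus’s forgetting data are often summarized today with a simple exponential decay model. As a rough later formalization (commonly attached to his results, though the notation is modern, not his own):

$$R(t) = e^{-t/S}$$

Here R is the fraction of material retained, t is the time since learning, and S is a “stability” term that grows with each well-timed repetition; a larger S means slower forgetting. This is the quantitative version of saying that spaced repetition extends a memory’s life span.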

Long before we get to remembering or forgetting, there is a fleeting golden instant when the brain first encounters a new piece of declarative information. Let’s see what the brain does.

We don’t just press “record”

Tom was a blind teenager who could listen to complex pieces of music and then play them on the piano—on his first try—with the skill and artistry of a professional. He was so versatile on the instrument, he could simultaneously play a different song with each hand. Yet Tom never took piano lessons. In fact, Tom never took any kind of music lessons. He simply listened to other people play. When we hear about people like this, we are usually jealous. Tom absorbed music as if he could switch some neural recording device in his head to the “on” position. We think we also have this video recorder, only our model is not nearly as good. It is a common impression that the brain is a lot like a recording device: that learning is something akin to pushing the “record” button, and remembering is simply pushing “play.” Wrong.

The initial moment of learning—of encoding—is incredibly mysterious and complex. The little we do know suggests that when information enters our head, our brain acts like a blender left running with the lid off. The information is chopped into discrete pieces and splattered all over the insides of our mind. This happens instantly. If you look at a complex picture, for example, your brain immediately extracts the diagonal lines from the vertical lines and stores them in separate areas. Same with color. If the picture is moving, the fact of its motion will be extracted and stored in a separate place from where it would be stored if the picture were static.

The brain slices and dices language the same way. One woman suffered a stroke in a specific region of her brain and lost the ability to use written vowels. You could ask her to write down a simple sentence, such as “Your dog chased the cat,” and it would look like this:

Y_ _ r d _ g ch _ s _ d t h _ c _ t.

There would be a place for every letter, but the vowels’ spots were left blank! So we know that vowels and consonants are not stored in the same place; her stroke damaged some kind of connecting wiring. Along the same lines, even though the woman lost the ability to fill in the vowels of a given word, she perfectly preserved her knowledge of where each vowel should go. So the place where a vowel should go appears to be stored in a separate area from the vowel itself. Content is stored separately from its context/container. That is exactly the opposite of the strategy a video recorder uses to record things.

The blender

Why does this happen? To encode information means to convert data into, well, a code. Information is translated from one form into another so that it can be transmitted. From a physiological perspective, the brain must translate external sources of energy (sights, sounds, etc.) into electrical patterns the brain can understand. The brain then stores these patterns in separate areas. Here’s an example.

One night I stayed with a friend who owned a beautiful lake cabin inhabited by a very large and hairy dog. Late the next morning, I decided to go out and play fetch with this friendly animal. I made the mistake of throwing the stick into the lake and, not owning a dog in those days, had no idea what was about to happen to me. Like some friendly sea monster from Disney, the dog leapt from the water, ran at me full speed, suddenly stopped, then started to shake violently. With no real sense that I should have moved, I got sopping wet. To the brain, this story is all about energy and electricity.

My eyes picked up patterns of photons, or light, bouncing off the Labrador. Instantly, my brain converted them into patterns of electrical activity and routed the signals to the visual cortex in my occipital lobe. Now my brain can see the dog. In the initial moments of this learning, my brain transformed the energy of light into an electrical language it fully understands. My ears picked up the sound waves of the dog’s loud bark. My brain converted the energy of the sound waves into the same brain-friendly electrical language. Then it routed them as well, but to the auditory cortex instead of the visual cortex. From a neuron’s perspective, those two centers are a million miles away from each other. Any energy source—from the feel of the sun on my skin to the instant I unexpectedly and unhappily got soaked—goes through this conversion and routing process.

Encoding involves all of our senses, and their processing centers are scattered throughout the brain. Hence, the blender concept. In one 10-second encounter with an overly friendly dog, the brain recruits hundreds of different brain regions and coordinates the electrical activity of millions of neurons, encoding a single episode over vast neural distances.

Hard to believe, isn’t it? The world appears to you as a unified whole. So how does your brain keep track of everything, and then how does it reunite all the elements to produce this perception of continuity? It is a question that has bothered researchers for years. It is called the “binding problem,” from the idea that certain thoughts are bound together in the brain to provide continuity. We have very little insight into how the brain routinely and effortlessly gives us this illusion of stability.

Effortless vs. effortful processing

There’s another way the brain decides how to encode information. Encoding, when viewed from a psychological perspective, is the manner in which we apprehend, pay attention to, and organize information so that we can store it. It is one of the many intellectual processes Kim Peek was so darn good at. The brain chooses among several types of encoding, and the ease with which we remember something depends in part on the process used to encode it.

Automatic processing

Some years ago, I attended an amazing Paul McCartney concert. If you were to ask me what I had for dinner before the concert and what happened onstage, I could tell you about both events in great detail. Though the actual memory is very complex (composed of spatial locations, sequences of events, sights, smells, tastes, etc.), I did not have to write down some exhaustive list of its varied experiences, then try to remember the list in detail just in case you asked me about my evening.

This is because my brain deployed a type of encoding scientists call automatic processing. Automatic processing occurs with glorious unintentionality, requiring minimal attention or effort. The brain appears to use this type of encoding in cases where we can visualize the information we encounter. (Automatic processing is often associated with being able to recall the physical location of the information, what came before it, and what came after it.) It is very easy to recall data that have been encoded via this process. The memories seem bound all together into a cohesive, readily retrievable form.

Effortful processing

Automatic processing has an evil twin that isn’t nearly so accommodating. As soon as the Paul McCartney tickets went on sale, I dashed to the purchasing website, which required my password for entrance. And I couldn’t remember my password! Finally, I found the right one and snagged some good seats. But trying to commit these passwords to memory is quite a chore, and I have a dozen or so passwords written on countless lists, scattered throughout my house. Unlike my Social Security number, I don’t use each password often enough to remember it. This kind of encoding—initiated deliberately, requiring conscious, energy-burning attention—is called effortful processing. The information does not seem bound together well at all, and it requires a lot of repetition before it can be retrieved with ease.

Others

Still other types of encoding exist. Three of them can be illustrated by taking the quick test below. Examine the capitalized word, and then answer the question below it.

FOOTBALL

Does this word fit into the sentence “I turned around to fight _______”?

LEVEL

Does this word rhyme with evil?

MINIMUM

Are there any circles in these letters?

Answering each question requires very different intellectual skills, which researchers now know underlie different types of encoding. The first example illustrates semantic encoding: paying attention to the definitions of words. The second example illustrates phonemic encoding, involving a comparison between the sounds of words. The third example illustrates structural encoding. Simply asking for a visual inspection of shapes, it is the most superficial type.

You can see how the type of encoding your brain performs on a given piece of information would have a great deal to do with your ability to remember the information at a later date.

Cracking the code

All encoding processes share certain characteristics. If we heed two of them, we can better encode (and thus remember) information.

1) The more elaborately we encode information at the moment of learning, the stronger the memory.

When the initial encoding is more detailed, more multifaceted, and more imbued with emotion, we form a more robust memory. You can demonstrate this right now with any two groups of friends. Have them gaze at the list of words below for a few minutes.

Tractor

Green

Apple

Zero

Weather

Pastel

Quickly

Ocean

Nicely

Countertop

Airplane

Jump

Laugh

Tall

Tell Group #1 to determine the number of letters that have diagonal lines in them and the number that do not. Tell Group #2 to think about the meaning of each word and rate, on a scale of 1 to 10, how much they like or dislike the word. Take the list away, let a few minutes pass, and then ask each group to write down as many words as possible.

Which group remembers more words? The result you get has been replicated in laboratories many times over. As researchers Larry Squire and Eric Kandel write, “The result of the experiment is dramatic and consistent. The group that processed meaning remembers two to three times as many words as the group that focused on the shapes of the letters.” You get the same result if you use pictures or even music.

At this point, you might be saying to yourself, “Well, duh!” Isn’t it obvious that the more meaning something has, the more memorable it becomes? Most researchers would answer, “Well, yeah!” The very naturalness of the tendency proves the point. Hunting for diagonal lines in the word “apple” is not nearly as elaborate as remembering wonderful Aunt Mabel’s apple pie, then rating the pie, and thus the word, a “10.” The more personal, the better.

The trick for business professionals, and for educators, is to present information so compelling that the audience provides this meaning on their own, spontaneously engaging in deep and elaborate encoding.

2) The more closely we replicate the conditions at the moment of learning, the easier the remembering.

In one of the most unusual experiments performed in cognitive psychology, deep-sea divers were divided into two groups—one standing around on dry ground wearing wet suits and the other floating in about 10 feet of water, also wearing wet suits. Both groups of divers listened to somebody speak 40 random words. The divers then had to try to recall the list of words. The group that heard the words while in the water got a 15 percent better score if they were asked to recall the words while back in those same 10 feet of water, compared with standing on the beach. The group that heard the words on the beach got a 15 percent better score if they were asked to recall the words while suited on the beach, compared with floating in 10 feet of water.

Memory worked best, it appeared, if the environmental conditions at retrieval mimicked the environmental conditions at encoding. This occurs even under conditions where learning of any kind should be crippled, such as when a person is under the influence of marijuana or even laughing gas (nitrous oxide). Mood creates environmental conditions, too. Learn something while you are sad and you will be able to recall it better if, at retrieval, you are somehow suddenly made sad. It’s called context-dependent or state-dependent learning. It may work because of the following concept.

One pathway for encoding and storing

After new information is perceived and processed, it is not transferred to some central hard drive in the brain for storage. There is no central hunting ground where memories go to be infinitely retrieved. Instead, the neural pathways the brain recruits to process new information are the same pathways it uses to store that information. This means memories are distributed all over the surface of the cortex, with each brain region making its own contribution to a memory.

This idea is so counterintuitive that it may take an urban legend to explain it. At least, I think it’s an urban legend. I heard it at a university administrators’ luncheon I once attended. The keynote speaker told the story of the wiliest college president he ever encountered. The college had completely redone its grounds in the summer, resplendent with fountains and beautifully manicured lawns. All that was needed was to install the sidewalks and walkways where the students could access the buildings. But there was no design for these permanent paths. The construction workers were anxious to install them and wanted to know what the design would be, but the president refused to provide one. “Install them next year, please,” he said. “I will give you the plans then.” Disgruntled but compliant, the construction workers waited. The school year began, and the students were forced to walk on the grass to get to their classes. Very soon, defined trails started appearing all over campus, as well as large islands of beautiful green lawn. By the end of the year, the buildings were connected by paths in a surprisingly efficient manner. “Now,” said the president to the contractors who had waited all year, “you can install the permanent sidewalks and pathways. Simply fill in all the paths you see before you!” The initial design, created by the initial input, also became the permanent path.

The brain’s storage strategy is remarkably similar to the president’s plan. New information penetrating the brain can be likened to the dirt paths that the students created across a pristine lawn. The final storage area can be likened to the pathways being permanently filled in with asphalt. They are the same pathways. This is why the initial moments of learning are so critical to retrieving that learning.

More ideas

The quality of the encoding stage—those earliest moments of learning—is one of the greatest predictors of later learning success. We know that information is remembered best when it is elaborate, meaningful, and contextual. What can we do to take advantage of that in the real world?

First, we can take a lesson from a shoe store I used to visit as a little boy. This shoe store had a door with three handles at different heights: one near the very top, one near the very bottom, and one in the middle. The logic was simple: The more handles on the door, the more access points were available for entrance, regardless of the strength or age of the customer. What a relief for a 5-year-old—a door I could actually reach! I was so intrigued with the door that I used to dream about it. In my dreams, however, there were hundreds of handles, all capable of opening the door to this shoe store.

“Quality of encoding” really means the number of door handles one can put on the entrance to a piece of information. The more handles one creates at the moment of learning, the more likely the information is to be accessed at a later date. The handles we can add revolve around content, timing, and environment.

Understand what the information means

The more a learner focuses on the meaning of information being presented, the more elaborately he or she will process the information. This principle is so obvious that it is easy to miss. What it means is this: When you are trying to drive a piece of information into your brain’s memory systems, make sure you understand exactly what that information means. If you are trying to drive information into someone else’s brain, make sure they understand exactly what it means. The corollary is true as well. If you don’t know what the learning means, don’t try to memorize the information by rote and pray the meaning will somehow reveal itself. And don’t expect your students to do it either, especially if you have done an inadequate job of explaining things. This is like attempting to remember words by looking at the number of diagonal lines in the words.

Use real-world examples

How does one communicate meaning in such a fashion that learning is improved? A simple trick involves the liberal use of relevant real-world examples, thus peppering main learning points with meaningful experiences. As a student, you can do this while studying after class. Teachers can do it during the actual learning experience.

Numerous studies show this works. In one experiment, groups of students read a 32-paragraph paper about a fictitious foreign country. The introductory paragraphs in the paper were highly structured. They contained either no examples, one example, or two or three consecutive examples of the main theme that followed. The greater the number of examples in the paragraph, the more likely the students were to remember the information. It’s best to use real-world situations familiar to the learner. Remember wonderful Aunt Mabel’s apple pie? That wasn’t an abstract food cooked by a stranger; it was real food cooked by a loving relative. The more personal an example, the more richly it becomes encoded and the more readily it is remembered.

Examples work because they take advantage of the brain’s natural predilection for pattern matching. Information is more readily processed if it can be immediately associated with information already present in the brain. We compare the two inputs, looking for similarities and differences as we encode the new information. Providing examples is the cognitive equivalent of adding more handles to the door. It makes the information more elaborate, more complex, better encoded, and therefore better learned.

Start with a compelling introduction

Introductions are everything. As an undergraduate, I had a professor who could charitably be described as a lunatic. He taught a class on the history of cinema, and one day he decided to illustrate for us how art films traditionally depict emotional vulnerability. As he went through the lecture, he literally began taking off his clothes. He first took off his sweater and then, one button at a time, began removing his shirt, down to his T-shirt. He unzipped his trousers, and they fell around his feet, revealing, thank goodness, gym clothes. His eyes were shining as he exclaimed, “You will probably never forget now that some films use physical nudity to express emotional vulnerability. What could be more vulnerable than being naked?” We were thankful that he gave us no further details of his example. I will never forget the introduction to this unit in my film class (not that I’m endorsing its specifics). But its memorability illustrates the timing principle: The events that happen the first time you are exposed to information play a disproportionately greater role in your ability to accurately retrieve it at a later date. If you are trying to get information across to someone, a compelling introduction may be the most important single factor in the success of your mission. Why this emphasis on the initial moments? Because the memory of an event is stored in the same places initially recruited to perceive it.

Other professions have stumbled onto this notion. Budding directors are told by their film instructors that the audience needs to be hooked in the first three minutes after the opening credits to make the film compelling (and financially successful). Public speaking professionals say that you win or lose the battle to hold your audience in the first 30 seconds of a given presentation.

Create familiar settings

We know the importance of learning and retrieval taking place under the same conditions, but we don’t have a solid definition of “same conditions.” There are many ways for you to explore this idea.

One suggestion is that bilingual families create a “Spanish Room.” This would be a room with a rule: Only the Spanish language could be spoken in it. The room could be filled with Hispanic artifacts and pictures of Spanish words. All Spanish would be taught there, and no English. Anecdotally, parents have told me this works.

When setting up their children’s playroom at home, parents could create stations for science and stations for art—and not do science at the art station. Students could study for an oral examination orally, rather than by reviewing written material. Future car mechanics could be taught about engine repair in the actual shop where the repairs will occur.

At the moment of learning, environmental features—even ones irrelevant to the learning goals—may become encoded into the memory, right along with the goals. Environment then becomes part of elaborate encoding, the equivalent of putting more handles on the door.

After encoding, working memory kicks in

What happens to declarative information after those first few moments of encoding? We have the ability to hold it in our memory for a little while.

For many years, textbooks described this process using a metaphor involving cranky dockworkers, a large bookstore, and a small loading dock. An event to be processed into memory was likened to somebody dropping off a load of books onto the dock. If a dockworker hauled the load into the vast bookstore, it became stored for a lifetime. Because the loading dock was small, only a few loads could be processed at any one time. If someone dumped a new load of books on the dock before the previous ones were removed, the cranky workers simply pushed the old ones over the side.

Nobody uses this metaphor anymore. Short-term memory, we now know, is a much more active, much less sequential, far more complex process than that. Short-term memory is a collection of temporary memory capacities—busy work spaces where the brain processes newly acquired information. Each work space specializes in processing a specific type of information: auditory information, visual information, stories—plus a “central executive” to keep track of the activities of the others. These all operate in parallel. To reflect this multifaceted talent, short-term memory is now called working memory. The best way to explain working memory is to watch it in action. I can think of no better illustration than the professional chess world’s first real rock star: Miguel Najdorf.

Rarely was a man more at ease with his greatness than Najdorf. He was a short, dapper fellow gifted with a truly enormous voice, and he had an annoying tendency to poll members of his audience on how they thought he was doing. In 1939, Najdorf traveled to a competition in Buenos Aires with the Polish national team. Two weeks later, Germany invaded his home country of Poland. Unable to return, Najdorf rode out the Holocaust tucked safely inside Argentina. He lost his parents, four brothers, and his wife to the concentration camps. Partly in hopes that any remaining family might read about it and contact him (and partly as a publicity stunt), he once played 45 games of chess simultaneously. He won 39 of these games, drew four, and lost two. While that is amazing in its own right, the truly phenomenal part is that he played all 45 games, over 11 hours, blindfolded. You did not read that wrong. Najdorf never physically saw any of the chessboards or pieces; he played each game in his mind.

Several components of working memory were operating simultaneously in Najdorf’s brain to allow him to do this. Najdorf’s opponents verbally declared their chess moves. The work space assigned to linguistic information (called the phonological loop) allowed him to temporarily retain this auditory information.

To make his own chess move, Najdorf would visualize what each board looked like. The work space assigned to images and spatial input (called the visuospatial sketch pad) kicked in and allowed him to temporarily retain this visual information.

To separate one game from another, Najdorf’s brain used the work space that keeps track of all activities throughout working memory (the central executive).

All of these work spaces have two things in common: All have a limited capacity, and all have a limited duration. Working memory is the bridge between the first few seconds of encoding and the process of storing a memory for a longer time. If the information held in working memory is not transformed into a more durable form, it will soon disappear.

What would happen if you lost the ability to convert short-term information to long-term memories? A 9-year-old boy, knocked off his bicycle, gave us an idea. Known to scientists as H.M., he is our second famous mind. The accident left H.M. with severe epilepsy. The seizures became so bad that, by his late 20s, H.M. was essentially a shut-in—a danger to himself and others. His family turned to famed neurosurgeon William Scoville in hopes of a cure. Scoville decided on drastic action: He would remove part of H.M.’s brain. The seizures were deemed to come from H.M.’s temporal lobe; if parts of it were removed, the logic went, the seizures should go away. The procedure, called a resection, is still in use today.

The surgeon won the battle but lost the war. The epilepsy was gone, but so was H.M.’s memory. He could meet you once and then meet you again an hour or two later, with absolutely no recall of the first visit. Even more dramatically, H.M. could no longer recognize his own face in the mirror. As his face aged, some of his physical features changed. But, unlike the rest of us, H.M. could not convert this new information into a longer-term form. This left him more or less permanently locked into a single idea about his appearance. When he looked in the mirror, the face he saw no longer matched that single idea, and he could not identify the person in the image. H.M.’s brain could still encode new information, but he had lost the ability to convert it into a durable form.

The process of converting short-term memory traces to longer-term forms is called consolidation. It is our next subject.

Long-term memory

At first, a memory trace is flexible, labile, subject to amendment, and at great risk for extinction. Most of the inputs we encounter in a given day fall into this category. But some memories stick with us. Initially fragile, these memories strengthen with time and become remarkably persistent. They eventually reach a state where they appear to be infinitely retrievable and resistant to amendment. As we shall see, however, they’re not as stable as we think. Nonetheless, we call these forms long-term memories. Consider the following story, which happened while I was watching a TV documentary with my then 6-year-old son. It was about dog shows. When the camera focused on a German shepherd with a black muzzle, an event that occurred when I was about his age came flooding back to my awareness.

In 1960, our backyard neighbor owned a dog he neglected to feed (we assumed) every Saturday. The dog bounded over our fence precisely at 8:00 a.m., ran toward our metal garbage cans, tipped out the contents, and began a morning repast. My dad got sick of this dog and decided one Friday night to electrify the can in such a fashion that the dog would get shocked if his wet nose so much as brushed against it. The next morning, my dad awakened our entire family early to observe his “hot dog” show. To Dad’s disappointment, the dog didn’t jump over the fence until late in the morning, and he didn’t come to eat. Instead, he came to mark his territory, which he did at several points around our backyard. As the dog moved closer to the can, my dad started to smile, and when the dog lifted his leg to mark our garbage can, my dad exclaimed, “Yes!” You don’t have to know the concentration of electrolytes in mammalian urine to know that when the dog marked his territory on our garbage can, he also completed a mighty circuit. His cranial neurons ablaze, his reproductive future suddenly in serious question, the dog howled, bounding back to his owner. The dog never set foot in our backyard again; in fact, he never came within 100 yards of our house. Our neighbor’s dog was a German shepherd with a distinct black muzzle, just like the one in the television show I was now watching. I had not thought of the incident in years.

What happened to my dog memory when summoned back to awareness? We used to think that consolidation, the mechanism that guides a short-term memory into a long-term memory, affected only newly acquired memories. Once the memory hardened, it never returned to its initial fragile condition. We don’t think that anymore. There is increasing evidence that when previously consolidated memories are recalled from long-term storage into consciousness, they revert to short-term memories. Acting as if newly minted into working memory, these memories may need to be reprocessed if they are to remain in a durable form.

That means my dog story is forced to start the consolidation process all over again, every time I retrieve it. This process is formally termed reconsolidation. As you can imagine, many scientists now question the entire notion of stability in human memory. If consolidation is not a sequential one-time event but an event that occurs every time a memory trace is reactivated, it means permanent storage exists in our brains only for those memories we choose not to recall! If this is true, the case I am about to make for repetition in learning is ridiculously important.

Retrieving memories: libraries and detectives

Like working memory, we appear to have different forms of long-term memory, most of which interact with one another. Unlike working memory, there is not as much agreement as to what those forms are. Most researchers believe we have semantic memory systems, in charge of remembering things like your sister’s favorite dress or your weight in high school. Most believe there is episodic memory, in charge of remembering “episodes” of past experiences, complete with characters, plots, and time stamps—like your five-year high school reunion. Autobiographical memory, a subset of episodic memory, features a familiar protagonist: you.

How do we retrieve such memories? Two ways, researchers think. One model passively imagines libraries. The other aggressively imagines crime scenes.

In the library model, memories are stored in our heads the same way books are stored in a library. Retrieval begins with a command to browse through the stacks and select a specific volume. Once selected, the contents of the volume are brought into conscious awareness and read like a book. The memory is retrieved. This is the model we use soon after learning something (within minutes, hours, or days). In these cases, we are able to reproduce a fairly specific and detailed account of a given memory.

But as time goes by, and once-clear details fade, we switch to the second model. This model imagines our memories to be more like a large collection of crime scenes, complete with their own Sherlock Holmes. Retrieval begins by summoning the detective to a particular crime scene, full of fragments of data. Mr. Holmes examines the partial evidence available, and he invents a reconstruction of what was actually stored. The brain’s Sherlock Holmes, however, isn’t afraid to use a little imagination. In an attempt to fill in missing gaps, the brain relies on fragments, inferences, guesswork, and often—disturbingly—memories not even related to the actual event.

Why would the brain insert false information as it tries to reconstruct a memory? It stems from a desire to create organization out of a bewildering and confusing world. Here’s what is happening: The brain constantly receives new inputs. It needs to store some of them in the same places already occupied by previous experiences. Trained in pattern matching, the brain connects new information to previously encountered information, in an attempt to make sense of the world. Accessing that previous information returns it to an amendable form. The new information resculpts the old. And the brain then sends the re-created whole back for new storage. What does this mean? Merely that present knowledge can bleed into past memories and become intertwined with them as if they were encountered together. Does that give you only an approximate view of reality? You bet it does.

Psychiatrist Daniel Offer demonstrated how faulty our Sherlock Holmes style of retrieval can be. If you had been one of his study subjects as a high-school freshman, Offer would have asked you to answer some questions that are really none of his business. Was religion helpful to you growing up? Did you receive physical punishment as discipline? Did your parents encourage you to be active in sports? And so on. Thirty-four years would go by. Then Offer would track you down and give you the same questionnaire. Unbeknownst to you, he would still have the answers you gave in high school, and he would be out to compare your answers. How well would you do?

Horribly. Take the question about physical punishment, for example. Offer found that only a third of the adults in his study recalled receiving any physical punishment, such as spanking, as kids. Yet nearly 90 percent of them had answered the question in the affirmative as adolescents.

Repetition fixes memories

Is there any hope of creating reliable long-term memories? As our Brain Rule—Repeat to remember—cheerily suggests, the answer is yes. Memory may not be fixed at the moment of learning, but repetition, doled out in specifically timed intervals, is the fixative.

Here’s a test for you. Gaze at the following list of characters for about 30 seconds, then cover it up before you continue reading.

3 $ 8 ? A % 9

Can you recall the characters in the list without looking at them? Were you able to do it without internally rehearsing them? Don’t be alarmed if you couldn’t. The typical human brain can hold about seven pieces of new information for less than 30 seconds! If something does not happen in that short stretch of time, the information becomes lost. If you want to extend the 30 seconds to, say, a few minutes, or even an hour or two, you will need to consistently re-expose yourself to the information. This type of repetition is sometimes called maintenance rehearsal. It is good for keeping things in working memory—that is, for a short period of time. But there is a better way to push information into long-term memory. To describe it, I would like to relate the first time I ever saw somebody die.

Actually, I saw eight people die. The son of a career Air Force official, I was very used to seeing military airplanes in the sky. But I looked up one afternoon to see a cargo plane do something I had never seen before or since. It was falling from the sky, locked in a dead man’s spiral. It hit the ground less than a thousand feet from where I stood, and I felt both the shock wave and the heat of the explosion. There are two things I could have done with this information. I could have kept it to myself, or I could have told the world. I chose the latter. After immediately rushing home to tell my parents, I called some of my friends. We met for sodas and began talking about what had just happened. The sounds of the engine cutting out. Our surprise. Our fear. As horrible as the accident was, we talked about it so much in the next week that the subject got tiresome. One of my teachers actually forbade us from bringing it up during class time, threatening to make T-shirts saying, “You’ve done enough talking.”

Why do I still remember the details of this story? Because of my eagerness to yap about the experience. The gabfest after the accident forced a consistent re-exposure to the basic facts, followed by a detailed elaboration of our impressions. This is called elaborative rehearsal, and it’s the type of repetition most effective for the most robust retrieval. A great deal of research shows that thinking or talking about an event immediately after it has occurred enhances memory for that event, even when accounting for differences in type of memory. This is one of the reasons why it is so critical to have a witness recall information as soon as is humanly possible after a crime.

The timing of the repetitions is a key component. Hermann Ebbinghaus demonstrated this more than 100 years ago: Repeated exposure to information in spaced intervals provides the most powerful way to fix memory into the brain.

Repetitions must be spaced out, not crammed in

Much like concrete, memory takes an almost ridiculous amount of time to settle into its permanent form. While it is hardening, it is maddeningly subject to amendment. As we discussed, new information can reshape or wear away previously existing memory traces. Such interference is likely to occur when we encounter an overdose of information without breaks, much like what happens in most conferences and classrooms. But this interference doesn’t occur if the information is built up slowly, repeated in deliberately spaced cycles. Repetition cycles add information to our knowledge base, rather than disturbing the resident tenants.

If scientists want to know whether you are retrieving a vivid memory, they don’t have to ask you. They can simply look in their fMRI (functional magnetic resonance imaging) machine and see whether your left inferior prefrontal cortex is active. Scientist Anthony Wagner used this fact to study two groups of students given a list of words to memorize. The first group was shown the words via massed repetition, reminiscent of students cramming for an exam. The second group was shown the words in spaced intervals over a longer period of time. The second group recalled the list of words with much more accuracy, with more activity in the cortex showing up on the fMRI. Based on these results, Harvard psychology professor Dan Schacter wrote: “[I]f you want to study for a test you will be taking in a week’s time, and are able to go through the material 10 times, it is better to space out the 10 repetitions during the week than to squeeze them all together.”

Scientists aren’t yet sure which time intervals supply all the magic. But taken together, the relationship between repetition and memory is clear. Deliberately re-expose yourself to information if you want to retrieve it later. Deliberately re-expose yourself to information more elaborately if you want to remember more of the details. Deliberately re-expose yourself to the information more elaborately and in fixed, spaced intervals if you want the retrieval to be as vivid as possible.
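
If you want to experiment with spacing yourself, here is a minimal sketch in Python of a deliberately spaced re-exposure schedule. The specific gaps (1, 3, 7, and 21 days) and the expanding pattern are hypothetical placeholders borrowed from common spaced-repetition practice, not values from this chapter; as noted above, nobody yet knows which intervals supply the magic.

```python
from datetime import date, timedelta

def review_dates(first_exposure, gaps_in_days=(1, 3, 7, 21)):
    """Return hypothetical dates for re-exposing yourself to material
    first learned on `first_exposure`. The gap values are illustrative
    placeholders; the chapter's only firm claim is that spaced
    repetitions beat massed ones."""
    dates = []
    current = first_exposure
    for gap in gaps_in_days:
        current += timedelta(days=gap)
        dates.append(current)
    return dates

# Example: material first studied on September 1 would be reviewed
# on September 2, September 5, September 12, and October 3.
for d in review_dates(date(2024, 9, 1)):
    print(d.isoformat())
```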

Memory consolidation goes fast, then slow

I was dating somebody else when I first met Kari—and so was she. But I did not forget Kari. She is a physically beautiful, talented, Emmy-nominated composer, and one of the nicest people I have ever met. When we both became “available” six months later, I immediately asked her out. We had a great time, and I began thinking about her more and more. Turns out she was feeling the same. Soon we were seeing each other regularly. After two months, it got so that every time we met, my heart would pound, my stomach would flip-flop, and I’d get sweaty palms. Eventually I didn’t even have to see her to raise my pulse. Just a picture would do, or a whiff of her perfume, or … just music! Even a fleeting thought was enough to send me into hours of rapture. I knew I was falling in love.

What was happening to effect such change? With increased exposure to this wonderful woman, I became increasingly sensitive to her presence, needing increasingly smaller “input” cues (perfume, for heaven’s sake?) to elicit increasingly stronger “output” responses. The effect has been long-lasting, with a tenure of more than three decades. Leaving the whys of the heart to poets and psychiatrists, the idea that increasingly limited exposures can result in increasingly stronger responses lies at the heart of how neurons learn things. Only it’s not called romance; it’s called long-term potentiation. LTP shows us how timed repetition works at the level of the neuron.

Fast consolidation

To describe LTP, we need to leave the world of behavior and drop down to the more intimate world of cell and molecule. Let’s return to our tiny submarine in the hippocampus, where we were floating between two connected neurons. I will call the presynaptic neuron the “teacher” and the postsynaptic neuron the “student.” The goal of the teacher neuron is to pass on information, electrical in nature, to the student cell. The teacher neuron, after receiving some stimulus, cracks off an electrical signal to its student. For a short period of time, the student becomes stimulated and fires excitedly in response. The synaptic interaction between the two is said to be temporarily “strengthened.” This phenomenon is termed early LTP.

Unfortunately, the excitement lasts only for an hour or two. If the student neuron does not get the same information from the teacher within about 90 minutes, the student neuron’s level of excitement will vanish. The cell will literally reset itself to zero and act as if nothing happened, ready for any other signal that might come its way. But if the information is repeatedly pulsed in discretely timed intervals—the timing for cells in a petri dish is three pulses, with about 10 minutes between each—the relationship between the teacher neuron and the student neuron begins to change. Much like my relationship with Kari after a few dates, increasingly smaller and smaller inputs from the teacher are required to elicit increasingly stronger and stronger outputs from the student. This response is termed late LTP.

When two neurons make it from early LTP to late LTP, you get synaptic consolidation. Scientists also call it fast consolidation, because it happens within minutes or hours. If it happens, that is. Any manipulation—behavioral, pharmacological, or genetic—that interferes with any part of this developing relationship will entirely block memory formation.

Slow consolidation

Two neurons alone don’t allow us to form long-term memories. Those require the many neurons that connect the hippocampus to the cortex, marrying the two in a chatty relationship. The cortex is that paper-thin layer of surface tissue that’s about the size of a baby blanket when unfurled. It is composed of six discrete layers of neural cells, which process signals originating from many parts of the body, including those lassoed by your sense organs. The cortex is connected to the deeper parts of the brain—including the hippocampus—by a hopelessly incomprehensible thicket of neural connections, like a complex root system. Communication between the cortex and hippocampus (lots of synaptic consolidation) is what allows the creation of long-term memories. This system consolidation takes a long time, so scientists call it slow consolidation.

Remember H.M., the man who couldn’t recognize his own face in the mirror after his hippocampus was surgically removed? H.M. could meet you twice in two hours, with absolutely no recollection of the first meeting. He doesn’t remember ever meeting a researcher who has worked with him for decades. This inability to encode information for long-term storage is called anterograde amnesia. H.M. also had retrograde amnesia, a loss of memory of the past. You could ask H.M. about an event that occurred three years before his surgery. No memory. Seven years before his surgery. No memory. If that’s all you knew about H.M., you might conclude that his hippocampal loss created a complete memory meltdown. But you’d be wrong.

If you asked H.M. about the very distant past, say early childhood, he would display a perfectly normal recollection, just as you and I might. He could remember his family, where he lived, details of various events, and so on. This is a conversation with the researcher who studied him for many years:

Researcher: Can you remember any particular event that was special—like a holiday, Christmas, birthday, Easter?

H.M.: There I have an argument with myself about Christmastime.

Researcher: What about Christmas?

H.M.: Well, ’cause my daddy was from the South, and he didn’t celebrate down there like they do up here—in the North. Like they don’t have the trees or anything like that. And uh, but he came North even though he was born down Louisiana. And I know the name of the town he was born in.

H.M.’s memory is intact for events that occurred more than about 11 years before his surgery. How is that possible? If the hippocampus were involved in all memory formation, removing it should wipe the memory clean. But it doesn’t. The hippocampus is relevant to memory formation for about 11 years after an event is recruited for long-term storage. After that, the memory somehow makes it to another region, one not affected by H.M.’s brain losses. Here’s the interaction between the cortex and hippocampus that allows us to form long-term memories, and the reason H.M. still remembers Christmas:

1) The cortex receives sensory information and sends it to the hippocampus. They chat about it—a lot. Long after the initial stimulus has faded away, the hippocampus and the relevant cortical neurons are still yapping. As you sleep, the hippocampus is busy feeding signals back to the cortex, replaying a memory over and over again. The importance of sleep to learning is described in the Sleep chapter.

2) While the hippocampus and cortex are actively engaged, any memory they mediate is labile and subject to amendment.

3) After a period of time, the hippocampus will let go, effectively terminating the relationship with the cortex. The cortex is left holding the memory of the event. The hippocampus files for cellular separation only if the memory has become durable and fixed (consolidated) in the cortex. This process is at the heart of system consolidation, and it involves a complex reorganization of the brain regions supporting a particular memory trace.

How long does it take before the hippocampus lets go of its relationship with the cortex? In other words, how long does it take for a piece of information, once recruited for long-term storage, to become completely stable? Hours? Days? Months? The answer surprises nearly everybody who hears it for the first time. It can take years.

That’s what the case of H.M., and patients like him, tell us. System consolidation, the process of transforming a short-term memory into a long-term one, can take years to complete. During that time, the memory is not stable.

As with short-term memories, long-term memories are stored in the same places that initially processed the stimulus. Retrieving a long-term memory 10 years later may simply be an attempt to reconstruct the initial moments of learning, when the memory was only a few milliseconds old!

Forgetting

We’ve talked about encoding, storage, and retrieval, the first three steps of declarative memory. The last step is forgetting. Forgetting plays a vital role in our ability to function for a deceptively simple reason. Forgetting allows us to prioritize. Anything irrelevant to our survival will take up wasteful cognitive space if we assign it the same priority as events critical to our survival. So we don’t. At least, most of us don’t.

Solomon Shereshevskii, a Russian journalist born in 1886, seemed to have a virtually unlimited memory. Scientists would give him a list of things to memorize, usually combinations of numbers and letters, and then test his recall. Shereshevskii needed only three or four seconds to “visualize” (his words) each item. Then he could repeat the lists back perfectly, forward or backward—even lists with more than 70 elements. In one experiment, neuropsychologist Alexander Luria exposed Shereshevskii to a complex formula of 30 letters and numbers. After a single recall test, which Shereshevskii accomplished flawlessly, the researcher put the list in a safe-deposit box and waited 15 years. Luria then took out the list, found Shereshevskii, and asked him to repeat the formula. Without hesitation, he reproduced the list on the spot, again without error.

Shereshevskii’s memory of everything he encountered was so clear, so detailed, so unending, he lost the ability to organize it into meaningful patterns. Like living in a permanent snowstorm, he saw much of his life as blinding flakes of unrelated sensory information. He couldn’t see the “big picture,” meaning he couldn’t focus on the ways two things might be related, look for commonalities, and discover larger patterns. Poems, carrying their typical heavy load of metaphor and simile, were incomprehensible to him. Shereshevskii couldn’t forget, and it affected the way he functioned.

We have many types of forgetting, categories cleverly enumerated by researcher Dan Schacter in his book The Seven Sins of Memory. Tip-of-the-tongue lapses, absentmindedness, blocking, misattribution, biases, suggestibility—the list doesn’t sound good. But they all have one thing in common. They allow us to drop pieces of information in favor of others. In so doing, forgetting helped us to conquer the Earth.

More ideas

Thinking and talking a lot about information soon after we encounter it (elaborative rehearsal) helps commit it to memory. Allowing time between repetitions is better than cramming. Unfortunately, we can’t say exactly how much talking or exactly how much time produces the best result. You’ll have to experiment.

I have some ideas about how we could systematically apply the concept of repetition in schools and companies.

Teaching in cycles

The day of a typical high-school student is segmented into five or six 50-minute periods, consisting of unrelenting, unrepeated streams of information. Here’s my fantasy: In the school of the future, lessons are divided into 25-minute modules, cyclically repeated throughout the day. Subject A is taught for 25 minutes. Ninety minutes later, the 25-minute content of Subject A is repeated, and then a third time. All classes are segmented and interleaved in such a fashion.
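
As a concrete (and equally fantastical) illustration, here is a toy timetable generator in Python. The subject names, the 8:00 a.m. start, and the exact offsets are invented for the sketch; only the 25-minute modules and the roughly 90-minute pause between repetitions come from the fantasy described above.

```python
from datetime import datetime, timedelta

MODULE = timedelta(minutes=25)   # length of one lesson module
GAP = timedelta(minutes=90)      # rough pause before a module repeats

def module_times(first_start, repeats=3):
    """Start times for one subject's repeated 25-minute modules."""
    return [first_start + i * (MODULE + GAP) for i in range(repeats)]

# Hypothetical school day: three subjects, staggered so their
# repetitions interleave without overlapping.
school_open = datetime(2030, 9, 2, 8, 0)
for offset, subject in enumerate(["Subject A", "Subject B", "Subject C"]):
    starts = module_times(school_open + offset * MODULE)
    print(subject, [t.strftime("%H:%M") for t in starts])
# Subject A runs at 08:00, 09:55, and 11:50; B and C slot in between.
```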

Every third or fourth day would be reserved for quickly reviewing the facts delivered in the previous 72 to 96 hours. Students would inspect their notes, comparing them with what the teacher was saying in the review. This would result in a greater elaboration of the information and an opportunity to confirm facts. Because teachers wouldn’t be able to address as much information, the school year would extend into the summer. Homework would be unnecessary, because students would already be repeating content during the day.

As I said, it’s just a fantasy. Deliberately spaced repetitions have not been tested rigorously in the real world, so there are lots of questions. Do you really need three repetitions per subject per day to see a positive outcome? Do all subjects need such repetition? Would constant repetitions begin to interfere with one another as the day wore on? Do you even need the review sessions? We don’t know.

Repetition over many years

Beyond doing well on the year-end test, our education system doesn’t seem to care whether students actually remember what they learned. Given that system consolidation can take years, perhaps critical information should be repeated on a yearly or semiyearly basis.

In my fantasy class, this is exactly what happens. Take math. Repetitions begin with a review of multiplication tables, fractions, and decimals. Starting in the third grade, six-month and yearly review sessions occur through sixth grade. As students’ competency grows, the review content becomes more sophisticated. But the cycles are still in place. I can imagine enormous benefits for every academic subject, especially foreign languages.

For businesses, I would extend the bachelor’s degree into the workplace. You’ve probably heard that many corporations, especially in technical fields, are disappointed by the quality of the American undergraduates they hire. They have to spend money retraining many of their newest employees in basic skills that should have been covered in college.

I would turn your company into a learning and leadership factory, offering a full range of classes that would review every subject important to a new employee’s job. Research would establish the optimal spacing of the repetition. More experienced employees might even begin attending these refresher courses, inadvertently rubbing shoulders with younger generations. The old guard would be surprised by how much they have forgotten, and how much the experience aids their own job performance.

I wish I could tell you this all would work. Instead, all I can say is that memory is not fixed at the moment of learning, and repetition provides the fixative.

Brain Rule #7

Repeat to remember.

•  The brain has many types of memory systems. Declarative memory follows four steps of processing: encoding, storing, retrieving, and forgetting.

•  Information coming into your brain is immediately fragmented and sent to different regions of the cortex.

•  The more elaborately we encode a memory during its initial moments, the stronger it will be.

•  You can improve your chances of remembering something if you reproduce the environment in which you first put it into your brain.

•  Working memory is a collection of busy work spaces that allows us to temporarily retain newly acquired information. If we don’t repeat the information, it disappears.

•  Long-term memories are formed in a two-way conversation between the hippocampus and the cortex, until the hippocampus breaks the connection and the memory is fixed in the cortex—which can take years.

•  Our brains give us only an approximate view of reality, because they mix new knowledge with past memories and store them together as one.

•  The way to make long-term memory more reliable is to incorporate new information gradually and repeat it in timed intervals.