short-term memory
Rule #5
Repeat to remember.
IT IS THE ULTIMATE intellectual flattery to be born with a mind so amazing that brain scientists voluntarily devote their careers to studying it. This impressive feat occurred with the owners of two such minds in the past century, and their remarkable brains provide much insight into human memory.
The first mind belongs to Kim Peek. He was born in 1951 with not one hint of his future intellectual greatness. He had an enlarged head, no corpus callosum, and a damaged cerebellum. He could not walk until age 4, and he could get catastrophically upset when he didn’t understand something, which was often. Diagnosing him in childhood as mentally disabled, his doctors wanted to place him in a mental institution. That didn’t happen, mostly because of the nurturing efforts of Peek’s father, who recognized that his son also had some very special intellectual gifts. One of those gifts was memory; Peek had one of the most prodigious ever recorded. He could read two pages at the same time, one with each eye, comprehending and remembering perfectly everything contained in the pages. Forever.
Though publicity shy, Peek’s dad once granted writer Barry Morrow an interview with his son. It was conducted in a library, where Peek demonstrated to Morrow a familiarity with literally every book (and every author) in the building. He then started quoting ridiculous—and highly accurate—amounts of sports trivia. After a long discussion about the histories of certain United States wars (Revolutionary to Vietnam), Morrow felt he had enough. He decided right then and there to write a screenplay about this man. Which he did: the Oscar-winning film Rain Man. Peek died in 2009.
What was going on in the uneven brain of Kim Peek? Did his mind belong in a cognitive freak show, or was it only an extreme example of normal human learning? Something very important was occurring in the first few moments Peek’s brain was exposed to information, and it’s not so very different from what happens to the rest of us in the initial moments of learning.
The first few moments of learning give us the ability to remember something. The brain has different types of memory systems, many operating in a semi-autonomous fashion. We know so little about how they coordinate with each other that, to this date, memory is not considered a unitary phenomenon. We know the most about declarative memory, which involves something you can declare, such as “The sky is blue.” This type of memory involves four steps: encoding, storage, retrieval, and forgetting. This chapter is about the first step. In fact, it is about the first few seconds of the first step. They are crucial in determining whether something that is initially perceived will also be remembered. Along the way, we will talk about our second famous mind. This brain, belonging to a man the research community called H.M., was legendary not for its extraordinary capabilities but for its extraordinary inabilities. We will also talk about the difference between bicycles and Social Security numbers.
memory and mumbo jumbo
Memory has been the subject of poets and philosophers for centuries. At one level, memory is like an invading army, allowing past experiences to intrude continuously onto present life. That’s fortunate. Our brains do not come fully assembled at birth, which means that most of what we know about the world has to be either experienced by us firsthand or taught to us secondhand. Our robust memory can provide great survival advantages—it is in large part why we’ve succeeded in overpopulating the planet. For a creature as physically weak as humans (compare your fingernail with the claw of even a simple cat, and weep with envy), not allowing experience to shape our brains would have meant almost certain death in the rough-and-tumble world of the open savannah.
But memory is more than a Darwinian chess piece. Most researchers agree that its broad influence on our brains is what truly makes us consciously aware. The names and faces of our loved ones, our own personal tastes, and especially our awareness of those names and faces and tastes, are maintained through memory. We don’t go to sleep and then, upon awakening, have to spend a week relearning the entire world. Memory does this for us. Even the single most distinctive talent of human cognition, the ability to write and speak in a language, exists because of active remembering. Memory, it seems, makes us not only durable but also human.
Let’s look at how it works. When researchers want to measure memory, they usually end up measuring retrieval. That’s because in order to find out if somebody has committed something to memory, you have to ask if he or she can recall it. So, how do people recall things? Does the storage space carrying the record of some experience just sit there twiddling its thumbs in our brains, waiting for some command to trot out its contents? Can we investigate storage separately from retrieval? It has taken more than a hundred years of research just to get a glimmer of a definition of memory that makes sense to a scientist. The story began in the 19th century with a German researcher who performed the first real science-based inquiry into human memory. He did the whole thing with his own brain.
Hermann Ebbinghaus was born in 1850. As a young man, he looked like a cross between Santa Claus and John Lennon, with his bushy brown beard and round glasses. He is most famous for uncovering one of the most depressing facts in all of education: People usually forget 90 percent of what they learn in a class within 30 days. He further showed that the majority of this forgetting occurs within the first few hours after class. This has been robustly confirmed in modern times.
Ebbinghaus designed a series of experimental protocols with which a toddler might feel at ease: He made up lists of nonsense words, 2,300 of them. Each word consisted of three letters in a consonant-vowel-consonant construction, such as TAZ, LEF, REN, ZUG. He then spent the rest of his life trying to memorize lists of these words in varying combinations and of varying lengths.
With the tenacity of a Prussian infantryman (which, for a short time, he was), Ebbinghaus recorded for over 30 years his successes and failures. He uncovered many important things about human learning during this journey. He showed that memories have different life spans. Some memories hang around for only a few minutes, then vanish. Others persist for days or months, even for a lifetime. He also showed that one could increase the life span of a memory simply by repeating the information in timed intervals. The more repetition cycles a given memory experienced, the more likely it was to persist in his mind. We now know that the space between repetitions is the critical component for transforming temporary memories into more persistent forms. Spaced learning is greatly superior to massed learning.
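Ebbinghaus's findings are often summarized today as an exponential forgetting curve. The notation below is a later modeling convention, not Ebbinghaus's own formula, but it captures his two results compactly:

```latex
% R is retention, t is time since learning, S is the "strength" of the memory
R(t) = e^{-t/S}
```

Under this reading, the 90-percent-in-30-days classroom figure corresponds to R of roughly 0.1 at t = 30 days, and each well-spaced repetition effectively increases S, flattening the curve, which massed repetition fails to do.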
Ebbinghaus’s work was foundational. It was also incomplete. It did not, for example, separate the notion of memory from retrieval—the difference between learning something and recalling it later.
Go ahead and try to remember your Social Security number. Easy enough? Your retrieval commands might include things like visualizing the last time you saw the card, or remembering the last time you wrote down the number. Now try to remember how to ride a bike. Easy enough? Hardly. You do not call up a protocol list detailing where you put your foot, how to create the correct angle for your back, where your thumbs are supposed to be. The contrast proves an interesting point: One does not recall how to ride a bike in the same way one recalls nine numbers in a certain order. The ability to ride a bike seems quite independent from any conscious recollection of the skill. You were consciously aware when you were remembering your Social Security number, but not when riding a bike. Do you need to have conscious awareness in order to experience a memory? Or is there more than one type of memory?
The answer seemed clearer as more data came in. The answer to the first question was no, which answered the second question. There are at least two types of memories: memories that involve conscious awareness and memories that don’t. This awareness distinction gradually morphed into the idea that there were memories you could declare and there were memories you could not declare. Declarative memories are those that can be experienced in our conscious awareness, such as “this shirt is green,” “Jupiter is a planet,” or even a list of words. Nondeclarative memories are those that cannot be experienced in our conscious awareness, such as the motor skills necessary to ride a bike.
This does not explain everything about human memory. It does not even explain everything about declarative memory. But the rigor of Ebbinghaus gave future scientists their first real shot at mapping behavior onto a living brain. Then a 9-year-old boy was knocked off his bicycle, forever changing the way brain scientists thought about memory.
where memories go
In his accident, H.M. suffered a severe head injury that left him with epileptic seizures. These seizures got worse with age, eventually culminating in one major seizure and 10 blackout periods every seven days. By his late 20s, H.M. was essentially dysfunctional, potentially a great danger to himself, and in need of drastic medical intervention.
The desperate family turned to famed neurosurgeon William Scoville, who decided that the problem lay within the brain’s temporal lobe (the brain region roughly located behind your ears). Scoville excised the inner surface of this lobe on both sides of the brain. This experimental surgery greatly helped the epilepsy. It also left H.M. with a catastrophic memory loss. Since the day the surgery was completed, in 1953, H.M. has been unable to convert a new short-term memory into a long-term memory. He can meet you once and then an hour or two later meet you again, with absolutely no recall of the first visit.
He has lost the conversion ability Ebbinghaus so clearly described in his research more than 50 years before.
Even more dramatically, H.M. can no longer recognize his own face in the mirror. Why? As his face aged, some of his physical features changed. But, unlike the rest of us, H.M. cannot take this new information and convert it into a long-term form. This leaves him more or less permanently locked into a single idea about his appearance. When he looks in the mirror and does not see this single idea, he cannot identify to whom the image actually belongs.
As horrible as that is for H.M., it is of enormous value to the research community. Because researchers knew precisely what was taken from the brain, it was easy to map which brain regions controlled the Ebbinghaus behaviors. A great deal of credit for this work belongs to Brenda Milner, a psychologist who spent more than 40 years studying H.M. and laid the groundwork for much of our understanding about the nerves behind memory. Let’s review for a moment the biology of the brain.
You recall the cortex—that wafer-thin layer of neural tissue that’s about the size of a baby blanket when unfurled. It is composed of six discrete layers of cells. It’s a busy place. Those cells process signals originating from many parts of the body, including those lassoed by your sense organs. They also help create stable memories, and that’s where H.M.’s unfortunate experience becomes so valuable. Some of H.M.’s cortex was left perfectly intact; other regions, such as his temporal lobe, sustained heavy damage. It was a gruesome but ideal opportunity for studying how human memory forms.
This baby blanket doesn’t just lie atop the brain, of course. As if the blanket were capable of growing complex, sticky root systems, the cortex adheres to the deeper structures of the brain by a hopelessly incomprehensible thicket of neural connections. One of the most important destinations of these connections is the hippocampus, which is parked near the center of your brain, one in each hemisphere. The hippocampus is specifically involved in converting short-term information into longer-term forms. As you might suspect, it is the very region H.M. lost during his surgery.
The anatomical relationship between the hippocampus and the cortex has helped 21st-century scientists further define the two types of memory. Declarative memory is any conscious memory system that is altered when the hippocampus and various surrounding regions become damaged. Non-declarative memory is defined as those unconscious memory systems that are NOT altered (or at least not greatly altered) when the hippocampus and surrounding regions are damaged. We’re going to focus on declarative memory, a vital part of our everyday activities.
sliced and diced
Research shows that the life cycle of declarative memory can be divided into four sequential steps: encoding, storing, retrieving, and forgetting.
Encoding describes what happens at the initial moment of learning, that fleeting golden instant when the brain first encounters a new piece of declarative information. It also involves a whopping fallacy, one in which your brain is an active co-conspirator. Here’s an example of this subversion, coming once again from the clinical observations of neurologist Oliver Sacks.
The case involves a low-functioning autistic boy named Tom, who has become quite famous for being able to “do” music (though little else). Tom never received formal instruction in music of any kind, but he learned to play the piano simply by listening to other people. Astonishingly, he could play complex pieces of music with the skill and artistry of accomplished professionals, on his first try after hearing the music exactly once. In fact, he has been observed playing the song “Fisher’s Hornpipe” with his left hand while simultaneously playing “Yankee Doodle Dandy” with his right hand while simultaneously singing “Dixie”! He also can play the piano backwards, that is, with his back to the keyboard and his hands inverted. Not bad for a boy who cannot even tie his own shoes.
When we hear about people like this, we are usually jealous. Tom absorbs music as if he could switch some neural recording device in his head to the “on” position. We think we also have this video recorder, only our model is not nearly as good. This is a common impression. Most people believe that the brain is a lot like a recording device—that learning is something akin to pushing the “record” button (and remembering is simply pushing “playback”). Wrong. In the real world of the brain—Tom’s or yours—nothing could be further from the truth. The moment of learning, of encoding, is so mysterious and complex that we have no metaphor to describe what happens to our brains in those first fleeting seconds.
The little we do know suggests it is like a blender left running with the lid off. The information is literally sliced into discrete pieces as it enters the brain and splattered all over the insides of our mind. Stated formally, signals from different sensory sources are registered in separate brain areas. The information is fragmented and redistributed the instant it is encountered. If you look at a complex picture, for example, your brain immediately extracts the diagonal lines from the vertical lines and stores them in separate areas. Same with color. If the picture is moving, the fact of its motion will be extracted and stored in a place separate from where it would be stored if the picture were static.
This separation is so violent, and so pervasive, it even shows up when we perceive exclusively human-made information, such as parts of a language. One woman suffered a stroke in a specific region of her brain and lost the ability to use written vowels. You could ask her to write down a simple sentence, such as “Your dog chased the cat,” and it would look like this:
Y__r d_g ch_s_d th_ c_t

There would be a place for every letter, but the vowel spots were left blank! So we know that vowels and consonants are not stored in the same place. Her stroke damaged some kind of connecting wiring. That is exactly the opposite of the strategy a video recorder uses to record things. If you look closely, however, the blender effect goes much deeper. Even though she lost the ability to fill in the vowels of a given word, she perfectly preserved the place where each vowel should go. Using the same logic, it appears that the place where a vowel should go is stored in a separate area from the vowel itself: Content is stored separately from its context/container.
Hard to believe, isn’t it? The world appears to you as a unified whole. If the interior brain function tells us that it is not, how then do we keep track of everything? How do features that are registered separately, including the vowels and consonants in this sentence, become reunited to produce perceptions of continuity? It is a question that has bothered researchers for years and has been given its own special name. It is called the “binding problem,” from the idea that certain thoughts are bound together in the brain to provide continuity. We have no idea how the brain routinely and effortlessly gives us this illusion of stability.
Not that there aren’t hints. Close inspection of the initial moments of learning, the encoding stage, has supplied insights into not only the binding problem, but human learning of any kind. It is to these hints that we now turn.
automatic or stick shift?
To encode information means to convert data into, well, a code. Creating codes always involves translating information from one form into another, usually for transmission purposes, often to keep something secret. From a physiological point of view, encoding is the conversion of external sources of energy into electrical patterns the brain can understand. From a purely psychological point of view, it is the manner in which we apprehend, pay attention to, and ultimately organize information for storage purposes. Encoding, from both perspectives, prepares information for further processing. It is one of the many intellectual processes the Rain Man, Kim Peek, was so darn good at.
The brain is capable of performing several types of encoding. One type of encoding is automatic, which can be illustrated by talking about what you had for dinner last night, or The Beatles. The two came together for me on the evening of an amazing Paul McCartney concert I attended a few years ago. If you were to ask me what I had for dinner before the concert and what happened on stage, I could tell you about both events in great detail. Though the actual memory is very complex (composed of spatial locations, sequences of events, sights, smells, tastes, etc.), I did not have to write down some exhaustive list of its varied experiences, then try to remember the list in detail just in case you asked me about my evening. This is because my brain deployed a certain type of encoding scientists call automatic processing. It is the kind occurring with glorious unintentionality, requiring minimal attentional effort. It is very easy to recall data that have been encoded via this process. The memories seem bound all together into a cohesive, readily retrievable form.
Automatic processing has an evil twin that isn’t nearly so accommodating, however. As soon as the Paul McCartney tickets went on sale, I dashed to the purchasing website, which required my password for entrance. And I couldn’t remember my password! Finally, I found the right one and snagged some good seats. But trying to commit these passwords to memory is quite a chore, and I have a dozen or so passwords written on countless lists, scattered throughout my house. This kind of encoding, initiated deliberately, requiring conscious, energy-burning attention, is called effortful processing. The information does not seem bound together well at all, and it requires a lot of repetition before it can be retrieved with the ease of automatic processing.
encoding test
There are still other types of encoding, three of which can be illustrated by taking the quick test below. Examine the capitalized word beside the number, then answer the question below it.
1) FOOTBALL
Does this word fit into the sentence “I turned around to fight _________”?
2) LEVEL
Does this word rhyme with evil?
3) MINIMUM
Are there any circles in these letters?
Answering each question requires very different intellectual skills, which researchers now know underlie different types of encoding. The first question illustrates what is called semantic encoding. Answering it properly means paying attention to the definitions of words. The second question illustrates a process called phonemic encoding, involving a comparison between the sounds of words. The third is called structural encoding. It is the most superficial type, and it simply asks for a visual inspection of shapes. The type of encoding you perform on a given piece of information as it enters your head has a great deal to do with your ability to remember the information at a later date.
the electric slide
Encoding also involves transforming any outside stimulus into the electrical language of the brain, a form of energy transfer. All types of encoding initially follow the same pathway, and generally the same rules. For example, the night of Sir Paul’s concert, I stayed with a friend who owned a beautiful lake cabin inhabited by a very large and hairy dog. Late the next morning, I decided to go out and play fetch with this friendly animal. I made the mistake of throwing the stick into the lake and, not owning a dog in those days, had no idea what was about to happen to me when the dog emerged.
Like some friendly sea monster from Disney, the dog leapt from the water, ran at me full speed, suddenly stopped, then started to shake violently. With no real sense that I should have moved, I got sopping wet.
What was occurring in my brain in those moments? As you know, the cortex is quickly consulted when a piece of external information invades our brains—in this case, a slobbery, soaking wet Labrador. I see the dog coming out of the lake, which really means I see patterns of photons bouncing off the Labrador. The instant those photons hit the back of my eyes, my brain converts them into patterns of electrical activity and routes the signals to the back of my head (the visual cortex in the occipital lobe). Now my brain can see the dog. In the initial moments of this learning, I have transformed the energy of light into an electrical language the brain fully understands. Beholding this action required the coordinated activation of thousands of cortical regions dedicated to visual processing.
The same is also true of other energy sources. My ears pick up the sound waves of the dog’s loud bark, and I convert them into the same brain-friendly electrical language into which the photon patterns were converted. These electrical signals will also be routed to the cortex, but to the auditory cortex instead of the visual cortex. From a nerve’s perspective, those two centers are a million miles away from each other. This conversion and this vastly individual routing are true of all the energy sources coming into my brain, from the feel of the sun on my skin to the instant I unexpectedly and unhappily got soaked by the dog shaking off lake water. Encoding involves all of our senses, and their processing centers are scattered throughout the brain.
This is the heart of the blender. In one 10-second encounter with an overly friendly dog, my brain recruited hundreds of different brain regions and coordinated the electrical activity of millions of neurons. My brain was recording a single episode, and doing so over vast neural distances, all in about the time it takes to blink your eyes.
Years have passed since I saw Sir Paul and got drenched by the dog. How do we keep track of it all? And how do we manage to manage these individual pieces for years? This binding problem, a phenomenon that keeps tabs on far-flung pieces of information, is a great question with, unfortunately, a lousy answer. We really don’t know how the brain keeps track of things. We have given a name to the total number of changes in the brain that first encode information (where we have a record of that information). We call it an engram. But we might as well call them donkeys for all we understand about them.
The only insight we have into the binding problem comes from studying the encoding abilities of a person suffering from Balint’s Syndrome. This disorder occurs in people who have damaged both sides of their parietal cortex. The hallmark of people with Balint’s Syndrome is that they are functionally blind. Well, sort of. They can see objects in their visual field, but only one at a time (a symptom called simultanagnosia). Funny thing is, if you ask them where the single object is, they respond with a blank stare. Even though they can see it, they cannot tell you where it is. Nor can they tell you if the object is moving toward them or away from them. They have no external spatial frame of reference upon which to place the objects they see, no way to bind the image to other features of the input. They’ve lost explicit spatial awareness, a trait needed in any type of binding exercise. That’s about as close as anyone has ever come to describing the binding problem at the neurological level. This tells us very little about how the brain solves the problem, of course. It only tells us about some of the areas involved in the process.
cracking the code
For all their wide reach, encoding processes share common characteristics, scientists have found. Three of these hold true promise for real-world applications in both business and education.
1) The more elaborately we encode information at the moment of learning, the stronger the memory.
When encoding is elaborate and deep, the memory that forms is much more robust than when encoding is partial and cursory. This can be demonstrated in an experiment you can do right now with any two groups of friends. Have them gaze at the list of words below for a few minutes.
Tractor
Green
Apple
Zero
Weather
Pastel
Quickly
Ocean
Nicely
Countertop
Airplane
Jump
Laugh
Tall
Tell Group #1 to determine the number of letters that have diagonal lines in them and the number that do not. Tell Group #2 to think about the meaning of each word and rate, on a scale of 1 to 10, how much they like or dislike the word. Take the list away, let a few minutes pass, and then ask each group to write down as many words as possible. The dramatic results you get have been replicated in laboratories around the world. The group that processes the meaning of the words always remembers two to three times as many words as the group that looked only at the architecture of the individual letters. We did a form of this experiment when we discussed levels of encoding and I asked you about the number of circles in the word … remember what it was? You can do a similar experiment using pictures. You can even do it with music. No matter the sensory input, the results are always the same.
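If you want to run this comparison with more than a handful of friends, the scoring is easy to automate. The sketch below is illustrative only: the word list and the typical two-to-threefold gap come from the text, but the function name and the sample answers are ours.

```python
# Illustrative scorer for the shallow- vs. deep-encoding recall experiment.
# The studied list is the one printed above; everything else is a sketch.

WORDS = {"tractor", "green", "apple", "zero", "weather", "pastel", "quickly",
         "ocean", "nicely", "countertop", "airplane", "jump", "laugh", "tall"}

def score_recall(written_answers):
    """Count the distinct studied words a participant correctly wrote down."""
    cleaned = {w.strip().lower() for w in written_answers}
    return len(cleaned & WORDS)

# Hypothetical results: the letter-shape group typically recalls two to
# three times fewer words than the meaning-rating group.
shallow_group = score_recall(["Apple", "green ", "jump", "banana"])
deep_group = score_recall(["tractor", "apple", "ocean", "green", "laugh",
                           "weather", "jump", "pastel", "tall"])
print(shallow_group, deep_group)  # → 3 9 ("banana" was never studied)
```

The scorer deliberately ignores case, stray whitespace, and duplicates, so it measures only what the experiment cares about: how many studied words came back out.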
At this point, you might be saying to yourself, “Well, duh!” Isn’t it obvious that the more meaning something has, the more memorable it becomes? Most researchers would answer, “Well, yeah!” The very naturalness of the tendency proves the point. Hunting for diagonal lines in the word “apple” is not nearly as elaborate as remembering wonderful Aunt Mabel’s apple pie, then rating the pie, and thus the word, a “10.” We remember things much better the more elaborately we encode what we encounter, especially if we can personalize it. The trick for business professionals, and for educators, is to present bodies of information so compelling that the audience does this on their own, spontaneously engaging in deep and elaborate encoding.
It’s a bit weird if you think about it. Making something more elaborate usually means making it more complicated, which should be more taxing to a memory system. But it’s a fact: More complexity means greater learning.
2) A memory trace appears to be stored in the same parts of the brain that perceived and processed the initial input.
This idea is so counterintuitive that it may take an urban legend to explain it. At least, I think it’s an urban legend, coming from the mouth of the keynote speaker at a university administrators’ luncheon I once attended. He told the story of the wiliest college president he ever encountered. The institution had completely redone its grounds in the summer, resplendent with fountains and beautifully manicured lawns. All that was needed was to install the sidewalks and walkways where the students could access the buildings. But there was no design for these paths. The construction workers were eager to install them and wanted to know what the design would be, but the wily president refused to give any. He frowned. “These asphalt paths will be permanent. Install them next year, please. I will give you the plans then.” Disgruntled but compliant, the construction workers waited.
The school year began, and the students were forced to walk on the grass to get to their classes. Very soon, defined trails started appearing all over campus, as well as large islands of beautiful green lawn. By the end of the year, the buildings were connected by paths in a surprisingly efficient manner. “Now,” said the president to the contractors who had waited all year, “you can install the permanent sidewalks and pathways. But you need no design. Simply fill in all the paths you see before you!” The initial design, created by the initial input, also became the permanent path.
The brain has a storage strategy remarkably similar to the wily president’s plan. The neural pathways initially recruited to process new information end up becoming the permanent pathways the brain reuses to store the information. New information penetrating into the brain can be likened to the students initially creating the dirt paths across a pristine lawn. The final storage area can be likened to the time those pathways were permanently filled with asphalt. They are the same pathways, and that’s the point.
What does this mean for the brain? The neurons in the cortex are active responders in any learning event, and they are deeply involved in permanent memory storage. This means the brain has no central happy hunting ground where memories go to be infinitely retrieved. Instead, memories are distributed all over the surface of the cortex. This may at first seem hard to grasp. Many people would like the brain to act like a computer, complete with input detectors (like a keyboard) connected to a central storage device. Yet the data suggest that the human brain has no hard drive separate from its initial input detectors. That does not mean memory storage is spread evenly across the brain’s neural landscape. Many brain regions are involved in representing even single inputs, and each region contributes something different to the entire memory. Storage is a cooperative event.
3) Retrieval may best be improved by replicating the conditions surrounding the initial encoding.
In one of the most unusual experiments performed in cognitive psychology, the brain function of people standing around on dry ground in wet suits was compared with the brain function of people floating in about 10 feet of water, also in wet suits. Both groups of deep-sea divers listened to somebody speak 40 random words. The divers were then tested for their ability to recall the list of words. The group that heard the words while in the water got a 15 percent better score if they were asked to recall the words while back in those same 10 feet than if they were on the beach. The group that heard the words on the beach got a 15 percent better score if they were asked to recall the words while suited on the beach than if in 10 feet of water. It appeared that memory worked best if the environmental conditions at retrieval mimicked the environmental conditions at encoding. Is it possible that the second characteristic, which tries to store events using the same neurons recruited initially to encode events, is in operation in this third characteristic?
The tendency is so robust that memory is even improved under conditions where learning of any kind should be crippled. These experiments have been done with marijuana and even laughing gas (nitrous oxide). The third characteristic even responds to mood: Learn something while you are sad and you will be able to recall it better if, at retrieval, you are somehow suddenly made sad again. The phenomenon is called context-dependent or state-dependent learning.
Ideas
We know that information is remembered best when it is elaborate, meaningful, and contextual. The quality of the encoding stage—those earliest moments of learning—is one of the single greatest predictors of later learning success. What can we do to take advantage of that in the real world?
First, we can take a lesson from a shoe store I used to visit as a little boy. This shoe store had a door with three handles at different heights: one near the very top, one near the very bottom, and one in the middle. The logic was simple: The more handles on the door, the more access points were available for entrance, regardless of the strength or age of the customer. It was a relief for a 5-year-old, a door I could actually reach! I was so intrigued with the door that I used to dream about it. In my dreams, however, there were hundreds of handles, all capable of opening the door to this shoe store.
“Quality of encoding” really means the number of door handles one can put on the entrance to a piece of information. The more handles one creates at the moment of learning, the more likely the information is to be accessed at a later date. The handles we can add revolve around content, timing, and environment.
Real-world examples
The more a learner focuses on the meaning of the presented information, the more elaborately the encoding is processed. This principle is so obvious that it is easy to miss. What it means is this: When you are trying to drive a piece of information into your brain’s memory systems, make sure you understand exactly what that information means. If you are trying to drive information into someone else’s brain, make sure they know what it means.
The directive has a negative corollary. If you don’t know what the learning means, don’t try to memorize the information by rote and pray the meaning will somehow reveal itself. And don’t expect your students will do this either, especially if you have done an inadequate job of explaining things. That would be like counting the number of diagonal lines in each word and trying to use that as a strategy to remember the words.
How does one communicate meaning in such a fashion that learning is improved? A simple trick involves the liberal use of relevant real-world examples embedded in the information, constantly peppering main learning points with meaningful experiences. This can be done by the learner studying after class or, better, by the teacher during the actual learning experience. This has been shown to work in numerous studies.
In one experiment, groups of students read a 32-paragraph paper about a fictitious foreign country. The introductory paragraphs in the paper were highly structured. They contained either no examples, one example, or two or three consecutive examples of the main theme that followed. The results were clear: The greater the number of examples in the paragraph, the more likely the information was to be remembered. It’s best to use real-world situations familiar to the learner. Remember wonderful Aunt Mabel’s apple pie? This wasn’t an abstract food cooked by a stranger; it was real food cooked by a loving relative. The more personal an example, the more richly it becomes encoded and the more readily it is remembered.
Why do examples work? They appear to take advantage of the brain’s natural predilection for pattern matching. Information is more readily processed if it can be immediately associated with information already present in the learner’s brain. We compare the two inputs, looking for similarities and differences as we encode the new information. Providing examples is the cognitive equivalent of adding more handles to the door. Examples make the information more elaborate, more complex, better encoded, and therefore better learned.
Compelling introductions
Introductions are everything. As an undergraduate, I had a professor who can charitably be described as a lunatic. He taught a class on the history of cinema, and one day he decided to illustrate for us how art films traditionally depict emotional vulnerability. As he went through the lecture, he literally began taking off his clothes. He first took off his sweater and then, one button at a time, began removing his shirt, down to his T-shirt. He unzipped his trousers, and they fell around his feet, revealing, thank goodness, gym clothes. His eyes were shining as he exclaimed, “You will probably never forget now that some films use physical nudity to express emotional vulnerability. What could be more vulnerable than being naked?” We were thankful that he gave us no further details of his example.
I will never forget the introduction to this unit in my film class, though I hardly recommend imitating his example on a regular basis. But its memorability illustrates the timing principle: If you are a student, whether in business or education, the events that happen the first time you are exposed to a given information stream play a disproportionately large role in your ability to accurately retrieve it at a later date. If you are trying to get information across to someone, your ability to create a compelling introduction may be the single most important factor in the later success of your mission.
Why this emphasis on the initial moments? Because the memory of an event is stored in the same places that were initially recruited to perceive the learning event. The more brain structures recruited, the more door handles created, at the moment of learning, the easier it is to gain access to the information later.
Other professions have stumbled onto this notion. Budding directors are told by their film instructors that the audience needs to be hooked in the first 3 minutes after the opening credits to make the film compelling (and financially successful). Public speaking professionals say that you win or lose the battle to hold your audience in the first 30 seconds of a given presentation.
What does that mean for business professionals attempting to create a compelling presentation? Or educators attempting to introduce a complex new topic? Given the importance of the findings to the success of these professions, you might expect that some rigorous scientific literature exists on this topic. Surprisingly, very little data exist about how brains pay attention to issues in real-world settings, as we discussed in the Attention chapter. The data that do exist suggest that film instructors and public speakers are on to something.
Familiar settings
We know the importance of learning and retrieval taking place under the same conditions, but we don’t have a solid definition of “same conditions.” There are many ways to explore this idea.
I once gave a group of teachers advice about how to counsel parents who wanted to teach both English and Spanish at home. One dissatisfying finding is that for many kids with this double exposure, language acquisition rates for both go down, sometimes considerably. I recounted the data about the underwater experiments and then suggested that the families create a “Spanish Room.” This would be a room with a rule: Only the Spanish language could be spoken in it. The room could be filled with Hispanic artifacts, with large pictures of Spanish words. All Spanish would be taught there, and no English. Anecdotally, the parents have told me that it works.
This way, the encoding environments and retrieving environments could be equivalent. At the moment of learning, many environmental features—even ones irrelevant to the learning goals—may become encoded into the memory right along with the goals. Environment makes the encoding more elaborate, the equivalent of putting more handles on the door. When these same environmental cues are encountered, they may lead directly to the learning goals simply because they were embedded in the original trace.
American marketing professionals have known about this phenomenon for years. What if I wrote the words “wind-up pink bunny,” “pounding drum,” and “going-and-going,” then told you to write another word or phrase congruent with those previous three? No formal relationship exists between any of these words, yet if you lived in the United States for a long period of time, most of you probably would write words such as “battery” or “Energizer.” Enough said.
What does it mean to make encoding and retrieving environments equivalent in the real world of business and education? The most robust findings occur when the environments exist in dramatically different contexts from the norm (underwater vs. on a beach is about as dramatic as it gets). But how different from normal life does the setup need to be to obtain the effect?
It could be as simple as making sure that an oral examination is studied for orally, rather than by reviewing written material. Or perhaps future airplane mechanics should be taught about engine repair in the actual shop where the repairs will occur.
Summary
Rule #5
Repeat to remember.
• The brain has many types of memory systems. One type follows four stages of processing: encoding, storing, retrieving, and forgetting.
• Information coming into your brain is immediately split into fragments that are sent to different regions of the cortex for storage.
• Most of the events that predict whether something learned will also be remembered occur in the first few seconds of learning. The more elaborately we encode a memory during its initial moments, the stronger it will be.
• You can improve your chances of remembering something if you reproduce the environment in which you first put it into your brain.