Did you write down rest? Night? Aardvark? Sleep?
If you’re like most people, you remembered a few of the words. Eighty-five percent of people write down rest. Rest is the first word you saw, and this is consistent with the primacy effect of memory: We tend to remember best the first entry on a list. Seventy percent of people remember the word night. It was the last word you saw, and is consistent with the recency effect: We tend to remember the most recent items we encountered on a list, but not as well as the first item. For lists of items, scientists have documented a serial position curve, a graph showing how likely it is that an item will be remembered as a function of its position in a list.
You almost certainly didn’t write down aardvark, because it wasn’t on the list—researchers typically throw in test questions like that to make sure their subjects are paying attention. About 60% of the people tested write down sleep. But if you go back and look now, you’ll see that sleep wasn’t on the list! You’ve just had a false memory, and if you’re like most people, you were confident when you wrote down sleep that you had seen it. How did this happen?
It’s due to the associational networks described in the Introduction—the idea that if you think of red, it might activate other memories (or conceptual nodes) through a process called spreading activation. The same principle is at work here; by presenting a number of words that are related to the idea of sleep, the word sleep became activated in your brain. In effect, this is a false memory, a memory you have for something that didn’t actually happen. The implications of this are far-reaching. Skillful attorneys can use this, and principles like it, to their clients’ advantage by implanting ideas and memories in the minds of witnesses, juries, and even judges.
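To make the mechanism concrete, here is a minimal sketch of one step of spreading activation over a toy word-association network. The words, edge weights, and decay factor are invented for illustration; they are not taken from the actual experiment.

```python
# A toy word-association network. Edge weights and the decay factor are
# illustrative assumptions, not empirical values.
network = {
    "rest":  {"sleep": 0.8, "night": 0.6},
    "night": {"sleep": 0.7, "rest": 0.4},
    "bed":   {"sleep": 0.9},
    "dream": {"sleep": 0.9},
}

def spread_activation(presented_words, decay=0.5):
    """Activate each presented word fully, then pass a decayed share of
    that activation along to its associates (one step of spreading)."""
    activation = {}
    for word in presented_words:
        activation[word] = activation.get(word, 0.0) + 1.0
        for neighbor, weight in network.get(word, {}).items():
            activation[neighbor] = activation.get(neighbor, 0.0) + decay * weight
    return sorted(activation.items(), key=lambda kv: kv[1], reverse=True)

# "sleep" is never presented, yet it ends up the most activated node --
# the seed of a false memory.
print(spread_activation(["rest", "night", "bed", "dream"]))
```

Even in this crude model, the unpresented word outscores the words actually shown, because it sits at the hub of all the associations.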
Changing a single word in a question can cause witnesses to falsely remember seeing broken glass in a video. Psychologist Elizabeth Loftus showed videos of a minor car accident to participants in an experiment. Later, she asked half of them, “How fast were the cars going when they hit each other?” and she asked the other half, “How fast were the cars going when they smashed into each other?” There were dramatically different estimates of speed, depending on that one word (smashed versus hit). She then brought the participants back one week later and asked, “Was there any broken glass at the scene?” (There was no broken glass in the video.) People were more than twice as likely to respond yes to the question if they had been asked, a week earlier, about the cars’ speed with the word smashed in the question.
To make matters worse, the act of recalling a memory thrusts it into a labile state whereby new distortions can be introduced; then, when the memory is put back or re-stored, the incorrect information is grafted to it as though it were there all along. For example, if you recall a happy memory while you’re feeling blue, your mood at the time of retrieval can color the memory to the point that when you re-store it in your memory banks, the event gets recoded as slightly sad. Psychiatrist Bruce Perry of the Feinberg School of Medicine sums it up: “We know today that, just like when you open a Microsoft Word file on your computer, when you retrieve a memory from where it is stored in the brain, you automatically open it to ‘edit.’ You may not be aware that your current mood and environment can influence the emotional tone of your recall, your interpretation of events, and even your beliefs about which events actually took place. But when you ‘save’ the memory again and place it back into storage, you can inadvertently modify it. . . . [This] can bias how and what you recall the next time you pull up that ‘file.’” Over time, incremental changes can even lead to the creation of memories of events that never took place.
Apart from the fact that memories can be so easily distorted and overwritten—a troublesome affair—the brain organizes past events in an ingenious fashion, with multiple access points and multiple ways to cue any given memory. And if the more audacious theorists are right, everything you’ve experienced is “in there” somewhere, waiting to be accessed. Then why don’t we become overwhelmed by memory? Why is it that when you think of hash browns, your brain doesn’t automatically deliver up every single time you’ve ever had hash browns? It’s because the brain organizes similar memories into categorical bundles.
Eleanor Rosch has shown that the act of categorizing is one of cognitive economy. We treat things as being of a kind so that we don’t have to waste valuable neural processing cycles on details that are irrelevant for our purposes. When looking out at the beach, we don’t typically notice individual grains of sand; we see a collective, and one grain of sand becomes grouped with all the others. It doesn’t mean that we’re incapable of discerning differences among the individual grains, only that for most practical purposes our brains automatically group like objects together. Similarly, we see a bowl of peas as containing aggregated food, as peas. As I wrote earlier, we regard the peas as interchangeable for practical purposes—they are functionally equivalent because they serve the same purpose.
Part of cognitive economy is that we aren’t flooded with all the possible terms we could use to refer to objects in the world—there exists a natural, typical term that we use most often. This is the term that is appropriate in most situations. We say that noise coming from around the corner is a car, not a 1970 Pontiac GTO. We refer to that bird that made a nest in the mailbox, not that rufous-sided towhee. Rosch called this the basic-level category. The basic level is the first term that babies and children learn, and the first one we typically learn in a new language. There are exceptions of course. If you walk into a furniture store, you might ask the greeter where the chairs are. But if you walk into a store called Just Chairs and ask the same question, it sounds odd; in this context, you’d burrow down to a subordinate level from the basic level and ask where the office chairs are, or where the dining room chairs are.
As we specialize or gain expert knowledge, we tend to drop down to the subordinate level in our everyday conversation. A sales agent at Just Chairs won’t call the stockroom and ask if they have any accent chairs; he’ll ask for the mahogany Queen Anne replica with the yellow tufted back. A bird-watcher will text other bird-watchers that there’s a rufous-sided towhee making a nest in the mailbox. Our knowledge thus guides our formation of categories and the structure they take in the brain.
Cognitive economy dictates that we categorize things in such a way as not to be overwhelmed by details that, for most purposes, don’t matter. Obviously, there are certain things on which you want detailed information right now, but you never want all the details all the time. If you’re trying to sort through the black beans to pull out the hard, undercooked ones, you see them for the moment as individuals, not functionally equivalent. The ability to go back and forth between these modes of focus, to change lenses from the collective to the individual, is a feature of the mammalian attentional system, and highlights the hierarchical nature of the central executive. Although researchers tend to treat the central executive as a unitary entity, in fact it can be best understood as a collection of different lenses that allow us to zoom in and zoom out during activities we’re engaged in, to focus on what is most relevant at the moment. A painter needs to see the individual brushstroke or point she is painting but be able to cycle back and forth between that laserlike focus and the painting-as-a-whole. Composers work at the level of individual pitches and rhythms, but need to apprehend the larger musical phrase and the entire piece in order to ensure that everything fits together. A cabinetmaker working on a particular section of the door is still mindful of the cabinet-as-a-whole. In all these cases and many more—an entrepreneur launching a company, an aircraft pilot planning a landing—the person performing the work holds an image or ideal in mind, and attempts to get it manifested in the real world so that the appearance of the thing matches the mental image.
The distinction between appearance and a mental image of how things really are traces back to classical Greek philosophy: Plato and Aristotle both spoke of the difference between how something appears and how it really and truly is. A cabinetmaker can use a veneer to make plywood appear to be solid mahogany. The cognitive psychologist Roger Shepard, who was my teacher and mentor (and who drew the monster illusion in Chapter 1), pushed this further in his theory that adaptive behavior depends on an organism being able to make three appearance-reality distinctions.
First, some objects, though different in presentation, are inherently identical. That is, different views of the same object can cast very different retinal images yet ultimately refer to the same object. This is an act of categorization—the brain has to integrate different views of an object into a coherent, unified representation, binding them into a single category.
We do this all the time when we’re interacting with other people—their faces appear to us in profile, straight on, and at angles, and the emotions their faces convey project very different retinal images. The Russian psychologist A. R. Luria reported on a famous patient who, on account of a brain lesion, could not synthesize these disparate views and had a terrible time recognizing faces.
Second, objects that are similar in presentation are inherently different. For example, in a scene of horses grazing in a meadow, each horse may look highly similar to others, even identical in terms of its retinal image, but evolutionarily adaptive behavior requires that we understand each one is an individual. This principle doesn’t involve categorization; in fact, it requires a kind of unbundling of categorization, a recognition that although these objects may be functionally and practically equivalent, there are situations in which it behooves us to understand that they are distinct entities (e.g., if only one approaches you at a rapid trot, there is probably much less danger than if the entire herd comes toward you).
Third, objects, although different in presentation, may be of the same natural kind. If you saw one of the following crawling on your leg or in your food,
it wouldn’t matter to you that they might have very different evolutionary histories, mating habits, or DNA. They may not share a common evolutionary ancestor within a million years. All you care about is that they belong to the category of “things I do not want crawling on me or in my food.”
Adaptive behavior, therefore, according to Shepard, depends on cognitive economy, treating objects as equivalent when indeed they are. To categorize an object means to consider it equivalent to other things in that category, and different—along some salient dimension—from things that are not.
The information we receive from our senses, from the world, typically has structure and order, and is not arbitrary. Living things—animals and plants—typically exhibit correlational structure. For example, we can perceive attributes of animals, such as wings, fur, beaks, feathers, fins, gills, and lips. But these do not occur at random. Wings typically are covered in feathers rather than fur. This is an empirical fact provided by the world. In other words, combinations do not occur uniformly or randomly, and some pairs are more probable than others.
Where do categories fit into all of this? Categories often reflect these co-occurrences: The category bird implies that wings and feathers will be present on the animal (although there are counterexamples, such as the wingless kiwi of New Zealand and certain now-extinct featherless birds).
We all have an intuitive sense of what constitutes a category member and how well it fits the category, even from a young age. We use linguistic hedges to indicate the unusual members of the category. If you’re asked, “Is a penguin a bird?” it would be correct to respond yes, but many of us would respond using a hedge, something like “a penguin is technically a bird.” If we wanted to elaborate, we might say, “They don’t fly, they swim.” But we wouldn’t say, “A sparrow is technically a bird.” It is not just technically a bird, it is a bird par excellence, among the very best examples of birds in North America, due to several factors, including its ubiquity, familiarity, and the fact that it has the largest number of attributes in common with other members of the category: It flies, it sings, it has wings and feathers, it lays eggs, it makes a nest, it eats insects, it comes to the bird feeder, and so forth.
This instant sense of what constitutes a “good” member of a category is reflected in daily conversation by our ability to substitute a category member for the name of the category in a well-formed sentence when that member is well chosen, reflecting the internal structure of the category. Take the following sentence:
Twenty or so birds often perch on the telephone wire outside my window and tweet in the morning.
I can take out the word birds and substitute robins, sparrows, finches, or starlings with no loss of correctness. But if I substitute penguins, ostriches, or turkeys, it sounds absurd.
The schoolboy took the piece of fruit out of his lunch box and took several bites before eating his sandwich.
We can substitute apple, banana, or orange without loss of correctness, but we cannot substitute cucumber or pumpkin without the sentence seeming odd. The point is that when we use preexisting categories, or create new ones, there are often clear exemplars of objects that obviously belong to or are central to the category, and other cases that don’t fit as well. This ability to recognize diversity and organize it into categories is a biological reality that is absolutely essential to the organized human mind.
How are categories formed in our brains? Generally, there are three ways. First, we categorize objects based on either gross or fine appearance. Gross appearance puts all pencils together in the same bin. Fine appearance may separate soft-lead from hard-lead pencils, gray ones from colored ones, golf pencils from schoolwork pencils. A feature of all categorization processes used by the human brain, including appearance-based categorization, is that they are expandable and flexible, subject to multiple levels of resolution or graininess. For example, zooming in on pencils, you may desire to have maximal separation, as they do at the stationery store, separating them both by manufacturer and by the softness of their lead: 3H, 2H, H, HB, B. Or you may decide to separate them by how much of the eraser is left, whether they have bite marks on them or not (!), or by their length. Zooming out, you may decide to put all pencils, pens, felt markers, and crayons into a single broad category of writing implements. As soon as you decide to identify and name a category, the brain creates a representation of that category and separates objects that fall inside from objects that fall outside the category. If I say, “A mammal is an animal that gives birth to live young and that nurses its young,” it is easy to quickly categorize ostrich (no), whale (yes), salmon (no), and orangutan (yes). If I tell you that there exist five species of mammal that lay eggs (including the platypus and echidna), you can quickly accommodate the new information about these exceptions, and this seems perfectly ordinary.
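To make the zooming concrete, here is a minimal sketch of a category tree with multiple levels of resolution; the tree and its labels are hypothetical.

```python
# A hypothetical category tree for writing implements. The same object can
# be retrieved at whatever level of resolution ("graininess") the task needs.
category_tree = {
    "pencils": {
        "hard lead": ["3H pencil", "2H pencil", "H pencil"],
        "soft lead": ["HB pencil", "B pencil"],
    },
    "pens": ["ballpoint pen", "fountain pen"],
    "crayons": ["red crayon", "blue crayon"],
}

def members(node):
    """Zoom out: flatten everything under a node into one broad bin."""
    if isinstance(node, list):
        return node
    return [item for child in node.values() for item in members(child)]

# Zoomed all the way out, pencils, pens, and crayons form one broad
# category of writing implements...
print(members(category_tree))
# ...zoomed in, only the soft-lead pencils are grouped together.
print(members(category_tree["pencils"]["soft lead"]))
```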
A second way we categorize is based on functional equivalence when objects lack similarity of appearance. In a pinch, you can use a crayon to write a note—it becomes functionally equivalent to a pen or pencil. You can use an opened-up paper clip to post something to a corkboard, an untwisted coat hanger to unclog your kitchen sink; you can bunch up your down jacket to use it as a pillow while you’re camping. A classic functional equivalence concerns food. If you’re driving on the highway and pull into a gas station, hungry, you may be willing to accept a range of products as functionally equivalent for relieving hunger, even though they don’t resemble one another: fresh fruit, yogurt, a bag of mixed nuts, a granola bar, muffin, or premade burrito. If you’ve ever used the back of a stapler or a shoe to pound a nail, you’ve employed a functional equivalence for a hammer.
A third way we categorize is in conceptual categories that address particular situations. Sometimes these are created on the fly, leading to ad hoc categories. For example: What do the following items have in common? Your wallet, childhood photographs, cash, jewelry, and the family dog. They don’t have any physical similarities, and they lack functional similarities. What binds them together is that they are “things you might take out of your house in case of a fire.” You may never have thought about their going together or being conceptually bound until that moment when you have to make a quick decision about what to take. Alternatively, these situational categories can be planned far in advance. A shelf devoted to emergency preparedness items (water, canned foods, can opener, flashlight, wrench for turning off natural gas, matches, blanket) exemplifies this.
Each of these three categorization methods informs how we organize our homes and work spaces, how we allocate shelf and drawer space, and how we can sort things to make them easy and quick to find. Each time we learn or create a new category, there is neural activity in a circuit that invokes a prefrontal cortex–thalamic loop, alongside the caudate nucleus. This circuit contains low-resolution maps of perceptual space (linking to the hippocampus), and it associates a categorization space with a perceptual stimulus. Dopamine release strengthens synapses when you correctly categorize items according to a rule. If you change a classification rule—say you decide to sort your clothes by color rather than by season—the cingulate cortex (part of the central executive) becomes activated. Of course we also cross-classify, placing things in more than one category. In one situation, you might think of yogurt as a dairy product; in another, you might think of it as a breakfast item. The former is based on a taxonomic classification, the latter on a functional category.
But how important are categories? Is making them really that profound? Are mental categories like these actually manifested in neural tissue? Indeed they are.
More than 50,000 years ago, our human ancestors categorized the world around them, making distinctions and divisions about things that were relevant to their lives: edible versus nonedible, predator versus prey, alive versus dead, animate versus inanimate. As we saw in Chapter 1, their biological categories grouped together objects based on appearance or characteristics. In addition, they would have used conceptual, ad hoc categories for things that lacked physical similarities but shared functional features—for example, “things you don’t want in your food,” a heterogeneous category that could include worms, insects, a clump of dirt, tree bark, or your little brother’s stinky feet.
In the last few years, we’ve learned that the formation and maintenance of categories have their roots in known biological processes in the brain. Neurons are living cells, and they can connect to one another in trillions of different ways. These connections don’t just lead to learning—the connections are the learning. The number of possible brain states that each of us can have is so large that it exceeds the number of known particles in the universe. The implications of this are mind-boggling: Theoretically, you should be able to represent uniquely in your brain every known particle in the universe, and have excess capacity left over to organize those particles into finite categories. Your brain is just the tool for the information age.
Neuroimaging technology has uncovered the biological substrates of categorization. Volunteers placed inside a scanning machine are asked to create or think of different kinds of categories. These categories might contain natural objects like plants and animals or human-made artifacts like tools and musical instruments. The scanning technology allows us to pinpoint, usually within one cubic millimeter, where particular neural activity is taking place. This research has shown that the categories we form are real, biological entities, with specific locations in the brain. That is, specific and replicable regions of the brain become active both when we recall previously made categories and when we make them up on the spot. This is true whether the categories are based on physical similarities (e.g., “edible leaves”) or only conceptual ones (“things I could use as a hammer”). Additional evidence for the biological basis of categories comes from case studies of people with brain lesions. Disease, strokes, tumors, or other organic brain trauma sometimes cause a specific region of the brain to become damaged or die. We’ve now seen patients whose brain damage is so specific that they may lose the ability to use and understand a single category, such as fruits, while retaining the ability to use and understand a related category, such as vegetables. The fact that a specific category can become lost in this way points to its biological basis in millions of years of evolution, and the importance of categorization in our lives today.
Our ability to use and create categories on the spot is a form of cognitive economy. It helps us by consolidating like things, freeing us from having to make the hundreds of inconsequential, energy-depleting decisions such as “Do I want this pen or that pen?” or “Is this the exact pair of socks I bought?” or “Have I mixed up nearly identical socks in trying to match them?”
Functional categories in the brain can have either hard (sharply defined) or fuzzy boundaries. Triangles are an example of a hard boundary category. To be a member of the category, an object must be a two-dimensional closed figure with three sides, the sum of whose interior angles must equal exactly 180 degrees. Another hard boundary is the outcome of a criminal proceeding—with the exception of hung juries and mistrials, the defendant is found either guilty or not guilty; there is no such thing as 70% guilty. (During sentencing, the judge can accommodate different degrees of punishment, or assign degrees of responsibility, but she’s generally not parsing degrees of guilty. In civil law, however, there can be degrees of guilt.)
An example of a fuzzy boundary is the category “friendship.” There are clear and obvious cases of people who you know are friends, and clear cases of people who you know are not—strangers, for example. But “friends” is a category that, for most of us, has fuzzy boundaries, and inclusion depends to some degree on context. We invite different people to our homes for a neighborhood barbecue than for a birthday party; we’ll go out for drinks with people from work but not invite them to our homes. The category “friends” has permeable, fuzzy boundaries, unlike the triangle category, for which polygons are either in or out. We consider some people to be friends for some purposes and not for others.
Hard boundaries apply mostly to formal categories typically found in mathematics and law. Fuzzy boundaries can occur in both natural and human-made categories. Cucumbers and zucchinis are technically fruits, but we allow them to permeate the fuzzy boundary “vegetable” because of context—we tend to eat them with or in lieu of “proper” vegetables such as spinach, lettuce, and carrots. The contextual and situational aspect of categories is also apparent when we talk about temperature—104 degrees Fahrenheit is too hot for the bedroom when we’re trying to sleep, but it’s the perfect temperature for a hot tub. That same 104 would seem not quite hot enough if it were coffee.
A classic case of a fuzzy category is “game,” and the twentieth-century philosopher Ludwig Wittgenstein spent a great deal of time thinking about it, concluding that there was no list of attributes that could unambiguously define the category. Is a game something you do for leisure? That definition would exclude professional football and the Olympic Games. Something you do with other people? That lets out solitaire. An activity done for fun, and bound by certain rules, that is sometimes practiced competitively for fans to watch? That lets out the children’s game Ring around the Rosies, which is not competitive, nor does it have any rules, yet really does seem like a game. Wittgenstein concluded that something is a game when it has a family resemblance to other games. Think of a hypothetical family, the Larsons, at their annual family reunion. If you know enough Larsons, you might be able to easily tell them from their non-Larson spouses, based on certain family traits. Maybe there’s the Larson dimpled chin, the aquiline nose, the large floppy ears and red hair, and the tendency to be over six feet tall. But it’s possible, likely even, that no one Larson has all these attributes. They are not defining features, they are typical features. The fuzzy category lets in anyone who resembles the prototypical Larson, and in fact, the prototypical Larson, the Larson with all the noted features, may not actually exist as anything other than a theoretical, Platonic ideal.
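A minimal sketch of the contrast: a hard-boundary test for triangles alongside a graded family-resemblance score for the hypothetical Larsons. The traits and the scoring rule are illustrative assumptions.

```python
import math

# Hard boundary: membership is all-or-none. A closed plane figure with
# three sides whose interior angles sum to 180 degrees is a triangle;
# anything else is out.
def is_triangle(num_sides, angles):
    return num_sides == 3 and math.isclose(sum(angles), 180.0)

# Fuzzy boundary: membership is graded, by resemblance to a prototype that
# may exist only as an ideal. The traits are the chapter's hypothetical
# Larson family features; the scoring rule is an invented simplification.
LARSON_PROTOTYPE = {"dimpled chin", "aquiline nose", "floppy ears",
                    "red hair", "over six feet tall"}

def larson_resemblance(traits):
    """Return a 0-to-1 score: the share of prototype traits present."""
    return len(traits & LARSON_PROTOTYPE) / len(LARSON_PROTOTYPE)

print(is_triangle(3, [90, 53.13, 36.87]))                # True  -- in
print(is_triangle(4, [90, 90, 90, 90]))                  # False -- out
print(larson_resemblance({"red hair", "dimpled chin"}))  # 0.4 -- somewhat
print(larson_resemblance({"aquiline nose"}))             # 0.2 -- barely
```

No one needs to score 1.0 to count as a Larson; the prototype, like Wittgenstein’s ideal game, may not exist in any actual member.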
The cognitive scientist William Labov demonstrated the fuzzy category/family resemblance concept with this series of drawings:
The object in the upper left is clearly a cup. As we move to the right on the top row, the cup gets wider and wider until at number 4 it has a greater resemblance to a bowl than a cup. What about number 3? It could be in either the cup or the bowl category, depending on context. Similarly, as the cups get taller, moving downward, they begin to look less and less like cups and more like pitchers or vases. Other variations, such as adding a stem (number 17), make it look more like a goblet or wineglass. Changing the shape (numbers 18 and 19), however, makes it look like a peculiar cup, but a cup all the same. This illustrates the underlying notion that category boundaries are flexible, malleable, and context-dependent. If I serve you wine in number 17 and I make it out of glass instead of porcelain or ceramic, you’re more likely to accept it as a goblet. But even if I make number 1 out of glass, the object it still most closely resembles is a cup, regardless of whether I fill it with coffee, orange juice, wine, or soup.
Fuzzy categories are instantiated biologically in the brain, and are as real as hard categories. Being able to create, use, and understand both kinds of categories is something that our brains are hardwired to do—even two-year-olds do it. As we think about organizing our lives and the spaces we inhabit, creating categories and bins for things is an act of cognitive economy. It’s also an act of great creativity if we allow it to be, leading to organizational systems that range from the rigid classification of a military warehouse and the perfect sock drawer to whimsical categories that reflect playful ways of looking at the world and all of the objects in it.
The brain organizes information in its own idiosyncratic way, a way that has served us very well. But in an age of information overload, not to mention decision overload, we need systems outside our heads to help us. Categories can off-load a lot of the difficult work of the brain into the environment. If we have a drawer for baking supplies, we don’t need to remember separately where ten different items are—the rolling pin, the cookie cutters, the sifter, and so on—we just remember that we have a category for baking tools, and it is in the third drawer down underneath the coffeemaker. If we’re planning two separate birthday parties, one at the office and one at home, the category of “people I work with” in our mental recollection, Outlook file, or contacts app on our smartphone helps prompt the memory of whom to include and whom not to.
Calendars, smartphones, and address books are also brain extenders, externalizing onto paper or into computer chips myriad details that we no longer have to keep in our heads. Historically, the ultimate brain extenders were books, keeping track of centuries’ worth of collected knowledge that we can access when we need it. Perhaps they still are.
People at the top of their professions, in particular those known for their creativity and effectiveness, use systems of attention and memory external to their brain as much as they can. And a surprising number of them, even in high-tech jobs, use decidedly low-tech solutions for keeping on top of things. Yes, you can embed a microchip in your keys that will let you track them with a cell phone app, and you can create electronic checklists before you travel to ensure you take everything you need. But many busy and effective people say that there is something different, something visceral in using old-fashioned physical objects, rather than virtual ones, to keep track of important things from shopping lists to appointments to ideas for their next big project.
One of the biggest surprises I came upon while working on this book was the number of such people who carry around a pen and notepads or index cards for taking physical notes, and their insistence that it is both more efficient and more satisfying than the electronic alternatives now on offer. In her book Lean In, Sheryl Sandberg reluctantly admits to carrying a notebook and pen around to keep track of her To Do list, and confesses that at Facebook, where she is the COO, this is “like carrying around a stone tablet and chisel.” Yet she and many others like her persist in this ancient technology. There must be something to it.
Imagine carrying a stack of 3 x 5 index cards with you wherever you go. When you get an idea for something you’re working on, you put it on one card. If you remember something you need to do later, you put that on a card. You’re sitting on a bus and suddenly remember some people you need to call and some things you need to pick up at the hardware store—that’s several more cards. You’ve figured out how to solve that problem your sister is having with her husband—that goes on a card. Every time any thought intrudes on what you’re doing, you write it down. David Allen, the efficiency expert and author of Getting Things Done and other books, calls this kind of note-taking “clearing the mind.”
Remember that the mind-wandering mode and the central executive work in opposition and are mutually exclusive states; they’re like the little devil and angel standing on opposite shoulders, each trying to tempt you. While you’re working on one project, the mind-wandering devil starts thinking of all the other things going on in your life and tries to distract you. Such is the power of this task-negative network that those thoughts will churn around in your brain until you deal with them somehow. Writing them down gets them out of your head, clearing your brain of the clutter that is interfering with being able to focus on what you want to focus on. As Allen notes, “Your mind will remind you of all kinds of things when you can do nothing about them, and merely thinking about your concerns does not at all equate to making any progress on them.”
Allen noticed that when he made a big list of everything that was on his mind, he felt more relaxed and better able to focus on his work. This observation has a neurological basis. When we have something on our minds that is important—especially a To Do item—we’re afraid we’ll forget it, so our brain rehearses it, tossing it around and around in circles in something that cognitive psychologists actually refer to as the rehearsal loop, a network of brain regions that ties together the frontal cortex just behind your eyeballs and the hippocampus in the center of your brain. This rehearsal loop evolved in a world that had no pens and paper, no smartphones or other physical extensions of the human brain; it was all we had for tens of thousands of years and during that time, it became quite effective at remembering things. The problem is that it works too well, keeping items in rehearsal until we attend to them. Writing them down gives both implicit and explicit permission to the rehearsal loop to let them go, to relax its neural circuits so that we can focus on something else. “If an obligation remained recorded only mentally,” Allen says, “some part of me constantly kept thinking that it should be attended to, creating a situation that was inherently stressful and unproductive.”
Writing things down conserves the mental energy expended in worrying that you might forget something and in trying not to forget it. The neuroscience of it is that the mind-wandering network is competing with the central executive, and in such a battle, the mind-wandering default mode network usually wins. Sometimes it’s as if your brain has a mind of its own. If you want to look at this from a Zen point of view, the Masters would say that the constant nagging in your mind of undone things pulls you out of the present—tethers you to a mind-set of the future so that you’re never fully in the moment and enjoying what’s now. David Allen notes that many of his clients spin their wheels at work, worrying about things they need to do at home, and when they’re at home, they are worried about work. The problem is that you’re never really in either place.
“Your brain needs to engage on some consistent basis with all of your commitments and activities,” Allen says. “You must be assured that you are doing what you need to be doing, and that it’s OK to be not doing what you’re not doing. If it’s on your mind, then your mind isn’t clear. Anything you consider unfinished in any way must be captured in a trusted system outside your mind. . . .” That trusted system is to write it down.
For the 3 x 5 system to work best, the rule is one idea or task per card—this ensures that you can easily find it and dispose of it when it’s been dealt with. One piece of information per card allows for rapid sorting and re-sorting, and it provides random access, meaning that you can access any idea on its own, take it out of the stack without dislocating another idea, and put it adjacent in the stack to similar ideas. Over time, your idea of what is similar or what binds different ideas together may change, and this system—because it is random and not sequential—allows for that flexibility.
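Here is a minimal sketch of the one-idea-per-card rule treated as a data structure; the cards and category names are invented for illustration.

```python
from collections import defaultdict

# Each card is an independent record: one idea per card. Because no card
# depends on any other, the stack supports true random access.
cards = [
    {"text": "Call the plumber",            "category": "do today"},
    {"text": "Idea: chapter on memory",     "category": "next big project"},
    {"text": "Buy picture hooks",           "category": "errands"},
    {"text": "Ask Sam to review the draft", "category": "delegate"},
]

# Dispose of a single card once it's dealt with, without disturbing
# any other card.
cards = [c for c in cards if c["text"] != "Call the plumber"]

# Flexibility: as your sense of what belongs together changes, re-file a
# card by changing its category -- no other card moves.
for card in cards:
    if card["category"] == "errands":
        card["category"] = "do today"

# Group by category for the morning scan through the stack.
stacks = defaultdict(list)
for card in cards:
    stacks[card["category"]].append(card["text"])
print(dict(stacks))
```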
Robert Pirsig inspired a generation to philosophical reflection—and organizing their thoughts—with his hugely popular novel Zen and the Art of Motorcycle Maintenance, published in 1974. In a somewhat less well-known later book (nominated for a Pulitzer Prize), Lila: An Inquiry into Morals, he endeavors to establish a way of thinking about metaphysics. Phaedrus, the author’s alter ego and the story’s protagonist, uses the index card system for organizing his philosophical notions. The size of the index cards, he says, makes them preferable to full-size sheets of paper because they provide greater random access. They fit into a shirt pocket or purse. Because they’re all the same size, they’re easy to carry and organize. (Leibniz complained that the slips of paper he jotted his ideas on kept getting lost because they were all of different sizes and shapes.) And importantly, “when information is organized in small chunks that can be accessed and sequenced at random it becomes much more valuable than when you have to take it in serial form. . . . They [the index cards] ensured that by keeping his head empty and keeping sequential formatting to a minimum, no fresh new unexplored ideas would be forgotten or shut out.” Of course our heads can never be truly empty, but the idea is powerful. We should off-load as much information to the external world as possible.
Once you have a stack of index cards, you make it a point to sort them regularly. When there are a small number, you simply put them in the order in which you need to deal with them. With a larger number, you assign the index cards to categories. A modified version of the system that Ed Littlefield had me use for sorting his mail works well here.
It isn’t the names of the categories that are critical, it is the process of external categorization; your own categories might look quite different.
David Allen recommends this mnemonic for fine sorting your To Do list into four actionable categories:
Do it
Delegate it
Defer it
Drop it
Allen suggests the two-minute rule: If you can attend to one of the things on your list in less than two minutes, do it now (he recommends setting aside a block of time every day, thirty minutes for example, just to deal with these little tasks, because they can accumulate quickly to the point of overload). If a task can be done by someone else, delegate it. Anything that takes more than two minutes to deal with, you defer. You might be deferring only until later today, but you defer it long enough to get through your list of two-minute tasks. And there are some things that just aren’t worth your time anymore—priorities change. While going through the daily scan of your index cards, you can decide to drop them.
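A minimal sketch of this triage as a simple dispatch function; the tasks, time estimates, and field names are made up for illustration.

```python
# A sketch of David Allen's four-D triage, including the two-minute rule.
# Task names, time estimates, and the "someone_else_can" test are invented.

def triage(task):
    """Sort one To Do item into: do, delegate, defer, or drop."""
    if not task["still_matters"]:
        return "drop"          # priorities change
    if task["minutes"] <= 2:
        return "do"            # two-minute rule: just do it now
    if task["someone_else_can"]:
        return "delegate"      # hand it off
    return "defer"             # revisit after the quick tasks are cleared

tasks = [
    {"name": "RSVP to the invitation",   "minutes": 1,
     "someone_else_can": False, "still_matters": True},
    {"name": "Draft quarterly report",   "minutes": 120,
     "someone_else_can": False, "still_matters": True},
    {"name": "Order office supplies",    "minutes": 15,
     "someone_else_can": True,  "still_matters": True},
    {"name": "Research obsolete gadget", "minutes": 30,
     "someone_else_can": False, "still_matters": False},
]

for task in tasks:
    print(f'{triage(task):8} {task["name"]}')
```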
At first it may sound like busywork. You can keep these things all in your head, right? Well, yes, you can, but the point is that the anatomy of your brain makes it less effective to do so. And the busywork is not so onerous. It’s a time for reflection and healthy mind-wandering. To distinguish the cards that go in one category versus another, a header card can be placed as the first card in the new category. If your 3 x 5 cards are white, your header cards can be blue, for example, to make finding them easy. Some people go crazy with the index card system and extend this to use different-colored cards for the different categories. But this makes it more difficult to move a card from one category to another, and the whole point of the 3 x 5 system is to maximize flexibility—any card should be able to be put anywhere in the stack. As your priorities change, you simply reorder the cards to put them in the order and the category you want. Little bits of information each get their own index card. Phaedrus wrote a whole book by putting ideas, quotes, sources, and other research results on index cards, which he called slips. What begins as a daunting task of trying to figure out what goes where in a report becomes simply a matter of ordering the slips.
Instead of asking “Where does this metaphysics of the universe begin?”—which was a virtually impossible question—all he had to do was just hold up two slips and ask, “Which comes first?” This was easy and he always seemed to get an answer. Then he would take a third slip, compare it with the first one, and ask again, “Which comes first?” If the new slip came after the first one he compared it to the second. Then he had a three-slip organization. He kept repeating this process with slip after slip.
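Phaedrus’s procedure is, in effect, an insertion sort. Here is a minimal sketch in which “Which comes first?” is reduced to a placeholder comparison; any consistent pairwise judgment would do.

```python
# Phaedrus's slip-ordering procedure as an insertion sort: walk each new
# slip down the ordered stack, asking only "which comes first?" of one
# pair at a time.

def comes_first(slip_a, slip_b):
    """Stand-in for Phaedrus's judgment call. Here: shorter ideas first,
    purely for illustration."""
    return len(slip_a) < len(slip_b)

def order_slips(slips):
    ordered = []
    for slip in slips:
        i = 0
        # Advance past every slip that comes before the new one...
        while i < len(ordered) and comes_first(ordered[i], slip):
            i += 1
        # ...then insert it right there.
        ordered.insert(i, slip)
    return ordered

slips = ["Quality is undefinable",
         "Static vs. dynamic quality",
         "Why metaphysics resists a starting point"]
print(order_slips(slips))
```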
People who use the index card system find it liberating. Voice recorders require you to listen back, and even on a sped-up playback, it takes longer to listen to a note than it does to read it. Not terribly efficient. And the voice files are not easily sorted. With index cards, you can sort and re-sort to your heart’s content.
Pirsig continues, describing Phaedrus’s organizational experiments. “At various times he’d tried all kinds of different things: colored plastic tabs to indicate subtopics and sub-subtopics; stars to indicate relative importance; slips split with a line to indicate both emotive and rational aspects of their subject; but all of these had increased rather than decreased confusion and he’d found it clearer to include their information elsewhere.”
One category that Phaedrus allowed for was unassimilated. “This contained new ideas that interrupted what he was doing. They came in on the spur of the moment while he was organizing the other slips or sailing or working on the boat or doing something else that didn’t want to be disturbed. Normally your mind says to these ideas, ‘Go away, I’m busy,’ but that attitude is deadly to Quality.” Pirsig recognized that some of the best ideas you’ll have will come to you when you’re doing something completely unrelated. You don’t have time to figure out how to use the idea because you’re busy with something else, and taking time to contemplate all the angles and ramifications takes you out of the task you’re working on. For Phaedrus, an unassimilated pile helped solve the problem. “He just stuck the slips there on hold until he had the time and desire to get to them.” In other words, this is the junk drawer, a place for things that don’t have another place.
You don’t need to carry all the cards with you everywhere of course—the abeyance or future-oriented ones can stay in a stack on your desk. To maximize the efficiency of the system, the experts look through their cards every morning, reordering them as necessary, adding new ones if sifting through the stack gives them new ideas. Priorities change and the random access nature of the cards means you can put them wherever they will be most useful to you.
For many of us, a number of items on our To Do lists require a decision and we feel we don’t have enough information to make the decision. Say that one item on your To Do list was “Make a decision about assisted living facilities for Aunt Rose.” You’ve already visited a few and gathered information, but you haven’t yet made the decision. On a morning scan of your cards, you find you aren’t ready to do it. Take two minutes now to think about what you need in order to make the decision. Daniel Kahneman and Amos Tversky said that the problem with making decisions is that we are often making them under conditions of uncertainty. You’re uncertain of the outcome of putting Rose in a home, and that makes the decision difficult. You also fear regret if you make the wrong decision. If more information will remove that uncertainty, then figure out what that information is and how to obtain it, then—to keep the system working for you—put it on an index card. Maybe it’s talking to a few more facilities, maybe it’s talking to other family members. Or maybe you just need time to let the information sink in. In that case, you put a deadline on the decision card, say four days from now, and try to make the decision then. The essential point here is that during your daily sweep through the cards, you have to do something with that index card—you do something about it now, you put it in your abeyance pile, or you generate a new task that will help to move this project forward.
The index card system is merely one of what must be an infinite number of brain extension devices, and it isn’t for everyone. Paul Simon carries a notebook with him everywhere to jot down lines or phrases that he might use later in a song, and John R. Pierce, the inventor of satellite communication, carried around a lab book that he used as a journal for everything he had to do as well as for research ideas and names of people he met. A number of innovators carried pocket notebooks to record observations, reminders, and all manner of whatnot; the list includes George S. Patton (who used his to explore ideas on leadership and war strategy and to record daily affirmations), Mark Twain, Thomas Jefferson, and George Lucas. These are serial forms of information storage, not random access; everything in them is chronological. It involves a lot of thumbing through pages, but it suits their owners.
As humble and low-tech as it may seem, the 3 x 5 card system is powerful. That is because it builds on the neuroscience of attention, memory, and categorization. The task-negative or mind-wandering mode is responsible for generating much useful information, but so much of it comes at the wrong time. We externalize our memory by putting that information on index cards. We then harness the power of the brain’s intrinsic and evolutionarily ancient desire to categorize by creating little bins for those external memories, bins that we can peer into whenever our central executive network wishes to. You might say categorizing and externalizing our memory enables us to balance the yin of our wandering thoughts with the yang of our focused execution.