5

ORGANIZING OUR TIME

What Is the Mystery?

Ruth was a thirty-seven-year-old married mother of six. She was planning dinner for her brother, her husband, and her children to be served at six P.M. At 6:10, when her husband walked into the kitchen, he saw that she had two pots going on the stove, but the meat was still frozen and the salad was only partly made. Ruth had just picked up a tray of dessert and was getting ready to serve it. She had no awareness that she was doing things in the wrong order, or in fact that a proper order existed.

Ernie began his career as an accountant and was promoted to comptroller of a home building firm at age thirty-two. His friends and family considered him to be especially responsible and reliable. At age thirty-five he abruptly put all his savings into a partnership with a sketchy businessman and soon after had to declare bankruptcy. Ernie drifted through job after job and was fired from each for lateness, disorganization, and a general deterioration of his ability to plan anything or to properly prioritize his tasks. He required more than two hours to get ready for work in the morning, and often spent entire days doing nothing more than shaving and washing his hair. Ernie suddenly had lost the ability to properly evaluate future needs: He adamantly refused to get rid of useless possessions such as five broken television sets, six broken fans, assorted dead houseplants, and three bags crammed full of empty frozen orange juice cans.

Peter had been a successful architect with a graduate degree from Yale, a special talent for math and science, and an IQ 25 points above average. Given a simple assignment of reorganizing a small office space, he found himself utterly perplexed. He spent nearly two hours preparing to begin the project, and once he started, he inexplicably kept starting over. He made several preliminary sketches of idea fragments but was unable to connect those ideas or to refine the sketches. He was well aware of his disordered thinking. “I know what I want to draw, but I just don’t do it. It’s crazy . . . it’s as if I’m getting a train of thought and then I start to draw it, and then I lose the train of thought. And, then, I have another train of thought that’s in a different direction and the two don’t [meet] . . . and this is a very simple problem.”

What Ruth, Ernie, and Peter have in common is that shortly before these episodes, all three suffered damage to their prefrontal cortex. This is the part of the brain I wrote about before, which, along with the anterior cingulate, basal ganglia, and insula, helps us to organize time and engage in planning, to maintain attention and stick with a task once we’ve started it. The networked brain is not a mass of undifferentiated tissue—damage to discrete regions of it often results in very specific impairments. Damage to the prefrontal cortex wreaks havoc with the ability to plan a sequence of events and thereby sustain calm, productive effort resulting in the accomplishment of the goals we’ve set ourselves in the time we have. But even the healthiest of us sometimes behave as though we’ve got frontal lobe damage, missing appointments, making silly mistakes now and then, and not making the most of our brain’s evolved capacity to organize time.


The Biological Reality of Time

Both mystics and physicists tell us that time is an illusion, simply a creation of our minds. In this respect, time is like color—there is no color in the physical world, just light of different wavelengths reflecting off of objects; as Newton said, the light waves themselves are colorless. Our entire sense of color results from the visual cortex in our brains processing these wavelengths and interpreting them as color. Of course that doesn’t make it subjectively any less real—we look at a strawberry and it is red, it doesn’t just seem red. Time can be thought of similarly as an interpretation that our brains impose on our experience of the world. We feel hungry after a certain amount of time has passed, sleepy after we’ve been awake for a certain amount of time. The regular rotation of the earth on its axis and around the sun leads us to organize time as a series of cyclical events, such as day and night and the four seasons, that in turn allow us to mentally register the passage of time. And having registered time, more so than ever before in human history, we divide up that time into chunks, units to which we assign specific activities and expectations for what we’ll get done in them. And these chunks of time are as real to us as a strawberry is red.

Most of us live by the clock. We make appointments, wake and sleep, eat, and organize our time around the twenty-four-hour clock. The duration of the day is tied to the period of rotation of the earth, but what about the idea to divide that up into equal parts—where did that come from? And why twenty-four?

As far as we know, the Sumerians were the first to divide the day into time periods. Their divisions were one-sixth of a day’s sunlight (roughly equivalent to two of our current hours). Other ancient time systems reckoned the day from sunrise to sunset, and divided that period into two equal divisions. As a result, these ancient mornings and afternoons would vary in length by season as the days got longer and shorter.

The three most familiar divisions of time we make today continue to be based on the motions of heavenly bodies, though now we call this astrophysics. The length of a year is determined by the time it takes the earth to circle the sun; the length of a month is (more or less) the time it takes the moon to circle the earth; the length of a day is the time it takes the earth to rotate on its axis (and observed by us as the span between two successive sunrises or sunsets). But further divisions are not based on any physical laws and tend to be based on historical factors that are largely arbitrary. There is nothing inherent in any biological or astrophysical cycle that would lead to the division of a day into twenty-four equal segments.

The current practice of dividing the day into twenty-four hours comes from the ancient Egyptians, who divided the daylight into ten parts and then added an hour for each of the ambiguous periods of twilight, yielding twelve; the night was reckoned as another twelve, giving twenty-four in all. Egyptian sundials in archeological sites testify to this. After nightfall, time was kept by a number of means, including tracking the motion of the stars, the burning of candles, or the amount of water that flowed through a small hole from one vessel to another. The Babylonians also used a day of twenty-four fixed-duration hours, as did Hipparchus, the ancient Greek mathematician and astronomer.

The division of the hour into sixty minutes, and the minute into sixty seconds, is also arbitrary, deriving from the Greek mathematician Eratosthenes, who divided the circle into sixty parts for an early cartographic system representing latitudes.

For most of human history, we did not have clocks or indeed any way of accurately reckoning time. Meetings and ritual get-togethers would be arranged by referencing obvious natural events, such as “Please drop by our camp when the moon is full” or “I’ll meet you at sunset.” Greater precision than that wasn’t possible, but it wasn’t needed, either. The kind of precision we’ve become accustomed to began after railroads were built. You might think the rationale is that railroad operators wanted to make departure times accurate and standardized as a convenience for customers, but it really grew out of safety concerns. After a series of railroad collisions in the early 1840s, investigators sought ways to improve communication and reduce the risk of accidents. Prior to that, timekeeping was considered a local matter for each city or town. Because there did not exist rapid forms of communication or transportation, there was no practical disadvantage to one location being desynchronized from another—and no way to really tell! Sir Sandford Fleming, a Scottish engineer who had helped design many of the railroads in Canada, came upon the idea of worldwide standard time zones, which were adopted by all Canadian and U.S. railroads in late 1883. The United States Congress didn’t make it into law until the Standard Time Act was passed thirty-five years later.

Still, what we call hours, minutes, and days are arbitrary: There is nothing physically or biologically critical about the day being divided into twenty-four parts, or the hour and minute being divided into sixty parts. They were easy to adopt because these divisions don’t contradict any inherent biological process.

Are there any biological constants to time? Our life span appears to be limited to about one hundred years (plus or minus twenty) due to aging. One theory used to be that life span limits are programmed into the genes to limit population size, but this has been dismissed because, in the harsh conditions of the wild, most species don’t live long enough to age, so there would be no threat of overpopulation. A few species don’t age at all and so are technically immortal. These include some species of jellyfish, flatworms (planaria), and hydra; the only causes of death in them are from injury or disease. This is in stark contrast to humans—of the roughly 150,000 people who die in the world each day, two-thirds die from age-related causes, and this number can reach 90% in peaceful industrialized nations, where war or disease is less likely to shorten life.

Natural selection has very limited or no opportunities to exert any direct influence on the aging process. Natural selection will tend to favor genes that have good effects on the organism early in life, prior to reproductive age, even if they have bad effects at older ages. Once an individual has reproduced and passed on his or her genes to the next generation, natural selection no longer has a means by which to operate on that person’s genome. This has two consequences. If an early human inherited a gene mutation that rendered him less likely to reproduce—a gene that made him vulnerable to early disease or simply made him an unattractive mate—that gene would be less likely to show up in the next generation. On the other hand, suppose there are two gene mutations that each conferred a survival advantage and made this early human especially attractive, but one of them has the side effect of causing cancer at age seventy-five, decades after the most likely age at which an individual reproduces. Natural selection has no way to discourage the cancer-causing gene because the gene doesn’t show itself until long after it has been passed on to the next generation. Thus, genetic variations that challenge survival at an old age—variations such as a susceptibility to cancer, or weakening of the bones—will tend to accumulate as one gets older and farther away in time from the peak age of reproduction. (This is because such a small percentage of organisms reproduce after a certain age that any investment in genetic mechanisms for survival beyond this age benefits a very small percentage of the population.) There is also the Hayflick limit, which states that cells can divide only a maximum number of times due to errors that accumulate during successive cell divisions. The fact that we not only die but are aware that our time is limited has different effects on us across the life span—something I write about at the end of this chapter.

At the level of hours and minutes, the most relevant constants are these: human heart rates, which normally vary from 60 to 100 beats per minute; the need to spend roughly one-third of our time sleeping in order to function properly; and the fact that, without cues from the sun, our bodies drift toward a twenty-five-hour day. Biologists and physiologists still don’t know why this is so. Down at the level of milliseconds, there are biological constants governing the temporal resolution of our senses. If a sound has a gap in it shorter than 10 milliseconds, we will tend not to hear it, because of resolution limits of the auditory system. For a similar reason, a series of clicks ceases to sound like clicks and becomes a musical note when the clicks are presented at a rate of about once every 25 milliseconds. If you’re flipping through static (still) pictures, they must be presented slower than about once every 40 milliseconds in order for you to see them as separate images. Any faster than that and they exceed the temporal resolution of our visual system and we perceive motion where there is none (this is the basis of flipbooks and motion pictures).

Photographs are interesting because they can capture and preserve the world at resolutions that exceed those of our visual system. When this happens, they allow us to see a view of the world that our eyes and brains would never see on their own. Shutter speeds of 1/125 and 1/250 of a second sample the world in 8-millisecond and 4-millisecond slices, and this is part of our fascination with them, particularly as they capture human movement and human expressions. These sensory limits are constrained by a combination of neural biology and the physical mechanics of our sensory organs. Individual neurons have a range of firing rates, on the order of once per millisecond to once every 250 milliseconds or so.
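For readers who like to see the arithmetic, here is a minimal sketch in Python (the thresholds are the approximate figures quoted above, not precise physiological constants) converting between the duration of one sample and the equivalent rate:

```python
# Convert the period of a repeating event (milliseconds) to its rate
# (events per second), using the approximate thresholds quoted above.

def period_ms_to_rate_hz(period_ms: float) -> float:
    """Events per second, given one event every period_ms milliseconds."""
    return 1000.0 / period_ms

# Clicks arriving every ~25 ms fuse into a musical note: 40 clicks/second.
print(period_ms_to_rate_hz(25))   # 40.0

# Still images faster than one every ~40 ms appear as motion: 25 frames/second.
print(period_ms_to_rate_hz(40))   # 25.0

# Shutter speeds of 1/125 s and 1/250 s expressed as millisecond slices.
for denominator in (125, 250):
    print(f"1/{denominator} s = {1000.0 / denominator:.0f} ms")  # 8 ms, 4 ms
```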

We have a more highly developed prefrontal cortex than any other species. It’s the seat of many behaviors that we consider distinctly human: logic, analysis, problem solving, exercising good judgment, planning for the future, and decision-making. It is for these reasons that it is often called the central executive, or CEO of the brain. Extensive two-way connections between the prefrontal cortex and virtually every other region of the brain place it in a unique position to schedule, monitor, manage, and manipulate nearly every activity we undertake. Like real CEOs, these cerebral CEOs are highly paid in metabolic currency. Understanding how they work (and exactly how they get paid) can help us to use our time more effectively.

It’s natural to think that because the prefrontal cortex is orchestrating all this activity and thought, it must have massive neural tracts for back-and-forth communication with other brain regions so that it can excite them and bring them on line. In fact, most of the prefrontal cortex’s connections to other brain regions are not excitatory; they’re the opposite: inhibitory. That’s because one of the great achievements of the human prefrontal cortex is that it provides us with impulse control and, consequently, the ability to delay gratification, something that most animals lack. Try dangling a string in front of a cat or throwing a ball in front of a retriever and see if they can sit still. Because the prefrontal cortex doesn’t fully develop in humans until after age twenty, impulse control isn’t fully developed in adolescents (as many parents of teenagers have observed). It’s also why children and adolescents are not especially good at planning or delaying gratification.

When the prefrontal cortex becomes damaged (such as from disease, injury, or a tumor), it leads to a specific medical condition called dysexecutive syndrome.

The condition is recognized by the kinds of planning and time coordination deficits that Ruth the homemaker, Ernie the accountant, and Peter the architect suffered from. It is also often accompanied by an utter lack of inhibition across a range of behaviors, particularly in social settings. Patients may blurt out inappropriate remarks, or go on binges of gambling, drinking, or sex with inappropriate partners. And they tend to act on what is right in front of them. If they see someone moving, they have difficulty inhibiting the urge to imitate them; if they see an object, they pick it up and use it.

What does all this have to do with organizing time? If your inhibitions are reduced, and you’re impaired at seeing the future consequences of your actions, you tend to do things now that you might regret later, or that make it difficult to properly complete projects you’re working on. Binge-watch an entire season of Mad Men instead of working on the Pensky file? Eat a donut (or two) instead of sticking to your diet? That’s your prefrontal cortex not doing its job. In addition, damage to the prefrontal cortex causes an inability to effectively go forward or backward in time in one’s mind—remember Peter the architect’s description of starting over and over and not being able to move forward. Dysexecutive syndrome patients often get stuck in the present, doing something over and over again, perseverating, revealing a failure in temporal control. They can be terrible at organizing their calendars and To Do lists due to a double whammy of neural deficits. First, they’re unable to place events in the correct temporal order. A patient with severe damage might attempt to bake the cake before having added all the ingredients. Second, many frontal lobe patients are not aware of their deficit; a loss of insight is associated with these frontal lobe lesions, such that patients generally underestimate their impairment. Having an impairment is bad enough, but if you don’t know you have it, you’re liable to go headlong into situations without taking proper precautions, and end up in trouble.

As if that weren’t enough, advanced prefrontal cortex damage interferes with the ability to make connections and associations between disparate thoughts and concepts, resulting in a loss of creativity. The prefrontal cortex is especially important for generating creative acts in art and music. This is the region of the brain that is most active when creative artists are functioning at their peak.

If you’re interested in seeing what it’s like to have prefrontal cortex damage, there’s a simple, reversible way: Get drunk. Alcohol interferes with the ability of prefrontal cortex neurons to communicate with one another, by disrupting dopamine receptors and blocking a particular kind of glutamate receptor called the NMDA receptor, mimicking the damage we see in frontal lobe patients. Heavy drinkers also experience the frontal lobe system double whammy: They may lose certain capabilities, such as impulse control or motor coordination or the ability to drive safely, but they aren’t aware that they’ve lost them—or simply don’t care—so they forge ahead anyway.

An overgrowth of dopaminergic neurons in the frontal lobes is associated with autism (characterized by social awkwardness and repetitive behaviors), which mimics frontal lobe damage to some degree. The opposite, a reduction of dopaminergic neurons in the frontal lobes, occurs in Parkinson’s disease and attention deficit disorder (ADD). The result then is scattered thinking and a lack of planning, which can sometimes be improved by the administration of L-dopa or of methylphenidate (also known by its brand name Ritalin), drugs that increase dopamine in the frontal lobes. From autism and Parkinson’s, we’ve learned that too much or too little dopamine causes dysfunction. Most of us live in a Goldilocks zone where everything is just right. That’s when we plan our activities, follow through on our plans, and inhibit impulses that would take us off track.

It may be obvious, but the brain coordinates a large share of the body’s housekeeping and timekeeping functions—regulating heart rate and blood pressure, signaling when it’s time to sleep and wake up, letting us know when we’re hungry or full, and maintaining body temperature even as the outside temperature changes. This coordination takes place in the so-called reptilian brain, in structures we share with all vertebrates. In addition to this, there are the higher cognitive functions of the brain handled by the cerebral cortex: reasoning, problem solving, language, music, precision athletic movement, mathematical ability, art, and the mental operations that support them, including memory, attention, perception, motor planning, and categorization. The entire brain weighs three pounds (1.4 kg) and so is only a small percentage of an adult’s total body weight, typically 2%. But it consumes 20% of all the energy the body uses. Why? The perhaps oversimplified answer is that time is energy.

Neural communication is very rapid—it has to be—reaching speeds of over 300 miles per hour, with neurons communicating with one another hundreds of times per second. The voltage output of a single resting neuron is 70 millivolts, about the same as the line output of an iPod. If you could hook up a neuron to a pair of earbuds, you could actually hear its rhythmic output as a series of clicks. My colleague Petr Janata did this many years ago. He attached small, thin wires to neurons in an owl’s brain and connected the other end of the wires to an amplifier and a loudspeaker. Playing music to the owl, Petr could hear in the neural firing pattern the same pattern of beats and pitches that was in the original music.

Neurochemicals that control communication between neurons are manufactured in the brain itself. These include some relatively well-known ones such as serotonin, dopamine, oxytocin, and epinephrine, as well as acetylcholine, GABA, glutamate, and endocannabinoids. Chemicals are released in very specific locations and they act on specific synapses to change the flow of information in the brain. Manufacturing these chemicals, and dispersing them to regulate and modulate brain activity, requires energy—neurons are living cells with a metabolism, and they get that energy from glucose. No other tissue in the body relies solely on glucose for energy except the testes. (This is why men occasionally experience a battle for resources between their brains and their glands.)

A number of studies have shown that eating or drinking glucose improves performance on mentally demanding tasks. For example, experimental participants are given a difficult problem to solve, and half of them are given a sugary treat and half of them are not. The ones who get the sugary treat perform better and more quickly because they are supplying the body with glucose that goes right to the brain to help feed the neural circuits that are doing the problem solving. This doesn’t mean you should rush out and buy armloads of candy—for one thing, the brain can draw on vast reserves of glucose already held in the body when it needs them. For another, chronic ingestion of sugars—these experiments looked only at short-term ingestion—can damage other systems and lead to diabetes and sugar crash, the sudden exhaustion that many people feel later when the sugar high wears off.

But regardless of where it comes from, the brain burns glucose, as a car burns gasoline, to fuel mental operations. Just how much energy does the brain use? In an hour of relaxing or daydreaming, it uses eleven calories or fifteen watts—about the same as one of those new energy-efficient lightbulbs. Using the central executive for reading for an hour takes about forty-two calories. Sitting in class, by comparison, takes sixty-five calories—not from fidgeting in your seat (that’s not factored in) but from the additional mental energy of absorbing new information. Most brain energy is used in synaptic transmission, that is, in connecting neurons to one another and, in turn, connecting thoughts and ideas to one another. What all this points to is that good time management should mean organizing our time in a way that maximizes brain efficiency. The big question many of us ask today is: Does that come from doing one thing at a time or from multitasking? If we only do one thing at a time, can we ever hope to catch up?
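As a rough sanity check on those energy figures, here is a sketch of the unit conversion (assuming, as is conventional in nutrition, that “calories” means kilocalories; the hourly figures are the ones quoted above):

```python
# Convert kilocalories burned per hour into watts (joules per second).
KCAL_TO_JOULES = 4184.0
SECONDS_PER_HOUR = 3600.0

def kcal_per_hour_to_watts(kcal_per_hour: float) -> float:
    return kcal_per_hour * KCAL_TO_JOULES / SECONDS_PER_HOUR

print(round(kcal_per_hour_to_watts(11), 1))  # ~12.8 W while daydreaming
print(round(kcal_per_hour_to_watts(42), 1))  # ~48.8 W while reading
print(round(kcal_per_hour_to_watts(65), 1))  # ~75.5 W while sitting in class
```

(The eleven-kilocalorie figure works out to roughly 13 watts, close to the 15 watts quoted above; these are order-of-magnitude estimates.)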

Mastering the See-Saw of Events

The brain “only takes in the world little bits and chunks at a time,” says MIT neuroscientist Earl Miller. You may think you have a seamless thread of data coming in about the things going on around you, but the reality is your brain “picks and chooses and anticipates what it thinks is going to be important, what you should pay attention to.”

In Chapters 1 and 3, I talked about the metabolic costs of multitasking, such as reading e-mail and talking on the phone at the same time, or social networking while reading a book. It takes more energy to shift your attention from task to task. It takes less energy to focus. That means that people who organize their time in a way that allows them to focus are not only going to get more done, but they’ll be less tired and less neurochemically depleted after doing it. Daydreaming also takes less energy than multitasking. And the natural intuitive see-saw between focusing and daydreaming helps to recalibrate and restore the brain. Multitasking does not.

Perhaps most important, multitasking by definition disrupts the kind of sustained thought usually necessary for problem solving and for creativity. Gloria Mark, professor of informatics at UC Irvine, explains that multitasking is bad for innovation. “Ten and a half minutes on one project,” she says, “is not enough time to think in-depth about anything.” Creative solutions often arise from allowing a sequence of alternations between dedicated focus and daydreaming.

Further complicating things is that the brain’s arousal system has a novelty bias, meaning that its attention can be hijacked easily by something new—the proverbial shiny objects we use to entice infants, puppies, and cats. And this novelty bias is more powerful than some of our deepest survival drives: Humans will work just as hard to obtain a novel experience as we will to get a meal or a mate. The difficulty here for those of us who are trying to focus amid competing activities is clear: The very brain region we need to rely on for staying on task is easily distracted by shiny new objects. In multitasking, we unknowingly enter an addiction loop as the brain’s novelty centers become rewarded for processing shiny new stimuli, to the detriment of our prefrontal cortex, which wants to stay on task and gain the rewards of sustained effort and attention. We need to train ourselves to go for the long reward, and forgo the short one. Don’t forget that the awareness of an unread e-mail sitting in your inbox can effectively reduce your IQ by 10 points, and that multitasking causes information you want to learn to be directed to the wrong part of the brain.

There are individual differences in cognitive style, and the trade-off present in multitasking often comes down to focus versus creativity. When we say that someone is focused, we usually mean they’re attending to what is right in front of them and avoiding distraction, either internal or external. On the other hand, creativity often implies being able to make connections between disparate things. We consider a discovery to be creative if it explores new ideas through analogy, metaphor, or tying together things that we didn’t realize were connected. This requires a delicate balance between focus and a more expansive view. Some individuals who take dopamine-enhancing drugs such as methylphenidate report that it helps them to stay motivated to work, to stay focused, and to avoid distractions, and that it facilitates staying engaged with repetitious tasks. The downside, they report, is that it can destroy their ability to make connections and associations, and to engage in expansive, creative thinking—underscoring the see-saw relationship between focus and creativity.

There is an interesting gene known as COMT that appears to modulate the ease with which people can switch tasks, by regulating the amount of dopamine in the prefrontal cortex. COMT carries instructions to the brain for how to make an enzyme (in this case, catechol-O-methyltransferase, hence the abbreviation COMT) that helps the prefrontal cortex to maintain optimal levels of dopamine and noradrenaline, the neurochemicals critical to paying attention. The gene comes in two common versions at a site called Val158Met. Individuals with the Val version make a more active enzyme that clears dopamine quickly; they have low dopamine levels in the prefrontal cortex and, at the same time, show greater cognitive flexibility, easier task switching, and more creativity than average. Individuals with the Met version have high dopamine levels, less cognitive flexibility, and difficulty task switching. This converges with anecdotal observations that many people who appear to have attention deficit disorder—characterized by low dopamine levels—are more creative and that those who can stay very focused on a task might be excellent workers when following instructions but are not especially creative. Keep in mind that these are broad generalizations based on aggregates of statistical data, and there are many individual variations and individual differences.

Ruth, Ernie, and Peter were stymied by everyday events such as cooking a meal, clearing the house of broken, unwanted items, or redecorating a small office. Accomplishing any task requires that we define a beginning and an ending. In the case of more complex operations, we need to break the whole thing into manageable chunks, each with its own beginning and ending. Building a house, for example, might seem impossibly complicated. But builders don’t look at it that way—they divide the project into stages and chunks: grading and preparing the site, laying the foundation, framing the superstructure and supports, plumbing, electrical, installing drywall, floors, doors, cabinets, painting. And then each of those stages is further divided into manageable chunks. Prefrontal cortex damage, among other things, can lead to deficits both in event segmentation—that’s why Peter had trouble rearranging the office—and in stitching the segmented events back into the proper order—why Ruth was cooking the food out of order.

One of the most complicated things that humans do is to put the components of a multipart sequence in their proper temporal order. To accomplish temporal ordering, the human brain has to set up different scenarios, a series of what-ifs, and juggle them in different configurations to figure out how they affect one another. We estimate completion times and work backward. Temporal order is represented in the hippocampus alongside memory and spatial maps. If you’re planting flowers, you dig a hole first, then take the flowers out of their temporary pots, then put the flowers in the ground, then fill the hole with dirt, then water them. This seems obvious for something we do all the time, but anyone who has ever tried to put together IKEA furniture knows that if you do things in the wrong order, you might have to take it apart and start all over from the beginning. The brain is adept at this kind of ordering, requiring communication between the hippocampus and the prefrontal cortex, which is working away busily assembling a mental image of the finished outcome alongside mental images of partly finished outcomes and—subconsciously most of the time—picturing what would happen if you did things out of sequence. (You really don’t want to whip the cream after you’ve spooned it onto the pie—what a mess!)
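For the computationally inclined, the brain’s ordering problem resembles dependency ordering in software: each step can begin only after the steps it depends on are done. A minimal sketch (the steps and their dependencies are the flower-planting example above, encoded by hand):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each step maps to the steps that must be completed before it.
steps = {
    "dig a hole": [],
    "take flowers out of their pots": [],
    "put the flowers in the ground": ["dig a hole",
                                      "take flowers out of their pots"],
    "fill the hole with dirt": ["put the flowers in the ground"],
    "water the flowers": ["fill the hole with dirt"],
}

# static_order() yields a sequence in which no step precedes its prerequisites
# (and raises CycleError if the dependencies are contradictory).
print(list(TopologicalSorter(steps).static_order()))
```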

More cognitively taxing is being able to take a set of separate operations, each with their own completion time, and organize their start times so that they are all completed at the same time. Two common human activities where this is done make an odd couple: cooking and war.

You know from experience that you can’t serve the pie just as it comes out of the oven because it will be too hot, or that it takes some time for your oven to preheat. Your goal of being able to serve the pie at the right time means you need to take into account these various timing parameters, and so you probably work out a quick, seat-of-the-pants calculation about how long the combined pie cooking and cooling period is, how long it will take everyone to eat their soup and their pasta, and what an appropriate period might be to wait between the time everyone finishes the main course and when they’ll want dessert (if you serve it too quickly, they may feel rushed; if you wait too long, they may grow impatient). From there, you work backward from the time you want to serve the pie to when you need to preheat the oven to ensure the timing is right.
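That backward calculation is simple enough to sketch in code. Assuming some invented durations for each stage, you subtract each duration from the target time, latest stage first:

```python
from datetime import datetime, timedelta

# Target: dessert on the table at 7:30 P.M. (all durations are invented).
serve_pie = datetime(2024, 6, 1, 19, 30)
stages = [
    ("pie cools", timedelta(minutes=30)),
    ("pie bakes", timedelta(minutes=50)),
    ("oven preheats", timedelta(minutes=15)),
]

t = serve_pie
for stage, duration in stages:  # work backward, latest stage first
    t -= duration
    print(f"{stage} must start by {t:%H:%M}")
# pie cools must start by 19:00
# pie bakes must start by 18:10
# oven preheats must start by 17:55
```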

Wartime maneuvers also require essentially the same precise organization and temporal planning. In World War II, the Allies took the German army by surprise, using a series of deceptions and the fact that there was no harbor at the invasion site; the Germans assumed it would be impossible to maintain an offensive without shipborne materials. Unprecedented amounts of supplies and personnel were spirited to Normandy in secret so that artificial, portable harbors could be swiftly constructed at Saint-Laurent-sur-Mer and Arromanches. The harbors, code-named Mulberry, were assembled like an enormous jigsaw puzzle and, when fully operational, could move 7,000 tons of vehicles, supplies, and personnel per day. The operation required 545,000 cubic yards of concrete, 66,000 tons of reinforcing steel, 9,000 standards of timber (approximately 1.5 million cubic feet), 440,000 square yards of plywood, and 97 miles of steel wire rope, taking 20,000 men to build it, all of which had to arrive in the proper order and at the proper time. Building it and transporting it to Normandy without detection or suspicion is considered one of the greatest engineering and military feats in human history and a masterpiece of human planning and timing—thanks to connections between the frontal lobes and the hippocampus.

The secret to planning the invasion of Normandy was that, like all projects that initially seem overwhelmingly difficult, it was broken up deftly into small tasks—thousands of them. This principle applies at all scales: If you have something big you want to get done, break it up into chunks—meaningful, implementable, doable chunks. It makes time management much easier; you only need to manage time to get a single chunk done. And there’s neurochemical satisfaction at the completion of each stage.

Then there is the balance between doing and monitoring your progress that is necessary in any multistep project. Each step requires that we stop the actual work every now and then to view it objectively, to ensure we’re carrying it out properly and that we’re happy with the results so far. We step back in our mind’s eye to inspect what we did, figure out whether we need to redo something, whether we can move forward. It’s the same whether we’re sanding a fine wood cabinet, kneading dough, brushing our hair, painting a picture, or building a PowerPoint presentation. This is a familiar cycle: We work, we inspect the work, we make adjustments, we push forward. The prefrontal cortex coordinates the comparison of what’s out-there-in-the-world with what’s in your head. Think of an artist who evaluates whether the paint she just applied had a desirable effect on the painting. Or consider something as simple as mopping the floor—we’re not just blindly swishing the mop back and forth; we’re ensuring that the floor comes clean. And if it doesn’t, we go back and scrub certain spots a little more. In many tasks, both creative and mundane, we must constantly go back and forth between work and evaluation, comparing the ideal image in our head with the work in front of us.

This constant back-and-forth is one of the most metabolism-consuming things that our brain can do. We step out of time, out of the moment, and survey the big picture. We like what we see or we don’t, and then we go back to the task, either moving forward again, or backtracking to fix a conceptual or physical mistake. As you now know well, such attention switching and perspective switching is depleting, and like multitasking, it uses up more of the brain’s nutrients than staying engaged in a single task.

In situations like this, we are functioning as both the boss and the employee. Just because you’re good at one doesn’t mean you’ll be any good at the other. Every general contractor knows painters, carpenters, or tile setters capable of great work, but only when someone is standing by to give perspective. Many subcontractors actually doing the work have neither the desire nor the ability to think about budgets or make decisions about the optimum trade-off between time and money. Indeed, left to their own devices, some are such perfectionists that nothing ever gets finished. I once worked with a recording engineer who blew through a budget trying to make one three-minute song perfect before I was able to stop him and remind him that we still had eleven other songs to do. In the world of music, it’s no accident that only a few artists produce themselves effectively (Stevie Wonder, Paul McCartney, Prince, Jimmy Page, Joni Mitchell, and Steely Dan). Many, many PhD students fall into this category, never finishing their degrees because they can’t move forward—they’re too perfectionistic. The real job in supervising PhD students isn’t teaching them facts; it’s keeping them on track.

Planning and doing require separate parts of the brain. To be both a boss and a worker, one needs to form and maintain multiple, hierarchically organized attentional sets and then bounce back and forth between them. It’s the central executive in your brain that notices that the floor is dirty. It forms an executive attentional set for “mop the floor” and then constructs a worker attentional set for doing the actual mopping. The executive set cares only that the job is done and is done well. It might find the mop, a bucket the mop fits into, the floor cleaning product. Then, the worker set gets down to wetting the mop, starting the job, monitoring the mop head so you know when it’s time to put it back in the bucket, rinsing the head now and then when it gets too dirty. A good worker will be able to call upon a level of attention subordinate to all that and momentarily become a kind of detail-oriented worker who sees a spot that won’t come out with the mop, gets down on his hands and knees, and scrapes or scrubs or uses whatever method necessary to get that spot out. This detail-oriented worker has a different mind-set and different goals from those of the regular worker or boss. If your spouse walks in, after the detail guy has been working for fifteen minutes on a smudge off in the corner, and says, “What—are you crazy!? You’ve got the entire floor left to do and the guests will be here in fifteen minutes!” the detail guy is pulled up into the perspective of the boss and sees the big picture again.

All this level shifting, from boss down to worker down to detail worker and back again, is a shifting of the attentional set and it comes with the metabolic costs of multitasking. It’s exactly the reason a good hand car wash facility has these jobs spread out among three classes of workers. There are the car washers who do just the broad strokes of soaping down and rinsing the whole car. When they’re done, the detail guys come in and look closely to see if there are any leftover dirty spots, to clean the wheels and bumpers, and present the car to you. There’s also a boss who’s looking over the whole operation to make sure that no worker spends too much or too little time at any one point or on any one car. By dividing up the roles in this way, each worker forms one, rather than three, attentional sets and can throw himself into that role without worrying about anything at a different level.

We can all learn from this because we all have to be workers in one form or another at least some of the time. The research says that if you have chores to do, put similar chores together. If you’ve collected a bunch of bills to pay, just pay the bills—don’t use that time to make big decisions about whether to move to a smaller house or buy a new car. If you’ve set aside time to clean the house, don’t also use that time to repair your front steps or reorganize your closet. Stay focused and maintain a single attentional set through to completion of a job. Organizing our mental resources efficiently means providing slots in our schedules where we can maintain an attentional set for an extended period. This allows us to get more done and finish up with more energy.
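In list-processing terms, this advice amounts to sorting your task list by the attentional set each item requires, then working through one batch at a time. A toy sketch with invented chores:

```python
from itertools import groupby

# Each chore is tagged with the attentional set it requires (invented data).
chores = [
    ("pay electric bill", "bills"),
    ("mop the kitchen", "cleaning"),
    ("pay water bill", "bills"),
    ("vacuum the hallway", "cleaning"),
    ("pay credit card", "bills"),
]

# Sort so chores sharing an attentional set are adjacent, then batch them.
chores.sort(key=lambda chore: chore[1])
for attentional_set, batch in groupby(chores, key=lambda chore: chore[1]):
    print(attentional_set, "->", [name for name, _ in batch])
# bills -> ['pay electric bill', 'pay water bill', 'pay credit card']
# cleaning -> ['mop the kitchen', 'vacuum the hallway']
```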

Related to the manager/worker distinction is that the prefrontal cortex contains circuits responsible for telling us whether we’re controlling something or someone else is. When we set up a system, this part of the brain marks it as self-generated. When we step into someone else’s system, the brain marks it that way. This may help explain why it’s easier to stick with an exercise program or diet that someone else sets up: We typically trust them as “experts” more than we trust ourselves. “My trainer told me to do three sets of ten reps at forty pounds—he’s a trainer, he must know what he’s talking about. I can’t design my own workout—what do I know?” It takes Herculean amounts of discipline to overcome the brain’s bias against self-generated motivational systems. Why? Because as with the fundamental attribution error we saw in Chapter 4, we don’t have access to others’ minds, only our own. We are painfully aware of all the fretting and indecision, all the nuances of our internal decision-making process that led us to reach a particular conclusion. (I really need to get serious about exercise.) We don’t have access to that (largely internal) process in others, so we tend to take their certainty as more compelling, in many cases, than our own. (Here’s your program. Do it every day.)

To perform all but the simplest tasks requires flexible thinking and adaptiveness. Along with the many other distinctly human traits discussed, the prefrontal cortex allows us the flexibility to change behavior based on context. We alter the pressure required to slice a carrot versus slicing cheese; we explain our work differently to our grandma than to our boss; we use a pot holder to take something out of the oven but not out of the refrigerator. The prefrontal cortex is necessary for such adaptive strategies for living daily life, whether we’re foraging for food on the savanna or living in skyscrapers in the city.

The balance between flexible thinking and staying on task is assessed by neuropsychologists using a test called the Wisconsin Card Sorting Test. People are asked to sort a deck of specially marked cards according to a rule. In the example below, the instruction might be to sort the new, unnumbered card according to the shade of gray, in which case it should be put on pile 1. After getting used to sorting a bunch of cards according to this rule, you’re then given a new rule, for example, to sort by shape (in which case the new card should be put on pile 4) or to sort by number (in which case the new card should be put on pile 2).

[Figure: a Wisconsin Card Sorting Test card to be sorted onto one of four piles, by shade, shape, or number.]

People with frontal lobe deficits have difficulty changing the rule once they’ve started; they tend to perseverate, applying an old rule after a new one is given. Or they show an inability to stick to a rule, and err by suddenly applying a new rule without being prompted. It was recently discovered that holding a rule in mind and following it is accomplished by networks of neurons that synchronize their firing patterns, creating a distinctive brain wave. For example, if you’re following the shading rule in the card sorting task, your brain waves will oscillate at a particular frequency until you switch to follow shape, and then they’ll oscillate at a different frequency. You can think of this by analogy to radio broadcasts: It’s as though a given rule operates in the brain on a particular frequency so that all the instructions and communication of that rule can remain distinct from other instructions and communications about other rules, each of which is transmitted and coordinated on its own designated frequency band.
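The structure of the test is easy to see in a toy simulation (a sketch, not the clinical instrument: feedback is simplified to whether the sorter’s current rule matches the examiner’s, and the persistence parameter is a crude stand-in for perseveration):

```python
import random

RULES = ["shade", "shape", "number"]

def run_wcst(trials=30, switch_every=10, persistence=1, seed=0):
    """Toy Wisconsin Card Sorting Test.

    The examiner silently changes the active rule every `switch_every`
    trials. The sorter abandons a rule only after `persistence`
    consecutive errors: persistence=1 models a flexible, healthy sorter;
    a large value mimics the perseveration seen after frontal lobe damage.
    Returns the total number of sorting errors.
    """
    rng = random.Random(seed)
    active = believed = rng.choice(RULES)
    errors = consecutive_misses = 0
    for trial in range(trials):
        if trial > 0 and trial % switch_every == 0:   # unannounced rule switch
            active = rng.choice([r for r in RULES if r != active])
        if believed == active:                        # simplified feedback
            consecutive_misses = 0
        else:
            errors += 1
            consecutive_misses += 1
            if consecutive_misses >= persistence:     # finally let the rule go
                believed = rng.choice([r for r in RULES if r != believed])
                consecutive_misses = 0
    return errors

print("flexible sorter errors:     ", run_wcst(persistence=1))
print("perseverating sorter errors:", run_wcst(persistence=5))
```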

Reaching our goals efficiently requires the ability to selectively focus on those features of a task that are most relevant to its completion, while successfully ignoring other features or stimuli in the environment that are competing for attention. But how do you know what factors are relevant and what factors aren’t? This is where expertise comes in—in fact, it could be said that what distinguishes experts from novices is that they know what to pay attention to and what to ignore. If you don’t know anything at all about cars and you’re trying to diagnose a problem, every screech, sputter, and knock in the engine is potential information and you try to attend to them all. If you’re an expert mechanic, you home in on the one noise that is relevant and ignore the others. A good mechanic is a detective (as is a good physician), investigating the origins of a problem so as to learn the story of what happened. Some car components are relevant to the story and some aren’t. The fact that you filled up with a low-octane gasoline this morning might be relevant to the backfiring. The fact that your brakes squeak isn’t. Similarly, some temporal events are important and some aren’t. If you put in that low-octane gas this morning, it’s different than if you did it a year ago.

We take for granted that movies have well-defined temporal frames—scenes—parts of the story that are segmented with a beginning and an end. One way of signaling this is that when one scene ends, there is a break in continuity—a cut. Its name comes from analog film; in the editing room, the film would be physically cut at the end of one event, and spliced to the beginning of another (nowadays, this is done digitally and there is no physical cutting, but the digital editing tools use a little scissors icon to represent the action, and we still call this a cut, just as we “cut and paste” with our word processors). Without cuts signifying the ends of scenes, the material would become a single onslaught of information, 120 minutes long, that the brain would find difficult to process and digest. Of course modern filmmaking, particularly in action movies, uses far more cuts than was previously the norm, as a way to engage our ever hungrier appetite for visual stimulation.

Movies use the cut in three different ways, which we’ve learned to interpret by experience. A cut can signify a discontinuity in time (the new scene begins three hours later), in place (the new scene begins on the other side of town), or in perspective (as when you see two people talking and the camera shifts from looking at one face to looking at the other).

These conventions seem obvious to us. But we’ve learned them through a lifetime of exposure to comics, TV, and films. They are actually cultural inventions that have no meaning for someone outside our culture. Jim Ferguson, an anthropologist at Stanford, describes his own empirical observation of this when he was doing fieldwork in sub-Saharan Africa:

When I was living among the Sotho, I went into the city one day with one of the villagers. The city is something he had no experience with. This was an intelligent and literate man—he had read the Bible, for example. But when he saw a television for the first time in a shop, he couldn’t make heads or tails of what was going on. The narrative conventions that we use to tell a story in film and TV were completely unknown to him. For example, one scene would end and another would begin at a different time and place. This gap was completely baffling to him. Or during a single scene, the camera would focus on one person, then another, in order to take another perspective. He struggled, but simply couldn’t follow the story. We take these for granted because we grew up with them.

Film cuts are extensions of culturally specific storytelling conventions that we also see in our plays, novels, and short stories. Stories don’t include every single detail about every minute in a character’s life—they jump to salient events, and we have been trained to understand what’s going on.

Our brains encode information in scenes or chunks, mirroring the work of writers, directors, and editors. To do that, the information packets, like movie scenes, must have a beginning and an ending. Implicit in our management of time is that our brains automatically organize and segment the things we see and do into chunks of activity. Richard is not building a house today or even building the bathroom, he is preparing the kitchen floor for the tile. Even Superman chunks—he may wake up every morning and tell Lois Lane, “I’m off to save the world today, honey,” but what he tells himself is the laundry list of chunked tasks that need to be done to accomplish that goal, each with a well-defined beginning and ending. (1. Capture Lex Luthor. 2. Dispose of Kryptonite safely. 3. Hurl ticking bomb into outer space. 4. Pick up clean cape from dry cleaner.)

Chunking serves two important functions in our lives. First, it renders large-scale projects doable by giving us well-differentiated tasks. Second, it renders the experiences of our lives memorable by segmenting them with well-defined beginnings and endings—this in turn allows memories to be stored and retrieved in manageable units. Although our actual waking time is continuous, we can easily talk about the events of our lives as being differentiated in time. The act of having breakfast has a more or less well differentiated beginning and ending, as does your morning shower. They don’t bleed into one another in your memory because the brain does the editing, segmenting, and labeling for you. And we can subdivide these scenes at will. We make sense of the events in our lives by segmenting them, giving them temporal boundaries. We don’t treat our daily lives as undifferentiated moments; we group moments into salient events such as “brushing my teeth,” “eating breakfast,” “reading the newspaper,” and “driving to the train station.” That is, our brains implicitly impose a beginning and an ending to events. Similarly, we don’t perceive or remember a football game as a continuous sequence of action; we remember the game in terms of its quarters, downs, and specific important plays. And it’s not just because the rules of the game create these divisions. When talking about a particular play, we can further subdivide: We remember the running back peeling off into the open; the quarterback dodging the defensive linemen; the arm of the quarterback stretched back and ready to throw; the fake throw; and then the quarterback suddenly running, stride-by-stride, for a surprise touchdown.

There is a dedicated portion of the brain that partitions long events into chunks, and it is in—you guessed it—the prefrontal cortex. An interesting feature of this event segmentation is that hierarchies are created without our even thinking about them, and without our instructing our brains to make them. That is, our brains automatically create multiple, hierarchical representations of reality. And we can review these in our mind’s eye from either direction—from the top down, that is, from large time scales to small, or from the bottom up, from small time scales to large.

Consider a question such as asking a friend, “What did you do yesterday?” Your friend might give a simple, high-level overview such as “Oh, yesterday was like any other day. I went to work, came home, had dinner, and then watched TV.” Descriptions like these are typical of how people talk about events, making sense of a complex dynamic world in part by segmenting it into a modest number of meaningful units. Notice how this response implicitly skips over a lot of detail that is probably generic and unremarkable, concerning how your friend woke up and got out of the house. And the description jumps right to his or her workday. This is followed by two more salient events: eating dinner and watching TV.

The proof that hierarchical processing exists is in the fact that normal, healthy people can subdivide their answer into increasingly smaller parts if you ask them to. Prompt them with “Tell me more about the dinner?” and you might get a response like “Well, I made a salad, heated up some leftovers from the party we had the night before, and then finished that nice Bordeaux that Heather and Lenny brought over, even though Lenny doesn’t drink.”

And you can drill down still more: “How exactly did you prepare the salad? Don’t leave anything out.”

“I took some lettuce out of the crisper in the refrigerator, washed it, sliced some tomatoes, shredded some carrots, and then added a can of hearts of palm. Then I put on some Kraft Italian dressing.”

“Tell me in even more detail how you prepared the lettuce. As though you were telling someone who has never done this before.”

“I took out a wooden salad bowl from the cupboard and wiped it clean with a dish towel. I opened the refrigerator and took out a head of red leaf lettuce from the vegetable crisper. I peeled off layers of lettuce leaves, looked carefully to make sure that there weren’t any bugs or worms, tore the leaves into bite-size pieces, then soaked them in a bowl of water for a bit. Then I drained the water, rinsed the leaves under running water, and put them in a salad spinner to dry them. Then I put all the now-dry lettuce into the salad bowl and added the other ingredients I mentioned.”

Each of these descriptions holds a place in the hierarchy, and each can be considered an event with a different level of temporal resolution. There is a natural level at which we tend to describe these events, mimicking the natural level of description I wrote about in Chapter 2—the basic level of categories in describing things like birds and trees. If you use a level of description that is too high or too low in the hierarchy—which is to say, a level of description that is unexpected or atypical—it is usually to make some kind of point. It seems aberrant to use the wrong level of description, and it violates the Gricean maxim of quantity.
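The hierarchy your friend is traversing can be represented directly as a tree of events, each expandable into finer-grained subevents. A minimal sketch (using the events from the dialogue above):

```python
# An event is a (name, subevents) pair; deeper levels carry finer
# temporal resolution.
day = ("yesterday", [
    ("went to work", []),
    ("came home", []),
    ("had dinner", [
        ("made a salad", [
            ("took lettuce from the crisper", []),
            ("washed, tore, and dried the leaves", []),
            ("sliced tomatoes, shredded carrots, added hearts of palm", []),
        ]),
        ("heated up leftovers", []),
        ("finished the Bordeaux", []),
    ]),
    ("watched TV", []),
])

def describe(event, max_depth, depth=0):
    """Print the event tree down to the chosen level of temporal resolution."""
    name, subevents = event
    print("  " * depth + name)
    if depth < max_depth:
        for sub in subevents:
            describe(sub, max_depth, depth + 1)

describe(day, max_depth=1)  # the high-level overview
describe(day, max_depth=3)  # "tell me more about the dinner..."
```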

Artists often flout these norms to make an artistic gesture, to cause the audience to see things differently. We can imagine a film sequence in which someone is preparing a salad, and every little motion of tearing lettuce leaves is shown as a close-up. This might seem to violate a storytelling convention of recounting information that moves the story forward, but in surprising us with this seemingly unimportant lettuce tearing, the filmmaker or storyteller creates a dramatic gesture. By focusing on the mundane, it may convey something about the mental state of the character, or build tension toward an impending crisis in the story. Or maybe we see a centipede in the lettuce that the character doesn’t notice.

The temporal chunking that our brains create isn’t always explicit. In films, when the scene cuts from one moment to another, our brains automatically fill in the missing information, often as a result of a completely separate set of cultural conventions. In television shows from the relatively modest 1960s (Rob and Laura Petrie slept in separate twin beds!), a man and a woman might be seen sitting on the edge of the bed kissing before the scene fades to black and cuts to the next morning, when they wake up together. We’re meant to infer a number of intimate activities that occurred between the fade-out and the new scene, activities that could not be shown on network TV in the 1960s.

A particularly interesting example of inference occurs in many single-panel comics. Often the humor requires you to imagine what happened in the instant immediately before or immediately after the panel you’re being shown. It’s as though the cartoonist devised a series of four or five panels to tell the story and has chosen to show you only one—and typically not even the funniest one but the one right before or right after what would be the funniest panel. It’s this act of audience participation and imagination that makes the single-panel comic so engaging and so rewarding—to get the joke, you actually have to figure out what some of those missing panels must be.

Take this example from Bizarro:

[Bizarro cartoon: a judge issuing a stern warning to the courtroom.]

The humor is not so much in what the judge is saying but in our imagining what must have gone on in the courtroom moments before to elicit such a warning! Because we are coparticipants in figuring out the joke, cartoons like these are more memorable and pleasurable than ones in which every detail is handed to us. This follows a well-established principle of cognitive psychology called levels of processing: Items that are processed at a deeper level, with more active involvement by us, tend to become more strongly encoded in memory. This is why passive learning through textbooks and lectures is not nearly as effective a way to learn new material as is figuring it out for yourself, a method called peer instruction that is being introduced into classrooms with great success.

Sleep Time

Going to bed later or getting up earlier is a daily time-management tactic we all use and barely notice, and it revolves around that large block of lost time that can make all of us feel unproductive: sleep. It’s only recently that we’ve begun to understand the enormous amount of cognitive processing that occurs while we’re asleep. In particular, we now know that sleep plays a vital role in the consolidation of events of the previous few days, and therefore in the formation and protection of memories.

Newly acquired memories are initially unstable and require a process of neural strengthening or consolidation to become resistant to interference, and to become accessible to us for retrieval. For a memory to be accessible means that we can retrieve it using a variety of different cues. Take, for example, that lunch of shrimp scampi I had at the beach a few weeks ago with my high-school buddy Jim Ferguson. If my memory system is functioning normally, by today, any of the following queries should be able to evoke one or more memories associated with the experience:

When did you last eat shrimp scampi?

Have you been to the beach recently?

When did you last see Jim Ferguson?

Have you caught up with any friends from high school lately?

Where did you have lunch a few weeks ago?

In other words, there are a variety of ways that a single event such as a lunch with an old friend can be contextualized. For all of these attributes to be associated with the event, the brain has to toss and turn and analyze the experience after it happens, extracting and sorting information in complex ways. And this new memory needs to be integrated into existing conceptual frameworks, integrated into old memories previously stored in the brain (shrimp is seafood, Jim Ferguson is a friend from high school, good table manners do not include wiping shrimp off your mouth with the tablecloth).

In the last few years, we’ve gained a more nuanced understanding that these different processes are accomplished during distinct phases of sleep. They both preserve memories in their original form and extract features and meaning from the experiences. This allows new experiences to become integrated into the more generalized and hierarchical representation of the outside world that we hold inside our heads. Memory consolidation requires that our brains fine-tune the neural circuits that first encountered the new experience. According to one theory that is gaining acceptance, this has to be done while we’re asleep; otherwise the activity in those circuits would be confused with an actually occurring experience. All of this tuning, extraction, and consolidation doesn’t happen during a single night but unfolds over several sequential nights, which is why disrupted sleep even two or three days after an experience can impair your memory of it months or years later.

Sleep experts Matthew Walker (of UC Berkeley) and Robert Stickgold (of Harvard Medical School) describe three distinct kinds of information processing that occur during sleep. The first is unitization, the combining of discrete elements or chunks of an experience into a unified concept. For example, musicians and actors who are learning a new piece or scene might practice one phrase at a time; unitization during sleep binds these together into a seamless whole.

The second kind of information processing we accomplish during sleep is assimilation. Here, the brain integrates new information into the existing network structure of other things you already know. In learning new words, for example, your brain works unconsciously to construct sample sentences with them, turning them over and experimenting with how they fit into your preexisting knowledge. Brain cells that used a lot of energy during the day show an increase of ATP (adenosine triphosphate, a coenzyme that also serves in neural signaling) during sleep, and this increase has been associated with assimilation.

The third process is abstraction, in which hidden rules are discovered and then entered into memory. If you learned English as a child, you learned certain rules about word formation such as “add s to the end of a word to make it plural” or “add ed to the end of a word to make it past tense.” If you’re like most learners, no one taught you these rules—your brain abstracted them from exposure to multiple instances. This is why children make the perfectly logical mistake of saying “he goed” instead of “he went,” or “he swimmed” instead of “he swam.” The abstraction is correct; it just doesn’t apply to these particular irregular verbs. Across a range of inferences involving not just language but also mathematics, logic problems, and spatial reasoning, sleep has been shown to enhance the formation and understanding of abstract relations, so much so that people often wake having solved a problem that was unsolvable the night before. This may be part of the reason why young children just learning language sleep so much.

Thus, many different kinds of learning have been shown to be improved after a night’s sleep, but not after an equivalent period of being awake. Musicians who learn a new melody show significant improvement in performing it after one night’s sleep. Students who were stymied by a calculus problem the day it was presented are able to solve it more easily after a night’s sleep than after an equivalent amount of waking time. New information and concepts appear to be quietly practiced while we’re asleep, sometimes showing up in dreams. A night of sleep more than doubles the likelihood that you’ll solve a problem requiring insight.

Many people remember the first day they played with a Rubik’s Cube, and report that their dreams that night were invaded by images of those brightly colored squares rotating and clicking. The next day, they are much better at the puzzle—while they slept, their brains had extracted principles of where things were, relying on both their conscious perceptions of the previous day and myriad unconscious ones. Researchers found the same thing when studying Tetris players’ dreams. Although the players reported dreaming about Tetris, especially early in their learning, they didn’t dream about specific games or moves they had made; rather, they dreamed about abstract elements of the game. The researchers hypothesized that this created a template by which their brains could organize and store just the sort of generalized information that would be necessary to succeed at the game.

This kind of information consolidation happens all the time in our brains, but it happens more intensely for tasks we are more engaged with. Those calculus students didn’t simply glance at the problem during the day, they tried actively to solve it, focused attention on it, and then reapproached it after a night’s sleep. If you are only dimly engaged in your French language tapes, it is unlikely your sleep will help you to learn grammar and vocabulary. But if you struggle with the language for an hour or more during the day, investing your focus, energy, and emotions in it, then it will be ripe for replay and elaboration during your sleep. This is why language immersion works so well—you’re emotionally invested and interpersonally engaged with the language as you attempt to survive in the new linguistic environment. This kind of learning, in a way, is hard to manufacture in the classroom or language laboratory.

Perhaps the most important principle of memory is that we tend to remember best those things we care about the most. At a biological level, neurochemical tags are created and attached to experiences that are emotionally important; and those appear to be the ones that our dreams grab hold of.

All sleep isn’t created equal when it comes to improving memory and learning. The two main categories of sleep are REM (rapid eye movement) and NREM (non-REM), with NREM sleep being further divided into four stages, each with a distinct pattern of brain waves. REM sleep is when our most vivid and detailed dreams occur. Its most obvious feature is temporary muscle paralysis (so that if you’re running in your dream, you don’t get out of bed and start running around the house). REM sleep is also characterized by low-voltage brain wave patterns on the EEG, and by the rapid, flickering eye movements for which it is named. It used to be thought that all our dreaming occurs during REM sleep, but there is newer evidence that we can dream during NREM sleep as well, although those dreams tend to be less elaborate. Most mammals show physiologically similar states, and we assume they’re dreaming, but we can’t know for sure. Additional dreamlike states can occur just as we’re falling asleep and just as we’re waking up; these can feature vivid auditory and visual imagery that can seem hallucinatory.

REM sleep is believed to be the stage during which the brain performs the deepest processing of events—the unitization, assimilation, and abstraction mentioned above. The neurochemical changes that mediate it include decreased noradrenaline and increased levels of acetylcholine and cortisol. A preponderance of theta wave activity during REM facilitates associative linking between disparate brain regions. This has two interesting effects. The first is that it allows our brains to draw out deep, underlying connections between the events in our lives that we might not otherwise perceive, by activating thoughts that are far-flung in our consciousness and unconsciousness. It’s what lets us perceive, for example, that clouds look a bit like marshmallows, or that “Der Kommissar” by Falco uses the same musical hook as “Super Freak” by Rick James. The second effect is that it appears to cause dreams in which these connections morph into one another: You dream you’re eating a marshmallow and it suddenly floats up to the sky and becomes a rain cloud; you’re watching Rick James on TV and he’s driving a Ford Falcon (the brain can be a terrible punster—Falco becomes Falcon); you’re walking down a street and suddenly the street is in a completely different town, and the sidewalk turns to water. These distortions are a product of the brain exploring possible relations among disparate ideas and things. And it’s a good thing they happen only while you’re asleep, or your view of reality would be unreliable.

There’s another kind of distortion that occurs when we sleep—time distortion. What may seem like a long, elaborate dream spanning thirty minutes or more may actually occur within the span of a single minute. This may be due to the fact that the body’s own internal clock is in a reduced state of activation (you might say it is asleep, too) and so becomes unreliable.

The transition between REM and NREM sleep is believed to be mediated by GABAergic neurons near the brainstem, the same type of inhibitory neurons found in the prefrontal cortex. Current thinking is that these and other neurons in the brain act as switches, bringing us from one state to the other. Damage to one part of this brain region causes a dramatic reduction in REM sleep, while damage to another causes an increase.

A normal human sleep cycle lasts about 90–100 minutes. Around 20 of those minutes on average are spent dreaming in REM sleep, and 70–80 in NREM sleep, although the proportions vary throughout the night: REM periods may be only 5–10 minutes at the beginning of the night and expand to 30 minutes or more in the early morning hours. Most memory consolidation occurs in the first two hours of slow-wave, NREM sleep, and during the last 90 minutes of REM sleep in the morning. This is why drinking and drugs (including sleep medications) can interfere with memory: That crucial first sleep cycle is compromised by intoxication. And it is why sleep deprivation leads to memory loss: The crucial 90 minutes of REM sleep at the end is either interrupted or never occurs. Nor can you make up for lost sleep time. Sleep deprivation after a day of learning prevents sleep-related improvement, even three days later, following two nights of good sleep. This is because recovery sleep, or rebound sleep, is characterized by abnormal brain waves as the dream cycle attempts to resynchronize with the body’s circadian rhythm.
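If you like to see the arithmetic, here is a minimal sketch in Python of how those proportions play out over a night; the per-cycle REM values are illustrative assumptions fitted to the ranges above, not clinical data.

```python
# An illustrative model of the cycle structure described above: 90-100
# minute cycles, with REM growing from roughly 5-10 minutes early in the
# night to 30 or more toward morning. Per-cycle REM numbers are assumed.

CYCLE_MINUTES = 95                  # midpoint of the 90-100 minute range
REM_BY_CYCLE = [5, 10, 15, 25, 35]  # assumed REM minutes per successive cycle

total_sleep = CYCLE_MINUTES * len(REM_BY_CYCLE)  # 475 minutes, about 7.9 hours
total_rem = sum(REM_BY_CYCLE)                    # 90 minutes of REM

print(f"{total_sleep / 60:.1f} hours of sleep, {total_rem} minutes of REM")
# Cutting the night short removes the longest REM periods first, which is
# consistent with why that late-morning REM matters so much for memory.
```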

Sleep may also be a fundamental property of neuronal metabolism. In addition to the information consolidation functions, a new finding in 2013 showed that sleep is necessary for cellular housekeeping. Like the garbage trucks that roam city streets at five A.M., specific metabolic processes in the glymphatic system clear neural pathways of potentially toxic waste products that accumulate during waking thought. As discussed in Chapter 2, we also know that it is not an all-or-none phenomenon: Parts of the brain sleep while others do not, leading to not just the sense but the reality that sometimes we are half-asleep or sleeping only lightly. If you’ve ever had a brain freeze—momentarily unable to remember something obvious—or if you’ve ever found yourself doing something silly like putting orange juice on your cereal, it may well be that part of your brain is taking a nap. Or it could just be that you’re thinking about too many things at once, having overloaded your attentional system.

Several factors contribute to feelings of sleepiness. First, the twenty-four-hour cycle of light and darkness influences the production of neurochemicals specifically geared to induce wakeful alertness or sleepiness. Sunlight impinging on photoreceptors in the retina triggers a chain reaction of processes resulting in stimulation of the suprachiasmatic nucleus and the pineal gland, a small gland near the base of the brain, about the size of a grain of rice. About one hour after dark, the pineal gland produces melatonin, a neurohormone partly responsible for giving us the urge to sleep (and causing the brain to go into a sleep state).

The sleep-wake cycle can be likened to a thermostat in your home. When the temperature falls to a certain point, the thermostat closes an electrical circuit, causing your furnace to turn on. Then, when your preset, desired temperature is reached, the thermostat interrupts the circuit and the furnace turns off again. Sleep is similarly governed by neural switches. These follow a homeostatic process and are influenced by a number of factors, including your circadian rhythm, food intake, blood sugar level, the condition of your immune system, stress, and sunlight and darkness. When your homeostat rises above a certain point, it triggers the release of neurohormones that induce sleep. When it falls below a certain point, a separate set of neurohormones is released to induce wakefulness.
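For the mechanically minded, here is a minimal sketch of that thermostat-like switching in Python; the single “sleep pressure” number and the threshold values are illustrative assumptions, not physiological quantities.

```python
# A hysteresis switch, like the home thermostat described above: change
# state only when the controlled quantity crosses a threshold, and hold
# the current state in between. Thresholds are illustrative assumptions.

def sleep_switch(sleep_pressure, asleep, high=0.8, low=0.2):
    """Return the new state (True = asleep) for a 0-to-1 'sleep pressure'."""
    if sleep_pressure >= high:
        return True    # sleep-inducing neurohormones win out
    if sleep_pressure <= low:
        return False   # wake-promoting neurohormones win out
    return asleep      # between thresholds, hold the current state

# Pressure builds across the day, then dissipates during sleep:
state = False
for pressure in [0.3, 0.6, 0.85, 0.5, 0.15]:
    state = sleep_switch(pressure, state)
    print(pressure, "asleep" if state else "awake")
# 0.85 flips us asleep; 0.5 holds sleep (hysteresis); 0.15 flips us awake.
```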

At one time or another, you’ve probably thought that if only you could sleep less, you’d get so much more done. Or that you could just borrow time by sleeping one hour less tonight and one hour more tomorrow night. As enticing as these seem, they’re not borne out by research. Sleep is among the most critical factors for peak performance, memory, productivity, immune function, and mood regulation. Even a mild sleep reduction or a departure from a set sleep routine (for example, going to bed late one night, sleeping in the next morning) can produce detrimental effects on cognitive performance for many days afterward. When professional basketball players got ten hours of sleep a night, their performance improved dramatically: Free-throw and three-point shooting each improved by 9%.

Most of us follow a sleep-waking pattern of sleeping for 6–8 hours followed by staying awake for approximately 16–18 hours. This is a relatively recent invention. For most of human history, our ancestors engaged in two rounds of sleep, called segmented sleep or bimodal sleep, in addition to an afternoon nap. The first round of sleep would last four or five hours, beginning soon after dinner, followed by an awake period of an hour or more in the middle of the night, followed by a second round of four or five hours of sleep. That middle-of-the-night waking might have evolved to help ward off nocturnal predators. Bimodal sleep appears to be a biological norm that was subverted by the invention of artificial light, and there is scientific evidence that the bimodal sleep-plus-nap regime is healthier and promotes greater life satisfaction, efficiency, and performance.

To many of us raised with the 6–8 hour, no-nap sleep ideal, this sounds like a bunch of hippie-dippy, flaky foolishness at the fringe of quackery. But it was discovered (or rediscovered, you might say) by Thomas Wehr, a respected scientist at the U.S. National Institute of Mental Health. In a landmark study, he enlisted research participants to live for a month in a room that was dark for fourteen hours a day, mimicking conditions before the invention of the lightbulb. Left to their own devices, they ended up sleeping eight hours a night but in two separate blocks. They tended to fall asleep one or two hours after the room went dark, slept for about four hours, stayed awake for an hour or two, and then slept for another four hours.

Millions of people report difficulty sleeping straight through the night. Because uninterrupted sleep appears to be our cultural norm, they experience great distress and ask their doctors for medication to help them stay asleep. Many sleep medications are addictive, have side effects, and leave people feeling drowsy the next morning. They also interfere with memory consolidation. It may be that a simple change in our expectations about sleep and a change to our schedules can go a long way.

There are large individual differences in sleep cycles. Some people fall asleep in a few minutes; others take an hour or more. Both are considered within the normal range of human behavior—what matters is what is normal for you, and noticing any sudden change in your pattern, which could indicate disease or disorder. Regardless of whether you sleep straight through the night or adopt the ancient bimodal sleep pattern, how much sleep should you get? Rough guidelines from research suggest the following, but these are just averages—some individuals really do require more or less than what is indicated, and this appears to be hereditary. Contrary to popular myth, the elderly do not need less sleep; they are just less able to sleep for eight hours at a stretch.

AVERAGE SLEEP NEEDS

Age                                Needed sleep
Newborns (0–2 months)              12–18 hours
Infants (3–11 months)              14–15 hours
Toddlers (1–3 years)               12–14 hours
Preschoolers (3–5 years)           11–13 hours
Children (5–10 years)              10–11 hours
Preteens and teenagers (10–17)     8 1/2–9 1/4 hours
Adults                             6–10 hours

One out of every three working Americans gets less than six hours’ sleep per night, well below the recommended range noted above. The U.S. Centers for Disease Control and Prevention (CDC) declared sleep deprivation a public health epidemic in 2013.

The prevailing view until the 1990s was that people could adapt to chronic sleep loss without adverse cognitive effects, but newer research clearly says otherwise. Sleepiness was responsible for 250,000 traffic accidents in 2009, and is one of the leading causes of friendly fire—soldiers mistakenly shooting people on their own side. Sleep deprivation was ruled to be a contributing factor in some of the most well-known global disasters: the nuclear power plant disasters at Chernobyl (Ukraine), Three Mile Island (Pennsylvania), Davis-Besse (Ohio), and Rancho Seco (California); the oil spill from the Exxon Valdez; the grounding of the cruise ship Star Princess; and the fatal decision to launch the Challenger space shuttle. Remember the Air France plane that crashed into the Atlantic Ocean in June 2009, killing all 228 people on board? The captain had been running on only one hour of sleep, and the copilots were also sleep deprived.

In addition to loss of life, there is the economic impact. Sleep deprivation is estimated to cost U.S. businesses more than $150 billion a year in absences, accidents, and lost productivity—for comparison, that’s roughly the annual revenue of Apple Inc. If sleep-related economic losses were a business, it would be the sixth-largest business in the country. Sleep deprivation is also associated with increased risk for heart disease, obesity, stroke, and cancer. Too much sleep is also detrimental, but perhaps the most important factor in achieving peak alertness is consistency, so that the body’s circadian rhythms can lock into a regular cycle. Going to bed just one hour late one night, or sleeping in for an hour or two just one morning, can significantly affect your productivity, immune function, and mood for several days after the irregularity.

Part of the problem is cultural—our society does not value sleep. Sleep expert David K. Randall put it this way:

While we’ll spend thousands on lavish vacations to unwind, grind away hours exercising and pay exorbitant amounts for organic food, sleep remains ingrained in our cultural ethos as something that can be put off, dosed or ignored. We can’t look at sleep as an investment in our health because—after all—it’s just sleep. It is hard to feel like you’re taking an active step to improve your life with your head on a pillow.

Many of us substitute drugs for good sleep—an extra cup of coffee to take the place of that lost hour or two of sleep, and a sleeping pill if all that daytime caffeine makes it hard to fall asleep at night. It is true that caffeine enhances cognitive function, but it works best when you’ve been maintaining a consistent sleep pattern over many days and weeks; as a substitute for lost sleep, it may keep you awake, but it will not keep you alert or performing at peak ability. Sleeping pills have been shown to be counterproductive to both sleep and productivity. In one study, cognitive behavior therapy—a set of practices to change thought and behavior patterns—was found to be significantly more effective than the prescription drug Ambien in combating insomnia. In another study, sleeping pills allowed people on average to sleep only eleven minutes longer. More relevant, the quality of sleep with sleeping pills is poor, disrupting the normal brain waves of sleep, and there is usually a sleeping pill hangover of dulled alertness the next morning. Because medication-induced sleep quality is poor, memory consolidation is affected, so we experience short-term memory loss—we don’t remember that we didn’t get a good night’s sleep, and we don’t remember how groggy we were upon waking up.

One of the most powerful cues our body uses to regulate the sleep-wake cycle is light. Bright light in the morning signals the hypothalamus to release chemicals that help us wake up, such as orexin, cortisol, and adrenaline. For this reason, if you’re having trouble sleeping, it’s important to avoid bright lights right before bedtime, such as those from the TV or computer screen.

Here are some guidelines for a good night’s sleep: Go to bed at the same time every night. Wake up at the same time every morning. Set an alarm clock if necessary. If you have to stay up late one night, still get up at your fixed time the next morning—in the short run, the consistency of your cycle is more important than the amount of sleep. Sleep in a cool, dark room. Cover your windows if necessary to keep out light.

What about those delicious afternoon stretches on the couch? There’s a reason they feel so good: They’re an important part of resetting worn-out neural circuits. People differ widely in their ability to take naps and in whether they find naps helpful. For those who do, they can play a large role in creativity, memory, and efficiency. Naps longer than about forty minutes can be counterproductive, though, causing sleep inertia. For many people, five or ten minutes is enough.

But you can’t take naps just any old time—not all naps are created equal. Those little micronaps you take in between hitting the snooze button on your morning alarm? Those are counterproductive, giving you abnormal sleep that fails to settle into a normal brain wave pattern. Napping too close to bedtime can make it difficult or impossible to fall asleep at night.

In the United States, Great Britain, and Canada, napping tends to be frowned upon. We’re aware that members of Latino cultures have their naps—siestas—and we consider this a cultural oddity, not for us. We try to fight it off by having another cup of coffee when the drowsiness overtakes us. The British have institutionalized this fighting-off with four o’clock teatime. But the benefits of napping are well established. Even five- or ten-minute “power naps” yield significant cognitive enhancement, improvement in memory, and increased productivity. And the more intellectual the work, the greater the payoff. Naps also allow for the recalibration of our emotional equilibrium—after being exposed to angry and frightening stimuli, a nap can turn around negative emotions and increase happiness. How does a nap do all that? By activating the limbic system, the brain’s emotional center, and reducing levels of monoamines, naturally occurring neurotransmitters that are used in pill form to treat depression, anxiety, and schizophrenia. Napping has also been shown to reduce the incidence of cardiovascular disease, diabetes, stroke, and heart attacks. A number of companies now encourage their employees to take short naps—fifteen minutes is the corporate norm—and many companies have dedicated nap rooms with cots.

The emerging consensus is that sleep is not an all-or-nothing state. When we are tired, parts of our brain may be awake while other parts sleep, creating a kind of paradoxical mental state in which we think we’re awake, but core neural circuits are off-line, dozing. One of the first neural clusters to go off-line in cases like these is memory, so even though you think you’re awake, your memory system isn’t. This causes failures of retrieval (what was that word again?) and failures of storage (I know you just introduced yourself, but I forgot what you said your name is).

Normally, our body establishes a circadian rhythm synchronized to the sunrise and sunset of our local time zone, largely based on cues from sunlight and, to a lesser degree, mealtimes. This rhythm is part of a biological clock in the hypothalamus that also helps to regulate core body temperature, appetite, alertness, and growth hormones. Jet lag occurs when that circadian cycle becomes desynchronized from the time zone you’re in. This is partly due to the sunrise and sunset occurring at different times than your body clock expects, thus giving unexpected signals to the pineal gland. Jet lag is also due to our disrupting our circadian rhythm by waking, exercising, eating, and sleeping according to the new local time rather than to the home time our body clock is adjusted for. In general, the body clock is not easily shifted by external factors, and this resistance is what causes many of the difficulties associated with jet lag. These difficulties include clumsiness, fuzzy thinking, gastrointestinal problems, poor decision-making, and the most obvious one, being alert or sleepy at inappropriate times.

It’s been only in the past 150 years that we’ve been able to jump across time zones, and we haven’t evolved a way to adapt yet. Eastward travel is more difficult than westward because our body clock prefers a twenty-five-hour day: We can more easily stay awake an extra hour than fall asleep an hour early. Westward travel finds us having to delay our bedtime, which is not so difficult to do. Eastward travel finds us arriving in a city where it’s bedtime and we’re not yet tired. Traveling east is difficult even for people who do it all the time. One study of nineteen Major League Baseball teams found a significant effect: Teams that had just traveled eastward gave up, on average, more than one additional run per game. Olympians have shown significant deficits after traveling across time zones in either direction, including reductions in muscle strength and coordination.

As we age, resynchronizing the clock becomes more difficult, partly due to reductions in neuroplasticity. Individuals over the age of sixty have much greater difficulty with jet lag, especially on eastbound flights.

Aligning your body clock to the new environment requires a phase shift, and shifting takes about one day per time zone. So begin advancing or delaying your body clock as many days before your trip as the number of time zones you’ll be crossing. Before traveling east, get into sunlight early in the day. Before traveling west, avoid sunlight early by keeping the curtains drawn, and instead expose yourself to bright light in the evening, to simulate what would be late afternoon sun at your destination.
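As a back-of-the-envelope version of that advice, here is a small Python sketch; the function and its output are illustrative, and the one-day-per-time-zone figure is the rule of thumb above, not a clinical protocol.

```python
# A toy calculation of the pre-trip phase shift described above, using
# the one-day-per-time-zone rule of thumb. Names are illustrative.

def phase_shift_plan(zones_crossed, eastbound):
    """Days before departure to begin shifting, plus which way to shift."""
    days_before = abs(zones_crossed)
    if eastbound:
        advice = "advance your clock: earlier bedtime, early-morning sunlight"
    else:
        advice = "delay your clock: later bedtime, bright light in the evening"
    return days_before, advice

# New York to Paris crosses six time zones eastward:
days, advice = phase_shift_plan(6, eastbound=True)
print(f"Begin {days} days before the trip; {advice}")
```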

Once you’re on the plane, if you’re westbound, keep the overhead reading lamp on, even if it is your home bedtime. When you arrive in the western city, exercise lightly by taking a walk in the sun. That sunlight will delay the production of melatonin in your body. If you’re on an eastbound plane, wear eye shades to cover your eyes two hours or so before sunset in your destination city, to acclimate yourself to the new “dark” time.

Some research suggests that taking melatonin, 3–5 milligrams, two to three hours before bedtime can be effective, but this is controversial, for other studies have found no benefit. No studies have examined the long-term effects of melatonin, and young people and pregnant women have been advised to avoid it entirely. Although it is sometimes marketed as a sleep aid, melatonin will not help you sleep if you have insomnia because, by bedtime, your body has already produced as much melatonin as it can use.

When We Procrastinate

Many highly successful people claim to have ADD, and some genuinely meet the clinical definition. One of them was Jake Eberts, a film producer whose works include Chariots of Fire, Gandhi, Dances with Wolves, Driving Miss Daisy, A River Runs through It, The Killing Fields, and Chicken Run, and whose films received sixty-six Oscar nominations and seventeen Oscar wins (he passed away in 2012). By his own admission, he had a short attention span and very little patience, and he was easily bored. But his powerful intellect carried him: He graduated from McGill University at the age of twenty and led the engineering team for the European company Air Liquide before earning his MBA from Harvard Business School at age twenty-five. Early on, Jake identified his chief weakness: a tendency to procrastinate. He was of course not alone in this, and it is not a problem unique to people with attention deficit disorder. To combat it, Jake adopted a strict policy of “do it now.” If Jake had a number of calls to make or things to attend to piling up, he’d dive right in, even if it cut into leisure or socializing time. And he’d do the most unpleasant task—firing someone, haggling with an investor, paying bills—first thing in the morning, to get it out of the way. Following Mark Twain, Jake called it eating the frog: Do the most unpleasant task first thing in the morning, when gumption is highest, because willpower depletes as the day moves on. (The other thing that kept Jake on track was that, like most executives, he had executive assistants. He didn’t have to remember due dates or small items himself; he could just put a given task in “the Irene bucket” and his assistant, Irene, would take care of it.)

Procrastination is something that affects all of us to varying degrees. We rarely feel we’re caught up on everything. There are chores to do around the house, thank-you notes to write, computers and smartphones to synchronize and back up. Some of us are affected by procrastination only mildly, others severely. Across the whole spectrum, procrastination can be seen as a failure of self-regulation, planning, impulse control, or a combination of all three. By definition, it involves delaying an activity, task, or decision that would help us to reach our goals. In its mildest form, we simply start things later than we might have, and experience unneeded stress as a deadline looms closer and we have less and less time to finish. But it can lead to more problematic outcomes. Many people, for instance, delay seeing their doctors, and in the meantime their condition can become so bad that treatment is no longer an option; or they put off writing wills, filling out medical directives, installing smoke detectors, taking out life insurance, or starting a retirement savings plan until it’s too late.

The tendency to procrastinate has been found to be correlated with certain traits, lifestyles, and other factors. Although the effects are statistically significant, none of them is very large. Those who are younger and single (including divorced or separated) are slightly more likely to procrastinate. So are those with a Y chromosome—this could be why women are far more likely to graduate from college than men; they are less likely to procrastinate. As mentioned earlier, being outside in natural settings—parks, forests, the beach, the mountains, and the desert—replenishes self-regulatory mechanisms in the brain, and accordingly, living or spending time in nature, as opposed to urban environments, has been shown to reduce the tendency to procrastinate.

A related factor is what Cambridge University psychologist Jason Rentfrow calls selective migration—people are apt to move to places that they view as consistent with their personalities. Large urban centers are associated with a tendency to be better at critical thinking and creativity, but also with procrastination. This could be because there are so many things to do in a large urban center, or because the increased bombardment of sensory information reduces the ability to enter the daydreaming mode, the mode that replenishes the executive attention system. Is there a brain region implicated in procrastination? Since procrastination is a failure of self-regulation, planning, and impulse control, if you guessed the prefrontal cortex, you’d be right: Procrastination resembles the temporal planning deficits we saw following prefrontal damage at the beginning of this chapter. The medical literature reports many cases of patients who suddenly developed procrastination after damage to this region of the brain.

Procrastination comes in two types. Some of us procrastinate in order to pursue restful activities—spending time in bed, watching TV—while others of us put off difficult or unpleasant tasks in favor of ones that are more fun or that yield an immediate reward. In this respect, the two types differ in activity level: The rest-seeking procrastinators would generally rather not be exerting themselves at all, while the fun-task procrastinators enjoy being busy and active all the time but just have a hard time starting things that are not so fun.

An additional factor has to do with delayed gratification, and individual differences in how people tolerate that. Many people work on projects that have a long event horizon—for example, academics, businesspeople, engineers, writers, housing contractors, and artists. That is, the thing they’re working on can take weeks or months (or even years) to complete, and after completion, there can be a very long period of time before they get any reward, praise, or gratification. Many people in these professions enjoy hobbies such as gardening, playing a musical instrument, and cooking because those activities yield an immediate, tangible result—you can see the patch of your flower bed where you removed the weeds, you can hear the Chopin piece you’ve just played, and you can taste the rhubarb pie you just baked. In general, activities with a long time to completion—and hence a long time to reward—are the ones more likely to be started late, and those with an immediate reward are less likely to be procrastinated.

Piers Steel is an organizational psychologist, one of the world’s foremost authorities on procrastination and a professor at the Haskayne School of Business at the University of Calgary. Steel says that two underlying factors lead us to procrastinate:

Humans have a low tolerance for frustration. Moment by moment, when choosing what tasks to undertake or activities to pursue, we tend to choose not the most rewarding action but the easiest. This means that unpleasant or difficult things get put off.

We tend to evaluate our self-worth in terms of our achievements. Whether we lack self-confidence in general—or confidence that this particular project will turn out well—we procrastinate because that allows us to delay putting our reputations on the line until later. (This is what psychologists call an ego-protective maneuver.)

The low tolerance for frustration has neural underpinnings. Our limbic system and the parts of the brain that are seeking immediate rewards come into conflict with our prefrontal cortex, which all too well understands the consequences of falling behind. Both regions run on dopamine, but the dopamine has different actions in each. Dopamine in the prefrontal cortex causes us to focus and stay on task; dopamine in the limbic system, along with the brain’s own endogenous opioids, causes us to feel pleasure. We put things off whenever the desire for immediate pleasure wins out over our ability to delay gratification, depending on which dopamine system is in control.

Steel identifies what he calls two faulty beliefs: first, that life should be easy, and second, that our self-worth is dependent on our success. He goes further, to build an equation that quantifies the likelihood that we’ll procrastinate. If our self-confidence and the value of completing the task are both high, we’re less likely to procrastinate. These two factors become the denominator of the procrastination equation. (They’re in the denominator because they have an inverse relationship with procrastination—when they go up, procrastination goes down, and vice versa.) They are pitted against two other factors: how soon in time the reward will come, and how distractible we are. (Distractibility is seen as a combination of our need for immediate gratification, our level of impulsivity, and our ability to exercise self-control.) If the length of time it will take to complete the task is high, or our distractibility is high, this leads to an increase in procrastination.

Procrastination = (Time to completion × Distractibility) / (Self-confidence × Value of completing the task)

To refine Steel’s equation, I’ve added delay, the amount of time one has to wait to receive positive feedback for completion of the task. The greater the delay, the greater the likelihood of procrastination:

Procrastination = (Time to completion × Distractibility × Delay) / (Self-confidence × Value of completing the task)
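To make the equation concrete, here is a minimal sketch in Python; the 1-to-10 scales and the sample numbers are illustrative assumptions, not values from the research.

```python
# The refined procrastination equation from above: likelihood rises with
# time to completion, distractibility, and delay of the reward, and falls
# with self-confidence and task value. Scales and sample values are
# illustrative assumptions.

def procrastination_likelihood(time_to_complete, distractibility, delay,
                               self_confidence, task_value):
    return (time_to_complete * distractibility * delay) / (
        self_confidence * task_value)

# A long report with a distant payoff vs. a quick, valued thank-you note
# (each factor rated 1-10):
print(procrastination_likelihood(8, 7, 9, 4, 5))   # 25.2 -- likely to stall
print(procrastination_likelihood(2, 3, 1, 7, 8))   # ~0.11 -- likely to start
```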

Certain behaviors may look like procrastination but arise due to different factors. Some individuals suffer from initiation deficits, an inability to get started. This problem is distinct from planning difficulties, in which individuals fail to begin tasks sufficiently early to complete them because they have unrealistic or naïve ideas about how long it will take to complete subgoals. Others may fail to accomplish tasks on time because they don’t have the required objects or materials when they finally sit down to work. Both of these latter difficulties arise from a lack of planning, not from procrastination per se. On the other hand, some individuals may be attempting a challenging task with which they have no previous experience; they may simply not know where or how to begin. In these cases, having supervisors or teachers who can help them break up the problem into component parts is very helpful and often essential. Adopting a systematic, componential approach to assignments is effective in reducing this form of procrastination.

Finally, some individuals suffer from a chronic inability to finish projects they’ve started. This is not procrastination, because they don’t put off starting projects; rather, they put off ending them. This can arise because the individual doesn’t possess the skills necessary to complete the job with acceptable quality—many a home hobbyist or weekend carpenter can testify to this. It can also arise from an insidious perfectionism in which the individual has a deep, almost obsessive belief that their work products are never good enough (a kind of failure in satisficing). Graduate students tend to suffer from this kind of perfectionism, no doubt because they are comparing themselves with their advisors, and comparing their thesis drafts with their advisors’ finished work. It is an unfair comparison, of course. Their advisors have had more experience, and the advisors’ setbacks, rejected manuscripts, and rough drafts are hidden from the graduate student’s view—all the graduate student ever sees is the finished product and the gap between it and her own work. This is a classic example of the power of the situation being underappreciated in favor of an attribution about stable traits, and it shows up as well in the workplace. The supervisor’s role virtually guarantees that she will appear smarter and more competent than the supervisee. The supervisor can choose to show the worker her own work only when it is finished and polished. The worker has no opportunity for such self-serving displays and is often required to show work at draft and interim stages, effectively guaranteeing that the worker’s product won’t measure up, and leaving many underlings with the feeling they aren’t good enough. But these gaps reflect the situation far more than they reflect ability, whatever students and other supervisees make of them. Understanding this cognitive illusion can encourage individuals to be less self-critical and, hopefully, to emancipate themselves from the stranglehold of perfectionism.

Also important is to disconnect one’s sense of self-worth from the outcome of a task. Self-confidence entails accepting that you might fail early on and that it’s OK, it’s all part of the process. The writer and polymath George Plimpton noted that successful people have paradoxically had many more failures than people whom most of us would consider to be, well, failures. If this sounds like double-talk or mumbo jumbo, the resolution of the paradox is that successful people (or people who eventually become successful) deal with failures and setbacks very differently from everyone else. The unsuccessful person interprets the failure or setback as a career breaker and concludes, “I’m no good at this.” The successful person sees each setback as an opportunity to gain whatever additional knowledge is necessary to accomplish her goals. The internal dialogue of a successful (or eventually successful) person is more along the lines of “I thought I knew everything I needed to know to achieve my goals, but this has taught me that I don’t. Once I learn this, I can get back on track.” The kinds of people who become successful typically know that they can expect a rocky road ahead and it doesn’t dissuade them when those bumps knock them off kilter—it’s all part of the process. As Piers Steel would say, they don’t subscribe to the faulty belief that life should be easy.

The frontal lobes play a role in one’s resilience to setbacks. Two subregions involved in self-assessment and judging one’s own performance are the dorsolateral prefrontal cortex and the orbital cortex. When they are overactive, we tend to judge ourselves harshly. In fact, jazz musicians need to turn off these regions while improvising, in order to freely create new ideas without the nagging self-assessment that their ideas are not good enough. Damage to these regions can produce a kind of hyperresilience. Prior to her injury, one patient was unable to get through a standard battery of test problems without weeping, even after correctly completing them. After damage to her prefrontal cortex, she was utterly unable to complete the same problems, but her attitude differed markedly: She would continue to try the problems over and over again, beyond the patience of the examiner, making mistake after mistake without the least indication of embarrassment or frustration.

Reading the biographies of great leaders—corporate CEOs, generals, presidents—the sheer number and magnitude of failures many have experienced is staggering. Few thought that Richard Nixon would recover from his embarrassing defeat in the 1962 California gubernatorial election. (“You won’t have Nixon to kick around anymore.”) Thomas Edison had more than one thousand inventions that were unsuccessful, compared to only a small number that were successful. But the successful ones were wildly influential: the lightbulb, phonograph, and motion picture camera. Billionaire Donald Trump has had as many high-profile failures as successes: dead-end business ventures like Trump Vodka, Trump magazine, Trump Airlines, and Trump Mortgage, four bankruptcies, and a failed presidential bid. He is a controversial figure, but he has demonstrated resilience and has never let business failures reduce his self-confidence. Too much self-confidence of course is not a good thing, and there can be an inner tug-of-war between self-confidence and arrogance that can, in some cases, lead to full-scale psychological disorders.

Self-confidence appears to have a genetic basis, and is a trait that is relatively stable across the life span, although like any trait, different situations can trigger different responses in the individual, and environmental factors can either build up or chip away at it. One effective strategy is acting as if. In other words, even those who lack an inner sense of self-confidence can act as if they are self-confident by not giving up, working hard at tasks that seem difficult, and trying to reverse temporary setbacks. This can form a positive feedback loop wherein the additional effort actually results in success and helps to gradually build up the person’s sense of agency and competence.

Creative Time

Here’s a puzzle: What word can be joined to all of these to create three new compound words?

crab     sauce     pine

Most people try to focus on the words intently and come up with a solution. Most of them fail. But if they start to think of something else and let their mind wander, the solution comes in a flash of insight. (The answer is in the Notes section.) How does this happen?

Part of the answer has to do with how comfortable we are in allowing ourselves to enter the daydreaming mode under pressure of time. Most people say that when they’re in that mode, time seems to stop, or it feels that they have stepped outside of time. Creativity involves the skillful integration of this time-stopping daydreaming mode and the time-monitoring central executive mode. When we think about our lives as a whole, one theme that comes up over and over is whether we feel we made any contributions with our lives, and it is usually the creative contributions, in the broadest sense, that we’re most proud of. In the television series House, Wilson is dying of cancer, with only five months to live. Knowing he’s going to die, he implores Dr. House, “I need you to tell me that my life was worthwhile.” We learn that his sense of his life’s worth comes from having effected new and creative solutions for dozens of patients who wouldn’t otherwise be alive.

Achieving insight across a wide variety of problems—not just word problems but interpersonal conflicts, medical treatments, chess games, and music composition, for example—typically follows a pattern. We focus all our attention on the aspects of the problem as it is presented, or as we understand it, combing through different possible solutions and scenarios with our left prefrontal cortex and anterior cingulate. But this is merely a preparatory phase, lining up what we know about a problem. If the problem is sufficiently complex or tricky, what we already know won’t be enough. In a second phase, we need to relax, let go of the problem, and let networks in the right hemisphere take over. Neurons in the right hemisphere are more broadly tuned, with longer branches and more dendritic spines—they are able to collect information from a larger area of cortical space than left hemisphere neurons, and although they are less precise, they are better connected. When the brain is searching for an insight, these are the cells most likely to produce it. The second or so preceding insight is accompanied by a burst of gamma waves, which bind together disparate neural networks, effectively binding thoughts that were seemingly unrelated into a coherent new whole. For all this to work, the relaxation phase is crucial. That’s why so many insights happen during warm showers. Teachers and coaches always say to relax. This is why.

If you’re engaged in any kind of creative pursuit, one of the goals in organizing your time is probably to maximize your creativity. We’ve all had the experience of getting wonderfully, blissfully lost in an activity, losing all track of time, of ourselves, our problems. We forget to eat, forget that there is a world of cell phones, deadlines, and other obligations. Abraham Maslow called these peak experiences in the 1950s, and more recently the psychologist Mihaly Csikszentmihalyi (pronounced MEE-high, CHEECH-sent-mee-high) has famously called this the flow state. It feels like a completely different state of being, a state of heightened awareness coupled with feelings of well-being and contentment. It’s a neurochemically and neuroanatomically distinct state as well. Across individuals, flow states appear to activate the same regions of the brain, including the left prefrontal cortex (specifically, areas 44, 45, and 47) and the basal ganglia. During flow, two key regions of the brain deactivate: the portion of the prefrontal cortex responsible for self-criticism, and the amygdala, the brain’s fear center. This is why creative artists often report feeling fearless and as though they are taking creative risks they hadn’t taken before—it’s because the two parts of their brain that would otherwise prevent them from doing so have significantly reduced activity.

People experience flow in many kinds of work, from looking at the tiniest cells to exploring the largest scales of the universe. Cell biologist Joseph Gall described flow looking through a microscope; astronomers describe it looking through telescopes. Similar flow states are described by musicians, painters, computer programmers, tile setters, writers, scientists, public speakers, surgeons, and Olympic athletes. People experience it playing chess, writing poetry, rock climbing, and disco dancing. And almost without exception, the flow state is when one does his or her best work, in fact, work that is above and beyond what one normally thinks of as his or her best.

During the flow state, attention is focused on a limited perceptual field, and that field receives your full concentration and complete investment. Action and awareness merge. You cease thinking about yourself as separate from the activity or the world, and you don’t think of your actions and your perceptions as being distinct—what you think becomes what you do. There are psychological aspects as well. During flow, you experience freedom from worry about failure; you are aware of what needs to be done, but you don’t feel that you are doing it—the ego is not involved and falls away completely. Rosanne Cash described writing some of her best songs in this state. “It didn’t feel like I was writing it. It was more like, the song was already there and I just had to hold up my catcher’s mitt and grab it out of the air.” Parthenon Huxley, a lead vocalist for The Orchestra (the current incarnation of the British band ELO), recalled a concert they played in Mexico City. “I opened my mouth to sing and all kinds of fluidity was there—I couldn’t believe the notes that were coming out of my mouth, couldn’t believe it was me.”

Flow can occur during either the planning or the execution phase of an activity, but it is most often associated with the execution of a complex task, such as playing a trombone solo, writing an essay, or shooting baskets. Because flow is such a focused state, you might think that it involves staying inside either the planning phase or the execution phase, but in fact it usually allows for the seamless integration of them—what are normally separate tasks, boss and worker tasks, become permeable, interrelated tasks that are part of the same gesture. One thing that characterizes flow is a lack of distractibility—the same old distractions are there, but we’re not tempted to attend to them. A second characteristic of flow is that we monitor our performance without the kinds of self-defeating negative judgments that often accompany creative work. Outside of flow, a nagging voice inside our heads often says, “That’s not good enough.” In flow, a reassuring voice says, “I can fix that.”

Flow states don’t occur for just any old task or activity. They can occur only when one is deeply focused on the task, when the task requires intense concentration and commitment, contains clear goals, provides immediate feedback, and is perfectly matched to one’s skill level. This last point requires that your own skills and abilities are matched in a particular way to the level of difficulty before you. If the task you are engaged in is too simple, holding no challenge, you’ll get bored. That boredom will break your attention to the task, and your mind will wander into the default mode. If the task is too difficult and holds too many challenges, you’ll become frustrated and experience anxiety. The frustration and anxiety will also break your attention. It’s when the challenge is just right for you—given your own particular set of skills—that you have a chance of reaching flow. There’s no guarantee that you will, but if this condition isn’t met, if the challenge isn’t just right for you, it surely won’t happen.

In the graph below, challenge is shown on the y-axis, and you can see that high challenge leads to anxiety and low challenge to boredom. Right in the middle is the area where flow is possible. The funnel shape of the flow region is related to the level of your own acquired skills, running along the x-axis. What this shows is that the greater your skills, the greater the opportunity to achieve flow. If you have low skill, the challenge window opening is small; if you have high skill, there is a much wider range of possibility for you to achieve flow. This is because the flow state is characterized by a total lack of conscious awareness, a merging of your self with the project itself, a seamless melding of thought, action, movement, and result. The higher your skill level, the more easily you can practice those skills automatically, subconsciously, and then the more easily you can disengage your conscious mind, your ego, and other enemies of flow. Flow states occur more regularly for those who are experts or who have invested a great deal of time to train in a given domain.

[Figure: flow as a function of challenge (y-axis) and skill (x-axis). Too much challenge produces anxiety, too little produces boredom, and the flow region between them widens as skill increases.]
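Here is a minimal sketch of that funnel in Python, assuming a flow channel whose width grows linearly with skill; the constants are illustrative, not measured values.

```python
# The flow channel from the figure above: anxiety when challenge far
# exceeds skill, boredom when it falls far below, and flow possible in
# between. The widening-with-skill constants are illustrative assumptions.

def flow_zone(skill, challenge, base=0.5, spread=0.4):
    width = base + spread * skill       # the channel widens as skill grows
    if challenge > skill + width:
        return "anxiety"
    if challenge < skill - width:
        return "boredom"
    return "flow possible"

print(flow_zone(skill=2, challenge=6))  # anxiety: narrow window at low skill
print(flow_zone(skill=8, challenge=6))  # flow possible: wider window
print(flow_zone(skill=8, challenge=2))  # boredom: far below one's ability
```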

Engagement is what flow is defined by—high, high levels of engagement. Information access and processing seem effortless—facts that we need are at our fingertips, even long-lost ones we didn’t know we knew; skills we didn’t know we had begin to emerge. With no need to exercise self-control to stay focused, we free neural resources to the task at hand. And this is where something paradoxical occurs in the brain. During flow states, we no longer need to exert ourselves to stay on task—it happens automatically as we enter this specialized attentional state. It takes less energy to be in flow—in a peak of creative engagement—than to be distracted. This is why flow states are periods of great productivity and efficiency.

Flow is a chemically distinct state as well, involving a particular neurochemical soup that has not yet been fully identified. It appears to require a balance of dopamine and noradrenaline (particularly as they are modulated in a brain region known as the striatum, seat of the attentional switch), serotonin (for freedom to access stream-of-consciousness associations), and adrenaline (to stay focused and energized). GABA neurons (sensitive to gamma-aminobutyric acid) that normally function to inhibit actions and help us exercise self-control need to reduce their activity so that we aren’t overly critical of ourselves in these states and become less inhibited in the production of ideas. Finally, some of the processes involved in homeostasis, particularly sexual drive, hunger, and thirst, need to be reduced so that we’re not distracted by bodily functions. In very high flow states, we can lose awareness of our environment entirely. Csikszentmihalyi notes one case in which the roof fell in during an operation and the surgeon didn’t notice until the operation was over.

Flow occurs when you are not explicitly thinking about what you’re doing; rather, your brain is in a special mode of activity in which procedures and operations are performed automatically without your having to exert conscious control. This is why practice and expertise are prerequisites for flow. Musicians who have learned their scales can play them without explicitly concentrating on them, based on motor memory. Indeed, they report that it feels as if their fingers “just know where to go” without their having to think about it. Basketball players, airplane pilots, computer programmers, gymnasts, and others who are highly skilled and highly practiced report similar phenomena, that they have reached such a high level of ability that thinking seems not to be involved at all.

When you learned to ride a bicycle, you had to concentrate on keeping your balance, on pedaling, and on steering. You probably tipped over a few times because keeping track of all three at once was difficult. But after some practice, you could climb on the bike and just ride, directing your attention to more pleasant matters, such as the view and your immediate surroundings. If you then try to teach someone else to ride, you realize that much of what you know is not available to conscious introspection or description. Circuits in the brain have become somewhat autonomous in carrying out the task, and they don’t require direction from the central executive system in your prefrontal cortex. We just press START in our brains, and the bike-riding sequence takes over. People report similar automaticity with tying their shoes, driving a car, and even solving differential equations.

We all have brain programs like these. But trying to think about what you’re doing can quickly interfere, ending the automaticity and the high performance level you’ve enjoyed. The easiest way to get someone to fall off a bicycle is to ask him to concentrate on how he’s staying up, or to describe what he’s doing. The great tennis player John McEnroe used this to his advantage on the court. When an opponent was performing especially well—hitting a particularly good backhand, for example—McEnroe would compliment him on it. McEnroe knew this would cause the opponent to think about his backhand, and that thinking would disrupt its automatic execution.

Flow is not always good; it can be disruptive when it becomes an addiction, and it is socially disruptive when those in flow withdraw from others and stay in their own cocoon. Jeannette Walls, in The Glass Castle, describes her mother being so absorbed in painting that she would ignore her hungry children’s cries for food. Three-year-old Jeannette accidentally set herself on fire while standing on a chair in front of the stove, attempting to cook hot dogs in a boiling pot while her artist mother was absorbed in painting. Even after Jeannette returned from six weeks in the hospital, her mother couldn’t be bothered to step out of her painting flow long enough to cook for the child.

Creative people often arrange their lives to maximize the possibility that flow periods will occur, and to be able to stay in flow once they arrive there. The singer and songwriter Neil Young described it best: Wherever he is, no matter what he is doing, if a song idea comes to him, he “checks out”—he stops whatever he is doing and creates the time and space then and there to work on the song. He pulls over to the side of the road, abruptly leaves dinner parties, and does whatever it takes to stay connected to the muse, to stay on task. If he ends up with a reputation for being flaky and not always on time, that’s the price he pays for being creative.

It seems, then, that in some respects, creativity and conscientiousness are incompatible: Indulging your creative side may mean you can’t also be punctilious about keeping appointments. Of course, one could counter that Neil is being exceptionally conscientious about his art and giving it all he’s got. It’s not a lack of conscientiousness; it’s just that his conscientiousness serves a different priority.

Stevie Wonder practices the same kind of self-imposed separation from the world to nourish his creativity. He describes it in terms of emotion: When he feels a groundswell of emotion inside him—upon learning of tragic news or spending time with someone he loves, for example—he goes with it, stays in the emotional experience, and doesn’t allow himself to become distracted, even if it means missing an appointment. If he can write a song about the emotion at that moment, he does; otherwise, he tries later to immerse himself fully in that same emotional state so that it will infuse the song. (He, too, has a reputation for not being on time.)

Sting organizes and partitions his time to maximize creative engagement. On tour, his time is carefully structured by others to give him maximum freedom: He doesn’t need to think about anything at all except music. Where he has to be, what he has to do, when he eats—all these parts of the day are completely scheduled for him. Importantly, he has a few hours of personal time every day that are sacrosanct. Everyone knows not to interrupt him then, and he knows that there is nothing pressing or more important to do than to use the time for creative and creativity-restoring acts—yoga, songwriting, reading, and practicing. By combining his exceptional self-discipline and focus with a world in which distractions have been dramatically reduced, he can more easily become absorbed in creative pursuits.

Sting also did something interesting to handle the disorienting (and creativity-crushing) effects of travel. Working closely with an interior designer, he found curtains, pillows, rugs, and other decorative objects that resemble in style, color, and texture those he enjoys at home. Every day on the road, his tour staff create a virtual room out of interlocking aluminum poles and curtains—a private space inside the concert venue that is exactly the same from city to city—so there is a great deal of comfort and continuity in the midst of all the change. This promotes a calm and distraction-free state of mind. There’s a fundamental principle of neuroscience behind this: As we noted earlier, the brain is a giant change detector, and most of us are easily distracted by newness—the prefrontal cortex’s novelty bias. We can help ourselves by molding our environments and our schedules to facilitate and promote creative inspiration. Because his senses aren’t being bombarded by new sights, colors, and spatial arrangements—at least during his daily personal time—Sting can let his brain and his mind relax and more easily achieve a flow state.

There’s an old saying that if you really need to get something done, give it to a busy person. It sounds paradoxical, but busy people tend to have systems for getting things done efficiently, and the purpose of this section is to uncover what those systems are. Even inveterate procrastinators benefit from having more to do—they’ll dive into a task that is more appealing than the one they’re trying to avoid, and make great progress on a large number of projects. Procrastinators seldom do absolutely nothing. Robert Benchley, the Vanity Fair and New Yorker writer, wrote that he managed to build a bookshelf and pore through a pile of scientific articles when an article was due.

A large part of efficient time management revolves around avoiding distractions. An ironic aspect of life is how easily we can be harmed by the things we desire. Fish are seduced by a fisherman’s lure, a mouse by cheese. But at least these objects of desire look like sustenance. This is seldom the case for us: The temptations that can disrupt our lives are often pure indulgences. None of us needs to gamble, drink alcohol, read e-mail, or compulsively check social networking feeds to survive. Realizing when a diversion has gotten out of control is one of the great challenges of life.

Anything that tempts us to break the extended concentration required to perform well on challenging tasks is a potential barrier to success. The change and novelty centers in your brain also feed you chemical rewards when you complete tasks, no matter how trivial. The social networking addiction loop—whether it’s Facebook, Twitter, Vine, Instagram, Snapchat, Tumblr, Pinterest, e-mail, texting, or whatever new thing will be adopted in the coming years—sends chemicals through the brain’s pleasure center that are genuinely, physiologically addictive. Yet the greatest life satisfaction comes from completing projects that required sustained focus and energy. It seems unlikely that anyone will look back on their life and say with pride and satisfaction that they managed to send an extra thousand text messages or check social network updates a few hundred extra times while they were working.

To successfully ignore distractions, we have to trick ourselves, or create systems that will encourage us to stick with the work at hand. The two kinds of distractions we need to deal with are external—those caused by things in the world that beckon us—and internal—those caused by our mind wandering back to the default daydreaming mode.

For external distractions, the strategies already mentioned apply. Set aside a particular time of day to work, with the phone turned off and your e-mail and browser shut down. Set aside a particular place to work that allows you to focus. Make it a policy not to respond to missives that come in during your productivity time. Adopt the mental set that this thing you’re doing now is the most important thing you could be doing. Remember the story of presidential candidate Jimmy Carter in Chapter 1—his aides managed time and space for him. They evaluated, in real time, whether the greatest value would be gained by continuing to talk to the person in front of him or to someone else who was waiting, whether he should be here or there. This allowed Carter to let go of his time-bound cares completely, to live in the moment and attend one hundred percent to the person in front of him. Similarly, executive assistants often schedule the time of their bosses so that the boss knows that whatever is in front of her is the most important thing she could be doing right now. She doesn’t need to worry about projects or tasks that are going unattended, because the assistant is keeping track of them for her. This is similar to the situation described above with construction workers: Great productivity and increased quality result when the person doing the work and the person scheduling or supervising the work are not the same person.

Those of us without executive assistants have to rely on our own wits—and on the prefrontal cortex’s central executive.

To combat internal distractions, the most effective thing you can do is the mind-clearing exercise I wrote about in Chapter 3. Difficult tasks benefit from a sustained period of concentration of fifty minutes or more, due to the amount of time it takes your brain to settle into and maintain a focused state. The best time-management technique is to ensure you have captured every single thing that has your attention, or should have your attention, by writing it down. The goal is to get projects and situations off your mind but not to lose any potentially useful ideas—externalizing your frontal lobes. Then you can step back and look at your list from an observer standpoint and not let yourself be driven by what’s the latest and loudest in your head.

Taking breaks is also important. Experts recommend getting up to walk around at least once every ninety minutes, and scheduling daily physical activity. By now, even the most vegetative, TV-bingeing couch potatoes among us have heard that daily exercise is important. We try to tell ourselves that we’re doing just fine, our pants still fit (sort of), and all this physical fitness stuff is overrated. But actuarial and epidemiological studies show unquestionably that physical activity is strongly related to the prevention of several chronic diseases and premature death, and enhances the immune system’s ability to detect and fend off certain types of cancer. And although twenty years ago, the recommendations were for the sort of vigorous activity that few people over the age of forty-five are motivated to undertake, current findings suggest that even moderate activity such as brisk walking for thirty minutes, five days a week, will yield significant effects. Older adults (fifty-five to eighty) who walked for forty minutes three days a week showed significant increases in the size of their hippocampus, enhancing memory. Exercise has also been shown to prevent age-related cognitive decline by increasing blood flow to the brain, causing increases in the size of the prefrontal cortex and improvements in executive control, memory, and critical thinking.

There is one mistake that many of us make when we have a looming deadline for a big project—a project that is very important and will take many, many hours or days or weeks to complete. The tendency is to put everything else on hold and devote all our time to that big project; it seems as though every minute counts. But doing this means that lots of little tasks will go undone, only to pile up and create problems for you later. You know you should be attending to them—a little voice in your head or an entry on your To Do list nags at you—and it takes a great deal of conscious effort not to do them. This carries a tangible psychological strain: Your brain keeps trying to tamp them down in your consciousness, and you end up using more mental energy in not doing them than you would have used to do them.

The solution is to follow the five-minute rule: If there is something you can get done in five minutes or less, do it now. If you have twenty things that would take only five minutes each but can spare only thirty minutes now, prioritize them and do the others later or tomorrow, or delegate them. The point is that things you can deal with now are better off dealt with than left to accumulate. A good tip is to set aside some time each day to deal with such things—whether it’s picking up clothes off the floor, making an unpleasant phone call, or giving a quick response to an e-mail. If this seems to contradict the discussion above about not allowing yourself to get distracted by unimportant tasks, note the critical distinction: I’m proposing here that you set aside a designated block of time to deal with all these little things; don’t intersperse them within a block of time you’ve set aside to focus on a single, large project.
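For the programmatically inclined, the prioritize-then-defer step above can be sketched in a few lines of Python. The task list, durations, and priorities here are invented for illustration:

    # The five-minute rule with a limited block of time: take only the quick
    # tasks (five minutes or less), most urgent first, and do as many as fit.
    def plan_quick_tasks(tasks, minutes_available=30):
        # tasks: list of (name, minutes, priority); lower priority = more urgent
        quick = sorted((t for t in tasks if t[1] <= 5), key=lambda t: t[2])
        do_now, defer = [], []
        for name, minutes, _ in quick:
            if minutes <= minutes_available:
                do_now.append(name)
                minutes_available -= minutes
            else:
                defer.append(name)   # later, tomorrow, or delegate
        return do_now, defer

    now, later = plan_quick_tasks(
        [("unpleasant phone call", 5, 1), ("reply to e-mail", 4, 2),
         ("pick up clothes", 3, 3)])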

One thing that many successful people do for time management is to calculate how much their time is subjectively worth to them. This is not necessarily what it is worth in the marketplace, or what their hourly pay works out to, although it might be informed by these—it is how much they feel their time is worth to them. When deciding, for example, whether to steam-clean your carpets or hire someone to do it, you might take into account what else you could be doing with your time. If a free weekend day is rare, and you are really looking forward to spending it bicycling with friends or going to a party, you may well decide that it’s worth it to pay someone else to do the cleaning. Or if you’re a consultant or attorney earning upward of $300 an hour, spending $100 to join one of those priority services that bypass the long line at airport security seems well worth it.

If you calculate what your time is worth to you, it simplifies a great deal of decision-making because you don’t have to reassess each individual situation. You just follow your rule: “If I can spend $XX and save an hour of my time, it is worth it.” Of course this assumes that the activity is something you don’t find pleasurable. If you like steam-cleaning carpets and standing in airport lines, then the calculation doesn’t work. But for tasks or chores about which you are indifferent, having a time-value rule of thumb is very helpful.
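In code, the rule reduces to a single comparison. The sixty-dollar figure below is a made-up personal value for the sake of the sketch, not a recommendation:

    # The time-value rule as a one-line decision. MY_HOURLY_VALUE is whatever
    # you subjectively decide your time is worth—a hypothetical figure here.
    MY_HOURLY_VALUE = 60.0   # dollars per hour

    def worth_outsourcing(cost: float, hours_saved: float) -> bool:
        # True if the money spent is less than the value of the time saved
        return cost < hours_saved * MY_HOURLY_VALUE

    worth_outsourcing(cost=100, hours_saved=3)   # True at $60 per hour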

Related to knowing how much your time is worth is the following rule: Do not spend more time on a decision than it’s worth. Imagine you’re clothes shopping and find a shirt you particularly like, and it is just at the limit of what you decided you’d spend. The salesperson comes over and shows you another shirt that you like just as much. Here, you’re willing to invest a certain amount of time trying to choose between the two because you have a limited amount of money. If the salesperson offers to throw in the second shirt for only five dollars more, you’ll probably jump at the chance to buy both because, at that point—with a small amount of money at stake—agonizing over the decision isn’t worth the time.

David Lavin, a former chess champion and now president of the international speakers agency bearing his name, articulates it this way: “A colleague once complained, ‘You made a decision without having all the facts!’ Well, getting all the facts would take me an hour, and the amount of income at stake means that this decision is only worth ten minutes of my time.”
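Lavin’s rule can be sketched with the same subjective hourly figure used above—an assumption for illustration; he doesn’t give one. The stakes of a decision set a ceiling on the minutes it deserves:

    # Cap deliberation time by what's at stake, given a subjective hourly value.
    def minutes_decision_is_worth(dollars_at_stake: float,
                                  hourly_value: float = 60.0) -> float:
        return 60.0 * dollars_at_stake / hourly_value

    minutes_decision_is_worth(10)   # a $10 difference merits about ten minutes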

Time management also requires structuring your future with reminders. That is, one of the secrets to managing time in the present is to anticipate future needs so that you’re not left scrambling and playing catch-up all the time. Linda (whom we met in Chapter 3), the executive assistant for the president of a $20 billion Fortune 100 company, describes how she managed the executive offices, and in particular her boss’s schedule, his assignments, and his To Do list. She is among the most efficient and most organized people I’ve ever met.

“I use a lot of abeyance or tickler files,” Linda says, things that remind her about some future obligation well in advance. The tickler file is either a physical file on her desk or, increasingly, an alert on her calendar. “I use the calendar as the primary way to organize my boss’s schedule. I use it for my own schedule, too. When I come in in the morning, the calendar tells me what needs to be done today, as well as what future things we need to be thinking about today.

“If a new project comes across his desk, I find out how long he thinks he’ll need to complete it, and when it is due. Say he thinks he needs two weeks to do it. I’ll set a tickler, a reminder in the calendar three weeks before it’s due—that’s a week before the two weeks he needs to do it—so that he can start thinking about it and know that it’s coming up. Then another tickler on the day he’s supposed to start working on it, and ticklers every day to make sure he’s doing it.

“Of course many of his projects require input from other people, or have components that other people need to provide. I sit down with him and he tells me who else will contribute to the project, and when he needs to have their input by, in order for him to make his deadline. I make reminders on the calendar to contact all of them.”
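Linda’s tickler arithmetic is mechanical enough to sketch in a few lines of Python. The function and variable names are mine, but the date logic—a first reminder a week before work must begin, then daily nudges until the deadline—is hers as described above:

    # Derive tickler dates from a due date and the weeks the project needs.
    from datetime import date, timedelta

    def tickler_dates(due, weeks_needed=2, heads_up_weeks=1):
        start = due - timedelta(weeks=weeks_needed)       # day work must begin
        first = start - timedelta(weeks=heads_up_weeks)   # advance warning
        daily = [start + timedelta(days=d)                # nudge every day
                 for d in range((due - start).days)]
        return first, start, daily

    first, start, nudges = tickler_dates(date(2024, 6, 28))
    # The first reminder lands three weeks before the due date, as Linda says.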

For all this to work, it’s important to put everything in the calendar, not just some things. The reason is simple: If you see a blank spot on the calendar, you and anyone else looking at it would reasonably assume that the time is available. You can’t just partially use a calendar, keeping some of your appointments in your head—that’s a recipe for double booking and missed appointments. The best strategy is to enter events, notes, and reminders in the calendar as soon as they come up or, alternatively, gather all of your calendar entries on index cards or slips of paper and set aside one or two times each day to update your calendar en masse.

Linda says that she prints out every calendar entry on paper as well, in case the computer goes down for some reason, or crashes. She maintains multiple calendars: one that her boss sees and one that is just for her to see—hers includes reminders to herself that she doesn’t need to bother him with—and she also keeps separate calendars for her personal business (unrelated to work) and for key people with whom her boss interacts.

Linda also uses the calendar to organize things that need to be done prior to an appointment. “If it’s a medical appointment and there are things required in advance of the appointment—tests, for example—I find out how long it takes for the test results to come in, and then put in a reminder to get the tests done well in advance of the actual medical appointment. Or if it’s a meeting and certain documents need to be reviewed in advance of the meeting, I figure out how long they’ll take to read and schedule time in the calendar for that.” These days, most computer calendars can synchronize with the calendar on an Android, iPhone, BlackBerry, or other smartphone, so that every reminder or some selected subset of them also shows up on the phone.

Special dates become part of the calendar, along with tickler files in advance of those dates. “Birthdays go on the calendar,” Linda says, “with a tickler a week or two in advance to remind us to buy a present or send a card. Actually, any social event or business meeting that will require a gift gets two calendar entries—one for the event itself and one in advance so that there’s time to select a gift.”

Of course, there are things you want to spend time on, but just not now. Remembering to complete time-sensitive tasks, and doing them at the most convenient times, is becoming easier because externalizing them is becoming easier. Some programs allow you to compose an e-mail or text message now and have it delivered at a later date. This works effectively as a tickler file: You compose the message on the day you’re thinking about the obligation and schedule it to arrive on the future day when you need to act or start working on the project. Workflow apps such as Asana let you do the same thing, with the option of tagging coworkers and friends if you’re engaged in a joint project that requires input from others. Asana then automatically sends e-mails reminding people of what needs to be done, and when.

As a time-saver, cognitive psychologist Stephen Kosslyn recommends that if you are not the kind of person who overspends—that is, if you know you can live within your means—you stop balancing your checkbook. Banks seldom make errors anymore, he notes, and the average size of any error is likely to be minuscule compared to the hours you’ll spend squaring every purchase. He advises going over the statement quickly to identify any unauthorized charges, then filing it and being done with it. If you set up automatic overdraft protection, you don’t need to worry about checks bouncing. His second recommendation: Set up automatic payments for every recurring bill—your Visa card, cell phone, electric bill, mortgage. The hours each month you used to spend paying bills become free time gained.

Life Time

As people grow older, they frequently say that time seems to pass more quickly than it did when they were younger. There are several hypotheses about this. One is that our perception of time is nonlinear and is based on the amount of time we’ve already lived: A year in the life of a four-year-old represents a larger proportion of the time she’s already been alive than it does for a forty-year-old. Experiments suggest that the formula for calculating subjective time is a power function, one under which the passing of a year should seem twice as long to a ten-year-old as to a forty-year-old. You may recall how hard it was to sit still for an entire minute as a child; now a minute goes by very quickly.
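The text doesn’t specify the exponent of the power function, but a commonly proposed version is a square-root law, which reproduces the two-to-one ratio just mentioned. Writing S(a) for the felt length of a year at age a:

    S(a) \propto a^{-1/2}, \qquad \frac{S(10)}{S(40)} = \sqrt{\frac{40}{10}} = 2

On this account, the felt length of a year keeps shrinking as we age—though ever more slowly.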

Another factor is that after the age of thirty, our reaction time, cognitive processing speed, and metabolic rate slow down—the actual speed of neural transmission slows. This leaves the impression that the world is racing by, relative to our slowed-down thought processes.

The way we choose to fill our time naturally changes across the life span as well. When we’re young, we are driven by novelty and motivated to learn and experience new things. Our teens and twenties can be seen as a time when we want to learn as much about ourselves and the world as possible, so that we can come to know, out of an infinity of possibilities, what we like and how we’d like to spend our time. Am I someone who likes parachuting? Martial arts? Modern jazz? As we get older and approach our fifties and sixties, most of us place a higher priority on actually doing the things we already know we like rather than trying to discover new things we like. (Individuals vary tremendously of course; some older people are more interested in new experiences than others.)

These different views of how we want to spend time are partly fueled by how much time we feel we have left. When time is perceived as open-ended, the goals that become most highly prioritized are those that are preparatory, focused on gathering information, on experiencing novelty, and on expanding one’s breadth of knowledge. When time is perceived as constrained, the highest-priority goals will be those that can be realized in the short-term and that provide emotional meaning, such as spending time with family and friends. And although it’s well documented that older people tend to have smaller social networks and reduced interests, and are less drawn to novelty than younger people, the older people are just as happy as the younger ones—they’ve found what they like and they spend their time doing it. Research shows clearly that this is not due to aging per se but to a sense of time running out. Tell a twenty-year-old that he has only five years left to live and he tends to become more like a seventy-five-year-old—not particularly interested in new experiences, instead favoring spending time with family and friends and taking time for familiar pleasures. It turns out that young people with terminal diseases tend to view the world more like old people. There’s a certain logic to this based on risk assessment: If you have a limited number of meals left, for example, why would you order a completely new dish you’ve never tried before, running the risk that you’ll hate it, when you can order something you know you like? Indeed, prisoners on death row tend to ask for familiar foods for their last meals: pizza, fried chicken, and burgers, not crêpes suzette or cassoulet de canard. (At least American prisoners. There are no data on what French prisoners requested. France abolished the death penalty in 1981.)

A related difference in time perception is driven by differences in attention and emotional memory. Older adults show a special preference for emotionally positive memories over emotionally negative memories, while younger adults show the opposite. This makes sense because it has long been known that younger people find negative information more compelling and memorable than the positive. Cognitive scientists have suggested that we tend to learn more from negative information than from positive—one obvious case is that positive information often simply confirms what we already know, whereas negative information reveals to us areas of ignorance. In this sense, the drive for negative information in youth parallels the thirst for knowledge that wanes as we age. This age-related positivity bias is reflected in brain scans: Older adults activate the amygdala only for positive information, whereas younger adults activate it for both positive and negative information.

One way to stave off the effects of aging is to stay mentally active, taking on tasks you’ve never done before. This sends blood to parts of your brain that wouldn’t otherwise get it—the trick is to get the blood flowing in every nook and cranny. People with Alzheimer’s disease show brain deposits of amyloids, proteins that erroneously interact and form small, fibrous microfilaments. People who were more cognitively active throughout their lives have less amyloid in their brains, suggesting that mental activity protects against Alzheimer’s. And it’s not just being active and learning new things in your seventies and eighties that counts—it’s a lifetime pattern of learning and exercising the brain. “We tend to focus on what people do at seventy-five in terms of dementia,” says William Jagust, a neuroscientist at UC Berkeley. “But there is more evidence that what you do in your life, at forty or fifty, is probably more important.”

“Retaining lots of social interaction is really important,” adds Arthur Toga, a neuroscientist at the University of Southern California. “It involves so much of the brain. You have to interpret facial expressions and understand new concepts.” In addition, there is pressure to react in real time and to assimilate new information. As with cognitive activity, a history of social interaction across the life span is protective against Alzheimer’s.

For people of any age, the world is becoming increasingly linear—a word I’m using in its figurative rather than mathematical sense. Nonlinear thinkers, including many artists, are feeling more marginalized as a result. As a society, it seems we take less time for art. In doing so, we may be missing out on something that is deeply valuable and important from a neurobiological standpoint. Artists recontextualize reality and offer visions that were previously invisible. Creativity engages the brain’s daydreaming mode directly and stimulates the free flow and association of ideas, forging links between concepts and neural nodes that might not otherwise be made. In this way, engagement in art as either a creator or consumer helps us by hitting the reset button in our brains. Time stops. We contemplate. We reimagine our relationship to the world.

Being creative means allowing the nonlinear to intrude on the linear, and exercising some control over the output. The major achievements in science and art over the last several thousand years required induction rather than deduction—extrapolating from the known to the unknown and, to a large extent, blindly guessing what should come next and being right some of the time. In short, they required great creativity combined with a measure of luck. There is a mystery to how these steps forward are made, but we can stack the deck in our favor. We can organize our time, and our minds, to leave room for creativity, for mind-wandering, for each of us to make our own unique contribution in our time here.

In contrast to creative thinking is rational decision-making. Unfortunately, the human brain didn’t evolve to be very good at this, and evolutionary biologists and psychologists can only speculate why this might be so. We have a limited attentional capacity to deal with large amounts of information, and as a consequence, evolution has put into place time- and attention-saving strategies that work much of the time but not all of the time. The better we do in life, and the more we become like the HSPs (those highly successful persons) we dream of being, the more perplexing some decisions become. We could all use better decision-making strategies. The next chapter examines how we can better organize scientific and medical information, to teach ourselves to be our own best advocates in times of illness, and to make more evidence-based choices when they matter most.