Chapter Twenty-Seven

Homo Prospectus (2008–2016)

“HOLINESS, CAN I say what bothers me about Buddhism?” I asked the Dalai Lama. He was unflappably positive, and my question did not ruffle him a bit. We were on stage at a meeting titled “Mind and Its Potential” in Sydney, Australia, on a scorching summer day in December 2009. His Holiness was rarely challenged, and I tried to perk up an overly reverential and sleepy discussion with a deep issue that was on my mind much of the time.

“Buddhism urges us to live in the present and to be mindful of the present,” I continued. “I don’t agree. We are not beings who dwell in the present. Our minds brim with futures. This is not to be fought. The future is our nature. We are creatures who are drawn into the future.”

 

WHERE WAS THIS outlandish idea coming from?

Traditional psychology’s emphasis on the present and the past had been bothering me since the founding of positive psychology. At first it was just the glaring omission of the positive that bothered me. Traditional psychology tried to undo what was wrong, or tried to derive what was right from what was not wrong, or neglected what was right altogether. Positive psychology corrected this imbalance. But I suspected an even more fundamental shortcoming: the future. Traditional psychology tells us that we are creatures of the past and the present, which in turn give rise to the future. Psychology-as-usual studies memory, a person’s past, as well as motivation and perception, the present. Predicting a person’s future actions will fall out directly from knowing his past and his present—somehow.

This picture does violence to my own mental life. I don’t think much about the past, and I certainly don’t bask in the present. The present is too brief to dwell in. Rather I spend an enormous amount of my time imagining futures, daydreaming what-ifs, turning possible scenarios over and over, upside down, and backward, and the older I get the more time I spend in the future.

The name of our species bothers me as well. Homo sapiens means “wise man” or “knowing man,” but in contrast to Homo habilis, “handy man,” and Homo erectus, “upright man,” this name is not a description but an aspiration. And not one that we live up to. What do we actually do well that every other species does poorly? Language, toolmaking, killing, rationality, tasting bad to predators, and cooperation are all candidates. But when we look closely at other mammals, birds, and even social insects, our uniqueness fades. So with Dan Gilbert, professor of social psychology at Harvard,1 I believe that the unrivaled human ability to imagine futures—“prospection”—uniquely describes our species.

We prospect the future uniquely well, and this ability might ultimately make the aspiration of wisdom a reality. Hence, we are better named Homo prospectus.2

This puts prospection front and center in psychological science. In the traditional view, if you want to know what I will do in the future, you need to know, in principle, four things:

My history

My genetic makeup

The present stimuli

My present drives and motives

Psychoanalysis, behaviorism, and most of cognitive psychology accept this. But I do not. I have been working on agency, one way or another, from learned helplessness on, all my life, and in thinking about prospection, agency comes into focus most clearly. Here is the great blind spot of the traditional framework that has gnawed at me for fifty years: it leaves out human agency and its very fulcrum, a mind that metabolizes the past and present to create the future and then chooses among possible futures.

There cannot be a positive psychology without prospection.

I first broached this idea with Roy Baumeister in 2008. We were working together on mental energy—another sorely neglected topic in psychology, orphaned by the field’s abandonment of Freud’s hydraulic theory of emotional life. The baby of energy got thrown out with the bathwater of Freud’s concept of repression. Roy had resuscitated energy to explain his memorable chocolate-chip cookie finding. His subjects sat in front of a plate of freshly baked cookies but were told that they were in the “radish group,” which meant they could eat as many radishes as they wanted but must not eat any cookies. Following this excruciating exercise in resisting temptation, Roy’s subjects did worse at puzzles that required trying and persistence.3 They were depleted of mental energy. Roy and I tossed around the idea that consciousness uses energy and that we should try to measure how much.

Roy suggested that consciousness is for imagining possible futures (like eating a chocolate-chip cookie and then feeling guilty). I fell in love with this idea, and a couple of drafts about consciousness as simulating futures changed hands. In the meantime, Roy published a learned Psychological Review article in which this slant on consciousness appeared briefly but was swamped almost to invisibility by other bowings and scrapings to the reviewers.4

In October 2010, Chris Peterson and Nansook Park used their campus clout to make well-being the University of Michigan’s theme for the year. There were happiness banners everywhere, and I was invited to deliver the Tanner Lecture in philosophy about positive psychology. Over lunch, I met Chandra Sripada, an assistant professor of both philosophy and psychiatry. Chandra, meticulous of dress and speech, turned out to be a working neuroscientist as well, mapping brain network architecture by the connectivity from one area to another. I had never before met anyone who combined philosophy, psychiatry, neuroscience, and psychology, and as I would discover over the next six years, Chandra was a polymath—one of only a half dozen I have ever met.

He was shocked that I had never heard of the default network and opened my eyes to the wondrous discovery of a reliable brain circuit that could well be how the brain does its simulations of possible futures.

“There must be at least one thousand studies of the brain on a particular external task like mental arithmetic or anagrams,” Chandra began over our cafeteria lo mein. “But you always have to run a ‘rest’ period for contrast. During ‘rest’ you tell the subject just to lie there and not do anything.”

I nodded my head.

“The circuits that light up for mental arithmetic are pretty noisy, not quite a dead end, but a messy signal-to-noise search. But what lights up at so-called rest is uniform and reliable, so it is called the ‘default’ circuit, what the brain defaults to at rest.

“Now here is the shocker. This is not some sort of empty rest. This is the same circuit that lights up when you ask the person to imagine a personal future.”5

My epiphany light bulb went on. This must be the circuit that does the simulations that Roy and I thought consciousness was all about.

That evening the Michigan professors of philosophy and psychology dined together over many bottles of an indifferent red wine. I was seated next to Peter Railton, a well-known moral philosopher. Peter told me that he worked on desire. Desire, Peter said, is about forming a positive image of a possible future. Desire exemplifies being drawn into the future rather than being pushed by the past. At this point, I was called on to say a few words, and I recalled Morton White’s6 unfulfilled promise that philosophy should someday rejoin its orphaned child, psychology.

“We all pay lip service to interdisciplinary work,” I said. “Let’s go around the room and say what, as a result of this day, we will now do differently. Peter Railton and I will start. We’re going to write an article together on being drawn into the future rather than driven by the past.”

This was the first Railton had heard of our project. Nevertheless, it actually began a few days later. Railton and Sripada sent me several articles to read, and within two weeks, I sent them the first of what would become twenty drafts, which culminated in our article7 and ultimately our book Homo Prospectus.8 In this first pass, I confessed my doubts about traditional psychology and explained how a framework that put prospection front and center would lead to better psychology.

I suggested where a new psychology should start. The hard determinism of traditional psychology is a will-o’-the-wisp, because all science is at best statistical. No science can ever be billiard-ball deterministic; even the “simple” problem of where three colliding balls will end up can only be approximated statistically. Knowing the genes, the past history, and the present stimuli—even to a statistical asymptote (imagine it achieves the never-achieved 0.99 accuracy)—will not come close to deducing what the person will go on to do in any real-time sequence, because 0.99 × 0.99 × 0.99 × 0.99 … all too rapidly approaches zero prediction. Therefore, discovering what the person expects, intends, and desires in the future is usually a better starting point than asking about past behavior. If you want to know what I will do next Saturday night, the best place to start is by asking what I intend to do next Saturday night, not what I did last Saturday night and the Saturday night before.
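A back-of-the-envelope calculation makes the compounding vivid. Assuming, for illustration, that each successive step in a behavioral sequence is predicted independently with accuracy 0.99, the accuracy of the joint prediction after $n$ steps is $0.99^n$, which decays fast:

$$0.99^{10} \approx 0.90, \qquad 0.99^{69} \approx 0.50, \qquad 0.99^{500} \approx 0.007.$$

By roughly the seventieth step, the prediction is no better than a coin flip.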

I gave “what consciousness is” my first try. Human consciousness is the seat of agency. Agency consists in running simulations of possible futures and deciding among them. Agency is prospecting the future, and expectation, choice, decision, preference, desire, and free will are all processes of prospection. Maybe all this happens in the default network.

My bottom line was that human action is drawn by the future and influenced, but not driven, by the past. All of this seemed pretty naive to Railton, the meticulous philosopher, and it seemed out of touch to Sripada, the working neuroscientist, and they began to rework these ideas into a more scholarly and sophisticated form.

Enter the Eagle Scouts of philanthropy: the John Templeton Foundation. The foundation regularly asked me to spot initiatives that were adventurous, good science, unlikely to be funded by disease agencies like the National Institute of Mental Health, and compatible with Sir John Templeton’s vision of a science of human flourishing.

[Photo: Spending weeks at a time with Roy Baumeister, Peter Railton, and Chandra Sripada was like going to graduate school all over again, but in a brand-new discipline: prospective psychology. Photo courtesy of Mandy Seligman.]

[Photo: Peter Railton teaching us (Chandra Sripada, the author, and Roy Baumeister) about moral realism.]

Barnaby Marsh and Chris Stawski (both then advisors to Jack Templeton, Sir John’s neurosurgeon son, who succeeded to the presidency after Sir John’s death) took me to lunch, and I explained the idea of prospection at length.

“This is just the kind of science that Sir John loved,” was their first reaction. “Sir John thought that imagination—future-mindedness—was the key to success.” In due course, the Templeton Foundation funded the four of us to write Homo Prospectus and to do laboratory psychology and neuroscience on prospection. They actually doubled down and created a $3-million research competition for the measurement, mechanisms, applications, and improvement of prospection.

We went to work and wrote draft after draft of our first paper. Peter and I struggled over almost every line. He wanted every sentence to be true on its own, but I wanted them to be readable. Peter added qualifiers to achieve reviewer-friendly prose. I cut out half his qualifiers to get reader-friendly prose. He stuffed them back in. We ultimately sent it in to the leading theoretical journal in psychology, the Psychological Review. The editor told me it was “the most interesting paper he had read since becoming editor,” but it was not theoretical enough. We then sent it to the Psychological Bulletin. The editor said that it was one of the most interesting papers he had read since becoming editor, but it did not review the literature exhaustively enough. We then sent it to Perspectives on Psychological Science, also a leading journal. The editor, Bobbie Spellman, said she could publish anything she wanted if it was really good, and this paper was really good. She published it.

Spending weeks at a time with Roy, Chandra, and Peter was like going to graduate school all over again—but in a brand-new discipline. The rest of this chapter shares some of what I learned and what made me wish I were twenty-one again.

 

WHAT IF PERCEPTION is not the registration of what is present but a useful hallucination of what to expect? The modern understanding of vision differs vastly from what I was taught. The visual system of the brain is organized like a column, where the bottom layer gets, almost point for point, what the retina registers. The next layer then gets a more abstract version, and abstraction deepens on up to the top layer. The top layer is extremely abstract, housing schemas for detecting such entities as a table or even a basketball. Strangely, however, there are ten times as many connections going down from the top as there are up from the bottom.

Why?

Our eyes jerk three times every second, but the world stays stable. Top-down instructions to the lower layers explain this stability. You watch a film of people passing a basketball around and are instructed to count the number of passes. A gorilla walks across the court. Half the observers do not see the gorilla. The top layer tells the layers below that this is basketball: amplify the basketball-relevant elements and inhibit gorilla elements. What we see is not reality printing itself on our eyes but a hallucination of what we expect to see next, a projection into the future.9

 

WHAT IF MEMORY is not a file drawer of films and photographs but a changing collection of possibilities most relevant to the future? In writing this book, I have been astonished by how flawed my memory is. I decided to write first and fact-check later. I had already written so much about the topics that if I read what I originally wrote, my rendering would be boring for me and not nearly fresh enough for you. So when I finally went about fact-checking—by looking at datebooks, articles, books, and emails and asking living witnesses—I found disturbingly many errors. Bob Kaiser, for example, insists that it was a purple 1947 Chevrolet, not a green 1948 Ford, that we raced around our drag track. And that it cost thirty-five dollars, not fifty. But I can still see the grill of our green Ford so clearly.

This isn’t Alzheimer’s creeping up on me. It is a brute fact about memory, and Ulric Neisser pinned this down with his study of the Challenger disaster. Twenty-four hours after it happened, the students in his introductory psychology class wrote down where they were, what they were doing, and how they felt when they heard the tragic news. Two and a half years later, they answered the same questions and also said how confident they were about their responses. There were gaping discrepancies, and only a quarter of the students even remembered filling out the first questionnaire. Despite the memory lapses, many students were completely confident about their completely false recollections.10

Perhaps the fallibility of memory is not a bug but an essential feature. Errors of memory, shifting and shuffling bits of our old memories, might enable us to draw different lessons from the past, lessons that are necessary for a better future.

 

WHAT IF FREE will is not about willing? The key to prospection is that we are constantly imagining possible futures and that each scenario comes with a valence—how much we like or dislike it. Here’s what Chandra utterly persuaded me of:

I believe the proper form of a philosophical theory of free will is a “distinctive mark” theory. Consider the question of what makes a Ferrari fast. It is unhelpful to cite the pressing of the accelerator as what accounts for the Ferrari’s fastness. While it is true that accelerator pressing is necessary for the Ferrari to cruise down the road at 100 miles an hour, citing this factor misses the point of the question. Accelerators must be pressed in Ferraris and non-Ferraris alike. The question of what makes a Ferrari fast is rather a request for information about what is special or distinctive that makes a Ferrari speedier than other cars. A proper answer must say something not about the pressing of the accelerator, or the tightness of the lug nuts, or the lack of corrosion on the spark plugs, all of which are necessary for a Ferrari to go fast, but about the distinguishing basis of the Ferrari’s speediness. That is, it should say something about the Ferrari’s engine, and in particular its size or power or its unique engineering. The philosophical question of free will is similar. Of all the attributes and qualities of an agent that are necessary for freedom, what is the distinctive mark that makes certain creatures—presumably humans, but perhaps other creatures as well—free?

That mark is the latitude of the scenarios we imagine. Humans, unlike any other species, imagine futures that stretch and stretch. They stretch over time—even whole lifetimes. They stretch over complex sequences—linear and branching. They stretch to hypotheticals and counterfactuals—What if life does not depend on carbon? What if the Vikings had had gunpowder?

The scope of our freedom, then, is exactly the latitude of our imaginings.

Willing now shrinks almost to invisibility. We simply enact the scenario with the highest valence. How we calculate valence remains unknown but is a proper subject for the future study of free will.

 

WHY DO WE feel anything at all? This is called the “hard” problem of consciousness11 because it is … well, so damn hard. Machines don’t need to feel anything to do their business perfectly. The mechanical tortoise is programmed to register when it runs low on electricity and to return to its nest and plug in. It doesn’t need to experience anything at all—not weariness, not homesickness. What does our subjective world of experience—which is so close to being our very identity—add?

Here are the competing scenarios that crossed my mind in a matter of seconds recently:

  1. I could go on the internet and play bridge. Who might be available? Mark Lair, but he’s usually at lunch. Peter Friedland, he’s in Taiwan, probably going to sleep, but there are a few lesser lights likely to be available. They make errors. Anyway, I’ve wasted a lot of time playing bridge lately.
  2. I might help Mandy teach Carly and Jenny about the Silk Road. I don’t know much about Asia. But the kids would love it, and I haven’t spent any time with them today. Mandy might find it intrusive, having prepared the lesson. But she thinks I have not done my share of teaching lately.
  3. I might make myself some lunch. There’s some Moroccan chicken left over in the fridge. It’s pretty high calorie though. And I’m meeting Phil Voss for dinner at Le Bec Fin in only five hours. But I could order only their three-course meal. Maybe Mandy was saving the chicken for the kids’ dinner.
  4. I could keep working on this damn paper. But I’m having trouble thinking through examples of compelling counterfactual mental simulations. Maybe a bridge break will help. But this is a pretty good example, maybe I should keep plugging. Why bother? I don’t have a deadline, since this paper is for my own amusement. Peter Railton will be disappointed if I don’t follow up soon.
  5. All those tulip bulbs need planting. I could use the exercise, particularly with Le Bec Fin coming up. The temperature is good, but the ground is soggy. Tulips can be planted even if it gets really cold, no rush. They might rot. I did lift weights for twenty minutes already today. But I could use some fresh air. It would calm me down. I need it particularly after my argument with the dean.

Notice how multidimensional these simulations are and also how incommensurable they seem. By what measure does the pleasure of playing bridge with Mark Lair stack up against the annoyance of letting tulip bulbs rot or the satisfaction of seeing bright tulips in six months or the anticipation of Moroccan chicken or guilt about not working out? Subjective feeling is the brain’s common currency for value, and it lets us compare possible futures. A capacity for vivid conscious simulation could feed into a final common path for comparisons. There is also the genuine constraint that we must often decide very quickly, and subjective feeling may be the streamlined readout we use to compare futures.

We also need to compare the present, which is cloaked in feeling, to possible futures. You walk into the tavern, paycheck in hand, and must compare the pleasure of drinking now to the ensuing fight with your wife and the pain of sleeping on the couch all weekend. That these prospections are also cloaked with feeling allows us to compare directly the present to the future as well as to compare one possible future to another.

 

THE FINAL THING I learned from working with Roy, Peter, and Chandra had to do with creativity itself and how it fares with aging, a topic of obvious concern to me.

Can creativity increase with age?12 Creativity employs an exquisite kind of prospection: imagining something original and surprising and useful that is not present to the senses. The research literature strongly suggests that creativity wanes as we get old.

I am put in mind of that day in Oxford in 1975 when John Teasdale and I began our reformulation of learned helplessness together. Our public agreement to do this was not the end of that day. Another encounter right afterward would echo for many years beyond the collaboration that John and I had just undertaken. Now that I am seventy-five, that encounter is particularly resonant.

I was invited to dinner at Donald Broadbent’s chicly modern house in an Oxford suburb. Jerry Bruner was the other guest. Not one for social chitchat, and mindful of the uniqueness of this opportunity, I raised a topic that I was very curious about. At age thirty-two, I worried that I had peaked, since much research suggests that the mid-thirties are the high-water mark of scientific creativity. I knew the careers of the two luminaries I was dining with. Donald was about fifty and had done his great work on dichotic listening at about my age. Jerry was almost sixty and had done his great work on thought as going beyond the information given twenty-five years before, in his early thirties as well. Truth to tell, I was less impressed by what they had done since.

“It’s an honor to be with the two of you this evening,” I said. “I am a fan of both of your careers, and I know your work pretty well and when you did it. Now tell me honestly, when were you at your most creative?”

“Right now!” they roared in unison.

Even more strikingly, when I recently asked Aaron Beck, in his mid-nineties, the same question, he responded with exactly those words. And when I ask myself the very same question, I answer, “Right now.” I wonder if we were all harboring a benign, defensive illusion.

That creativity might increase well into later life flies in the face of the data. Much research shows that creativity peaks within a couple of decades of the start of a career, earlier with mathematics and poetry, later in science, history, and philosophy.13

Here’s the damage that aging does,14 and I can testify to each of these personally:

Wait, originality—divergent thinking—decreases? Doesn’t that end the discussion? Creativity requires originality, which in turn requires prospection, which in turn requires imagination. Paving my driveway with salami is an original idea, but it is useless. Creativity demands more than originality; it also requires usefulness and a good sense of who will make use of the idea, the audience. “Audience” can refer to a literal audience in the arts, but in academia it often refers to the gatekeeping members of the discipline.

What gets better with age that might outweigh all these losses?

First, knowledge itself increases. Even though I have forgotten a huge amount, I know more in total now. I know, for example, almost automatically how to use Big Data to get around the artifacts of questionnaires and why the default network might be the imagination network.

I not only know more facts from my own discipline, but I know more generally. I recently took Jenny and Carly to the architecturally magnificent National Museum of the American Indian in Washington, DC. On hearing that European diseases carried by Christopher Columbus’s sailors had decimated the native Carib population, Jenny, then seven, asked, “Why weren’t the European sailors killed off by Carib diseases?” The answer is crossroads. Europe of the fifteenth century had stood at the crossroads of many civilizations for 2,000 years. With the diversity of people who traipsed through Europe as traders, soldiers, and slaves came the diversity of their diseases. This host of plagues honed the immune systems of Europeans; those who survived had the ability to fight off a great variety of diseases and passed on their disease-fighting genes. The Carib “Indians,” highly interbred, did not have the dubious benefit of encountering one different plague after another in their immunological history, and so they were devastated.

It is not just the immune system that thrives with diversity; ideas do too. Crossroads thinking is general knowledge, and it underpins creativity. Jared Diamond points out that Tasmanian Aborigines were at a cognitive disadvantage compared with Australian Aborigines. Tasmania is cut off by the almost impassable Bass Strait, whereas geography does not block trade across Australia. The sophistication of Australian tools increased across 2,000 years, whereas that of Tasmanian tools deteriorated.16 Creativity sharpens tools, and it also brings different ways of thinking together to make connections among seemingly unrelated concepts.

Crossroads thinking implies diversity. If experience consists only of repeated examples of the same thing, it adds little. I was at a faculty meeting about forty years ago, and Frank Irwin, then the grand old man of our department, said, “I’ve had twenty years of experience with this” (what “this” was, I forget). Assistant professor Frank Norman, then a young Turk, said, “There’s twenty years of experience, Dr. Irwin, and there’s one year of experience twenty times.”

 

BESIDES MORE SPECIFIC and general knowledge, a second benefit of age for creativity is that shortcuts and guidelines get better.

The story about Isaac Newton and the apple is apparently true, but not the apple-falling-off-the-tree version that we learned at school. One evening during the plague years, Newton sat at his desk at his family’s farm. An apple sat on the surface, and the moon rose behind it, occupying the same visual space. Aha! Newton wondered if the same force that holds the moon in orbit could draw the apple to the ground.17 Newton was a “natural.” What is really wondrous here is the scaffolding that permitted him to perceive the apple and the moon in just that perfect way, pointing toward gravity and the inverse-square law.
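In modern notation (my rough rendering, not Newton’s own), the “Aha!” comes down to a two-line check. The moon sits about sixty Earth radii away, so an inverse-square force predicts that the apple’s acceleration, g, should shrink there by a factor of 60². The moon’s actual centripetal acceleration agrees:

$$a_{\text{moon}} = \frac{4\pi^2 r}{T^2} = \frac{4\pi^2 \times 3.84 \times 10^8\ \text{m}}{\left(2.36 \times 10^6\ \text{s}\right)^2} \approx 2.7 \times 10^{-3}\ \text{m/s}^2 \approx \frac{9.8\ \text{m/s}^2}{60^2},$$

where r is the moon’s orbital radius and T its orbital period of about 27.3 days.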

Newton had this scaffolding in place in his early twenties. How could so much scaffolding have been erected so early? It is commonly said that Newton was a genius, and with that statement the mystery is thrown back on another mystery: the brain, a gift, God, birth from Zeus’s brow, or “talent.” Angela Duckworth has demystified talent at least a bit.18 Talent is the rate at which a skill is acquired. Newton was fast and could soak up information more quickly than the rest of us. He had knowledge-based shortcuts that allowed him to see to the bottom faster and with less input.

I think an alternate explanation of Newton calls not on a gift from the gods but on experience: shortcuts. My guess is that Newton’s being a “natural” came from continual reverie about physics and mathematics, and this huge amount of “time on task” created the shortcuts that permitted him to see that the apple plus the moon equals gravity.

While I am certainly no Newton, I have been through so much in my continual daily reverie about psychological issues that I have an immense latticework of shortcuts that allow me to leap from one building to the next with ease. Newton had it at twenty-two, but it took me decades to build.

For example, I used to read every word of a journal article laboriously. Now I just scan because I can infer huge swaths of what the whole article contains. I know that if Bob Rescorla wrote it, I don’t have to worry about the adequacy of the control groups. These kinds of shortcuts save me a lot of time that I can put to better use elsewhere.

 

BESIDES SHORTCUTS THAT save time, guidelines, more formally called “heuristics,” likely get better with age. Many guidelines are about what not to do—negative heuristics that tell you how to avoid errors. “Recency,” for example, is one: if a long friendship has just ended on a bitter note, you will underestimate how worthwhile the friendship was, relative to a lower-quality friendship that just petered out. I use some negative heuristics specific to doing psychology:

I use negative heuristics in daily life as well: don’t promise more than you can deliver; never be sarcastic; check your work repeatedly; don’t be late; when in doubt, stand on principle. These are some of the “thou shalt nots” by which I have learned to live.

But I repeat a major caveat in the context of the heuristics that may improve with age: not getting it wrong does not equal getting it right. There are positive heuristics that are more important than negative heuristics, and these likely get better with age.

Imagine making a speech with no grammatical errors. Or writing a memoir in which you say nothing untrue. Or serving a meal in which nothing tastes bad. Or proving a theorem in which every statement is true. Or playing a Beethoven piano sonata with no mistakes. Or chairing a meeting in which no one is discourteous. None of these guarantees a good speech, a good book, a good meal, a good proof, a good performance, or a good meeting.

Guidelines of the form “thou shalt” create goodness, right, beauty, and truth over and above the mere absence of badness, wrong, ugliness, or falsehood. Discovering positive heuristics is a much harder task than discovering shortcuts to avoid errors, and unlike negative heuristics, positive heuristics are at the heart of creativity.

Enumerating them is well beyond me, but here are a few I have learned to use in psychology:

Beyond allowing one to glean more knowledge and better shortcuts and guidelines, aging helps counterbalance the losses with a keener sense of audience. Creativity requires the accurate evaluation that the original and surprising idea will be useful, beneficial, and desired by the relevant audience.19 “Audience” is both literal, as in the arts and commerce, and figurative, in the case of academic disciplines. Here “audience” refers to the gatekeepers—the individuals who have the power to decide which contributions are creative. Isaac Newton at age twenty-three had an exquisite sense of audience. Returning from two plague years at home, he had written three papers: one on optics, one on calculus, and one on gravitation. His audience was his mentor, Isaac Barrow, who responded by resigning his Lucasian professorship in 1669 in favor of Newton. (Oh, to have such a student! Oh, to have such objectivity!)

Writing about creativity offers an example of the importance of audience. Creativity is an overworked topic, and almost everything has been said about it. For an article about creativity to see the light of day, it is a good idea to first send a draft to Dean Simonton, Mihaly Csikszentmihalyi, Teresa Amabile, and Howard Gardner and patiently wait for their comments. Their work needs to be cited (favorably if possible) and their papers read carefully. I blush to say that I have failed to do this all too often in my eagerness to get on to my next work. Dick Solomon had a much better grasp of this principle than I. He always sent his drafts out to the gatekeepers first, and blessed with Harold Schlosberg on one shoulder and Walter Hunter on the other, commenting on each sentence, Dick had a great sense of audience.

Ideas that are a half step ahead of the audience are good “normal” science, and the journals are filled with these. While usually boring to read, they provide the cumulative bricks. Papers that are two steps ahead of the gatekeepers are rejected as outlandish. If you wish to succeed in normal science, stay just one step ahead, but do venture an occasional one-and-a-half-step paper to keep your self-respect.

 

I ASKED THE Dalai Lama what follows if we are not creatures of the present but are drawn into the future. He was sympathetic and encouraged me to pursue it. Over the next five years, Peter, Chandra, Roy, and I put prospection front and center, and this illuminated several huge issues in psychology and neuroscience that were completely opaque in the framework in which the past and the present determine the future. Consciousness became the process by which humans envision the future. The default network became the imagination network. Subjectivity became the common currency we use to compare future scenarios. Freedom of the will became the latitude of the scenarios we envisage. Creativity became the combination of original and surprising prospections coupled with a keen sense of audience, meaning that creativity can increase with age and might be teachable at any age.

Even though it may be a myth, I end this chapter with this story.

On November 18, 1995, Itzhak Perlman, the violinist, came on stage to give a concert at Avery Fisher Hall at Lincoln Center in New York City. If you have ever been to a Perlman concert, you know that getting on stage is no small achievement for him. He was stricken with polio as a child, and so he has braces on both legs and walks with the aid of two crutches.

To see him walk across the stage one step at a time, painfully and slowly, is an unforgettable sight. He walks painfully, yet majestically, until he reaches his chair. Then he sits down, slowly, puts his crutches on the floor, undoes the clasps on his legs, tucks one foot back and extends the other foot forward. Then he bends down and picks up the violin, puts it under his chin, nods to the conductor and proceeds to play….

But this time, something went wrong. Just as he finished the first few bars, one of the strings on his violin broke. You could hear it snap—it went off like gunfire across the room. There was no mistaking what that sound meant….

He waited a moment, closed his eyes, then signaled the conductor to begin again. The orchestra began, and he played from where he had left off. He played with overwhelming passion and power and purity.

Of course, anyone knows that it is impossible to play a symphonic work with just three strings. I know that, and you know that, but that night Itzhak Perlman refused to know that. You could see him modulating, changing, re-composing the piece in his head. At one point, it sounded like he was de-tuning the strings to get new sounds from them that they had never made before.

When he finished, there was an awesome silence in the room. And then people rose and cheered. There was an extraordinary outburst of applause from every corner of the auditorium. We were all on our feet, screaming and cheering, doing everything we could to show how much we appreciated what he had done.

He smiled, wiped the sweat from his brow, raised his bow to quiet us, and then he said, not boastfully, but in a quiet, pensive, reverent tone, “You know, sometimes it is the artist’s task to find out how much music you can still make with what you have left.”20