Reflections on Brainstorms after Forty Years

Brainstorms was published in 1978, the same year that the Alfred P. Sloan Foundation inaugurated its generous program of support for cognitive science, setting off something of a gold rush among researchers in many fields who thought they could see their own favored projects as cognitive science of one kind or another. Sloan-sponsored conferences and workshops were held around the country, and soon there emerged a cadre of speakers drawn from the various component disciplines who could be counted on to present perspectives on their own discipline that were accessible to outsiders, provocative, and in many cases actually useful across disciplinary boundaries. The “Sloan Rangers,” we were dubbed, and I was one of the most junior members of the roving band, since few philosophers of the generation who had been my teachers were interested in getting out of their armchairs. (The work of a few of those philosophers was influential: Austin’s work on performatives and Grice’s work on implicatures in particular.) The philosophical corner of the interdisciplinary pentagon1 (along with psychology, linguistics, computer science, and neuroscience) was anchored by Jerry Fodor, Gil Harman, Bert Dreyfus, John Searle, and me, with the Churchlands soon to invade from the north. The other corners were occupied by psychologists such as George Miller, Roger Shepard, Anne Treisman, Ulric “Dick” Neisser, Zenon Pylyshyn, and Philip Johnson-Laird; linguists Noam Chomsky, Ray Jackendoff, Jim McCawley, Barbara Partee, and George Lakoff; computer scientists Allen Newell, Herbert Simon, Marvin Minsky, John McCarthy, Roger Schank, and Terry Winograd; and neuroscientists Michael Gazzaniga, Brenda Milner, Marcus Raichle, and Michael Posner.

I already knew most of these folks from my early forays into interdisciplinarity, and some had encouraged me to publish a collection of my essays. The appearance of Brainstorms in near synchrony with the Sloan road show was one of the factors, no doubt, that led to its gratifyingly large interdisciplinary readership. That, and a special arrangement I had made with Harry and Betty Stanton, the original publishers of the book. I had come to deplore the practice of university presses publishing monographs that were too expensive for students and even professors to buy. College libraries were obliged to buy them, and until a paperback edition came out some years later (in the case of a book that demonstrated its market in spite of its price), students had to settle for waiting in line to borrow the library copy. So I decided to find a publisher with whom I could try an experiment in alternative publishing. As luck would have it, Harry and Betty showed up in my office at Tufts with perfect timing.

Harry had been a highly successful editor at Addison-Wesley in Boston, but he and Betty were also avid skiers with a ski lodge in Vermont, and he had decided to resign his position so that they could set up their own imprint, Bradford Books, in the village of Montgomery, Vermont. His little address book of freelance copy editors and designers could take care of the editorial side handsomely, a nearby barn would be the warehouse for the newly printed books, and a neighborhood sheltered workshop employing people with disabilities would handle the shipping and mailing. High-quality production and low overhead. What should they publish? They did some prospecting at MIT, where Noam Chomsky encouraged them to publish the proceedings of a conference on the biology of language that he and Ed Walker had organized, which they readily agreed to do (Edward Walker, ed., Explorations in the Biology of Language, Montgomery, VT: Bradford Books, 1978—their first volume). Who else might have a promising manuscript? Chomsky suggested that I might have something interesting in the works. As it happened, I had just finished organizing a stack of my papers that I thought would make a good collection, so when the elegant couple appeared at my office hours, I was more than ready to talk with them.

I explained my disapproval of high-priced academic books, and proposed a deal: I wanted to publish this collection simultaneously in hardback and paperback, and while they could charge whatever they liked for the hardback, I had to have control over the price of the paperback. (I proposed to make a visit to a college bookstore with them to work out a reasonable price by comparison shopping of available paperbacks.) In exchange for this unusual arrangement I proposed to forgo all royalties on the first three thousand copies sold. Since few academic books, and almost none in academic philosophy, ever sell that many copies, I was clearly willing to risk a lot for my hunch that in fact this would open up the market dramatically. They got an imprimatur on the stack of papers from two young philosophers they sought out at the University of Vermont (Philip and Patricia Kitcher—I didn’t know them at the time) and accepted my offer. Since the Stantons wanted to make a big splash with their first monograph, they went overboard on clever advertising, and enlisted their good friend Edward (Ted) Gorey to do the jacket and some more pieces for ads. The original Gorey art intended for the cover was lost in the mail in 1978, and at the last minute another already published piece of his work was substituted, drawn from his 1969 book The Dong with the Luminous Nose. I liked it because it looks for all the world like Harry and Betty Stanton perched in a lookout tower gazing solemnly into the stormy night. Few people have ever held a hardbound copy of Brainstorms in their hands (I have a souvenir copy on my bookshelf), but the paperback has never been out of print. Bradford Books soon moved to MIT Press (one of the wisest acquisitions any press has ever made), and in 1985, Gorey redrew the original art—the enigmatic black-faced sheep and bearded chaps with scarves—for a second printing.
The Gorey piece on the cover of this fortieth anniversary edition was originally drawn by Gorey just for use on a double-fold publicity mailer, hence its unusual placement of the book’s title. (A second, shorter mailer is reproduced in the front matter of the book.)

On rereading the chapters in 2016, I am beset with mixed emotions. First, I am relieved to find almost nothing to recant or regret, aside from a few embarrassing uses of “man” and “he” that passed for acceptable back then. Second, I have enough temporal distance from the author of these essays—forty years—that I can react to them the way I sometimes react on rereading another author’s work: I find myself thinking—“Wait, I thought I had come up with this idea first, but here it is, clear as day, in this predecessor’s words, which I obviously read, and dimly remember reading, but apparently didn’t fully appreciate! I have unwittingly plagiarized this person!” There is no doubt, then, that I am a somewhat forgetful author who sometimes repeats himself without realizing it.

I guess forgetting one’s own work is better than forgetful plagiarism of somebody else’s, but it’s source amnesia in either case. Recognizing that I am not alone in this affliction, I can view the occasions when I encounter somebody else carefully spelling out some argument that I myself concocted and published back in the day as a source of contentment, not anger. My version went down so smoothly it left no trace of a struggle in the author’s memory. I may have succeeded to some degree in acquiring the skill I once discovered in my dissertation supervisor, Gilbert Ryle. Our weekly chats were so pleasant and unaggressive that I was unable to recognize anything I had learned from him until I compared an early draft of my dissertation to the final draft, and found the latter teeming with Rylean ideas and turns of phrase. Like many philosophy students, I took a while to realize that philosophy that is not a chore to read can still be good, even deep, philosophy. Some professors never tumble to this fact.

On the spectrum of my mixed emotions, the other pole is frustration. It’s painful to realize that insights I had thought had achieved consensus are now ignored and in need of rediscovery, not just in my own work but in that of many others. U. T. Place and J. J. C. Smart both wrote brilliant, pioneering articles on physicalism in the 1950s that have clear lessons yet to be digested by many in the field today. I sometimes think that in spite of his fame (among philosophers) W. V. O. Quine’s great contributions to naturalism are routinely ignored by the current generation, as are some of the best ideas of Wilfrid Sellars. Since I do not exempt myself from this criticism of our so-called discipline, I daresay there are fine anticipations or elaborations of points I pride myself on having authored to be found in the earlier philosophers whose work I have found it convenient to ignore or forget.

That’s the way philosophy goes, and I don’t think it should be embarrassing to acknowledge it. I am fitfully resigned to the fact that it is so hard to make progress in philosophy! We philosophers deal with the tempting mistakes made by very smart thinkers, mistakes that will attract new victims perennially, and in spite of our best efforts, only a distilled and distorted residue of insights survives to become the (still shaky) foundation for the next generation. It’s a real quandary: we want to introduce our students to the best of the best, the highlights and classics, and this creates a canon of works everyone is presumed to have read, but only those who go on to become Kant specialists, or Aristotle scholars, ever replace the cartoon versions they learned as students. It’s an even more pressing problem when we philosophers try to explain to our colleagues in other fields what take-home message can be extracted from the philosophical debate over X or Y. There is no decision procedure for how to navigate between the Scylla of caricature and the Charybdis of jargon-laden, nitpicking, forced immersion in the world of philosophers P, Q, and R. Our colleagues are impatient and want an edited summary; the price they pay is very often the branding of a monolithic ism, to which they can then pledge their allegiance or dismiss as rubbish. Are we scientists going to be Dualists, or Physicalists, or Instrumentalists, or Illusionists, or Eliminative Materialists, or Functionalists, or Epiphenomenalists? Are we for Reductionism or against it?

I tried in Brainstorms to write about the problems in language accessible to all serious thinkers, as jargon-free as possible, with lots of examples. There are philosophers who pride themselves on eschewing examples, much the way novelists pride themselves on not allowing illustrations to gild the lilies of their prose. The philosophers are tying both hands behind their backs. (Recall Richard Feynman’s wise advice for how to penetrate an obscure presentation by a theorist: “Can you please give me a simple example of what you’re claiming?” The inability of a thinker to respond to that polite request is a danger sign.)

Some people just don’t want to get it. Dimly foreseen moral implications, or imagined implications for the meaning of life, for instance, often provoke preemptive strikes against ideas that are felt to threaten one’s settled convictions, one’s home truths. Sometimes the blockade is subtle and unrecognized, and sometimes it is brandished. This is when the art of diplomacy comes in. While a straightforward, in-your-face argument, with carefully articulated premises and fist-pounding emphasis on the logical soundness of one’s reasoning, may feel good, it is often utterly unpersuasive. One sometimes has to sneak around behind people to get them to so much as consider your message. That tactic can backfire, especially when it is addressed to philosophers who have a puritanical opinion of such moves. “Don’t let him duck your question with some seductive digression!” “Make him define his terms!”

Remember, in these discussions, always insist on the first-person point of view. The first step in the operationalist sleight of hand occurs when we try to figure out how we would know what it would be like for others.2

Are my somewhat informal, even playful, expository tactics illicit or just hard for opponents to counter? Nothing I could say at this point should be taken as settling the issue; readers will just have to keep an eye out for subterfuge, and—I implore them—maintain an open mind as best they can. I don’t think I misrepresent anything deliberately or conceal serious alternatives, but I certainly leave large swathes of the literature undiscussed. It should come as no surprise to philosophy students that some of their professors refuse to regard me as a serious philosopher. In their opinion, I don’t behave the way I should: I don’t always cite their work, and I often refuse to discuss in detail the twists and turns of their carefully marshaled arguments. That is not just out of laziness but out of a conviction, right or wrong, that I have already seen and shown the futility of their enterprise and deem it not worth my time and effort to critique further. As Donald Hebb once put it, if it’s not worth doing, it’s not worth doing well. I can see that my unadvertised but unblushing willingness to ignore their efforts is plenty of reason for them to dismiss me. I don’t resent their opinion; it comes with the strategic choices I’ve decided to make. I’ll take my chances, especially since I have colleagues and graduate students who continue to do the dirty work of demonstrating the flaws I see in these approaches, work I try to keep up on. If the armchair theorists are irritated by my cavalier attitude, I sympathize. I get similarly irritated when scientists think that philosophers have nothing to contribute beyond hot air and fancy footwork.

Some of the chapters cast longer shadows than others. Chapter 12, “Mechanism and Responsibility,” was actually my first articulation of the concept of the intentional stance. As an antidote to the widespread intuition that mechanistic explanations always drive out intentional explanations, the three stances set the stage for my later account of free will and moral responsibility (in Elbow Room, 1984, and Freedom Evolves, 2003) but also provided the more fundamental perspective for my account of intentionality or content, as presented in Chapter 1, “Intentional Systems.” That essay was primarily addressed to philosophers, but it spawned a number of influential interdisciplinary essays in Behavioral and Brain Sciences, including “Beliefs about Beliefs” (1978), a commentary that helped trigger the hundreds of false-belief experiments with children and animals, and a target article, “Intentional Systems in Cognitive Ethology: The Panglossian Paradigm Defended” (1983), which inaugurated my multiyear challenge to Stephen Jay Gould’s distorted presentation of evolutionary biology. It also led to my book, The Intentional Stance (1987), which received a further target article treatment in Behavioral and Brain Sciences in 1988. Recently (in From Bacteria to Bach and Back: The Evolution of Minds, 2017) I’ve revised and extended the concept of free-floating rationales, introduced in the 1983 essay, as a major element in my account of competence without comprehension, which clarifies the transition between the use of the intentional stance in explaining animal behavior and in explaining the design rationales exhibited by the products of natural selection.

Content first, and then comes consciousness. Chapter 8, “Are Dreams Experiences?,” foreshadows all the major ideas in my 1991 book, Consciousness Explained, as an astute student of mine, Paul Oppenheim (personal communication), noted a quarter century ago. Chapter 9, “Toward a Cognitive Theory of Consciousness,” contains early versions of arguments I have put forward in recent publications. Chapter 10, “Two Approaches to Mental Images,” outlines the justification for what I later dubbed “heterophenomenology” and sets out the crucial difference between the causes of our beliefs and the intentional objects of our beliefs, a difference that has in general been overlooked by many of the aficionados of David Chalmers’s Hard Problem, who think there are problematic “phenomenal properties.” (There aren’t, but it takes a lot of hard work to get people to see how this can be so.)

Chapter 13, “The Abilities of Men [sorry!] and Machines,” still has important work to do, challenging the recently resumed desire—apparently unquenchable—among mathematicians and logicians to apply Gödel’s Theorem to the issue of whether human beings (at least the mathematicians among them) are demonstrably not mere computers, not mere “meat machines,” because of some powers they exhibit. Chapter 15, “On Giving Libertarians What They Say They Want,” contains a model of free will that I viewed as an instructive throwaway when I concocted it, but it has recently found champions who credit me with finding the only defensible path (the “Valerian model”) to a libertarian position on free will! I have conceded this much to Bob Doyle (the “Information Philosopher”—see http://www.informationphilosopher.com): Thanks to his relentless questioning I can now conceive of a situation in which I, compatibilist that I am, would ardently wish for the truth of genuine indeterminism so that I could act with absolute freedom: if I were playing rock-paper-scissors with God for high stakes! An omniscient and antagonistic (I’m an atheist after all) God could foil me in this competition unless my choice was quantum indeterministic, not merely chaotic and practically unpredictable. Some version of the Valerian model—minus the indeterminism—is all we need for free will worth wanting in this Godless world. Chapter 16, “How to Change Your Mind,” appears in retrospect to be heading toward my current thinking about Bayesian models of (animal) belief as the foundation for the more intellectual and reflective thinking we language-users can engage in (see my From Bacteria to Bach and Back).

Chapter 17, “Where Am I?,” has had a remarkable trajectory. In 1979, a novelist friend attempted to turn it into a coauthored story fit for a Hollywood film, but gave up for legal reasons: his agent found a story published a few years earlier in some sci-fi magazine with a faintly similar story line, and decided Hollywood wouldn’t pay for a story that might unleash a copyright infringement battle. The main scene in the story was dramatically rendered in a 1981 BBC science documentary, “The Human Brain: The Self,” in which I appear, looking at my own brain in a fabulous fountain/vat on a pedestal in front of whirring computer tape drives (remember those?), and wondering why I am saying “Here I am staring at my own brain in a vat” instead of “Here I am, in a vat, being stared at by my own eyes.” In 1984, Lynn Jeffries, then a student at Harvard, produced a Javanese shadow puppet play at the Loeb Experimental Theater, with original music and a cast of hundreds (of cardboard Javanese shadow puppets). In 1987, a feature-length film made in the Netherlands by Piet Hoenderdos, Victim of the Brain, starring Piet, Doug Hofstadter, and me, contained a half-hour version of the whole story (I played the second Dennett body, Fortinbras, and the Dutch vat was larger and far more realistic than the BBC’s whimsical vat). Over the years I have received a variety of letters and emails asking me, apparently in all seriousness, if the story was true; once a student came up behind me after a public lecture and manually checked my skull for protruding antennas (I had enough hair back then so that a visual search was apparently not deemed adequate investigation). Forty years of research in cognitive science has not made the predicament I claimed to find myself in much closer to realization, but that was not the point, of course.

I didn’t invent the idea of the brain in the vat. There have been science fiction stories exploiting it for a century or more, and I wasn’t the first philosopher to entertain the idea. So far as I know, Gil Harman was first, in his 1973 book, Thought, which I taught in a graduate seminar at Harvard when it first came out. The passage is brief:

Or perhaps you do not even have a body. Maybe you were in an accident and all that could be saved was your brain, which is kept alive in the laboratory. For your amusement you are being fed a tape from the twentieth century. Of course, that is to assume that you have to have a brain in order to have experiences; and that might be just part of the myth you are being given.3

I see that I never underlined it, or wrote marginalia about it—my books are usually loaded with footprints from my rambles, and Thought is no exception—but Harman may well have put the idea in my head. I was not particularly interested in the use he and many others have made of the predicament. In the eternal campaign of combatting radical skepticism, it comes out as a modern, high-tech version of Descartes’s evil demon hypothesis. I was interested in what light it could shed on subjectivity and point of view, and personal identity. I saw it as a naturalistic exploration of our concept of a soul or mind, the “owner” of a living body.

I concocted the first version of my tale in 1976 while driving on the Mass Pike from Tufts to Vassar, where I was to give a colloquium. That evening at a party with Oswaldo Chateaubriand and half a dozen students after my talk, I ad libbed the basic tale and we then spent several hilarious hours considering its implications and complications. When I left Vassar for home the next day, I knew what I was going to present as the after-dinner talk at the Chapel Hill Colloquium in October of that year. I would guess that “Where am I?” has been on the syllabus of more philosophy courses than anything else I have written, and some Vassar undergraduates deserve a bit of the credit.

I have tried in this book to bridge divides, to speak to everyone whose attention is worth attracting. No doubt I have failed in many cases to be, in this endeavor, a good advertisement for philosophy, but I continue to be proud to be a philosopher and I still think that this book is a good example of one kind of philosophy worth doing.

 

Daniel C. Dennett

January 12, 2017

Notes