EXCURSIONS TO THE ISLANDS OF THE HAPPY FEW

 

Both the semiotic model of the index and the linguistic model of performativity (and often their combination) become central to the aesthetic of Conceptual art and they also define the specific visuality and textuality of Lamelas’s filmic and photographic work of the late sixties. If, in the former, depiction and figuration are displaced by the mere trace and pure record that the photograph or the film or video recording supply when reduced to pure information, then we encounter in the latter a model of textuality where rhetoric, narrative plot and fiction, agency and psychobiography, are all dismissed as integrally participating in the conditions of the ideological and of myth (in Barthes’s definition).

Benjamin H. D. Buchloh1

More fundamentally, however, an examination by pharmacological means, of the mechanism by which the granularity of activation is engendered (results not shown) indicates that the areas of silence between patches of activity at 40 Hz are generated by active inhibition. Thus, in the presence of GABA A blockers, the spatial filtering of cortical activity described above disappears. These results are clearly in accordance with the findings that cortical inhibitory neurons are capable of high-frequency oscillation (Llinás et al. 1991) and with the view that, if such neurons are synaptically coupled and fire in synchrony, they might be formative in generating cortical gamma-band activity.

R. Llinás, U. Ribary, D. Contreras and C. Pedroarena2

Primary narcissism, however, is not in the focus of the ensuing developmental considerations. Although there remains throughout life an important direct residue of the original position—a basic narcissistic tonus which suffuses all aspects of the personality—I shall turn our attention to two other forms into which it becomes differentiated: the narcissistic self and the idealized parent imago.

Heinz Kohut3

I didn’t go far to find the passages quoted above. Buchloh’s book, the scientific paper on consciousness, and Kohut’s essay on narcissism are all in my study, on bookshelves only steps away from my desk. I have read all of them because in one way or another they have been part of my research. Countless other books I’ve read could have served my purpose just as well, which was simple: I wanted to show that without considerable reading in the fields represented above—theoretical art history, neuroscience, and psychoanalysis—it would be difficult to understand what these people are talking about. What they share is more important to me than what distinguishes them. They are rarefied texts that rely on words known to “the happy few” who are reading them. They all require a reader who is steeped in the subject at hand. He or she already knows what Roland Barthes thought about myth and ideology, can readily identify GABA as gamma-aminobutyric acid, an important inhibitory neurotransmitter in the central nervous system, can distinguish GABA A from GABA B, and has some idea what a “basic narcissistic tonus” might be.

We live in a culture of hyperfocus and expertise. “Experts” on this or that particular subject are continually consulted and cited in newspapers and magazines, on television and the radio. Just think how many times we’ve heard the term “Middle East expert” in recent years, perhaps with some further qualification: “Dr. F. is the leading expert on Syrian and Iranian relations at Carbuncle University.” Each field carves out a domain and pursues it relentlessly, accumulating vast amounts of highly specific knowledge. Except when brought in to make declarations to the culture at large, these people inhabit disciplinary islands of the like-educated and the like-minded. As a roaming novelist and essayist with an academic background in literature, I’ve found myself swimming toward various islands for some time. I’ve reached the shores of a few of them and even stayed on for a while to check out the natives. What I’ve discovered is both exciting and dismaying. Despite the hardships I’ve had penetrating abstruse texts and learning foreign vocabularies (not to speak of innumerable acronyms and abbreviations), I’ve been forever altered by my excursions. My thoughts about what it means to be a human being have been changed, expanded, and reconfigured by my adventures in art theory, neuroscience, and psychoanalysis. At the same time, I’ve been saddened by the lack of shared knowledge. It can be very hard to talk to people, to have them understand you, and to understand them. Dialogue itself is often at risk.

Some years ago, I did extensive research on psychopathy, also called sociopathy and antisocial personality disorder, for a novel I was writing. I read everything I could get my hands on without discrimination. I read psychoanalytic and psychiatric books. I read statistical analyses of psychopaths in inmate populations. I read science papers that measured serotonin levels in criminal sociopaths, and I read neurological cases of people with frontal lobe damage who shared traits with classic psychopaths. It turned out that bibliographies tell all. You see whom the authors quote or refer to, and you know where they live intellectually. Even people researching the same subject are often wholly isolated from one another. For example, a statistician doesn’t give a jot about what Winnicott had to say about sociopathy in his book Deprivation and Delinquency,4 and neurologists don’t bother to investigate what John Bowlby wrote about the effects of early separation on both children and primates in his masterwork, Attachment and Loss.5 Ours is a world of intellectual fragmentation, in which exchanges between and among fields have become increasingly difficult.

Thomas Kuhn, in his book The Structure of Scientific Revolutions, identified these circles in science as “disciplinary matrixes.”6 These groups share a set of methods, standards, and basic assumptions—a paradigm of values. In other words, the people in these groups all speak the same language. In a lecture, the German philosopher Jürgen Habermas addressed these isolates, which are by no means limited to science: “The formation of expert cultures, within which carefully articulated spheres of validity help the claims to propositional truth, normative rightness, and authenticity, attain their own logic (as well, of course, as their own life, esoteric in character and endangered in being split off from ordinary communicative practice)…”7 The parenthetical comment is crucial. It has become increasingly difficult to decipher the logic of these expert cultures because their articulations are so highly refined, so remote from ordinary language that the layperson is left thoroughly confused. Indeed, when reading some of these specialized texts, I can’t help but think of Lucky’s tirade in Waiting for Godot, during which his creator, the erudite Samuel Beckett, made inspired poetic nonsense from the overwrought articulations of academe: “Given the existence as uttered forth in the public works of Puncher and Wattman of a personal God quaquaquaqua with white beard quaquaquaqua outside time without extension who from the heights of divine apathia divine athymbia divine aphasia loves us dearly with some exceptions for reasons unknown but time will tell … that as a result of the labors left unfinished crowned by the Acacacacademy of Anthropopopometry of Essy-in-Possy of Testew and Cunard it is established beyond all doubt all other doubt than that which clings to the labors of men…”8

About a year ago, I was on a flight from Lisbon to New York and beside me was a man reading a neurology paper. Although I usually refrain from speaking to people on airplanes, my abiding curiosity about neurology was too great, and I asked him about his work. He was, as I had expected, a neurologist, an expert on Alzheimer’s disease, it turned out, who ran a large research center in the United States and worked indefatigably with both patients and their families. He was bright, open, affable, and obviously knew as much about Alzheimer’s disease as any human being in the world, an esoteric knowledge I could never hope to penetrate. After we had talked for a while, he looked down at the book I had with me and asked what it was. I told him I was rereading Kierkegaard’s Either/Or. He gave me a blank look. I then explained that Kierkegaard was a Danish philosopher and refrained from using the word famous because I was no longer sure what fame was. I don’t think everyone in the world should have read Kierkegaard. I don’t even believe that everyone should know who Kierkegaard is. My point is that I, too, often find myself in a closed world, one in which I make assumptions about common knowledge only to discover it isn’t common at all. Somewhat later in the conversation I asked him what he thought about “mirror neurons.” Mirror neurons, first discovered by Giacomo Rizzolatti, Vittorio Gallese, and their colleagues in 1995, made a splash in neuroscience and beyond. The researchers discovered that there were neurons in the premotor cortex of macaque monkeys that fired in animals who were performing a task, such as grasping, and also fired in those who were merely watching others perform that same task. A similar neural system has been found in human beings. The implications of the discovery seemed enormous and a great deal of speculation on their meaning began. My Alzheimer’s companion didn’t know about mirror neurons. No doubt they had never been crucial to his research, and I had made another presumptuous gaffe.

The truth is that being an expert in any field, whether it’s Alzheimer’s or seventeenth-century English poetry, takes up most of your time, and even with heroic efforts, it’s impossible to read everything on a given topic. There was an era before the Second World War when philosophy, literature, and science were all considered crucial for the truly educated person. The Holocaust in Europe, the expansion of education beyond elites, the postwar explosion of information, and the death of the Western canon (no more necessity for Greek and Latin) killed the idea that any human being could master a common body of knowledge that traversed many disciplines. That world is gone forever, and mourning it may well be misplaced for all kinds of reasons, but its loss is felt, and a change is in the air, at least in some circles. In his introduction to Autopoiesis and Cognition, a book written by Humberto Maturana and Francisco Varela, Sir Stafford Beer praises the authors for their ability to create “a higher synthesis of disciplines” and assails the character of modern scholarship. “A man who can lay claim to knowledge about some categorized bit of the world, however tiny, which is greater than anyone else’s knowledge of that bit, is safe for life: reputation grows, paranoia deepens. The number of papers increases exponentially, knowledge grows by infinitesimals, but understanding of the world actually recedes, because the world really is an interacting system.”9 Anyone who has even a passing acquaintance with academic life must recognize that Beer has a point.

I remember a conversation I had with a young woman at Columbia University when I was a student there. She told me she was writing her dissertation on horse heads in Spenser. Of course it’s entirely possible that examining those heads led to staggering insights about Spenser’s work, but I recall that I nodded politely and felt a little sad when she mentioned her topic. Years of work on that particular subject did seem incommensurate with my fantasies of an impassioned intellectual labor that uncovered something essential about a work of literature. When I did research for my own dissertation on language and identity in Charles Dickens’s novels, I plodded dutifully through the endless volumes written about his work and realized in the end that only a handful of books had meant anything to me. Linguists, philosophers, psychoanalysts were far more helpful and, had I known then what I know now, I would have turned to neurobiology as well to explicate the endlessly fascinating world of Dickens. The theories I drew from then often came from the same pool as those of my fellow students in arms. In the late seventies and early eighties, French theory was the intellectual rage in the humanities, and we eagerly digested Derrida, Foucault, Barthes, Deleuze, Kristeva, Guattari, Lacan, and various others who were called upon to explicate not just literature but the textual world at large. Hegel, Marx, Freud, Husserl, and Heidegger also lurked in the wings of every discussion, but science was never part of a single conversation I had during those years. Wasn’t science part of the ideological superstructure that determined our dubious cultural truths?

While I was doing research for an essay I was writing on the artist Louise Bourgeois, I read a book called Fantastic Reality by Mignon Nixon.10 The author uses psychoanalytic concepts, Melanie Klein’s idea of part objects, in particular, to elucidate Bourgeois’s work. Nixon’s analysis is intelligent and often persuasive. Along the way she mentions Klein’s famous patient, Little Dick, whose behavior, she says, resembles Bruno Bettelheim’s description of autism in The Empty Fortress. After discussing Bettelheim’s machinelike patient, the autistic Joey, she moves on to Deleuze and Guattari’s description of him in Anti-Oedipus: Capitalism and Schizophrenia. They also use Joey to further their particular argument. My purpose is not to critique Nixon’s analysis of Bourgeois or Anti-Oedipus, a book I read years ago with interest, but rather to suggest that at every turn Nixon’s sources are predictable. They indicate a theoretical education rather like the one I acquired during my years as a graduate student. She follows a preestablished line of thinkers worth mentioning, moving from one to another, but never steps beyond a particular geography of shared references. An investigation of contemporary ideas about Theory of Mind or the current science on autism, which is entirely different from and at odds with Bettelheim’s ideas, doesn’t enter her discussion. She is not alone. Islands are everywhere, even within a single discipline. I’ve noticed, for example, that continental and Anglo-American analytical philosophers often don’t acknowledge that the other exists, much less do they deign to read each other.

The realization that the strict borders drawn between one field and another, or between one wing and another within a field, are at best figments may well be behind a new desire for communication among people with varying specialties. Philosophers have turned to neuroscientists and cognitive researchers to help ground their theories of consciousness. Their various musings are printed regularly in the Journal of Consciousness Studies, which has even published a literature professor or two. The philosopher Daniel Dennett draws from both neuroscience and artificial intelligence to propose a working metaphor for the mind—multiple drafts—in his book Consciousness Explained.11 The neurologists Antonio Damasio12 and V. S. Ramachandran13 evoke both philosophy and art in their investigations about the neural foundations of what we call “the self.” The art historian David Freedberg, author of The Power of Images, has leapt into neuroscience research to explore the effects of images on the mind-brain.14 He was among the organizers of a conference I attended at Columbia University that hoped to establish a dialogue between neuroscientists and working visual artists. And yet, it isn’t easy to make forays out of one’s own discipline. The experts lie in wait and often attack the interlopers who dare move onto their sacred ground. This defensiveness is also understandable. Specialists in one field can make reductive hash of another they know less well. To my mind, conversations among people working in different areas can only benefit everyone involved, but the intellectual windows that belong to one discipline do not necessarily belong to another. The result is a scrambling of terms and beliefs, and often a mess is made. The optimistic view is that out of the chaos come interesting questions, if not answers.

In the last couple of years, I’ve attended the monthly neuroscience lectures at the New York Psychoanalytic Institute, and through Mark Solms, a psychoanalyst and brain researcher who has spearheaded a dialogue between neuroscience and psychoanalysis, I became a member of a study group, led by the late psychoanalyst Mortimer Ostow and the neuroscientist Jaak Panksepp, that met after those lectures. The group has since been reconfigured, but during the year I regularly attended meetings of the first group, I found myself in a unique position. I was the lone artist among analysts, psychiatrists, and neuroscience researchers and was able to witness the dissonant vocabularies of the various disciplines, which nevertheless addressed the same mystery from different perspectives: how does the mind-brain actually work? Although the language of psychoanalysis had long been familiar to me, I knew next to nothing about the physiology of the brain. During the first meeting, I worked so hard to understand what was being said and felt so mentally exhausted afterward that I fell asleep at a dinner party that same evening. I ordered a rubber brain to try to learn its many parts, and began to read. It took innumerable books and many more papers before I was able to penetrate, even superficially, the neuroscientific aspects of the discussion, but as the fog lifted somewhat, I felt I was in a position to make a few observations. Unsurprisingly, our conversations were riddled with language problems. For example, was what Freud meant by primary process equivalent to a similar idea in neuroscience used by Jaak Panksepp? Both sides agreed that most of what the brain-mind does is unconscious, but is the unconscious of neuroscience harmonious with Freud’s idea of the same thing? Or, for example, when scientists use the words neural representations in talking about brain function, what do they mean? Exactly how do neurons represent things, and what are those things they’re representing? Is this a form of mental translation—the internal reconfigurations of perception? The words neural correlates are also interesting. I have a better sense of this. I’m feeling angry, and when I’m feeling angry, there are neuronal networks in parts of my brain that are actively firing. In order to avoid saying that those excited neurons are anger, scientists speak of correlates. Language counts. Not always, but often, I listened as people talked right past each other, each one speaking his own language.

Words quickly become thing-like. It has often fascinated me how a psychoanalytic concept such as internal object, for example, can be treated as if it weren’t a metaphor for our own inner plurality that necessarily includes the psychic presence of others, but as if it were something concrete that could be manipulated—like a shovel. I’ve also listened in amazement to analysts talk about the ego almost as if it were an internal organ—a liver, say, or a spleen. (I can’t help feeling that Freud’s Ich, our English I, with its pronominal association, might have been a better translation choice than ego.) A pharmacologist I met in the group referred to this phenomenon as a “hardening of the categories,” a phrase that struck me as both funny and wise. Names divide, and those divisions can easily come to look inevitable. Neurons, of course, aren’t like internal objects, egos, or ids. They are material in a way that psychic categories aren’t, but making sense of them calls for interpretation, nevertheless. As Jaak Panksepp likes to say, scientific research doesn’t make reality pop up like magic. He is not a victim of a naïve realism that reaches out for “the thing in itself” and grabs it. Hard science is a plodding business of findings and refindings and findings again. It is incremental, often contradictory, and dependent on the creativity of the mind doing the research, a mind that can grasp what the research means at the time, a meaning that may well change. At the end of many science papers there is a section called Discussion, where the researchers tell the reader how their study might be understood or how it could be followed up. Results rarely speak for themselves.

In the group, the problem of the self came up several times. What constitutes a self? True to my education, I had often located the self in self-consciousness, a dialogical mirroring relation between an I and a you, and I believed that we create ourselves through others in narratives that are made over time. From this point of view, the self is a convenient if necessary fiction we construct and reconstruct as we go about the business of life. It was always clear to me that most of what we are is unconscious, but that unconscious reality has never seemed singular to me, but plural—a murmuring multitude. I have since modified my views. Language is important to forms of symbolic self-consciousness, an ability to see oneself as another, to hurl oneself forward into the future and remember oneself in the past, but not to consciousness or a feeling of selfhood. Surely animals, even snails, have some form of a self and are awake, alive, and aware. Does that awakeness, with its desires and survival instincts, its aggressions and attachments, constitute a core or primordial self, as some would have it, or is a self simply the sum of all our conflicted and fragmented parts, conscious and unconscious? People with damage to left language areas of their brains often hold on to a strong sense of themselves and can be profoundly aware of what they have lost. Some people with devastating lesions nevertheless retain an inner sense of who they are. Luria recorded a famous case of just such a person, Zazetsky, in The Man with a Shattered World.15 After suffering terrible head injuries in the Second World War, he spent his days trying to recover the bits and pieces of his ruined memory in a journal he kept until his death. Other forms of neurological injury, in the right hemisphere especially, can cause far greater disruption to a person’s sense of being, which suggests that while language is important, it doesn’t determine our identities. But for this discussion what interests me is not my own evolving view of what a self might be, but that for neurobiologists and analysts alike this global question of what we are is tortuous and necessarily calls upon a philosophical orientation, an ability to loosen the categories, juggle the frames, and be free enough to question even those ideas one has held most dear. Human inquiry requires making borders and categories, and dissecting them, and yet these divisions belong to a shared, articulated vision of how things are that is not arbitrary, but neither is it absolute. Unless one believes in an ultimate view, a supreme, disembodied scientific observer, we cannot find a perfect objective image of things as they are. We are neither angels nor brains in vats, but active embodied inhabitants of a world we internalize in ways we don’t fully understand. Merleau-Ponty makes a distinction between philosophy and science that is valuable: “Philosophy is not science, because science believes it can soar over its object and holds the correlation of knowledge with being as established, whereas philosophy is the set of questions wherein he who questions himself is himself implicated by the question.”16 Whether one agrees with Merleau-Ponty’s phenomenology or not, it seems clear that in science, as well as in philosophy, the observer’s relation to the object must be considered.

Every discipline needs its philosophy, or at least its ground rules. Another one of my island excursions has been to a hospital. I have a volunteer job as a writing teacher for psychiatric inpatients at the Payne Whitney Clinic. For its inmates, the clinic on the eleventh floor of New York Hospital is a world unto itself. The patients live in locked wards, are under continual supervision, and many of them don’t know when they will be able to leave. Some are eager to get out. Others are afraid to return to their lives. Some of those who are released come back before too long, and I wonder how warmly I should greet them when they walk through the door of the classroom, even when I’ve missed them. Each person has been diagnosed with one or several of the many disorders that are found in the Diagnostic and Statistical Manual of Mental Disorders, now in its fourth edition. The authors of the manual state in their introduction: “In the DSM IV, there is no assumption that each category of mental disorder is a completely discrete entity with absolute boundaries dividing it from other mental disorders or from no mental disorder.”17 Despite this caveat, I’ve discovered that at least for patients these diagnoses can become surprisingly rigid. The diagnosis and the patient are identified to such a degree that there is no escape, no blur, nothing left over for the person to hold on to once he’s been designated as bipolar, say, or schizophrenic. In many ways, this is not strange. A mental disorder isn’t a virus or a bacterium that attacks the body from the outside. If I am continually hearing voices from another dimension or am so depressed that I lie in my bed without moving day after day or am churning out fifty pages of poetry during a period of a few hours in a flight of manic joy or am reliving an assault in horrifying uncontrollable flashbacks, who is to say these experiences are not of me?

One afternoon, several students in my class began to talk about their diagnoses. A young woman turned to me and said in a pained voice, “To call someone a borderline is the same as saying ‘I hate you.’” It wasn’t a stupid comment. She had understood that the characteristics that define borderline personality disorder in the DSM, “inappropriate, intense anger” among them, could well be construed as unflattering traits, and ironically and perhaps true to her diagnosis, she had interpreted the clinical name as a blow. Another patient then helpfully volunteered: “My doctor says she treats symptoms, not diagnoses.” This more flexible position is also complicated. I have noticed that patients are often dealt with as a jumble of symptoms—sleeplessness, anxiety, thought disorder, auditory hallucinations—almost as if these traits were disembodied and easily distinguishable problems, each of which calls for a pharmacological solution. I am not an antidrug person. I have seen people who, over a period of a couple of weeks, improve remarkably when they change their medicines. I also witnessed a wild, incoherent, out-of-control patient, who, after ECT (once called electroshock, the nightmare treatment of popular imagination), seemed so much better, so normal, that it took my breath away. I know the effects don’t always last. I know there are risks. A psychiatrist I met who works with inpatients said she was continually aware not only of the potential dangers of treatment but also that fixing one thing may mean the loss of another. “How are you?” I asked a talented writer who had been in my class for over a month. She seemed quieter than I had ever seen her and far less delusional. “Not so good,” she answered, tapping her temple. “Lithium head. I’ve gone dead in there.” Is the dead self the well self? Would a little less lithium help to create a more normal self? As a person who has what psychiatrists would call hypomanic phases, which manifest themselves in excessive or perhaps obsessive reading and writing, often followed by crashes into migraine, I was full of sympathy for the patient. The difference between us is one of degree. Somehow I manage, and she doesn’t.

No one would argue that a person is his diagnosis, and yet no one would argue that the characteristics that define an illness aren’t part of the person. Even I, layperson that I am, have found myself silently diagnosing students in my classes, especially those with florid symptoms. Once you are immersed in the jargon and have read enough, it seems to come naturally. And yet, I know that in another era, or even a few years ago, the names and boundaries for the various illnesses were different and, to my mind, not necessarily worse than the ones that exist now. With each new edition, the DSM shifts its descriptive categories. New illnesses are announced. Old ones drop out. What interests me is that my perception of patients’ disorders, colored by doubt as it is, has been shaped by the givens of current psychiatric expert culture. I have begun to see what I’m looking for. The discursive frames orient my vision. My familiarity with them certainly makes it possible for me to talk to people in that world, to ask informed questions, to find out more, to continue my life as a curious adventurer. But one may well ask: just because I like hanging out on islands and chatting up the inhabitants, is there anything really wrong with specialization, with knowing as much as one can about Alzheimer’s or borderline personality or horse heads in Spenser? Would reading philosophy, history, art theory, linguistics, neuroscience, literature, or even psychoanalysis (now that it’s marginal to the profession) be beneficial, for example, to the doctors on the wards where I go to teach every week? Do we really want psychiatrists deep in Immanuel Kant, Hippolyte Taine, Erwin Panofsky, Roman Jakobson, D. O. Hebb, Fyodor Dostoyevsky, and the inimitable but still controversial Sigmund Freud? What’s the point? Nobody can know everything. Not even in the lost world of the thinkers who shared a vision of the educated man (it was a man, I’m sorry to say) did people know all. Even then there were too many books, too many fields, too many ideas to keep track of.

In The Man Without Qualities, Robert Musil’s character, General Stumm, hopes to search out “the finest idea in the world.” Chapter 100 of Musil’s huge, unfinished novel is Stumm’s account of his visit to the library. He is too embarrassed to use the phrase “the finest idea in the world” when he asks for a book, confessing that the phrase sounds like a “fairy tale,”18 but he hopes that the librarian, familiar with the millions of books to be found in that palace of knowledge, will guide him toward it. He can’t specify what he wants—the book in his mind isn’t about a single subject.

My eyes must have been blazing with such a thirst for knowledge that the man suddenly took fright, as if I were about to suck him dry altogether. I went on a little longer about needing a kind of timetable that would enable me to make connections among all sorts of ideas in every direction—at which he turns so polite it’s absolutely unholy, and offers to take me into the catalog room and let me do my own searching, even though it’s against the rules, because it’s only for the use of the librarians. So I actually found myself inside the holy of holies. It felt like being inside an enormous brain.19

The endless volumes, the bibliographies, the lists, categories, compartments for this subject and the other have a traumatizing effect on poor Stumm, who becomes only more disoriented when the librarian confesses that he never actually reads the books in the collection, only their titles and tables of contents. “Anyone who lets himself go and starts reading a book is lost as a librarian,” he tells Stumm. “He’s bound to lose perspective.”20

Musil’s comedy summons a truth. Losing perspective is an intellectual virtue because it requires mourning, confusion, reorientation, and new thoughts. Without it, knowledge slogs along in its various narrow grooves, but there will be no leaps, because the thinner my perspective, the more likely it is for me to accept the preordained codes of a discipline as inviolable truths. Doubt is the engine of ideas. A willingness to lose perspective means an openness to others who are guided by a set of unfamiliar propositions. It means entertaining a confounding, even frightening and radical form of intersubjectivity. It also means that however happy you are among the few residents of your particular island, that little island is not the whole world.

2007