Dreams into Things; or, Art
Many Worlds
On October 24, 1648, the Peace of Westphalia was signed, bringing about an end to the Thirty Years’ War, a conflict that had, by most estimates, claimed the lives of eight million people, military and civilian, throughout Europe. There had been seemingly no end of atrocities, men tortured and hung up in iron cages, severed heads on spikes placed outside the gates of cities as a warning and a threat. There was no sense in the era that the warring parties were all, in view of their shared Christianity, ultimately on the same side. Catholicism and Protestantism appeared as distant from one another, and as irreconcilable, as Islam and Christianity do today from the point of view of the most ideologically intransigent jihadist or the most Islamophobic European.
The 1648 peace treaty contained the seeds of the modern global order based on the sovereignty of nation-states. Two years after it was signed, the great rationalist philosopher René Descartes died. At the core of his philosophical project, as we began to see in the previous chapter, was a quest for certainty that he was not dreaming or hallucinating, but that the world as he experienced it, and even his own consciousness, were real. The philosopher himself had served in the army of Maximilian of Bavaria and was present at the Battle of White Mountain near Prague in 1620. It is at least conceivable that he witnessed significant death and injury, and soldiers who had lost arms and legs but continued to feel pain in them. Years later he would write of the problem of phantom limbs, of the challenge they pose for our understanding of the distinct concepts of body and mind.
Descartes’s successor in the rationalist tradition, Leibniz, was born in 1646, and spent his early life in the fragile but hopeful new postwar reality of Protestant Saxony. Leibniz’s career, both as a diplomat and as a philosopher, would be devoted to reconciliation of opposed camps—Protestants and Catholics, Cartesians and Aristotelians, any two parties that believe themselves to be in fundamental disagreement. For Leibniz, as we have already briefly seen, such belief is always based on an illusion, for in fact all human minds, as reflections of the same divinely created rational order, believe, deep down, fundamentally the same thing. The task of philosophy then, for Leibniz, is to clarify our terms to the point where we are all able to see that we in fact agree. Of course Leibniz’s vision seems wildly optimistic to us today, as we tend to suppose that the reasons politicians give for going to war are only ad hoc pretexts for grabs at power and territory in which the consideration of who is in fact right or wrong is something close to a category mistake. But Leibniz’s vision shows just how much stock was placed in reason at the beginning of the modern period.
Another rationalist philosopher, Spinoza, was so hopeful about the power of reason to solve human problems that he wrote a work on ethics modeled after the rigorous deductive style of Euclid’s work on geometry. Spinoza’s conclusions do indeed follow from his axioms and propositions. He refused to acknowledge, however, that in matters of ethics, unlike geometry, the first principles to which one commits oneself have a great deal to do with one’s cultural values, one’s contingent attachments, and are far from self-evident truths. Spinoza, like other rationalist philosophers, was also very nearly phobic about the faculty of the imagination, which, as was usual in the era, he understood, as the word suggests, in part as the power of the mind to generate images. The faculty of reason deals with pure concepts, but the imagination falls back on visions, phantasms, hallucinations of sorts, when reason proves too weak to go forth on its own without crutches. It is, to return to our own metaphor, the bright-colored dye that makes the invisible visible, even as it distorts the creature’s true nature and threatens to destroy it altogether. It is the imagination, Spinoza suggests, that is at the root of all superstition, and therefore of all suffering.
But while the philosophers were busy designing ways of escaping from madness and illusion, and suppressing the faculty of imagination that served as a gateway to these, the storytellers, the novelists, and the artists were contriving delirious new forms of them. No one provides a sharper contrast with Descartes than his near contemporary, the Spanish novelist Miguel de Cervantes, whose character Don Quixote seems to assure us that all our pursuits in life may be a dream, that we may in fact never be able to determine whether we are mad or not, and that in the end this is simply the human condition—and indeed a basic existential fact in which we may take delight. Over the course of the following century, the genres of fantasy and science fiction would enjoy a boom, with authors such as Savinien de Cyrano de Bergerac and Margaret Cavendish allowing their imaginations to roam freely in those directions that rationalist philosophy had sought to limit.
Dreams, fictions, and artistic creation in general are species of the same genus, as all involve submission to the sort of fantasies to which the mind is naturally inclined, and which reason compels us to keep always at bay. All take us off to other worlds, to other possibilities, while reason tells us that there is only one world. To live according to reason is to live in that one world, which is shared and common, while to lapse into unreason, whether waking or sleeping, is to drift off into a private and unshareable world.
Bleeding Out
Novels, and not only those in the “romance” genre narrowly defined, are capable of evoking passions in the reader, largely as a result of the way in which they play on the imagination. In an older and by now mostly forgotten meaning of the term, “passion” was understood simply as the opposite of “action,” where an “agent” is one who acts, a “patient” one who undergoes the consequences of an action. We can see how the old sense led to the new one: we say that people who have “fallen” in love are “swept off their feet” or “bowled over”; the common French expression for falling in love at first sight, un coup de foudre, invokes a lightning strike. To fall in love, or to be overcome with anger, or jealousy, or joy, is to lose self-control, to come under the control of external forces, working on us through the body.
Such loss of self-control has generally been understood as an expression of irrationality. And yet we find ourselves in bodies—there is nothing to be done about it, at least not as long as we are alive—and so we must somehow come to terms with the fact that we are going to be, to some extent, determined in the course of our human affairs by the fluctuations of our passions. Even Descartes, who believed that the soul, the true locus of our individual selfhood, is entirely immaterial and only contingently wrapped up with the body, nonetheless wrote an entire treatise, the 1649 Passions of the Soul, accounting for the ways in which our bodily, passionate existence defines who we are. Descartes knew we could not fight the passions but must rather modulate them to the extent possible so as to make them work in accordance with reason. A century later David Hume would reverse this approach, maintaining that “reason is, and ought only to be the slave of the passions.”1 The Scottish empiricist is not arguing here that we should all abandon ourselves to unreason, but rather that the body is naturally outfitted and disposed to operate in a rational way, and we are only complicating things if we attempt to find a priori rules of conduct that the mind would haughtily dictate to the body in advance of any experience.
The history of philosophy does not so much resolve as mirror some of the most common tensions of human social life: whether one should listen to “the head or the heart,” whether one should trust one’s gut feelings or reason things through. These are clichés, but their very existence and endurance provides an important illustration of the depth of our attachment to something other than reason as the source of meaning in human life.
Visual art, too, and not only literary fiction, works through the body, at least to the extent that it sends us visual images, or indeed sonic waves, which move through our eyes or ears, and, eventually, affect our mind or soul for better or for worse. This basic condition of the experience of art has been seen as both a threat and an opportunity throughout the history of Western thought, not least in philosophy. In his 1795 Letters on the Aesthetic Education of Man Friedrich Schiller described in detail how art might be employed to cultivate the feelings of a developing psyche, eventually yielding a grown human being who is a slave neither to reason nor to sensual impulse. But typically any hope that is placed in certain exemplary works of art, or in certain genres, comes at the expense of others, and the history of promotion of art’s edifying value is inseparable from the history of censorship and of the chauvinistic hierarchization of taste.
In the Republic Plato had been particularly wary of music, as an art form that works directly on the body, without any role for the rational soul. What is music about? Unlike literature, and unlike most visual art, music generally refuses to say, and only entrances us with its otherworldly call. The Greek philosopher had been mostly concerned about particular modes, while in the twentieth century most calls for the censorship of music—at least those that did not focus on lyrics, the nonmusical element of a subset of musical works—have been concerned with certain types of rhythm, particularly those that American bigots of the 1950s associated with the “jungle.” In the seventeenth century it was the dances that accompanied the music of southern Italy that caused consternation throughout Europe, such as the tarantella, held to be directly descended from ancient Bacchanalian rites, and to carry with it the danger of not being able to stop dancing once one has started: like the fear of rock and roll, this too was a fear of irreparable loss. Again and again, we see the same fear returning, that music’s siren song will pull our loved ones, especially our children, away from us, into the domain of unreason, a vaguely sensed parallel world, where bodies rule.
The history of censorship, at the same time, reveals to us alternating strategies for dealing with this threat: again, do we simply try to suppress the danger, or do we recognize that it is to some extent ineliminable, and attempt to lasso it and train it toward the allegedly rational ends of society? Authoritarian regimes, typically, are intent on passing off the society they control as the only possible one, as necessary and inevitable. Consequently, the imagining of other possible worlds, even if they are only fictional, is subject to tight control. Even the imagining of this world, but through a lens that seems borrowed from another world—a lens that shows the world through officially unrecognized registers or moods—already drifts too far from the version of actuality the regime seeks to enforce.
In the early years of the Soviet Union, Isaak Babel was the great chronicler of the lives of the poor Jews of Odessa, the gangsters, small farmers, clueless rabbis, fat girls in love with cretinous boys, at the time of the Bolshevik Revolution, and for a few years thereafter. In Babel’s world, as on the dance floor, bodies rule. He was early on a protégé of Maxim Gorky, who would never fall from grace during Stalin’s reign, when socialist realism was set up as official state aesthetic ideology. Babel, by contrast, was arrested by the NKVD (the Soviet secret police, and predecessor to the KGB), and his death sentence was personally signed by Stalin’s henchman Lavrenty Beria. He was murdered by firing squad in 1940.
Babel had done his best, under socialist realism, to work, as he put it, in the new literary genre of silence. But his stories from the early 1920s were too memorable not to echo. Their crime, if it must be made explicit, is nothing other than to show the joy and confusion of life, to portray characters who are both good and bad, insightful yet inarticulate, and generally unable to think about their misery or their momentary triumphs through the lens of class consciousness. Sometimes they invoke working-class solidarity, but generally in ways that show they have not really grasped the concept. They often smell bad: odors of milk and flesh emanate from Babel’s characters, and right off the pages. Gorky would later, when Babel had fallen into ill repute, complain of his protégé’s “Baudelairean predilection for rotting meat.”2 Babel’s work is vital, raucous, politically disobedient, and hilarious.
As Mary Douglas reminds us, the essence of humor is to thrust us back into our bodies in social contexts in which these are supposed to be screened out, in which we are supposed to conduct ourselves as if we were pure disembodied intellects. The reason our bodies are so offensive has something to do with the fact that they are always rotting, or threatening to rot, and that we must engage in considerable upkeep to ensure that this not happen. We are mortal and corruptible, in other words. The official philosophy in the context in which Babel was writing was dialectical materialism, which taught among other things that everything that exists is a corruptible body. But this philosophical commitment did not prevent a humorless and oppressive disdain for the living body from finding its way back into arts and culture.
In 1946, after Babel was dead, Stalin’s lead censor, Andrei Zhdanov, would give a speech3 criticizing the literary magazine Zvezda for having published a story by Mikhail Zoshchenko entitled “The Adventures of a Monkey.”4 The censor complains that the author had “portray[ed] Soviet people as lazy, unattractive, stupid and crude. He is in no way concerned with their labour, their efforts, their heroism, their high social and moral qualities.”5 Zhdanov says that it is characteristic of “philistine” writers to emphasize the “baseness and pettiness” of people, and he cites Gorky as an authority in support of this view. But Gorky’s own protégé had shown a generation earlier that we must fearlessly enter into the baseness and pettiness of people; we must attend to the small conversations at the wedding feast in Odessa and hear of extortion schemes, of petty plans to be buried in the best spot in the cemetery. Only thus may we gain experience through literature of something close to human love: love for imperfect, fallen, desperate souls, an emotion that, amid his moralistic elevation of the virtues of solidarity and exemplary heroism, remains entirely beyond the horizon of Zhdanov’s limited artistic sensibility.
Babel’s fate is in important respects yet another echo of the fate of Hippasus, who was drowned by his fellow mathematical cult members for speaking publicly of irrational numbers. On the surface the two are indeed different: the Pythagorean divulged a new discovery and a well-guarded secret, while the Russian author described what people have already known for as long as they have come together in families and communities—that human beings are obscene, petty, vain, selfish, and loving. But Babel also divulged a discovery of sorts, namely, a literary innovation, in which he discerned how, like few before, to capture these real traits of real people in a lucid, honest, and verisimilar way. His characters were not idealizations; they did not belong to an ideal realm of reason and virtue, but rather revealed the complexities and contradictions that prevail wherever there are real people. And for this he had to die, sacrificed to an ideology whose adherents believed, in spite of all evidence, that the irrational kinks in the relations among human beings would soon be ironed out, by force of political will, and that society would be structured in a rational way unsusceptible to undermining by the vicissitudes and passions of imperfect individuals. Unlike Hippasus’s murder, Babel’s, we know, happened, and we know exactly why it happened. We know that the rubbing out, in the name of rationality, of people who acknowledge the existence of irrationality is not a legend, but part of the regular course of human affairs.
While one would not wish to lend them even a faint hint of support, the censors employed by various regimes throughout history are not entirely wrong to believe that fictional worlds cannot be entirely contained, that in the simple invention and description of them there is some real risk of their seeping out into reality and altering it: that world description is at the same time world making. The very word “poetry,” in the broad sense of the creative spinning out of possible alternative realities, is derived from poiesis, the primary meaning of which is “making.” This sense endures in strange relics of English vocabulary such as the word “playwright,” which evokes not the mere writer, but rather the wheelwright or the shipwright, artisans who actually bring some new entity into existence through their labor. Do writers too introduce something into the world, something that was not there before? I have on occasion felt a reaction to powerful literature along these lines: this really should not have been allowed; someone should have censored this. I recall in particular the titular character in Philip Roth’s Sabbath’s Theater inducing such a thought, as well as virtually every line from Louis-Ferdinand Céline. As with the alphabets of Tlön in Borges’s famous story, I have sometimes felt in reading that I am seeing “the first intrusion of the fantastical world into the real world,”6 and it seems dangerous indeed.
Some things that are said have the strange quality of bleeding out from within the quotation marks we use in the vain hope of containing them.7 Ordinarily philosophers distinguish between the use of a word and its mention: if I say that I heard someone on the metro today saying the word “chien,” it does not follow that I have just spoken of Canis familiaris. I might not even know, when I give you this report, what the word “chien” means. But if your child tells you that someone at school said the word “fuck,” then he cannot resort to the claim that he has not himself just used that word. It bleeds out of the quotation marks; it’s powerful enough to override the use/mention distinction. This is something like the experience of powerful literature I have attempted to describe: it is as if Philip Roth has inserted into reality a character as morally rotten as Mickey Sabbath, and it does not seem entirely satisfactory to protest that in effect the entire life of that character occurs between something like quotation marks, between the covers of a book that announces itself as a novel. Fictional worlds are possible worlds not just in the sense that they are nonactual; when we spin them out in writing, they come to seem—if I may deploy a phrase that verges on oxymoron—like real possibilities.
Babel’s spinning out of a fictional world (which, again, was really a description of a slice of our real world, seen in a certain mood and register) had real and tragic consequences for him. He was murdered, in the end, because the officials did not want the mood and register he evoked to be part of the reality under their control. The register they preferred was so-called realism, which depicted a world that exists nowhere, full of morally transparent heroes and villains, and simple and straightforward contrasts between right and wrong. The world depicted in socialist-realist literature is an impossible world.
To kill off or imprison those who invent worlds according to their own unauthorized vision of life is of course a great crime. And yet it at least recognizes something that the familiar liberal argument against censorship typically ignores, in its insistence on a dichotomy between the ideal and the real realms, between stories and history. The early modern period witnessed such an intense proliferation of imaginings of possible worlds, such a fruitful hybridity of fiction and philosophy, in large part because it was a period of intense and sustained efforts to rethink the actual world and our place in it. The long-term consequences of this rethinking included upheavals and revolutions in both the political and scientific realms, but they could not have been brought about—or even, likely, begun—without the art of imagination, an art the strict enforcers of reason have repeatedly warned against and sought to control or suppress.
Genies, Genius, and Ingenium
As we have already seen, for some early modern philosophers the mental faculty of imagination was conceived as a sort of waking dream, as it involved the production of images of things that are not, strictly speaking, there. Philosophers were divided in their assessment of it: was it a regrettable tendency to which the human mind is prone, or could it be mastered and channeled in productive and rational ways? Virtually no one thought that the imagination should simply run free. The task of mastering it was generally thought to be a central part of the project of “improvement of the intellect,” to cite part of the title of a 1662 treatise by Baruch Spinoza.8
Because imagination involves the production of images, it was typically seen as a fundamentally bodily process, or at least as occurring at the point of intersection between the mind and the body. It was generally thought that some mental faculties, such as intellect or understanding, could carry on as usual even if the body with which a mind is associated were to disappear from existence. But imagination involves bodily sensation, and so the body cannot be removed from the use of this faculty, and the proper training of it involves knowing when reliance on it is useful, and when by contrast it is simply a distraction. Thus in the sixth of his Meditations Descartes makes the case that in geometry we represent to our minds, or on paper, an image of a polygon; but this is just a representation, pleasing to the imagination, while ultimately unnecessary to a rational mind that is powerful enough to grasp the various properties of a polygon without having to envision it.9 A great rational mind could do all geometry, even geometrical proofs involving a thousand-sided chiliagon, without ever having to sketch the plane figure in question on paper or imagine it in his head. The imagination is a mark of human weakness, perhaps sometimes a necessary crutch, but always to be kept in check by reason or understanding, to which in turn it always poses a threat. Even at its best, imagination is only the more salutary expression of what can easily degenerate into “feverish imaginings,” the upward motion of vapors toward the brain, which causes us to see what is not there. At its worst, imagination is to reason what idolatry is to true religion: a mistaking of the sensual representation for the thing itself.
While an image rendered to the mind with the help of the imagination might be useful, its more or less consistently evil twin, so to speak, is the phantasm, rendered to the mind by means of fantasy. In his Pensées of 1670, Blaise Pascal would describe fantasy, along with opinion, as the “mistress of error,” and as a “proud power” (superbe puissance) that stands as an “enemy of reason.”10 Fantasy is for him a sort of antireason, fighting it out with its positive counterforce in a perpetual Manichaean struggle. Fantasy works for evil rather than good, Pascal thinks, to the extent that it makes no distinction between the true and the false, but rather represents both what exists and what does not exist with the same faithfulness. Fantasy amounts to a second nature in human beings, seeking ever to dominate and control our reason. It often prevails, since it is better at making us happy, especially when we are not by natural disposition wise. Fantasy “cannot make the mad wise, but it makes them happy, to the envy of reason, which can only make the friends it has miserable.”11 Pascal for his part was certainly no rationalist in the strict sense, with a capital R, as were Descartes, Spinoza, and Leibniz, since he believed that faith must lead us to our ultimate commitments, while reason always comes up short in human life. Nonetheless, when it is not faith but rather imagination or fantasy that stands opposed to reason, Pascal agrees with the Rationalists that these latter faculties are sooner worthy of our suspicion than of being exalted as what makes us distinctly and most excellently human.
In the most recent era we have largely lost this wariness of imagination and fantasy. We no longer associate them with unreason, let alone with madness. On the contrary, the use of the imagination is now a central part of all but the most conservative and backward-looking educational philosophies. Children grow smart by cultivating their imaginations, and imaginations must be cultivated, allowed to grow on their own (even if channeled in this more promising direction rather than that less promising one, by use of clever wooden toys rather than violent video games), rather than being curtailed, domesticated, or dominated. We are aware that some people might get so lost in their imaginations as to become altogether disconnected from reality—might, for example, get so lost in fantasy fiction as to be unable to pay their bills and show up to appointments—and we recognize that this is a problem. But it is not generally thought to be a problem already incipient in any indulgence of the imagination whatsoever. The badness of such disconnection, it is generally thought today, no more makes imagination bad than burned food makes cooking as such bad.
But what changed, exactly? Who are the ancestors from whom John Dewey, Maria Montessori, and other enlightened, pro-imagination pedagogues of the twentieth century descended, if not Descartes, Spinoza, Leibniz, and Pascal? The short answer is that we are living out, and have been for a long while now, the dual legacy of rationalism and romanticism. This becomes particularly clear when we chart the transformation, since the seventeenth century, of the concept of “genius.”
The Latin ingenium, as it occurs in early modern philosophical texts, is sometimes inadequately translated as “genius,” a choice that conceals the incredibly complex history of the Latin term. We have already encountered the notion in Spinoza’s treatise on the improvement of the intellect (the Tractatus de intellectus emendatione). Although the term can be translated as “intellect,” a more appropriate rendering would come in the form of a multiword gloss: ingenium is the propensity for learning, or the aptitude for discovery, or any number of other, similar, ultimately inadequate variations. For Cicero “ingenium” had designated the “innate seeds of virtue, which, if they are able to grow, by nature itself will lead us to a happy life.”12 Writing in French in the Discourse on Method of 1637, Descartes uses the term bon sens—not quite what he would have intended by ingenium had he been writing in Latin, but also not part of a completely different semantic cluster—in order to suggest, mockingly, that people falsely believe themselves to possess all of it that they might need, indeed all that any human being might hope to possess: “Good sense is the best distributed thing in the world,” he writes, “for everyone thinks himself so well endowed with it that even those who are the hardest to please in everything else do not usually desire more of it than they possess. In this it is unlikely that everyone is mistaken.”13 It is unlikely that everyone is mistaken, but likely, Descartes implies, that most people are.
For Descartes as for Cicero before him, while ingenium may be held by all, it is more acutely developed in some than in others. Moreover, this acuity is likely a natural endowment rather than something that can be inculcated by instruction: even though a textbook full of rules, such as Descartes’s 1628 Regulae ad directionem ingenii (variously translated as Rules for the Direction of the Mind or, in a rather more cumbersome fashion, Rules for the Direction of the Natural Intelligence), can help a person to direct her ingenium, the sharpness or strength of the ingenium in question may well be a fixed quantity throughout that person’s life. The fact that ingenium is something not fully teachable therefore means that those of “greater ingenium” may cultivate it simply by attention to nature. Thus Descartes writes in the Rules, describing his own proposed method of learning, “Since the utility of this method is so great that, without it, the pursuit of learning would seem to be more harmful than helpful, I am easily persuaded that those of greater ingenium have already seen it in some manner—even under the guidance of nature alone.”14
The folkloric, but also very deep-seated, figure of the genius as a supernatural spirit, associated with but not identical to an individual human being, would become a common fixture of eighteenth-century treatments of the faculty that had been called, from Cicero to Descartes, ingenium. Thus Kant writes in the 1790 Critique of the Faculty of Judgment that “it is probable that the word Genie is derived from genius, that peculiar guiding and guardian spirit given to a man at his birth, from whose suggestion these original ideas proceed.”15 At the beginning of Kant’s century, in the first French translation of A Thousand and One Nights, brought out in stages between 1704 and 1717, the Arabic al-jinnī had been rendered by Antoine Galland as le génie.16 This of course will soon become the familiar “genie,” that early example of cultural appropriation, who comes out of a lamp, as a vaporous puff, and grants wishes. The shared etymology is, however, spurious: the spiritual creature of Islamic folklore shares no common origin with the genius, whose name ultimately shares the same root with such things as genes, genera, and generation. But in the early eighteenth century the terms “genius” and “génie” were unstable, teetering between signifying some particular mental capacity of an individual human being, and denoting a supernatural being that guides or intervenes in human life somewhat in the manner of an Arabic jinnī. Thus in the Theodicy of 1710 Leibniz would insist that “there is an inconceivable number of genies [génies]” inhabiting the heavenly spheres, too great in magnitude to enter into our field of perception.
In the Critique of the Faculty of Judgment, Kant identifies “genius” as “the innate mental disposition (ingenium) through which nature gives the rule to art.”17 The parenthetical Latin is Kant’s own: for him, ingenium is an innate mental disposition, but it is elevated into genius properly speaking only when it involves the rare power of nature to “give the rule to art.” That is to say, for Kant the genius artist is the person who has received a gift from nature that becomes manifest in the artist’s creations. This creates for Kant a sort of hierarchical distinction between the respective values or powers of scientists on the one hand, and of artists on the other: “In science,” he writes, “the greatest discoverer only differs in degree from his laborious imitator and pupil; but he differs specifically from him whom nature has gifted for beautiful art.”18
This distinction, in turn, would be the beginning of a celebration of “genius” in the new and unprecedented sense in which it would be understood in German romanticism: the exceptional gift of certain individuals that enables them to have creative breakthroughs, to innovate artistically, to see the world in a new way and to make new works of art in accordance with this new way of seeing. For the rationalists, reason had been the highest faculty of the human mind, and was shared equally by all human beings simply in virtue of their humanity—even if, to be sure, many human beings do not train it in the right way and never really excel as the rational beings they were, in some sense, created to be. Genius, by contrast, in the early romantic understanding of it, is a scarce resource, and there is not necessarily any reliable method of drawing it out of any individual person. You either have it or you don’t, and no instruction book could ever possibly be written to explain to you how to get it. “If you have to ask, you’ll never know,” as Louis Armstrong said of the meaning of “swing.”
Descartes had wanted to provide a method by which as many people as possible, with whatever rudiments of bon sens they may have been born, could master bodies of knowledge, as well as the right rules of inference concerning these bodies of knowledge. He did not have much concern about art as an autonomous domain of human existence that might be well suited to bring human excellence into evidence. With Kant, and all the more with the German romantic movement that will develop over the half century or so that follows his work (and often in conscious opposition to important elements of this work), art will by contrast be propelled into the center of attention. Art, moreover, will be sharply distinguished from craft or craftsmanship: that which any person with rudimentary potential could in principle be trained to produce. Art, rather, will come to have “fine art” as its exemplary, even as its sole legitimate, instance: the sort of art that only the ingenious may hope to produce, not by learning rules and following them, but rather by learning rules, and, eventually, breaking them as only a genius can. And at this point, genius, which had initially been conceived as a natural disposition to learning any rule-bound skill or science, including logic, is now set up in stark opposition to logic and identified with deep and incommunicable inner feeling. Thus in 1901, in Montana, the nineteen-year-old genius Mary MacLane will write: “If I were not so unceasingly engrossed with my sense of misery and loneliness my mind would produce beautiful, wondrous logic. I am a genius—a genius—a genius. Even after all this you may not realize that I am a genius. It is a hard thing to show. But, for myself, I feel it.”19
To master science, to learn the rules as Descartes hoped to bring people with good sense to do, is to conduct oneself with integrity, to do what is right in the right circumstances, which includes making the right inferences from the right knowledge, constructing one’s machines in the right way, and so on. But this conception of human excellence, and of the human good, is at odds with the human particularism that rationalist thinkers such as Descartes also stress; it makes the well-trained human being, the one who has mastered the rules, little different from the predictable egret or lizard, who moves now this way and now that, depending on what is happening around it, what its body needs, and so on. Descartes expresses on a number of occasions the common observation of animals that “the high degree of perfection displayed in some of their actions makes us suspect that [they] do not have free will.”20 For Descartes, an animal can have integrity but cannot have idiosyncrasy; it can achieve the excellence of the sort of being it is, only to the extent that it continues to do what we expect it to do. If it does something unexpected, this is probably because it is enraged, or dying, and even then we expect it to undergo these changes in patterned and species-specific ways. But surely there is something more that human beings might hope for, and that sets them apart? There is: idiosyncrasy, doing now this and now that, for no other reason than that it suits our exceptional individual natures. And the idiosyncrasy of the exceptional few, who do now this and now that for reasons we cannot understand, but with results we recognize as valuable, is nothing other than genius.
The shift in philosophical interest from ingenium to genius, in the sense just described, from teachable and collective science to individual accomplishments of art, from reason to inscrutable inspiration, came at a great cost: it could not pretend to offer us a general account of how human beings are and of what their potential might be. It was of necessity preoccupied with rare birds among men. Moreover, in abandoning any expectation of an explicit account of what makes great art great, or exceptional artists exceptional, abandoning any hope that the rules of great art may be stated, set down, and then followed by others, it acknowledges that what matters most in human life is something for which the reasons cannot be given. Art is, to this extent, irrational, as is the society that sets art up as a supreme good, without any expectation of understanding why it is so—the society, for example, in which a local museum jockeys and petitions for the acquisition of a Jeff Koons sculpture, which Koons himself did not make with his own hands, but which is believed nonetheless to carry, by a series of certifiable transmissions, the authentic trace of his inscrutable genius.
What Is Art?
Much of the tradition of Western “fine art,” prior to and in some cases parallel to romanticism and later movements such as expressionism and surrealism, has been devoted to tempering and dominating irrationality, to converting wild dreams into things, into physical objects and commodities. Even if the rules of these objects’ production cannot be given, they can still be rationalized to some extent as commodities with a certain monetary value, and any other sort of value can be debated by critics and viewers with greater or lesser degrees of futility.
E. R. Dodds begins The Greeks and the Irrational, to which we have already been introduced, by relating an encounter in a museum that occurred not long before the book’s publication in 1951.21 He meets a young man by chance who tells him that he has no taste for Greek art or culture because, the young man says, the Greeks had been too preoccupied with reason. At the time Dodds was writing, museums, striving for austerity, were filled with monochrome canvases and ready-made industrial products (which were not strictly speaking created by the artist to whom they are attributed, but rather, to speak with Arthur Danto, were “transfigured” by the artist).22 The museums also favored “primitive” sculptures inspired by the artistic traditions of non-European cultures—traditions that Kant would have insisted do not rise to the level of true art, “fine art,” but rather should be kept cordoned off in the lesser category of decoration or craft. The air du temps for the midcentury urban Westerner, the young man presumably felt, was preoccupied with the return of the repressed, with everything that cannot be confined within a pristine rational order.
Dodds took this encounter as the starting point for his own groundbreaking investigation of Greek culture. Is it really so easy to sum up what Greek life had been about by calling it “rational”? And has there ever, moreover, been a culture that deserves this appellation unambiguously or unqualifiedly? Or is it rather something that is of service only as a stereotype deployed at a great distance, either in space or in time, or perhaps as a conceit that one might use within a given culture by screening out everything that does not conform to it? “To a generation whose sensibilities have been trained on African and Aztec art,” Dodds writes, “and on the work of such men as Modigliani and Henry Moore, the art of the Greeks, and Greek culture in general, is apt to appear lacking in the awareness of mystery and in the ability to penetrate to the deeper, less conscious levels of human experience.”23
An important role is played, in engaging these deeper levels, by repetition. The poet Les Murray, quoted earlier, has compellingly described religion as poetry to the extent that its rituals are enacted “in loving repetition.” Or, as the German choreographer Pina Bausch has articulated her own use of repetition as the vehicle of her artistic creation: “Repetition is not repetition. The same action makes you feel something completely different by the end.”24 Here we may also recall the multiple mystical visions of Plotinus, discussed in chapter 1. Could these repetitions have failed to structure his purportedly ineffable experiences, to give them sense and form, like a choreographed dance? And is there perhaps something in the notion of repetition that can provide for us a bridge between the seemingly separate realms of art, on the one hand, and religion and ritual on the other?
When I was thirteen, I was baptized in the Catholic church. I had been the only unbaptized student in a Catholic elementary school, and it was judged at some point that I might fit in better if I were to become a member of the flock. I acquiesced, happily, and for a year or so I muttered the rosary with deep inward yearning: in loving repetition. This experience overlaps in my memory with a period of intense, ridiculous, adolescent Beatlemania. I knew all the band members’ birthdays, all their parents’ birthdays, the precise layouts of the streets of Liverpool, of Hamburg. I knew, most of all, the precise aural contours of every available recording of every Beatles song, whether canonical or bootleg. I do not remember whether the Beatles came before, or after, the Catholicism. What I remember is that they blended perfectly into one another in my fantasy life.
Now the recordings, though I played them back in loving repetition, were not, strictly speaking, repeated. They were each performed only once, in a studio, at some point in the 1960s, before I was born. Perhaps these singular performances involved tracks, and so multiple recordings of different elements, but in any case the whole production of the authoritative version was completed in a finite, no doubt very short, series of steps. What was produced was what Nelson Goodman would call an “allographic” artwork: a work that can be fully experienced even if the thing itself remains remote, even if the thing itself is in its essence unlocatable.25 My copy of the White Album, scooped up at a San Francisco garage sale from some kindly hippie, repairing his Volkswagen bus, circa 1985, cannot in any sense be said to be the work of art itself, and yet I have experienced the White Album as fully as anyone has, simply by bringing this copy home and putting it on the record player and listening to it: in loving repetition. The recording of that album fixed and eternalized a number of contingencies, a number of things that could just as well not have happened, some words muttered, George Harrison’s fingers staying on the strings a microsecond too long and generating that superfluous but not unpleasant string noise for which there is surely a term. These contingencies become canonical. They are awaited lovingly by the knowing listener. They arrive as expected, and they reconfirm the aesthetic order of the world.
We know that a number of the world’s most glorious works of epic poetry, including Homeric epic, began as traditions of oral recitation, presumably involving some degree of rhythmic articulation, and perhaps also inflections of the voice’s pitch and timbre. In this respect, literature and music are really only different trajectories of the same deeper aesthetic activity: a repetition that reconfirms, or reestablishes, or perhaps re-creates, the order of the world. To be invested in this repetition aesthetically, following Murray, is nothing other than love itself. The Yakut heroic epos, the Olonkho, is considered to be the urtext of pre-Islamic Turkic mythology, preserved across the centuries in the oral tradition of northeastern Siberia. It speaks of snow, and reindeer, and human beings, and ancestors, and the transcendent cause of all of this. When reading it, I envision an expert raconteur, someone who relates the Olonkho with a degree of mastery comparable to the mastery we recognize as involved in conducting the Ring Cycle or playing Othello. What one would particularly relish, it is easy to imagine, in experiencing the recitation directly and intimately, would be the variety of deviations, and the way the master raconteur controls the deviations for such-and-such desired effect. “Here comes that part where he’s going to make a bear-grunt noise!” the Yakut adolescent might think to herself. And then it comes, and it is slightly different from the last time, yet perfectly, satisfyingly different. The repetitions are irreducibly social, variable yet constant (unlike the recording of George’s fingers on the string, they are a little bit different each time, and yet the same), and mediated through a figure who in turn mediates between the human and the transhuman spheres of existence.
It is an unusual experience when the repetition can be experienced, as I experienced the Beatles’ music, both in a way that is not directly social, at home with headphones on in front of a record player, and in a way that involves total invariability from one “performance” to the next. My experience of the Catholic faith was also somewhat unusual: it consisted almost entirely in private mutterings of memorized prayers, in a way that remained almost completely oblivious to the existence of the church, the coming together of two or more people that in turn calls God to presence as well. But these obsessive compulsions, like the socially mediated recitation of epic, or like technologically mediated communion with godlike pop stars through recorded tokens of their canonical creations, are all, as already suggested, the work of love, or at least of some passion that feels a lot like love when it is being experienced. Let us just call it love. This love seems to send a person straight outside of himself. But since this cannot really happen, since we all in fact stay right where we are, the sweet irrational ecstasy arrives in the next best way possible: through a cycling back, again and again, to the syllables and sounds that order the world, and that may give some hint of its true cause and nature.
The essence of much art lies in repetition of the sort I have just described, and to this extent it is at least a close relative of ritual. Yet for more than a century now art has been principally preoccupied not with the eternal cycling back of the same, but with the perpetual, forward-marching innovation of the new. The irrational, as expressed in modern art, has been not only about mystery and the unconscious, but also about transgression, so much so that critics have often written as if transgression is an essential feature of modern art. Kieran Cashell describes the pressure on critics to conform to this view: “Either support transgression unconditionally or condemn the tendency and risk obsolescence amid suspicions of critical conservatism.”26 A small but significant portion of the transgressive artworks of the past several decades have incorporated violence, not an exploration of the theme of violence, but actual violence, enacted by the artist either against him- or herself, or against animals, or, in some rare cases, against unwitting spectators. Perhaps nothing signals transgression in art quite so easily as splattered blood, even if some who have used it, such as the Viennese Actionist Hermann Nitsch, insist that they are attempting to return to something ritualistic and archaic, rather than aspiring to the singular and cutting-edge.27
What, in the end, are such ritualized displays of transgression getting at? I spent a good part of my youth, within a few years of the remission of my Beatlemania, attending the concerts of bands considerably harder than the Beatles at their hardest, bands that made “Helter Skelter” sound like a lullaby. To thrash about under the spell of such music, we ordinarily assume, and I certainly assumed while in flagrant delectation, is the very opposite of submission to authority, to top-down diktats from the state, the church, the military, or the family as to how we ought to be conducting ourselves. But the opposition is not so clear: the revelry, while anarchic, is occurring in front of, and usually somewhat beneath, a band. The revelers are not bowing down to or worshipping the band, but nor is what they are doing an entirely different species of activity from a church service or a mass rally.
How exactly the one sort of social phenomenon morphs into the other, how individuals move from an ebullient expression of their individuality to an ecstatic transcendence of the self in a supercharged collectivity, is both complicated and crucially important for our understanding of the social manifestations of irrationality. We know that many young people who have enjoyed freaking out and dancing alone under the influence of psychedelic drugs and music have not long after found themselves under the influence of enigmatic and psychopathic cult leaders. Roughly half of the anarchist punks I knew in my adolescence, who listened to bands with names like Social Distortion and the Dead Kennedys—and, with seething irony, one that was called Reagan Youth—are now sincere, utterly unironic Trump supporters. It is but one small step from free-spirited anarchy to a statist-nationalist cult of personality. The last few centuries reveal that individual transcendence cannot exist for long without being followed by a reabsorption into the collectivity, that the two mutually imply each other. The middle of the twentieth century, in particular, seems to have witnessed an intense acceleration of the alternations between the two. The most anarchic and individualistic expressions of avant-garde art in Weimar Germany or in the early Soviet Union quickly gave way to a wave of conformism in which the aims of art were subordinated to the aims of mass politics. Many avant-garde artists themselves, such as the Italian futurists, were eager to sign up for mass movements that submerged their own individuality and that subordinated them to ironfisted rulers.
In the seventeenth century Descartes had sought to banish dreams, to assure himself that the hallucinations of sleep play no part in who he really is, that they are inevitable, but must also be contained and, to the extent possible, forgotten. Freud, by contrast, would maintain that our logical reasoning and clear and distinct perceptions are just a fragile wrapping around the true self that is always bubbling and fermenting underneath, and that is made up, precisely, of dreams, of largely forgotten memories and passions. The Jesuit missionary in New France had worried, recall, that the Iroquois chief’s access to his own unconscious, his penchant for taking the content of his dreams as instructions for action, would lead him to act out violently against his missionary guest, perhaps to kill him. The unconscious is not subject to the sort of moral regulation that governs our conscious life. In our society we often suppose that morality consists precisely in keeping a check on the forms of transgression that our unconscious enables us or compels us to imagine. In this regard it would seem natural to suppose that a society in which action is based on the translation of dreams into reality could not possibly be rational or moral: it would involve constant, chaotic transgression. In societies such as those of the twentieth-century West, mystery and the deeper levels of human experience are allowed to seep into public life in the form of sculpture and painting and music, but ordinarily not in the form of direct enactment of the sorts of moral transgression that might be depicted or described in art. This may seem like a healthy compromise, an effective way of deploying a release valve for the unconscious, to return to this common metaphor, without making too great a mess of things.
But this would almost certainly amount to hasty self-congratulation. The fact that violence is strictly morally prohibited in civilian life within our society—that it is monopolized by the state and relegated to the fantasy lives of the state’s subjects—does not seem conclusively to mean that there is less overall real violence in our society than in those, such as the Iroquois, in which violence is more fluidly integrated and processed through ritual. Increasingly, the work of war is being transferred to an ever smaller number of people, responsible for ever more powerful war machines. It is science that has kissed awake the new powers of these machines, and this over the course of the same period of history in which the enduring violence of our dreams has been ever more effectively cordoned off from the sphere of real action: restrained within the safe spaces of museums, or, in its lower-brow iterations, packaged as the harmless entertainments of the big and small screens, entertainments that until recently were often said to have been made in “dream factories.” The violence sometimes leaps off the screens—it bleeds out, like obscenity out of quotation marks, into the real world—but for the most part we find that the arrangement works fairly well: science and technology monopolize real violence, congealed into missiles and drones that are held to be more effective the less often they are used; while art has unfettered access to fantasy violence, and may do with it whatever it wishes, as long as no one (or no human) gets hurt.
To the extent possible, we try to see these, too, as nonoverlapping magisteria. We try not to think at all about the unimaginable violence hanging over our heads at all times; and we try to see the imaginative violence on our screens and in our art museums as belonging to an entirely different metaphysical and moral order: the one that is possible but not actual, the one that is just for fun. The safe space of unreason.
The Two Magisteria
In “Jubilate Agno,” the poem he composed between 1759 and 1763 while locked away in a mental asylum with his cat Jeoffry, Christopher Smart would write of this faithful companion:
For by stroking him I have found out electricity.
For I perceived God’s light about him both wax and fire.
For the Electrical fire is the spiritual substance, which God sends from heaven to sustain the bodies of man and beast.28
What is electricity? For many, in the years before it was harnessed and transformed into a central part of our everyday lives, it was a spirituous substance and a sign of God’s power. More than a century after Smart, in his 1885 “Dissertation on Monads,” the Canadian Métis resistance fighter and mystic Louis Riel, evidently recalling the lessons on Leibniz’s philosophy he must have had decades earlier while at the Sulpician college of Montreal, would write from his Saskatchewan jail cell, awaiting execution: “A monad is an electricity [sic].”29 Over the course of the twentieth century, in turn, electricity would be taken away from the raving visionaries, and transformed into something for the use of which we receive a burdensome bill each month. It would be normalized, deprived, so to speak, of its charge of strangeness, and of all the interest it had previously had for those who position themselves outside the mainstream.
It is generally clear only in hindsight where the line in fact lies between science and its embarrassing cousin pseudoscience. Karl Popper had believed in the mid-twentieth century that the two could be sharply demarcated from one another by the criterion of falsifiability: if a proposition cannot in principle be falsified, then it is not a scientific proposition.30 But it is often impossible to know in advance how to class different propositions according to this criterion, and for this reason other prominent philosophers of science, notably Larry Laudan, have argued that the demarcation question “is both uninteresting and, judging by its checkered past, intractable.”31 More recently, Massimo Pigliucci has compellingly argued, in turn, that Laudan’s eulogy was premature. As Pigliucci notes, most of us think we know pseudoscience when we see it. It is perhaps only inevitable that we should continue to try to find a rigorous definition of it, even if none is forthcoming.32 We will return to explore pseudoscience in more detail in the next chapter. For the moment what interests us is the way in which natural forces, such as electricity, can themselves move, from one moment of history to another, across the demarcation line between the supernatural and the natural. This is a motion that often maps, at least partially, onto the transition from the realm of the creative and imaginative arts to the realm of sober science.
In the seventeenth century many who denied the reality of action at a distance or other varieties of sympathetic power were seeking to advance a clockwork model of nature, with every motion of a body explainable by the fact that some other body has directly imparted its motion to it, just as a gear in a clock moves only because it is pushed along in its rotation by another gear. Naturally, on such a model, miracles and wonders, God’s delivery of anything as a sign or as a gift, also had no place. But many who sought to deny miracles did so not because they wished to deny God’s infinite power, as it might have seemed to a medieval theologian some centuries earlier, but because, as they argued, it would in fact serve to diminish this power if one were to maintain that God had created a natural order in need of periodic interruptions. How much more worthy of exaltation would be a God who had set up the order of nature so perfectly at the beginning that any subsequent interventions could only cast into doubt nature’s status as a monument to his perfection. Yet however sincere the theological motivations behind this new theological noninterventionism may have been, it brought with it the threat of the Deus absconditus: a God who is no longer needed, whose entire responsibility is wrapped up in the Creation, and who, once he has established the initial conditions of things, along with the unchanging laws of nature, is free to simply abscond.
To some extent, this is a problem that reaches back to antiquity: in order to properly exalt God, many theologically oriented philosophers found it fitting to push him out of the ordinary affairs of the world. Some ancient thinkers, including Aristotle, supposed that God must be so great as not to be at all dependent on the existence of the world, and that if he were even so much as to think about the world at all, then this would already amount to a sort of dependence. But in the modern period, with the long legacy of Christianity, it was no longer nearly so easy as it had been for Aristotle and other Greek philosophers to imagine God as indifferent to the human world: after all, the Christian God is supposed to have created this world for human beings, and, however complicated the relationship has been, to love them as well. No modern Christian philosopher, no matter how radical, could simply deny that God has a relationship to the world at all. For this reason it would prove far more difficult than it had been for the Greeks to strike the delicate balance between the exaltation of God and his elimination.
Nor was sincerity in one’s faith necessarily enough to avoid contributing to the emergence of a naturalistic model of the world in which God would ultimately prove otiose and superfluous. No one was more adamant than Robert Boyle, the English experimental philosopher of the late seventeenth century, that nature may be exhaustively explained as a clockwork. Boyle also insisted that there is no form of life more in keeping with Christian piety than the one devoted to conducting the sort of experiments that reveal how what appears miraculous in nature may in fact be explained in terms of previously undiscerned regularities and undiscovered laws of nature. Boyle is in this respect a world away from a libertine thinker such as Pierre Gassendi, who wishes to account for eclipses, for example, in terms of the ordinary and predictable orbits of celestial bodies, and who takes the possibility of their naturalistic explanation as evidence for the superstitiousness of religious belief in general. For Gassendi, naturalism is a weapon against faith, while for Boyle it is the means to arrive at a more worthy faith.
According to the French historian Paul Hazard, invoked in the introduction, it is in the period between 1680 and 1715 that what we might call “Boyle’s program”—the attempt to harness the rational explanation of nature’s regularity in the service of religious faith—will prove impossible to sustain.33 Increasingly over the following centuries, such rational explanation will be seen, by its defenders and detractors alike, as overtly hostile to faith, and faith will in turn often retreat into irrationalism, where it either denies the legitimacy of science or attempts unconvincingly to beat science at its own game. By the late nineteenth century in Europe, the rift will have become so pronounced as to obscure from view earlier attempts to hold these domains together. This will be the era in which science, as embodied by the figure of the scientist, becomes the autonomous and authoritative domain of culture, as we continue to understand it, more or less, today. William Whewell would coin the term “scientist,” on analogy to “artist,” only as recently as 1834,34 prior to which the preferred term for a person who does what a scientist does would have been “philosopher” or “natural philosopher” or “naturalist.” It is only in the mid-1860s that the term begins to be used with any significant frequency.
In the 1880s, the Eiffel Tower was built in Paris, not, as many now groundlessly imagine, as a monument to all that is sensual and seductive, but rather as a feat of engineering in celebration of the French men of science who had contributed to the glory of the Third Republic, and whose names are inscribed in enormous letters around the base of the structure.35 This is just one of many such triumphalist erections that were appearing throughout the world’s capital cities around the same time, and that by no means celebrated romance. Rather, the romantic vision of the world was positively screened out and ignored in the engineers’ and the architects’ pursuit of a solid, steel-framed future. There was by now nothing “poetic” about such projects, in the sense of poiesis invoked earlier. Poiesis had been relegated to the second-tier status of mere writers, creators of merely possible worlds, while the work of the wrighter, so to speak, of the builder or maker, was the creation of a new reality here in our actual world. At the top of his structure Gustave Eiffel included a meteorological observatory and a laboratory for the study of radio waves and other physical phenomena. This elevated station doubled as his apartments, where he would also receive Thomas Edison during the American inventor’s visits to Paris. Not long before, Jules Verne had been writing a new sort of science fiction, including the 1865 novel From the Earth to the Moon,36 in which he not only envisions a lunar transit, a stock feature of literature since Lucian of Samosata wrote the True History in the second century CE, but also attempts to give a plausible account of how such a transit might actually take place in the near future. Verne is, in his way, drawing the stars down to earth, claiming the fiction of the possible—which had previously been under the chaotic reign of the faculty of the imagination—for the faculty of reason.
In Verne’s and Eiffel’s lifetimes the ideal personage of the scientist was taking shape, and only then was philosophy, for its part, forced to split into two camps. There were those who found this new figure of the scientist impressive and longed to share in his new cultural cachet. Others, by contrast, found his purview—that of building on, improving upon, and channeling the forces of the natural world, hacking through nature’s thorns to kiss awake new powers, in James Merrill’s words—inadequate for the central task of philosophy as it had been understood by one prominent strain of thinkers since antiquity: that of understanding ourselves, our interiority, and the gap between what we experience in our inner lives and what the natural world will permit to be actualized or known.
This was not a strict bifurcation: not every philosopher felt compelled to take sides, and it is not always easy to determine in retrospect on what side of the divide a given thinker fell. Yet there are some for whom there can be no question: for example, Nietzsche, who writes ecstatically in The Gay Science of 1882 that “the wildly beautiful irrationality of poetry refutes you, you utilitarians!”37 The German thinker absolutely rejected the vision of the philosopher as sharing a common cause with the scientist, who, again, at the time Nietzsche was writing, had been around in his present incarnation for only a few decades. Contemporary poets, meanwhile, whom Nietzsche praised for their wildly beautiful irrationality, increasingly touted the power and virtue of madness, childlikeness, and dream states in the creation of their art. Charles Baudelaire explicitly described the poem as a form of thought that occurs “musically and pictorially, without quibbling, without syllogisms, without deduction.”38 Thus for him poetry is a rejection of philosophy as it had been traditionally understood, and if a philosopher wishes to share in the generative force of the poet’s unreason, he must by the same token reject the recognized tools of his discipline.
These distinctions, again, are not always clear, nor are they distinctions that the figures involved would necessarily have recognized as pertaining to themselves. One particularly important movement that had its greatest prominence in the eighteenth century, but that also weaves in complex ways in and out of the work of nineteenth-century figures, including that of Nietzsche, is Hellenism: a variety of neoclassicism, resistant to definition, but which may at least be said not to place poetry and reason in stark opposition to one another. Hellenism, as the English poet and critic Matthew Arnold understood it,39 is opposed to the strict subordination of individuals to rules that cramp and stunt the free expression of the spirit. As such it is a movement based on spontaneity, but this spontaneity is conceived quite differently from anarchy or from behaving in just any disordered way whatsoever. Rather, the product of spontaneous action, when it issues from the spirit of true artistic talent, is one that is in harmony with the natural and transcendental order. It is not chaotic or satanic, which would be rather closer to what Baudelaire was after, but nor is it rule-bound and pious.
Nietzsche’s work comes late in the history of modern Hellenism, and may be said both to mark its crisis and also to offer its swan song. He was by training a philologist, raised up to be a dusty scholar sequestered in a dimly lit library, doing the hard work of reconstructing our civilization’s origins from the textual traces of a long past world. This sort of work was in the nineteenth century the very ideal of scholarship, the very reason for being of a humanistic education, and, very much unlike today, it was a central part of what went on in universities. But what Nietzsche drew from his reading of the Greeks was not at all a continuity of civilizations, and not even a history of decline. It was, rather, something closer to an absolute rupture, whereby the things the Greeks valued can mostly no longer be detected by us—even if we learn their language and believe we are familiar with their world—for these things have simply become too strange and foreign. And these things are not, as the more conservative Hellenists had believed, the gifts of reason, order, geometry, and so on; they are expressions of extreme unreason, of the sort that in the following century E. R. Dodds would more thoroughly excavate: Dionysianism, ecstasies, transgressions without guilt. Nietzsche remained rooted in the Greek world, but did not at all see that world as extending to us a torch to illuminate the pursuit of our shared values of order, perfection, and invention.
A half century before Nietzsche, Johann Wolfgang von Goethe was busy forging the modern German literary spirit, drawing on the neoclassicism that was at its apex during his lifetime, but also cultivating a model of the modern intellectual as one who is at home in the realm of the sensual, and who sees no strong opposition between sensuality and reason, or between imagination and understanding, or romanticism and science. For Goethe, however, acknowledgment of the ineliminability of sensuality, even in domains of human experience that are held to be primarily cognitive, does not at all amount to a compromise with romanticism. In fact he believes that it is only in the modern period that sense and reason have been artificially separated, and that classicism gives us everything we need to bring them back together. Thus his judgment of the two prevailing intellectual currents in his era is clear and decisive: “What is classical is healthy,” he writes, “what is romantic is sick.”40
Goethe’s own contributions in science, particularly in botany and in the study of color perception, reveal a path not taken, one that had its moment in the early decades of the nineteenth century, just before the first appearance of Whewell’s “scientist” and the congealing, over the remainder of the century, of the ideal type of the scientific practitioner: a cold, detached, and unfeeling experimenter, an ultimate authority to turn to for answers to questions that the common run of people, too attached to their passions, are unable to come by on their own. This type would prevail until the late twentieth century, only to be replaced in turn by the latest iteration of the figure of the scientist: the more or less unreflective technician, able to fulfill all of his or her job duties without appeal to abstract concepts of any sort, and principally accomplished in the art of winning grant money for his or her home institution.
Goethe envisioned a practice of science that would not exclude the role of the emotions in the discernment of basic truths about the natural world. In the domains of natural science that interested him most, botany and optics, qualitative description is ineliminable. In his monumental dramatic poem, Faust, composed in drafts over several decades beginning in the early 1770s, Goethe also remained aware of the deep, mythical association of scientific discovery with natural magic: with probing into the forces we would do best to ignore, and unleashing them into the world, against our better inclinations, and against piety. Goethe did not himself see scientific discovery in this way, as a pact with the devil, but he did think that this deep-seated understanding of it was important enough to be taken seriously, to be reflected on. And he thought that the best alternative model of scientific inquiry would be one that does not simply sweep the old Mephistophelean view aside, but rather modulates it through a humane, sensual cultivation of the scientific life as the locus of a new sort of virtue.
Goethean science lost out, of course, to the point that his vision of what science is or ought to be, just two hundred years or so after the fact, is barely recognizable to most people. The reason for the dramatic rupture in the late nineteenth century, between the romantics and the scientists, between—to speak perhaps overly emblematically—the absinthe bar and the Eiffel Tower, had much to do with this loss, and with the sudden sense that one must choose a side: that science is not about feeling or sensuality, and in turn that poetry is not about insight into the harmonious or rational order of the external world, but rather about laying bare the dark and disordered depths of one’s own internal world. This rupture brought about, among other things, a stark radicalization of tendencies in both science and poetry, which were not exactly new in the mid-nineteenth century, but had not until that point come to dominate so fully as the respective spirits of these two basic human activities. Science was now the home of reason; poetry, and art, and the exercise of the imagination more generally, of unreason. Both of these spheres of human life continue to hobble along today, injured by the violence of their separation.