On 11 May 1960, at six-thirty in the evening, Ricardo Klement got down as usual from the bus which brought him home from work at the Mercedes-Benz factory in the Suarez suburb of Buenos Aires. A moment later he was seized by three men and in less than a minute forced into a waiting car, which took him to a rented house in another suburb. Asked who he was, he replied instantly, ‘Ich bin Adolf Eichmann,’ adding, ‘I know I am in the hands of the Israelis.’ The Israeli Secret Service had had ‘Klement’ under surveillance for some time, the culmination of a determined effort on the part of the new nation to ensure that the crimes of World War II would not be forgotten or forgiven. After his capture, Eichmann was held in secret in Buenos Aires for nine days until he could be smuggled out of Argentina on an El Al airliner and flown to Jerusalem. On 23 May, Prime Minister David Ben-Gurion announced to cheers in the Jerusalem parliament that Eichmann had arrived on Israeli soil that morning. Eleven months later, Eichmann was brought to trial in the District Court of Jerusalem, accused on fifteen counts that, ‘together with others,’ he had committed crimes against the Jewish people, and against humanity.1
Among the scores of people covering the trial was Hannah Arendt, who was there on behalf of the New Yorker magazine and whose articles, published later as a book, caused a storm of controversy.2 The offence arose from the subtitle of her account, ‘A Report on the Banality of Evil,’ a phrase that became famous. Her central argument was that although Eichmann had done monstrous things, or been present when monstrous things had been done to the Jews, he was not himself a monster in the accepted sense of the word. She maintained that no court in Israel – nor any court – had ever had to deal with someone like Eichmann. His was a crime that was on no statute book. In particular, Arendt was fascinated by Eichmann’s conscience. It wasn’t true to say that he didn’t have one: handed a copy of Lolita to read in his cell during the trial, he handed it back unfinished. ‘Das ist aber ein sehr unerfreuliches Buch,’ he told his guard; ‘Quite an unwholesome book.’3 But Arendt reported that throughout the trial, although Eichmann calmly admitted what he had done, and although he knew somewhere inside him that what had been done had been wrong, he did not feel guilty. He said he had moved in a world where no one questioned the final solution, where no one had ever condemned him. He had obeyed orders; that was all there was to it. ‘The postwar notion of open disobedience was a fairy tale: “Under the circumstances such behaviour was impossible. Nobody acted that way.” It was “unthinkable.” ’4 Some atrocities he helped to commit were done to advance his career.
Arendt caused offence on two grounds.5 She highlighted that many Jews had gone to their deaths without rebellion, not willingly exactly but in acquiescence; and many of her critics felt that in denying that Eichmann was a monster, she was diminishing and demeaning the significance of the Holocaust. This second criticism was far from the truth. If anything, Arendt’s picture of Eichmann, consoling himself with clichés, querying why the trial was being prolonged – because the Israelis already had enough evidence to hang him several times over – only made what Eichmann had done more horrendous. But she wrote as she found, reporting that he went to the gallows with great dignity, after drinking half a bottle of red wine (leaving the other half) and refusing the help of a Protestant minister. Even there, however, he was still mouthing platitudes. The ‘grotesque silliness’ of his last words, Arendt said, proved more than ever the ‘word-and-thought-defying banality of evil.’6
Despite the immediate response to Arendt’s report, her book is now a classic.7 At this distance her analysis, correct in an important way, is easier to accept. One aspect of Arendt’s report went unremarked, however, though it was not insignificant. It was written in English, for the New Yorker. Like many intellectual emigrés, Arendt had not returned to Germany after the war, at least not to live. The mass emigration of intellectual talent in the 1930s, the bulk of which entered the United States, infused and transformed all aspects of American life in the postwar world, and had become very clear by the early 1960s, when Eichmann in Jerusalem appeared. It coloured everything from music to mathematics, and from chemistry to choreography, but it was all-important in three areas: psychoanalysis, physics, and art.
After some early hesitation, America proved a more hospitable host to psychoanalytic ideas than, say, Britain, France, or Italy. Psychoanalytic institutes were founded in the 1930s in New York, Boston, and Chicago. At that time American psychiatry was less organically oriented than its European counterparts, and Americans were traditionally more indulgent toward their children, as referred to earlier. This made them more open to ideas linking childhood experience and adult character.
Assistance to refugee analysts was organised very early in the United States, and although numbers were not large in real terms (about 190, according to one estimate), the people helped were extremely influential. Karen Horney, Erich Fromm, and Herbert Marcuse have already been mentioned, but other well known analyst-emigrés included Franz Alexander, Helene Deutsch, Ernst Simmel, Otto Fenichel, Theodor Reik, and Hanns Sachs, one of the ‘Seven Rings,’ early colleagues of Freud pledged to develop and defend psychoanalysis, and given a ring by him to symbolise that dedication.8 The reception of psychoanalysis was further aided by the psychiatric problems that came to light in America in World War II. According to official figures, in the period 1942–5 some 1,850,000 men were rejected for military service for psychiatric reasons, 38 percent of all rejections. As of 31 December 1946, 54 percent of all patients in veterans’ hospitals were being treated for neuropsychiatric disorders.
The other two most influential emigré psychoanalysts in America after World War II were Erik Erikson and Bruno Bettelheim. Erikson was Freud’s last pupil in Vienna. Despite his Danish name, he was a north German, who arrived in America in 1933 when he was barely thirty-one and worked in a mental hospital in Boston. Trained as a lay therapist (America was also less bothered by the absence of medical degrees for psychoanalysts than Europe was), Erikson gradually developed his theory, in Childhood and Society (1950), that adolescents go through an ‘identity crisis’ and that how they deal with this is what matters, determining their adult character, rather than any Freudian experience in childhood.9 Erikson’s idea proved extremely popular in the 1950s and 1960s, with the advent of the first really affluent adolescent ‘other-directed’ generation. So too did his idea that whereas hysteria may have been the central neurosis in Freud’s Vienna, in postwar America it was narcissism, by which he meant a profound concern with one’s own psychological development, especially in a world where religion was, for many people, effectively dead.10 Bruno Bettelheim was another lay analyst, who began life as an aesthetician and arrived in America from Vienna, via a concentration camp. The account he derived from those experiences, Individual and Mass Behavior in Extreme Situations, was so vivid that General Eisenhower made it required reading for members of the military government in Europe.11 After the war, Bettelheim became well known for his technique for helping autistic children, described in his book The Empty Fortress.12 The two works were related because Bettelheim had seen people reduced to an ‘autistic’ state in the camps, and felt that children could therefore be helped by treatment that, in effect, sought to reverse the experience.13 Bettelheim claimed up to 80 percent success with his method, though doubt was cast on his claims later in the century.14
In America, psychoanalysis became a much more optimistic set of doctrines than it had been in Europe. It embodied the view that there were moves individuals could make to help themselves, to rectify what was wrong with their psychological situation in life. This was very different from the European view, that social class did much to determine one’s position in society, and that individuals were less able to change their situation without more widespread societal change.
Two matters divided physicists in the wake of World War II. There was first the development of the hydrogen bomb. The Manhattan Project had been a collaborative venture, with scientists from Britain, Denmark, Italy, and elsewhere joining the Americans. But it was undoubtedly led by Americans, and almost entirely paid for by them. Given that, and the fact that Germany was occupied and Britain, France, Austria, and Italy were wrecked by six years of war, fought on their soil, it was no surprise that the United States should assume the lead in this branch of research. Göttingen was denuded; Copenhagen had been forced to give up its place as a centre for international scholars; and in Cambridge, England, the Cavendish population had been dispersed and was changing emphasis toward molecular biology, a very fruitful manoeuvre. In the years after the war, four nuclear scientists who migrated to America were awarded the Nobel Prize, adding immeasurably to the prestige of American science: Felix Bloch in 1952, Emilio Segrè in 1959, and Maria Mayer and Eugene Wigner in 1963. The Atomic Energy Act of 1954 established its own prize, quickly renamed after its first winner, Enrico Fermi, and that too was won by five emigrés before 1963: Fermi, John von Neumann, Eugene Wigner, Hans Bethe, and Edward Teller. Alongside three native American winners – Ernest Lawrence, Glenn Seaborg, and Robert Oppenheimer – these prize-winners emphasised the progress in physics in the United States.
Many of these men (and a few women) were prominent in the ‘movement of atomic scientists,’ whose aim was to shape public thinking about the atomic age, and which issued its own Bulletin of the Atomic Scientists, for discussion of these issues. The Bulletin had a celebrated logo, a clock set at a few minutes to midnight, the hands being moved forward and back, according to how near the editors thought the world was to apocalypse. Scientists such as Oppenheimer, Fermi, and Bethe left the Manhattan Project after the war, preferring not to work on arms during peacetime. Edward Teller, however, had been interested in a hydrogen bomb ever since Fermi had posed a question over lunch in 1942: Once an atomic bomb was developed, could the explosion be used to initiate something similar to the thermonuclear reactions going on inside the sun? The news, in September 1949, that Russia had successfully exploded an atomic bomb caused a lot of soul-searching among certain physicists. The Atomic Energy Commission decided to ask its advisory committee, chaired by Oppenheimer, for an opinion. That committee unanimously decided that the United States should not take the initiative, but feelings ran high, summed up best by Fermi, whose view had changed over time. He thought that the new bomb should be outlawed before it was born – and yet he conceded, in the Cold War atmosphere then prevailing, that no such agreement would be possible; ‘Failing that, one should with considerable regret go ahead.’15 The agonising continued, but in January 1950 Klaus Fuchs in England confessed that while working at Los Alamos he had passed information to Communist agents. Four days after the confession, President Truman took the decision away from the scientists and gave the go-ahead for an American H-bomb project.
The essence of the hydrogen bomb was that when an atomic bomb exploded in association with deuterium or tritium, it would produce temperatures never seen on earth, which would fuse two deuterium nuclei together and simultaneously release binding energy in vast amounts. Early calculations had shown that such a device could produce an explosion equivalent to 100 million tons of TNT and cause damage across 3,000 square miles. (For comparison, the amount of explosives used in World War II was about 3 million tons.)16 The world’s first thermonuclear device – a hydrogen bomb – was tested on 1 November 1952, on the small Pacific island of Elugelab. Observers forty miles away saw millions of gallons of seawater turned to steam, appearing as a giant bubble, and the fireball expanded to three miles across. When the explosion was over, the entire island of Elugelab had disappeared, vaporised. The bomb had delivered the equivalent of 10.4 million tons of TNT, roughly a thousand times more violent than the bomb dropped on Hiroshima. Edward Teller sent a telegram to a colleague, using code: ‘It’s a boy.’ His metaphor was not lacking in unconscious irony. The Soviet Union exploded its own device nine months later.17
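The scale comparisons in the paragraph above reduce to simple arithmetic. The sketch below uses the two figures given in the text (the 3-million-ton World War II total and the 10.4-million-ton 1952 test); the Hiroshima yield of roughly 15,000 tons is an assumption, being a commonly cited figure that the text itself does not state:

```python
# All quantities in tons of TNT equivalent.
WWII_TOTAL = 3_000_000   # all explosives used in World War II (from the text)
ELUGELAB = 10_400_000    # the 1 November 1952 thermonuclear test (from the text)
HIROSHIMA = 15_000       # commonly cited Hiroshima yield; an assumption, not in the text

# The single 1952 device exceeded the entire explosive expenditure of World War II.
print(ELUGELAB / WWII_TOTAL)        # a factor of about 3.5

# Against Hiroshima, the ratio is on the order of a thousand,
# which is the sense of 'roughly a thousand times more violent.'
print(round(ELUGELAB / HIROSHIMA))
```

With the assumed 15-kiloton figure the second ratio comes out at about 700; the text's 'a thousand times' holds as an order of magnitude.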
But after World War II ended, most physicists were anxious to get back to ‘normal’ work. Quite what normal work was now was settled at two big physics conferences, one at Shelter Island, off the coast of Long Island, near New York, in June 1947, and the other at Rochester, upstate New York, in 1956.
The high point of the Shelter Island conference was a report by Willis Lamb that presented evidence of small variations in the energy of hydrogen atoms that should not exist if Paul Dirac’s equations linking relativity and quantum mechanics were absolutely correct. This ‘Lamb shift’ produced a revised mathematical account, quantum electrodynamics (QED), which scientists pride themselves on as the ‘most accurate theory in physics.’18 In the same year as the conference, mathematically and physically trained cosmologists and astronomers began studying cosmic rays arriving on Earth from space and discovered new subatomic particles that did not behave exactly as predicted – for example, they did not decay into other particles as fast as they should have done. This anomaly gave rise to the next phase of particle physics, which has dominated the last half of the century, an amalgam of physics, maths, chemistry, astronomy, and – strange as it may seem – history. Its two achievements are an understanding of how the universe formed, how and in which order the elements came into being; and a systematic classification of particles even more basic than electrons, protons, and neutrons.
The study of elementary particles quickly leads back in time, to the very beginning of the universe. The ‘Big Bang’ theory of the origin of the universe began in the 1920s, with the work of Georges Lemaître and Edwin Hubble. Following the Shelter Island conference, in 1948, two Austrian emigrés in Britain, Hermann Bondi and Thomas Gold, together with Fred Hoyle, a professor at Cambridge, advanced a rival ‘steady state’ theory, which envisaged matter being quietly formed throughout the universe, in localised ‘energetic events.’ This was never taken seriously by more than a few scientists, especially as in the same year George Gamow, a Russian who had defected to the United States in the 1930s, presented new calculations showing how nuclear interactions taking place in the early moments of the fireball that created the expanding universe could have converted hydrogen into helium, explaining the proportions of these elements in very old stars. Gamow also said that there should be evidence of the initial explosion in the form of background radiation, at a low level of intensity, to be picked up wherever one looked for it in the universe.19
Gamow’s theories, especially his chapter on ‘The Private Life of Stars,’ helped initiate a massive interest among physicists in ‘nucleosynthesis,’ the ways in which the heavier elements are built up from hydrogen, the lightest element, and the role played by the various forms of elementary particles. This is where the study of cosmic rays came in. Almost none of the new particles discovered since World War II exists naturally on earth, and they could only be studied by accelerating naturally occurring particles to make them collide with others, in particle accelerators and cyclotrons. These were very large, very expensive pieces of equipment, and this too was one reason why ‘Big Science’ flourished most in America – not only was it ahead intellectually, but America more than elsewhere had the appetite and the wherewithal to fund such ambition. Hundreds of particles were discovered in the decade following the Shelter Island conference, but three stand out. The particles that did not behave as they should have done under the earlier theories were christened ‘strange’ by Murray Gell-Mann at Caltech in 1953 (the first example of a fashion for whimsical names for entities in physics).20 It was various aspects of strangeness that came under scrutiny at the second physics conference in Rochester in 1956. These notions of strangeness were brought together by Gell-Mann in 1961 into a classification scheme for particles, reminiscent of the periodic table, and which he called, maintaining the whimsy, ‘The Eight-Fold Way.’ The Eight-Fold Way was based on mathematics rather than observation, and in 1962 mathematics led Gell-Mann (and almost simultaneously, George Zweig) to introduce the concept of the ‘quark,’ a particle more elementary still than electrons, and from which all known matter is made. (Zweig called them ‘aces’ but ‘quark’ stuck. Their existence was not confirmed experimentally until 1977.) 
Quarks came in six varieties, and were given entirely arbitrary names such as ‘up,’ ‘down,’ or ‘charmed.’21 They had electrical charges that were fractions – plus or minus one-third or two-thirds of the charge on an electron – and it was this fractional charge that was so significant, further reducing the building blocks of nature. We now know that all matter is made up of two kinds of particle: ‘baryons’ – protons and neutrons, fairly heavy particles, which are divisible into quarks; and ‘leptons,’ the other basic family, much lighter, consisting of electrons, muons, the tau particle, and neutrinos, which are not broken down into quarks.22 A proton, for example, consists of two up quarks and one down quark, whereas a neutron is made up of two down quarks and one up. All this may be confusing to nonphysicists, but keep in mind that the elementary particles that exist naturally on Earth are exactly as they were in 1932: the electron, the proton, and the neutron. All the rest are found only either in cosmic rays arriving from space or in the artificial circumstances of particle accelerators.23
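The charge bookkeeping described above can be verified in a few lines of exact arithmetic. A minimal sketch, using only the fractional charges and quark compositions stated in the text:

```python
from fractions import Fraction

# Quark charges, as fractions of the proton's (positive) unit charge.
UP = Fraction(2, 3)      # up quark: +2/3
DOWN = Fraction(-1, 3)   # down quark: -1/3

# A proton is two up quarks and one down quark; a neutron is two down and one up.
proton = 2 * UP + DOWN   # +2/3 + 2/3 - 1/3 = +1
neutron = UP + 2 * DOWN  # +2/3 - 1/3 - 1/3 =  0

print(proton, neutron)   # prints "1 0"
```

The fractions cancel exactly: the proton carries one full unit of positive charge and the neutron none, which is why no fractional charge is ever seen on the everyday particles of 1932.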
It was the main aim of physicists to amalgamate all these discoveries into a grand synthesis that would have two elements. First, it would explain the evolution of the universe, describe the creation of the elements and their distribution among the planets and stars, and explain the creation of carbon, which had made life possible. Second, it would explain the fundamental forces that enable matter to form in the way that it forms. God apart, it would in effect explain everything.
One day in the middle of 1960, Leonard Kessler, a children’s-book illustrator, ran into Andy Warhol – a classmate from college – coming out of an art-supply store in New York. Warhol was carrying brushes, tubes of paint, and some raw canvases. Kessler stared at him. ‘Andy! What are you doing?’
‘I’m starting pop art,’ Warhol replied.
All Kessler could think of to say was, ‘Why?’
‘Because I hate abstract expressionism. I hate it!’24
Do art movements really start at such specific moments? Maybe pop art did. As we shall see, it totally transformed not only art but also the role of the artist, a metamorphosis that in itself epitomises late-twentieth-century thought as much as anything else. But if Andy Warhol hated the abstract expressionists, it was because he was jealous of the success that they enjoyed in 1960. As Paris had faded, New York had become the new home of the avant-garde. Warhol would help change ideas about the avant-garde too.
The exhibition Artists in Exile at the Pierre Matisse Gallery in 1943, when Fernand Léger, Piet Mondrian, Marc Chagall, Max Ernst, André Breton, André Masson, and so many other European artists had shown their work, had had a big impact on American artists.25 It would be wrong to say that this exhibition changed the course of American painting, but it certainly accelerated a process that was happening anyway. The painters who came to be called the abstract expressionists (the term was not coined until the late 1940s) all began work in the 1930s and shared one thing: Jackson Pollock, Mark Rothko, Arshile Gorky, Clyfford Still, and Robert Motherwell were fascinated by psychoanalysis and its implications for art. In their case it was Jungian analysis that attracted their interest (Pollock was in Jungian analysis for two years), in particular the theory of archetypes and the collective unconscious. This made them assiduous followers (but also critics) of surrealism. Forged in the years of depression, in a world that by and large neglected the artist, many of the abstract expressionists experienced great poverty. This helped foster a second characteristic – the view that the artist is a social rebel whose main enemy is the culture of the masses, so much of which (radio, talking pictures, Time, and other magazines) was new in the 1930s. The abstract expressionists were, in other words, natural recruits to the avant-garde.26
Between the Armory Show and World War II, America had received a steady flow of exhibitions of European art, thanks mainly to Alfred Barr at the Museum of Modern Art in New York. It was Barr who had organised the show of Cézanne, Van Gogh, Seurat, and Gauguin in 1929, when MoMA had opened.27 He had a hand in the International Modern show at MoMA in 1934, and the Bauhaus show in 1937. But it was only between 1935 and 1945 that psychoanalytic thought, and in particular its relation to art, was explored in any detail in America, due to the influx of European psychoanalysts, as referred to above. Psychoanalysis was, for example, a central ingredient in the ballets of Martha Graham and Merce Cunningham, who in such works as Dark Meadow and Deaths and Entrances combined primitive (Native American) myths with Jungian themes. The first art exhibitions to really explore psychoanalysis also took place in wartime. Jackson Pollock’s show in November 1943, at Peggy Guggenheim’s gallery, started the trend, soon followed by Arshile Gorky’s exhibition at Julien Levy’s gallery in March 1945, for which André Breton wrote the foreword.28 But the abstract expressionists were important for far more than the fact that theirs was the first avant-garde movement to be influential in America. The critics Isaac Rosenfeld and Theodore Solotaroff drew attention to something they described as a ‘seismic change’ in art: as a result of the depression and the war, they said, artists had moved ‘from Marx to Freud.’ The underlying ethic of art was no longer ‘Change the world,’ but ‘Adjust yourself to it.’29
And this is what made the abstract expressionists so pivotal. They might see themselves as an avant-garde (they certainly did so up until the end of the war), and some of them, like Willem de Kooning, would always resist the blandishments of patrons and dealers, and paint what they wanted, how they wanted. But that was the point: what artists wanted to produce had changed. The criticisms in their art were personal now, psychological, directed inward rather than outward toward the society around them, echoing Paul Klee’s remark in 1915, ‘The more fearful the world becomes, the more art becomes abstract.’ It is in some ways extraordinary that at the very time the Cold War was beginning – when two atomic bombs had been dropped and the hydrogen bomb tested, when the world was at risk as never before – art should turn in on itself, avoid sociology, ignore politics, and concentrate instead on an aspect of self – the unconscious – that by definition we cannot know, or can know only indirectly, with great difficulty and in piecemeal fashion. This is the important subject of Diana Crane’s Transformation of the Avant-Garde, in which she chronicles not only the rise of the New York art market (90 galleries in 1949, 197 in 1965) but also the changing status and self-conception of artists. The modernist avant-garde saw itself as a form of rebellion, using among other things the new techniques and understanding of science to disturb and provoke the bourgeois, and in so doing change a whole class of society. By the 1960s, however, as the critic Harold Rosenberg noted, ‘Instead of being … an act of rebellion, despair or self-indulgence, art is being normalised as a professional activity within society.’30 Clyfford Still put it more pungently: ‘I’m not interested in illustrating my time…. Our age – it is of science – of mechanism – of power and death. 
I see no point in adding to its mammoth arrogance the compliment of graphic homage.’31 As a result the abstract expressionists would be criticised time and again for their lack of explicit meaning or any social implications, the beginning of a long-term change.
The ultimate example of this was pop art, which both Clement Greenberg and the Frankfurt School critics saw as essentially inimical to the traditional function of avant-garde art. Few pop artists experienced poverty the way the abstract expressionists had. Frank Stella had a (fairly) successful father, and Andy Warhol himself, though he came from an immigrant family, was earning $50,000 a year by the mid-1950s from his work in advertising. What did Warhol – or any of them – have to rebel against?32 The crucial characteristic of pop art was its celebration, rather than criticism, of popular culture and of the middle-class lifestyle. All the pop artists – Robert Rauschenberg, Jasper Johns, James Rosenquist, Claes Oldenburg, Roy Lichtenstein, and Warhol – responded to the images of mass culture, advertising, comic books, and television, but the early 1960s were above all Warhol’s moment. As Robert Hughes has written, Warhol did more than any other painter ‘to turn the art world into the art business.’33 For a few years, before he grew bored with himself, his art (or should one say his works?) managed to be both subversive and celebratory of mass culture. Warhol grasped that the essence of popular culture – the audiovisual culture rather than the world of books – was repetition rather than novelty. He loved the banal, the unchanging images produced by machines, though he was also the heir to Marcel Duchamp in that he realised that certain objects, like an electric chair or a can of soup, change their meaning when presented ‘as art.’ This new aesthetic was summed up by the artist Jedd Garet when he said, ‘I don’t feel a responsibility to have a vision. I don’t think that is quite valid. When I read artists’ writings of the past, especially before the two wars, I find it very amusing and I laugh at these things: the spirituality, the changing of the culture. 
It is possible to change the culture but I don’t think art is the right place to try and make an important change except visually…. Art just can’t be that world-shattering in this day and age…. Whatever kind of visual statement you make has first to pass through fashion design and furniture design until it becomes mass-produced; finally, a gas pump might look a little different because of a painting you did. But that’s not for the artist to worry about…. Everybody is re-evaluating those strict notions about what makes up high art. Fashion entering into art and vice versa is really quite a wonderful development. Fashion and art have become much closer. It’s not a bad thing.’34
From pop art onward, though it started with abstract expressionism, artists no longer proposed – or saw it as their task to propose – ‘alternative visions.’ They had instead become part of the ‘competing lifestyles and ideologies’ that made up the contemporary, other-directed, affluent society. It was thus entirely fitting that when Warhol was gunned down in his ‘Factory’ on Union Square in 1968 by a feminist actress, and survived after being pronounced clinically dead, the price of his paintings shot up from an average of $200 to $15,000. From that moment, the price of art was as important as its content.
Also characteristic of the arts in America at that time, and Manhattan in particular, was the overlap and links between different forms: art, poetry, dance, and music. According to David Lehman the very idea of the avant-garde had itself transferred to America, and not just in painting: the title of his book on the New York school of poets, which flourished in the early 1950s, was The Last Avant-Garde.35 Aside from their poetry, which travelled an experimental road between the ancien régime of Eliot et al. and the new culture of the Beats, John Ashbery, Frank O’Hara, Kenneth Koch, and James Schuyler were all very friendly with the abstract expressionist painters De Kooning, Jane Freilicher, Fairfield Porter, and Larry Rivers. Ashbery was also influenced by the composer John Cage. In turn, Cage later worked with painters Robert Rauschenberg and Jasper Johns, and with the choreographer Merce Cunningham.
By the middle of the century two main themes could be discerned in serious music. One was the loss of commitment to tonality, and the other was the general failure of twelve-tone serialism to gain widespread acceptance.36 Tonality did continue, notably in the works of Sergei Prokofiev and Benjamin Britten (whose Peter Grimes, 1945, even prefigured the ‘antiheroes’ of the angry young men of the 1950s). But after World War II, composers in most countries outside the Soviet Union were trying to work out the implications ‘of the two great contrasted principles which had emerged during and after World War I: “rational” serialism and “irrational” Dadaism.’ To that was added an exploration of the new musical technology: tape recording, electronic synthesis, computer techniques.37 No one reflected these influences more than John Cage.
Born in Los Angeles in 1912, Cage studied under Schoenberg between 1935 and 1937, though rational serialism was by no means the only influence on him: he also studied under Henry Cowell, who introduced him to Zen, Buddhist, and Tantric ideas of the East. Cage met Merce Cunningham at a dance class in Seattle in 1938, and they worked together from 1942, when Cunningham formed his own company. Both were invited to Black Mountain College summer school in North Carolina in 1948 and again in 1952, where they met Robert Rauschenberg. Painter and composer influenced each other: Rauschenberg admitted that Cage’s ideas about the everyday in art had an impact on his images, and Cage said that Rauschenberg’s white paintings, which he saw at Black Mountain in 1952, gave him courage to present his ‘silent’ piece, 4′ 33″, for piano in the same year (see below). In 1954 Rauschenberg became artistic adviser to Cunningham’s dance company.38
Cage was the experimentalist par excellence, exploring new sound sources and rhythmic structures (Imaginary Landscape No. 1 was scored for two variable-speed gramophone turntables, muted piano, and cymbals), and in particular indeterminacy. It was this concern with chance that linked him back to Dada, across to the surrealist Theatre of the Absurd and, later, as we shall see, to Cunningham. Cage also anticipated postmodern ideas by trying to break down (as Walter Benjamin had foreseen) the barrier between artist and spectator. Cage did not believe the artist should be privileged in any way and sought, in pieces such as Musiccircus (1968), to act merely as an initiator of events, leaving the spectator to do much of the work, where the gulf between musical notation and performance was deliberately wide.39 The ‘archetypal’ experimental composition was the aforementioned 4′ 33″ (1952), a three-movement piece for piano where, however, not a note is played. In fact Cage’s instructions make clear that the piece may be ‘performed’ on any instrument for any amount of time. The aim, beyond being a parody and a joke at the expense of the ordinary concert, is to have the audience listen to the ambient sounds of the world around them, and reflect upon that world for a bearably short amount of time.
The overlap with Cunningham is plain. Born in 1919 in Centralia in Washington State, Cunningham had been a soloist with the Martha Graham Dance Company but became dissatisfied with the emotional and narrative content and began to seek out a way to present movement as itself. Since 1951, Cunningham had paralleled Cage by introducing the element of chance into dance. Coin tossing and dice throwing or clues from the I Ching were used to select the order and arrangement of steps, though these steps were themselves made up of partial body movements, which Cunningham broke down like no one before him. This approach developed in the 1960s, in works such as Story and Events, where Cunningham would decide only moments before the production which parts of the dance would be performed that night, though even then it was left to the individual dancers to decide at certain points in the performance which of several courses to follow.40
Two other aspects of these works were notable. In the first, Cage or some other composer provided the music, and Rauschenberg, Johns, Warhol, or other artists would provide the settings. Usually, however, these three elements – dance, music, and set – did not come together until the day before the premiere. Cunningham did not know what Cage was producing, and neither of them knew what, say, Rauschenberg was providing. A second aspect was that, despite one of Cunningham’s better-known works being given the title Story, this was strongly ironic. Cunningham did not feel that ballets had to tell a story – they were really ‘events.’ He intended spectators to make up their own interpretations of what was happening.41 Like Cage’s emphasis on silence as part of music, so Cunningham emphasised that stillness was part of dance. In some cases, notices in the wings instructed certain dancers to stay offstage for a specified amount of time. Costumes and lighting changed from night to night, as did some sets, with objects being moved around or taken away completely.
That said, the style of Cunningham’s actual choreography is light, suggestive. In the words of the critic Sally Banes, it conveys a ‘lightness, elasticity … [an] agile, cool, lucid, analytic intelligence.’42 Just as the music, dance, and settings were to be comprehended in their own right, so each of Cunningham’s steps is presented so as to be whole and complete in itself, and not simply part of a sequence. Cunningham also shared with Jacques Tati a compositional approach where the most interesting action is not always going on in the front of the stage at the centre. It can take place anywhere, and equally interesting things may be taking place at the same time on different parts of the stage. It is up to the spectator to respond as he or she wishes.
Cunningham was even more influenced by Marcel Duchamp, and his questioning of what art is, what an artist is, and what the relationship with the spectator is. This showed most clearly in Walkaround Time (1968), which had decor by Jasper Johns based on The Bride Stripped Bare by her Bachelors, Even, and music by David Behrman entitled … for nearly an hour, based on Duchamp’s To Be Looked at (from the Other Side of the Glass) with One Eye, Close to, for Almost an Hour. This piece was Johns’s idea. He and Cunningham were at Duchamp’s house one evening, and when Johns put the idea to him, the Frenchman answered, ‘But who would do all the work?’43 Johns said he would, and Duchamp, relieved, gave permission, adding that the pieces should be moved around during the performance to emulate the paintings.44 The dance is characterised by people running in place, small groups moving in syncopated jerkiness, like machines, straining in slow motion, and making minuscule movements that can be easily missed. Walkaround Time has a ‘machine-like grace’ that made it more popular than Story.45
Along with Martha Graham and Twyla Tharp, Cunningham has been one of the most influential choreographers of the final decades of the century. This influence has been direct on people like Jim Self, though others, such as Yvonne Rainer, have rebelled against his aleatory approach.
Cunningham, Cage, the abstract expressionists, and the pop artists were all concerned with the form of art rather than its meaning, or content. This distinction was the subject of a famous essay by the novelist and critic Susan Sontag, writing in 1964 in the Evergreen Review. In ‘Against Interpretation,’ she argued that the legacy of Freud and Marx, and much of modernism, had been to overload works of art with meaning, content, interpretation. Art – whether it was painting, poetry, drama, or the novel – could no longer be enjoyed for what it was, she said, for the qualities of form or style that it showed, for its numinous, luminous, or ‘auratic’ quality, as Benjamin might have put it. Instead, all art was put within a ‘shadow world’ of meaning, and this impoverished it and us. She discerned a countermovement: ‘Interpretation, based on the highly dubious theory that a work of art is composed of items of content, violates art. It makes art into an article for use, for arrangement into a mental scheme of categories…. The flight from interpretation seems particularly a feature of modern painting. Abstract painting is the attempt to have, in the ordinary sense, no content; since there is no content, there can be no interpretation. Pop art works by the opposite means to the same result; using a content so blatant, so “what it is,” it, too, ends by being uninterpretable.’46 She wanted to put silence back into poetry and the magic back into words: ‘Interpretation takes the sensory experience of the work of art for granted…. What is important now is to recover our senses…. In place of a hermeneutics we need an erotics of art.’47
Sontag’s warning was timely. Cage and Cunningham were in some respects the last of the modernists. In the postmodern age that followed, interpretation ran riot.