INTRODUCTION
THE TYRANNY OF IMAGES
This is a book about the end of cinema, the end of the world, and the end of civilization as we know it. The signs are there, waiting to be deciphered. The worldwide proliferation of nuclear arms, including North Korea’s decision to resume nuclear weapons production, assures that sooner or later one of these violently destructive devices will be used, either by some sort of constituted government or by terrorists seeking to gain the upper hand. Natural resources are being depleted, the polar ice caps are melting, “rogue states” pop up with increasing frequency, and international politics are fraught with tension and deception. And other than a few desultory op-ed pieces in the newspaper or on television, no one is really doing anything about it; we embrace instead the fossil fuel culture of the 20th century as our model for what’s left of the future. We’d like to care, but it’s too much effort. Bombarded by a plethora of competing media sources, contemporary humankind seeks, without success, some sort of refuge from the incessant images of destruction that flood our telescreens. The use of the Orwellian word “telescreens” is not an accident; if ever we lived in a zone of perpetual hypersurveillance, we do so now. Images seek to control and dominate us; access to information is strictly controlled. If something doesn’t fit into the corporate pattern, it is ruthlessly excised. We have abandoned humanism as a model and replaced it with the culture of ceaseless “alerts,” “bulletins,” and “updates,” designed to terrorize us into submission to the state. The televised “hate breaks” of Orwell’s 1984 have been replaced by omnipresent 24-hour “news” coverage, which isn’t news at all, but rather a series of carefully crafted press releases designed to favor the ruling elite and to convince dissenters that resistance to this new regime is futile.
Indeed, as a culture, we seem tired of life. As we enter the 21st century, there are signs of exhaustion everywhere. The narrative structures of feature films, as I will discuss, are being shamelessly recycled from one film to the next, and sequels (which have always been a part of movie history) now dominate the box office. Politics, especially in the U.S., have become so “stage-managed” as to divorce themselves from anything remotely resembling democracy or the representation of one’s constituents: it’s all about money, access to the media, hyperconglomerization, and buying up the competition, the better to dominate what passes for public discourse. The media is now so closely controlled by a few corporations that any hope of getting alternative viewpoints to the general public has all but evaporated. Are we interested in the welfare of the general populace? Or do we seek the obliteration of social culture, in which the ever-widening gap between the rich and the poor recalls the gulf between the workers and the ruling class in Fritz Lang’s Metropolis (1927)? The questions this book thus seeks to address are: “how does the current cinema reflect these concerns?” “what are the forces that have brought us to the brink of social and cultural apocalypse?” and “what assurance do we have that our continued existence is feasible, or even likely?” As the reader will discover, I feel that we are experiencing a global cultural meltdown, in which all the values of the past have been replaced by rapacious greed, the hunger for sensation, and the desire for useless novelty without risk. Indeed, in all our contemporary cultural manifestations as a worldwide community, we seem “eager for the end.”
And there is, after all, something comforting in the thought of imminent destruction. All bets are off, all duties executed, all responsibilities abandoned. Contemplating not just one’s own mortality, but that of an entire civilization, somehow makes the unthinkable not only palatable, but also vaguely reassuring. If no one will survive, who is to say that any of us ever existed at all? What value will histories of the earth have when the entire planet is destroyed? The people, places, events, wars, cataclysmic upheavals, intrigues, triumphs, and failures of humankind will be obliterated in an instant, leaving no one behind to tell the tale. If we all go together, so to speak, have we ever been here at all? Equality will at last be achieved in the final seconds before Armageddon, when countries, boundaries, and political systems instantaneously evaporate, making a mockery of all our efforts at self-preservation. The absolute destruction of the earth, and all the peoples on it, will create a vacuum only for those who no longer exist. Centuries of art, literature, scientific achievement, the mechanisms of instantaneous communication, all will vanish—including this book and its potential readers. No one will know that we have ever existed. No one will bear witness to our demise.
The cinema has grappled with these questions since its inception in the late 1880s and remains fascinated by them today. Yet each filmic depiction of the apocalypse inherently projects the existence of surviving witnesses, for whom the film has been made. Thus, none of the cinema’s numerous attempts to “document” the true end of the world can be held to be truly authentic, if only because they require an audience by the very fact of their existence. And yet, as we know, films can easily be destroyed, and 50% of all films made before 1950 no longer exist, the victims of nitrate deterioration, poor preservation, or corporate and/or private neglect. As each film ceases to exist, it takes into oblivion the actors who performed in it, the director who staged their actions, the scenarist who plotted their conflicts, the cinematographer who recorded these staged conceits, and all the other participants in the film’s creation, leaving merely a line, perhaps, in the Library of Congress catalogue. Memories of surviving actors, technicians, and auteurs, recorded and transcribed by scholars who seek to keep the memory of the particular film in question alive. Testimony shelved in libraries, forgotten, and then eventually purged from the collection, perhaps pulped to create raw material for new texts. But if and when the entire planet Earth ceases to exist, all records of its existence will also cease to exist, and nothing (except for some bits of space junk, and perhaps a plaque on the moon with Richard Nixon’s name on it) will remain for any future civilizations to contemplate. This, then, is the true and humbling nature of genuine apocalypse: not an atom of the Earth will remain to bear witness to our birth, life, and death. What makes this appealing is the thought that if none shall survive, then, at last, all class, social, and racial boundaries will have been erased. No more slavery, no more sweatshops, no more prejudice, and no more inequality. As the Earth atomizes into cosmic dust, we will at last achieve the true perfection of nonexistence with nothing but some space debris to bear witness to our passing. We are all, thus, equal in death.
And yet this thought is simultaneously too appalling, too comprehensive for the human psyche to truly absorb. No books, postcards, monuments, videotapes, museums, armies, childhood memories, picnics, wars, riots, strikes, weddings, or funerals to remember? Nothing to mark our time on Earth? The thought is too totalizing for the mind to absorb. Yet perhaps that is why so many people are eager for the end. As an “event,” the true apocalypse would cancel out all other occurrences and render the state of human existence absolutely blank. Such a situation is impossible to comprehend, even as our own mortality is incomprehensible to us. We move through life in a fever dream, acquiring objects, pursuing goals, engaging in trysts, and surviving conflicts until that day when a car swerves into the wrong lane and crashes into our vehicle head-on, or a blood vessel in our brain bursts, or a button is pushed, or, after a night’s sleep, we just do not wake up. The total embrace of nothingness thus becomes the unstated goal of existence, a headlong rush into the abyss. Simultaneously feared and anticipated, the end of “all” is the defining moment that we all seek. Perhaps this is why the movies are so obsessed with ritual displays of sex (procreation) and violence (death), coupled with myriad cataclysmic explosions to spice things up when the hopelessly recycled narrative slows to a crawl. There is no real difference, for example, between Jason X (2001) and Hollywood Ending (2002). In the first film, Jason rises from a state of cryogenic suspension to begin, once again, the mute and ritual murder of yet another isolated group of victims, this time on board a spaceship in the 25th century. At the film’s end, Jason plunges to Earth (or Earth II, the original having been rendered unfit for human habitation by centuries of relentless pollution) and lands in a replica of Crystal Lake, where the first film, Friday the 13th (1980), took place. The film’s final shot of Jason’s signature hockey mask floating in the water assures us that he will again return to dispatch more victims, only to be momentarily vanquished so that he can once again return from the dead. In Hollywood Ending, Woody Allen is once again the stammering neurotic who self-destructs through fear of success, improbably in love with a woman half his age, suffering temporary indignities only to be miraculously rescued in the final reel so that he can return for another installment next year. Both Jason Voorhees and Woody Allen are thus series characters, just as Andy Hardy, Vin Diesel, and Julia Roberts are series characters, their fictional personae indistinguishable from their offscreen selves. Spider-Man (2002) thus paves the way for The Hulk (2003), with director Ang Lee attached to the project; Spider-Man 2 (2004), in which Sam Raimi again puts Tobey Maguire and Kirsten Dunst through their predictable paces; X2 (2003), reuniting Bryan Singer with numerous members of the original cast, including Patrick Stewart and Hugh Jackman; as well as future projects Fantastic Four (2003), Daredevil (2003), and Sub-Mariner, all announced from the Marvel Comics stable. Batman and Superman, as characters, seem momentarily exhausted from their labors, but they will no doubt return in the near future, in new and updated digital versions. In the meantime, we can content ourselves with Star Wars: Episode II—Attack of the Clones (2002), which at least affords Christopher Lee some work, as well as Stuart Little 2 (2002), Men in Black II (2002), a remake of Mr. Deeds Goes to Town (1936), this time titled Mr. Deeds (2002), with Adam Sandler as the new millennium’s everyman, not to mention Austin Powers in Goldmember (2002), the latest entry in that series.
Sequels have always dominated genre filmmaking—look at the Sherlock Holmes series, The Bowery Boys, Blondie, Boston Blackie, and numerous other long-running examples as proof of this phenomenon—but once upon a time, the series films were B pictures, produced by Monogram or PRC or Republic, shown on Saturday mornings along with a newsreel and a travelogue and perhaps a cartoon. Now, the comic book characters have assumed center stage, pushing all humanist concerns off the screen in favor of a diet of perpetual adolescence. The television series The Osbournes (2002-?) is the new Father Knows Best; whereas the original series ran from 1954 to 1967 on ABC, NBC, and CBS before its demise, The Osbournes was designed for one season, although a follow-up season on MTV was eventually negotiated to drag out the wretched spectacle a bit longer. But as the pace of connectivity accelerates, so does burnout; we know people instantly, and then we want something new. Out with the newly old, in with the momentarily new. Decay, collapse, and then regeneration; no wonder William Faulkner observed, “they worship death here [in Hollywood]. They don’t worship money, they worship death” (qtd. in Friedrich 237). Only death can presage resurrection. Only death ensures immortality. Only death permits endless repackaging. The past, the present, and the future all melt in one moment, that space when the original vanishes to be replaced forever by the simulacrum. The digital recycling of cinematic icons is so firmly established as an imagistic convention that we no longer blink when the long-dead John Wayne appears in a beer commercial, or Marilyn Monroe or Jack Kerouac appear in print ads for everything from men’s slacks to perfume. These faces, these attitudes, no longer belong to their originators. They belong to us, and they long ago forfeited any authenticity they may once have held claim to. Delbert Mann, who worked with Cary Grant on one of the actor’s last films, That Touch of Mink (1962), recalled that Grant was
rather strange. He was always charming; always smiling, warm, and witty, and very likable. But I think he knew he was quite close to the end of his career. He was rather bored with acting by now; he was looking forward to not doing it anymore, and it just did not challenge him or excite him. Maybe it was the role; he was playing someone, I think, essentially rather close to himself; therefore, the role itself didn’t offer the kind of stimulation another part might have. His concern seemed more with the physical aspects of the production than with his performance, though in every way he was always totally professional, on time, never caused any problems, knew his lines. (qtd. in Higham and Moseley 264-5)
Already the subject of countless tributes and retrospectives, Grant knew that his image on screen was assured of a fair degree of indestructibility—why expend any more effort on something that had already been accomplished?
Shuei Matsubayashi’s The Last War (1961; original title Sekai daisenso, 1961) confronts these issues more directly than the numerous US and British films that toy with the concept of Armageddon. Films such as On the Beach (1959), Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1964), The Day the Earth Caught Fire (1961; also known as The Day the Sky Caught Fire), Armageddon (1998), The Core (2003), Meteor (1979), Fail-Safe (1964), and other dystopian fantasies either hold out the promise of false hope or treat the prospect of global annihilation as a matter of grim amusement. The Last War, in contrast, posits a world without a future, in which the inexorable buildup of nuclear weapons in the East and West leads inevitably to nuclear holocaust. Japan positions itself within the film’s narrative as a neutral observer, unable to influence the events that bring about world destruction. Although the two world powers in the film are thinly disguised depictions of US and former Soviet Union forces, thus placing the film firmly within the context of the Cold War era, unlike its closest US counterpart, the television movie The Last Day (1975), The Last War offers no hope of a return to normalcy. Throughout the film, minor world crises keep both sides on edge, and numerous false alarms occur due to mechanical malfunctions and border disputes. An uprising in Korea along the thirty-eighth parallel nearly precipitates world destruction, but through last-minute diplomacy this crisis is resolved.
In the end, what triggers global nuclear destruction is human error rather than aggression. Two airplanes from the rival superpowers accidentally collide in midair over the Arctic, and both sides, fearing imminent attack, launch their entire fleets of missiles; this time, there is no reprieve. As the surprisingly downbeat press book for The Last War details, the film’s final reel depicts nothing less than the total destruction of the planet, beginning with a huge nuclear firebomb over Tokyo, illuminating the night sky for one last, irrevocable instant. The film’s synopsis concludes with these stern words:
And then, the end. In colossal, blinding explosions the urban areas of the world are leveled to dust, while rural populations are left to die by agonizing degrees from radioactive fallout. Thus man, through stubbornness and blind stupidity, has wrought his own destruction. (The Last War press kit)
In a series of brutal wipes, Paris, New York, Moscow, and other world cities are obliterated in an instant. A few vessels at sea hold a minuscule number of survivors, who, realizing the hopelessness of their situation, decide to return to their homes to die. The film’s final images, rendered through Eiji Tsuburaya’s superb predigital effects, depict the planet as a nuclear wasteland.
The Last War, produced by Toho, the company that spawned Godzilla, King of the Monsters! (1956; original title Gojira, 1954), Mothra (1962; original title Mosura, 1961), Rodan (1956; original title Sora no daikaijû Radon, 1956), Monster Zero (1970; original title Kaijû daisenso, 1965), and numerous other post-Hiroshima monster films, is not an entertainment. Along with a few other films, such as Peter Watkins’s The War Game (1965), the mock documentary film that depicted the effect of a nuclear attack on Great Britain, The Last War is an earnest plea for nuclear disarmament, one that was, unfortunately, unheeded. With such postapocalypse films as the latest version of The Time Machine (2002) and a TV movie remake of On the Beach (2000), it seems that despite the collapse of the decades-old East/West dynamic, the threat of global annihilation is as great as ever. At this writing, after the events of September 11 in the United States, Iraq is supposedly developing long-range nuclear missiles and “dirty bombs” for terrorist use in large metropolitan centers, while war rages unchecked in the Middle East. Films such as Black Hawk Down (2001), We Were Soldiers (2002), The Sum of All Fears (2002), Collateral Damage (2002), and other jingoistic texts recall the propagandistic excesses of World War II, even as a steady stream of escapist comedies and romances, such as The New Guy, Deuces Wild, 40 Days and 40 Nights, Ice Age, The Scorpion King, and Snow Dogs (all 2002) seek to satiate and distract the contemporary viewer. What comes next will probably be a curious revival of the noir cycle, unrelated to the spate of so-called neo-noir films such as Wild Things (1998), Red Rock West (1992), L.A. Confidential (1997), The Usual Suspects (1995), and Bound (1996), this last film the first feature by Larry and Andy Wachowski, who went on to create The Matrix (1999).
The hallmark of neo-noir film is a certain manic energy that keeps these films moving with effortless assurance through a bewildering series of double crosses and betrayals, in contrast to the slightly more restrained narrative pacing of the classic noir. But the neo-noir, operating in the shadow of nuclear oblivion as a fact of existence rather than a new phenomenon (as the classic post-1945 noirs did), treats the unthinkable as a commonplace rather than an omnipresent threat. When the flag-waving and escapism of the current conflict are cleared away, in some unimaginable fashion, and after how many wasted lives, the postnoir will be distinguished by its somberness. The nihilistic dread of Arch Oboler’s The Arnelo Affair (1947), in which the conscienceless John Hodiak draws Frances Gifford into a web of lies and murder, or the cold calculation of José Ferrer as he hypnotizes Gene Tierney to do his bidding in Otto Preminger’s Whirlpool (1949), will prove a better model for the 21st century postnoir. In Wild Things, the audience is invited to unravel the puzzle presented by the film’s protagonists; Red Rock West and L.A. Confidential are so luridly aggressive in their characterizations and events that they qualify for near-camp status. Nothing in neo-noir films is real, or pretends to be; all is exaggeration. There is no real threat, nothing truly at stake; we are conscious that we are watching a recreation, and often a period piece at that. The Arnelo Affair and Whirlpool take place in the domain of the living dead, utilizing truly vulnerable women (Gifford and Tierney) juxtaposed with vicious, unscrupulous leading men to create a vision of utter hopelessness. There is no escape from the world of the true noir, just as there will be no escape from the postnoir. As in Basil Dearden’s underrated The Man Who Haunted Himself (1970), in which Roger Moore’s doppelgänger gradually takes over his existence until the copy replaces the original in the affections of his family and friends, the classic noir spins a tale of chronic alienation so complete that it is a totalizing experience: the death of the soul.
As Abel Gance observed, “Abandon All Hope, Ye Who Enter the Hell of Images” (qtd. in Virilio, War and Cinema 31). Virilio goes on to note that by World War I,
the cinema became the major site for a trade in dematerialization, a new industrial market which no longer produced matter but light, as the luminosity of those vast stained-glass windows of old was suddenly concentrated into the screen. “Death is just a big show in itself” said Samuel Lionel Rothapfel […] inventor of the first cinema to be baptized a cathedral, the Roxy. (Virilio, War and Cinema 32)
The “hell of images” Gance refers to thus refuses to release its dead, forcing them to repeat, and repeat again, past situations and poses, endowing them with real, not phantom, agency. In many ways, Star Wars: Episode II—Attack of the Clones is the perfect exemplification of this phenomenon in that the line between the real and the constructed has been completely obliterated, and the film itself was produced entirely without conventional motion picture technology. A digital creation from first to last, except for its transfer to 35mm film for final projection in most theaters, Attack of the Clones is a moving image construct in which the actors are overwhelmed by the hypertechnology of the enterprise that surrounds them. Attack of the Clones is the perfect postfilm movie; it lacks soul, inspiration, originality, and style. As A. O. Scott observed in his appropriately scathing review of the film, Attack of the Clones possesses “all the spontaneity and surprise of an election day in the old Soviet Union.” Scott continues:
while Attack of the Clones is many things—a two-hour-and-12-minute action-figure commercial, a demo reel heralding the latest advances in digital filmmaking, a chance for gifted actors to be handsomely paid for delivering the worst line readings of their careers—it is not really much of a movie at all, if by movie you mean a work of visual storytelling about the dramatic actions of a group of interesting characters. (B1)
More tellingly, Scott argues that “Mr. Lucas […] has lost either the will or the ability to connect with actors, and his crowded, noisy cosmos is psychologically and emotionally barren” (B20). Indeed, the actors in the film seem to have been cut adrift in Attack of the Clones without any concern for the consistency of their performances; as Scott observes, only the endlessly dependable Christopher Lee (still energetic and involved at age 80) manages to convey any authority in his performance (see Edelstein 3, 17). A viewing of the electronic press kit for the film confirms Lucas’s lack of emotional connection to the project. Left to their own devices, the actors deliver their lines in robotic fashion, helplessly seeking some credibility in a text where none exists. In contrast, Lee treats Lucas far more brutally during the shooting of the film, demanding to know precisely where his “eye-match” is, complaining when the proposed blocking will not work, and drawing on his inner reserves as an actor in the classic British tradition to give his Count Dooku at least some semblance of motivation and believability. Lee also brings a welcome dash of gravitas to Peter Jackson’s The Lord of the Rings: The Fellowship of the Ring (2001), demonstrating again that even in an age dominated by high-tech effects, the human element is still indispensable. As Jackson comments, “in front of the camera he has to do something to his eyes […] they suddenly glaze over and then gleam in a very chilling way; it’s as if he turns on an internal light. When you’ve got your shot he turns it off and he’s back to [his normal self]” (Edelstein 17). In short, Lee’s presence is more than enough to hold the viewer’s attention; he has no need of external artifice.
Which leaves us with the much-heralded digital special effects. The human aspect of Attack of the Clones having been so handily disposed of, what about the visual sheen of the film? No problem there: the effects are the essence of manufactured perfection, immaculately conceived and executed, devoid of any human agency, a testament to the film’s essential inhumanity and hollowness. But the effects are merely a landscape, a backdrop, marvelous in their evanescent syntheticity, but scrupulously devoid of any defect that might render them fallible, authentic, organic. Lucasfilm officials have complained in the press that the “twenty-first century” films of George Lucas are being presented with “nineteenth-century technology.” As the president of Lucasfilm, Gordon Radley, noted shortly before the release of Attack of the Clones, “we’ve proselytized about this for years. So, it’s disappointing to think that it continues to take digital cinema longer to come to fruition than it should” (qtd. in Di Orio, “Clones” 1). But technology is the driving force here, not creativity. And when the final episode of the Star Wars saga is finally released, what more can Lucas possibly offer us? The Adbusters Media Foundation reminds us in a broadside that “corporations are legal fictions that we ourselves created and must therefore control” (“Organized Crime”). In such a nonhumanist atmosphere, actors become little more than props within the scenes they nominally inhabit, to be used for effect rather than for dramatic motivation. As Diane Lane recalled of the shooting of Adrian Lyne’s Unfaithful (2002):
There was […] one morning when Adrian surprised me. He was waiting for me in my trailer before I’d even had coffee. He said, “I got this great idea: I think we should do this scene naked, except you’re wearing Oliver’s combat boots.”
What do you say? You slowly, graciously whittle it down to, “Maybe that’s not the greatest idea.” (Kehr, “Aiming for More” B6)
In such an atmosphere of exploitation and contrived situations, motivation is lost in a parade of artificial constructions. Also lost in the media blitz surrounding the film is the fact that Unfaithful is a remake of Claude Chabrol’s The Unfaithful Wife (1969; original title La Femme Infidèle, 1969), something most reviewers failed to note.
And yet, in the contemporary cinematic marketplace, who has time for history? For all its artistic nullity, the Star Wars franchise is the benchmark for the new cinema, as confirmed by producer Sean Daniel, whose film The Scorpion King created a new action star in The Rock, a former professional wrestler, and is itself part of the new Mummy franchise, which, beginning in 1999, recycled the 1932 original and its 1940s successors into a new series that continues with no end in sight. “You really have to look at the success of Star Wars as one of the signature enterprises in this trend,” said Daniel, a former studio production chief. “Truly, that series was—and is—the envy of everybody. As the movie culture becomes more global, there is an added value to making movies that are part of a series or attached somehow to some title or brand that already has market recognition” (qtd. in Lyman, “Spinoff” B1). In addition to the aforementioned franchise films, we see new versions of Halloween, more Harry Potter films, more American Pie sequels, The Fast and the Furious 2 (the “original” itself was a thematic remake of a 1954 film by legendary exploitation producer Roger Corman), and the list goes on and on. As Rick Lyman comments:
Warner Brothers, now leading the way in this franchise frenzy, is preparing to revive the Batman, Superman, Conan, Terminator and Exorcist franchises, as well as creating new ones with Wonder Woman, Looney Tunes and remakes like Westworld and Charlie [not Willy Wonka] and the Chocolate Factory. Universal Pictures is hard at work on The Hulk, The Cat in the Hat, Red Dragon (featuring the movies’ favorite serial killer, Hannibal Lecter) and a Coen Brothers remake of the 1960’s caper comedy Gambit.
New Line Cinema has remakes of The Texas Chainsaw Massacre and Willard in the works; MGM wants to bring The Outer Limits television series to the big screen and has teamed with Miramax on a remake of Akira Kurosawa’s classic Seven Samurai; DreamWorks is hoping to jumpstart a new franchise based on the Matt Helm spy books that spawned a Dean Martin series in the 60’s; Sony is working on a new version of Peter Pan; Miramax is starring Roberto Benigni as Pinocchio; Paramount is beginning a remake of The Warriors, a 1979 teenage-gang thriller; and RKO, the long dormant Hollywood studio, is reviving its name and hoping to capitalize on remakes of movies drawn from its vaults, like Suspicion and Beyond a Reasonable Doubt.
“It’s much harder these days to get anyone’s attention,” said Ted Hartley, RKO’s chairman. “You only have about 10 seconds to grab them. And to get somebody to react positively to some new idea, some new title, takes a lot more than 10 seconds. So we all love starting with something that’s already known.” (“Spinoff” B10)
Or, as producer Mark Abraham succinctly comments, “Everybody is trying to establish a brand and exploit it” (Lyman, “Spinoff” B10).
The reason for all this recycling is simple: money. Traditionally, sequels underperformed the financial success of their predecessors; now, increased brand awareness makes subsequent films in a series more profitable than the original. The first Austin Powers film, released in 1997, grossed a mere $54 million; the first sequel, Austin Powers: The Spy Who Shagged Me, generated $205 million in rentals in 1999. Rush Hour (1998) did a respectable $141 million in business, but Rush Hour 2 (2001) made $226 million (Lyman, “Spinoff” B10). Is it any wonder that invention or originality has become curiously passé? As another Hollywood executive noted, “If you have a movie that is not differentiated, that does not fit into one of these categories, it really has to be spectacular to carve out a spot in the marketplace today” (Lyman, “Spinoff” B10). With the ever-increasing cost of movie production and distribution, what chance does a small, thoughtful film have with the general public? Eric Rohmer’s The Lady and the Duke (2001; original title L’Anglaise et le duc, 2001) is the veteran director’s first foray into digital technology. Set during the French Revolution, the film uses extensive “green screen” work to place the actors within a series of paintings that suggest the locations of the era, but it also depends heavily on the performers to contribute to the success of the project, creating a sort of cinéma vérité video record of a vanished time and place (see Mackey 20). In contrast to the digital universe posited by George Lucas, Jerry Bruckheimer, and their mainstream cohorts, Rohmer’s film is at once spectacular yet intimate, deeply connected to the human drama of the state in crisis. Rather than armies of anonymous clones or robotic battleships, Rohmer uses the new digital tools to create a small, intimate film, very much in the spirit of his earlier work. Yet Rohmer’s film received only a few screenings at the New York Film Festival and then opened to desultory bookings in New York and a few major cities in 2002, without any hope of a national release. Increasingly, it seems that audiences do not wish to be entertained; they want to be bombarded by an assault of light and sound. Imagine Marty (1955), a modestly budgeted film about the forlorn love life of a Bronx butcher, winning an Academy Award for best picture in the current climate. It just is not possible.
It is astonishing to think that as late as 1951, a modest western such as Philip Ford’s The Dakota Kid (1951) could be shot in seven days on a budget of $52,471 and be assured of a theatrical release. The film’s single biggest expense was the cast—$8,761—and the film used only 23,000 feet of 35mm film to create a 60-minute feature. One year earlier, in 1950, John Ford’s Rio Grande was shot in 32 days, with a total budget of $1,214,899, using a mere 80,000 feet of film to complete the 105-minute feature. The stars included John Wayne, Maureen O’Hara, Ben Johnson, Victor McLaglen, and numerous other stalwarts of the Golden Age of Classic Cinema. Their combined salaries for the project totaled just $318,433 (McCarthy and Flynn, “Economic Imperative” 26, 29). Such figures are impossible to replicate half a century later. Certainly, even after allowing for inflation since the post-World War II era, the simple fact is that a level of human craftsmanship has been sacrificed in the contemporary cinema for the sake of synthetic spectacle. Who could imagine completing a film in seven days now? Perhaps only James Toback, who shot the modestly successful Two Girls and a Guy (1997) in one week in Robert Downey Jr.’s loft, using Downey, Heather Graham, and Natasha Gregson Wagner to create a compellingly intimate tale of two women vying for the same man. Two Girls and a Guy actually received a national release, solely on the basis of its star cast and modest budget, but such films are now a rarity. With the average production cost of a Hollywood film nearing the $60-million mark, plus another $30 million for prints and advertising, to say nothing of promotional budgets to finance talk-show appearances by the film’s stars, who can afford to bet on an untried commodity? We have become a culture of sameness in which conformity dominates, and everything is equally replaceable.
What drives all this conformity is, of course, the fear of the new—not new technological developments, which are eagerly embraced by capitalist culture as tools to increase its dominion over international consumers—but rather new ways of interacting with one another, departures from the hyperconglomerized new order now firmly established, anything that threatens to upset the status quo. The current political and social climate, particularly after the events of September 11, ensures that all deviations from what are perceived to be normative values will be immediately censured. The entertainment conglomerates—AOL Time Warner, Vivendi Universal, and Rupert Murdoch’s News Corporation—hold a virtual monopoly on all that we see and hear, with increasingly infrequent interruptions. All of these corporate entities are undergoing internal turmoil in the wake of the dot-com crash and the broadband glut, but they will survive, refigured in another form. These nation-states, entities whose realm is international, specialize in misinformation, repetition, and titillating scandal, all in an attempt to satisfy an increasingly bored and restless public, unsure of where their next meal is coming from, unable to escape the cycle of grinding poverty that supports these media giants.
And to whom can they turn for the truth? Certainly not their supposed leaders. The Nixon White House serves as an excellent example of the complete lack of integrity in government. When George Wallace was shot, Nixon told his aides to start a rumor that Wallace’s attacker “was a supporter of McGovern and Kennedy. […] Say you have it on unmistakable evidence.” He suggested that the famous photograph of a young Vietnamese girl running from a US napalm attack was faked. He described protesters against the conflict in Vietnam as “a wild orgasm of anarchists sweeping across the country like a prairie fire.” And when Billy Graham told Nixon that he believed that Jewish interests had a “stranglehold” on the media, Nixon responded, “Oh boy. So do I. I can’t ever say that, but I believe it.” Indeed, in a mood of exasperation, Nixon rhetorically inquired of his aide Bob Haldeman, “What the Christ is the matter with the Jews, Bob? What is the matter with them? I suppose it’s because most of them are psychiatrists” (all qtd. in Slansky 43). All of these excerpts are culled from tapes Nixon himself made during his tenure at the White House, as if he were possessed of a compulsion to document his own destruction. Andy Warhol performed much the same function with his silent screen tests, compiling a visual record of his numerous associates during his most prolific period as an artist. As Jonathan Jones describes these 100-foot, 16mm black-and-white static films:
Warhol’s Screen Tests are his best works on film, closest to the severe dignity of his painted portraits. […] They are elegiac too, even though most of the people filmed are younger than Warhol. The youthful and vulnerable pass before his camera, are loved by it, and then vanish. Each black-and-white film ends with a silvery fade-out, the face slowly dissolving in a burst of light, as if the Bomb had just been dropped. This ghostly effect could not more explicitly make us think of mortality—and of film as a fragile defense against it. Hollywood specialises [sic] in immortality, but Warhol’s use of film is more material. At the end of each film, you see the texture of the celluloid itself. These people could easily have been dead for centuries, the films found rolled up in a Ballardian desert necropolis. (“Candid Camera”)
Thus, as many of his subjects died, Warhol’s cinematic portraits remained as mute testimony to their brief, incandescent existences. In the 1950s, 1960s, and 1970s, myriad movies addressed the human desire for self-immolation, including the forgotten Ladybug, Ladybug (1963), The Earth Dies Screaming (1964), and Damnation Alley (1977), along with a number of sensationalized “documentaries” purporting to depict humankind in the last days before Armageddon, the most famous of which is perhaps The Late Great Planet Earth (1979), cobbled together from stock footage and some hastily recreated sequences and narrated by a transparently desperate Orson Welles.
All of these enterprises—the deceptions, the recreations, and the staged immolations—have one thing in common: they contain the seeds of their own destruction. Nixon was clearly eager to see himself implode, at some level, in front of an international audience. Warhol’s desire for self-publicity is justly legendary. Not surprisingly, Hollywood eagerly grabbed the baton of willful self-destruction and ran with it, imagining a world in which destruction is not something to be avoided, but the desired object of all ambition. One could argue that the very act of construction anticipates destruction, as E. B. White famously pointed out in his 1949 study, Here Is New York. At the conclusion of the text, after celebrating the city’s multicultural heritage and magnificent urban sprawl, White sounded a note of warning:
The city, for the first time in its long history, is destructible. A single flight of planes no bigger than a wedge of geese can quickly end this island fantasy, burn the towers, crumble the bridges, turn the underground passages into lethal chambers, cremate the millions. The intimation of mortality is part of New York now; in the sound of jets overhead, in the black headlines of the latest edition.
Of all targets, New York has a certain clear priority. In the mind of whatever perverted dreamer might loose the lightning, New York must hold a steady, irresistible charm. (qtd. in Frank 9)
Thus, long before the prophets of disaster turned Armageddon into a pop culture pastime, the author of Charlotte’s Web and Stuart Little had a perfect fix on precisely what makes Manhattan so alluring: its vulnerability. Even something so illimitable as the Internet has an “end,” appropriately titled “The Last Page of the Internet.” When one arrives at this cyberdestination, one is confronted with these words: “Attention: You have reached the very last page of the Internet. We hope you have enjoyed your browsing. Now turn off your computer and go outside and play.” This is followed by a hyperlink to take the viewer back to the “boring old Internet”; appropriately enough, it is a dead link (1112 Networks). Even hyperspace has a putative end. Hollywood, although it increasingly lacks imagination, continues to churn out new, eminently exploitable product. The New Guy offers us yet again the story of the geek turned cool; About a Boy (2002) presents us with the unlikely spectacle of Hugh Grant in a film that might well be titled One Man and a Baby, practically a remake of the Adam Sandler vehicle Big Daddy (1999), itself a recycling of a premise used many times before. The Bourne Identity (2002) is a remake of a 1988 television movie starring Richard Chamberlain, based on the novel by Robert Ludlum; now Matt Damon takes Chamberlain’s place. Scooby-Doo (2002) recycles the long-running Hanna-Barbera cartoon series as a live action/digital animation hybrid. Steven Spielberg serves up yet another dose of predictable action in Minority Report (2002), a futuristic thriller in which Tom Cruise plays a police officer framed for a murder he has not yet committed, interspersed with the spectacular visual leaps and set pieces audiences have come to expect from a “summer movie.”
The Powerpuff Girls make their big-screen debut in The Powerpuff Girls (2002), while the Spy Kids are back for more predictable misadventures in Spy Kids 2: The Island of Lost Dreams (2002). The Tuxedo (2002) presents us with the unlikely pairing of Jennifer Love Hewitt and Jackie Chan in one of Chan’s trademark action comedies (the sole difference being that, even in middle age, Chan still does his own stunts, thus lending a welcome touch of verisimilitude to a genre normally populated by an endless succession of doubles). Eight Legged Freaks (2002) once again trots out the menace of gigantic killer spiders on the loose, redolent of Tarantula (1955), Them! (1954), and Earth vs. the Spider (1958). Insomnia (2002) is a remake of a superb 1997 Norwegian film of the same title, starring Stellan Skarsgård as a dysfunctional detective investigating a murder case. In the new version, Al Pacino replaces Skarsgård in the lead role, the action moves to Alaska, and Christopher Nolan, who did such a good job with his debut feature Following (1998), takes over the director’s chair. Certainly Nolan is an appropriate choice for such dark material, but why is he doing a remake when he has shown himself capable of brilliant work as an original auteur? Mel Gibson stars in M. Night Shyamalan’s Signs (2002), another film about crop circles and UFOs. Nothing new, nothing original; all is the same. Is there no alternative to this endless procession of prefabricated blockbusters?
Agnès Varda’s The Gleaners & I (2000; original title Les glaneurs et la glaneuse, 2000) is one alternative vision to the current glut of cynically calculated Hollywood remakes. The film opened in 2001 in New York City in a limited commercial release. In The Gleaners & I, Varda documents the activities of a group of French social “outsiders,” who scavenge food, appliances, housing materials, and other essentials from the trash that others create. Shooting with a small, handheld digital video camcorder rather than a conventional 35mm film camera, Varda demonstrates that it is possible to create a compelling alternative to the mainstream Hollywood film with the simplest of equipment. Near the conclusion of The Gleaners & I, Varda focuses on a young man who has jettisoned a promising career to become a dedicated recycler of society’s cast-off goods. He lives in a homeless shelter and teaches English to some of the émigré inhabitants in his spare time. Varda offers us a view of the high and low in French society: the landowners and magistrates, who are none too sympathetic to the gleaners’ activities, and the gleaners themselves, unshaven and unkempt, living in trailers without running water or electricity, but still possessed of both generosity of spirit and the faith that life is worth fighting for.
However, millions of people will never see Varda’s film, even at an art house. For the most part, we have lost our access to the vision of filmmakers from other cultures and other countries. Only large metropolitan centers such as New York, Paris, London, and Amsterdam offer the public any real choice in the films they see. As Uberto Pasolini, the producer of The Full Monty (1997), a surprise hit, bitterly noted even after that film’s success:
Distribution, distribution, distribution—that’s the issue […] the whole business of people saying to European producers that you just need to make films audiences want to see is complete crap. There are American movies that should not be in 100 theaters, ghastly movies with terrible reviews that no one cares about, but because a major has the muscle they get them onto those screens. (qtd. in Miller, Govil, McMurria, and Maxwell 148)
The Full Monty became a box-office bonanza only after 20th Century Fox, sensing a certain appeal in the film’s rough-and-tumble sexuality, decided to pick up the negative for international distribution. As Miller, Govil, McMurria, and Maxwell report, other films have not been as lucky. Ken Loach’s Riff-Raff (1990) was pulled from UK theaters to make way for Ron Howard’s Backdraft (1991), even though the latter film had already performed dismally in the United States. Similarly, Louis Malle’s Damage (1992; original title Fatale, 1992) lasted only a week in UK theaters before it was taken off to accommodate a prearranged release date for a Hollywood film (Miller, Govil, McMurria, and Maxwell 148). Without the imprimatur of a major distributor, most overseas films languish in the limbo of big-city bookings (“now playing in selected cities”), DVD, or cable. And as we have seen, if the film is not in English, dubbing or subtitling will no longer suffice. The film must be remade to accommodate US audiences, with American stars, and then exported, in most cases, to the very country in which the original film was produced. Thus Hollywood’s hyperconglomerate vision colonizes the world, even as those members of the public who know that an alternative cinema exists are denied access to it. Such is the current tyranny of images that informs moving image production, distribution, and reception in the contemporary dominant cinema.