Chapter 8

Digital cinema and the internet

The mini-majors invested in computer-generated imagery (CGI) in the 1980s as part of their formula for making bigger blockbusters that could challenge the studios. The studios not only met the challenge; they made CGI a staple of Hollywood filmmaking. The incorporation of digital images into film and television is as much a revolution as sound, color, or widescreen cinema. And as competition from small and international media companies has increased, Hollywood has invested more and more in CGI blockbusters. Small independent companies may be able to create media tailored to fragmented, niche internet audiences. But only Hollywood can put $200 million on screen and reach the global mass audiences that have been the industry’s purview since just after World War I.

Computer generated images

CGI’s genealogy begins with computer-drawn geometrical shapes, called vector graphics, used in 1950s military flight simulators. In the 1960s university computer science departments started to investigate the possibilities of computer graphics, and in 1962 MIT graduate student Ivan Sutherland wrote a program called Sketchpad, which allowed users to draw images directly on the cathode ray tubes of their monitors. Sutherland joined University of Utah professor David Evans to start the graphics firm Evans & Sutherland (E&S), which helped to commercialize computer-aided design (CAD) software. CAD standardized the language of graphic design, and it allowed media animators to create wireframes, the digital skeletons that underlie CGI animation.
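Conceptually, a wireframe is little more than a table of three-dimensional vertices plus the edges that connect them, which a vector-graphics display projects onto the screen and draws as line segments. The short Python sketch below illustrates that idea with a cube; the model and the simple perspective projection are invented for the example and are not drawn from any particular CAD package.

    # A minimal sketch of a wireframe: a "digital skeleton" is just a list of
    # 3-D vertices plus the edges that connect them. The cube and the simple
    # perspective projection below are invented for illustration.
    from typing import List, Tuple

    Vertex = Tuple[float, float, float]
    Edge = Tuple[int, int]  # indices into the vertex list

    # Eight corners of a unit cube centered on the origin.
    cube_vertices: List[Vertex] = [
        (x, y, z)
        for x in (-0.5, 0.5)
        for y in (-0.5, 0.5)
        for z in (-0.5, 0.5)
    ]

    # Twelve edges: connect corners that differ in exactly one coordinate.
    cube_edges: List[Edge] = [
        (i, j)
        for i in range(8)
        for j in range(i + 1, 8)
        if sum(a != b for a, b in zip(cube_vertices[i], cube_vertices[j])) == 1
    ]

    def project(v: Vertex, camera_distance: float = 3.0) -> Tuple[float, float]:
        """Perspective-project a 3-D vertex onto the 2-D screen plane."""
        x, y, z = v
        scale = camera_distance / (camera_distance + z)
        return (x * scale, y * scale)

    # A vector-graphics display would then draw each edge as a 2-D line segment.
    segments = [(project(cube_vertices[i]), project(cube_vertices[j])) for i, j in cube_edges]
    print(f"{len(cube_vertices)} vertices, {len(cube_edges)} edges, {len(segments)} line segments")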

In 1965 Sutherland wrote an essay called “The Ultimate Display,” in which he proclaimed that anything was now possible in the realm of computer representation. Filmmakers were no longer bound by the laws of physics and perception. Avant-garde animators John and James Whitney were already experimenting with psychedelic imagery, and their work influenced title designer Saul Bass, whose early computer animations for the credits of Alfred Hitchcock’s Vertigo (1958) suggested psychological space. These experiments with nonnaturalistic imagery were the exceptions, however, and in general Sutherland’s manifesto fell on deaf ears. Hollywood filmmakers worship at the altar of realism, and CGI has overwhelmingly been used to create ever more realistic effects.

CGI began to take off in 1970s films, where it was used primarily to simulate futuristic computer displays. The science fiction drama Westworld (1973) used CGI to represent a robot gunslinger’s point of view, and the sequel, Futureworld (1976), used wireframes to show a 3-D model of a hand displayed on a monitor. The latter sequence was created by Sutherland’s student Edwin Catmull, whose team at the New York Institute of Technology would become (and remains) a leader in the development of CGI. In 1977 George Lucas hired Catmull to head the computer graphics division at Lucasfilm, home of the effects house Industrial Light and Magic (ILM). Lucas later spun off Catmull’s division as the company Pixar, which was then sold to Apple cofounder Steve Jobs and later became the animation division of Disney.

The same year that Catmull came to Lucasfilm, Lucas used an extended vector graphic sequence in the Star Wars briefing scene that laid out the trench-run attack on the Death Star. (Star Wars’ real special effects breakthrough, however, was not CGI but the use of computer-controlled models known as “motion control.”) In the years that followed, the science fiction cycle set in motion by Star Wars included many computer-display simulations similar to the trench-run sequence. Star Trek II: The Wrath of Khan’s (1982) “Genesis effect” sequence was one of the most elaborate. The scene in Star Trek is framed by a computer panel, almost like quotation marks telling the audience that they are expected to understand the CGI effects only as a computer simulation on a screen, not as an integrated part of the film’s live-action world.

Catmull’s Lucasfilm division took CGI to a new level in the 1980s, creating fully digital characters. The division developed a computer called the Pixar Image Computer, which later gave its name to the spun-off animation company. The Pixar computer was slow at first; it took sixteen hours to scan one minute of film, but it could create sophisticated composites of digital and live-action material. George Lucas used the Pixar computer in Star Wars: Return of the Jedi (1983), and it was then used to create the first photorealistic composite CGI character, the Stained Glass Knight in Young Sherlock Holmes (1985).

After that, ILM developed a specialty in the digital morphing of characters—characters who could change shape. It became de rigueur for every fantasy and science fiction film in the late 1980s to have a morphing character. Some of the highlights included Joe Dante’s Innerspace (1987), Ron Howard’s Willow (1988), and Steven Spielberg’s Indiana Jones and the Last Crusade (1989).

The digital-character breakthrough came with James Cameron’s Terminator 2: Judgment Day (aka T2) (1991), produced by mini-major Carolco. Cameron had used an abstract CGI monster in The Abyss (1989), and Carolco had used some CGI the year before in Total Recall (1990). But T2 contained a character, the T-1000 android, who morphs into and out of a liquid metal state throughout the entire film. The T-1000 was not depicted as a figure on a computer screen or given an amorphous shape, as in earlier films. T2 seamlessly creates composite shots mixing CGI and live-action footage. There are only forty-seven CGI shots in the movie, but Cameron used them judiciously to give the impression of many more. In some scenes, the film cheats, very briefly using a stunt double in a silver suit to fake the appearance of the CGI-created liquid metal android. But you really need to be looking for the shots to see them.

After T2, even more CGI characters appeared, and there was speculation about doing away with living actors in favor of digital characters. George Lucas’s three Star Wars prequels used an unprecedented number of composite shots as well as several fully digital characters, but Lucas saw the limitations. In a revealing sequence in the making-of documentary included on the DVD of Star Wars Episode I: The Phantom Menace (1999), Lucas is shown working closely with an animator to get Yoda’s ears and vocal inflection just right for a single line of dialogue. When Lucas is finally happy with the shot, he says that digital actors will not replace live actors, because it is even more difficult to get a performance out of the digital actors. The animator adds that Yoda may be digital, but he is still created by humans. Digital characters actually add layers of human interaction rather than taking them away.

In addition, most digital characters are first played by real actors in motion-capture studios. Actor Andy Serkis has specialized in playing digital characters in The Lord of the Rings trilogy (2001–2003), King Kong (2005), and the “rebooted” Planet of the Apes franchise (2011–present), among other roles. In a motion-capture studio, actors wear suits that allow cameras to record the outline of their movements. Animators then flesh out the digital character by layering images over the outlines created by the motion-capture software. Films like Sky Captain and the World of Tomorrow (2004), produced by Jon Avnet, and Sin City (2005), based on the graphic novel by Frank Miller, began to be shot entirely on digital stages, giving filmmakers free rein to generate the settings around the actors in postproduction.
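The underlying data flow is straightforward to picture: the capture system records a set of three-dimensional joint or marker positions for every frame, and the animation software maps those positions onto the digital character’s skeleton before animators layer the finished imagery on top. The sketch below is a minimal, hypothetical illustration of that mapping step in Python; the joint names, numbers, and uniform-scale “retargeting” are invented for the example, and real pipelines involve calibration, solving, and extensive cleanup.

    # A minimal sketch of the motion-capture idea: the suit's markers yield a
    # per-frame set of 3-D joint positions for the live actor, and the digital
    # character's skeleton follows those positions. All names and numbers here
    # are invented for illustration.
    from typing import Dict, List, Tuple

    Position = Tuple[float, float, float]
    Frame = Dict[str, Position]  # joint name -> recorded position

    def retarget(frames: List[Frame], scale: float = 1.1) -> List[Frame]:
        """Map the recorded actor joints onto a (slightly taller) digital character.

        The 'retargeting' here is just a uniform scale; production tools do far
        more than this (solving, filtering, manual cleanup).
        """
        return [
            {joint: (x * scale, y * scale, z * scale) for joint, (x, y, z) in frame.items()}
            for frame in frames
        ]

    # Two invented frames of capture data for a three-joint arm.
    capture: List[Frame] = [
        {"shoulder": (0.0, 1.5, 0.0), "elbow": (0.3, 1.2, 0.1), "wrist": (0.5, 1.0, 0.2)},
        {"shoulder": (0.0, 1.5, 0.0), "elbow": (0.3, 1.3, 0.1), "wrist": (0.6, 1.2, 0.2)},
    ]

    for frame in retarget(capture):
        print(frame)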

A small but interesting group of films broke with Hollywood’s emphasis on creating realistic effects in science fiction and fantasy films to explore more painterly applications of CGI. What Dreams May Come (1998), starring Robin Williams, represented the afterlife as a series of living paintings, complete with wet paint squishing under characters’ feet. Pleasantville (1998) switched back and forth between color and black and white, even in the same shot. And Moulin Rouge (2001) featured a long opening sequence that layered CGI and live action to give the effect of stepping into a diorama of late nineteenth-century Paris.

But these films, which seemed to take up Sutherland’s call to let imagination rather than reality guide CGI, were the exception that proved the rule. Many more films used CGI to create realistic storms, animals, and crowds, from Jurassic Park (1993) and Twister (1996) to Elizabeth (1998) and Gladiator (2000). Audiences have learned to marvel at the spectacle of realism: dinosaurs brought to life or historical periods recreated.

CGI has become an aesthetic tool of choice in Hollywood, even when the spectacle is invisible to audiences. When Joel and Ethan Coen (the Coen brothers) wanted a washed-out, 1930s Dust Bowl look for their O Brother, Where Art Thou? (2000), for example, cinematographer Roger Deakins did not use filters or special film processing. He shot the film in bright colors and used the tools of digital postproduction to adjust the color of the entire film (a standard practice today). It was also one of the first Hollywood films to use a digital intermediate process, going from film to digital to film again.
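In data terms, a digital color grade of this kind is an arithmetic operation applied to every pixel of every frame. The toy Python function below desaturates an image and warms it toward a dusty palette to suggest how a washed-out look can be dialed in after the fact; it is only a sketch of the general idea, not the actual process Deakins used on O Brother.

    # A toy version of a digital color grade: desaturate the footage and push it
    # toward a warm, dusty palette. This is only a sketch of the general idea,
    # not the process actually used on O Brother, Where Art Thou?
    import numpy as np

    def dustbowl_grade(frame: np.ndarray, saturation: float = 0.35) -> np.ndarray:
        """frame: an H x W x 3 array of RGB values in the range [0, 1]."""
        # Per-pixel luminance (standard Rec. 601 weights).
        luma = frame @ np.array([0.299, 0.587, 0.114])
        gray = np.repeat(luma[..., np.newaxis], 3, axis=-1)
        # Blend the original colors toward gray, then warm the result slightly.
        desaturated = saturation * frame + (1 - saturation) * gray
        warm_tint = np.array([1.05, 1.0, 0.85])
        return np.clip(desaturated * warm_tint, 0.0, 1.0)

    # Grade a random "frame" just to show the call; a real grade runs per shot.
    graded = dustbowl_grade(np.random.rand(4, 6, 3))
    print(graded.shape)  # (4, 6, 3)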

ILM remained at the forefront of digital effects, while the Pixar spinoff transformed animated films. After George Lucas sold Pixar to Steve Jobs in 1986, the digital animation team worked primarily as a military subcontractor. In their spare time, they created experimental shorts to be shown to other engineers at the SIGGRAPH computer graphics conference. That year, Pixar hired animator John Lasseter, who began to insert more complex narratives into the Pixar shorts. Luxo Jr. (1986) combined new shadow-generating software with an elegant narrative, and it was nominated for an Academy Award. Pixar followed Luxo Jr. with a series of award-winning shorts. Steve Jobs, who at that point was no longer at Apple Computer, tried to sell Pixar several times, but he had already invested so much money in the company that it did not pay to sell.

Then in 1995 Pixar released the first full-length CGI feature, Toy Story. Toy Story did for digital animation what Snow White and the Seven Dwarfs (1937) had done for hand-drawn animation: it led the field with a technical and artistic masterpiece. Pixar had an initial public offering that made Steve Jobs a billionaire (amazingly, his first billion did not come from Apple). And Pixar followed Toy Story with a string of successful animated films, including A Bug’s Life (1998), Finding Nemo (2003), The Incredibles (2004), Cars (2006), Ratatouille (2007), and Up (2009).

As the technology grew more sophisticated, so did the animation. On 1995 computer hardware, the average frame of Toy Story took two hours to render. A decade later, on 2005 hardware, the average Cars frame contained so much detail that it took fifteen hours to render, despite a three-hundredfold overall increase in computer power. DreamWorks SKG’s animation division (later DreamWorks Animation), headed by former Disney executive Jeffrey Katzenberg, and other studios began making digitally animated features. The technical and creative boom created a windfall for the studios. The average gross profit of the ten digitally animated films Hollywood produced between Toy Story and Cars was $200 million.
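The render-time figures give a rough measure of how much scene complexity grew: if each Cars frame took about seven and a half times longer than a Toy Story frame on machines roughly three hundred times faster, the computation per frame grew by a factor on the order of 2,250. The back-of-the-envelope calculation below simply restates that arithmetic, treating the three-hundredfold figure as the round number it is rather than a precise benchmark.

    # Back-of-the-envelope arithmetic for the render-time comparison above.
    toy_story_hours = 2      # average render time per frame on 1995 hardware
    cars_hours = 15          # average render time per frame on 2005 hardware
    hardware_speedup = 300   # the rough figure cited for the decade's gain

    # Relative work per frame = (time ratio) x (hardware speedup).
    complexity_growth = (cars_hours / toy_story_hours) * hardware_speedup
    print(f"Each Cars frame required roughly {complexity_growth:,.0f}x the "
          f"computation of a Toy Story frame.")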


11. In the 1990s and 2000s, animators’ ambitions continually outpaced technology. It took a computer more than two hours to render each frame of Pixar’s first animated feature, Toy Story (Pixar, 1995) (above). A decade later, the average frame of the movie Cars (Pixar, 2006) was so complex that it took fifteen hours to render, despite a three-hundredfold increase in computer power.

CGI eventually displaced earlier forms of animation, and in 2006 Disney took over Pixar, making Steve Jobs Disney’s largest shareholder and placing Lasseter and Catmull in top creative positions. As a result of the cross-fertilization created by the merger, Disney content was among the first released on Apple’s iTunes video, and Apple created new media software to support Pixar. A new synergy between Northern and Southern California, between technology companies and Hollywood, was solidified.

From the internet back to the nickelodeon

The internet shook Hollywood’s foundation. For close to ninety years, Hollywood thrived on the twentieth century’s broadcast model of production and distribution. During most of Hollywood’s history, a small, geographically concentrated group of people created media that was mass distributed and consumed by millions.

This was not the case during the pre-Hollywood nickelodeon period, in which exhibitors bought short films and curated them while mixing the images with music and dialogue. Nickelodeons were a hybrid of mass global and local media—they were “glocal”—as were most forms of entertainment in the preceding centuries: local theater, prerecorded music, and oral storytelling. In the nineteenth century (and before) a wealthy few could travel to a city and attend an opera, concert, or the theater. But the more common experience of culture was a traveling troupe, a local production, or a domestic recital.

Hollywood and twentieth-century media technology (sound film, television, and radio) changed all of that, and it became possible for a few artists to reach a global audience. Vaudeville performers were displaced by the Charlie Chaplin films that played on hundreds of screens at once. Movie-theater piano players had to find new jobs when synchronized sound put the New York Philharmonic in every small town theater. Much of the local flavor of entertainment disappeared.

Seen on a long historical scale, the twentieth-century broadcast model was an anomalous blip in the history of world culture. The internet not only ushered in a new epoch in entertainment; it also undid the broadcast model’s one-to-many distribution system and restored the interactivity, local participation, and many-to-many cultural exchange that existed before the birth of mass media.

Hollywood has reacted to this change, rather than leading it, but the studio system has also adapted to the age of media convergence and participation. The challenge of the internet is a big one, but the story of Hollywood’s response should by this point be a familiar one of competition, adjustment, and triumph.

In 1992 media scholar Henry Jenkins wrote a book about television fans called Textual Poachers. Other scholars read with fascination about this small subculture of viewers who did not passively consume television like proverbial couch potatoes. Fans, Jenkins explained, used television as a jumping-off point to write novels about the characters, to reveal patterns in television shows through re-edited videos, and to meet with other fans in communities of shared values. In 2006 Jenkins wrote another book, Convergence Culture, which described many tendencies similar to those explored in his earlier book, but this time he was writing about mainstream viewing practices, not a small subculture.

In the age of the internet, media consumers are active and participatory. They watch television while sharing opinions in chat rooms. They recut movies into music videos that comment on popular culture. They upload amateur fiction to the internet for others to read. They use video games as engines to create movies, called “machinima.” They make parody movie trailers that reveal film marketing conventions. They edit clips from television series to explore the lives of minor characters. A new media culture has emerged in which media consumers are also producers.

In 2005 a series of online video websites that allowed users to upload and share videos began to appear, YouTube being the most popular. The immediate flood of videos uploaded to YouTube revealed the pent-up desire of remixers and active viewers to form communities and share their creations.

The Hollywood studios responded in a number of different ways, very gradually accepting the increased participation of media audiences. At first, some studios used copyright law to try to silence fans. One fan, for example, recut Star Wars Episode I: The Phantom Menace, reducing the long political speeches and minimizing the screen time of Jar Jar Binks, the digital character who invoked offensive African American stereotypes. George Lucas had the video, known as The Phantom Edit, enjoined from circulation. NBC tested the waters of YouTube by quietly uploading a video, “Lazy Sunday,” made for the television show Saturday Night Live, only to remove it after five million views and reupload it to the websites of NBC and the advertising-supported joint venture Hulu. An employee of the talent agency Creative Artists Agency (CAA) joined with experienced filmmakers to surreptitiously make a video blog, or “vlog,” Lonelygirl15 (2006–2008), showing that professional talent could raise the quality of new internet genres. On a larger scale, the media conglomerate Viacom, which had bid for and lost the company YouTube to Google, sued the video-sharing site for mass copyright infringement; YouTube ultimately triumphed.

Over time, the Hollywood studios learned to work with YouTube and welcome user-made videos. In 2007 YouTube instituted the Content ID system, which both blocks pirated media and allows producers to profit from user-uploaded material. With the Content ID system, studios upload their video and audio libraries to YouTube. When a user posts a video, the system checks it against the studio libraries. Studios have the option of blocking the videos, which many did at first, occasionally leading to court disputes. But studios also have the option of allowing the videos to be published, with ad revenue shared between Google and the original producer, an increasingly common practice.
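At its core, a system like Content ID is a large-scale matching problem: reference films and soundtracks are reduced to compact fingerprints, uploads are fingerprinted the same way, and a sufficient overlap triggers whatever policy the rights holder has chosen. The Python sketch below is a deliberately simplified, hypothetical illustration of that flow; the hashing scheme, the match threshold, and the policy names are invented, and YouTube’s actual system is far more sophisticated.

    # A deliberately simplified sketch of a Content ID-style match: reference
    # material is reduced to fingerprints, uploads are checked against that
    # index, and a match triggers the rights holder's chosen policy. The hashing
    # scheme, threshold, and policies are invented for illustration.
    import hashlib
    from typing import Dict, List

    def fingerprints(samples: List[bytes], window: int = 4) -> List[str]:
        """Hash overlapping windows of audio/video samples into short fingerprints."""
        return [
            hashlib.sha1(b"".join(samples[i:i + window])).hexdigest()[:12]
            for i in range(len(samples) - window + 1)
        ]

    # A tiny stand-in for a studio's reference library, with a per-title policy.
    reference_index: Dict[str, Dict] = {
        "studio_feature_film": {
            "prints": set(fingerprints([bytes([b]) for b in range(32)])),
            "policy": "monetize",  # or "block" / "track"
        },
    }

    def check_upload(samples: List[bytes]) -> str:
        """Return the policy to apply to an upload, or 'publish' if nothing matches."""
        upload_prints = set(fingerprints(samples))
        for title, entry in reference_index.items():
            overlap = len(upload_prints & entry["prints"])
            if overlap >= 5:  # arbitrary match threshold for the sketch
                return f"{entry['policy']} (matched {title}, {overlap} fingerprints)"
        return "publish"

    # An upload containing a chunk of the reference material triggers the policy.
    print(check_upload([bytes([b]) for b in range(10, 40)]))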

There is profit for studios in user-generated content, and in some cases popular YouTube content providers, or “YouTube stars,” as they are known, can profit from their videos as well. In addition, many YouTube stars have had offers to work with larger media companies. A hybrid economy has emerged in which media conglomerates and their no-longer-passive audiences can jointly participate in the creative and financial ecosystem.

The surge in consumer-made media complements the studios’ move toward bigger media franchises and tent-pole films with stories and characters that span media, a practice known as “transmedia storytelling.” The narrative of the Matrix franchise (1999–2003), for example, unfolded across movies, an animated series, video games, and internet forums. Producers of the popular science fiction series Battlestar Galactica (2004–2009) actively communicated with fan communities online to drive the narrative. And many media companies have experimented with the creation of mashup engines, software that allows consumers to remix commercial media, as a virtual equivalent of playing with action figures or acting out scenes from a favorite movie. In these experiments, media companies have exerted varying degrees of control over users. But even if the steps are small, they are moving toward a hybrid media culture in which consumers and conglomerates, amateurs and professionals cohabit the same media landscape.

While Hollywood studios began to create more internet content, writers realized that, as in the early days of television, their contracts did not extend to the new medium. In 2007–2008 Hollywood writers went on strike, seeking to be compensated when their work was viewed online.

One unintended consequence of the writers’ strike was the production of a self-funded three-part video opera created by highly successful writer-director Joss Whedon and his friends, family, and frequent collaborators. Their trilogy, Dr. Horrible’s Sing-Along Blog (2008), is a fictionalized version of a popular internet fan genre: the vlog. It both celebrates and critiques the genre, with an ironic reference in its title to an early form of audience interaction, the sing-along movie with a bouncing ball over the onscreen lyrics. Just as fan works often explore the lives of minor or underappreciated characters, Dr. Horrible looks at the life of a misunderstood supervillain.

The series’ cult popularity made Dr. Horrible a profitable hit on Hulu, iTunes, and DVD. And its female lead, Felicia Day, was launched as an internet celebrity. At every level, Dr. Horrible highlighted the new models of genre, celebrity, and distribution that the internet enabled. Many new players began to enter the online media business, and a new cycle of independent production began to challenge Hollywood. After Dr. Horrible, for example, Felicia Day’s own YouTube series about video gamers, The Guild (2007–2013), released a second season funded not by Day’s small company or a traditional media company but by software giant Microsoft.

The studios’ subsequent attempts to tighten control over online film and television distribution have generally backfired, creating even more media producers outside of the Hollywood system. The two largest subscription media streaming services, Netflix and Amazon, both moved into video production when Hollywood studios raised the licensing fees for streaming video. Unwilling to pay the new fees, Netflix and Amazon took aggressive and very different approaches to creating new video content, although both leveraged the so-called big data of the internet.

Netflix started streaming video online in 2007, and within a few years Netflix video streams accounted for over one-third of US internet traffic. When Hollywood studios began to price Netflix out of the market for licensing their movies and TV shows, Netflix used its massive store of user data to guide its move into online video production. Netflix had sponsored small independent films and documentaries, but its first large-scale foray into video production was 2013’s House of Cards, a remake of the British series of the same name.


12. Microsoft took over producing Felicia Day’s successful web series The Guild after the series tapped into the commercial market for fan culture on YouTube, the Xbox, and other internet video platforms. In the music video “(Do You Wanna Date My) Avatar,” The Guild’s cast performs dressed like their virtual-world avatars.

In some ways producing House of Cards was a safe and Old Hollywood decision. It was based on a presold property, directed by proven director David Fincher, and starred well-known actor Kevin Spacey. What made this move different was that Netflix’s production team already knew every film and television show that customers had ever watched through the service, and they used the data to inform their production decisions. House of Cards was a critical and popular success, and Netflix followed it with a long line of hits, including new seasons of the cancelled Fox comedy series Arrested Development (Netflix, 2013–present) and an original series, Orange Is the New Black (2013–present).

Amazon, also awash in user data, took a different approach. Amazon crowdsourced its new video projects, allowing users to vote for their favorite shows. It is easy to think that Netflix and Amazon are letting the wisdom of the internet crowd guide their media production, but both companies have used the data as only one of many pieces of information guiding creative decisions. If there is an algorithm for creating successful movies and television shows, it has not yet been discovered. Creative individuals still play the lead role in the production of online media.

As in previous periods of technological change and successful independent film movements, Hollywood has responded to the internet by incorporating its greatest competitors. In 2014 Walt Disney purchased YouTube’s most popular network of channels, Maker Studios, for more than $500 million. On a smaller scale, Felicia Day’s YouTube channel, Geek and Sundry, was bought by the production company Legendary Pictures for an undisclosed sum. The Philadelphia-based cable company Comcast has emerged as a vertically integrated powerhouse, acquiring the film and television conglomerate NBCUniversal. Comcast also has a major online streaming service, Xfinity, which has acquired licenses for much of the content that Netflix and Amazon can no longer afford.

As they have since the 1910s, technological and cultural changes have created new challenges for Hollywood, and some shakeup of the studio system is inevitable. But so far there is no evidence to suggest that Hollywood will not continue to grow bigger and stronger, dominating global media production for the foreseeable future.