CHAPTER 4

PUTTING FEAR IN YOUR EARS

Sound and music play an important part in horror film tradition. Audio is used across all cinematic genres to enhance the emotions of a scene and support a movie’s storytelling. It’s hard to imagine what Jaws (1975, dir. Steven Spielberg) would be like without John Williams’s famous score, or Halloween (1978, dir. John Carpenter) without Carpenter’s energetic 5/4 theme, or any of the many movies that feature children singing lullabies. It’s funny; music isn’t inherently scary, and children singing shouldn’t creep us out as much as it does. One of the first instances of a child’s song giving us the creeps in a horror movie is in The Innocents (1961, dir. Jack Clayton). Since then, it’s been firmly established that if we hear children’s voices, something spooky is bound to happen.

We take our cues from the sounds and scores of horror films. We know that sudden loud sounds are meant to make us jump and to enhance a startle. We know that when the score starts building and swelling with intensity, we should be preparing for some sort of tension or important moment. We know that when things get too quiet, something sudden and shocking is about to happen. And we are often introduced to musical motifs that help us identify heroes and villains. Think of the theme from Psycho (1960, dir. Alfred Hitchcock), composed by Bernard Herrmann: we hear the main melody when we are driving along with Marion Crane (Janet Leigh), and don’t really hear it again until after she is murdered. This soaring melody can’t belong to Norman Bates (Anthony Perkins) at all. The only thing we can ever associate with him is the eee! eee! eee! of screeching violins. Hearing the sounds and scores we expect in horror films allows us to play along as the audience.

Halloween further reinforces how important these horror music rules can become. John Carpenter once screened an early cut of Halloween, minus the sound effects and music, for an executive from 20th Century Fox. When she wasn’t scared at all by the movie, he became determined to “save it with music.” He worked with Dan Wyman to create the now-recognizable synth-based score and cut in all sorts of harsh-sounding stingers to coincide with the attacks of the Shape (Michael Myers’s credited character name in the first film of the franchise). Months later he ran into the executive again, and she said that she now loved the movie. According to Carpenter, the only thing he’d changed, the only thing that had made the difference, was adding sound.

Of course, that doesn’t mean movies can’t break their own rules.

28 Days Later (2002, dir. Danny Boyle) shows the power of sound and silence in equal measure. After a loud and chaotic prologue in which animal rights activists release violent RAGE virus–infected chimpanzees from a lab only to be attacked and infected themselves, the title card comes up in silence. Jim (Cillian Murphy) wakes up from a coma in a London hospital’s intensive care unit. It is clearly abandoned. Horror movie rules dictate that with this much silence, a threat should be lurking just around the corner. There should be dead bodies on the ground. Instead, Jim wanders from abandoned space to abandoned space littered with trash in a way that signifies he has clearly awakened to some terrible aftermath. The only sounds are his own footsteps and his lone voice calling out “Hello.”

As he wanders into Piccadilly Circus, the song “East Hastings” by Godspeed You! Black Emperor slowly simmers to life. (Fun fact: the song—the only song this scene could have been cut to, in Danny Boyle’s mind—wasn’t easy to get. Godspeed You! Black Emperor is a Canadian experimental music collective known for being very anti-corporate in their politics; they don’t tend to license their music. Boyle managed to get an edited version of the song into his film, but he did not get permission to include it on the movie’s soundtrack.) Despite the swell of music, Jim is alone in a strange London void of people. He opens a car door, and an alarm goes off, buried in the music. It startles him, breaking the dreamy state of the sequence. It’s the first significant sound he’s heard since waking up.

Boyle did experiment with playing the vacant London scene without any music—with unexpected results. It turned out that, without music, the silence in this scene was too scary, and the contrast between absolute silence and the sound of the car alarm was too jarring. “You couldn’t do that in the cinema,” he explained to Amy Raphael, author of Danny Boyle: Authorised Edition. “People would have heart attacks. Pacemakers would short circuit.”

So, stripping sound from visuals clearly has an effect, but what about the reverse, taking visuals away? We do it when we think something scarier than we can handle might happen on-screen: we close our eyes, we cover our faces with our hands, or we hide behind our blankets or bags of popcorn to spare ourselves from seeing the worst parts of the movie. And does it work? Uh … no. Studies have concluded that closing your eyes against a scary scene is ineffective, because you can still hear what’s going on—and whatever images your brain conjures up will probably be even scarier than the scene you’re avoiding. Putting aside the fact that you’re missing out on what might be great horror visuals, closing your eyes might actually be enhancing your experience of horror sounds.

One such study was done by Talma Hendler, a neuroscientist and psychiatrist at Tel Aviv University in Israel. She had volunteers listen to spooky, Hitchcockian music with their eyes open or closed, and did the same with comparatively neutral music. Brain scans revealed that the amygdala fired up while participants were listening to the scary themes and unpleasant dissonance with their eyes closed. Other areas of the brain were co-activated with the amygdala: the locus coeruleus and the ventral prefrontal cortex (VPC), areas associated with both visceral and cognitive processing of emotional information. Interestingly, these results were not replicated when participants repeated the experiment in a dark room with their eyes open. It seems that having your eyes closed is an important factor.

Just like when you close your eyes to tune in better to your favorite song, closing your eyes helps you focus on sounds. Hendler and her team suggest that when we close our eyes, we trigger a mechanism that helps the brain amplify certain information and better process and integrate the emotional experience attached to what we’re hearing. Listening with your eyes closed, then, immerses you in the soundscape.

This immersion is part of why horror movies are often more effective when you watch them in a theatre with a good sound system (or if you happen to have a sweet surround sound setup at home). Humans are typically pretty good at sound localization: sensing the direction sounds are coming from and pinpointing which sounds matter more than others. When you’re watching horror in a space where you’re surrounded by speakers, your immersion is higher because you’re being engaged by auditory information that tells you more about your (movie) environment than what you’re seeing on the screen. You might be seeing a close-up of a character but hearing a twig snap behind you. The impression, then, is that you are the one with a killer behind you, at greater risk than the character on-screen.

How music is applied to a movie goes a long way toward determining the overall mood of the narrative. Think of the iconic pub fight scene in Shaun of the Dead (2004, dir. Edgar Wright). Shaun (Simon Pegg) and his pals are trapped in the Winchester pub with zombies pressing in against the windows. The pub owner, an old man but also definitely a zombie, is trapped inside the pub with them. Shaun and others take turns pummeling the zombie with pool cues as Queen’s “Don’t Stop Me Now” plays on the jukebox, their whacks perfectly timed with the song’s beat. In another room, David (Dylan Moran) frantically tries to find the right breaker to kill power to the jukebox, but ends up instead flashing lights on and off outside the pub, also in perfect synchronicity with the song. Imagine if “Don’t Stop Me Now” were stripped away from this scene completely and replaced with a standard horror score. What’s clearly a comedic scene automatically becomes something darker: a group of trapped people trying to fend off a zombie with useless tools like pool cues and throwing darts, while one of their own ineptly attracts more and more zombies to their location.


SCARE SPOTLIGHT: A QUIET PLACE (2018, DIR. JOHN KRASINSKI)



The Abbott family has worked hard to build their homestead. They are as self-sufficient as possible, growing their own crops and mending their own clothing. Everything is hand-modified to meet their specific needs: they use large leaves instead of plates, their walkways are marked by a thick path of white sand, and they communicate almost exclusively in American Sign Language. Their quiet, rural life would seem idyllic if it weren’t for the plague of monsters that kill anything they can hear.

In the world of A Quiet Place, as the name implies, silence is central to survival. This key component of the narrative posed a huge creative challenge for the film’s sound design team, led by Erik Aadahl and Ethan Van der Ryn. In a reversal of the usual sound design process, Aadahl and Van der Ryn were required to strip all the sound away and add it back in bit by bit. As they built the soundscape back up, they had to establish rules early on for how they deployed sound so that they didn’t introduce anything that would go against the logic of the film’s world. According to Aadahl in an interview with Vox, the sound design team got into the habit of announcing, “Dead!” if someone noticed that something was just too loud. If an unmasked on-screen action was even a touch too loud and didn’t attract any monsters, the threat would fall apart.

This meant that some sounds developed in unusual ways. In most films, when nature sounds such as crickets or bird calls are part of the soundscape, lone cricket chirps and individual bird calls are layered in. There are cricket beds in A Quiet Place, but no single cricket stands out with louder chirps than the others, because even for the crickets and birds, being louder than the base level of noise would automatically expose them to danger.

The few times the Abbott family have aural missteps, Aadahl and Van der Ryn made sure to push up the volume on the offending noise to enhance the contrast between the sound and the oppressive silence of the moments that precede and follow it. One of the few moments where music is played during the film, when Lee (John Krasinski) and Evelyn (Emily Blunt) take a moment for themselves to share a dance to a song played through Evelyn’s earbuds, feels alarmingly loud, even though it’s clear that the music is audible only to them.

There are actually three moments of complete silence, digital zero, in the film. These correspond with times when Regan’s (Millicent Simmonds) cochlear implant is turned off (when we are seeing from Regan’s perspective and her implant is turned on, we hear a low hum that we might not notice in a noisier film).

In turn, the audience of A Quiet Place felt the tension of so much silence, and the unrelenting risk that someone, on-screen or off, might make a noise and attract death. Moviegoers anecdotally reported being hyperaware of the sounds they and the people around them were making while seeing this movie in theatres. Crunching popcorn and shifting noisily in your seat suddenly felt like a gross betrayal, as if the audience’s sounds might affect the Abbott family’s fate. Personally, this was one of my favorite in-theatre horror experiences ever. The power of silence, and the audience’s collective motivation to participate in that silence, was palpable.


DISSONANCE

When it comes to scoring, the key in which music is composed has a similar influence on mood. We’ve historically attributed specific moods to different keys; as a general rule, major keys tend to be perceived as happy, minor keys as sad. Research suggests, however, that these perceptions are culturally learned rather than rooted in anything innate. Still, there are certain tricks to sound and music beyond stingers and scores that can get under our skin. This can be something as simple as which chords are played.

Dissonance involves unstable chords that demand to be resolved into more harmonious tones. They sound “wrong” and can actually feel uncomfortable to listen to, which makes them super-handy tools for building tension in a horror sequence. Believe it or not, there is a mathematical reason for this (and music is basically math, after all). The simplest way to put it is that if you play a chord of three tones, each of those tones has its own frequency. Picture each frequency as a wave with peaks and valleys. If you stack these three waves on top of one another, where they line up and where they don’t tells you a lot about their relationship to each other. Tones whose frequencies are related by a simple whole-number ratio tend to be consonant, or harmonious, and sound good together. The pattern they make has a sort of symmetry, and we can reproduce that symmetry as shapes known as Lissajous figures. For example, if two instruments are playing the exact same tone (same pitch, a 1:1 ratio between the tones), the Lissajous figure produced is a perfect circle. If one instrument plays the same tone an octave higher than the other (a 2:1 ratio), the resulting Lissajous figure takes the shape of a perfect figure eight. Tones whose frequencies are not related by simple whole-number ratios tend to be dissonant, or unharmonious, and sound off. Dissonance, with its use in tension building and release, does have its rightful place in music, though, especially rock. And despite its lack of harmony, people can still enjoy dissonance—I mean, heavy metal fans do exist.
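To make the ratio-to-shape idea concrete, here is a small Python sketch (my own illustration, not taken from any source in this chapter) that samples the Lissajous figure traced by two pure tones. With a 90-degree phase offset, a 1:1 ratio really does trace a perfect circle:

```python
import math

def lissajous(freq_x, freq_y, phase=math.pi / 2, samples=1000):
    """Sample the Lissajous figure traced by two pure tones."""
    points = []
    for i in range(samples):
        t = i / samples
        x = math.sin(2 * math.pi * freq_x * t + phase)  # first tone
        y = math.sin(2 * math.pi * freq_y * t)          # second tone
        points.append((x, y))
    return points

# Unison (1:1): every sampled point sits on the unit circle, x^2 + y^2 = 1.
unison = lissajous(1, 1)
print(max(abs(x * x + y * y - 1.0) for x, y in unison))  # essentially zero

# Octave (2:1): the same sampling traces a figure eight instead.
octave = lissajous(2, 1)
```

Swap in a frequency pair like (1.414, 1), roughly a tritone, and the sampled points never settle into a closed, symmetric figure, which is the visual counterpart of what our ears hear as dissonance.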

One dissonant interval popular in heavy metal earned the nickname Diabolus in Musica, or “the Devil in music”; in music theory terms it’s a tritone, an interval made up of three adjacent whole tones. Given its nickname, the story goes that the Church outright banned the use of the tritone, but there really isn’t evidence that this was true. The tritone’s dissonance doesn’t exactly make it a great candidate for church-friendly tunes, so composers were probably already avoiding it (and associating it with demons via a term like Diabolus in Musica probably didn’t help either).
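In modern twelve-tone equal temperament, the tritone’s mathematical awkwardness is easy to verify: six semitones multiply out to a frequency ratio of exactly the square root of 2, an irrational number that no simple whole-number ratio matches. A quick sketch (my own arithmetic, offered as illustration):

```python
# Equal temperament: each semitone scales frequency by 2**(1/12).
semitone = 2 ** (1 / 12)

tritone = semitone ** 6   # three whole tones = six semitones = 2**(1/2)
octave = semitone ** 12   # a clean 2:1 ratio
fifth = semitone ** 7     # ~1.498, very close to the consonant 3:2

print(round(tritone, 4))  # 1.4142, the square root of 2
```

The perfect fifth lands within a fraction of a percent of 3:2, which is part of why it sounds so stable; the tritone has no comparably simple ratio anywhere nearby.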

Can a music interval be so off-putting as to be truly diabolic? You can be the judge: the first three notes in the theme song for The Simpsons fit the bill for Diabolus in Musica, as does the interval between the tones in many fire truck and ambulance sirens.

Also, while you don’t have to have perfect pitch to appreciate dissonance, you do have to be able to differentiate between tones. Experiments done with amusic participants—people who are truly tone-deaf—demonstrated that they were immune to dissonance’s effects.

Dissonance is far from the only sound that can make humans feel uneasy. There are other sounds that are almost universally considered to be unpleasant, like nails on a chalkboard. Still other sounds might make you feel uncomfortable without you even consciously hearing them.

SOUNDS ON THE FRINGES OF HUMAN HEARING

The range of human hearing spans roughly 20 to 20,000 Hz. If your ears are young and the hair cells, a.k.a. the sound vibration sensors, aren’t damaged, then you might hear frequencies slightly beyond this range in perfect lab settings. In general, high frequencies (although not necessarily super high) tend to grab our attention. We need to hear screams to respond to potential dangers, and baby cries to take care of our young. The sound of snapping branches or brush crunching underfoot might signal the presence of an approaching threat.

Younger ears tend to be able to pick up higher audible frequencies than adult ears, since these higher-limit frequencies start to drop off in most humans as we age. This was the inspiration behind the brief popularity of mosquito ringtones around 2008. These ringtones subverted an earlier use of high-frequency tones by shop owners who tried to deter loitering teens by playing unwelcoming sounds that only younger ears could hear. As ringtones, these sounds played at around 17,000 Hz and were marketed as a way for teens to discreetly hear their phones in class without their teachers noticing. The downside of this product is that the sound, while audible to teens, is also annoying. The teacher might not hear it, but classmates could hear it and complain about it. I feel really lucky that I finished high school right before this particular trend took off, even if that didn’t stop me from playing around with mosquito tone generators online to see if my hearing was still “young enough” to hear them. My hair cells were still spry back then, but my most recent revisit to mosquito tones forced me to face the fact of my own aging and slowly diminishing hearing range.

At the other end of the frequency spectrum, sound that falls below the lower human limit of hearing is known as infrasound. Like all sound, infrasound travels in waves, but these waves are longer than those in audible sound and the peaks are farther apart. Even if we can’t hear infrasound, we can still perceive the vibrations caused by it, especially if the sound pressure levels are high enough. Even when they don’t realize they’re hearing it, people exposed to infrasound might report a sense of uneasiness if they’re sensitive to it.
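The claim that infrasound waves are longer is easy to quantify: wavelength is just the speed of sound divided by frequency. A back-of-the-envelope sketch in Python (assuming sound travels at about 343 m/s in room-temperature air):

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 degrees Celsius

def wavelength_m(frequency_hz):
    """Wavelength in metres of a sound wave travelling through air."""
    return SPEED_OF_SOUND / frequency_hz

print(round(wavelength_m(1000), 2))  # a mid-range tone: about 0.34 m
print(round(wavelength_m(17), 1))    # a 17 Hz infrasonic tone: about 20.2 m
```

A 17 Hz wave is some twenty metres from peak to peak, which is why we feel such frequencies as pressure and vibration rather than hearing them as a pitch.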

In 2003, psychologist Richard Wiseman and acoustic scientist Richard Lord teamed up with composer and engineer Sarah Angliss to create a mass experiment to explore the spooky emotional effects tied to infrasound. They held two back-to-back concerts in the Purcell Room, a concert hall in South London. Some of the songs were laced with infrasound while others were infrasound-free. When later surveyed, the concert guests, without knowing which songs were infrasound-infused, reported more unusual sensations, awe, and fear during songs that had 17 Hz tones playing beneath the music.

You can listen to the infrasound-free version of a concert composition on Sarah Angliss’s website. According to Angliss, if she had included any true infrasound tones in her piece, they wouldn’t have survived the mp3 compression process. They relied on a hand-built infrasonic generator, an acoustic cannon made with a stiff sewer pipe and an extra-long-stroke subwoofer, to create the low frequencies they needed for their experiment.

Environmental infrasound has been blamed for what is ominously named the Hum. It was first reported as a mysterious droning, rumbling sound, similar to an idling truck engine, in Bristol, England, in the 1960s, and since then similar complaints have been lodged worldwide. Part of what’s mysterious about the Hum is that it’s hard to pin down where the sound might be coming from. Dr. Colin Novak at the University of Windsor in Ontario, Canada, decided to investigate what had come to be known as the Windsor Hum, to characterize it and to try to localize its source. He managed to record the Hum’s presence on only a handful of days during his study, but he did get enough evidence to conclude that the Hum was real and registered around the 35 Hz mark (decidedly in the range of human hearing, but still qualifying as a low frequency). He suggested that the sound originated from a blast furnace on Zug Island, south of the city and downriver of Detroit, Michigan. The island has historically been home to steel mills. Follow-up studies have concluded that Zug Island is probably not the source, but that hasn’t stopped conspiracy theorists (the theory goes that a U.S. Air Force program is behind the mysterious noise).

Like the Hum, most of the phenomena people have experienced thanks to infrasound have been easy enough to explain away as simple unpleasant sensations. But for some people, their experiences with low frequencies border on the paranormal.

One of the most fascinating pieces of research on the uncanny effects of infrasound was published by Vic Tandy and Tony Lawrence in 1998 in the Journal of the Society for Psychical Research. Tandy was inspired to investigate infrasound after a strange personal experience, which he dubbed “The Case of the Ghost in the Machine.” At the time, he was working as an engineering designer for a company that made medical equipment. His colleagues often gossiped about how their laboratory was haunted, and he recalled at least one instance when a cleaner left the building in distress because she thought she had seen something. As time went on, he began to note more weird occurrences: he kept feeling like someone was watching him, and he’d turn to talk to a colleague he swore was right next to him, but there’d be nobody there. He started feeling discomfort. Sure, the laboratory could be spooky and make strange noises, but this was something else. Finally one night, when he was working alone, he noticed a ghostly gray blur in his peripheral vision. When he turned to look at it head-on, it faded and disappeared. He had no way to explain this apparition and it freaked him out … so he went home.

The next day, despite his heebie-jeebies, Tandy went back into the lab because he was entering a fencing competition and wanted to use the lab equipment to work on his fencing foil. He clamped the foil blade in a bench vise and left to find some oil. When he got back to his foil, it was vibrating frantically in the grip. Instead of deciding that the lab was definitely haunted by some poltergeist, Tandy recognized that the vibrations were coming from some sort of wave. Then and there he developed an impromptu experiment using the fencing foil as a makeshift dowsing rod to pinpoint the vibration’s source.

Tandy and his coworkers weren’t sharing the lab with a ghost at all—they were sharing it with a standing wave. This wave, caused by an extractor fan in the lab, was just the right frequency to be reflected back completely by walls at either end of the laboratory and create a sweet spot in the center of the room, which was shaped like a long corridor. What’s more, this standing wave was vibrating at a frequency similar to the resonant frequency of the human eyeball. So, in the right conditions, the wave was vibrating Tandy’s eyeballs without him realizing and causing his strange visual disturbances.

The effect of sound waves vibrating Tandy’s eyeballs is the same one that we see in action when a singer shatters a wineglass with their voice. Every material has a natural resonant frequency—a speed at which it will vibrate if it’s disturbed by something, even if that something is a sound wave. When you flick a wineglass, you can hear the tone it makes when it vibrates. When a singer matches that tone frequency with their own voice, they are vibrating the air molecules around the glass with the same frequency as the glass’s resonance (and, in turn, vibrating the glass). These vibrations put stress on it and cause it to deform. The louder the tone you make, and the longer you sustain it, the more the glass deforms, and the more prone it gets to weakening and shattering. Shattering wineglasses isn’t a feat limited only to opera singers; it just takes a lot of effort. When the television series MythBusters captured singer and vocal coach Jaime Vendera shattering a wineglass with his voice (on his twelfth attempt), his voice registered over 100 decibels. For perspective, normal speaking voices usually register more in the 50-decibel range. It also helps if the glass that you’re choosing to destroy is pre-weakened in some way, with small scratches or imperfections that can be aggravated by the vibrations.
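Because decibels are logarithmic, the gap between Vendera’s 100-decibel note and a 50-decibel speaking voice is far bigger than the numbers suggest: each 10 dB step represents a tenfold increase in acoustic power. A quick sanity check using the chapter’s rough figures:

```python
def intensity_ratio(db_difference):
    """How many times more acoustic power a decibel difference represents."""
    return 10 ** (db_difference / 10)

# Vendera's glass-shattering note versus ordinary speech:
print(intensity_ratio(100 - 50))  # 100000.0, five orders of magnitude more power
```

In other words, that glass-shattering note carried on the order of a hundred thousand times the acoustic power of normal conversation.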

Eyeballs are not wineglasses; they are squishier and (thankfully) more resilient to temporary deformations than rigid glass or crystal. The vibrations that Tandy experienced were close to the resonant frequency of his eyeballs, but his exposure was limited and it is unclear just how loudly that extractor fan was vibrating. Theoretically, at the right frequency and loudness, infrasound can have adverse effects on the body beyond some brief ghostly apparitions.

This brings us to one of the big infrasound legends: the Brown Note. Popularized in an episode of the television series South Park, the Brown Note is said to be an infrasonic tone that resonates on the human body in such a way that it has a laxative effect. Despite the massive number of videos online purporting to be recordings of the Brown Note, there isn’t actually any evidence that these low-frequency tones are effective (unless you want to believe the online comments on these videos, claiming the tone’s efficacy). It’s also unlikely that a web video would have the sound quality needed to deliver an appropriate note. For the curious: yes, I did find a three-minute Brown Note video and listened to it. It even came with a warning not to proceed if I have bowel issues, which I shrugged off despite a chronic intestinal disorder. I’m almost disappointed that I didn’t feel a single thing stirring in my guts.

Taking inspiration from Tandy’s experiments, in 2008 a group of researchers attempted to create a “haunted” room by manipulating electromagnetic fields and infrasound. The room that they used was empty and featureless, white, dimly lit, and cold. Participants were given a copy of the room’s floor plan and were asked to wander around the room alone for fifty minutes, recording any unusual experiences and where they occurred on the floor plan. They didn’t know whether they were in a neutral room, or being exposed to infrasound, complex electromagnetic fields, or both. Most of the participants reported some eerie experience during their time in the room, whether it was feeling dizziness, tingling, a sense of something else’s presence, or straight-up terror. Once the data was analyzed, however, this team found that these experiences were probably not being caused by either infrasound or electromagnetic fields (and they probably weren’t being caused by a real haunting, either, although that would be a horror movie–worthy twist). The researchers suggested that the reported experiences were probably caused by the power of suggestion. All of the participants were informed that they might experience unusual sensations and were asked to report on them as part of the experiment, so they were primed to be more receptive to any sensations that might seem unusual.

With infrasound being implicated in so many spooky experiences—whether or not it’s truly at the root of them—you can bet your butt that horror filmmakers have found a way to work infrasound, or at least near-infrasound, into their projects. Like in the haunted room experiment, viewers are already primed to be exposed to uncomfortable stimuli in a horror movie. Part of what makes infrasound so uncomfortable to experience when it’s layered into a soundscape is that we can’t pinpoint a source. If you can’t identify what’s causing these abstract feelings, your imagination might make up a reason primed by whatever you’re watching on-screen. Director Gaspar Noé famously admitted to using near-infrasound (registering around 27 Hz) in the first act of his 2002 film Irréversible, as if a film with a ten-minute-long-take rape scene and extremely graphic scenes of violence wasn’t already uncomfortable enough for viewers. Paranormal Activity (2007, dir. Oren Peli) is also said to have boosted its quiet, spooky moments with infrasound. In keeping with a found-footage style, this movie does not otherwise use a music soundtrack.

On a happier note, infrasound doesn’t make all creatures great and small feel unease. As loud as a trumpeting elephant can sound, some elephant vocalizations are actually infrasound, in the range of 5 to 30 Hz. These vocalizations are known as “rumbles.” The benefit of communicating in infrasound is that lower frequencies can travel farther without being absorbed or reflected by the environment, so elephant rumbles can actually let them communicate with each other when they are miles apart.

Mercifully, not all sound frequencies make you feel like garbage like infrasound can; otherwise we wouldn’t enjoy music so much. For select people, some sounds even trigger a specific kind of pleasurable response. This is known as the autonomous sensory meridian response (ASMR)—and while it sounds like a clinical term coined in a medical journal, it’s not. It was named back in 2010 by Jennifer Allen, an ASMR online community pioneer, who realized that the phenomenon would need a name before people would feel comfortable talking about it. The term describes a pleasant “brain tingle” that some people feel in response to sensory triggers. You might have heard of it before if you’ve spent enough time on the right social media channels. ASMR has become a sensation on sites like YouTube, where there are entire channels dedicated to whispering, gentle tapping, and other soft sounds that have the potential to trigger the response. ASMR even has a market beyond the internet, with paid immersive experiences such as Whisperlodge. Part theatre and part ASMR spa treatment, Whisperlodge is an intimate experience in which you find yourself in one-on-one scenes with the cast while they crinkle paper next to your head or stroke your face with soft makeup brushes.

As a newer phenomenon, ASMR has attracted little research so far. The first peer-reviewed study, conducted in 2015, attempted to classify the response. Its researchers suggested that there may be a link between people who experience ASMR and people who experience synesthesia, another phenomenon in which stimuli for one sense trigger sensations from another (like perceiving words as having a specific color, or sounds as having particular tastes). More recent studies have provided evidence for the relaxing effect that ASMR-sensitive people describe: their heart rates dropped markedly while they watched ASMR videos, compared both with people who do not experience ASMR and with their own rates while watching neutral footage.

ASMR contrasts wildly with another type of sound sensitivity, termed misophonia, where affected people are prone to irrational anger or even violent responses when exposed to certain sounds. Often these sounds are ones that are commonly disliked, like the sound of another person chewing food, but the response is atypical and extreme.

When it comes to selecting sounds for horror, uncomfortable sounds like chewing and aggressive, decidedly anti-ASMR finger-tapping and clock-ticking are often favored because they ramp up disgust (in the case of chewing) and tension (in the cases of tapping and ticking). For someone who experiences misophonia, these sounds might risk causing a tension overload or just a hellish moviegoing experience. Outside of these misophonic outliers, though, when it comes to sounds on the fringes of human hearing, horror filmmakers tend to favor the ones meant to make us uncomfortable, like infrasound, over ones that might give us enjoyable brain tingles.


SCARE SPOTLIGHT: THE BLAIR WITCH PROJECT (1999, DIRS. EDUARDO SÁNCHEZ AND DANIEL MYRICK)



In October of 1994, three student filmmakers disappeared in the woods near Burkittsville, Maryland, while shooting a documentary.

A year later their footage was found.

Heather, Josh, and Mike have been lost in the woods for days. They are armed with only their camera gear and some meager camping supplies. Tensions are running high as Heather remains bent on documenting every part of their trip, even when they come across strange cairns and hanging stick effigies, when they hear weird sounds outside their tent at night, when Josh disappears and something seems to be stalking them and leaving gifts of stones and human teeth. They do find Josh in the cellar of an abandoned house, standing motionless and facing a corner like in the myths they’d heard about the Blair Witch back in town, but they never do get footage to prove the existence of the witch—something unseen knocks Mike’s and Heather’s cameras, and presumably Mike and Heather, to the ground. Their footage was found, but their bodies were never recovered.

Sound in movies doesn’t follow the same rules as sound in the real world. Some sound might be diegetic. This is sound with a source that exists in the world of the movie, actual sound that the characters on-screen experience just as clearly as you do. Other sounds might be non-diegetic, existing outside of the story’s space. We hear the atmospheric music and sound effects added to enhance the mood of a scene, but these sounds aren’t real or observable to the characters in the film. The balance between diegetic and non-diegetic sound can mean the difference between feeling like you’re watching a recording of real events and being reminded that you’re watching a fictional movie narrative. Playing with this balance can also blur the lines between spaces and create the ambiguity that is oh so appealing in horror, where the intention might be to have viewers, consciously or subconsciously, ask themselves, Is this really happening?

Found-footage films hinge on this question, and standout found-footage films can even trick their audiences into believing that they have documented real events. The Blair Witch Project is one of the go-to examples of found footage, and the absence of non-diegetic sound in the film plays a huge part in its effectiveness. The entire film is shot on the cameras being handled by the students, so we only ever see what they are seeing and are limited by where they choose to point their cameras and what they manage to actually get in focus. Likewise, we only hear what is picked up by their cameras’ microphones. There is no obvious soundtrack to underscore the characters’ emotional journeys, no foreboding crescendos or stingers to prepare us for scary imagery. The effect is that you feel like you’re watching an amateur home video. What makes The Blair Witch Project scary is how real and unpolished it feels.

Though I’ve held The Blair Witch Project up as a masterpiece of diegetic-only sound, it wouldn’t be right to say that the film has no score at all. It has a subtle, creepy score composed by Tony Cora, one so quiet you have to strain to hear it. It’s probably most audible in the film’s final moments, as Mike and Heather run through the house searching for Josh.

In 2016, a sneaky sequel to The Blair Witch Project premiered at San Diego Comic-Con. Con-goers thought they were catching a premiere of a film called The Woods, only to find out that it was Blair Witch (dir. Adam Wingard) in disguise—the marketing campaign even went so far as to print and hang posters for The Woods and then change these out for Blair Witch posters while the movie was playing.

The Blair Witch Project is a tough act to follow, not just because of its impact on found-footage films, but because of its unprecedented viral marketing campaign. Both Blair Witch and an earlier sequel, Book of Shadows: Blair Witch 2 (2000, dir. Joe Berlinger), failed to live up to the original. One of the most notable differences is their handling of diegetic versus non-diegetic sound and visuals. Blair Witch boasts a full score and obvious sound effects. It also tries to pass itself off as found footage, but outfits the characters with earpiece cameras instead of the handheld and shoulder-mounted cameras of the original film. This technology upgrade lets the filmmakers gloss over shots that would have been impossible for the original team to capture with their two cameras. But some of those shots come from perspectives that don’t match the sightlines we’d expect from a camera worn at head level, and all of them are unexpectedly clean and in focus despite the hair that should be falling over the lenses. The newer film also opens with a title card, much like the one in the original, stating that this footage was “assembled” rather than strictly “found,” which lets the movie handwave how polished and finely edited the footage is. The finished product is a far cry from the poorly framed, jittery shots of the original, in which the actors were in fact the camera operators and, despite portraying film students, had minimal experience behind the camera.


THE SCREAM

We can’t talk about sounds that scare without talking about the scream. And I’m not talking about Edvard Munch’s famous painting, although it does give a great visual: the subject’s eyes are wide and staring and his mouth is stretched open in an expression that can only be recognized as fear. If this painting had audio, you know exactly what sound that figure would be making.

No matter what language you speak, the sound of an adult screaming is universally understood as a vocalization of fear or alarm. Babies scream too, to express alarm like adults do, but also to alert their parents to needs like hunger or discomfort that they don’t yet have words to describe. Screams are not only loud and shrill; they’re also immediately recognizable as a distress signal.

Screams in horror movies tend to come in two flavors: the sudden, perfectly timed jump-scare scream (an art perfected in YouTube screamer videos) that aims to startle the viewer, and the reactive scream that aims to amplify feelings of fear by demonstrating a character’s horror.

Despite the universality of the scream, the science of screams still isn’t well understood.

In 2015, neuroscientist David Poeppel and his team launched an investigation to decipher what exactly is happening when we hear a human scream and what makes its sound so special. They created a catalogue of human screams both by downloading a collection of screams from movies and by bringing real people into the lab and recording their screams. These screams were added to a bank of sounds that also included spoken sentences, artificial sounds (like alarms and instrument sounds), and tones.

Participants then rated each sound on how scary it was, on a scale from 1 (neutral) to 5 (alarming). Unsurprisingly, human screams stood out from the rest of the sound bank. The screams that freaked participants out the most were also the ones with the highest measures of a sound quality known as roughness. In sounds with high roughness, the amplitude, or loudness, is modulated very, very fast: between 30 and 150 times per second (30 to 150 Hz). Roughness makes screams easier to detect; it also makes them sound deeply unpleasant. This rapid modulation places screams in the category of nonlinear sounds, sounds that distort when they are too loud and rough for the instrument projecting them, like a raw scream straining the limits of the human larynx.
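
Roughness is easy to hear for yourself. The sketch below (a minimal illustration of amplitude modulation, not code from the study; the function name and parameters are my own) uses NumPy to modulate the loudness of a plain sine tone: at a modulation rate of a few hertz the result is a gentle tremolo, while a rate inside the 30-to-150 Hz band turns the very same tone harsh and buzzy.

```python
import numpy as np

def rough_tone(carrier_hz=440.0, mod_hz=70.0, seconds=1.0, rate=44100):
    """Sine tone whose loudness is modulated mod_hz times per second.

    Modulation rates of roughly 30-150 Hz fall in the "roughness" band
    associated with alarming screams; slower rates (a few Hz) just
    sound like a gentle tremolo.
    """
    t = np.arange(int(seconds * rate)) / rate
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * mod_hz * t))  # sweeps 0..1
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

smooth = rough_tone(mod_hz=4.0)   # tremolo: pleasant
rough = rough_tone(mod_hz=70.0)   # roughness band: abrasive
```

Writing either array to a WAV file and listening back makes the difference obvious; the modulation also shows up in the spectrum as sidebands offset from the carrier by the modulation rate.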

Irregular, scratchy nonlinear sounds are often found in nature, usually in young animals whose cries need to attract their parents’ attention. Similarly, humans respond more strongly to baby cries that contain nonlinearities than to ones that don’t. Furthermore, studies focused on meerkats have shown that they don’t easily habituate to nonlinear sounds, which are meant to be alarming, abrasive, and hard to ignore. It makes sense: these sounds indicate the presence of a potential predator. Survival requires us to not get used to the kind of sound that might point to something dangerous lurking nearby, whether that sound is a scream, a growl, a snapping twig, or a creaking floorboard. Although it’s incredibly unlikely that sound designers are up to date on meerkat research, they have clearly been keyed into nonlinear sound’s ability to evoke these feelings for a while now. A number of movies incorporate nonlinear sounds into their scores, notably The Shining (1980, dir. Stanley Kubrick), where violins swarm alarmingly as Jack Torrance (Jack Nicholson) chops through a bathroom door and delivers his most famous line from the film, “Here’s Johnny!”
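
The “distorts when pushed too hard” idea behind nonlinear sound can also be sketched numerically. In this toy example (mine, not drawn from any of the studies above), a sine wave is driven to twice the level a hypothetical instrument or speaker can reproduce and then hard-clipped; the clipping is the nonlinearity, and it adds odd harmonics that the pure tone never contained.

```python
import numpy as np

rate = 44100
t = np.arange(rate) / rate

# A 220 Hz tone driven to twice the level the "instrument" can handle.
overdriven = 2.0 * np.sin(2 * np.pi * 220.0 * t)

# Hard clipping at the instrument's limits: a simple nonlinearity.
clipped = np.clip(overdriven, -1.0, 1.0)

# The clipped wave gains odd harmonics (660 Hz, 1100 Hz, ...)
# that the undistorted sine lacked.
spectrum = np.abs(np.fft.rfft(clipped))
```

Plotting `spectrum` shows energy appearing at 660 Hz and above, where the clean tone had essentially none; that spray of extra harmonics is what gives an overdriven cry or scream its ragged, attention-grabbing character.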

Poeppel’s experiment also used fMRI to scan participants’ brains as they listened to screams and neutral speech. Generally speaking, any loud sound played to a listener will activate the parts of the brain that process auditory information. In this experiment, he found that scream sounds also selectively activated the amygdala and the brain’s fear circuit, and that this activation scaled with a scream’s roughness. This sensitivity suggests not only an “acoustic niche” in the brain for processing human screams, but also an ability to dial the fear response up or down accordingly, separating faked or less urgent-sounding screams from screams signaling genuine fear.

Are any of these sound design techniques crucial to crafting the perfect scary atmosphere? After all, sound design in film only began to pick up real momentum with the advent of digital sound in the 1970s, and movies have existed for much longer than that. While you can build an evocative horror scene with tight visuals and narrative cues, sound (and the vibrations created by sound) can act as a special ingredient that takes a great horror sequence and escalates it to a scare that shakes you to your core.


IN CONVERSATION WITH RONEN LANDA



Ronen Landa is a film score composer. His horror film credits include The Pact (2012, dir. Nicholas McCarthy), At the Devil’s Door (2014, dir. Nicholas McCarthy), and 1BR (2019, dir. David Marmor).

I feel like when I’m watching horror films I notice the things I expect to notice. I notice stings. I notice when that low hum comes in during tension-inducing scenes. When it comes to your process, how much of it is leaning on those familiar techniques?

So, it’s interesting because you develop a tool kit. And some of those things I carry from film to film. There’s a process for scoring horror films specifically that works for me, because there are some challenges that are specific to horror.

Horror is difficult because we’re using all of these extended techniques and all sorts of interesting sound-making techniques; the idea is to be strange and to find weird noises that are almost traditional, but freak people out because they don’t quite sound like they should. For instance, you can make a cello sound nasty and not pretty—people want that cello to sound pretty, so the unexpected sound is shocking and jarring. As much as I can, I try to use acoustic instruments and record them. I come up with those creative techniques myself and develop those ensemble sounds so that they’re original. I don’t want to use things that are only prefabricated and that I can just, like, plop in from a sample library.

But there’s no way to show the director what you’re going for in a mock-up unless you actually have the sounds. So I started to build a process where I hold recording sessions before composing, and develop a lot of the sounds I’ll be using in those sessions. And from there I create an architecture for each cue, and color it in with those sounds I developed. And so I use a lot of my personal sound library to build stings and other elements. But I try to create new sounds for each project. So, part of it is building that tool kit, which is a process tool kit, and also an actual sonic tool kit, and then part of it is understanding what’s in front of you. And what’s in front of you is the film.

How much do individual narrative elements of a film color your scoring? Do subgenres come with their own sound?

The way I would score one slasher film would be different from the way I would score another. If you listen to Eloise [2016, dir. Robert Legato], obviously there are horror elements, but there’s camp to Eloise that’s nowhere in The Pact, for example, or in At the Devil’s Door. For the most part, the process involves looking at a specific film and the musical problems that the film presents. Not problems in a bad sense, but as in puzzles. My job is to solve those puzzles in a way that will move the audience and will help them connect to whatever the story is that we’re telling them. I always think of music as the emotional glue in a film.

It’s interesting because I’m always trying to tap into that emotion that I had when I first read the script or the first time I saw a cut of the film. What I’m trying to tap into is: How did I feel when I first saw that? Because it’s always going to have that shock value the first time.

Usually, a film comes to me with some temporary music in it. So there’s going to be music that I’m already responding to when I’m watching a scene for the first time. I’m trying to put myself in the audience’s shoes, but I’m also trying to keep our thematic material and the sound of the film—each film, as I said, is different, and that’s because I really put a lot of time into thinking, What’s the sonic universe for this film? And I want that to be a thread that helps to create a more cohesive experience.

What’s an example of a sonic puzzle that you’ve had to solve through your score?

In horror there’s often that sense of Is this film being done to me or is this film bringing me into something? I think that those are the big questions. In 1BR there’s an interesting scene involving a cat. And that was a really hard scene to score. It was a complicated scene to solve musically because the filmmaker had set up a double scare, where you have the anticipated scare with the cat, and then a bigger unexpected scare immediately after. So there was this huge challenge there: How are we going to make both scares effective when they’re so close together?

If we cranked both scares up to eleven it wouldn’t work because it wouldn’t feel earned. And I worked to build a sequence where the first scare would be unsettling and disturbing, more upsetting than scary. And then this scare comes when she’s taken. And so there’s a real progression there in the music. If you listen to the track without the visuals you can really hear the narrative cues of the scene as you’re listening to it: the first smaller gut-punch scare and then the bigger Omigod! scare right afterward. That was tough! It was an example, on a micro-level, of how you have to earn your scares. Because if we had gone too big on that first scare, the second one would have fallen flat. It would have had a reduced impact. And really, in terms of story, the movie is about the second scare and not the first one.

Where does silence come into play?

Silence is a really important element in a horror film too. Usually the scariest scene in the whole movie is something quiet, without any music. But if you didn’t have the music in other parts of the film, would that silent scene be as effective as it is? I don’t know. It’s like in music notation we have the idea of a rest—and understanding where to place that rest can so effectively enhance the drama. So, even knowing where not to score is essential when you’re building an emotional soundscape.

Is there anything that you feel that, as a horror composer, is often missing from the conversation about your work?

It’s a good question. It’s a generous question. I think that the thing that I’m constantly trying to stress when I’m having these kinds of conversations is the collaborative nature of the work. I’m an artist and I have a voice, and I really try to bring the most of myself to all of these projects. But all of them are reflective of a state of constant collaboration, working really closely with the filmmakers to achieve their vision. Different filmmakers have different relationships with music and expectations for how they’re going to interface with their composer, but the best scores come from these really close-knit collaborations with filmmakers and also with the musicians. When I listen to the performances of Dan Tepfer, who played piano on The Pact, or Anna Bulbrook, who played violin on The Pact, or Karolina Rojahn, who played piano on 1BR, just to name a few, I can’t tell you how much artistry they all bring to our recording sessions. All of the experimentation that we can do together, finding new sounds together. A lot of this stuff happens in a collaborative way. It’s a rare kind of human connection. And it’s very special. I’m really, very proud of that collaborative aspect.