We’ve talked a lot now about sounds, but have barely touched on the fact that sounds occur in space—they occur in the air, in the water, and resonate in our bodies. In fact, without this secondary element of sound (i.e., where sound occurs), sounds wouldn’t occur at all! Sound cannot travel in a vacuum; or, as the tagline for Ridley Scott’s Alien put it, “In space, no one can hear you scream.” Since sound waves travel through vibrating molecules in air, water, or solids, if there are no molecules to vibrate, the sound can’t travel.1
In this chapter, we are going to talk about the basics of how sounds propagate—how they move in their environment—and what impact that has on our perception of sound. We’re also going to start looking at the use of some digital effects that mimic the real-world effects of sounds in space. In chapter 7 we’ll explore more advanced sound propagation along with the 3D positioning of audio in surround sound and virtual worlds.
Stand in one spot on the side of a long, straight road and wait for a vehicle to drive by. Look down the street and listen to the sound of the vehicle as it approaches, and then as it passes and drives away. What happened to the sound? As kids we pick up on this effect intuitively and with dinky cars or toy motorcycles we make the sound with our mouths—“nnnnnneeeeeooowwww”—the frequency changes as the vehicle approaches and drives by. This change in frequency is called the Doppler effect (or Doppler shift). You can hear it more obviously when the car is making a tonal sound, like a siren or horn.
Figure 4.1
Doppler effect created by a moving sound object.
We learned in chapter 1 about sound waves and how they propagate spherically. With the Doppler effect, when the source of the sound wave is moving (as when a car is driving), the waves bunch together in the direction the source is moving. For a stationary listener, the result is that as the source moves toward us, the sound waves are compressed and heard at a higher frequency. As the vehicle moves away from us, the wavelength is stretched, resulting in the perception of a lower frequency.
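The size of the shift can be estimated with the standard Doppler formula. This is an illustrative sketch, not from the chapter; it assumes a stationary listener, a source moving directly toward or away from them, and the 343 m/s speed of sound used later in the chapter.

```python
# Doppler shift for a stationary listener and a moving source:
# f_observed = f_source * v_sound / (v_sound - v_source) when approaching,
# and f_source * v_sound / (v_sound + v_source) when receding.

SPEED_OF_SOUND = 343.0  # m/s in dry air at 20 °C


def doppler_frequency(f_source: float, v_source: float, approaching: bool) -> float:
    """Perceived frequency for a source moving at v_source m/s."""
    if approaching:
        return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND - v_source)
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND + v_source)


# A 440 Hz horn on a car doing 30 m/s (about 108 km/h):
print(round(doppler_frequency(440, 30, approaching=True), 1))   # → 482.2 (higher)
print(round(doppler_frequency(440, 30, approaching=False), 1))  # → 404.6 (lower)
```

Notice the shift is not symmetrical: the approach raises the pitch slightly more than the departure lowers it.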
Exercise 4.1 Doppler Effect
Go out and record the Doppler effect occurring on a road. It will work best with a car that has a siren on, but will work with regular vehicles too.
Exercise 4.2 Make a Doppler Effect
You can make a Doppler effect pretty easily: just get something that is making a constant sound, tie it to a string, and swing it around your head. For instance, you can purchase a cheap piezo buzzer/speaker (~$5) and attach it to a 9-volt battery. Stuff the noisy thing into a ball on a string. You may need to boost the signal so it’s loud enough, or purchase an extra-loud piezo buzzer. I found a tin Pokémon ball that unscrews into halves, but I’ve also used a ball designed to hold dog treats.
Figure 4.2
Materials with which to make your own Doppler effect.
Reverberation is a massive subject, so we will cover just the basics here. When sounds move in any environment, eventually they’re going to hit some form of surface, like a wall or cliff. Much like balls on a billiards table, sound waves reflect off surfaces. The amount of reflection depends on the type of surface and the wavelength of the sound, so some frequencies may be absorbed while others are diffused, and others reflected.
Let’s first focus on sounds reflected on a hard, smooth, flat surface. When a room has a lot of highly reflective surfaces we call that a “live” or “wet” space. You probably remember in high school math class learning about reflection, where the angle of incidence is equal to the angle of reflection. This reflection is what happens with most sound waves: the angle that the sound approaches the smooth surface is the same angle at which it will reflect. This type of reflection, known as specular reflection, occurs when the wavelengths are smaller than the reflecting surface (figure 4.3).
The sound waves don’t stop after hitting just one wall in a room, however: they continue, and will reflect again when they hit the next wall, if it’s a very live or wet space. When sounds reflect off multiple surfaces, as in a room, we get reverberation, or reverb. Reverb consists of the direct sound, early reflections, and late reflections, which blur or smear together. Each time the sound travels, some frequencies are absorbed by the air and by the surfaces it meets, while others continue to reflect, reducing in amplitude.
Exercise 4.3 Reflections of Sound
Use a tiny but powerful flashlight or laser light on a very hard surface (like the white board in a classroom or a mirror in your home)—turn the lights out and watch how the reflection bounces off the surface. Now direct a sound in the same place as the flashlight, and move a directional (e.g., shotgun) microphone around until you find the sweet spot—is it the same as the light? You can also use cardboard tubes to listen for and find the angle of reflection.
Figure 4.3
Specular reflection of a sound wave on a smooth surface.
When the reflecting surface is not flat, the sound reflects at a different angle. If the surface is smooth and convex, the sound wave will bounce outward, having an antifocusing effect (figure 4.4).
If the reflecting surface is smooth and concave, then we can focus the sound much in the same way waves are focused by a satellite dish. This parabolic effect is why amphitheaters are built as semicircles that reflect back to the audience, and it’s even possible that some ancient spaces like Stonehenge were built in a ring for the same reason (see Cox 2014, 83–84). You may remember from high school math that parabolas reflect toward a single location, the focus point. This is the principle of parabolic microphones: a parabolic dish focuses the sound waves onto a microphone (figure 4.5).
Figure 4.4
A convex surface will scatter sound waves.
Figure 4.5
A parabolic microphone picks up the reflections from a concave surface.
Exercise 4.4 Make a Parabolic Microphone
You can buy parabolic microphone dishes that are used in field recording (a professional one will cost over $1,000), but you can also make your own. It won’t be as good, but it will be good enough to demonstrate the effect. For this exercise, we’ll need some tools. First, we need a large bowl as close to a parabola shape as possible (if you can find a discarded TV satellite dish, that works great!). It needs to be a highly reflective, perfectly round dish, like a plastic salad bowl. In an ideal world the bottom of the bowl would also be round, but most salad bowls you’ll find will have a flat bottom. We know that the size of sound waves increases as we lower the frequency, so larger dishes will reflect more sound: you may see small, cheap parabolic microphones mounted on a handheld gun-like device on eBay, for instance, but these will only reflect high-frequency sounds. The larger the dish, the more frequencies we will amplify.
Next we need to find and mark the focal point. Drill a ¼″ hole in the center of the bottom of the dish and stick a dowel or chopstick into it, straight into the bowl, with the length of the stick longer than the height of the bowl. This will be our measuring stick and will hold our microphone. If you’ve got a laser pen, you can shine it onto the bowl and it should reflect at the focal point. Check a few different angles to be sure. Mark the focal point with a pen. If you don’t have a laser pen, you can use your ears and the microphone. Set up a directional sound and point the bowl toward the sound, and move the microphone until the amplitude is at its loudest.
Tape a small omnidirectional microphone, like a lavalier microphone, to the focal point, facing the bottom of the bowl.
Most rooms have parallel walls, and that means that reflections of particular sound waves will create some interesting phenomena. Standing waves are waves that reach a barrier such as a wall and bounce back at the same wavelength, building up as they bounce back and forth. The result is a sometimes visible disturbance of the air. Some people have theorized that standing waves could have been created by humans singing in the caves where ancient people painted images, and perhaps the reason these caves held special purpose for early humans was their acoustic properties (see Coimbra 2016). As the wave bounces back and reinforces itself, patterns of constructive and destructive interference occur, creating what are called nodes and antinodes. Nodes are the places where the wave stays in a fixed position because of destructive interference. Antinodes are the places on the wave where it vibrates at the maximum amplitude. Standing waves appear to be “standing still” in the space as they bounce back and forth. If we use our analogy of the billiards table with a ball bouncing off the walls, there are places through which the ball will always pass: if that ball were traveling fast enough, it would appear to be standing still in those places (figure 4.7).
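The frequencies at which these standing waves form between two parallel walls can be calculated: a standing wave builds up when a whole number of half-wavelengths fits exactly between the walls. The sketch below is my illustration, not from the chapter; it assumes rigid, perfectly parallel walls and the 343 m/s speed of sound used later in the chapter.

```python
# Axial room-mode frequencies between two parallel walls a distance L apart:
# a standing wave forms when n half-wavelengths fit between the walls,
# i.e. f_n = n * c / (2 * L).

SPEED_OF_SOUND = 343.0  # m/s in dry air at 20 °C


def room_modes(wall_distance_m: float, count: int = 4) -> list[float]:
    """First `count` axial standing-wave frequencies, in Hz."""
    return [n * SPEED_OF_SOUND / (2 * wall_distance_m) for n in range(1, count + 1)]


# Walls 5 m apart: a ~34 Hz fundamental and its whole-number multiples.
print([round(f, 1) for f in room_modes(5.0)])  # → [34.3, 68.6, 102.9, 137.2]
```

This is why small rooms tend to exaggerate particular bass notes: their modes sit right in the musical low range.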
Chladni (pronounced “klad-nee”) plates demonstrate the standing wave patterns that are created by the constructive and destructive interference of waves. Salt or sand is placed on a plate and vibrated at a set frequency. The sand gathers at points along the plate that are not vibrating, creating patterns. These are the same types of constructive and destructive patterns that make a great violin (figure 4.8).
If the time it takes for the sound wave to bounce between parallel walls is more than 50 ms, what sometimes results is a pattern of reflections known as a flutter echo, sometimes called “chatter” or “zing,” although many people describe it as a “boing” sound. Flutter echoes can make the sound appear to “flutter,” and sound quite hollow or “ringing.” You can often hear a flutter echo if you go into a smooth, highly reverberant large space, such as an empty gymnasium, and clap your hands.
Figure 4.6
A homemade parabolic mic with salad bowl.
Figure 4.7
A standing wave created by the reflection of a sound wave off of perpendicular surfaces.
Figure 4.8
Chladni plate patterns.
Figure 4.9
Flutter echo created by parallel walls.
Exercise 4.5 Standing Waves and Flutter Echo
Visit some different indoor spaces around your campus, house, mall, etc. Clap your hands and listen for the reflection. Try to generate some frequencies between parallel walls and see if you can create a standing wave pattern. You will hear more reflection if you can go at a time when there aren’t many people around. Did you find anywhere that has a flutter echo? Did you find anywhere with a standing wave?
Resonance refers to the constructive and destructive interference pattern of sounds in an object: All objects have frequencies at which they vibrate naturally, which are called resonance frequencies. The shape and size of a resonant cavity means that some frequencies interfere constructively and create standing waves, while others interfere destructively and are canceled out.
If you’ve ever seen a cartoon where an opera singer breaks glass, this actually works, and is the result of resonance. It needs to be crystal, not glass, to work effectively. With a finger dipped in water, rub your finger around the top of the glass, and you’ll make the glass sing (hum). It’s possible to use different-sized glasses to create a musical instrument—these are known as glass harps, glass harmonicas, or armonicas. Benjamin Franklin invented his version in 1761, which included thirty-seven different crystal bowls. There were rumors that the playing of the instrument would cause madness and depression in both player and audience, so the instrument fell out of favor, although some more recent artists like Björk and David Gilmour (of Pink Floyd) have used the glass harmonica.
Exercise 4.6 Singing Glass
Record the sound of a crystal glass humming, and look at a spectrogram to determine the frequency. It is possible to break the glass by playing a sound that is the same frequency and the right amplitude to exceed the strength of the crystal. (You can usually pick up used crystal glasses at charity shops for a few dollars.)
If you’ve ever heard that some people put egg cartons on the surfaces of their home recording studio, it’s down to the belief that the egg cartons work as a diffuser. Instead of the sound waves reflecting back evenly, the theory is that the waves will tend to scatter, reducing the amount of reflection (reverberation) in the room. In fact, egg cartons don’t actually work that well. The absorption ability—called the absorption coefficient—of cardboard and Styrofoam isn’t very high. Absorption occurs when the surface material absorbs sound energy rather than reflecting it. This is why sound studios are commonly treated with some type of absorption material, such as foam. Absorption will reduce the reverberation, or “liveness,” of a room.
Professional sound studios designed for recording music often want to retain some degree of liveness in a room, so they will use diffusers. Diffusers scatter the reflections rather than absorbing them. A diffuser needs to be at least six inches deep to be effective, and needs to be about six feet away from microphones or sound sources, since diffusers introduce artifacts into the sound. Most professional studios will use a two-dimensional diffuser, which looks a bit like bricks of many different lengths. So, in addition to not absorbing sound very well, egg cartons are also not effective diffusers, since the egg-crate cups are all the same short depth and will only diffuse a narrow range of frequencies.
If you look up at the ceiling of a classroom, doctor’s office, or other public space you’ll often notice tiles with lots of different-sized holes in them. These are acoustic tiles, designed to absorb and diffuse sounds, reducing reverberation to increase the clarity of speech. Low-frequency sounds are absorbed by the tile material, the deep cuts are designed to absorb and diffuse mid-frequency sounds, and high-frequency sounds pass through the holes to be dispersed in the space inside the ceiling.
It’s not always possible to install absorption or diffusion tiles where we need to record, but we can do some things to improve our recording. If we are recording in a space with a lot of reflective surfaces, we can hang some cloth to reduce the reflections to get a clearer sound. If you’re recording something small or your voice, you can make a small vocal-booth-style box with a homemade frame using PVC piping from a hardware store and some heavy blankets, or specially designed acoustic blankets.
Sometimes we want some reverberation: we tend to associate a little reverberation with warmer sound. You may notice you sound better when you sing in the shower, because you’re hearing the reverberation in the shower stall come back at you with your voice, thickening out the voice. For a time in the late 1950s and 1960s, it was very trendy in rock recording to have a big reverberant sound, and artists like Duane Eddy, Elvis Presley, and Lee Hazlewood were recorded inside large oil drums or grain elevators. Stax Records even recorded their stars in the bathroom of the studio (Zwisler 2017)!
Exercise 4.7 Same Sound, Different Place
Record the same sound with the same microphone at the same distance and angle in different places inside and outside. Hang some cloth around the sides of your mic and notice how it changes the sound of the recording. Try to find a highly reflective space like a four-sided glassed-in shower, or a concrete or plastic playground tube.
Exercise 4.8 Room Tone
Room tone is typically gathered on a film or television set for later use, and involves two minutes without anyone making noise, just recording the ambient space (inside or outside). Go record some room tones!
Exercise 4.9 Stitching Reverbs
Play your favorite song in three different rooms on the same device, and record it playing in each space. Put ten seconds of each version back to back a few times, alternating with hard cuts between them, and compare the sound of each.
Figure 4.10
Sound diffusion.
Rather than recording in our bathrooms, normally we want to record in a “dry” or fairly “dead” space and add reverberation digitally afterward, so that we can adjust the amount of reverb to what we want. Reverb can simulate a space, but it can also add a little texture, creating a soft, ethereal feeling, and it can be used to provide contrast (e.g., bands might put reverb on some of their instruments but not others). Reverb can be used on sounds to create a sense of flashback, an in-the-head quality, or a “voice of God” effect. We can also use just the reverberation of a sound, known as the “tail,” as a sound effect in itself, creating otherworldly sounds, similar to taking the attack portion off a sound as we did earlier.
Often there are presets for reverb that allow you to choose the type of space effect you are going for. But understanding the basic settings will help you to adjust your own reverb settings.
Usually the settings will involve some or all of the following:
Figure 4.11
Comparison of a dry sound (top) and the same sound with light reverb (bottom). Note the longer tail on the second sound.
Figure 4.12
Reverberation: original signal and reflections.
In Audacity, the additional parameters are as follows:
Reversing reverbs can create a backward-sounding effect, particularly if a reverse reverb is layered below a forward-played effect, creating a preverb effect. This can be used on vocals to create a psychedelic or ghostly effect. To create the effect:

1. Reverse the sample (Effects > Reverse).
2. Apply a reverb (Effects > Reverb > Manage > Factory presets > Large room).
3. Reverse the sample again (Effects > Reverse).

You may also come across the term convolution reverb. Convolution reverb applies all aspects of the reverb pattern of a space onto a sound. A real space is typically measured using impulse responses—calculating the distances and times of frequencies in a reverb pattern to record the “signature” of a space. It is possible now, in other words, to make it sound as if your file was recorded in Carnegie Hall by applying convolution reverb to your dry sound file. You can download impulse responses from libraries just as you might sound effects. Some are free and some are paid.
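Mathematically, convolution reverb is exactly what the name says: every sample of the dry signal triggers a scaled copy of the impulse response, and all the copies are summed. The toy sketch below is my illustration, not from the chapter; the five-sample “impulse response” is made up by hand, whereas a real one would be a recording of an actual space.

```python
# Toy convolution reverb: the wet signal is the dry signal convolved with
# an impulse response (IR). Here the "IR" is a direct spike followed by
# two quieter reflections; a real IR is a recording of a real space.

def convolve(dry: list[float], impulse_response: list[float]) -> list[float]:
    """Direct-form convolution; output length is len(dry) + len(ir) - 1."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h  # each dry sample triggers a scaled IR copy
    return out


dry = [1.0, 0.0, 0.5]           # a tiny "click" signal
ir = [1.0, 0.0, 0.6, 0.0, 0.3]  # direct sound plus two decaying echoes
print(convolve(dry, ir))        # → approximately [1.0, 0.0, 1.1, 0.0, 0.6, 0.0, 0.15]
```

Note that the output is longer than the input: that extra length is the reverb tail. Real plugins use the same idea, just with fast Fourier transforms to make the math efficient.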
Exercise 4.10 Reverb in Audacity
Record some sounds and try out some different reverb effects on them. If you select the “Manage” menu, you will be given the option of a variety of presets. Try the different settings out and preview the setting to hear the difference. Try some preverb effects as well.
Exercise 4.11 Thinking about Reverb
What associations or memories do you have with reverberation? How much reverberation do you associate with warmth, versus the “voice in my head” effect? To create the voice of a ghost? To simulate a sound in a cave? Try using reverb on different sound samples and at different settings to create different effects.
In addition to reverberation, some reflections are delayed so long that they are heard as an echo. In technical terms, an echo is a distinct signal that comes back with a delay of more than 1/10 of a second (0.1 seconds). Sound travels in dry air at 20° Celsius at a rate of 343 meters per second. The distance a sound will travel in 0.1 seconds, then, is about 34 meters. So, if we stand at least 17 meters from a barrier and make a sound (i.e., the distance from us to the barrier and back is 34 meters), we should hear an echo. If we stood closer to the barrier, we would instead hear the reflection as reverberation, not an echo (i.e., not a distinct sound reflected).
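The arithmetic above can be wrapped in a tiny calculation, using the chapter’s own numbers (this sketch is illustrative, not from the text):

```python
# Minimum distance to a wall for a distinct echo: the reflection must
# arrive at least 0.1 s after the direct sound, and it travels to the
# wall and back, i.e. twice the distance.

SPEED_OF_SOUND = 343.0   # m/s in dry air at 20 °C
ECHO_THRESHOLD_S = 0.1   # delay at which a reflection is heard as an echo


def min_echo_distance() -> float:
    """Distance (m) to a barrier beyond which a clap returns as an echo."""
    return SPEED_OF_SOUND * ECHO_THRESHOLD_S / 2


print(round(min_echo_distance(), 2))  # → 17.15 (the chapter rounds to 17)
```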
Delay effects take a sound signal and delay it before playing it back: a delay of more than 0.1 second effectively creates an echo. As such, plugins are sometimes separated into delay and echo, but usually we don’t need separate delay and echo plugins. In music production, delays are often timed to the music so that they are intentionally on-beat or off-beat, so many delays have not a specific time but a beats per minute, or bpm, setting. A delay can create a sense of space, produce psychedelic effects, add a sense of “liveness,” or thicken out a sound.
Most delays have some kind of feedback gain control which keeps sending the signal through the delay, reducing the amplitude each time. The plugin takes the signal, spits out one sample, then feeds it back into the delay again, but at a reduced amplitude. There are straightforward fixed delays and there are variable delays that modulate the amount of time (rate) the delay takes (to vary the time between signals). Variable delays may also vary the shape (the path taken by the delay setting in changing its parameters) and the depth of delay (how much the delay is modulated). Figure 4.13 shows how a delay plugin works: the original signal (dry) is split and the signal is sent to the delay (becoming “wet”) where it is delayed by a certain time, then output. The output is often split into two and fed back into the delay. The dry signal is then mixed with the wet for the final output.
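The feedback path just described can be sketched as a simple feedback delay: each output sample adds a gain-reduced copy of the output from a fixed number of samples earlier, so each repeat comes back quieter. This is my own minimal sketch, not the code of any actual plugin; the delay is in samples rather than seconds to keep it short.

```python
# Minimal feedback delay: out[i] = dry[i] + gain * out[i - delay_samples].
# Each pass through the feedback loop reduces the amplitude by `gain`.

def feedback_comb(dry, delay_samples, gain, tail_repeats=6):
    """Apply a feedback delay, leaving room for a decaying tail."""
    out = list(dry) + [0.0] * (delay_samples * tail_repeats)
    for i in range(delay_samples, len(out)):
        out[i] += gain * out[i - delay_samples]
    return out


# A single "click" with a 2-sample delay and 50% feedback gain: the
# repeats halve in amplitude each time.
echoes = feedback_comb([1.0], delay_samples=2, gain=0.5)
print(echoes[:7])  # → [1.0, 0.0, 0.5, 0.0, 0.25, 0.0, 0.125]
```

Keeping the feedback gain below 1.0 matters: at 1.0 or above the repeats never decay and the output grows without limit.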
If we take a sound sample and add some delay, adjusting the delay time so that we have a full sample returning to us, we can see that the delay is processed so that each time it comes back it has a reduced amplitude (figure 4.14).
Open a sample in Audacity. Select Effects > Echo. There are two options in the pop-up window: the first is the time between the dry signal and the echo, and the second is how much of an amplitude decay between echoes there should be (on a scale of 0 to 1). Note that the number of echoes you get will depend on the length of your file—if you don’t have enough room in the file for the echo, it will get cut off, so you need to add silence to the end of your file by clicking on the end of your sample and selecting Generate > Silence.
Figure 4.13
Delay diagram signal path.
Figure 4.14
Digital delay in Audacity.
Go back to your original (dry) file. Now open the Effects > Delay panel. Audacity offers several options:
Exercise 4.12 Echo
Try out some different echo settings (Effects > Echo) on a sound sample, and listen and compare. What uses besides environmental sounds could you imagine for an echo effect?
Exercise 4.13 Digital Delay
Try out the various delay settings and presets on a sound sample, and listen and compare. How does the delay effect compare with echo?
Exercise 4.14 Hearing Delay
Find three examples of music that use delay effects. Why do you think the musicians chose to use delay? What effect does it have?
There are additional delay effects that you will commonly encounter, including ADT and Chorus, although you won’t find them in Audacity without downloading additional plugins.
Chorus is when a signal is duplicated, usually multiple times, each copy slightly deviated in delay, with the amount of delay modulated by a low-frequency oscillator, creating copies that are slightly varied in pitch, which are then mixed back with the original. The effect is usually used to create the feeling of a chorus: multiple people singing very slightly off-pitch from each other and very slightly out of sync.
ADT, or automatic double tracking (sometimes called “artificial” double tracking), is used for filling out a voice or instrument, often with some other effect on the signal. It is effectively a chorus effect created by delaying an audio signal and then mixing that back with the original. ADT automates the historical practice of singing multiple takes and mixing the takes together (double tracking).
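The chorus process described above can be sketched in a few lines: a copy of the signal is read from a position that drifts back and forth under LFO control, then mixed with the dry signal. This is a toy illustration of the idea, not the algorithm of any particular plugin; the sample rate, base delay, depth, and LFO rate values are arbitrary choices of mine.

```python
import math

# Toy chorus: mix the dry signal with one copy whose delay is slowly
# swept by a low-frequency oscillator (LFO), so the copy drifts slightly
# in timing (and therefore pitch) relative to the original.


def chorus(dry, sample_rate, base_delay_s=0.020, depth_s=0.005, lfo_hz=1.0):
    """Return dry mixed 50/50 with one LFO-modulated delayed copy."""
    out = []
    for i in range(len(dry)):
        # delay sweeps between base-depth and base+depth seconds
        delay_s = base_delay_s + depth_s * math.sin(2 * math.pi * lfo_hz * i / sample_rate)
        j = i - delay_s * sample_rate      # fractional read position
        j0 = math.floor(j)
        frac = j - j0
        if 0 <= j0 < len(dry) - 1:
            # linear interpolation between the two nearest samples
            delayed = dry[j0] * (1 - frac) + dry[j0 + 1] * frac
        else:
            delayed = 0.0                  # before the delay line has filled
        out.append(0.5 * dry[i] + 0.5 * delayed)
    return out


# A constant signal shows the structure: until the delayed copy arrives,
# only the dry half is heard.
dry = [1.0] * 100
wet = chorus(dry, sample_rate=1000)
print(wet[0], round(wet[50], 3))  # 0.5 before the copy arrives, ~1.0 after
```

Adding two or three such copies with different LFO phases gets closer to the “many voices” feel the effect is named for.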
Exercise 4.15 Chorus
Create a chorus effect in Audacity by duplicating a single sample three times, giving it a slight delay each time—in effect, you are manually double-tracking:
Shift each duplicate slightly in pitch (Effects > Change Pitch).

How does this chorus-style effect change the way that you hear the sound now? Where do you think you might use an effect like this?
We learned about the constructive and destructive interference of sound waves in chapter 2. A phaser, also known as a phase shifter, uses these properties of sound waves. A phaser splits the sound into two signals, one of which passes through a filter that shifts it out of phase, resulting in constructive interference where peaks align with peaks, and destructive interference where peaks align with valleys. With a phaser, the switch back and forth into and out of phase is created with a sweep across the frequencies, creating a “whooshing” sound.
A flanger is a type of phaser that also uses a second delayed signal to modulate the original, but there are technical differences in how the result is produced. With a phaser, the phase of some frequencies may be shifted by a different amount than that of other frequencies, so the peaks and valleys of the output aren’t necessarily harmonically related. A flanger, on the other hand, uses a delay evenly across all frequencies: the out-of-phase “whooshing” effect is therefore more noticeable with a flanger.
We can see the difference in the waveform if we generate a sine-wave tone and then apply the phaser effect. Although the pattern repeats (see figure 4.15), there are variations within the pattern. Compare that to the waveform of the flanger effect (see figure 4.16): the flanger shows an even repetition of the pattern (note also the boost in amplitude in this flanger).
Figure 4.15
Phaser effect on a sine wave.
Figure 4.16
Flanging effect on a sine wave.
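One way to see why a flanger’s peaks and valleys are harmonically related: mixing a signal with a copy delayed by t seconds cancels any frequency whose cycles arrive exactly half a period out of step, which happens at odd multiples of 1/(2t). This comb-filter calculation is my illustration, not from the chapter.

```python
# A flanger mixes the signal with a very short delayed copy, forming a
# comb filter: notches (cancellations) fall at odd multiples of
# 1 / (2 * delay), so they are evenly, harmonically spaced -- unlike
# the irregularly spaced notches of a phaser.


def comb_notches(delay_s: float, max_hz: float) -> list[float]:
    """Notch frequencies of a one-tap delay mixed equally with the dry signal."""
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)  # odd multiples of 1/(2*delay)
        if f > max_hz:
            return notches
        notches.append(f)
        k += 1


# A 1 ms flanger delay: notches at roughly 500, 1500, 2500, 3500, 4500 Hz.
print(comb_notches(0.001, 5000))
```

Sweeping the delay with an LFO slides this whole comb up and down the spectrum, which is the characteristic “jet plane” sweep of flanging.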
Exercise 4.16 Listening for Phasing
Find some examples of music that use a phasing effect. How does it change the feeling of the sound?
The phaser in Audacity uses an LFO (a low-frequency oscillator) to shift the phase. Altering the LFO frequency will alter the rate at which the sweep of frequencies occurs. Open a sample in Audacity. It may work better if your sound file is at least a few seconds long so you can hear the effect more easily. Click on Effects > Phaser. There are several options:
Exercise 4.17 Phasing
Explore the phaser effect by adjusting the phaser parameters and presets on a sound you’re familiar with. What do you think you would use different degrees of this effect for?
Exercise 4.18 Flanging
There is no flanger effect by default in Audacity, but you can install a plugin flanger effect and explore flanging. How does it change the sound of different samples?
Stretching a sample out over a longer period of time is called time-stretching. Time-stretching may or may not change the frequency of the sample, depending on the plugin. Technically, changing the length of time a sample plays will raise the frequency when shortening the time, or lower the frequency when lengthening it, unless the effect allows for simultaneous frequency adjustment.
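The speed-to-pitch relationship can be made precise: doubling the playback speed raises the pitch by one octave (12 semitones), and in general the shift is 12 × log₂ of the speed ratio. A small sketch (my illustration, not from the chapter):

```python
import math

# Playing a sample at a different speed shifts its pitch: doubling the
# speed raises it one octave (12 semitones); halving it lowers it one
# octave. In general: semitones = 12 * log2(speed_ratio).


def pitch_shift_semitones(speed_ratio: float) -> float:
    """Pitch change (in semitones) from playing a sample at speed_ratio."""
    return 12 * math.log2(speed_ratio)


print(pitch_shift_semitones(2.0))            # → 12.0 (an octave up)
print(pitch_shift_semitones(0.5))            # → -12.0 (an octave down)
print(round(pitch_shift_semitones(1.5), 2))  # → 7.02 (close to a musical fifth)
```

This is exactly the coupling that a “change speed” tool exhibits and that a “change tempo” tool works to undo.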
In Audacity there are two ways to stretch out the time. The first is to change the tempo (Effects > Change Tempo). Here you will see the option to change the frequency (pitch) at the same time. This particular plugin is set up for music, so it gives the option in beats per minute, but it also has the option to change the length in seconds. This will create a stuttering effect. The second option to alter the time of the file is to change the speed (Effects > Change Speed). This will give you a smoother stretch (figure 4.17).
Exercise 4.19 Droning On
Use the speed and tempo tools (Effects > Change Speed | Effects > Change Tempo) to make a drone from an everyday sound. Try to stretch it out to different lengths, and try out different sounds.
Sound designer Walter Murch uses the term worldizing, which refers to creating a place or space in which a sound exists, a world that it inhabits, taking into account the acoustic atmosphere and the role of space in the perception of sound. It seems an obvious part of designing sound, but it hasn’t been all that long since these techniques began being used in film sound, for instance, or in game sound. I interviewed Nick Wiswell, who designs the sound for racing games, and he explains the idea in detail:
Figure 4.17
Sample dry (top), speed (middle), and tempo (bottom).
One of the big things for me, when simulating the sound of a race, is simulating the environment that the car is driving through. If you think about in the real world, you’ve got a car, you’ve got your barriers, you’ve got your things behind the barriers, you’ve got the general ambience of the world. Sometimes you’ve got tunnels. And if you’ve got a nice-sounding car, the first thing you do when you put it in a tunnel is you wind the windows down, you slow down as much as you can, and you floor it, just so you can hear what it sounds like. We want to simulate that.
But you also get those little sensations if you’re driving along and then suddenly a concrete barrier is alongside you, a foot from the side of the car, you’ll notice the sound completely changes as you hear the tires and the engine slap back off that wall. One thing I think racing games can focus too much on is the engine sound. You play a racing game and you hear it and it’s just engine. That’s all it is, is engine. And it’s like, well, surely the interaction of the way the car travels through the world should be just as important. Make it feel like a space. On the graphic side they start talking about HDR lighting and image-based lighting and the lighting reacting to the world around where you are. I want to reproduce that from an audio perspective, so we’re building a system in the game that allows us to model various early reflections and more distant reflections actually real-time. It’s not baked into a reverb effect. You’ll actually hear, as the car approaches a barrier, you’ll hear the sound start to reflect from the barrier. Behind the barrier, you could have buildings and then no buildings and you should hear that sound change in contrast as it does so. You’ll enter a big grandstand area and you should hear the sound sort of echoing around in there. Drive through a tunnel and you’re completely enclosed, so you want a completely different sound again, so we’ve started designing systems using multiple delays and multiple reverbs to start really modelling not just the sound of the car but the sound of the car in the world around it and how it changes.
So the car itself isn’t changing, but everything around it that the sound is reflecting off is changing, and that creates a whole different experience and that’s somewhere where we’re going to keep pushing further and further because we believe that’s where we can make the biggest strides, in making it feel like you are in this space. You’re in this space not just because the sounds are right, but how the sounds are reacting to the environment and how far away it is. Distance models on a lot of things in games aren’t realistic. It’s sort of, the sound is gone, pretty quickly as it goes off into the distance, but we’re working on a system now where you can hear the car two kilometers away. You don’t really hear the car anymore, but you’re hearing the sound reflecting off the environment and bouncing around and that’s what you’re hearing. You’re not really hearing the direct sound of the car. You’re hearing all the interactions of the car with its world and that all bounce around, and I think that’s a big push. If we can start simulating that a lot better, it will start feeling like you are in the space and the sound is reacting to the space you’re in. (quoted in Collins 2016, 312)
Although both Walter Murch and Nick Wiswell are speaking specifically about sound design to support visual media, the concept of worldizing is equally—if not more—applicable to sound design for audio stories. We’ll come back to this point in chapter 9, but identifying the location of a scene through sound is a key role that sound design plays in radio dramas and fiction podcasts. Worldizing can involve applying impulse responses of real spaces to make sounds feel more realistic in a particular environment, recording sounds in particular spaces to capture the change in sound, or using reverb effects creatively. Whatever technique you use on a sound should be used on all sounds in that particular scene, to create a realistic sense of the space, unless you’re trying to have certain sounds stand out by reducing or eliminating the reverberation.
Exercise 4.20 Worldizing
“Worldize” a group of sounds into three different locations: how does it change the way you hear the sounds? Try to use each of the techniques: record the sounds in one space, use the same reverb settings, or download an impulse response for a space and apply that.
If you don’t have access to a professional recording space, now that you know a little bit about acoustics it’s often possible to set up your own at home fairly cheaply, which will suffice for most beginner purposes. Many videos and tutorials can be found online, and numerous books are available about setting up a home studio on a budget. Here are a few tips:
In this article, Doyle talks about the introduction of room ambience to popular music recordings, comparing the techniques in music with film sound design of the era and tracing the techniques through the various changes in recording studios.
This may be an academic paper with a lot of big words in the title, but it’s an accessible and fascinating read about acoustics in prehistoric times that brings together a lot of what we’ve learned in this chapter.
Maynes talks about Murch’s approach to his remake of the sound for Orson Welles’s Touch of Evil, along with other recent approaches including Richard King’s sound work on Gattaca and others. A short, accessible read.
A video of Walter Murch talking about Worldizing and how he came about using it: http://designingsound.org/2009/10/07/walter-murch-special-the-concept-of-worldizing/.
A short (five-minute) NPR program about reverb in popular music.
1 In fact, this isn’t exactly true. It’s just that with so few molecules in space, the sound waves need to be very, very low frequency to travel from one molecule of space dust to the next. A black hole, for instance, emits an oscillation every 10 million years or so, and so creates a frequency about a million billion times deeper than we can hear! See Shiga 2010.