Introduction

The composer John Cage tells the following story in his book Silence: after he played the same sound on a loop nonstop for fifteen minutes to a class of students, a woman got up and ran screaming from the room, “Take it off, I can’t bear it any longer!” Cage turned the sound off, only for another student to ask, “Why’d you take it off? It was just starting to get interesting” (2013, 93). Throughout this book, I hope that you will learn to “find the interesting” in sound. I aim to take you on a journey from being the person who might run out of the room screaming in annoyance, to being someone who is very comfortable thinking and talking about sound, who can focus on sound and learn to listen to different aspects of sound. A person who finds that the more they listen, the more interesting their world becomes.

Other books about sound design are available, but in my experience, sound design tends to be taught as sound for moving image (that is, sound for film/television, animation, theater, or games). The reader is left with no time to cultivate an appreciation of just sound, or to develop a language and rhetoric of sound on its own, to explore the potential that lies in sound as a medium and as a rhetorical device. The complexity of sound on its own is often rushed through in order to get to the technical aspects of sound for moving image. Part of this oversight is the fault of an educational system that focuses on other aspects of media production, where the entire world of sound is often forced into a thirty-hour course over one semester. The result is that sound designers are always enslaved to the image, creating sound for that purpose, rather than developing their skills in actually designing sound. I’m not suggesting that sound design for moving image doesn’t have a purpose: clearly, it has a very specific purpose. I’ve spent years studying the relationship that sound has to image, and image to sound, and at this point have published seven books and countless research papers on that very subject. However, sound for moving image has often assumed the role of a subordinate: sound is there, we are told, to support the dominant image. The eye rules supreme in our ocular-centric Western culture. Is it any surprise that image dominates sound design practice and education, too? Of course, most sound design jobs are in film or games, so it’s understandable that sound design programs focus on sound for moving image, but training only in sound-for-image misses out on all the possibilities that can be created by sound design as “just” sound design.

An interview I conducted with a video game sound designer, Adele Cutting, made me think there may be a better way to teach sound design in schools, by focusing on sound before moving on to sound for picture. Cutting had been hired to design the sound for an audio-only video game: that is, a game designed with little to no visual component. She explained some of the differences when there’s no image to design to:

I worked on Audio Defence: Zombie Arena (Somethin’ Else 2014), the audio-only game, the zombie shooter, and that’s like the Holy Grail for a sound designer, isn’t it? An audio-only game! It was a short turnaround—like four weeks—to do all the sounds. And it took me a good couple of days—probably three days—which is a lot of time when you’ve only got four weeks, to get my head around it. Because all these tricks that I did [with visuals]: Say, you were making a giant sound, you learn every time all these tricks to make it weighty and heavy. But when there’s no giant’s foot falling [visually], it didn’t work. I really had to get my head around it. That there was no visual clue to hang on with, because I’m always talking about how audio fills in, how audio is the glue that holds everything together, and we fix things. We make things look better when the animator hasn’t had time to do this, so we’ll put a sound in, so nobody notices. We’re always fixing things, and if things are far too slow, you can add audio and it speeds it up. You can add audio and make it go slower, but all of a sudden, [without visuals] it’s just you. I found that game at the start very, very difficult because you have to be so focused. There can’t be any fat on your sounds. It’s just got to be the one thing that you need to hear, and you can’t mix in [with visuals]. . . . I found myself chucking a lot of things out with the sound, to get the focus on it. . . . I felt it was so important that if there was only one sound going to be playing, or if you could only focus on one thing at once, it had to be the right thing. (quoted in Collins 2016, 119)

I am proposing that sound design, as a practice, may be better approached as an art form that stands alone from image, prior to learning about the complex things that happen when we put sound and image together. In other words, before we learn to put sound to image (looking and listening), sound designers are better served learning to just listen. I’ve designed this book based on my own teaching of sound design for about fifteen years now at several universities and in industry presentations and workshops, with the aim of helping others to structure a course in sound design beyond image. In an ideal world, students would then go on to learn sound for moving image in another course, and sound for interactive media in yet another course. We don’t often get the luxury of teaching multiple courses on sound, however. Anyone studying visual production would get all kinds of courses in drawing, illustration, painting, printmaking, typography, digital arts, graphic design, and so on; sound designers rarely get that same kind of scaffolded and multifaceted approach to learning.

This book is about sound design as “just” sound design. I bring in examples from other media, but the many exercises I include are meant to focus the student of sound on just that—sound. But what does it mean to design sound? We hear the term “sound designer” applied to film or video games, but what exactly does a sound designer do? In fact, although the term is fitting, it was an almost accidental title. In the Hollywood movie system, a sound editor was (and still is) the person responsible for creating and selecting sounds for film (by substituting, eliminating, and adding to the original live recording or creating the sounds in postproduction). The term sound designer was first used to describe the work of Walter Murch. Director Francis Ford Coppola recalls:

We wanted to credit Walter for his incredible contribution—not only for The Rain People, but for all the films he was doing. But because he wasn’t in the union, the union forbade him getting the credit as sound editor—so Walter said, “Well, since they won’t give me that, will they let me be called sound designer?” We said, We’ll try it—you can be the sound designer. . . . I always thought it was ironic that “Sound Designer” became this Tiffany title, yet it was created for that reason. We did it to dodge the union constriction. (quoted in Ondaatje 2002, 53)1

Although the term sound design is most commonly associated with film and more recently video games, it is also applied to radio, theater, product design, and more. Traditionally, the goal of product sound design has been to reduce or remove sound, by engineering products that absorb (incorporating foam, perforations, etc.), block, or enclose the sound. Today, however, a growing awareness of the important role that sound can play in products is redefining the role of a product sound designer. Product sound design now shares many of the same concerns as film and game sound design: appealing to our emotions, rather than strictly conveying information.

Increasingly, sound designers are finding a role in the growing audio-based media world of podcasts, smart speakers, and audiobooks. We can also add to sound design the growing field of sound art, in which artists use sound to convey their thoughts and feelings and express themselves, much as they have done for millennia using visuals. Artists are no longer confined to canvas; they can create multimedia works that incorporate sound or make sound the primary focus of their work.

Sound design can take place at the level of a single discrete sound or at the level of an entire soundscape. Tomlinson Holman, the inventor of the THX sound format, provides a succinct definition of sound design that will suit our purposes: “getting the right sound in the right place at the right time with the equipment available” (2002, 26). Of course, describing what is the right sound is a more complicated process that requires further exploration. Sound designers must work within the constraints of context, in addition to budgetary and technical constraints. But there are additional elements that must be satisfied: the aesthetic choices made will affect the overall reception of the work or product. Is it pleasing? Is it annoying? Designers must make choices about sounds based on the ways in which they want the audience to (consciously or unconsciously) interpret the sound.

Designers design sounds by:

1. recording new sounds;
2. manipulating or processing existing sounds;
3. layering and combining sounds; and
4. synthesizing new sounds with hardware or software.

The first three are all at work in sound designer Ben Burtt’s account of creating the lightsaber sound for Star Wars:

I was a projectionist, and we had a projection booth with some very, very old simplex projectors in them. They had an interlock motor which connected them to the system when they just sat there and idled and made a wonderful humming sound. It would slowly change in pitch, and it would beat against another motor—there were two motors—and they would harmonize with each other. It was kind of that inspiration, the sound was the inspiration for the lightsaber and I went and recorded that sound, but it wasn’t quite enough. It was just a humming sound, what was missing was a buzzy sort of sparkling sound, the scintillating which I was looking for, and I found it one day by accident. I was carrying a microphone across the room between recording something over here and I walked over here when the microphone passed a television set, which was on the floor, which was on at the time without the sound turned up, but the microphone passed right behind the picture tube and as it did, this particular [set] produced an unusual hum. It picked up a transmission from the television set and a signal was induced into its sound reproducing mechanism, and that was a great buzz, actually. So I took that buzz and recorded it and combined it with the projector motor sound and that fifty-fifty kind of combination of those two sounds became the basic lightsaber tone, which was then, once we had established this tone of the lightsaber of course you had to get the sense of the lightsaber moving because characters would carry it around. They would whip it through the air. They would thrust and slash at each other in fights. And to achieve this additional sense of movement I played the sound over a speaker in a room. Just the humming sound, the humming and the buzzing combined as an endless sound, and then took another microphone and waved it in the air next to that speaker so that it would come close to the speaker and go away and you could whip it by.
And what happens when you do that by recording with a moving microphone is you get a Doppler shift. You get a pitch shift in the sound and therefore you can produce a very authentic facsimile of a moving sound. And therefore give the lightsaber a sense of movement and it worked well on the screen at that point. (Burtt 1993)
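Burtt’s trick works because of the Doppler effect: moving a microphone toward a speaker raises the pitch it picks up, and moving it away lowers the pitch. A rough sketch of the standard moving-observer formula shows the size of the shift (the hum frequency and microphone speeds below are my own illustrative values, not Burtt’s):

```python
# Doppler shift heard by a microphone moving past a stationary speaker.
# Moving-observer formula: f' = f * (c + v) / c when approaching the source,
# and f' = f * (c - v) / c when receding (v = microphone speed).

C = 343.0  # speed of sound in air, m/s (at roughly 20 degrees C)

def observed_frequency(f_source, v_mic, approaching):
    """Frequency heard when the microphone moves at v_mic m/s."""
    sign = 1.0 if approaching else -1.0
    return f_source * (C + sign * v_mic) / C

hum = 120.0  # Hz: a motor-like hum (illustrative value)
for v in (0.0, 2.0, 5.0):
    up = observed_frequency(hum, v, approaching=True)
    down = observed_frequency(hum, v, approaching=False)
    print(f"{v:>4.1f} m/s: {up:6.2f} Hz approaching, {down:6.2f} Hz receding")
```

Even a few meters per second of hand movement bends the pitch audibly up and down, which is exactly the “sense of movement” Burtt describes.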

The fourth means, synthesis, builds sounds from scratch. James A. Moorer, creator of the famous THX “Deep Note” logo sound, describes generating it entirely in software:

I set up some synthesis programs for the ASP [synthesizer] that made it behave like a huge digital music synthesizer. I used the waveform from a digitized cello tone as the basis waveform for the oscillators. I recall that it had 12 harmonics. I could get about 30 oscillators running in real-time on the device. Then I wrote the “score” for the piece. The “score” consists of a C program of about 20,000 lines of code. The output of this program is not the sound itself, but is the sequence of parameters that drives the oscillators on the ASP. That 20,000 lines of code produce about 250,000 lines of statements of the form “set frequency of oscillator X to Y Hertz.” . . . The sound was produced entirely in real-time on the ASP. (Whitwell 2005)
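What Moorer describes—a bank of oscillators whose frequencies are driven by a stream of “set frequency of oscillator X to Y Hertz” commands—is additive synthesis under score control. A minimal sketch of the idea (the oscillator count, sample rate, and score format here are my own simplifications, not the ASP’s):

```python
# A toy additive-synthesis oscillator bank driven by a parameter "score",
# in the spirit of Moorer's description. All names and values here are
# illustrative; the real ASP ran ~30 cello-waveform oscillators.
import math

SAMPLE_RATE = 8000  # Hz, kept low so the example runs quickly

def render(score, duration):
    """score: list of (time_sec, osc_index, freq_hz) 'set frequency' events."""
    n_osc = 1 + max(osc for _, osc, _ in score)
    freqs = [0.0] * n_osc    # current frequency of each oscillator
    phases = [0.0] * n_osc   # running phase, so frequency changes are click-free
    events = sorted(score)
    samples = []
    for i in range(int(duration * SAMPLE_RATE)):
        t = i / SAMPLE_RATE
        while events and events[0][0] <= t:   # apply due "set frequency" events
            _, osc, f = events.pop(0)
            freqs[osc] = f
        s = 0.0
        for osc in range(n_osc):
            phases[osc] += 2 * math.pi * freqs[osc] / SAMPLE_RATE
            s += math.sin(phases[osc])
        samples.append(s / n_osc)  # normalize the mix
    return samples

# Two oscillators gliding to a unison: a two-oscillator, three-event "score".
out = render([(0.0, 0, 200.0), (0.0, 1, 205.0), (0.5, 1, 200.0)], 1.0)
print(len(out), max(out))
```

The “score” stays separate from the sound-producing engine, just as in Moorer’s account: change the event list and the same oscillator bank renders an entirely different piece.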

This book focuses on the first three of these four means to design sound. The programming and use of synthesizers to create sounds is a fascinating topic, but it requires at least a book of its own as well as more advanced skills. Likewise, interactive sound also requires a separate book to understand the complexities and software involved.

The aim of this book is to provide material that builds, chapter by chapter, on the work you have already learned and put into practice. I have interwoven theory and suggested further reading and listening materials throughout, in the hope that you will take it upon yourself to improve your skills by exploring the many resources available to help you learn about sound. I have suggested exercises to help you put the theory into practice, and while you may not want to complete all of them, I believe that the more you undertake, the better you will become. Most of the exercises can be done on your own, so there is no need to be enrolled in a class, but a handful are better experienced with the participation of a partner or class.

In my experience, many introductory books on sound can get very technical with lots of equations and physics, which might put off a beginner coming at the field from an artistic background. It’s my goal to focus on the creative side of sound design, and give you just enough of a technical foundation to get you started so you can put your creativity to work. Learning more about the technical side is an important step in a professional sound designer’s training, but in my opinion that can happen after you begin to feel comfortable with the terminology and tools available.

I use Audacity as the software sound editor for the examples that demonstrate the techniques in this book. The reason for this choice is simple: it’s free. Audacity has its limitations, and if you’re serious about sound design you’ll quickly outgrow it, but if you’re just dipping your toes into the waters of sound design, it’s a great cross-platform tool or complement to the other tools in your digital audio workstation (known as a DAW). It’s important to note that Audacity is designed as a sound editor rather than a multitrack editor: it’s great for editing individual sounds, but as we’ll see, it becomes more limited when mixing multiple tracks. The exercises can be undertaken in any other audio software you are comfortable with, such as Audition, Logic, Pro Tools, or Reaper.

As well, there is a companion website to this book at studyingsound.org that provides examples, tutorials, some of the reading material, links, videos, and other resources that you can consult as you travel on your sound design journey. I’d love to hear about your successes with the exercises, and if you’d like to share your work, be sure to keep in touch!

The book is arranged in a scaffolded fashion: ideally you should follow sequentially, so that you can build on your skills as you go. In chapter 1 we start with learning to listen, and begin to think about sound in a new way to train those ears to do what they were born to do. In chapter 2 we’ll begin to develop a language to talk about sound as an acoustic phenomenon, and learn the basics of digital audio. Then we’ll turn our attention to recording and learn the basics of microphones in chapter 3. Of course, sounds don’t exist in a vacuum, and the space in which sounds occur is important—and the focus of chapter 4. We’ll begin to explore digital audio effects that mimic spatial effects, and then take a deeper dive into some of the other types of audio effects in chapter 5. Chapter 6 puts those technical skills together in exploring the theory and practice of mixing. In chapter 7 we’ll explore an overview of spatial sound, or “3D audio.” Chapters 8 and 9 take a different tack, and present some of the useful theories for understanding sound design and putting those into practice in creating sound for story.

Setting Up

To get started, we need to set up our listening space and our software. Try to find somewhere quiet to work in. While an increasing number of sound designers use home studios, it’s important to minimize external noises in any audio workspace. If you have the option, choose a bigger room, which will generally sound better than a smaller room, but it’s important that this room is as far from neighbors or external noise as possible—a basement is generally the quietest. Ideally, you can use some basic acoustic treatment to help reduce reverberation: foam and/or acoustic tiles can be placed strategically around the room. A blanket can be hung over a window to cut down some of the noise (and the window should be closed). A quick search online for “home studio on a budget” can provide some useful tips for your particular space and link to items you can purchase in your country to improve your sound experience.

If you are on a budget and don’t have a private space to work in, a good pair of headphones is an important investment, and headphones are suitable for someone just starting out. Eventually you will want to purchase a good pair of studio monitors, but for this book headphones will suffice. You want a pair of over-ear or on-ear headphones rather than earbuds, both for comfort and for audio quality: the smaller the transducer, the less able the headphones are to reproduce lower frequencies. Although wired headphones will give you better quality, wireless Bluetooth technology is improving. Wireless technology can be subject to electrical interference, however—I used to pick up the dispatch from the local fire station on mine! I keep mine plugged in when I work. If you can afford to purchase new headphones, do some online research on the current best options for your budget.

As mentioned above, we are going to be using Audacity. To install it, download the software from the project’s website, https://www.audacityteam.org. In addition to the basic software, throughout the book you’ll need to install some extra plugin modules. Instructions for downloading and installing plugins, and links to the available plugins, can be found on Audacity’s website as well as at studyingsound.org.

We will also need some basic recording equipment for the exercises. I recommend using an external microphone with your recorder (particularly if you are using your phone as your recording device). You can always purchase more expensive equipment later, but a basic kit is much cheaper than it used to be. A simple handheld recorder like the Zoom H4N or Tascam DR-40 is under $300. You want to get a recorder that has at least one XLR input and one line input, so that you can plug in an external microphone (professional microphones typically use XLR). These handheld recorders also usually come with built-in stereo microphones, but you’ll find yourself using the external inputs more than the built-in mics. You can start with the built-in microphones, and, if you can afford it, add an external microphone—a shotgun microphone is a common first microphone to purchase for your sound design kit. As with much technology, the price of microphones has been dropping, and today the difference between a cheaper microphone and the high-end professional microphones can be minor in some cases. Certainly, while you are starting out and training your ears, the high-end microphones are not necessary. As your listening ability improves (and your playback technology also improves), you will notice the difference between the cheaper and the more professional microphones. Most sound designers have a selection of microphones for different purposes, but we’ll come back to that later.

1 In fact, Coppola misremembers, as Murch is credited as creating the “sound montage” in The Rain People (1969), and was first given the title of sound designer for his work on Apocalypse Now (1979).