1 Hearing and Listening

What is the difference between hearing and listening? Lift your eyes from this page and look straight ahead for a moment: notice that we see many more things than what we are looking at directly. We can focus on an object, but the eyes—and the brain—take in far more of our surroundings than the object of focus, whether or not we are aware of it. I’m looking at my computer monitor, but I see my speaker monitors behind it, and the posters on the wall behind them, and my shelves off to the side, and a second desk off to my left where another computer sits. My dog curls up in the corner. There is a black rug under my chair, and I see my arms moving, and all kinds of small details that aren’t part of what I’m looking at. I see far more than I usually observe. A similar perceptual phenomenon happens with hearing. We are surrounded by sounds, and most of the time we are in a passive hearing mode, actively listening only when we are talking with someone (and actually paying attention to what they say!), in a potentially dangerous situation like crossing the road, or listening for the answering “beep” to a text message. Even the music we choose is often just on in the background, a wallpaper of noise. Most of the time, sounds are just there, all around us. We listen with ears half open, not consciously paying attention to sound unless it’s something we are actively focusing on. We hear without listening, just as we see without looking.

Can we train ourselves to listen? How can we become better listeners? Like anything else we learn, what we need is practice, and we start our journey into sound design by becoming more aware of the sounds around us. We can learn, over time, to spend less time with ears half open and more time actively listening. Like a photographer walking around and mentally framing shots while scanning the landscape, we can learn to be aware of, and to think about, the sounds around us. Becoming a listener doesn’t happen overnight, but with time and patience and practice, you will find yourself noticing more and more of the sounds around you. You’ll find yourself hearing sounds that others haven’t noticed, sounds that you never noticed before, and, sometimes, sounds you wish you hadn’t noticed! Unfortunately, once you’ve opened your ears, the world becomes a very noisy place.

This chapter will introduce hearing and listening and begin to provide a language for thinking and talking about sound. Listening is a skill that should be practiced and returned to again and again, until it becomes second nature. There are many exercises here to get you thinking about the sounds you’re hearing, and to train you to listen to them instead of just hearing them. Training your ears is just like training your muscles in the gym: you can’t transform yourself overnight. You have to keep going back and working at it, and the work must be sustained or you’ll find yourself losing your gains.

Exercise 1.1 Quiet Time

We’ve probably all tried sneaking into our house at night: every sound we made seemed suddenly amplified. Trying to be quiet is a great way to focus on actively listening. Try standing up from your seat without making any sound. Try it again with eyes closed. Listen to the sounds. How was the process of listening to your own sounds different from the way that you normally hear sound? (adapted from Schafer 1992)

1.1 Talking and Writing about Sound

Throughout this book you will find exercises and suggestions to get you to think about, experience, and practice sound in new ways. Keeping a notebook to write down your thoughts will help you to formulate your own ideas about sound and track your progress. You might also take a few moments to compare your own thoughts with those of friends, colleagues, or classmates as you follow along, or check the companion website (studyingsound.org) for another perspective.

Purchase a new journal for your sound practice. It helps if it’s pocket-sized. You might wonder why I suggest a paper notebook and not your computer or phone. In theory, you could use a portable computer (laptop, phone, or tablet), but you’ll find a pocket notebook will be handy to keep with you on a walk where you may not want to bring a computer (for instance, out in the rain). A phone isn’t as effective to take notes on because the act of typing on a touchscreen requires you to focus visually on the phone and concentrate on that rather than the sound, which can interfere with the practice. You may also want to use your phone for other aspects of the exercises in the following chapters, as a pocket recorder, for instance, or to check frequencies or the volume of sounds you are hearing.

Once a day, practice sitting still for five minutes and writing down what you hear. You can sit in a different place, or sit in the same place. Sit at different times of day, and in different moods, or at the same time and place and in the same mood. What matters isn’t so much what sounds you hear as the practice of actively attending to, concentrating on, and thinking about those sounds. You need to do this daily, rather than trying to pack in a week’s worth all at once, because you need to start training yourself to listen, and this takes time. If you’re serious about sound, listening is the most important skill you can have.

In addition to this daily exercise, keep writing down your thoughts about the other exercises and any related readings or news media you come across, and note any interesting sounds you hear in real life or in movies or other media, so you can reflect on your learning and refer back to it in a few months to see your progress. You may also come up with some great sound design ideas that you don’t want to forget, and your sound journal is a great place to jot these down as you go.

To be a sound designer, we need a language to talk about sound. Language is one of the tricky aspects of dealing with sound. And even after we’ve grasped the language, chances are we’re going to have to talk about it with someone who hasn’t yet learned that language! As children, we’re taught a lot about visuals. We learn about shape and color and texture, and we learn the language to talk about these. If I asked you to draw a circle with a diameter of five centimeters and fill it in with a smooth, lime green color, you could probably come up with something very similar to what I have in my mind. But how do we talk about sound? Sound is time based, which makes it more difficult to describe, and it’s never the same twice. Even if we use an electronic reproduction of a sound, we don’t hear it the same way twice, and the environment in which it’s played is always shifting and plays a role in our hearing. More importantly, we’re also usually not taught a language to describe sound unless we are referring to musical sound, which has its own specialized language and doesn’t actually refer to the sound of the notes played, only the notes themselves.

Exercise 1.2 Describing Sound

Undertake this exercise every day, and we’ll build on it as we go: Take five minutes and sit quietly, writing down all of the sounds that you hear. The first time you try this exercise, you might come up with a list a little like this one, which records the sounds occurring as I type this out:

  • Music in the background
  • A car driving by
  • My fingers typing on the keyboard
  • Breathing of my dog next to me
  • A scraping sound of someone shoveling snow outside
  • The backup beepers on a truck at the construction site
  • My own breathing
  • Whirr of the heating duct
  • Hum of the overhead light

This list is a good start, but it reflects how we’ve been taught to listen in the past; let’s dig a little deeper here.

1.1.1 Sounds and Their Causes

There are two ways I’ve described the sounds I heard in my listening exercise 1.2. The first is in terms of their cause—in other words, the thing that is causing, or making, the sound: for instance, “a car driving by.” The problem with such a description is that it only tells you what sound I’m hearing if you know what type of car is being driven (a truck sounds different from a Porsche), what the weather conditions are (tires in rain sound different from tires on a dry road), what time of day it is (a car in the middle of the night will appear to sound louder), what the speed of the car is, what gear it is in, what the mechanical condition of the car is (is there a hole in the exhaust?), what kind of tires it has (winter tires make a different sound from summer tires), and more. Without all of this detail, we might conjure up a generic concept of “car-ness,” but it’s not a very accurate descriptor of what I heard. How the car is moving is an important indicator of what is happening: is the driver squealing the tires with bass thumping out the windows, or creeping past very slowly and eerily, suggesting some form of surveillance or stalking? These are two very different sounds! We have to agree on what my description of the car means to even begin to guess at all of the associations with the sound of a car driving by.

Let’s look at another from my list: “my fingers typing on the keyboard.” We’ve all typed on a keyboard, but keyboards have very different sounds, and the speed of typing depends on the skill of the typist. The volume of the typing might depend on whether the typist is frustrated or angry. The tempo may be altered if they are stopping to think about what they are typing, or if they know what they are going to type in advance. An Apple keyboard with its low-lying keys sounds very different from a cheap PC keyboard. I have one key that sticks and requires me to hit it harder. So again, “typing on a keyboard” is not really an accurate description of what typing sounds like, only of the cause behind the sound. The first thing we can learn as sound designers is to be more descriptive in our journals. Moving forward, as you practice listening, get as descriptive as possible about each sound. This requires us to really concentrate on the many attributes that make up a sound, rather than just the cause behind it. Concentrate and think about the sounds you hear, and imagine describing them in a way that someone could use to reproduce them.

1.1.2 Onomatopoeia

The second type of description I used in exercise 1.2 relates to onomatopoeia: a word formed by imitating the sound it describes. I used the “whirr” of the heating and the “hum” of the light. We use these types of descriptions for animal sounds a lot: a dog’s “bow wow,” for instance. But did you know that onomatopoeia is dependent on language and culture? A dog says “av-av” in Serbian and “hong-hong” in Thai. So much for using words to describe sounds! If you’re a gamer or anime fan, you’ve probably heard the phrase “doki-doki”: this is the Japanese term for the heart beating quickly. It’s not just a literal sound; it also carries the meaning that one is in love and one’s heart is racing. The Japanese actually separate onomatopoeia into three categories, and about 1,200 Japanese words are cases of onomatopoeia, compared to only about 400 in English (Kincaid 2016).

Exercise 1.3 Gerald McBoing-Boing

Dr. Seuss created a character called Gerald McBoing-Boing (TV series, 1956) who talked in onomatopoeic sound effects: “When Gerald started talking, you know what he said? He didn’t speak words—he went boing boing instead!” The animation uses sound effects, but the book relies on onomatopoeia to describe the sounds. How many onomatopoeic words can you think of off the top of your head? How much can you communicate with onomatopoeia alone? Try to write an entire day’s journal entry using just onomatopoeia (hint: you’re going to have to invent some new examples).

1.1.3 The Importance of How We Think and Talk about Sound

How can we describe sounds in a way that everyone understands? To do this, we need to learn more about acoustics and use a more precise technical language for sound. There is so much more to sound than what was in my list in exercise 1.2. If you look at that list, you’ll see I didn’t describe where sounds were occurring in space in relation to my position: was the music in front of me or behind me? Did the car drive by on my right or my left? How far away was the construction site? The placement of sound in relation to our own bodies also affects the way we perceive it. We’ll be tackling a language to describe sound and focusing our ears on all of these issues in the coming chapters. Gradually, as we progress through our journey, we will learn to fine-tune our descriptions of what we hear. For now, start to think about how descriptive you can get about the sounds you hear in your daily listening practice. Try to capture as much information about each sound as possible. The more descriptive you get, the more you’ll find you need to really focus on the sound itself and not just its cause.

How we think about and talk about sound influences how we use sound in our creative processes. In sound libraries—collections of sound effects recorded for our use as sound designers—sound effects are often categorized by what caused the sound: “airplane” sounds, for instance, or “bird sounds.” But sounds could instead be categorized by what we might use them for: “scary sounds” (hawk or crow sounds are often used in horror), or “morning” sounds (a rooster, or the dawn chorus). When we design sounds for media, we often use sounds that are not tied to their actual cause. For instance, we use the snap of frozen celery sticks for the breaking of a bone. Who would think to look for “vegetable sounds” in a sound effects library for their horror film unless they were aware of these uses?

In other words, how we describe sounds to ourselves and to others can influence the creative uses of those sounds. It’s important, then, to think “outside the box” in our descriptions and categories, and to move beyond causality into other aspects of sound. To do that, we need to practice listening, and we need to learn a new language for talking about sound.

Exercise 1.4 Categorizing Sound

Take a list you created in one of your daily listening exercises, and think of the ways in which you might categorize these sounds. For instance, you might divide the list into opposing elements:

  • natural—human-made
  • pleasant—unpleasant
  • quiet—loud
  • rough—smooth
  • low—high
  • discrete—continuous
  • near—far

What other categories can you come up with to group your sounds? What do the categories tell you about the types of sound that you hear, and the ways that you think about sound?

Exercise 1.5 The Sound Walk

Sound walks are simply walks taken while paying attention to sound, rather than sitting in one place, so that we can experience several different places and listen to the changes. On a sound walk, we are quiet and listen with attention to all of the many sounds that we normally ignore. We can do a sound walk alone or with a partner who guides us, blindfolded, along the route. The sound theorist Hildegard Westerkamp (2007) suggests starting by listening to your own body while moving. Listen to your footsteps and how they change on different surfaces. Make a sound by clapping your hands or whistling, and try it in different rooms. How does it change? Once you’ve practiced listening to yourself, pay attention to the environment. Do you hear other people? Can you detect rhythms? What are the loudest and quietest sounds that you hear? Focus on a sound and walk toward it, noticing how it changes with proximity. Move indoors. How does sound change in different environments? How did your listening ability change when you couldn’t see?

Exercise 1.6 Destined to Repeat

“In Zen they say: if something is boring after two minutes, try it for four. If still boring, try it for eight, sixteen, thirty-two, and so on. Eventually one discovers that it’s not boring at all but very interesting” (Cage 2013, 94). Find a sound that at first might seem boring, but after repeated listening becomes much more interesting. How does the sound (appear to) change over time? Describe it!

Exercise 1.7 Listening for the First Time

Listen attentively to something that you typically hear but never listen to, such as the full cycle of a dishwasher, washer, or dryer. What did you hear that you never noticed before? How difficult was it to pay attention for such a long time? Did you mentally add beats, or musical notes, or anything to force a structure or pattern onto it? How long were you able to listen before your mind started wandering? Can you train yourself to listen for longer? Repeat this exercise after you’ve finished the book, and compare notes with your first listen.

Exercise 1.8 Soundmarks

R. Murray Schafer, one of the first acoustic ecologists, writes, “Just as every community has landmarks which make it special and give it character, every community will also have original soundmarks. A soundmark is a unique sound, possessing qualities that make it special to a community” (1992, 123). Examples might be a local public clock, foghorns, trains, and so on. Find and describe the soundmarks in your community—either your home, your neighborhood, or the entire city.

Exercise 1.9 Sonic Fingerprint

What sounds are personal to you, such that others might identify you by them? For instance, my dog used to be able to identify my car from all the others that went by our busy street and would run to the window when he heard it. One exercise I try in my classes is to have four students come up to the front with their sets of keys. While they face the front, another student stands behind them and subtly shakes each set of keys in turn. Can the students recognize their own keys by the sound alone? I find that the majority of the time they can, even though they’ve never consciously paid attention to the sound before. Think about your own personal sonic fingerprints (perhaps your car, an unusual walk, or your keys) and come up with a list of sonic ways that someone close to you might be able to identify you. (adapted from Schafer 1992)

Exercise 1.10 Sound Timer Reminder

Send yourself a little reminder to stop and listen. We can get distracted pretty easily and forget to pay attention to what is around us. Set a timer on your phone or watch to go off a few times a day. When it does, take sixty seconds to focus on and listen to the sounds of wherever you are. Listen to how basic sounds change depending on the environment: your footsteps change based on the temperature outside, what you’re walking on, what mood you’re in, what the weather is, what other sounds are around you, where you are, and so on. Pick a sound to focus on, like footsteps, clicking your fingers, or your breathing, and write down how that sound changes throughout the day.

1.2 The Ear and the Brain: How We Hear

While we focus on listening practice, it’s worth understanding what is happening on the biological side of hearing. In a sense, we hear with our whole bodies and not just with our ears. Our bodies have resonant cavities in which sound vibrates: our lungs, our bones, and even our eyeballs resonate at different frequencies. Scientists have measured the human body’s base resonance at between 5 and 16 hertz (Hz) (Kitazaki and Griffin 1998). Different parts of our body vibrate at different rates, though, with the head resonating at between 20 and 40 Hz (Hz is a measure of vibrations per second; we’ll come to that in chapter 2).

The human eyeball typically resonates at about 19 Hz, which is below the normal threshold of hearing (meaning we can’t hear a sound at 19 Hz). In the 1980s, a scientist named Vic Tandy was working in a “haunted” medical lab that left many people feeling uneasy. One day he brought in his fencing sword and noticed it vibrating. He discovered that the sword vibrated at about 19 Hz, and traced the vibration to a fan in the building. Shutting the fan down ended all the reports of ghosts. Tandy later tested the theory in a fourteenth-century “haunted” pub cellar and found the same frequency (see Jasen 2016). Could it be that what we call ghosts are just cases of our own eyeballs resonating? More recent work has found that the roar of a tiger sits at about 18 Hz and may disorient and paralyze prey in advance of an attack by resonating their eyeballs (American Institute of Physics 2000). Different frequencies of sound, in other words, affect our physical bodies in different ways.

In addition to sensing sound through our bodies as a whole, a common means of hearing is through bone conduction, and hearing-impaired individuals can gain some hearing sense through this method. It’s been reported that the composer Ludwig van Beethoven used bone conduction, via his jawbone, to hear after he went deaf: clenching a wooden rod in his teeth and attaching it to the piano, he could sense the vibrations through his jaw (Larkin 1971). Bone conduction bypasses the eardrum and vibrates the inner ear directly through the bones of the skull. Bone conduction headphones sit on the bone in front of (or behind) our ears, and are used by the military because they don’t cover the ear canal and so can supplement regular hearing for communication. In this way, we can hear everything going on around us with our ears, plus any communication through the headphones. Apple was recently granted a patent for a method to incorporate bone conduction technology into its own headphones, so bone conduction is likely to become more commonplace in the future (Dusan et al. 2013).

Exercise 1.11 Bone Conduction Headphones

If you’re particularly interested in bone conduction, or want a set of headphones you can wear while also listening to the world around you (while jogging, for instance), you can purchase some bone conduction headphones for a reasonable price. If you have access to a pair, write down your experience of bone conduction listening to music or sound in your journal. How does the sound through bone conduction differ from regular headphone listening? What aspects of the sound are emphasized? Do you hear more or less through the bone?

Exercise 1.12 Bone Conduction with a Dowel

Here we will repeat Beethoven’s technique. Get a wooden dowel (3 mm, or about 1/8 in., in width is enough, at about 40–50 cm—roughly 16–20 inches—long) from your local hardware store. Put earplugs in your ears or use your fingers to block your ears. Put one end of the dowel between your teeth and bite down. Put the other end on a speaker, piano, guitar, or other vibrating surface. How does this alter what you hear? What aspects of sound do you miss out on?

Exercise 1.13 Bone Amplifier

In this experiment we will build a jawbone conduction amplifier (adapted from Oakland Toy Lab, n.d.).

Equipment List

  • Two wires, about 30 cm (12 in.) each
  • Wooden dowel, about 6 mm (1/4 in.) in diameter and about 10–15 cm (4–6 in.) long
  • 3.5 mm audio plug, or “mini jack” (male), for soldering—you may have to purchase one with a cap, which you should remove
  • DC motor, 1.5–3 V, about 15,000 RPM
  • Wire stripper
  • Soldering iron and solder
  • Drill with 1/16 in. bit

Strip about 2 cm (3/4 in.) of insulation from each end of the wires.

Solder one end of each wire to one of the tabs on the motor.

Figure 1.1

Equipment to build a bone conduction amplifier.

Solder the other two ends onto the tabs on the jack.

Drill a hole in one end of the dowel with the drill bit.

Push the end of the motor into the end of the dowel with the hole in it. You may have to wiggle it or put some pressure on it to get it to sit firmly in the hole.

Plug it into your computer’s, stereo’s, or phone’s headphone port and bite down on the dowel. You’ll need to turn the volume right up, particularly if you’re using it with your phone. Keep your fingers off the dowel so it can vibrate correctly.

Put in some earplugs or plug your ears with your fingers and listen!

1.2.1 The Outer Ear

While bone conduction is interesting, most of our hearing takes place through our ears. The outer, fleshy part of the ear is known as the pinna (plural pinnae). Another name for this visible part of the ear is the auricle. The pinna funnels sound toward our ear canal. If our ears were cut off, we could still hear, but it would be much more difficult, particularly in localizing (finding the direction of) sounds. With the pinna funneling sound into the canal, we can have a greater sense of our auditory environment and directionality. High frequencies reflect off the pinna in ways that differ according to the angle of the sound. Because we all have differently shaped ears, we hear sound slightly differently. In fact, our pinnae are so unique that earprint identification can be used in forensics like fingerprints (see, e.g., Meijerman, Thean, and Maat 2005).

Approximately 2 to 3 cm inside our ear holes—the auditory canal—is the eardrum, also called the tympanic membrane (timpani are the kettledrums used in the orchestra). Unlike the skin of a drum, though, the tympanic membrane of our ear is a very delicate, thin membrane, approximately 0.1 mm thick. It can be easily pierced, which is why sticking anything into the ear canal—like cotton buds—is dangerous. The tympanic membrane is so sensitive that it can even be pierced by very loud sounds or by pressure changes such as those experienced when scuba diving or flying in an airplane. The many nerve fibers in the membrane make the eardrum very sensitive to pain. The tympanic membrane vibrates with the different sounds that enter the ear canal and transmits those vibrations to the bones of the middle ear, where they are amplified for hearing.

1.2.2 The Middle Ear

The middle ear consists of the space between the tympanic membrane and the oval window. This hollow space is known as the tympanic cavity, and is surrounded by the tympanic bone, which can function as a bone conductor. The tympanic cavity works as an amplifier that takes the vibrations from the tympanic membrane and transmits them to the inner ear via three tiny bones called the ossicles: the hammer (the malleus), the anvil (the incus), and the stirrup (the stapes). The malleus and incus evolved from the upper and lower jaw bones of reptiles, a development that has been traced in the fossil record. The shape and arrangement of these bones determine our frequency range and sensitivity, which is why some mammals can hear ranges of sounds that humans cannot. The last of the three bones, the stapes, sits inside a membrane-covered opening in the bony separation between the middle and inner ear, known as the oval window.

The tympanic cavity is connected to the nasal cavity by the eustachian tube, which allows us to equalize pressure in our ears. We can manually adjust the pressure (for instance if we are scuba diving) using what is known as the Valsalva maneuver, in which we pinch our nose closed and then blow out gently. Blowing too hard can damage the ear, so this must be done carefully.

Figure 1.2

Diagram of the ear (not to scale).

1.2.3 The Inner Ear

When the stapes vibrates, it moves the fluids in the inner ear. Unlike the other areas of the ear, the inner ear is filled with fluid and is responsible for both sound and balance. The inner ear contains the semicircular canals—three ring-like structures that are responsible for our sense of balance. As fluid, called endolymph, moves around the canals with the position of our head, sensors are triggered that allow our brain to determine our head position. Two vestibular sacs in the inner ear—the saccule and the utricle—provide information about linear acceleration and gravity.

The other area of the inner ear, more important for sound, is the cochlea, a snail-shaped organ consisting of many tiny hair-like structures known as cilia. The entire length of the cochlea is lined with these cilia, and these are each attuned to different frequencies. As a sound wave moves in the cochlea, different frequencies will trigger different cilia by bending them slightly, sending electrical signals via the auditory nerve to our brain to tell us which frequencies were heard. There are many thousands of these cilia (between 12,000 and 24,000) gathering sound waves and sending impulses to our brain.

A large part of the cochlea is dedicated to the middle frequencies, with a peak range of 3,500–4,000 Hz. Most of what we hear in our world—including music and speech—falls in this range. In fact, when the gramophone was invented, most records (78 RPM shellac discs) until the mid-1940s had a top range of about 4,200 Hz (see Browne and Browne 2001). Even though we can hear much higher frequencies, most of our hearing takes place below about 8,000 Hz, and we are particularly sensitive to the speech range.

1.2.4 Hearing Development

The hearing organs start to grow in a fetus at just three weeks of pregnancy, and by week eighteen a baby will begin to hear sound. Soon after, the baby will begin to respond to voices or other noises it hears. Because there is a barrier between the baby and the world, the volume is muffled to about half of what we would hear outside the womb. But the baby can also hear sounds in the mother’s body—the grumbling of the intestines, the heartbeat, and so on—and these sound much louder to the baby than they would to someone outside the body. Before a baby is even born, it can recognize the sound of its mother’s voice. You may have heard of attempts to increase a baby’s intelligence by playing music to it in the womb, but there is no evidence that this works. What babies do learn, though, is the rhythm and cadence of what will become their native language—they can tell the difference between English and French, for instance—and after they are born they can recognize the rhythm and pattern of stories that were read to them in the womb.

Figure 1.3

Diagram of the cochlea.

Exercise 1.14 Our First Sounds

Write in your journal what it must be like for a baby hearing sound from the womb. What sounds would they not be able to hear because of the muffled barrier of the womb? What sounds would they hear more loudly because of where they are?

Exercise 1.15 Hearing, not Listening

We hear constantly, even in our sleep, to the point where sounds can shape our dreams. Can you recall any dreams you’ve had where an external sound entered your dream? I know I’ve heard my phone ring in my sleep, then gotten up and discovered it hadn’t rung after all. Try setting a timer on your computer or phone to play a sound quietly just before you wake up (before your alarm clock, if you set one), and see if it gets incorporated into your dreams.

1.3 Human Hearing Ability

Humans generally have a hearing frequency range of about 20 Hz to about 20,000 Hz (20 kilohertz, or kHz)—from twenty vibrations per second up to twenty thousand vibrations per second. As we age we lose the higher frequencies, with this deterioration beginning at about age eighteen. Most people over about the age of thirty have already lost the top few thousand hertz. Fortunately, there isn’t much in that range that humans need to hear, so you likely will not notice. Currently there is nothing we can do to combat this age-related hearing loss. Some people have used this type of hearing loss to their advantage: the “mosquito” ringtone for phones sits at about 17 kHz, and is designed for young people to use (in classrooms, for instance) without the knowledge of older people, who won’t be able to hear it. Older people have turned the same idea around with “mosquito alarms,” which are played outside some convenience stores to deter teenagers from hanging around.

Sound above our upper frequency limit is called ultrasound. You’re probably familiar with dog whistles: dogs can hear tones above our own hearing range, and most dog whistles are pitched at about 22 kHz. But even dog hearing is unimpressive compared to some other creatures: bats echolocate at frequencies of up to about 200 kHz, and the wax moth can hear sounds as high as 300 kHz. Sound below our lower frequency limit is called infrasound, and some creatures can hear well into that range, too: humpback whales have been recorded singing as low as 3 Hz, and the mantis shrimp, which can make sounds as high as 100 kHz, is also capable of sounds as low as 1 Hz.

Exercise 1.16 Imagining the Hearing of Others

We now know that some animals hear sounds we can’t hear. But what’s even more remarkable is the way that some animals hear. Some fish have cilia along a line on their sides, known as the “lateral line,” so their whole body responds to sound waves. One type of squid, the longfin inshore squid, changes color based on sound—its chromatophores respond to changes in the environment, including sound. This exercise is a practice in creativity: imagine your sonic environment from the perspective of another creature, and write down what your listening environment sounds like.

Exercise 1.17 Online Hearing Test—Frequency Responses

You can test your hearing using a tone generator, which you can find at studyingsound.org. Use headphones. Set the volume of your computer to a comfortable level. Start in the middle range, which, as you learned when we discussed the cochlea, is not the arithmetic middle but the range where our hearing ability peaks, at about 3,500 Hz. Reduce the frequency to the point where you can no longer hear the sound, and record the lowest frequency that you can hear. Note that the low sounds may drop off because your headphones or computer can’t reproduce those frequencies, not because your hearing is damaged. Now try going up in the other direction. What is the highest frequency that you can hear? A professional audiologist will test your frequency range for speech, but rarely tests above or below speech levels (in my experience, an audiologist tested only 200 to 8,000 Hz). You may need to use a subwoofer or studio speakers (monitors) to get a more accurate representation of your low-frequency threshold.
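If you’d rather script your own test tones than rely on the website’s generator, the sketch below writes pure sine tones to WAV files using only Python’s standard library. This is a minimal sketch of my own, not the book’s tool: the file names and the list of test frequencies are illustrative starting points, not prescribed values.

```python
# Minimal test-tone generator (Python standard library only).
# The chosen frequencies and file names are illustrative.
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second; can represent tones up to 22,050 Hz

def write_tone(frequency_hz, path, seconds=3.0, amplitude=0.3):
    """Write a pure sine tone to a 16-bit mono WAV file."""
    frames = bytearray()
    for n in range(int(SAMPLE_RATE * seconds)):
        sample = amplitude * math.sin(2 * math.pi * frequency_hz * n / SAMPLE_RATE)
        frames += struct.pack("<h", int(sample * 32767))  # 16-bit little-endian
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)            # mono
        wav.setsampwidth(2)            # 2 bytes = 16 bits per sample
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

# Start near peak sensitivity (about 3,500 Hz), then step down toward 20 Hz
# and up toward 20 kHz to find the edges of your own hearing range.
for freq in (3500, 1000, 250, 60, 30, 20, 8000, 12000, 16000, 18000):
    write_tone(freq, f"tone_{freq}hz.wav")
```

Keep the amplitude modest and your playback volume comfortable: testing your hearing range is no reason to damage it.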

Exercise 1.18 The Cocktail Party Effect

Most hearing tests will play multiple sounds at once to see how well you differentiate speech from other background sounds. The cocktail party effect is the brain’s ability to focus on and differentiate sound in a noisy environment—like trying to listen to someone talking to you at a busy party. How loud can background sounds get before you can no longer hear what is being said? This speech differentiation is often the first thing many people notice if they have hearing loss.

When it comes to loudness, humans can generally hear sounds between 0 and 140 decibels (dB). Decibels measure sound pressure level relative to the threshold of human hearing, not loudness as we perceive it, so while there are sounds below 0 dB, we can’t usually hear them with our ears (we’ll come back to decibels in the next chapter). We can also hear sounds at more than 140 dB, but the experience is painful, will cause permanent hearing damage, and will likely rupture our eardrums, so it’s not practical for us to listen above that threshold.
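As a brief preview of chapter 2’s treatment, the math behind the decibel scale is a logarithmic ratio of sound pressures, referenced to roughly the quietest pressure a healthy young ear can detect:

$$
L_p = 20 \log_{10}\!\left(\frac{p}{p_0}\right)\ \text{dB SPL}, \qquad p_0 = 20\ \mu\text{Pa}
$$

A pressure ten times the reference gives 20 dB, one hundred times gives 40 dB, and a pressure below the reference gives a negative value, which is why sounds “below 0 dB” exist but are normally inaudible.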

We begin to cause damage to our hearing at about 80 dB if we’re exposed to the sound for many hours, as in some workplaces, and the damage can build up over time. The European Union cutoff safety point for sound in workplaces is 80 dB. At 90 dB it takes less time for our hearing to become damaged, but for short periods of time 90 dB is usually safe. At 115 dB (which is quieter than many rock concerts!), even a very short sound will cause irreversible damage. The cilia in our ears do not regenerate, so once damage has been done, our hearing is permanently damaged. Although we can’t help age-related hearing loss, we can control noise-induced hearing loss.

Table 1.1 Approximation of sound loudness

  180 dB   Rocket launch (measured on the platform)
  160 dB   Gunshot (at close range)
  150 dB   Fireworks
  140 dB   Pain threshold
  130 dB   Plane taking off
  120 dB   Loud concert, yelling at maximum volume, siren
  110 dB   Pneumatic drill, jackhammer
  100 dB   Subway train, power mower
   90 dB   Bass drum, legal limit for industrial noise in many places, motorcycle, loud club
   80 dB   Busy restaurant, EU limit for workplace noise exposure without hearing protection
   70 dB   Hairdryer, alarm clock, traffic
   60 dB   Busy street, talking loudly
   50 dB   Average conversation
   40 dB   Mosquito near you
   30 dB   Quiet room, recording studio background level
   20 dB   Whisper
   10 dB   Breathing quietly
    0 dB   Leaf falling on the ground

1.3.1 Equal Loudness

Different frequencies have different perceived volumes, because human hearing sensitivity varies with frequency. Our sensitivity to low frequencies drops off more quickly, which is why low-frequency sounds are often given a boost by the built-in equalizers in our stereo systems, to make the frequencies appear balanced. To illustrate this varying sensitivity, we can use what is called a Fletcher–Munson curve. These equal-loudness contours plot the sound pressure level (dB SPL), across the frequency spectrum, at which pure sine tones sound equally loud.

To read these diagrams, first look at the bottom of the chart: these are the frequencies. Note that frequencies are not evenly spaced. This chart shows frequencies from 20 Hz to about 15 kHz. On the left are the decibels, going up the chart. As stated above, we hear different frequencies at different perceived volumes. For a sound at 100 Hz, we would need a level of nearly 40 dB to hear it. At 1,000 Hz, where we are more sensitive, we can hear the sound at 0 dB, and in the range we are most sensitive to (about 3,000 Hz), we can actually hear below 0 dB in optimal conditions (it’s unlikely you can hear these frequencies at that level anywhere but in a specially designed studio, and only if you have excellent hearing).

Figure 1.4

Fletcher–Munson curve.

In simple terms, our ears are not very good at hearing the lower frequencies compared to the higher ones. As overall loudness increases (the higher lines on the graph), the curves flatten out at the low end, meaning that at higher sound levels the ear becomes relatively better at hearing lower frequencies. Above about 6,000 Hz the ear becomes less sensitive again. This is one reason we tend to “crank up” music: the bass we gain with the volume increase makes the music feel richer, since we hear those bass frequencies more effectively at higher volumes.
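One way to put numbers on this low-frequency insensitivity is the A-weighting curve used in sound level meters, which approximates an equal-loudness contour at moderate levels (roughly the 40-phon line of the Fletcher–Munson family). The sketch below is my own illustration using the standard IEC 61672 formula; it is not taken from this chapter’s figures.

```python
# A-weighting: a standard approximation of how much less sensitive the ear
# is at each frequency, relative to 1 kHz, at moderate listening levels.
import math

def a_weighting_db(f):
    """A-weighting gain in dB relative to 1 kHz (IEC 61672 formula)."""
    f2 = f * f
    ra = (12194.0 ** 2) * f2 * f2 / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00  # +2.00 dB normalizes to 0 dB at 1 kHz

for f in (20, 100, 500, 1000, 3000, 6000, 15000):
    print(f"{f:>6} Hz: {a_weighting_db(f):+6.1f} dB")
# 100 Hz comes out around -19 dB: a 100 Hz tone needs roughly 19 dB more
# level than a 1 kHz tone to be judged about equally loud.
```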

Exercise 1.19 Remanence

Remanence “is the continuation of a sound that is no longer heard” (Augoyard 2009, 87), like a musical earworm. The sound gives the impression of remaining after it’s no longer there. Keep your notebook handy and track any remanence you hear in a day. Are there any common traits you hear among sounds that lead to remanence for you?

Exercise 1.20 Sudden Silence

Turn the power off where you live. How many sounds were there in the background that you hadn’t noticed before? When you turn the power back on, how many new sounds are added back into your daily environment that your brain had learned to tune out?

Exercise 1.21 A Day without Sound

For this exercise you will need some equipment: at a bare minimum, a set of very good earplugs. Ideally, you will use earplugs and then wear safety earmuffs over them. Remove sound from your life for one day (half a day will suffice). Be sure you are going to be safe by staying with a friend or staying at home. Write a journal entry about your time without sound. Once you’ve spent a few hours without sound, how does returning to sound change the way you hear? What new sounds do you hear that you hadn’t noticed before?

Exercise 1.22 Listening to Auditory Streams

In any soundscape, there are usually different things making sounds. We can think of these like different instruments in an orchestra. Find a busy soundscape, and spend a minute listening to each separate stream, or auditory source, focusing on the individual sounds and then on the whole. How many separate streams can you hear? What is the busiest place you’ve found in your sound walks? What is the least busy?

Exercise 1.23 Listening to Media

Once you have had some practice listening to a variety of natural environments, try comparing that with listening to a film or video game. If you’re alone, it’s easiest to do this exercise with a film, but if you have a friend with you who can play a game while your attention is on the sound, you can do it that way, too. Pick a film that you know well and have watched already at least once. Turn your back to the screen and just listen to the film. What do you hear that you didn’t notice before? What sounds don’t resemble the real world you’ve been listening to, and why?

1.4 Protecting Your Hearing

Probably the greatest damage to your ears is going to come from loud sounds, whether from your iPod, long-term exposure to a noisy workplace (everything from nightclubs and rock concerts to landscaping with power tools), or a sudden loud sound. Fireworks at close range (150 dB), gunshots (140 dB), race car engines (140 dB), and industrial machines are the biggest culprits in urban life, but natural events can also cause great damage: thunder at close range is about 120 dB, and earthquakes have reached at least 250 dB. It’s been estimated that the 1883 eruption of Krakatoa reached about 180 dB and ruptured the eardrums of people forty miles away. The Tunguska meteor explosion in Russia in 1908, the loudest known sound, was about 300 dB. Even the blue whale sings at nearly 200 dB! In other words, there are some things we can’t control that we may be exposed to in our lifetimes, but most of the time we can control our noise exposure by using earplugs (which usually reduce sounds by about 20 to 30 dB), ear protectors (a good pair will reduce sounds by about 30 to 40 dB), and not playing our music too loudly.

The canal from the outer ear to the tympanic membrane contains earwax. The wax, called cerumen, may be wet and waxy or dry, depending on your genetics; it protects the ear from dust, microorganisms, and foreign material. Take great caution when cleaning your ears with a cotton bud or other foreign body. It is better to wipe away any wax that has already exited the ear canal, and not to put anything into your ears to remove the wax yourself. Not only do you run the risk of accidentally perforating your eardrum, but you can end up pushing the wax deeper inside and impacting it in your canal, where it will reduce your hearing ability and must be taken out by a doctor with a special instrument. Don’t use ear candles, tinctures, or medications to clean the wax unless you are under the care of a physician.

Cold weather can also cause damage to your hearing over time, so it’s worth wearing a hat or earmuffs in the winter if you live in a northern climate. This damage is known as surfer’s ear, a form of exostosis. It’s not going to happen overnight, but over time the tympanic bone will thicken and develop new bony growths in an attempt to protect the inner ear from the cold. The thickened bone can actually trap water in your ear and lead to infections. If you spend a lot of time in cold water or outside in the cold, be sure to invest in something to keep your ears warm.

Tinnitus, often described as a ringing in the ear but which can also present as a hiss, a grinding sound, or other auditory phenomena, is often the first sign of hearing damage. Hearing damage can be caused by a number of factors: disease, injury, exposure to noise, stress, and medications can all affect hearing ability. Some common over-the-counter and prescription medications like acetaminophen, narcotics, antidepressants, and anticancer drugs can cause temporary or permanent damage to hearing, called ototoxicity. If you’re serious about a career in sound, or want to protect your ears, it’s important to discuss ototoxicity with your doctor and pharmacist whenever you are taking a new medication. Not all doctors are aware of the ototoxic effect of some medications, and you may not notice until it’s too late, so do your own research. If you are taking medication and develop tinnitus, be sure to see a professional right away to discuss whether the cause could be your medication. In addition to diminishing your ability to work as a sound designer and to listen to music, studies have shown that hearing loss can contribute to dementia and depression, so it’s worth caring about your ears!

1.5 Headphones Guide

Actively listening to sound is also referred to in sound design terms as monitoring, since you are monitoring what is occurring. Unless you have a home studio set up, headphones are the best way to monitor sound. A frequency response chart, plotted on axes like those of the Fletcher–Munson graph, is usually included on the box or leaflet that comes with a set of headphones. A flat response is ideal for monitoring, but as long as you know where the peaks and valleys of your headphones are, you can make adjustments. For example, if you know your headphones have a bump at about 500 Hz, you can keep that in mind when you listen to, mix, or master files (more about that later).
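To make that kind of compensation concrete, here is a minimal sketch of a peaking-EQ cut that offsets a known bump in a monitoring chain. The coefficients follow the widely published Audio EQ Cookbook formulas; the +3 dB bump at 500 Hz, the Q value, and the function name are hypothetical examples of mine, not measurements of any real headphone.

```python
# Peaking-EQ biquad (Audio EQ Cookbook coefficients), applied sample by
# sample. A -3 dB cut at 500 Hz offsets a hypothetical +3 dB headphone bump.
import math

def peaking_eq(samples, fs, f0, gain_db, q=1.0):
    """Apply a peaking EQ filter to a sequence of float samples in [-1, 1]."""
    a = 10 ** (gain_db / 40.0)          # amplitude factor from dB gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    b0, b1, b2 = 1 + alpha * a, -2 * cos_w0, 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * cos_w0, 1 - alpha / a
    b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:                   # direct-form I difference equation
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# Hypothetical usage: flatten a +3 dB bump at 500 Hz before critical listening.
# compensated = peaking_eq(samples, fs=44100, f0=500, gain_db=-3.0)
```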

There are several types of headphones to be aware of:

Reading and Listening Guide

Each chapter introduces some reading and listening suggestions. Take the time to read, listen, and answer the questions as well as add your thoughts about the readings or listening in your journal.

Michel Chion, the “Three Listening Modes” from Audio-Vision (1994)

Film sound theorist Michel Chion describes three ways of listening to sound. The most common is by identifying the cause of the sound, which we saw above in our own listening practice. He calls this causal listening (not to be confused with casual listening!). As we discussed, we usually talk of sounds in terms of the cause or source of the sound: a car motor, a bird, and so on. The second is semantic listening: this is what we do when we listen to people talking. The sounds are a part of a linguistic code, and we listen to the code as much or more than the sound itself. Chion draws on the musique concrète composer Pierre Schaeffer to describe focusing on the traits of the sound in reduced listening. To describe sounds in a reduced way we need a language to talk about sound, which we’ll cover in the next chapter: we can focus on the texture, qualities, or timbre of a sound. We must also listen to a sound many times in order to separate out its acoustic properties from its cause. These three listening modes, however, fail to capture many of the other ways in which we may listen (and Chion acknowledges this). What other ways of listening can you imagine? Think about your listening journal and the ways you’re practicing listening. Is listening to music different from listening to sound? Why or why not?

Pauline Oliveros, Deep Listening: A Composer’s Guide to Sound Practice (2005)

Oliveros’s book offers a different type of listening practice, grounded in her years of studying Zen and meditation. Oliveros reflects on her retreats and workshops and presents several deep listening exercises influenced by meditative practice. Perhaps most useful, in my opinion, is her slow walk, a meditation walk in which one attempts to walk as slowly as possible while listening. She tells us to “walk so silently that the bottoms of your feet become ears.”

R. Murray Schafer, “I Have Never Seen a Sound” (2009)

Acoustic ecologist R. Murray Schafer explains his journey into studying the soundscape (analogous to landscape) of an environment. What sounds have been introduced to the soundscape during your lifetime? What would where you live sound like one hundred years ago? Five hundred years ago? What are the politics and power structures in the sounds of your environment?

There are many collections of soundscapes available, with a Spotify playlist linked on the studyingsound.org website. What do soundscapes tell you about the places you’re listening to? What are the key sounds that differentiate them? Should we record our present soundscapes? Why or why not?