Chapter 12
Audio Production

Key Terms

A/D Conversion

Amplitude

Audio

Automatic Gain Control (AGC)

Balanced Audio

Boom Pole

Clipping

Condenser Microphone

Decibel (dB)

Dynamic Microphone

Dynamic Range

Equalization (EQ)

Frequency

Gain

Handheld Microphone

Headphones

Lavalier Microphone

Microphone

Mini Plug

Monitoring

Over-Under Wrap

Phantom Power

Pitch

Plosives

Polar Pattern

Preamp

Proximity Effect

RCA Connector

Ribbon Microphone

Shotgun Microphone

Sound

Sound Pressure Wave

Three-Band Equalizer

TS/TRS (Tip and Sleeve/Tip, Ring, and Sleeve) Connector

Unbalanced Audio

VU (Volume-Unit) Meter

Windscreen

XLR Connector

The notion of media will be completely blended in one digital stream. Moving pictures and static pictures and text and audio—the mix of any of those will be commonplace. Imagine turning the pages of a magazine: every one of the images is a sound clip, and when you put your cursor over one of them, the audio comes. That’s absolutely going to be happening in 15 years. Full convergence of media and television and computers.

—Jim Clark, computer scientist and founder of Silicon Graphics Inc. and Netscape (1994)

Chapter Highlights

This chapter examines:

Sound and Audio

This chapter examines sound, one of the key ingredients in the multimedia producer's recipe book. Sound is what we hear; it can be featured in a standalone product, such as a song, a podcast, or a radio commercial, or it can be part of a larger product, such as the music or dialog in a feature-length film. Think about how boring video games would be without sound effects to go along with the movement of spacecraft and the explosion of weapons. In television production, the phrase sound on tape (SOT) describes audio captured on location at the time of a video or film recording. For example, during the World Series, the sounds of the ball hitting the bat, the crowd cheering in the background, and the voices of the announcers calling the game are synchronized in real time with camera shots of the runner rounding third base after hitting a home run. While the images and sound of a live event are often acquired together, at the same moment, the workflows for recording sound for use in a preproduced video or film are not necessarily the same, and it is important to understand the differences. This chapter focuses on the basic concepts, tools, and techniques you need to be aware of as you delve into sound acquisition, recording, and editing.

What is Sound?

It’s important to distinguish between the physical characteristics of sound as a phenomenon of nature and the process of audio production, the electronic capture and reproduction of sound. Sound is a natural phenomenon that involves pressure and vibration (see Figure 12.1). Understanding how sound and hearing work will help you capture and produce better quality audio. What we perceive as sound traveling across time and distance is actually the invisible movement of a sound pressure wave. Sound waves are a form of mechanical energy and require a molecular medium for propagation. They can travel through solids, liquids, or gases; air is simply the medium we depend on for everyday hearing. Despite Hollywood’s portrayal of loud space battles, sound cannot travel within the vacuum of space; in reality, a ship exploding in space would make no sound because there are no air molecules to carry the pressure wave. We hear by discerning changes in the pressure and movement of the air particles around us. When a tree falls in the forest, the air molecules in its path are momentarily displaced. They are violently pushed out of the way to make room for the falling tree. This sets off a chain reaction as the energy of the initial force is passed along to neighboring molecules in all directions. Back and forth they go, oscillating until the energy that caused the disturbance dissipates.

Figure 12.1 Striking a tuning fork causes its two prongs to vibrate, which, in turn, produces a musical tone.

Figure 12.2 Concentric ripples are the visible evidence of molecular vibration in water. Uniform waves progress outward in every direction as energy is released at the point of disturbance.

The best way to illustrate the movement of a sound pressure wave is to look at something more visible in nature. When you drop a small rock into a still pond, you see concentric ripples or waves traverse outward from the point of the disruption (see Figure 12.2). Here, water serves as the conduit for energy to flow away from the source. However, the actual water molecules travel only a tiny distance as they bounce back and forth transmitting the energy signature of the wave. As the wave travels further away from the source, the oscillations of the molecules begin to slow down until the pond is once again at rest.

Tech Talk

Characteristics of a Sound Wave

While sound is invisible to the eye, two characteristics of a sound wave can be measured and visualized by digital recording devices and sound processing hardware and software. Amplitude and frequency are the observable dimensions of a sound pressure wave that we are most interested in (see Figure 12.3). Amplitude is a sound pressure wave’s intensity, or dynamic pressure, and frequency is the wave’s rate of vibration or oscillation. In hearing, we perceive amplitude as the relative loudness of a sound and frequency as its pitch.

Figure 12.3 In Avid Pro Tools, a VU (volume-unit) meter (left) provides a visual reference of the amplitude of a sound source in decibels and a slider for increasing or decreasing its level. Likewise, the seven-band equalizer plug-in (right) visually displays the frequency range and numerous controls for adjusting each individual band.

Amplitude

The first thing we tend to notice about sound is how loud it is. Loud sounds capture our attention almost immediately, while soft sounds strain our senses or elude us entirely. Because sound waves are invisible to the human eye, we must use pictures to illustrate their physical qualities and characteristics (see Figure 12.4). A sound wave’s height (amplitude) indicates the intensity or magnitude of the pressure wave. Amplitude is measured as the distance from the wave’s resting position (the center line) to its crest; the full distance from crest to trough is the peak-to-peak amplitude. The louder the sound, the greater the amplitude, and the taller its waveform. The amplitude of a sound is greatest near the source and diminishes over distance and time.

Figure 12.4 Sine waves are often used to visualize the repetitive oscillations of sound vibrations. A) Amplitude is represented by the height of the wave. B) Wavelength is the distance traveled during one complete vibration cycle. C) Frequency is the number of complete wave cycles that occur over a set period of time (usually measured in one-second intervals).

Amplitude is measured in decibel units. The decibel (dB) is a logarithmic unit of measurement used to quantify the sound pressure level (SPL) or magnitude of a sound wave in an acoustical space. Humans are capable of hearing a wide range of sounds, from 0 dB to 120 dB. A value of 0 dB represents the least audible sounds we can hear (just above silence). With each increase of 20 dB on the decibel scale, the amplitude of a sound increases 10 times. Thus, a 20 dB sound is 10 times louder than the faintest sound. A 40 dB sound source is 100 times louder than the faintest sound and 10 times louder than sound at 20 dB. When you reach 120 dB, the SPL is 1,000,000 times greater than the level of sound at 0 dB. While adapting to a logarithmic scale can be confusing at first, using a scale with a relatively small number of decibel units is easier to deal with than one with a million or more increments of variation.
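The decibel arithmetic described above can be verified with a short Python sketch. The function names here are ours, but the formulas are the standard sound-pressure-level conversions.

```python
import math

def db_to_pressure_ratio(db):
    """Convert a decibel value to a sound-pressure ratio (relative to 0 dB)."""
    return 10 ** (db / 20)

def pressure_ratio_to_db(ratio):
    """Convert a sound-pressure ratio back to decibels."""
    return 20 * math.log10(ratio)

# A 20 dB sound has 10x the pressure of the faintest audible sound,
# a 40 dB sound 100x, and a 120 dB sound a million times more.
print(db_to_pressure_ratio(20))   # 10.0
print(db_to_pressure_ratio(40))   # 100.0
print(db_to_pressure_ratio(120))  # 1000000.0
```

Note how quickly the linear ratios explode while the dB values stay small; that compactness is exactly why the logarithmic scale is used.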

The human threshold for pain begins around 140 dB, while permanent hearing loss occurs at 150 dB. Hearing loss most commonly results from repeated exposure to loud sounds over time. With the growing popularity of digital music and MP3 players, concern has emerged over the potentially damaging effects of listening repeatedly to loud music through headphones or earbuds. The close proximity of earbuds to the sensitive organs of the ear makes this an even greater concern and has prompted the makers of personal listening devices to offer volume-limit controls on their units. A volume limiter option allows users to set a maximum listening level based on decibels or the relative volume units of the device. While many factors can contribute to hearing loss or damage, setting a volume limit in place is advisable. By the way, that ringing you get in your ears after a concert—it’s called tinnitus and can become permanent.

Frequency

As sound waves pass through matter, the vibrating molecules experience three phases of movement (see Figure 12.5). As molecules move in an inward direction, they are pushed closer together, leading to an increase in molecular density and sound pressure. This is the compression phase and is represented by the portion of the waveform above the horizontal axis (time). The highest point of the waveform is called the crest and signifies the moment of greatest sound pressure. Once maximum compression has been reached, elasticity kicks in, causing the molecules to return to their original position. For a fraction of a second, the molecules are at rest as they change direction and begin moving outward. During the rarefaction phase, molecules are pulled apart, resulting in a decrease in molecular density and sound pressure. Rarefaction is denoted as the portion of the waveform below the horizontal axis. The lowest point on the waveform is called the trough and indicates the moment of lowest sound pressure.

Figure 12.5 When the air molecules around us are energized by a sound pressure wave, they begin to oscillate, bouncing rapidly back and forth in unison. As molecules travel inward they are squeezed tightly together (compression); as they spring back in the opposite direction, they pull apart and spread out (rarefaction). At one brief moment in each wave cycle, when molecules change direction, they are briefly at rest.

Table 12.1 The Frequency Chart for a Six-String Acoustic Guitar

String Note Frequency
6th E 82 Hz
5th A 110 Hz
4th D 147 Hz
3rd G 196 Hz
2nd B 247 Hz
1st E 330 Hz

The progression of a sound wave through one phase of rest, compression, and rarefaction is called a cycle, and a sound’s frequency is based on its number of cycles per second. Frequency refers to a sound’s relative low or high pitch. Frequency is measured in hertz (Hz), cycles per second. Every vibration has a unique frequency signature. A common frequency used in audio production for the purposes of calibration is the 1 kHz tone. By the way, 1 kHz is simply 1,000 Hz. Kilohertz (kHz) units can be used as an abbreviated way of referring to particularly high frequencies.
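Because frequency is simply cycles per second, a tone's period and physical wavelength follow directly from it. A brief Python sketch, assuming the commonly cited speed of sound in air of about 343 m/s at room temperature:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (approximate)

def period(frequency_hz):
    """Time for one complete cycle, in seconds."""
    return 1.0 / frequency_hz

def wavelength(frequency_hz):
    """Physical length of one cycle in air, in meters."""
    return SPEED_OF_SOUND / frequency_hz

# The 1 kHz calibration tone: one millisecond per cycle,
# with a wavelength of about a third of a meter.
print(period(1000))      # 0.001
print(wavelength(1000))  # 0.343

# The low E string of a guitar (82 Hz, per Table 12.1) produces
# a wavelength of over four meters:
print(round(wavelength(82), 2))  # 4.18
```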

When individually plucked, the strings of an acoustic guitar in standard tuning produce the frequencies, or pitches, shown in Table 12.1. Each string can be played separately, producing a single note or pitch. However, most of the time a musician will strike multiple strings at a time, thus creating diverse sounds and harmonics with complex frequency signatures. Most sounds in nature, including human speech, are likewise composed of multiple frequencies interacting together to produce a holistic aural impression.

People with normal hearing are capable of perceiving sound frequencies from 20 to 20,000 Hz. This range is often divided into three subgroups, or bands, called bass, midrange, and treble. The bass frequencies include lower pitch sounds in the range of 20–320 Hz. Midrange frequencies include medium pitch sounds falling between 320 and 5,120 Hz. Treble frequencies, high pitch sounds from 5,120 to 20,000 Hz, represent the largest segment of the human audio spectrum. While the treble range is the broadest, most of the frequencies required for understanding human speech fall in the midrange.

Microphones

A microphone is a recording instrument used to convert sound waves into an electrical equivalent that can be stored, transmitted, and played back through an audio sound system. Some microphones are designed specifically for use in a studio environment, while others are optimized for field use. Likewise, some microphones are better for voice work while others are designed primarily for instrument recording. There is often more than one suitable choice, so it’s important to understand the fundamental differences in microphone design in order to choose the most appropriate tool for the job.

Great Ideas

Equalization

Sound systems often allow you to adjust the bass, midrange, and treble output of a program source or channel. This feature is known as a three-band equalizer (or EQ) and provides the user with separate controls for raising or lowering the gain of each frequency region or band (see Figure 12.6, top). For example, “rolling off” or “dialing down” the bass frequencies can add brightness and clarity to the sound signal. This can be helpful when listening to news or talk-radio channels, but for music, people often want to feel the deep and penetrating lower frequencies. In such cases, adding bass and rolling off the treble may be more to one’s liking.

A three-band equalizer is simple and inexpensive, but it provides only global controls for adjusting the tonal balance of the recording or transmission. Professional recording studios and production facilities typically rely on more robust and sophisticated tools for adjusting EQ. The better systems break the frequency spectrum into many more bands (see Figure 12.6, bottom), allowing for precise isolation and manipulation of individual frequencies. You’ll find a virtual version of this tool in most audio and video editing software.
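The logic of a three-band equalizer can be sketched in a few lines of Python: each frequency falls into one band, and that band's dB setting becomes a linear gain multiplier. The hard band edges below come from the frequency ranges given earlier in this chapter; a real equalizer uses smooth filter curves rather than abrupt cutoffs, so treat this as a conceptual sketch only.

```python
def band_gain(frequency_hz, bass_db=0.0, mid_db=0.0, treble_db=0.0):
    """Return the linear gain factor a simplified three-band EQ would
    apply at a given frequency. Band edges per this chapter:
    bass 20-320 Hz, midrange 320-5,120 Hz, treble above 5,120 Hz."""
    if frequency_hz < 320:
        db = bass_db
    elif frequency_hz < 5120:
        db = mid_db
    else:
        db = treble_db
    return 10 ** (db / 20)  # dB-to-linear amplitude conversion

# "Rolling off" the bass by 6 dB roughly halves its amplitude:
print(round(band_gain(100, bass_db=-6), 3))  # 0.501
# ...while midrange speech frequencies pass through unchanged:
print(band_gain(1000))                       # 1.0
```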

Figure 12.6 Top: A three-band equalizer, like the one pictured here, is a standard feature on most audio mixing consoles and on many electronic sound devices (car stereos, amplifiers, etc.). Bottom: A screenshot of the graphic equalizer in Final Cut Pro X. With this interface, users can select between 10 and 31 bands of equalization.

Classifying Microphones by Transducer Type

Microphones use a transducer element to capture sounds. The transducer contains a moving diaphragm or ribbon that vibrates when exposed to a sound and encodes a sound wave’s strength and frequency into electricity by modulating the current. Based on transduction method, dynamic microphones and condenser microphones are the two most common types.

Dynamic Microphones

A dynamic microphone uses acoustical energy and mechanical vibration as the means for producing the electromagnetic signal required for analog recording. Dynamic microphones do not require a power source. They are durable, relatively inexpensive, and moisture and shock resistant. Moving-coil and ribbon microphones are two of the most common types of dynamic microphones. Both rely on electromagnetic induction, which uses magnets to produce an electric current (see Figure 12.8, A).

Figure 12.8 A) A dynamic microphone is less sensitive to sound because the transducer is self-powered by the sound of the subject’s voice. B) A condenser microphone is more sensitive to sound because the transducer is powered by a battery or phantom power source.

Moving-Coil Microphone

In a moving-coil microphone, a diaphragm is attached to a coil (a metal core wrapped with copper wire) suspended in a magnetic field between the north and south poles of a fixed magnet. The diaphragm is a thin, circular membrane, typically made of paper, plastic, or metal. As the diaphragm vibrates, the coil oscillates in the magnetic field, producing a tiny current that’s transmitted via copper wire to the microphone cable. The electromagnetic signal modulates in unison with the amplitude and frequency of the sound pressure wave, producing a copy of the original waveform.

Ribbon Microphone

A ribbon microphone uses a thin ribbon of corrugated metal, usually aluminum, as the transduction element. The ribbon is suspended in a magnetic field between the opposite poles of a fixed magnet and generates an electromagnetic current when it pulsates in the magnetic field. Ribbon microphones differ from moving-coil designs in that they respond to sound bidirectionally, from both the front and the back of the element. While ribbon microphones are relatively expensive, broadcasting and recording professionals value them for their superior performance and natural sound reproduction. The metal elements in early ribbon microphones were quite delicate, and ribbon microphones had a reputation for being easy to damage. Newer ribbon microphones are more robust, though as with their predecessors, you need to be careful about picking up wind noise when using them outdoors (see Figure 12.9).

Figure 12.9 Vintage radio microphones like this one often have a ribbon transducer.

Condenser Microphones

A condenser microphone uses a capacitor to record variations in amplitude and frequency. The capacitor has two parts, the back plate (containing the electric charge) and the diaphragm. As the diaphragm vibrates, the distance between it and the back plate changes, thus modulating the intensity of the voltage signal. Condenser microphones are much more sensitive to sound than dynamic microphones and as a result can be positioned farther from the source of the sound. Condenser microphones are separated into two groups based on diaphragm size. Large diaphragm condensers have a bigger form factor and are more often used in a studio recording environment, while small diaphragm condensers have a slender body profile and may be found in both field and studio environments (see Figure 12.8, B).

Condenser elements require a power source to supply the electric charge to the back plate. For this reason, condenser mics are often equipped with an attached battery pack or built-in power module. A single AA battery is usually all that’s required. You can also power a condenser microphone with phantom power, an electric current that’s transmitted to the microphone from an attached mixer or recording device (see Figure 12.10). Phantom power supplies a 48-volt (+48V) electric charge to the capacitor through the signal wires of the XLR cable connecting the microphone. Professional audio mixers/recorders and video cameras usually provide phantom power output. However, you may need to flip a switch or change a menu setting to activate it.

An electret condenser microphone is slightly different from a true condenser microphone in that the back plate is designed by the manufacturer to stay permanently charged, eliminating the need for a power source. Most professional condenser microphones in use today feature an electret condenser. And while the condenser may not need external power, most of these mics also contain an integrated preamp that requires a tiny amount of sustained voltage. For this reason, a battery or phantom power is still needed for the microphone to work.

Figure 12.10 The two microphones on the left are condensers. However, the one pictured at the top can be powered with either a battery or phantom power. The microphone on the bottom does not have a battery compartment and must be powered by the camera or recorder it is connected to. Most professional recording devices can provide phantom power, but it must be turned on to work. The phantom power switch may be located on the outside of the unit or, as shown on the right, within the menu system of the device.

Classifying Microphones by Polar Pattern

Microphones are also classified according to their polar pattern (or pickup pattern). Polar pattern refers to how well a microphone picks up sound within 360 degrees of its central axis. Polar patterns are three-dimensional, so in effect, the sensitivity field includes the area above and below the microphone as well as to the right, left, front, and back. The narrower the pickup pattern, the more directional the microphone will be, and the more effective it will be in sensing sounds along the central axis. In short, the polar pattern of a microphone affects how you use it and under which circumstances the microphone will function at its best (see Figure 12.11).

Omnidirectional

The pickup pattern of an omnidirectional microphone is a sphere around the microphone, although not an entirely perfect one. In theory, these microphones respond equally to sound in all directions. In practice, however, the microphone body, particularly on handheld microphones, can block or obscure the path of a sound wave. This can shield the microphone from some frequencies. The smaller the microphone’s body, the less of a problem this is. Because they pick up sound from all directions, omnidirectional microphones are best used in situations where there is little to no ambient sound. You may also hear these microphones called nondirectional.

Figure 12.11 Six of the most common polar patterns: A) omnidirectional; B) bidirectional; C) cardioid; D) supercardioid; E) hypercardioid; F) ultracardioid (or shotgun).

Bidirectional

Bidirectional microphones pick up sound equally from the front and rear of the element. Most ribbon microphones are bidirectional. As a broadcast performance microphone, these are ideal for interviews where the host and guest are seated on opposite sides of a table or in situations where two people are required to share a single microphone.

Cardioid (Unidirectional)

As the name implies, a unidirectional microphone picks up sound from only one direction. This makes it well suited for working in situations with lots of ambient (or background) sound. There are a number of variants of this type of microphone. Cardioid microphones have a unidirectional polar pattern with a heart-like shape (hence their name). This pickup pattern favors sounds coming from the front and sides up to 130 degrees. Cardioid microphones boast a relatively narrow pickup field and do a good job of rejecting ambient sound from the rear of the microphone. Cardioid microphones are ideal for recording single subjects and vocalists. Other members of the unidirectional family include supercardioid, hypercardioid, and ultracardioid (or shotgun) microphones. Each progression comes with a narrower pickup field and an expanded area of sound rejection from the rear of the microphone. The narrower the pickup pattern, the more deliberate the operator needs to be in aiming the microphone directly at the sound source during recording.
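The idealized patterns in Figure 12.11 can be modeled mathematically. First-order microphone polar patterns follow the standard form r(θ) = A + B·cos θ with A + B = 1; the coefficients below are commonly cited textbook values, not manufacturer specifications, so this is a sketch of the concept rather than a measurement.

```python
import math

# (A, B) coefficients for r(theta) = A + B*cos(theta), where A + B = 1.
PATTERNS = {
    "omnidirectional": (1.0, 0.0),
    "bidirectional":   (0.0, 1.0),
    "cardioid":        (0.5, 0.5),
    "supercardioid":   (0.37, 0.63),
    "hypercardioid":   (0.25, 0.75),
}

def sensitivity(pattern, angle_deg):
    """Relative sensitivity (0 to 1) at an angle off the central axis."""
    a, b = PATTERNS[pattern]
    return abs(a + b * math.cos(math.radians(angle_deg)))

# A cardioid hears full level on-axis, about half level from the sides,
# and rejects sound arriving from directly behind:
print(sensitivity("cardioid", 0))    # 1.0
print(sensitivity("cardioid", 180))  # 0.0
```

Plotting r(θ) for each entry reproduces the familiar shapes: a circle for omnidirectional, a figure eight for bidirectional, and the heart-like cardioid curve that gives the family its name.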

Classifying Microphones by Form Factor

Microphones come in many different shapes and sizes, but in terms of practical application, there are four microphone styles you will run into more than all the others: handheld, lavalier, shotgun, and boundary microphones (see Figure 12.12 and Figure 12.21). Once you are familiar with these, you will be ready to handle the vast majority of recording challenges with ease and confidence.

Figure 12.12 Microphones are classified by form factor depending on the general purpose they are designed for.

Figure 12.13 Television news reporters often use handheld microphones for conducting field interviews. Because handhelds are usually dynamic, they should be positioned no further than a few inches from the front of the subject’s mouth.

Handheld Microphone

Handheld microphones are designed for the talent or performer to hold during a recording session. Dynamic handheld microphones are ideal for rugged use and heavy handling, but they need to be held close to the subject’s or talent’s mouth in order to generate enough sound pressure for a good recording (see Figure 12.13). The rugged construction of dynamic handheld microphones minimizes noise caused by sudden movement, rough handling, or when passing the microphone along from person to person. Reporters rely on this type of microphone most often when recording a stand-up interview or conducting field interviews. If you are interviewing someone using a directional handheld microphone, remember that you need to aim it toward the other person when he or she is talking. Think about it as sharing an ice cream cone. If the other person is going to eat, you’ve got to put the cone in front of him or her.

Some handhelds are equipped with an electret condenser element. While this increases the microphone’s sensitivity, it raises the risk of unwanted noise from outside forces. To reduce handling noise, condenser handheld microphones usually come with an internal shock mount. The shock mount suspends the element in midair with elastic bands, insulating it against sudden jarring movements.

Most handheld microphones are unidirectional. Handhelds work best when the microphone is positioned no further than six inches away from the mouth and slightly off-axis. Getting too close to the microphone grill (the shield) can produce annoying artifacts such as the infamous “popping Ps” and other unpleasant plosives—the sound caused by releasing blocked air in speech, particularly from the pronunciation of the hard consonants b, d, g, k, p, and t. Remember, the microphone element is highly sensitive to sound pressure vibrations, and when a plosive ignites less than an inch away, vocal distortion and clipping are likely to occur. Using a windscreen or pop filter can reduce or eliminate vocal artifacts and wind noise (see Figure 12.14 and Figure 12.19).

Figure 12.14 When using any microphone outdoors, it’s a good idea to attach a windscreen.

Up until now, we have assumed that a handheld microphone must be held at all times. While this is a common approach in field productions, concerts, and the like, handhelds can also be attached to a floor stand, tabletop stand, podium, or boom arm using a microphone clip, gooseneck, or other adapter (see Figure 12.15). Securing a handheld microphone to a stand or mount ensures hands-free operation and eliminates the risk of handling noise, unless of course the stand topples over or the talent swats the microphone. It happens!

The downside to using stands, or handheld microphones at all, lies in the fact that most inexperienced users (nonprofessionals) instinctively shy away from close contact with a microphone. Many people feel nervous around microphones and assume they will work just fine positioned 10 feet away. So if the keynote speaker for the annual fundraising banquet is an engineer, doctor, lawyer, or accountant, and not a professional orator, you may need to remind him or her about the importance of moving in close to the microphone in order to be heard and to obtain a usable recording. Even then, the speaker may ignore your instructions or simply forget them as the common fear of public speaking sets in. In either event, a better option awaits you in our discussion of the next form factor.

Figure 12.15 A handheld microphone is supported by a microphone stand for hands-free operation.

Lavalier Microphones

Unlike a handheld microphone, a lavalier mic is designed for hands-free operation and is a popular choice for recording interviews. Also known as a lapel or lav microphone, this low-profile workhorse is designed with an electret condenser transducer element. Although they come in a variety of polar patterns, the most common ones have an omnidirectional, cardioid, or supercardioid element. Professional lav microphones usually require a battery pack or phantom power source.

Lavalier microphones are highly sensitive to sound and touch and should never be held by hand when recording. They are also not designed to be spoken into directly (on axis) at close range. Instead, lavs are designed to be attached to the subject’s clothing three to six inches below the chin. Using a specially designed alligator clip, a lav can easily be attached to the front of a jacket, shirt, tie, or lapel. Because they are physically attached in one spot, the distance from the microphone to the source remains constant. Whether the subject is running, walking, sitting, or standing, the position of a lavalier microphone, relative to the source, will not change. However, even when a lavalier is properly attached, you have to be careful. The talent’s physical actions (moving hands, feet, clothing, etc.) can cause unwanted noise if the microphone is suddenly bumped or jostled. Lavs are particularly popular with platform speakers who want the freedom to walk and talk at the same time without having to hold a microphone or stand behind a podium. They are also a good choice for recording interviews, especially when used indoors in a quiet setting (like a TV studio or office) and where the talent’s physical movements won’t interfere with a good recording.

Tech Talk

The Proximity Effect The proximity effect is an acoustic phenomenon that boosts the bass frequencies of your voice as you move progressively closer to the microphone diaphragm. Next time you’re in a recording studio, test this out by putting on a pair of headphones and listening to how your voice resonates more deeply as you narrow the gap between your mouth and the microphone. While the proximity effect is common with most unidirectional dynamic microphones, especially those with a single, large diaphragm, you’ll notice it most with ribbon microphones, as both sides of the diaphragm are exposed to sound pressure. While you need to be close to a dynamic microphone when you use it, avoid leaning in too closely. Working at a quarter inch or less from the microphone can produce unnatural low-end distortion artifacts.

Whether the proximity effect is good or bad is simply a matter of taste and perspective. Radio disc jockeys, public-address announcers, and voice recording artists often get paid for having warm, larger-than-life voices. A professional can use the proximity effect to his or her advantage to enhance the overall warmth and presence of a performance or to increase the power and delivery of certain words and phrases (see Figure 12.16). Over time, vocal artists develop an instinctive ability to control the proximity effect and to gauge when and how far to lean in on the microphone to mediate its intensity.

Too much bass, however, can muddy the audio. Overemphasized bass frequencies can cause the midrange and highs to be compressed, and the overall clarity and breadth of a vocal recording may suffer. Most professional microphones vulnerable to the proximity effect have a bass or low-frequency roll-off feature that gradually reduces the bass response as sound pressure increases. Mixers and recorders often include a similar control for attenuating (reducing) bass sounds or canceling out excessive low-end frequencies. The bass roll-off is popular enough with recording engineers that some microphones are designed with it permanently engaged, exhibiting little to no proximity-effect distortion.

Figure 12.16 A radio announcer is comfortable working closely to the microphone when speaking to her audience over the air.

Proper microphone placement is essential for a quality recording and optimizing the rejection of unwanted ambient sound. The microphone element should face upward toward the chin and be free of any obstructions from clothing and jewelry. Attaching a windscreen to the microphone capsule can help alleviate wind or breathing noise. Lavs should be positioned so the subject’s voice projects directly over the top of the microphone. Most of the time, this means affixing it directly in the center of the upper body or chest. However, if the subject is turned off-axis to the camera, then the microphone should be positioned slightly to the left or right of center so that it remains directly under the mouth or chin of the subject.

To maintain a professional appearance, lavalier microphones should be properly dressed. Dressing a microphone involves making it as attractive and obscure as possible (see Figure 12.17). At all costs, be sure to avoid the rookie mistake of allowing the microphone cable to dangle down the front of the subject’s shirt or blouse. This is a telltale sign of an amateur production. With just a little bit of effort, and discretion, the microphone cable can be rerouted out of sight beneath clothing or hidden behind a tie, lapel, jacket, or collar. Discretion is critical because working with people in a professional production setting requires sensitivity to cultural norms and rules of etiquette. While you need to keep these issues in mind whenever you are working, you need to be particularly cognizant of these issues when working with people from other cultures or with different genders.

Figure 12.17 In the top two photos, the lavalier is improperly affixed to the subject. Take the time to properly position and dress a lavalier microphone, hiding the cable from view as best you can. Doing so will improve the quality of the recording and the appearance of the subject during an on-camera interview.

In order to effectively hide the microphone cable, it may be necessary to conceal it under a shirt or blouse. Such a request needs to be made in a professional manner, with sensitivity to personal space and gender differences. Giving subjects clear instructions about microphone placement and offering them the option of moving to an offset location (such as a dressing room or restroom) is often appreciated. However, do not assume the subject will know which way is up and which way is down. Do not expect the subject to understand the best technique for dressing the cable or assume he or she will have performed the task completely as instructed. In the end, this is your job, and before recording begins, you should make every effort to ensure the microphone has been properly attached, positioned, and dressed. Leave nothing to chance.

Shotgun Microphones

Shotgun microphones are among the most directional microphones. They feature a condenser element with an extremely narrow pickup pattern—in supercardioid, hypercardioid, and ultracardioid varieties. They are so named because of their long and slender form factor, which resembles the general shape of a shotgun barrel. Shotgun microphones are housed in a cylindrical capsule with a small diaphragm. While they are relatively expensive, film and video producers like these microphones because of their versatility and usefulness in complicated miking situations, particularly those in which more than one person in a scene is speaking. The main advantage of using this type of microphone is that it can remain hidden out of sight, beyond the camera’s field of view. Some shotgun microphones have interchangeable capsules that allow you to change the characteristics of the microphone on the fly.

Figure 12.18 A boom pole is used to position a shotgun microphone within a few feet of the subjects in this scene. The videographer works with the boom pole operator to ensure that the microphone does not dip down into the visible portion of the frame.

Because of their narrow polar pattern, shotgun microphones need to be aimed in much the same way that a rifle has to be pointed toward its intended target. In a film-style recording setup, a shotgun is often mounted to a boom pole (or fish pole), a device that allows the audio operator to extend the microphone 6 to 12 feet into the scene where the actors are located (see Figure 12.18). It can also be attached to a small pistol grip for handheld control (see Figure 12.19) or mounted directly on top of a video camera. Using a boom pole or pistol grip, the audio operator is able to keep the axis of the barrel continually aimed at the subject as dialog moves from one person to another or as sound traverses along a linear path. The boom pole operator monitors the recording with headphones to assist with maintaining the “sweet spot” where the microphone is in its optimal position. Remember to make sure the microphone stays out of the shot when you use a boom pole (see Figure 12.20).

Figure 12.19 A pistol grip is used in lieu of a boom pole for acquiring the sound of footsteps on leaves. When using a shotgun microphone to acquire sound effects, the audio engineer usually wants to get the microphone as close to the source of the sound as possible.

If you are working alone, you may not have a second set of hands to operate a boom pole or pistol grip. In such cases, attaching the microphone to the camera will allow for hands-free operation. While mounting a shotgun microphone directly to a camera is sometimes less than ideal, this technique can produce good results as long as the distance between the sound source and the camera remains constant.

Whether attached to a camera, boom pole, or pistol grip, shotguns need to be secured in a sturdy shock mount in order to reduce mechanical transmission noise. Since they use a condenser element, shotguns require a power source, either phantom power or a battery—typically found in a compartment at the end of the microphone capsule. While broadcast and film producers have been using shotgun-style microphones for years, they are now quite popular with corporate and low-budget productions. If you can only afford to have one microphone in your production arsenal, this is the one to choose.

Figure 12.20 Different techniques can be used for positioning a microphone with a boom pole—either above the subjects or below them. Placing it too closely to the talent (bottom photo) can interfere with a performance or result in the microphone showing up on camera, thus ruining the take.

Boundary Microphones

Boundary microphones, also known as pressure zone microphones (PZMs), are condenser microphones intended to be placed on a flat surface, usually a table, ceiling, or wall. The enclosed PZM mic capsule is affixed above a metal baseplate and detects changes in the sound pressure waves within the air gap—an area called the pressure zone (see Figure 12.21). The design minimizes the effects of sound reflecting off other sources such as a wall by limiting the sound capture area. PZMs are commonly used to record meetings and conferences and are good for recording multiple people at the same time. As you can see, they don’t look much like a typical microphone. You can use this to your advantage—PZMs are less likely to make people nervous.

Figure 12.21 The Audio-Technica ES961 is a cardioid condenser boundary microphone designed for use on a flat surface such as a conference table or floor.

Source: http://audio-technica.com.

Built-in or External Microphones?

The microphones we’ve talked about so far have been external microphones that you attach to a recording device, but many devices have built-in microphones, including laptop computers, cameras, cell phones, and voice recorders. Most often, built-in microphones are low-end condenser microphones. After all, if you are using a microphone for a phone call or video chat, it just needs to be good enough to enable you to be clearly understood. As such, many built-in microphones are designed primarily for transmitting conversations, not recording them. There are, of course, exceptions. As a general rule, avoid using a built-in microphone for voice acquisition without testing it first.

Regardless of the quality of a built-in microphone, however, it has some inherent limitations. The most obvious one is that it is built in. This means that in most cases the operator has little control over the positioning of the microphone, particularly with video cameras. This is a real limitation, since the closer a microphone is to the source, the better it performs. Although professional external microphones are relatively more expensive, they are made with better transducers and components, and as such, there’s simply no substitute for using one given a choice (see Figure 12.22). In fact, as a general rule in professional situations, you should avoid the temptation of ever using a built-in microphone to acquire primary audio content (actor dialog, voiceovers, interviews, etc.).

Figure 12.22 Left: The producer uses the built-in microphone on his digital audio recorder. Right: Here he uses the same recorder but attaches an external handheld microphone. All things being equal, you will achieve better results using a professional external microphone to conduct interviews.

Why Use an External Microphone?

Here are four reasons for using a professional-grade external microphone instead of one that is built into the recording device:

  • 1. You can select the best type of microphone from a vast assortment of professional microphones to fit the recording setting, subject, and application.
  • 2. You will have greater control over the placement of the microphone and its proximity to the subject, irrespective of where the sound or video recording device is located.
  • 3. All things being equal, a professional external microphone will have better sound recording specs than the built-in microphone attached to your device.
  • 4. Professional external microphones use a balanced XLR connector, which is better for reducing RF interference and other types of transmission noise.

Wireless Microphones

Although most of the time you are working with audio you’ll be running a wire directly from the microphone to your recorder or mixer, you’ll occasionally need to use a wireless microphone, particularly if you have talent who likes to walk around the room a lot. In some cases, you can plug your regular microphone into a wireless transmitter—essentially a tiny radio station that will send the audio to a radio receiver—that you’ll plug into your mixer or recorder (see Figure 12.23). You may lose some of your frequency range when you switch to wireless. Like all battery-operated equipment, make sure you keep fresh batteries on hand for the transmitter and the receiver. If you use more than one wireless microphone at a time, you’ll have to make sure they aren’t using the same radio frequency. Just as two radio stations can’t broadcast on 88.1 FM at the same time, you can’t have two wireless microphones operating on the same radio frequency. The better the system, the more frequency options you’ll have. Remember, you may not be the only one using a wireless microphone at some locations.

Figure 12.23 A wireless microphone like this one is designed for remote field production.

The microphone and transmitter are attached to the subject. The receiver is attached to the camera.

Source: http://www.audio-technica.com.

Tech Talk

Synchronizing Video and Audio in Post Most DSLR cameras lack an XLR input for connecting a professional external microphone to the camera. Also absent from most of them is a built-in preamp for setting levels, a VU meter for monitoring, and, in some cases, a headphone jack for listening. For this reason, many low-budget filmmakers have adopted the old-school technique of recording video and audio on two separate devices. Portable and inexpensive field recorders like the Zoom H4 have grown immensely popular in recent years for just this purpose. The H4 is designed with dual XLR inputs, phantom power, high-definition sampling modes for recording uncompressed audio, and a host of other settings and features that professional videographers and audio engineers are used to having.

In order to align asynchronous video and audio clips in post, the camera needs to record a reference audio track for each shot. Fortunately, DSLRs capable of shooting video include a built-in microphone that can be used for this purpose. At the beginning of each shot, a film slate or clapboard is positioned in front of the camera. Information can be written on the slate’s white board as a visual reference to production details such as the shooting date and the scene, shot, and take numbers (see Figure 13.10). With the camera rolling, an assistant opens and closes the clap sticks to create a loud “slap” that will be used later as an audible cue for synchronizing the sound captured on the portable recorder with that picked up by the DSLR camera. Professional NLE software increasingly provides built-in tools for automated audio synching of clips in the timeline. A specialty software program such as PluralEyes, made by RedGiant, analyzes the audio waveforms of related clips and synchronizes them in seconds. Some manual refinement may be necessary, but in practice, automated synching is remarkably accurate when working with clips with good audio levels. Specialty software like PluralEyes is also handy for batch processing an entire collection of clips at the end of shoot prior to reviewing or editing them in post.
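The waveform-matching idea behind automated synching can be sketched in a few lines. This is a toy illustration only—the function name and sample values are invented, and real tools like PluralEyes analyze actual audio waveforms with far more robust methods—but it shows the core trick: slide the camera’s short reference track along the field recorder’s track and pick the offset where the two correlate best.

```python
# Toy sketch of audio sync by correlation (illustrative values only).
# The offset with the highest correlation score marks where the
# camera's reference audio lines up within the recorder's track.

def best_offset(reference, recording):
    best, best_score = 0, float("-inf")
    for offset in range(len(recording) - len(reference) + 1):
        window = recording[offset:offset + len(reference)]
        # Correlation score: sum of elementwise products
        score = sum(a * b for a, b in zip(reference, window))
        if score > best_score:
            best, best_score = offset, score
    return best

clap = [0.1, 0.9, -0.8, 0.2]                  # the slate "slap" as heard by the camera
track = [0.0, 0.0, 0.1, 0.9, -0.8, 0.2, 0.0]  # the same slap in the field recording

print(best_offset(clap, track))  # 2 — shift the recorder track by 2 samples
```

Once the offset is known, the editor (or the software) shifts the external audio by that amount so it lines up with the camera’s reference track.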

There are two basic transmitter types for professional equipment, UHF and VHF. Of the two options, UHF offers the most available frequencies. This may be helpful if you are using more than five wireless microphones at a time or are going to be in locations where others are using wireless microphones. On the other hand, UHF microphones are typically more expensive than VHF microphones. Be aware that some of the radio frequencies overlap with television station frequencies. Always test your equipment before you begin production. A third option for wireless is an infrared transmitter, which uses a beam of infrared light to send the signal. It’s usually best to avoid infrared systems, as they require line-of-site transmission. If something gets between the transmitter and the receiver, it won’t work, much like the remote control for your television. Infrared does have the advantage of being more secure—you can’t pick it up from outside the room—and infrared transmitters typically don’t interfere with each other.

Audio Connectors

Microphones may be connected to audio recording devices by a variety of cables and connectors. Professional microphones use balanced connectors, while consumer devices use unbalanced connectors.

Balanced Audio Connectors

Professional microphones and audio devices are connected using balanced cables and connectors (see Figure 12.25). A balanced microphone cable has three wires encased in an outer rubber sheath. Audio signals flow in a loop, from the microphone to the recorder, and then back again. In a balanced system, a pair of twisted wires is used for conducting the signal path. The current travels down the positive wire in one direction and returns on the negative wire in the opposite direction. Since the impedance or resistance of the current is the same in both directions, this is called a balanced line or circuit. The third wire in a balanced cable is called the shield or ground. The ground wire is designed to “shield” the audio signal from electrical interference that can distort or weaken it in any way. The shield’s job is to eliminate noise (buzzes, hums, hisses, etc.) by keeping interference from coming into contact with the audio signal. While using balanced wiring doesn’t guarantee total noise rejection, it offers the best solution for protecting the integrity of the signal path. Balanced audio cables can be run for long distances with good results. This is particularly helpful in large rooms, for fixed recording environments, or with live events and concerts, where recording equipment is kept at some distance from the microphones and talent.

Figure 12.25 Professional microphones and recording systems are equipped with balanced XLR connectors.

Unbalanced Audio Connectors

Most consumer microphones and electronic equipment use an unbalanced audio connector for patching analog sound sources. The male end of the connector is called the plug and the female end or receiving socket is referred to as the jack. An unbalanced cable uses two wires, a center conductor surrounded by a shield. The positive wire conducts the outbound signal flow, while the negative wire functions as both the return conduit and ground. Unbalanced cables are highly susceptible to interference when used across long distances. As a result, it’s best to use them on patching runs of 20 feet or less.

RCA Connectors

In the early 1940s, the Radio Corporation of America designed the RCA phono plug for connecting phonographs (or record players) to amplifiers. Today, it is used for connecting both the audio and video signal paths of a diverse array of audiovisual (A/V) devices, including television monitors, gaming consoles, projectors, and numerous other things. RCA plugs are often color coded to match the receiving end of an RCA jack. Yellow designates the composite video channel, while red and white refer respectively to the right and left audio channels in a stereo system. You’ll also find RCA plugs used for component video—a video format that uses separate cables for the red, green, and blue signals.

Figure 12.26 A wide variety of audio connectors and adapters are used in audio production. These are just a few you may run into.

Adapters

While it’s best to avoid using adapters, you’ll sometimes need to use one for hooking up cables and devices with incompatible connectors, and it’s a good idea to have some good-quality adapters in your audio kit. One of the problems you may run into is the need to mix balanced and unbalanced audio gear. For example, the external microphone jack on a consumer camcorder (if it has one at all) is likely going to be an eighth-inch mini plug. In order to use a professional microphone with a camera of this type, you’ll need an XLR-to-mini plug adapter. Since the adapter is only traveling a short distance from the end of the microphone cable to the camera, you shouldn’t have to worry too much about interference affecting the audio signal.

Still, adapters can complicate matters, as they introduce another potential failure point in the audio chain. If the adapter fails or comes undone, you’ll lose your audio. To be safe, secure adapters with a small piece of gaffer tape to keep them from working loose during the recording. While gaffer tape looks similar to duct tape, don’t confuse the two. Gaffer tape uses a different type of adhesive that doesn’t leave a residue when you remove it. Mistake the two and you’ll end up with messy gear. Make the mistake when taping cables down on a carpet and expect not to be invited back to the gig.

Cable Management 101

While a microphone cable may seem like a rather inconsequential item, the role it plays in protecting the integrity of signal flow in the audio chain is critical to the success of a production. When properly cared for, cables will last longer, perform better, and be easier to handle and use. It pays to invest some time to learn the art and science of proper cable care and management. One of the best things I learned during my first internship as a young college student was how to wrap and store cables properly. The crusty broadcast engineer I was assigned to work with made no bones about how pitifully poor my cable-wrapping technique was. While I had to endure a bit of public humiliation and some colorful expletives along the way, I have always been grateful to him for taking the time to teach me the importance of cable etiquette and, even more important, for imparting to me a healthy sense of pride in regard to the proper use and care of equipment.

The most important lesson in cable management is learning how to properly wrap and secure a cable when you’re finished using it. Scrunching it up into a chaotic heap simply will not do. The next person who uses it will not appreciate the time he or she has to waste untangling the mess you created. Instead, cables should be carefully wrapped in a uniform coil, 12 to 24 inches in diameter (depending on the length of the cable). For short cables less than 50 feet, each loop should consist of 30–36 inches of wire. For longer cable runs, the loops can be larger. The important thing is consistency. As you wrap a cable, each loop should be roughly the same length in order to preserve the circular shape of the coil when finished.

Proper wrapping keeps cables from twisting, kinking, creasing, or bending, which can cause permanent damage to the encased wires or weaken them over time. Cables retain a “memory” based on good or bad patterns of repeated winding. In the long run, it’s easier to wrap a cable properly every time than it is to retrain a gnarled cable that has been poorly managed or abused. Once kinks and twists have been introduced into cable memory, they are difficult to undo.

Audio Monitoring

One of the simplest recording scenarios is the one-person interview. All that’s needed is a subject, a recording device (camcorder or audio recorder), a microphone, an XLR cable, and a set of headphones. The producer’s goal is to acquire source material by interviewing the subject and recording his or her voice to disk. To achieve professional results, you need to monitor the audio signal as it is being recorded. Audio monitoring is a two-step process that includes 1) the objective act of measuring sound intensity and setting the record levels and 2) the subjective act of listening to the audio signal as it is being recorded.

Monitoring Record Levels Using a VU Meter (Step 1)

The electrical signal produced by a microphone is very weak and must be amplified during the recording process. Audio mixing consoles and recording devices have a built-in microphone preamp for boosting the strength of the signal for audio processing. The preamp setting (or record level) is controlled with buttons or dials on the recording equipment. In a stereo system, there are separate preamps and controls for the left and right channels. As the recording engineer or operator, it’s your job to monitor the amount of amplification that’s applied to the microphone signal. The levels you choose will depend on many variables, including the strength of the subject’s voice, the type of microphone being used, the distance from the subject to the microphone, and the amount of background noise in the interview setting. For example, a soft-spoken person usually requires more amplification than a person with a naturally loud delivery. On professional systems, the preamp can be controlled automatically using automatic gain control (AGC) or manually using the volume-unit (VU) meter and record level controls. Given a choice, most professionals prefer the manual method.
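The difference between the two approaches can be illustrated with a toy model of automatic gain control. Real AGC circuits use carefully tuned attack and release characteristics; this hypothetical sketch (all names and numbers invented for illustration) simply nudges the gain toward whatever value would bring a soft-spoken subject’s level up to a target.

```python
# Toy model of automatic gain control (AGC). Each "block" of audio,
# the recorder measures the amplified level and moves the gain a
# fraction of the way toward the value that would hit the target.

def agc_step(gain, source_peak, target=0.7, rate=0.5):
    measured = source_peak * gain      # level after amplification
    if measured <= 0:
        return gain                    # silence: leave the gain alone
    desired = gain * target / measured # gain that would hit the target exactly
    return gain + (desired - gain) * rate

gain = 1.0
for source_peak in [0.2, 0.2, 0.2]:    # a consistently soft-spoken subject
    gain = agc_step(gain, source_peak)
print(round(gain, 2))  # about 3.19 — AGC has boosted the quiet voice
```

The catch, and the reason professionals prefer manual levels, is that a circuit like this reacts to everything: during a pause in speech it will cheerfully boost the gain and “pump” the background noise up to the target level.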

Tech Talk

The Over-Under Wrap To prevent twists and kinks from developing, the method of over-under wrapping is used (see Figure 12.27). With this technique, each loop in the cable wrap is formed by twisting the wire in the opposite direction of the loop immediately before and after it. When the cable is unfurled, the alternating twists cancel each other out, allowing the cable to lie flat on the surface. Depending on whether you are right- or left-handed, the “over” loop runs in a clockwise or counterclockwise direction “over” the wire at the point where the loop began (Steps 1 and 2). The “under” loop runs the same way but is turned inward, causing it to twist in the opposite direction of the previous loop (Steps 3 and 4). In this pass, the cable is guided “under” the wire at the point where the loop began. This alternating pattern of over and under loops continues until the end of the cable is reached.

To complete the task, a cord or cable tie is used to secure the ends of the cable and keep the coil from coming undone. Once perfected, you will find that this technique can be used for all manner of video and audio cables. In fact, you may discover, as I have, that this method of coiling cables also works just as effectively on an extension cord or garden hose.

Figure 12.27 Nobody likes working with a tangled cable. Here, a student uses the over-under wrap to properly coil a microphone cable.

A VU meter displays the strength of the microphone signal (in decibel units) after it has passed through the preamp (see Figure 12.28). An analog VU meter has a typical range of −20 dB to +3 dB. A bouncing needle indicates the loudness of the signal as it modulates throughout the full dynamic range of the recording (from the quietest moments to the loudest ones). Digital VU meters vary in style. Most of them have a wider range on the low end, starting at −48 or −36 dB. Instead of a needle, they feature a row of colored LEDs.
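The decibel readings on a digital meter can be derived from sample values with a simple formula. As a rough sketch (assuming samples normalized to the range −1.0 to +1.0, where full scale corresponds to 0 dB), a peak-reading meter works like this:

```python
# Minimal sketch of a digital peak meter. A full-scale sample (1.0)
# reads 0 dBFS; quieter signals read as negative decibel values,
# matching the digital meter ranges described above.

import math

def peak_dbfs(samples):
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(peak)

print(round(peak_dbfs([0.5, -0.25, 0.1]), 1))  # -6.0 (half of full scale)
print(round(peak_dbfs([1.0]), 1))              # 0.0 (full scale)
```

Notice that halving the amplitude costs about 6 dB, which is why the usable range between a quiet whisper and the 0 dB ceiling spans so many decibels on the meter.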

Figure 12.28 During a recording, try to keep the audio levels within the green portion of the scale, or good range, at roughly 50%–80%. Setting the record levels too high can produce clipping or distortion. Setting them too low will require you to amplify them in postproduction, introducing noise unnecessarily into the signal path.

Figure 12.29 Digital VU meters often include a clipping warning light that illuminates whenever sound levels exceed the distortion threshold. Sometimes, as shown here, the red area of the decibel scale is not displayed on a VU meter, perhaps to save space on a screen or visual interface. Instead, a clipping warning light may be all you have to work with to ensure that levels are kept from over-modulating.

On most VU meters, the region above 0 dB is color coded red to indicate that the signal is being overmodulated because of excessive amplification. While an occasional bounce into the lower region of the red scale usually isn’t a problem, when too much amplification is applied, waveform distortion can occur, causing a phenomenon known as clipping (see Figure 12.29). Clipping permanently corrupts the fidelity of the audio signal and cannot be repaired. For this reason, it is best to avoid pushing levels into the red at all. On the opposite end of the scale, you should also avoid setting the record level too low. A low audio signal will need to be boosted to acceptable levels in postproduction. Whenever you re-amplify a recorded audio signal, noise is introduced, and the quality of the original recording deteriorates. The lower your original record levels are, the more you will need to re-amplify them later. Maintaining proper levels throughout a recording session is key to obtaining professional results.
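Why clipping is irreversible is easy to show with a toy example. In this illustrative sketch, over-amplified samples are flattened at full scale during “recording”; turning the level back down afterward cannot restore the peaks that were cut off.

```python
# Toy demonstration of clipping. Anything pushed past full scale
# (+/-1.0) is flattened at the limit, and the original waveform
# cannot be recovered by reducing the level later.

def record(samples, gain):
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

wave = [0.2, 0.6, 0.9, 0.6, 0.2]

hot = record(wave, gain=2.0)        # over-amplified: peaks flattened at 1.0
restored = [s / 2.0 for s in hot]   # lowering it later does not undo the damage

print(hot)       # [0.4, 1.0, 1.0, 1.0, 0.4]
print(restored)  # [0.2, 0.5, 0.5, 0.5, 0.2] — the 0.6 and 0.9 peaks are gone
```

This is the asymmetry behind the advice above: a too-quiet recording can be boosted later (at the cost of added noise), but a clipped one has permanently lost information.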

The Sound Check

Before starting the recording, conduct a sound check with your subject. Prompt the subject to speak in a normal tone of voice. Using the VU meter as a visual reference, adjust the record levels to the point where the loudest portions of their speech peak around 0 dB without going “into the red.” After pressing “Record,” continue monitoring the levels and adjusting them as necessary.
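The “peak around 0 dB” target on a digital meter can be expressed numerically. Digital meters typically read in decibels relative to full scale, computed from the peak sample amplitude as 20 × log10(peak). Below is a minimal sketch of that calculation, assuming samples normalized to ±1.0 full scale; the sample values are hypothetical, chosen only to contrast a healthy sound check with a too-quiet one:

```python
import math

def peak_db(samples):
    """Peak level of a block of samples in dB relative to digital full scale."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(peak)

# A healthy sound check: the loudest peaks sit just under full scale.
print(peak_db([0.02, -0.5, 0.9, -0.7]))   # about -0.9 dB: near 0, no red
# A too-quiet recording that would need noisy re-amplification later.
print(peak_db([0.01, -0.05, 0.03]))       # about -26 dB: raise the record level
```

Note the logarithmic behavior: halving the peak amplitude lowers the reading by about 6 dB, which is why a quiet recording needs a large (and noisy) boost to reach normal levels.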

Monitoring with Headphones or Speakers (Step 2)

A VU meter gives you a visual reference of what the electronic recording device is “hearing” and is an objective indicator of the recorded signal’s intensity. You also need to monitor a recording by listening—using your ears to assess and evaluate the aesthetic properties of the recording. Monitoring a live recording with headphones or near-field speakers lets you hear the voice of your subject, and any associated background sounds or noise, as they are being recorded. Keep in mind that the volume control on a recording device only raises and lowers the headphone or control-room level; it has no effect on the actual recording. Just because the recording sounds loud in your headphones doesn’t mean the record levels are set properly—the monitor volume may simply be turned up, giving a false impression. As a matter of practice, set the record levels first and then adjust the volume of your headphones or speakers to the desired level.

Headphones

It is worth investing in at least one set of good-quality headphones. Look for an over-the-ear rather than earbud design. Professional video cameras and field recorders have headphone jacks that let you monitor your audio during the capture process, so get in the habit of always wearing headphones when you are working. The VU meter alone can’t tell you whether you are capturing good audio: it shows signal intensity, but not whether your microphone is working or picking up the main subject. You could be recording nothing but static or background noise. You might even be recording with the wrong microphone by mistake, for example, using an internal mic when you meant to use an external one. Keep the headphones on for the entire recording. Just because the audio sounded good when you started doesn’t mean it will sound good all the way through. Batteries die, cables get unplugged, and microphones can move or become detached. Cell phones can also wreak havoc on your audio if you are not using RF-shielded microphones. You need to know about audio problems in the field, before you go back to the studio to edit.

Listen for potential audio problems—your headphones can help you here as well. Is your subject wearing metal bracelets that might clink together? What about background noise? Is the air conditioner making too much noise? What about that high-pitched electronic squeal from the video projector? Whenever possible, either eliminate the source of the unwanted sounds or find a better location. Whatever you do, don’t just say, “I’ll fix it in editing.” There’s a good chance you won’t be able to, at least not easily, particularly if the noise is in the same frequency range as your talent’s voice. Watch for echoes and pay attention to where you set up your equipment. Avoid recording in the middle of a large room with hard walls; move to the side of the room instead, and look for things that will absorb rather than reflect sound. Recording on a busy street? Your headphones will help you confirm that you’ve positioned your microphone to minimize the traffic noise.

Chapter Summary

All too often, newcomers to multimedia production don’t pay enough attention to the quality of their audio work, and it shows in the final product. To work in the industry, you need to know the right tool for each task. For audio, this means knowing which microphone to use when and what the relative advantages of each type are. As we’ve seen, a ribbon microphone is great as a tabletop microphone for conducting an interview in a studio; it is a poor choice for an interview outdoors, however, because it is very susceptible to wind noise. Know when to use a balanced cable and when you can get away with an unbalanced one—the short answer is, use balanced whenever possible, and keep any unbalanced cable runs to around 20 feet or less. Use professional-grade equipment whenever possible—this doesn’t mean the most expensive, just good quality—and avoid internal microphones when you can. And don’t forget your headphones. They really are one of the most important tools you have, not only when you are editing audio but when you are capturing it as well. Few things in audio production are worse than coming back from an assignment and realizing that the battery in your microphone died three minutes into the interview and, because you weren’t wearing headphones, you never noticed.