Phase is a persistent problem for recording and mixing engineers alike. Even the smallest phase problems inside your song can ruin your music, making tracks sound empty or spectrally degraded, like something is missing. Phase issues on one track can lead to problems on other tracks as well. These problems, as severe as they can be, are also easily avoided or fixed, but first it is essential to understand how phase problems occur in the first place.
This essay will attempt to cover almost everything there is to know about phase: what it is, how it occurs, how it sounds, and some procedures for dealing with it.

What Is Phase?

I’m going to consult the all-knowing source they call Wikipedia to answer this question.

It says:

Phase in sinusoidal functions or in waves has two different, but closely related, meanings. One is the initial angle of a sinusoidal function at its origin and is sometimes called phase offset or phase difference. Another usage is the fraction of the wave cycle which has elapsed relative to the origin.

A less scientific definition provided by Sweetwater Sound is:

Audio waveforms are cyclical; that is, they proceed through regular cycles or repetitions. Phase is defined as how far along its cycle a given waveform is. The measurement of phase is given in degrees, with 360 degrees being one complete cycle. One concern with phase becomes apparent when mixing together two waveforms. If these waveforms are “out of phase”, or delayed with respect to one another, there will be some cancellation in the resulting audio. This often produces what is described as a “hollow” sound. How much cancellation, and which frequencies it occurs at, depends on the waveforms involved, and how far out of phase they are (two identical waveforms, 180 degrees out of phase, will cancel completely).

No wonder phase is such a confusing topic for people. At a quick glance, the definition is confusing even to me, but that’s why I am not a professor. At the end of the day, how does this definition apply to you and me when we’re trying to make a record? This is the part where I could go on a rant about phase vs. polarity. Instead, I’ll try to break it down more simply.

Phase vs. Polarity

Let’s define things a bit more, starting with phase and polarity. These two words are often used interchangeably, but they are VERY different.

Phase is an acoustic concept that comes into play with your microphone placement. Acoustical phase is the time relationship between two or more sound waves at a given point in their cycle, and it is measured in degrees. When two identical sounds that are 180 degrees out of phase are combined, the result is silence; any phase difference in between results in comb filtering.

Polarity is an electrical concept relating to the value of a voltage: whether it is positive or negative. Part of the confusion between these concepts, besides equipment manufacturers mislabeling their products, is that inverting the polarity of a signal (changing it from plus to minus) is basically the same as making the sound 180 degrees out of phase.

In case these definitions went over your head: phase is the difference in waveform cycles between two or more sounds, while polarity is simply positive or negative.
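To make the distinction concrete, here is a minimal sketch in Python with NumPy (my own illustration, not taken from any particular audio tool). Inverting polarity negates every sample; a phase shift delays the waveform in time. On a pure sine tone the two happen to produce the same result, which is exactly why the terms get confused.

```python
import numpy as np

fs = 48000                            # sample rate in Hz
t = np.arange(fs) / fs                # one second of sample times
tone = np.sin(2 * np.pi * 100 * t)    # a 100 Hz test tone

# Polarity inversion: flip the sign of every sample (an electrical concept).
inverted = -tone

# Phase shift: delay the waveform by half a cycle in time (an acoustic concept).
half_cycle = int(fs / 100 / 2)        # 240 samples at 100 Hz
shifted = np.roll(tone, half_cycle)

# On a pure, periodic sine these two operations match almost exactly.
print(np.allclose(inverted, shifted))   # True
```

On any asymmetric waveform, though, a sign flip and a time delay produce clearly different signals, and that is the real difference between the two concepts.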

What It Means to Be In and Out of Phase

When two sounds are exactly in phase (a 0-degree phase difference) and have the same frequency, peak amplitude, and shape, the resulting combined waveform will have twice the original peak amplitude. In other words, two identical sounds that are perfectly in phase will sum to double the level when combined.

When two waveforms that are exactly the same but have a 180-degree phase difference are combined, they cancel out completely, producing no output. In the real world of recording, these conditions rarely happen. More than likely, the two signals will either be slightly different, like two different microphones on the same source, or the phase difference will be something other than 180 degrees.
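A quick NumPy sketch (again, purely illustrative) shows both extremes described above:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 440 * t)   # a 440 Hz tone with peak amplitude 1.0

in_phase = a + a                  # 0-degree phase difference
out_of_phase = a + (-a)           # 180 degrees out: a sign-flipped copy

print(np.max(np.abs(in_phase)))       # ~2.0: twice the original amplitude
print(np.max(np.abs(out_of_phase)))   # 0.0: complete cancellation
```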

In cases where the phase difference is neither 0 nor 180 degrees, or the waveforms are somehow different, you get constructive and destructive interference, also known as comb filtering. The nulls and peaks of the waveforms don’t all line up perfectly, so some frequencies come out louder and some quieter. This is the trick to using several microphones on a single source.
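Comb filtering is easy to demonstrate: sum a signal with a slightly delayed copy of itself. In this sketch (the 1 ms delay is an arbitrary choice for illustration), the delay equals half a cycle at 500 Hz, producing the first null there, and a full cycle at 1,000 Hz, producing the first boost:

```python
import numpy as np

fs = 48000
delay = int(fs * 0.001)   # a 1 ms delay = 48 samples at 48 kHz

def summed_peak(freq):
    """Peak level of a tone summed with a 1 ms delayed copy of itself."""
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * freq * t)
    delayed = np.concatenate([np.zeros(delay), tone[:-delay]])
    combined = tone + delayed
    return np.max(np.abs(combined[delay:]))   # skip the start-up transient

print(summed_peak(500))    # ~0.0: first null (delay = half a cycle)
print(summed_peak(1000))   # ~2.0: first peak (delay = a full cycle)
```

Every odd multiple of 500 Hz nulls the same way, which is where the evenly spaced “comb” of notches gets its name.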
For the purpose of this article, we’re only dealing with phase. Here’s the deal: sound travels at roughly 1,100 feet per second. That’s extremely, extremely, EXTREMELY slow compared to light. Because sound travels so slowly, you have to pay careful attention when recording. Why? Because if two signals are out of phase with each other, your recordings will sound thin and your music will not sound good.

The biggest problem area for phase is when you’re using multiple microphones on a single sound source, like drums or an orchestra. Depending on where each microphone is placed in relation to the source, the sound will reach each microphone at a different moment in time. When you listen to these microphones blended together, there’s a chance the result will sound “hollow” and “thin” because the signals captured by the different microphones are out of phase with one another.
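As a worked example (the distances here are hypothetical, just to show the arithmetic): say a close mic sits 1 foot from a snare drum and an overhead mic sits 3 feet away. The extra 2 feet of travel adds about 1.8 ms of delay, which puts the first cancellation null around 275 Hz, right in the body of the drum:

```python
SPEED_OF_SOUND = 1100.0    # feet per second, the rough figure used above

close_mic_ft = 1.0         # hypothetical mic distances for illustration
overhead_ft = 3.0

extra_path_ft = overhead_ft - close_mic_ft     # 2 feet farther to travel
delay_s = extra_path_ft / SPEED_OF_SOUND       # ~0.0018 seconds
first_null_hz = 1 / (2 * delay_s)              # delay = half a cycle here

print(f"delay: {delay_s * 1000:.2f} ms")       # delay: 1.82 ms
print(f"first null: {first_null_hz:.0f} Hz")   # first null: 275 Hz
```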

Really, at the end of the day, phase issues are nothing more than timing issues. The same sound arriving through multiple paths is ideally meant to reach your ear at the same time, but sometimes it doesn’t.

3 Small Tips for Dealing with Phase Issues

1. Microphone placement – Most phase-related issues you’ll deal with come simply from how the microphones are placed. Take time to listen to all the microphones in your recording session blended together. Each microphone may sound fine by itself, but phase issues appear when all the microphone signals are combined. The easiest way to hear this is to listen to them together in mono (see the first sketch after this list). Also, every additional microphone used in the session adds another chance for phase problems to occur, so it’s always best to use the fewest microphones that get the job done.

2. Plug-in latency – Within your recording software, pretty much any plug-in you use will introduce latency, or delay, into your audio, and this can cause small phase problems in your mix. If you put a plug-in with 20 ms of latency on one track and not on another, the processed track will lag the other and the two will be out of phase. This isn’t necessarily a big issue, but it’s something you need to keep in mind. If you used two microphones on your acoustic guitar, use the same plug-ins on each track so that they remain in phase with each other. Most audio software out today compensates for plug-in latency automatically (see the second sketch after this list), but it is still worth remembering.

3. Linear-phase processing – This last entry isn’t really a tip so much as something to be aware of. Many plug-ins pass low-frequency and high-frequency information through at slightly different speeds: the lows may come through the plug-in a little ahead of the highs, or vice versa. This is known as phase shift, and, at least in theory, it can affect the overall clarity of your audio. Many plug-in manufacturers have developed “linear-phase” EQs and the like, which are designed to avoid this phase shift by delaying all frequencies equally (see the last sketch after this list).
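To put tip 1 into practice, here is a rough sketch of the mono listening test in NumPy (the signals are synthetic stand-ins; in a real session you would just hit the mono button on your monitor controller or master bus). If the mono sum comes out quieter than the individual tracks, the signals are cancelling each other:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs

# Stand-ins for two mics on the same source; mic_2 arrives 1 ms late,
# which is half a cycle at 500 Hz.
mic_1 = np.sin(2 * np.pi * 500 * t)
mic_2 = np.roll(mic_1, int(0.001 * fs))

def rms(x):
    return np.sqrt(np.mean(x ** 2))

mono_sum = mic_1 + mic_2
print(rms(mic_1), rms(mic_2), rms(mono_sum))   # mono ~0: severe cancellation
```

For tip 2, a hedged sketch of what delay compensation does under the hood: the host delays every unprocessed track by the latency the plug-in reports, so everything lines back up. The 20 ms figure comes from the example above; real plug-ins report their own latency to the host.

```python
import numpy as np

fs = 48000                                  # session sample rate
plugin_latency_ms = 20.0                    # figure from the example above
pad = int(fs * plugin_latency_ms / 1000)    # 960 samples at 48 kHz

def align(dry_track):
    """Delay an unprocessed track to match a plug-in's reported latency."""
    return np.concatenate([np.zeros(pad), dry_track])
```

Finally, for tip 3, a sketch of the phase-shift idea using SciPy (assumed to be available). A conventional IIR filter delays different frequencies by different amounts (its group delay varies with frequency), while a symmetric FIR filter, the building block of linear-phase EQs, delays every frequency equally:

```python
import numpy as np
from scipy import signal

fs = 48000

# A typical minimum-phase IIR low-pass: group delay varies with frequency.
b_iir, a_iir = signal.butter(4, 1000, fs=fs)
w, gd_iir = signal.group_delay((b_iir, a_iir), fs=fs)

# A symmetric FIR low-pass: constant group delay of (numtaps - 1) / 2 samples.
b_fir = signal.firwin(101, 1000, fs=fs)
_, gd_fir = signal.group_delay((b_fir, np.ones(1)), fs=fs)

print(gd_iir.min(), gd_iir.max())   # delay in samples varies across frequency
print(gd_fir.min(), gd_fir.max())   # flat at 50 samples everywhere
```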
