7 Commandments of Audio Engineering

If you are wondering what it takes to break into the music industry as an audio engineer in Chicago, have no fear. Recording in Chicago is no different from recording pretty much anywhere else on the planet, except for the language. Here is a comprehensive list of skills you can develop to position yourself as a top engineer. Notice that four of these are what can be called "base skills," meaning they are imperative for any job in the music industry or elsewhere. The rest are "job-specific skills" that relate directly to your work in the studio.


1. Ability to read, write, and follow directions. Why is it so critical to follow instructions in a recording studio? For starters, you could fuck up the gear. You are also working with clients' master recordings, which may represent thousands of hours of time and financial investment. In Chicago, some of those clients might not be too happy if their masters get messed up, so it could potentially mean your life. Moreover, following instructions shows that you are reliable and dependable, which gives the head engineer or manager confidence that you can be developed and mentored to accomplish client requests properly. Following directions is crucial to working successfully in any recording or production studio, let alone life.

2. Communication. Many times while engineering a session, the artist or producer has turned to me and said, "It just doesn't sound right. I'm not sure what it is, but it's not grabbing me." We often spend long hours trying to figure out how to understand our clients; in a way, one could say we are quasi-psychologists. The ability to communicate clearly is crucial to being as productive as possible in the studio. Many delays and fuck-ups in the studio result from a lack of, or breakdown in, communication. Knowing when to speak up and when not to comes over time, through patience, practice, and understanding.

3. Ability to stay cool and calm. Musicians can get pretty emotional in the studio; in essence, they are pouring their emotional well-being into their performance for all to hear. A good engineer must know how to stay calm and reserved when a musician voices frustration. I have seen many sessions where fights broke out in the control room between band members, or between band members and management. These people have actually swung at each other, which generally is not helpful to the creative process. Remember, your job is to keep the project on track at all times, so it is important to always remain calm and relaxed, especially in Chicago. May the force be with you.

4. Basic computer knowledge. How much do you really need to know about computers to become a good recording engineer? Many ambitious producers and sound engineers have a good deal of experience operating sound recording and editing software, and it's certainly a bonus: the more you know about computers, the more valuable your services will be in the studio. Master the basics, such as word processing, data entry, and spreadsheet functions, so you can use the computer for simple math, and be comfortable with those three applications as well as the computer's recording and production software. A basic computer course at your local community college can teach you these fundamentals. It's also important to know both the Macintosh and PC platforms: Macintosh more so for composing, recording, and mixing; PCs for business management and data entry. Initially, the best computer editing software for sound and music was found on the Mac, but over the last couple of years the PC has been making strides in the audio department, and many programs that were once exclusive to the Mac are now available on the PC as well.


5. Critical auditory skills. If you haven't heard or experienced sound in an acoustic setting, you might not know what you are listening for, which can cause you problems as an engineer. You've got to use your ears and really listen to the sound or music. Get out into the real world and experience every type of music there is in a concert setting: country, jazz, rock, big band, opera, blues, and so on. Remember, musical recordings are really just sonic paintings. To be a competent recording engineer, you have to really understand what instruments sound like naturally, by themselves and together in ensembles. Treat the time spent developing these skills just as you would homework, and go out as much as possible, because it is important to hear it all. You never know when a client will step into the studio with an instrument, sound, or musical skill you are not familiar with. That unfamiliarity can lead to poor engineering decisions, which in turn lead to poor or undesirable recordings. That is why it is important to know how each instrument sounds naturally.

6. Audio aptitude. It is important to develop a comprehensive knowledge of audio: level, signal flow, phase, the frequency spectrum, microphone selection and placement, and acoustics. Whether you went to a reputable audio school or learned on your own, it is important to learn and understand the basic concepts of how to make a recording, do overdubs, edit correctly, manage a mix-down properly, and master. Even knowing the process of duplication and distribution to stores and online retail outlets doesn't hurt.

7. Studio Chi. The overall tone or vibe an engineer brings into a session with a client is vitally important to the energy and creative workflow in the studio. Some of the best engineers out there are the ones who create a climate conducive to positive, creative work. The equipment doesn't mean much if the vibe of the session is no good; even a half-million-dollar recording console isn't doing any good if a client walks in and doesn't feel right. When artists are babied or pampered in the studio, they tend to lose their inhibitions, open up, and perform much better overall. A good engineer helps generate that vibe in the studio in order to capture it and bring it out in the song.

Now you know the basic skill set needed for a good career in the field of audio engineering. The first six you can learn in school, whereas the seventh takes time and experience. It's important, not only as an aspiring engineer/producer but also as a musician, to sit in on sessions and watch how other engineers do their thing. Internships at major recording facilities are a great opportunity to see how things really work in a professional studio. After a while, you will find that every session and client is different, as is what it takes to create the right mood and vibe for each one. At the end of the day, you'll probably find yourself playing psychologist as much as engineer, producer, songwriter, mentor, friend, and fan. The list can go on and on.

Phase Issues and How to Resolve Them

Phase is a consistent issue for recording and mixing engineers alike. Even the smallest phase problems inside your song can ruin your music: they can make tracks sound empty or spectrally degraded, like something is missing, and issues with phase on one track can lead to problems on other tracks as well. These problems, as severe as they can be, are also easily avoided or fixed, but first it is essential to understand how they occur in the first place.
This essay will attempt to cover almost everything there is to know about phase: what it is, how it occurs, how it sounds, and some procedures for dealing with it.

What Is Phase?

I'm going to consult the all-knowing source they call Wikipedia to answer this question.

It says:

Phase in sinusoidal functions or in waves has two different, but closely related, meanings. One is the initial angle of a sinusoidal function at its origin and is sometimes called phase offset or phase difference. Another usage is the fraction of the wave cycle which has elapsed relative to the origin.

A less scientific definition provided by Sweetwater Sound is:

Audio waveforms are cyclical; that is, they proceed through regular cycles or repetitions. Phase is defined as how far along its cycle a given waveform is. The measurement of phase is given in degrees, with 360 degrees being one complete cycle. One concern with phase becomes apparent when mixing together two waveforms. If these waveforms are "out of phase", or delayed with respect to one another, there will be some cancellation in the resulting audio. This often produces what is described as a "hollow" sound. How much cancellation, and which frequencies it occurs at, depends on the waveforms involved, and how far out of phase they are (two identical waveforms, 180 degrees out of phase, will cancel completely).

No wonder phase is such a confusing topic for people. At a quick glance, the definition is confusing even to me, but that's why I am not a professor. At the end of the day, how does this definition apply to you and me when we're trying to make a record? This is the part where I could go on a rant about phase vs. polarity. I'll try to break it down more simply.

Phase vs. Polarity

Let's define things a bit more, starting with phase and polarity. These two words are often used interchangeably, but they are VERY different.

Phase is an acoustic concept that affects your microphone placement. Acoustical phase is the time relationship between two or more sound waves at a given point in their cycle, and it is measured in degrees. When two identical sounds that are 180 degrees out of phase are combined, the result is silence; any offset in between results in partial cancellation that varies with frequency, heard as comb filtering.

Polarity is an electrical concept relating to the value of a voltage: whether it is positive or negative. Part of the confusion between these concepts, besides equipment manufacturers mislabeling their products, is that inverting the polarity of a signal, changing it from plus to minus, is basically the same as putting the sound 180 degrees out of phase.

In case these definitions went over your head: phase is the difference in waveform cycles between two or more sounds; polarity is simply positive or negative.
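
The difference is easy to see in a few lines of code. Here is a minimal Python sketch (the function names and sample values are my own, invented for illustration) contrasting a polarity flip with a time shift:

```python
def invert_polarity(samples):
    """Polarity flip: every sample changes sign at the same instant."""
    return [-s for s in samples]

def shift_in_time(samples, n):
    """Time (phase) shift: the same samples simply arrive n samples later."""
    return [0.0] * n + samples[:len(samples) - n]

# An asymmetric waveform, e.g. a kick-drum transient (invented values):
wave = [0.0, 0.9, 0.4, -0.2, -0.1, 0.0, 0.0, 0.0]

print(invert_polarity(wave))   # mirrored around zero, no change in time
print(shift_in_time(wave, 2))  # same shape, just later in time
```

For a pure sine wave, a half-cycle time shift and a polarity flip happen to give the same result, which is exactly why the two get confused; for an asymmetric waveform like the one above, no amount of time shifting reproduces the inverted signal.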

What It Means to Be In and Out of Phase

When two sounds are exactly in phase (a 0-degree phase difference) and have the same frequency, peak amplitude, and shape, the resulting combined waveform will have twice the original peak amplitude. In other words, two identical sounds that are perfectly in phase will double in level when combined.

When two identical waveforms with a 180-degree phase difference are combined, they cancel out completely, producing no output. In the real world of recording, these conditions rarely happen. More than likely the two signals will either be slightly different, like two different microphones on the same source, or the phase difference will be something other than 180 degrees.
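
Both cases are easy to verify numerically. This small Python sketch (the sample rate and test frequency are arbitrary choices of mine) sums two identical sine waves, first 0 degrees apart and then 180 degrees apart:

```python
import math

SAMPLE_RATE = 48_000  # Hz (an arbitrary choice for the demo)
FREQ = 100            # Hz test tone
N = 480               # exactly one full cycle of 100 Hz at 48 kHz

def sine(freq, phase_deg, n=N, sr=SAMPLE_RATE):
    """Sample a unit-amplitude sine wave with a phase offset in degrees."""
    phase = math.radians(phase_deg)
    return [math.sin(2 * math.pi * freq * i / sr + phase) for i in range(n)]

def peak(samples):
    """Peak absolute level of a list of samples."""
    return max(abs(s) for s in samples)

a = sine(FREQ, 0)
in_phase  = [x + y for x, y in zip(a, sine(FREQ, 0))]    # 0 degrees apart
cancelled = [x + y for x, y in zip(a, sine(FREQ, 180))]  # 180 degrees apart

print(round(peak(in_phase), 3))   # 2.0 -- twice the original peak
print(round(peak(cancelled), 6))  # 0.0 -- complete cancellation
```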

In cases where the phase difference is neither 0 nor 180 degrees, or the waveforms are somehow different, you get constructive and destructive interference, also known as comb filtering. The nulls and peaks of the waveforms don't all line up, so some frequencies get louder and some get quieter. This is the tricky part of using several microphones on a single source.
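
The comb pattern follows directly from the delay between the two arrivals. Summing a signal with a copy of itself delayed by τ seconds gives a gain of 2|cos(πfτ)| at frequency f, with nulls at odd multiples of 1/(2τ). A quick Python sketch (the 1 ms delay is an example value of mine):

```python
import math

def comb_gain(freq_hz, delay_s):
    """Gain at freq_hz when a signal is summed with a copy of itself
    delayed by delay_s seconds: |1 + e^(-j 2 pi f tau)| = 2|cos(pi f tau)|."""
    return 2 * abs(math.cos(math.pi * freq_hz * delay_s))

DELAY = 0.001  # 1 ms between the two arrivals (example value)

# Nulls fall at odd multiples of 1/(2 * DELAY): 500 Hz, 1500 Hz, 2500 Hz...
for f in (500, 1000, 1500, 2000):
    print(f, "Hz ->", round(comb_gain(f, DELAY), 3))
```

Sweeping frequency shows the alternating nulls (gain 0) and peaks (gain 2) that give the "comb" its name.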

For the purposes of this article, we're only dealing with phase. Here's the deal: sound travels at roughly 1,100 feet per second, which is extremely, extremely slow compared to light. Because sound travels so slowly, you have to pay careful attention when recording. Why? Because if two signals are out of phase with each other, your recordings will sound thin and your music will not sound good.

The biggest problem area for phase is when you’re using multiple microphones on a single sound source, like drums or an orchestra. Depending on where each microphone is placed in relation to the sound source, the sound will reach each microphone at different moments in time. When you listen to these microphones blended together, there’s a chance that it will sound “hollow” and “thin” because each signal captured by each microphone is out of phase.

Really, at the end of the day, phase issues are nothing more than timing issues. The sounds from multiple sources are ideally meant to reach your ear at the same time, but sometimes they don't.
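
Putting numbers on it, using the rough 1,100 ft/s figure from above (the extra-distance and frequency values below are just illustrative):

```python
SPEED_OF_SOUND_FT_S = 1100.0  # the rough figure used above

def arrival_delay_s(extra_distance_ft):
    """Extra time sound needs to reach the farther microphone."""
    return extra_distance_ft / SPEED_OF_SOUND_FT_S

def phase_offset_deg(extra_distance_ft, freq_hz):
    """Phase difference that the arrival delay creates at a given frequency."""
    return (arrival_delay_s(extra_distance_ft) * freq_hz * 360) % 360

# One mic placed 1 ft farther from the source than another:
print(round(arrival_delay_s(1.0) * 1000, 2))  # 0.91 -- about 0.9 ms late
print(round(phase_offset_deg(1.0, 550)))      # 180 -- a full null at 550 Hz
```

A single foot of extra mic distance is enough to put some frequencies completely out of phase, which is why small placement changes can make such a large audible difference.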

3 Small Tips for Dealing with Phase Issues

1. Microphone placement – Most phase-related issues you'll deal with come simply from how the microphones are placed. Each microphone may sound fine by itself, but phase issues show up when all the microphone signals are blended together, so take time to listen to every microphone in your session combined. The easiest way to hear problems is to listen to them together in mono. Also, every additional microphone adds another chance for phase problems to occur, so it's always best to use the fewest microphones possible to get the job done.
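
A simulated version of that mono check, in Python (the 1 ms mic spacing and 500 Hz tone are a deliberately worst-case example of mine): two mics that each look healthy on their own nearly vanish when summed.

```python
import math

SR = 48_000   # sample rate (arbitrary for the demo)
FREQ = 500    # Hz -- a worst-case tone for this mic spacing

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Mic B hears the same tone 1 ms later (roughly 1.1 ft farther away).
delay_samples = int(0.001 * SR)  # 48 samples
n = SR // 10                     # a tenth of a second of audio

mic_a = [math.sin(2 * math.pi * FREQ * i / SR) for i in range(n)]
mic_b = [math.sin(2 * math.pi * FREQ * (i - delay_samples) / SR) for i in range(n)]

mono = [(x + y) / 2 for x, y in zip(mic_a, mic_b)]

print(round(rms(mic_a), 3))  # 0.707 -- each mic on its own is healthy
print(round(rms(mono), 3))   # 0.0   -- summed to mono, the tone vanishes
```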

2. Plug-in latency – Within your recording software, pretty much any plug-in you use will induce latency, or delay, in your audio, which can cause small phase problems in your mix. If you put a plug-in with 20 ms of latency on one track and not on another, the two tracks will be out of phase with each other. This isn't necessarily a big issue, but it's something to keep in mind: if you used two microphones on your acoustic guitar, use the same plug-ins on each track so they remain in phase with each other. Most audio software out today compensates for plug-in latency automatically, but it is still worth remembering.
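
What automatic delay compensation does can be sketched in a few lines of Python (the function names and toy sample values are mine, not any real DAW API): the host notes the plug-in's reported latency and delays every other track by the same amount.

```python
def plugin_with_latency(track, latency):
    """Stand-in for a plug-in that outputs its input `latency` samples late."""
    return [0.0] * latency + track

def compensate(track, latency):
    """What automatic delay compensation does: delay every *other*
    track by the same amount so everything lines up again."""
    return [0.0] * latency + track

LATENCY = 960  # 20 ms at 48 kHz, the figure from the text

# Toy "recordings" of the same guitar from two mics (invented values):
guitar_close = [0.1, 0.5, 0.9, 0.5, 0.1]
guitar_room  = [0.1, 0.5, 0.9, 0.5, 0.1]

processed = plugin_with_latency(guitar_close, LATENCY)  # plug-in on one track
aligned   = compensate(guitar_room, LATENCY)            # host pads the other

print(processed == aligned)  # True -- both tracks now carry the same delay
```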

3. Linear-phase processing – This last entry isn't really a tip so much as something to be aware of. Most plug-ins shift low-frequency and high-frequency information by different amounts; the lows may come through a little faster than the highs, for example, or vice versa. This is known as phase shift, and it can affect the overall clarity of your audio. That is why many plug-in manufacturers have developed "linear-phase" EQs and similar processors, which are designed to avoid introducing phase shift in the first place.