Equalizing With Your Eyes Instead of Your Ears

So, we’ve all done it at some point or another: looking at the transients on waveforms to match tempos, watching the meters too much, or the biggest culprit, watching the graphic display on an EQ. We have all used our eyes way too much when mixing. How can one not, with all those pretty graphics and curves?


Waves Renaissance EQ 6 Parametric EQ

This little black EQ has been a big “go-to” of mine for over 10 years now. I would have to say that close to 98% of the time, this is the EQ I reach for during my mixes.

I’m not going to use this time to diss the Renaissance EQ; it just happens to be one of my main DAW EQs of choice. It’s simple to use, I’ve always liked its sound and UI, and it takes up only a small amount of processing power. I’ve also always liked seeing the results of what turning the knobs will bring. For some reason, I have always loosely assumed that the stranger the graphic display on an EQ looks after I use it, the better job I did equalizing the sound. This probably stems from watching engineers mix ‘ITB’ back in the early days and noticing that the EQ curves on their graphic displays always looked rather strange. My brain almost combines the two actions (listening and looking at the curve), probably to my disservice.

I found it’s really hard not to look at times. Even though I started my career on analog equipment, I’ve worked the majority of that career on a DAW, Pro Tools more specifically. I’m used to that workflow and have come to find it familiar, like home.


Recently, I was mixing a song for one of my regular clients and the mix just wasn’t coming out the way I hoped it would. The REQ6 has always helped me get that modern vocal sound for hip hop, rap, and trap music, but this project isn’t quite that. It needed a more old school approach: a warm, personal, underground kind of sound. That’s probably not the best way to describe it, but you get the idea.

To keep the story to the point, my traditional approach wasn’t working, so I decided to go back to the drawing board. And what exactly does that mean in the world of digital audio engineering?


So I decided to start the mix over completely from scratch. Over my last 16 years of engineering, I’ve learned that it’s OK to start over sometimes. Starting over lets you put your head into a different space, try new things, and not be so restricted, which is important when mixing a song. Usually when I make this decision, it is the right thing for the mix. So what was to be my new approach?

Since I wanted to go for more of an old school approach to this mix, I needed to emulate the workflow from that time period. So, just like on a real analog console, I decided to use the SAME signal chain on every channel, since a real console has the same equalization and dynamics on every channel. My trusty old Waves SSL 4000 Channel Strip seemed like a good fit for this approach.


I started the mix over, and after a few hours it started to sound the way I was hearing it in my head: smooth, fat, and personal. The mix was much warmer and less digital sounding than my first approach. Overall, it was more balanced and sounded pretty analog for a 100% digital ‘ITB’ mix. Because of this approach, I forced myself to really LISTEN, not listen through visual cues. Now, just because this approach worked this time around doesn’t mean it will always work. Each song and genre is different, as are the requirements for what makes a good mix. What I did learn, though, is that I had become lazy when using equalizers with graphic displays. At times I was using my eyes to EQ sounds, and since we can’t listen with our eyes, this was having the occasional adverse effect on my mixes.

So, if you are just starting out your career as a mix engineer, I highly recommend not getting stuck on the visual part of mixing ‘ITB’. There are times when it is important to see what you are doing, but it is what we hear that makes the overall sound of the mix.

Phase Issues and How to Resolve Them

The problem of phase is a consistent issue for recording and mixing engineers alike. Even the smallest phase problems inside your song can ruin your music, making tracks sound empty or spectrally degraded, like something is missing. Phase issues on one track can lead to problems on other tracks as well. These problems, as severe as they can be, can also be easily avoided or fixed, but first it is essential to understand how they occur.

This article will discuss what phase is, how it occurs, how it sounds, and some procedures for dealing with it.

What Is Phase?

I’m going to consult the all-knowing source they call Wikipedia to answer this question.

It says:

Phase in sinusoidal functions or in waves has two different, but closely related, meanings. One is the initial angle of a sinusoidal function at its origin and is sometimes called phase offset or phase difference. Another usage is the fraction of the wave cycle which has elapsed relative to the origin.

A less scientific definition provided by Sweetwater Sound is:

Audio waveforms are cyclical; that is, they proceed through regular cycles or repetitions. Phase is defined as how far along its cycle a given waveform is. The measurement of phase is given in degrees, with 360 degrees being one complete cycle. One concern with phase becomes apparent when mixing together two waveforms. If these waveforms are “out of phase”, or delayed with respect to one another, there will be some cancellation in the resulting audio. This often produces what is described as a “hollow” sound. How much cancellation, and which frequencies it occurs at, depends on the waveforms involved and how far out of phase they are (two identical waveforms, 180 degrees out of phase, will cancel completely).

No wonder phase is such a confusing topic for people. At a quick glance, the definition is confusing even to me, but that’s why I am not a professor. At the end of the day, how does this definition apply to you and me when we’re trying to make a record? This is the part where I could go on a rant about phase vs. polarity. Instead, I’ll try to break it down more simply.

Phase Vs Polarity

Let’s define things a bit more, starting with phase and polarity. These two words are often used interchangeably, but they are VERY different.

Phase is an acoustic concept that affects your microphone placement. Acoustical phase is the time relationship between two or more sound waves at a given point in their cycle, measured in degrees. When two identical sounds that are 180 degrees out of phase are combined, the result is silence; any degree in between results in comb filtering.

Polarity is an electrical concept relating to the value of a voltage: whether it is positive or negative. Part of the confusion between these concepts, besides equipment manufacturers mislabeling their products, is that inverting the polarity of a signal (changing it from plus to minus) is basically the same as making the sound 180 degrees out of phase.

In case these definitions went over your head: phase is the difference in waveform cycles between two or more sounds; polarity is either positive or negative.

What It Means to Be In and Out of Phase

When two sounds are exactly in phase (a 0-degree phase difference) and have the same frequency, peak amplitude, and shape, the resulting combined waveform will be twice the original peak amplitude. In other words, two sounds exactly the same and perfectly in phase will be twice as loud when combined.

When two identical waveforms with a 180-degree phase difference are combined, they cancel out completely, producing no output. In the real world of recording, these conditions rarely happen. More likely, the two signals will either be slightly different, like two different microphones on the same source, or the phase difference will be something other than 180 degrees.

In cases where the phase difference is not 0 or 180 degrees, or the waveforms are somehow different, you get constructive and destructive interference, also known as comb filtering. The nulls and peaks of the waveforms don’t all line up perfectly, so some frequencies will be louder and some quieter. This is the trick to using several microphones on a single source.
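The doubling, cancellation, and partial cancellation described above can be checked numerically. Here is a minimal sketch in plain Python (the function name and parameters are my own, just for illustration): it sums two identical unit sine waves at a given phase offset and reports the peak of the combined waveform.

```python
import math

def combined_peak(phase_deg, freq=440.0, sr=48000, n=4800):
    """Peak amplitude of two unit sine waves summed with a phase offset."""
    offset = math.radians(phase_deg)
    peak = 0.0
    for i in range(n):
        t = i / sr
        s = math.sin(2 * math.pi * freq * t) + math.sin(2 * math.pi * freq * t + offset)
        peak = max(peak, abs(s))
    return peak

print(round(combined_peak(0), 2))    # in phase: 2.0 (twice the amplitude)
print(round(combined_peak(180), 2))  # 180 degrees out: 0.0 (silence)
print(round(combined_peak(90), 2))   # in between: 1.41 (partial cancellation)
```

Anything between 0 and 180 degrees lands between those extremes, which is exactly the in-between territory where comb filtering lives.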

For the purpose of this article, we’re only dealing with phase. Here’s the deal: sound travels at roughly 1,100 feet per second. That’s extremely, EXTREMELY slow compared to light. Since sound travels so slowly, you have to pay careful attention when recording. Why? Because if two signals are out of phase with each other, your recordings will sound thin and your music will not sound good.

The biggest problem area for phase is when you’re using multiple microphones on a single sound source, like a drum kit or an orchestra. Depending on where each microphone is placed in relation to the source, the sound will reach each microphone at a different moment in time. When you listen to these microphones blended together, there’s a chance the result will sound “hollow” and “thin” because the signals captured by the microphones are out of phase with each other.

Really, at the end of the day, phase issues are nothing more than timing issues. The sounds from multiple sources are ideally meant to reach your ear at the same time, but sometimes they don’t.
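Since phase issues are timing issues, you can estimate them with quick arithmetic. This hypothetical Python sketch (the function names are mine) turns a difference in mic distance into a delay, and that delay into a phase offset at a given frequency, using the rough 1,100 ft/s figure from above:

```python
SPEED_OF_SOUND_FT_S = 1100.0  # rough speed of sound used in this article

def delay_ms(extra_distance_ft):
    """Extra arrival time (ms) at a mic that is farther from the source."""
    return extra_distance_ft / SPEED_OF_SOUND_FT_S * 1000.0

def phase_difference_deg(extra_distance_ft, freq_hz):
    """Phase offset the extra distance causes at a single frequency."""
    delay_s = extra_distance_ft / SPEED_OF_SOUND_FT_S
    return (delay_s * freq_hz * 360.0) % 360.0

# A mic just one foot farther away hears the source ~0.91 ms later...
print(round(delay_ms(1.0), 2))                     # 0.91
# ...which at 550 Hz is a half-cycle offset: deep cancellation.
print(round(phase_difference_deg(1.0, 550.0), 1))  # 180.0
```

One foot of mic spacing is enough to put some frequency fully out of phase, which is why small placement changes audibly move the “hollow” spots around.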

3 Small Tips for Dealing with Phase Issues

1. Microphone placement – Most phase-related issues you’ll deal with come simply from how the microphones are placed. Take time to listen to all the microphones in your recording session blended together. Each microphone may sound fine by itself, but phase issues appear when the signals are combined. The easiest way to listen for this is to sum them to mono. Also, every additional microphone used in the session adds another chance for phase problems to occur, so it’s always best to use the fewest microphones possible to get the job done.

2. Plug-in latency – Within your recording software, pretty much any plug-in you use will induce latency, or delay, in your audio. This may cause small phase problems in your mix. If you put a plug-in with 20 ms of latency on one track and not on another, the two tracks will be out of phase with each other. This isn’t necessarily a big issue, but it’s something to keep in mind. If you used two microphones on your acoustic guitar, use the same plug-ins on each track so that they remain in phase with each other. Most audio software today compensates for plug-in latency automatically, but it is still worth remembering.

3. Linear-phase processing – This last entry isn’t really a tip so much as something to be aware of. Most plug-ins process low-frequency and high-frequency information at different speeds: the lows may come through the plug-in a little sooner than the highs, or vice versa. This is known as phase shift, and it can affect the overall clarity of your audio. Many plug-in manufacturers have developed “linear-phase” EQs designed to combat this phase shift by delaying all frequencies equally.
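To see why an uncompensated delayed copy of a track sounds “hollow,” you can list the frequencies a given delay will cancel. This is a back-of-the-envelope sketch, not any plug-in’s actual behavior: a copy delayed by Δt cancels the original wherever the delay equals an odd number of half-cycles, i.e. at f = (2k+1) / (2·Δt).

```python
def comb_notches(delay_ms, max_hz=500.0):
    """Frequencies (Hz) where a copy delayed by delay_ms cancels the original."""
    delay_s = delay_ms / 1000.0
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)  # odd multiples of a half-cycle
        if f > max_hz:
            break
        notches.append(round(f, 1))
        k += 1
    return notches

# The 20 ms latency example from tip 2: the first null lands at 25 Hz,
# then every 50 Hz after that, marching right through the low end of a mix.
print(comb_notches(20, max_hz=200))  # [25.0, 75.0, 125.0, 175.0]
```

Shorter delays push the first notch higher: a 1 ms offset, for instance, puts its first null at 500 Hz, squarely in the midrange.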

WAV or MP3? The debate ends here.

In today’s music market, it seems the MP3 file format has finally won out over WAV. Music buyers can purchase their music digitally online and download it to their phone or music player within seconds. More and more popular music producers are rendering their instrumental beats and stems to MP3s for use in studio sessions and licensing. Even the audio in the videos we stream on sites like YouTube and Netflix is delivered in lossy compressed formats. But why has this happened, who is to blame, and most importantly, why should we stop using the MP3 and return to WAV?

To answer these questions, we must first understand exactly what MP3 and WAV files are, and what the differences between them are. So grab your nuts and pucker those cheeks, this is about to get more complicated than Keanu Reeves’ sexuality.

To understand exactly what MP3 and WAV files are, we will resort to the mighty all-knowing Wikipedia. Thanks, guys!

So, according to Wikipedia…

The MPEG-1 or MPEG-2 Audio Layer III, more commonly referred to as MP3, is an encoding format for digital audio which uses a form of lossy data compression. It is a common audio format for consumer audio streaming or storage, as well as a de facto standard of digital audio compression for the transfer and playback of music on most digital audio players.

The MP3 is an audio-specific format that was designed by the Moving Picture Experts Group (MPEG) as part of its MPEG-1 standard and later extended in MPEG-2 standard. The use in MP3 of a lossy compression algorithm is designed to greatly reduce the amount of data required to represent the audio recording and still sound like a faithful reproduction of the original uncompressed audio for most listeners. An MP3 file that is created using the setting of 128 kbit/s will result in a file that is about 1/11 the size of the CD file created from the original audio source. An MP3 file can also be constructed at higher or lower bit rates, with higher or lower resulting quality.

The compression works by reducing accuracy of certain parts of sound that are considered beyond the auditory resolution ability of most people. This method is commonly referred to as perceptual coding. It uses psychoacoustic models to discard or reduce precision of components less audible to human hearing and then records the remaining information in an efficient manner.

So, like, wow! It pretty much means that the MP3 is an audio file format developed for people who can’t hear that it sounds bad. And trust me, there are people out there who think the MP3 sounds better, and some of them claim to be audio engineers. Oh no!

Now that we have that ingrained in our skulls, let’s see what good ol’ Wikipedia has to say about WAV files.

Waveform Audio File Format (WAVE, or commonly known as WAV due to its filename extension) is a Microsoft and IBM audio file format standard for storing an audio bitstream on PCs. It is an application of the Resource Interchange File Format (RIFF) bitstream format method for storing data in ‘chunks’, and thus is also close to the 8SVX and the AIFF format used on Amiga and Macintosh computers, respectively. It is the main format used on Windows systems for raw and typically uncompressed audio. The usual bitstream encoding is the linear pulse-code modulation (LPCM) format.

Both WAVs and AIFFs are compatible with Windows, Macintosh, and Linux based operating systems. The format takes into account some differences of the Intel CPU, such as little-endian byte order. The RIFF format acts as a ‘wrapper’ for various audio compression codecs.

Though a WAV file can hold compressed audio, the most common WAV format contains uncompressed audio in the linear pulse code modulation (LPCM) format. The standard audio file format for CDs, for example, is LPCM-encoded, containing two channels of 44,100 samples per second, 16 bits per sample. Since LPCM uses an uncompressed storage method which keeps all the samples of an audio track, professional users or audio experts may use the WAV format for maximum audio quality.
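The numbers in the two quotes above line up with some quick arithmetic. Here is a sketch in Python using the standard CD rate (44,100 samples/s, 16 bits, stereo) and a 128 kbit/s MP3:

```python
# Uncompressed CD audio: 44,100 samples/s x 16 bits/sample x 2 channels.
cd_rate = 44_100 * 16 * 2            # bits per second
mp3_rate = 128_000                   # a 128 kbit/s MP3, in bits per second

print(cd_rate)                       # 1411200, i.e. ~1,411 kbit/s
print(round(cd_rate / mp3_rate, 1))  # 11.0 -> the "1/11 the size" figure quoted above

# One minute of stereo audio, in megabytes:
print(cd_rate * 60 // 8 / 1_000_000)   # 10.584 MB uncompressed
print(mp3_rate * 60 // 8 / 1_000_000)  # 0.96 MB as a 128 kbit/s MP3
```

That roughly 11:1 shrink is the entire appeal of the MP3, and the perceptual coding described above is what gets thrown away to achieve it.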


So now that we understand exactly what MP3 and WAV files are, let us return to the all-important questions I proposed earlier in this diatribe.

So how did the MP3 become the dominant format in the music industry if its quality isn’t as good as a WAV file’s? The answer is that more people now buy their music digitally than physically (physically meaning CD, cassette, or vinyl). Since most online music retailers sell music only as MP3s, people predominantly end up purchasing MP3s. A few retailers offer the option of buying WAV files, but not many. (It should also be pointed out that iTunes sells music in the AAC file format, which is another crappy lossy audio format unworthy of even the deaf.)

So now that the MP3 has taken over the retail music market, people have begun to treat it as a proper file format to use in their studio sessions, because it’s often the only format they know exists. Whether recording vocals over an MP3 instrumental, mixing music containing MP3 stems, or printing final mixes as MP3s, we see it here at Studio 11 more and more each day. What these people don’t realize is that by using the MP3 format in a studio session, they are lessening the overall fidelity of their final master.

The way to improve fidelity is to use WAV files for your instrumentals, stems, and final masters. If you are a vocalist who licenses or purchases instrumental music from a producer, make sure you are only licensing or purchasing WAV files. Sure, they may cost a little more than MP3s, but the quality of your final product coming out of the studio will dramatically increase. If you are a producer who makes your own beats, render your stems down to 16-bit/44.1 kHz WAV files, or better yet 24-bit/44.1 kHz. If you are an engineer or an aspiring engineer, always render your final stereo mixes down to a 16- or 24-bit WAV file.
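As a concrete illustration of those render settings, here is a minimal sketch using Python’s standard-library wave module to write a mono 16-bit/44.1 kHz WAV file. The filename and test tone are purely for demonstration; in practice your DAW’s export dialog does this for you.

```python
import math
import struct
import wave

SAMPLE_RATE = 44_100  # 44.1 kHz, the CD-standard rate discussed above

def write_test_tone(path, freq=440.0, seconds=1.0):
    """Write a mono 16-bit/44.1 kHz LPCM WAV containing a sine test tone."""
    n_frames = int(SAMPLE_RATE * seconds)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)            # mono
        w.setsampwidth(2)            # 2 bytes = 16 bits per sample
        w.setframerate(SAMPLE_RATE)
        frames = bytearray()
        for i in range(n_frames):
            # Half-scale sine so there is plenty of headroom.
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
            frames += struct.pack("<h", sample)  # little-endian 16-bit PCM
        w.writeframes(bytes(frames))

write_test_tone("tone.wav")
```

Every sample is stored at full precision, which is exactly what the MP3 encoder would throw away.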

So who do we blame for this whole MP3 fiasco? I put the blame squarely on the online digital music retailers for not offering a WAV option, or for charging too much for it. Shame on you guys. Every retailer that sells music digitally should also include a standard 16-bit/44.1 kHz WAV option. If server space is the issue, then they should sell WAV files at a reduced price and drop the MP3 format altogether.

And we should also blame ourselves for purchasing music of inferior sound quality. I cannot exclude myself from this category, as I am guilty of occasionally buying MP3s for my music collection when I am strapped for cash. However, I declare that from this point forward I will only purchase WAV files for my collection. Music was made to be heard, and listening to a WAV file gives the listener the best chance to hear the music for what it is.


When it comes to Hip Hop & Rap music, many music aficionados like to think of them in a negative context. Their argument, that these genres aren’t really music because they’re absent of any melodic structure and theory, could hold true if you were alive during the Baroque period. But what they forget is that rhythm is just as important as melody. Rhythm is what drives melody and harmony; it is their foundation, their backbone.

Hip Hop & Rap music, driven largely by drumbeats and rhythmically spoken word vocals called “raps”, is an exploration into the sound and syncopation of rhythm. The drumbeat usually provides the basic rhythm of the song, while the spoken rap provides intricately syncopated rhythms over the beat. Understanding and appreciating that these two core elements are central to the hip hop and rap genres is vitally important to the correct approach and methods of recording these styles of music.

While it’s true that recording Hip Hop & Rap may not be as complicated as recording a live jazz ensemble or rock band, the process of recording a good rap vocal is complicated in its own right. It is not just about setting up a microphone and headphones and then pressing record. First, the room in which the rapper is recorded must be taken into consideration. The sound or ambience of a room can have a major effect on a recorded vocal. When a rapper is performing, the sound waves coming from his or her voice bounce and reflect in many different directions around the room. The size of the room and the materials it is built from determine how long those waves reflect or reverberate around the room. This affects the overall tone of the recorded vocal, sometimes pretty dramatically. A room with a lot of reverb, or a “wet” room, isn’t preferable for recording good sounding rap vocals; it tends to smear the sound of the voice, dulling the rhythmic excitement and clarity of the performance. Recording rap vocals in a room without reflective or reverberant qualities, also known as a “dry” or “dead” room, is optimal for capturing a good rap performance.

The next thing to consider is the microphone that will be used to capture the performance. The microphone, which is a transducer, is the most important piece of the recording chain. It is the conduit that allows the rapper’s voice and message to be heard anywhere, anytime. Imagine if you could buy yourself a pair of ears: would you buy a cheap or average pair? No, you would get the best ears you could afford so you could have the best hearing possible. The same can be said for microphones; you want the best possible so the listener can really hear you. A good vocal microphone should provide a crisp and smooth high-frequency response, a warm and present midrange, and a gentle low-frequency response. Condenser microphones such as the Neumann U47, AKG C12, Telefunken ELA M 251, or the Audio-Technica AT4060 (which is what we use) are great not only for Hip Hop and Rap but for Pop, Rock, and Jazz, among many other styles of music.


Recording Studio For Rap And Hip Hop In Chicago

The third thing to consider is the power and preamplification source for the microphone. Good condenser microphones like the U47 or AT4060 require either a dedicated power supply or phantom power, which is supplied by your preamplifier. However, the best way to power your microphone is with a dedicated power supply. Good power supplies are built from high-quality parts and electronics, which relay a clean and steady source of power to your microphone. Power supplies are the better option because their sole job is to provide power to the microphone and nothing else. With a preamplifier, phantom power relays electricity to your microphone from the same circuitry that powers the amplifier, so the power is not as clean and reliable because it is shared among the different circuits within the preamp. It’s kind of like moving water through one hose and then splitting the flow off into four hoses: water will still flow out of all four, but the pressure will be less steady than if it flowed out of just the one.

Now that we understand the importance of powering your condenser microphone properly, it is time to discuss the preamplification of the microphone signal into your recorder. Microphone signals are often too weak to be transmitted to units such as mixing consoles and recording devices with adequate quality. A preamplifier raises the microphone signal to line level (the signal strength those devices require) by providing stable gain while preventing induced noise that would otherwise distort the signal.

Even though the microphone is the source of most of the coloration in a recorded vocal track, the preamplifier also affects the sound quality of the signal. A preamplifier might load the microphone with a low impedance, forcing the microphone to work harder and changing its tone. It might also add coloration through other built-in characteristics or features such as vacuum tubes, equalization, and dynamics control. The preamplifier we use here at Studio 11 is the Manley VoxBox, a vacuum tube preamp with both equalization and dynamics control. In the 18 or so years we’ve been recording Rap and Hip Hop, we haven’t come across a better preamplifier for rap vocals. It delivers a warm sound with a good midrange around 1 kHz, and its limiter is great for controlling rappers whose performances are rather dynamic.

The last step is choosing the recorder that will capture the performance. Back when Hip Hop & Rap first found their way into the music scene across Chicago, the rapper’s performance, beat, and music were all recorded to analog tape in a professional studio. This helped provide the rich, warm, dirty sound that characterized early Chicago Hip Hop artists like Common, Crucial Conflict, Ten Tray, and Twista, to name a few. As time progressed, digital formats like the Alesis ADAT and the Tascam DA series began to take over the recording market because of their affordability compared to analog tape machines. Smaller localized studios began to open up offering cheaper rates than the larger professional studios, which in turn gave more artists the chance to get into a studio and record their projects.

By the early ’90s, companies like Digidesign, Synclavier, and Sonic Solutions began to develop software and hardware for recording and editing audio on a computer. At first, these DAW systems were expensive and could only record and edit. But by the late ’90s, Digidesign’s flagship system, Pro Tools, started to become the industry standard by allowing engineers not only to record and edit multi-track audio in real time, but to mix, master, compose, and arrange as well. No other system offered all these options with the reliability and stability Pro Tools could offer at the time. Also, projects and sessions became completely recallable, which had always been a tedious chore with analog tape.

Now in 2014, the market for quality DAWs has expanded thanks to the affordability of powerful computers. Virtually any audio software out there can record audio from a microphone source. However, Pro Tools remains the go-to for recording multiple tracks at once with near-zero latency while using plug-ins and other real-time features. No matter what professional studio you go to, it will feature a Pro Tools system 99 times out of 100. It is the industry standard when it comes to recording and editing Hip Hop & Rap.

Another small thing to consider is the headphone mix the rapper will reference while recording. Remember, microphones pick up all sound, no matter how loud or subtle. You must be careful with how loud the headphone level is when recording, as the microphone will pick up the residual sound, or bleed, from the headphones. This bleed can add up in volume when recording multiple tracks of vocals and can alter the sound of the vocal over the track by creating phase issues in the midrange, not only in the vocal but in the track itself. It is quite common for rappers to prefer a louder headphone mix so they can ‘get into’ their performance. A good way to achieve a loud headphone mix while reducing phasey headphone bleed is to have the performer wear a stocking cap over their headphones. This keeps the seal of the headphones tight to the ears and dulls any bleed that escapes. The recorded performances will be much cleaner and thus easier to work into your mix.

The last thing to discuss is the beat the rapper will be performing over. For the past decade or so, most rappers have recorded their performances over an instrumental 2-track that they purchased or licensed from an online beat store or a producer. Every now and then you get lucky and a client brings in the individual stems, or track-outs, of the song they are performing over. These individual tracks are usually produced and rendered in production software like Ableton Live, FL Studio, or Logic. The problem with these rendered tracks is that, if they were created inside cheap production software, they can take on a cold, lifeless, digital sound. Programs like Reason, GarageBand, and Acid are notorious for rendering files that do not sound as good as they did in the session. If there is extra time in our clients’ sessions here at Studio 11, we like to take these digital track-outs and transfer them to analog tape, which brings the files back to life by adding warmth, saturation, and even harmonics. Yes, transferring to tape might add a little noise, but noise isn’t always a bad thing. Sometimes it can be the difference maker.


We all know the classic adage “you get what you pay for”, and nothing could be more true when it comes to studio time, especially cheap studio time. While searching for cheap recording studios in Chicago, there are three things you should consider.

First off, we must address the fact that we are in Chicago, the nation’s third-largest city. For those of you who live in Chicago, you know that it is not a “cheap” city by any means, so factor this into what you consider cheap studio time. Recording studios must pay rent in accordance with their location, so it is fair to assume that a cheap recording studio downtown may be pricier than a cheap recording studio in a low-profile part of Chicago. Location, location, location, as they say…

Secondly, mind the quality of the equipment, or “gear”, the recording studio offers. A studio with high-end mics, preamps, digital systems, and post-production equipment offers an amazing improvement in sound quality over a budget studio with cheap, low-cost recording gear. For just 10 to 20 dollars more per hour, you can access the same quality of gear that major label records are recorded on. Your music isn’t cheap, is it? Why cheap out when it comes to sound quality? You will regret it.

Lastly, engineering skill is everything in audio. It’s great that young, budding recording students set out to build start-up studios with cheap equipment, but these engineers cannot match the artistry and sound offered by seasoned professionals who have earned major recording credits. For a little more money, you can give your music an enormous improvement by hiring world-class engineering talent. The fact is that a professional engineer works much faster than a novice, and this affects the bottom line of your recording costs. You may be paying for three hours of recording time at a “cheap” studio for a job that should take only one hour in the hands of a professional.

Here at Studio 11, we offer the highest quality equipment, the best engineering talent, and the fastest turnaround. You can’t beat that. Call us at 312.372.4460 or email studio11chicago@gmail.com for a free consultation, or use the contact form found HERE. Our rates start at $65/hr.
