The Rough Mix: The Art and the Risk

As an audio engineer, the start of a recording or production project is usually one of the best parts: the teamwork is fresh, everyone involved is optimistic, and the big-picture focus is challenging, invigorating, and most of all fun.

The end of a project, however, is usually one of the most taxing and least gratifying stretches. The scrutiny shifts to each individual involved; editing brings endless detail and waffling; and our computer systems allow infinite possibilities for mix tweaks.

Strangely enough, the pivotal moment where a project shifts from collective ideology to individual artistic vision usually arrives during rough mixing.


In an ideal audio engineer's world, daily rough mixes would be an essential step in moving a project forward. That isn't always possible, though, due to session time constraints, client budgets, release dates, and the like. The limited number of rough mixes made during the project therefore becomes even more important to its overall scope. Each rough mix must represent a significant advancement in the overall vision of the project, inspire future creativity within it, and help determine how much more time is needed before final mixing begins. Here are a few guidelines to keep in mind when you reach the rough mix stage.

The first rough mix of a project is sometimes referred to as the 'raw mix' or 'organic rough,' because it offers a unique opportunity to hear what the recorded tracks really sound like together, with natural dynamics and unused space. When it comes time to make these first rough mixes, keep in mind that they should be nothing but a representation of the initial recorded performance. That means focusing only on the balance of all the recorded tracks together, with very few edits and minimal EQ or processing. No overdubs, either. The rough mix is raw and unproduced, but if recorded properly, still hi-fi. By opening up the imagination, the rough mix helps the engineer, producer, and especially the artist contemplate what to do next in the song and how to fill any open space.

As the process of overdubbing begins, the next batch of rough mixes will understandably increase in complexity. It is in this second round of rough mixes that you can, and really should, try things out. Take some f'n chances. You never know when you might discover something new and useful that will take what you are working on to the next level. Experiment with different outboard combinations, try a new plug-in, create an over-the-top effect or two. And the good thing is, if it doesn't work out, you can just get rid of it. Knowledge is power!

After all the experimental forays have been attempted, successfully or not, it is time to do some rough mixes that are, in effect, dressed-up rehearsals of what the final mix will become. Using the collective knowledge from all the experimentation as well as the overdubs and initial recording, I fashion together my most glorified mix of the project, all while attempting to locate and resolve any problems, from poor recording to bad performances to bad editing. These mixes are then aided by having the artist or band listen to them on their own preferred sound system, in hopes of meaningful feedback.

One thing to keep in mind: the rough mix process doesn't always work out so well for every client. When making that first rough mix, it's always best to ask whether the artist or band would like to hear it raw and organic or polished up a bit. Some current musicians and artists wouldn't even think of listening to their part or song without at least some kind of tuning, quantizing, EQ-ing, or manipulation. They would likely find the organic rough mix faulty to the point of questioning the engineer's proficiency and qualifications. They would never question their own. If your recording methods rely on drum replacement, comping and editing, or virtual amplification, you might have to do a fair amount of work before presenting the first rough mix to your client.

In the second stage of rough mixing, where it's good to experiment and create options, the liabilities can sometimes be quite overwhelming. Remember, even though experimenting and over-effecting is fun for us because we're engineers, to an artist who fears hokey exuberance and manipulation of their sound, it can be an all-out declaration of war. Once again, it is always good to check with the artist or band member before experimenting with or radically changing their sound, especially a vocalist. For some vocalists, having their voice washed in effects can really chap their ass, driving them to question your judgment, which in turn prevents you from making simple recording suggestions like 'you should double this or stack that.'

By the time the third-stage rough mixes are complete, they had better be almost as good as what one would hear on the radio. Having a near-perfect rough is just the beginning; it is now also about 'temp mastering' (EQ, compression, expansion, limiting) the rough mixes so artists don't throw a fit over low levels compared to finished, mastered albums.

So what exactly is the flip side to all of this? If the rough mixes are too good, they might get prematurely distributed, whether through an anxious artist posting them on the web, unintended airplay, or leaks via insider moles. If people think a rough mix sounds finished, it might get overexposed and diminish whatever chance of success the final mix could have.

As important and fun as rough mixing can be, the modern audio engineer should always make sure his or her approach and overall intentions are acceptable to everyone involved in the project. A rough mix will sometimes create disharmony and strong opinions, but that dialogue and concession is exactly what is required to bring about the best in our art. As time passes and you build a stronger reputation as an audio engineer, people will give more merit to your opinions and experimentation, which at the end of the day means your job as an engineer is just going to keep getting better, and better, and better.

Is College Necessary for a Career in Audio Engineering?

Today I am going to address a very common question I get here at Studio 11 almost daily: is it worth going to an audio school to become a recording and mixing engineer? With so many young people discovering a passion for music engineering and production, the natural course of action is to want to work on music full time, as a job.

Just like in many professions, it is believed that a degree is needed not only for the knowledge and experience, but for the resume as well. The idea of going to school to learn how to use mixing consoles, outboard gear, and microphones is very alluring. So if you have played around with these questions about your future profession, let me give you a few quick thoughts on the subject.

You Don’t Need A Degree For Sound Engineering

I will go ahead and get straight to the point. You really don't need a degree from a university or trade school to earn a living recording, mixing, or producing music. As a matter of fact, it wouldn't surprise me if most of the top audio engineers out there never went to school for engineering. Some might not have gone at all or, like myself, dropped out when they realized a degree wasn't needed to land a good job as an engineer.

Audio engineering and production is an art form, not just a field of study. The most common and effective form of training for a young aspiring engineer is an internship, working with a master audio engineer or producer in a professional studio environment. As an intern, your job is to watch and learn the master engineer's every trick and move so you can eventually assist on sessions properly. This is how it's always been done. What really matters most in the audio engineering business is experience and connections. Let me say that again: experience and connections.

You can go to pretty much any studio around the world today, obtain an unpaid internship, and begin the long and hard process of becoming an engineer. Yes, it is true you'll probably run a lot of errands and sweep a lot of floors, but the trade-off is that you might get to assist the engineer in a session, do some editing, patch cables, set up microphones, and so on. It's how most of us began our careers as audio engineers. I, just like many others before me, didn't need any kind of degree to begin the process.

Before you conclude from this blog that I am against school, you should know I went to college for a year and a half to learn audio engineering. It was a five-year program, so even with the little I learned in that short time, it was an invaluable experience. The beginning of the program focused on the physics of acoustics, music theory, and conceptual understanding. I probably wouldn't have been taught these things in my internship, so I am glad I got the chance to learn them in school. So, if you have the money and get accepted into an audio program at a major university or trade school, you should go!

On the other side of the fence, though, if you don't have the money, it's hard to justify going into large debt for a degree in audio engineering. The messed-up thing no one tells you is that even with that fancy degree, you still have to pay your 'dues' and get an internship at a studio somewhere, the same kind of internship you could have gotten without going to school. Chances are you'll be the smartest and most experienced intern at the studio, though, which means you might get more opportunities to sit in and assist on sessions.

Now, whether or not you go to school, you don't necessarily have to work your way up through the ranks to become an engineer. You can open up your own studio any time; all you need is a good source of capital to purchase some gear. So, after my internship ended, I decided to open up my own little space out of my apartment in Chicago. It was small, and the neighbors yelled at me every day about the noise, but I recorded, edited, and mixed many songs and albums for bands and artists of all types. I learned a great deal, met lots of great people, and most importantly, made money.

So you see, there really isn't much stopping you from starting a career as a recording or mix engineer. The biggest hurdle is building a client base. A good way to go about this is to do some free work to build up a client portfolio, put it up on a website, and promote your services. Then, little by little, start charging for your work.

At the end of it all, education really is a great thing. I am thankful for everything I learned from my professors during my time at school. But looking back, it wasn't something I needed to get where I am today. I learned 99% of everything I know about audio engineering by watching other engineers and producers work, and through experience. So I guess what I am saying is, if I could do it, then you can do it. All it takes is a lot of patience and a lot of passion. Pretty simple things compared to becoming a doctor or a lawyer.

The Click Pop Factor

Distortion, clicks, and pops can destroy an otherwise good recording session, and 99% of the time these problems are caused by something simple and easily avoidable. Here are a few quick things to check if you ever run into them.

Update Firmware, Software, and Drivers

The first thing you must do is make sure your system has been completely updated. Let's say you just updated your OS to OS X Yosemite and installed the drivers from the disc that came with your audio interface. You should always go to the manufacturer's website and check for the most current software and drivers. With how quickly things move in the computer world, the installation disc that ships with your hardware won't always carry the latest driver, even if you just purchased it.

It is also wise to make sure any audio applications, software synths, and plugins you own are up to date as well. Always go check the manufacturer’s website and update according to what they say in their instructions.

Buffer Settings

The next thing to check after you have updated your software is your buffer settings. Changing the buffer size is one of the simplest fixes for these types of problems. Understanding why you might need to change it is a bit more involved and will be discussed separately.


For the simplest answer: if you are getting distortion, pops, or clicks in your recording, raise the buffer setting in your audio application. In Logic, click Preferences > Audio. On the Core Audio tab you will see "I/O Buffer Size." Whatever the buffer size is currently set to, move it to the next highest value. For example, if your buffer size is set to 32, change it to 64; if it is set to 64, move it up to 128, and so on. In GarageBand, it is a bit easier: click GarageBand > Preferences, then click the Audio/MIDI icon.

Change to “Maximum number of simultaneous tracks/Large buffer size.”

Pretty much every other audio application will have very similar settings. If you are not sure where to find these settings, contact the software manufacturer or look it up on Google or YouTube. If raising the buffer doesn't fix the problem, the next step is to take a look at any other USB or FireWire devices plugged into your setup.
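To see the trade-off involved in raising the buffer, a little arithmetic helps: one-way latency in milliseconds is the buffer size in samples divided by the sample rate, times 1000. Here is a minimal sketch of that math (the 44.1 kHz sample rate is just an illustrative assumption; substitute your own session's rate):

```python
# Illustrative sketch of I/O buffer latency math, assuming a 44.1 kHz session.
# Bigger buffers give the CPU more time per block (fewer clicks and pops)
# at the cost of more monitoring delay.
SAMPLE_RATE = 44_100  # samples per second (assumed)

def buffer_latency_ms(buffer_size: int, sample_rate: int = SAMPLE_RATE) -> float:
    """One-way latency in milliseconds for a given I/O buffer size."""
    return buffer_size / sample_rate * 1000

# Step through the common buffer sizes, smallest to largest:
for size in (32, 64, 128, 256, 512, 1024):
    print(f"{size:>5} samples -> {buffer_latency_ms(size):5.2f} ms")
```

Stepping from 64 to 128 samples, for instance, roughly doubles the monitoring delay from about 1.5 ms to about 2.9 ms, which is why raising the buffer one notch at a time, as described above, is a sensible strategy.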

Other USB or FireWire Devices

Other FireWire or USB devices connected to your computer can quite often cause problems if your audio interface is FireWire or USB. Conflicts with other audio interfaces, cameras, and the like usually cause the biggest problems, but external hard drives can be culprits as well.


If you are having problems with distortion, pops, and clicks and you have other devices connected to your computer, try the following:

1. Turn off your computer and unplug all USB and FireWire devices except for your main audio interface.

2. After you turn your computer back on, play something from iTunes and make sure the audio you are hearing sounds clear.

3. Reconnect your other USB or FireWire devices one at a time, trying your session again after each, to see which device reintroduces the problem.

4. If you are still having problems it might be time to bite the bullet and reach out to the manufacturer’s tech support.

It is best to keep all other devices disconnected from the computer while working on audio. The new Macs on the market are amazing and extremely powerful, but if your desire is to record professional-quality audio, you might not be able to connect every USB and FireWire device you own; there is a limit to what your computer and DAW can handle. If that is not your goal, connect all your devices back up and just keep doing what you are doing.

7 Commandments of Audio Engineering

If you are wondering what you have to do to break into the music industry as an audio engineer in Chicago, have no fear. Recording in Chicago is no different than pretty much anywhere else on the planet, except for the language. Here is a list of skills you can aim to develop to position yourself as a top engineer. Notice that four of these are what can be called "base skills," meaning they are imperative for any job, in the music industry or elsewhere. The others are "job-specific skills" and relate directly to your work in the studio.


1. Ability to read, write, and follow directions. Why is it so critical to follow instructions in a recording studio? For starters, you could fuck up the gear. You are also working with clients' master recordings, which may represent thousands of hours of time and financial investment. In Chicago, some of those clients might not be too happy if their masters get messed up, so it could potentially mean your life. Moreover, following instructions shows you are reliable and dependable, which in turn gives the head engineer or manager confidence that you can be developed and mentored to properly accomplish client requests. Following directions is crucial to working successfully in any recording or production studio, let alone life.

2. Communication. There have been many times when I was engineering a session and the artist or producer turned to me and said, "It just doesn't sound right. I'm not sure what it is, but it is not grabbing me." We often spend long hours trying to figure out how to understand our clients; in a way, one could say we are quasi-psychologists. The ability to communicate clearly is crucial to being as productive as possible in the studio. Many delays and fuck-ups in the studio result from a lack of, or breakdown in, communication. Knowing when to speak out and when not to comes over time, through patience, practice, and understanding.

3. Ability to stay cool and calm. Musicians can get pretty emotional in the studio; in essence, they are pouring their emotional well-being into their performance for all to hear. A good engineer must know how to stay calm and reserved when a musician voices frustrations. I have seen many sessions where fights broke out in the control room between band members, or between band members and management. These people have actually swung at each other, which generally is not helpful to the creative process. Remember, your job is to keep the project on track at all times, so it is important to always remain calm and relaxed, especially in Chicago. May the force be with you.

4. Basic computer knowledge. How much do you really need to know about computers to become a good recording engineer? Many ambitious producers and sound engineers have a good deal of experience operating recording and editing software on a computer, and it's certainly a bonus: the more you know about computers, the more valuable your service will be in the studio. It is important to master the basics, such as word processing, data entry, and spreadsheet functions for simple math, as well as the computer's recording and production software. A basic computer course at your local community college can teach you these fundamentals. It's also important to know both the Macintosh and PC platforms: Macs more so for composing, recording, and mixing; PCs for business management and data entry. Initially, all the best computer editing software for sound and music was found on the Mac, but over the last couple of years the PC has been making strides in the audio department, and many programs that were once exclusive to the Mac are now available on the PC as well.


5. Critical auditory skills. If you haven't heard or experienced sound in an acoustic setting, you might not know what you are listening for, which can cause you problems as an engineer. You've got to use your ears and really listen to the sound or music. It is important to get out into the real world and experience every type of music there is in a concert setting, from country to jazz, rock to big band, opera to blues. Remember, musical recordings are really just sonic paintings. To be a competent recording engineer, you have to understand what instruments sound like naturally, by themselves and together in ensembles. Treat the time spent developing these skills as you would homework, and go out as much as possible, because it is important to hear it all. You never know when a client will step into the studio with an instrument, sound, or musical skill you are not familiar with. That unfamiliarity can lead to poor engineering decisions, which in turn lead to poor or undesirable recordings. That is why it is important to know how each instrument sounds naturally.

6. Audio aptitude. It is important to develop a comprehensive knowledge of audio: level, signal flow, phase, the frequency spectrum, microphone selection and placement, and acoustics. Whether you went to a reputable audio school or learned on your own, it is important to learn and understand the basic concepts of how to make a recording, do overdubs, edit correctly, manage a mix-down properly, and master. Even knowing the process of duplication and distribution to stores and online retail outlets doesn't hurt.

7. Studio Chi. The overall tone or vibe an engineer brings into a session is vitally important to the energy and creative workflow in the studio. Some of the best engineers out there are the ones who create a climate conducive to positive, creative work. The equipment doesn't mean much if the vibe of the session is no good; even a half-million-dollar recording console isn't doing much good if a client walks in and doesn't feel right. When artists are babied or pampered in the studio, they tend to lose their inhibitions, open up, and perform much better overall. A good engineer helps generate that vibe in the studio in order to capture it and bring it out in the song.

Now you know the basic skill set needed for a good career in the field of audio engineering. The first six you can learn in school, whereas the seventh takes time and experience. It's important, not only as an aspiring engineer/producer but also as a musician, to sit in on sessions and watch how other engineers do their thing. Internships at major recording facilities are a great opportunity to see how things really work in a professional studio. After a while, you will find that every session and client is different, as is what it takes to create the right mood and vibe for each one. At the end of the day, you'll probably find yourself playing psychologist as much as engineer, producer, songwriter, mentor, friend, and fan. The list can go on and on.

Day in the Life of a Session

You have come up with a great new song. Overcome with a sense of pride and achievement, you ponder what to do next. After shoddily recording a few demos on your laptop, you decide it is time to give your music the love it deserves: the professional treatment of a commercial recording studio.

Once you've trimmed down your list of local studios, you choose a Chicago recording studio that is warm, economical, and operated by people who go out of their way to make your project the most important one. On the day of the session, your nerves start to unsettle as you make your way to the place that will help you immortalize your song. Upon arriving, the chill vibes and pleasant nature of the staff and engineer have a calming, reassuring effect.

Half an hour later, everyone is in position. The meters bounce and glow. Within a few minutes, the nervousness you entered the studio with turns to jubilation as you realize that the sound you are hearing is coming from you. It is your sound. And, paired with the proper recording environment, gear, and engineers, what started as a simple idea is becoming a really good song.

Now that the recording of the music and lyrics is complete, the engineer tells you he will need some time to mix your song so that it has that "radio" shine and is ready for distribution through iTunes and other internet stores. You listen to the sound coming out of the control room speakers. Faders are raised and lowered, knobs are tweaked, the audio engineer massages the computer keys and bends the software to his will. In a little less than two hours he plays you the result of his and your combined efforts. As a smile spreads across your face, only one word comes to mind: "WOW!" Going to Studio 11 to record your new song was the best decision you could have made.

20 Valuable Game Changing Studio Lessons

Recording, mixing, and mastering hip hop, rap, and other kinds of music is no doubt a talent and an art form. Developing this skill takes time and requires many mistakes, experiments, and life lessons. To truly benefit and prosper in audio engineering, one has to have a hunger to work through and take in the vast amount of knowledge out there on recording, mixing, and mastering.

Once you begin sifting through that knowledge and applying it in a real-world setting, you will find there is a big difference between knowing something and actually getting it. The beautiful thing is, once you finally get something, mastering that technique and others becomes exponentially easier. You will find that recording, mixing, and mastering isn't really a job anymore; it is just a part of life.

Over the last 16 years, I have had to learn many hard lessons in the world of audio engineering. A lot of the time I thought I understood certain techniques and how to apply them, but I was often wrong, or only partially correct. Some of my most memorable times in the studio were when I finally 'got' a certain recording or mixing technique. You could say these were my 'a-ha' moments.

1. Learning everything there is to know about the hardware and tools at my disposal.

2. Compression: Much can be said on this subject, but one crucially important thing is getting the attack and release times right. Compression can really lift a performance, or it can shamefully destroy it.

3. The day I realized that pretty much anything in the studio could be automated in some form or another.

4. Low and High Pass filtering is truly my friend.

5. The first time I turned off my computer screen to listen back to a mix. Blew me away how much easier it was to listen, identify, and make changes to the mix.

6. The first time I recorded and mixed in a professional acoustically treated studio. The amount of detail and separation I could hear in the frequency spectrum almost startled me.

7. How simply cutting a little 275-375 Hz on most close-mic'd tracks can remove boxiness and really bring out detail.

8. Getting rid of frequencies, or subtractive equalization, is so much better than additive equalization. It's just easier and more natural to take out what isn't needed than to artificially add what is.

9. Hearing live drums mic'd through a stereo pair of C12s and PZMs. I finally understood where the life and dimension of a recorded drum performance come from.

10. Discovering that the more plugins I use in a mix, the more digital and artificial sounding the mix will become.

11. Distortion is a form of compression and a good way to add harmonics.

12. The first time I threw up a quick mix of raw audio tracks instead of attempting to dial in the perfect sound on every track. It increased the overall quality of the mix while cutting down average mix time.

13. It's always good to get feedback, even if it's from somebody without any musical or audio engineering experience.

14. Getting stuck in a mix, zeroing the faders, trashing all inserts and sends, and then pushing the faders back up again. A valuable learning experience and a test of the ego.

15. Dynamic equalization via sidechain compression. The bee's knees!

16. Realizing that knowing how you want things to sound in your mix is so much more important than just knowing cool mix techniques and tricks. The tricks can sometimes help you get there a little faster though.

17. Musical arrangement is vitally important to the outcome of a mix on a song. It’s where the song can really be made or destroyed.

18. “Fix it in the mix” is a term that doesn’t always apply to every situation. Sometimes it is faster and easier to just re-record something if it is not right.

19. Parallel compression allows for smoother, natural dynamics overall and less aggressive compression individually.

20. When, after what seemed like centuries of recording amateur artists and bands, somebody of superstar status steps up to the microphone and shows how it's really done. Wow!
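Lesson 19 is easy to hear but also easy to sketch in code. Below is a minimal, illustrative NumPy model of parallel compression: blend the untouched signal with a heavily compressed copy, so quiet material keeps its natural dynamics while peaks are only partially tamed. The threshold, ratio, and blend values are hypothetical placeholders, not recommendations, and a real compressor would also have attack and release behavior that this static sketch omits.

```python
import numpy as np

def compress(signal: np.ndarray, threshold: float, ratio: float) -> np.ndarray:
    """Simple static compressor: gain-reduce samples above the threshold."""
    mag = np.abs(signal)
    over = mag > threshold
    out = signal.copy()
    # Above the threshold, output level grows at 1/ratio of the input rate.
    out[over] = np.sign(signal[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

def parallel_compress(signal, threshold=0.3, ratio=8.0, blend=0.5):
    """Mix the dry signal with a hard-compressed copy of itself."""
    wet = compress(signal, threshold, ratio)
    return (1 - blend) * signal + blend * wet

# Quiet samples pass through untouched; the loud peaks are only
# partially tamed, preserving the feel of the original dynamics.
x = np.array([0.1, 0.9, -0.7, 0.2])
print(parallel_compress(x))
```

Compare this with running `compress` alone: the peak at 0.9 would drop all the way to 0.375, whereas the parallel blend leaves it at 0.6375, a gentler overall squeeze than compressing each track aggressively on its own.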

Good drum samples and where to find them

Are you a beat producer who always has trouble finding good drum sounds or patches to use in your productions? Are you tired of using cheap 'out of the box' sounds? Don't worry: it is a problem every producer has had at some point in their career. There are literally millions of drum samples out there, free and for sale, and it can be quite overwhelming finding those perfect sounds.

The good news is, if you have realized you have this problem, then you are already on your way to becoming a better producer. So congratulations. What people who claim to be producers forget is that there is more to producing than just writing the song; there are the individual tracks that make up the song to consider, and how each one sounds, not only by itself but in relation to the song.

Nowadays, it is important for producers to approach their craft with an audio engineer's ears. This doesn't mean you have to become a professional engineer to make good beats, though it helps. It means learning to recognize when a sound or a track isn't working in a song and what to do to fix it. At the end of the day, putting care into the way each individual track sounds will make for a better-sounding song overall. Your beats will be easier to mix, will sound smoother and more musical, and will probably sell better too.

A good rule of thumb: if a sound is getting in the way or lost in the mix, replace it with another sound that doesn't. Simple enough, but this can take time and patience. Remember, just because a synth patch or drum sample sounds cool on its own doesn't mean it will work with the rest of your tracks. Choose your sounds by listening to how they interact with the other tracks in your arrangement. Never force a track; that will never work.

Now, one of the best places to find good drum samples is on your favorite records. Yes, that's right, your favorite records. Some would say that's stealing, but we say that's just sampling. So don't be scared. Sampling drums off of records has been done since the dinosaurs roamed the earth. Not dinosaurs like the T. rex, but dinosaurs like Bruce Swedien and Bill Porter. The best thing about sampling drums off a record is that the drum sounds you are sampling already sound really, really good. Using good drum sounds that don't need much treatment lets you, the producer, better layer additional tracks and sounds into your song. All your tracks will pocket better, and you will make your own or your mix engineer's job much easier when mixing begins.

Equalizing With Your Eyes Instead of Your Ears

So, we've all done it at some point or another. Whether it's looking at the transients on waveforms to match tempos, watching the meters too much, or the biggest culprit, watching the graphic display on an EQ, we have all used our eyes way too much when mixing. How can one not, with all those pretty graphics and curves?


Waves Renaissance EQ 6 Parametric EQ

This little black EQ has been a big go-to of mine for over 10 years now. I would have to say I use it on close to 98% of my mixes.

I’m not going to use this time to diss the Renaissance EQ; it just happens to be one of my main DAW EQs of choice. It’s simple to use, I’ve always liked its sound and UI, and it uses very little processing power. And I’ve always liked seeing the results of what turning the knobs will bring. For some reason, I have loosely assumed that the stranger the graphic display on an EQ looks after I use it, the better job I did at equalizing the sound. This probably stems from watching engineers mix ‘ITB’ back in the early days and noticing that the EQ curves on their graphic displays always looked rather strange. My brain almost combines the two actions (listening and looking at the curve), probably to my disservice. I’ve found it’s really hard not to look at times. Even though I started my career on analog equipment, I’ve worked the majority of it on a DAW, Pro Tools more specifically. I’m used to that workflow and have come to find it familiar, like home.


Recently, I was mixing a song for one of my regular clients and the mix just wasn’t coming out the way I’d hoped. The REQ6 has always helped me get that modern vocal sound for hip hop, rap, and trap music, but this project isn’t quite that. It called for a more old-school approach. The mix needed a warm, personal, underground kind of sound. That’s probably not the best way to describe it, but you get the idea.

To keep the story to the point, my traditional approach wasn’t working, so I decided to go back to the drawing board. And what does that mean, exactly, in the world of digital audio engineering?


So I decided to start the mix over completely from scratch. Over my last 16 years of engineering, I’ve learned that it’s OK to start over sometimes. Starting over lets you put your head in a different space, try new things, and feel less restricted, which is important when mixing a song. Usually when I make this decision, it’s the right thing for the mix. So what was my new approach?

Since I wanted to go for a more old-school approach to this mix, I needed to emulate the workflow of that time period. So, just like on a real analog console, which has the same equalization and dynamics on every channel, I decided to use the SAME signal chain on every channel. My trusty old Waves SSL 4000 Channel Strip seemed like a good fit for this approach.


I started the mix over, and after a few hours it began to sound the way I was hearing it in my head: smooth, fat, and personal. The mix was much warmer and less digital-sounding than my first attempt. Overall, it was more balanced and sounded pretty analog for a 100% digital ‘ITB’ mix. This approach forced me to really LISTEN, not listen through visual cues. Now, just because this approach worked this time around doesn’t mean it always will. Each song and genre is different, as are the requirements for what makes a good mix. What I did learn, though, is that I had become lazy when using equalizers with graphic displays. At times I was using my eyes to EQ sounds, and since we can’t listen with our eyes, this was having the occasional adverse effect on my mixes.

So, if you are just starting your career as a mix engineer, may I highly recommend not getting stuck in the visual part of mixing ‘ITB’. There are times when it’s important to see what you’re doing, but it’s what we hear that makes the overall sound of the mix.

Phase Issues and How to Resolve Them

The problem of phase is a consistent issue for recording and mixing engineers alike. Even the smallest phase problems inside your song can ruin your music. They can make tracks sound empty or spectrally degraded, like something is missing. Phase issues on one track can lead to problems on other tracks as well. These problems, as severe as they can be, can also be easily avoided or fixed, but first it is essential to understand how they occur in the first place.

This article will cover what phase is, how it occurs, how it sounds, and some procedures for dealing with it.

What Is Phase?

I’m going to consult the all-knowing source they call Wikipedia to answer this question.

It says:

Phase in sinusoidal functions or in waves has two different, but closely related, meanings. One is the initial angle of a sinusoidal function at its origin and is sometimes called phase offset or phase difference. Another usage is the fraction of the wave cycle which has elapsed relative to the origin.

A less scientific definition provided by Sweetwater Sound is:

Audio waveforms are cyclical; that is, they proceed through regular cycles or repetitions. Phase is defined as how far along its cycle a given waveform is. The measurement of phase is given in degrees, with 360 degrees being one complete cycle. One concern with phase becomes apparent when mixing together two waveforms. If these waveforms are “out of phase”, or delayed with respect to one another, there will be some cancellation in the resulting audio. This often produces what is described as a “hollow” sound. How much cancellation, and which frequencies it occurs at, depends on the waveforms involved, and how far out of phase they are (two identical waveforms, 180 degrees out of phase, will cancel completely).

No wonder phase is such a confusing topic. At a quick glance, the definition is even confusing to me, but that’s why I’m not a professor. At the end of the day, how does this definition apply to you and me when we’re trying to make a record? This is where I could go on a rant about phase vs. polarity. Instead, I’ll try to break it down more simply.

Phase Vs Polarity

Let’s define things a bit more, starting with phase and polarity. These two words are often used interchangeably, but they are VERY different.

Phase is an acoustic concept that affects your microphone placement. Acoustical phase is the time relationship between two or more sound waves at a given point in their cycle, measured in degrees. When two identical sounds that are 180 degrees out of phase are combined, the result is silence; any difference in between results in comb filtering.

Polarity is an electrical concept relating to the value of a voltage, whether it is positive or negative. Part of the confusion between these concepts, besides equipment manufacturers mislabeling their products, is that inverting the polarity of a signal (changing it from plus to minus) is basically the same as making the sound 180 degrees out of phase.

In case these definitions went over your head: phase is the difference in waveform cycles between two or more sounds; polarity is either positive or negative.
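The distinction is easy to hear in numbers. In this rough sketch (plain NumPy; the sample rate and frequencies are made up for illustration), a polarity flip is just multiplying the samples by -1. For a pure sine that looks identical to delaying the wave by half a cycle, but once the waveform has harmonics, the two operations stop matching, because a fixed time delay shifts each harmonic by a different number of degrees:

```python
import numpy as np

sr = 1000                       # sample rate in Hz (arbitrary for this demo)
t = np.arange(sr) / sr          # one second of sample times

# Pure 10 Hz sine: a polarity flip and a half-cycle delay give the same samples.
sine = np.sin(2 * np.pi * 10 * t)
flipped = -sine                                  # polarity inversion
half_cycle = 0.5 / 10                            # half of the 10 Hz period (50 ms)
delayed = np.sin(2 * np.pi * 10 * (t - half_cycle))
print(np.allclose(flipped, delayed))             # True for a pure tone

# Add a 2nd harmonic: the 50 ms delay still flips the 10 Hz component,
# but shifts the 20 Hz component by a FULL cycle, leaving it unchanged.
wave = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
wave_flipped = -wave
wave_delayed = (np.sin(2 * np.pi * 10 * (t - half_cycle))
                + 0.5 * np.sin(2 * np.pi * 20 * (t - half_cycle)))
print(np.allclose(wave_flipped, wave_delayed))   # False: they differ
```

That last `False` is the whole point: a polarity button flips everything at once, while a time delay (acoustic phase) treats every frequency differently.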

What It Means to Be In and Out of Phase

When two sounds are exactly in phase (a 0-degree phase difference) and have the same frequency, peak amplitude, and shape, the resulting combined waveform will be twice the original peak amplitude. In other words, two sounds exactly the same and perfectly in phase will be twice as loud when combined.

When two waveforms are combined that are exactly the same but have a 180-degree phase difference they will cancel out completely producing no output. In the real world of recording, these conditions rarely happen. More than likely the two signals will either be slightly different, like two different microphones on the same source, or the phase difference will be anything other than 180 degrees out of phase.
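Both extremes described above are easy to verify numerically. This NumPy sketch (440 Hz and 48 kHz are arbitrary choices) sums a sine with an identical copy, then with a copy shifted 180 degrees:

```python
import numpy as np

sr = 48000                                   # assumed sample rate
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)           # a 440 Hz sine, peak amplitude 1.0

in_phase = tone + tone                                     # 0-degree difference
out_of_phase = tone + np.sin(2 * np.pi * 440 * t + np.pi)  # 180 degrees apart

print(np.max(np.abs(in_phase)))      # ~2.0: twice the original peak amplitude
print(np.max(np.abs(out_of_phase)))  # ~0.0: complete cancellation
```

Anything between those two extremes gives partial reinforcement or partial cancellation, which is where the real-world trouble lives.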

In cases where the waveforms are not exactly 0 or 180 degrees apart, or the waveforms differ, you get constructive and destructive interference, also known as comb filtering. The nulls and peaks of the waveforms don’t all line up, so some frequencies get louder and some get quieter. This is the trick to using several microphones on a single source.

For the purposes of this article, we’re only dealing with phase. Here’s the deal: sound travels at roughly 1,100 feet per second. That’s extremely, EXTREMELY slow compared to light. Since sound travels so slowly, you have to pay careful attention when recording. Why? Because if two signals are out of phase with each other, your recordings will sound thin and your music will not sound good.

The biggest problem area for phase is using multiple microphones on a single sound source, like drums or an orchestra. Depending on where each microphone is placed in relation to the source, the sound reaches each microphone at a different moment in time. When you listen to these microphones blended together, there’s a chance it will sound “hollow” and “thin” because the signals captured by the microphones are out of phase with one another.
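To put rough numbers on this (the one-foot spacing here is a made-up example): at the ~1,100 ft/s figure used above, a second mic placed one foot farther from the source hears everything about 0.9 ms late, and summing the two mics carves the deepest notch at the frequency whose half-period equals that delay:

```python
import numpy as np

speed_of_sound = 1100.0      # ft/s, the rough figure used in this article
extra_distance = 1.0         # hypothetical: mic B sits 1 ft farther from the source
sr = 48000                   # assumed sample rate

delay = extra_distance / speed_of_sound        # ~0.91 ms of extra travel time
first_null = 1.0 / (2.0 * delay)               # deepest comb-filter notch
print(round(delay * 1000, 2), "ms ->", round(first_null), "Hz")

# A tone at that frequency nearly vanishes when the two mics are summed:
t = np.arange(sr) / sr
mic_a = np.sin(2 * np.pi * first_null * t)
mic_b = np.sin(2 * np.pi * first_null * (t - delay))   # same tone, arriving late
print(np.max(np.abs(mic_a + mic_b)))                   # near zero
```

One foot of mic spacing puts the first notch around 550 Hz, square in the midrange, which is exactly why those blends can sound hollow.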

Really, at the end of the day, phase issues are nothing more than timing issues. The sounds from multiple sources are ideally meant to reach your ear at the same time, but sometimes they don’t.

3 Small Tips for Dealing with Phase Issues

1. Microphone placement – Most phase-related issues you’ll deal with come simply from how the microphones are placed. Take time to listen to all the microphones in your recording session blended together. Each microphone may sound fine by itself, but phase issues appear when all the signals are combined. The easiest way to hear this is to listen to them together in mono. Also, every additional microphone in the session adds another chance for phase problems to occur, so it’s always best to use the fewest microphones possible to get the job done.

2. Plug-in latency – Within your recording software, pretty much any plug-in you use will introduce latency, or delay, in your audio. This can cause small phase problems in your mix. If you put a plug-in with 20 ms of latency on one track and not on another, the two tracks will be out of phase with each other. This isn’t necessarily a big issue, but it’s something to keep in mind. If you used two microphones on your acoustic guitar, use the same plug-ins on each track so that they remain in phase with each other. Most audio software today compensates for plug-in latency automatically, but it’s still worth remembering.
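As a back-of-the-envelope sketch (using 44.1 kHz and the 20 ms figure above; the tiny track is hypothetical), that latency is easy to express in samples, and delay compensation amounts to padding every other track by the same amount:

```python
sample_rate = 44100
latency_ms = 20.0

# 20 ms of plug-in latency pushes that track late by this many samples:
offset = round(sample_rate * latency_ms / 1000.0)
print(offset)  # 882

# Delay compensation in effect pads every OTHER track by the same amount
# so everything lines back up (tiny hypothetical track for illustration):
dry_track = [0.4, 0.2, -0.1, 0.3, 0.1]
dry_track_aligned = [0.0] * offset + dry_track
print(len(dry_track_aligned) - len(dry_track))  # 882
```

882 samples is far more than enough to smear a pair of guitar mics, which is why matching the plug-ins on paired tracks (or trusting your DAW's compensation) matters.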

3. Linear-phase processing – This last entry isn’t really a tip so much as something to be aware of. Most EQ plug-ins shift the phase of low-frequency and high-frequency information by different amounts; the lows may come through the plug-in a little ahead of the highs, for example, or vice versa. This is known as phase shift, and in theory it can affect the overall clarity of your audio. Many plug-in manufacturers have developed “linear-phase” EQs, which are designed to avoid this frequency-dependent phase shift.
