7 Commandments of Audio Engineering

If you are wondering what you have to do to break into the music industry as an audio engineer in Chicago, have no fear. Recording in Chicago is no different from pretty much anywhere else on the planet, except perhaps for the language. Here is a comprehensive list of skills you can develop to position yourself as a top engineer. Notice that four of these are what can be defined as “base skills,” meaning they are imperative for any job in the music industry or elsewhere. The remaining three are “job-specific skills” and relate directly to your work in the studio.

BASE SKILLS

1. Ability to read, write, and follow directions. So why is it so critical to follow instructions in a recording studio? For starters, you could fuck up the gear in the studio. You are also working with clients’ master recordings, which may represent thousands of hours of time and financial investment. In Chicago, some of those clients might not be too happy if their masters get messed up, so it could potentially mean your life. Moreover, following instructions shows that you are reliable and dependable, which in turn gives the head engineer or manager confidence that you can be mentored and trusted to carry out client requests properly. Following directions is crucial to working successfully in any recording or production studio, let alone in life.

2. Communication. There have been many times when I was engineering a session and the artist or producer turned to me and said, “It just doesn’t sound right. I’m not sure really what it is about it, but it is not grabbing me.” We often spend long hours trying to figure out how to understand our clients. In a way, one could say we are quasi-psychologists. The ability to communicate clearly is crucial to being as productive as possible in the studio. Many delays and fuck ups in the studio are the result of a lack of, or breakdown in, communication. Knowing when to speak up and when to hold back comes with time, patience, practice, and understanding.

3. Ability to stay cool and calm. Musicians can get pretty emotional in the studio. In essence, they are pouring their emotional well-being into their performance for all to hear. A good engineer must know how to stay calm and reserved when a musician voices their frustrations. I have seen many sessions where fights break out in the control room between band members, or between band members and management. These people have actually swung at each other, which generally is not helpful to the creative process. Remember, your job is to keep the project on track at all times, so it is important for you to always remain calm and relaxed, especially in Chicago. May the force be with you.

4. Basic computer knowledge. So, how much do you really need to know about computers to become a good recording engineer? Well, many ambitious producers and sound engineers have a good deal of experience operating sound recording and editing software on a computer, and it’s certainly a bonus. The more you know about computers, the more valuable your service will be in the studio. It is important to master the basics, such as word processing and data entry, as well as spreadsheet functions so you can use the computer to do simple math. Be comfortable with these three basic applications as well as the computer’s recording and production software. A basic computer course at your local community college can teach you these fundamentals. It’s also important to know both the Macintosh and PC platforms: the Macintosh more so for composing, recording, and mixing; the PC for business management and data entry. Initially, all the best computer editing software for sound and music was found on the Mac, but over the last couple of years the PC has been making strides in the audio department. Many programs that were once exclusive to the Mac are now available on the PC as well.

JOB-SPECIFIC SKILLS

5. Critical auditory skills. If you haven’t heard or experienced sound in an acoustic setting, you might not know what you are listening for, which can bring you problems as an engineer. You’ve got to use your ears and really listen to the sound or music. As an engineer, it is important to get out there in the real world and experience every type of music there is in a concert setting, from country to jazz, rock to big band, and opera to blues. Remember, musical recordings are really just sonic paintings. To be a competent recording engineer, you have to understand what instruments sound like naturally, by themselves or together in ensembles. Treat the time spent developing these skills just as you would homework. Go out as much as possible, because it is important to hear it all. You never know when a client will step into the studio with an instrument, sound, or musical skill that you are not familiar with. That unfamiliarity can lead to poor engineering decisions, which in turn lead to poor or undesirable recordings. That is why it is important to know how each instrument sounds naturally.

6. Audio aptitude. It is important to develop a comprehensive knowledge of audio: level, signal flow, phase, the frequency spectrum, microphone selection and placement, and acoustics. Whether you went to a reputable audio school or learned on your own, it is important to learn and understand the basic concepts of how to make a recording, do overdubs, edit correctly, manage a mix-down properly, and master. Even knowing the process of duplication and distribution to stores and online retail outlets doesn’t hurt.

7. Studio Chi. The overall tone or vibe that an engineer brings into a session with a client is vitally important to the overall energy and creative workflow in the studio. Some of the best engineers out there are the ones who create a climate conducive to positive, creative work. The equipment doesn’t mean much if the vibe of the session is no good: even a half-million-dollar recording console isn’t doing any good if a client walks in and doesn’t feel right. When artists are babied or pampered in the studio, they tend to lose their inhibitions, open up, and perform much better overall. A good engineer helps generate that vibe in the studio in order to capture it and bring it out in the song.

Now you know the basic skill set needed for a good career in the field of audio engineering. The first six you can learn in school, whereas the seventh takes time and experience. It’s important, not only as an aspiring engineer/producer but also as a musician, to sit in on sessions and watch how other engineers do their thing. Internships at major recording facilities are a great opportunity to see how things really work in a professional studio. After a while, you will find that every session and client is different, as is what is needed to create the right mood and vibe for each one. At the end of the day, you’ll probably find yourself playing psychologist as much as engineer, producer, songwriter, mentor, friend, and fan. The list can go on and on.

Day in the Life of a Session

You have come up with a great new song. Overcome with a sense of pride and achievement, you ponder what to do next. After shoddily recording a few demos on your laptop, you decide that it is time to give your music the love it deserves – the professional treatment of a commercial recording studio.

Once you’ve slimmed down your list of local studios, you choose a Chicago recording studio that is warm, economical, and operated by people who go out of their way to make your project the most important. On the day of the session, your nerves start to unsettle as you make your way to the place that will help you immortalize your song. Upon arriving, the chill vibes and pleasant nature of the staff and engineer have a calm and reassuring effect.

Half an hour later, everyone is in position. The meters bounce and glow. In just a few minutes, the nervousness you entered the studio with turns to jubilation as you realize that the sound you are hearing is coming from you. It is your sound. And, paired with the proper recording environment, gear, and engineers, what started as a simple idea is now becoming a really good song.

Now that the recording of the music and lyrics is complete, the engineer tells you that he will need some time to mix your song so that it has that “radio” shine and is ready for distribution through iTunes and other internet stores. You listen to the sound coming out of the speakers in the control room. Faders are raised and lowered, knobs are tweaked, and the audio engineer massages the computer keys and bends the software to his will. In a little less than two hours he plays you the result of both his and your efforts. As a smile spreads across your face, there is only one word that comes to mind – “WOW!” Going to Studio 11 to record your new song was the best decision you could have made.

20 Valuable Game Changing Studio Lessons

Recording, mixing, and mastering hip hop, rap, and other kinds of music is no doubt a talent and an art form. Developing this skill takes time, and requires many mistakes, experiments, and life lessons. To truly benefit and prosper in audio engineering, one has to have a hunger to go through and take in the vast amount of knowledge out there on recording, mixing, and mastering.

Once you begin sifting through the knowledge and applying it in a real world setting, you will come to find there is a big difference between knowing something and actually getting it. The beautiful thing is, once you finally get something, the ability to master those techniques and others becomes exponentially easier. You will find that recording, mixing, and mastering isn’t really a job anymore, it is just a part of life.

Over the last 16 years, I have had to learn many hard lessons in the world of audio engineering. A lot of times I thought I understood certain techniques and how to apply them, but I was often wrong, or only partially correct. Some of my most memorable times in the studio were when I finally ‘got’ a certain recording or mixing technique. You could say these were my ‘a-ha’ moments.

1. Learning everything there is to possibly know about the hardware and tools I have at my disposal.

2. Compression: Much can be discussed about this subject, but one thing that is so crucially important is getting the attack and release times correct. Compression can really lift up a performance or it can shamefully destroy it.

3. The day I realized that pretty much anything in the studio could be automated in some form or another.

4. Low- and high-pass filtering is truly my friend.

5. The first time I turned off my computer screen to listen back to a mix. Blew me away how much easier it was to listen, identify, and make changes to the mix.

6. The first time I recorded and mixed in a professional acoustically treated studio. The amount of detail and separation I could hear in the frequency spectrum almost startled me.

7. How simply cutting out a little 275-375 Hz on most close-mic’d tracks can remove boxiness and really bring out detail.

8. Getting rid of frequencies, or subtractive equalization, is so much better than additive equalization. It’s just easier and more natural to take out what isn’t needed than to artificially add in what is.

9. Hearing live drums mic’d through a stereo pair of C12s and PZMs. I finally understood where the life and dimension of a recorded drum performance came from.

10. Discovering that the more plugins I use in a mix, the more digital and artificial sounding the mix will become.

11. Distortion is a form of compression and a good way to add harmonics.

12. The first time I threw up a quick mix of raw audio tracks instead of attempting to dial in the perfect sound on every track. It increased the overall quality of the mix while cutting down average mix time.

13. It’s always good to get feedback, even if it’s from somebody without any musical or audio engineering experience.

14. Getting stuck in a mix, zeroing the faders, trashing all inserts and sends, and then pushing the faders back up again. A valuable learning experience and a test of the ego.

15. Dynamic equalization via side-chain compression. The bee’s knees!

16. Realizing that knowing how you want things to sound in your mix is so much more important than just knowing cool mix techniques and tricks. The tricks can sometimes help you get there a little faster though.

17. Musical arrangement is vitally important to the outcome of a mix on a song. It’s where the song can really be made or destroyed.

18. “Fix it in the mix” is a term that doesn’t always apply to every situation. Sometimes it is faster and easier to just re-record something if it is not right.

19. Parallel compression allows for smoother, natural dynamics overall and less aggressive compression individually.

20. When, after what seemed like centuries of recording amateur artists and bands, somebody of superstar status steps up in front of the microphone and shows how it’s really done. Wow!
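Lesson 2 above, about attack and release times, is easier to internalize once you sketch what a compressor’s gain computer actually does. Below is a minimal, illustrative feed-forward compressor in Python; the parameter values and function names are my own, not taken from any particular plug-in. The attack coefficient controls how quickly gain reduction clamps down when the level crosses the threshold, and the release coefficient controls how gradually it lets go.

```python
import numpy as np

def compress(x, sr, threshold_db=-20.0, ratio=4.0, attack_ms=10.0, release_ms=100.0):
    """Minimal feed-forward compressor with one-pole attack/release smoothing.

    Illustrative only: real compressors differ in detection and knee shape."""
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))   # smoothing while reduction grows
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))  # smoothing while reduction decays

    level_db = 20.0 * np.log10(np.maximum(np.abs(x), 1e-10))
    gain_db = np.zeros_like(x)
    env = 0.0  # current gain reduction in dB
    for i, level in enumerate(level_db):
        over = max(level - threshold_db, 0.0)     # dB above threshold
        target = over * (1.0 - 1.0 / ratio)       # desired gain reduction
        coeff = att if target > env else rel      # attack up, release down
        env = coeff * env + (1.0 - coeff) * target
        gain_db[i] = -env
    return x * 10.0 ** (gain_db / 20.0)

# A hot signal gets pulled down; a quiet one passes untouched.
sr = 44100
t = np.arange(sr) / sr
hot = 0.9 * np.sin(2 * np.pi * 220.0 * t)
print(np.max(np.abs(compress(hot, sr))) < 0.9)   # True
```

Stretch `attack_ms` out and the transient punches through before the gain comes down; shorten it and the transient is caught immediately. That is exactly the lift-or-destroy trade-off lesson 2 warns about.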

Phase Issues and how to resolve them

The problem of phase is a consistent issue for recording and mixing engineers alike. Even the smallest phase problems inside your song can ruin your music. Phase issues can make tracks sound empty or spectrally degraded, like something is missing, and problems on one track can lead to problems on other tracks as well. These problems, as severe as they can be, can also be easily avoided or fixed, but first it is essential to understand how they occur.
This article will attempt to cover almost everything there is to know about phase: what it is, how it occurs, how it sounds, and some procedures for dealing with it.

What Is Phase?

I’m going to consult the all-knowing source they call Wikipedia to answer this question.

It says:

Phase in sinusoidal functions or in waves has two different, but closely related, meanings. One is the initial angle of a sinusoidal function at its origin and is sometimes called phase offset or phase difference. Another usage is the fraction of the wave cycle which has elapsed relative to the origin.

A less scientific definition provided by Sweetwater Sound is:

Audio waveforms are cyclical; that is, they proceed through regular cycles or repetitions. Phase is defined as how far along its cycle a given waveform is. The measurement of phase is given in degrees, with 360 degrees being one complete cycle. One concern with phase becomes apparent when mixing together two waveforms. If these waveforms are “out of phase”, or delayed with respect to one another, there will be some cancellation in the resulting audio. This often produces what is described as a “hollow” sound. How much cancellation, and which frequencies it occurs at, depends on the waveforms involved and how far out of phase they are (two identical waveforms, 180 degrees out of phase, will cancel completely).

No wonder phase is such a confusing topic. At a quick glance, the definition is even confusing to me, but that’s why I am not a professor. At the end of the day, how does this definition apply to you and me when we’re trying to make a record? This is the part where I could go on a rant about phase vs. polarity. Instead, I’ll try to break it down more simply.

Phase vs. Polarity

Let’s define things a bit more, starting with phase and polarity. These two words are often used interchangeably, but they are VERY different.

Phase is an acoustic concept that affects your microphone placement. Acoustical phase is the time relationship between two or more sound waves at a given point in their cycle, measured in degrees. When two identical sounds that are 180 degrees out of phase are combined, the result is silence; anything in between results in comb filtering.

Polarity is an electrical concept relating to the value of a voltage: whether it is positive or negative. Part of the confusion between these concepts, besides equipment manufacturers mislabeling their products, is that inverting the polarity of a signal, changing it from plus to minus, is basically the same as making the sound 180 degrees out of phase.

In case these definitions went over your head: phase is the difference in waveform cycles between two or more sounds; polarity is either positive or negative.
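If code reads more easily than definitions, here is a small NumPy sketch (with illustrative values of my own choosing) of why polarity and phase only coincide for a simple sine. Flip the polarity of a tone and it cancels completely against the original; delay it half a period, so its fundamental is 180 degrees out of phase, and any even harmonic survives the sum.

```python
import numpy as np

sr = 48000
f = 100.0                      # fundamental frequency, arbitrary choice
t = np.arange(sr) / sr
T = 1.0 / f                    # period of the fundamental

def tone(t):
    # Fundamental plus a second harmonic, so the waveform is not symmetric.
    return np.sin(2 * np.pi * f * t) + 0.5 * np.sin(2 * np.pi * 2 * f * t)

x = tone(t)
flipped = -x                   # polarity inversion: every sample negated
delayed = tone(t - T / 2)      # half-period delay: fundamental shifted 180 degrees

# Summing a signal with its polarity flip cancels completely...
print(np.max(np.abs(x + flipped)))        # 0.0
# ...but summing with the half-period-delayed copy does not: the second
# harmonic has gone through a full cycle and reinforces instead of cancelling.
print(np.max(np.abs(x + delayed)) > 0.5)  # True
```

This is why the polarity button on a console undoes a miswired cable perfectly, but cannot fix a timing offset between two microphones.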

What It Means to Be In and Out of Phase

When two sounds are exactly in phase (a 0-degree phase difference) and have the same frequency, peak amplitude, and shape, the resulting combined waveform will be twice the original peak amplitude. In other words, two sounds exactly the same and perfectly in phase will be twice as loud when combined.

When two waveforms are combined that are exactly the same but have a 180-degree phase difference they will cancel out completely producing no output. In the real world of recording, these conditions rarely happen. More than likely the two signals will either be slightly different, like two different microphones on the same source, or the phase difference will be anything other than 180 degrees out of phase.

In cases where the waveforms are not 0 or 180 degrees apart, or the waveforms are somehow different, you get constructive and destructive interference, also known as comb filtering. The nulls and peaks of the waveforms don’t all line up perfectly, so some frequencies will be louder and some will be quieter. This is the tricky part of using several microphones on a single source.

For the purposes of this article, we’re only dealing with phase. Here’s the deal: sound travels at roughly 1,100 feet per second. That’s extremely, extremely, EXTREMELY slow compared to light. Since sound travels so slowly, you have to pay careful attention when recording. Why? Because if two signals are out of phase with each other, your recordings will sound thin and your music will not sound good.

The biggest problem area for phase is when you’re using multiple microphones on a single sound source, like drums or an orchestra. Depending on where each microphone is placed in relation to the sound source, the sound will reach each microphone at a different moment in time. When you listen to these microphones blended together, there’s a chance it will sound “hollow” and “thin” because the signals captured by the microphones are out of phase with one another.

Really, at the end of the day, phase issues are nothing more than timing issues. The sounds from multiple sources are ideally meant to reach your ear at the same time, but sometimes they don’t.
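To put rough numbers on those timing issues, the sketch below (illustrative, using the ~1,100 ft/s figure quoted above) converts a path-length difference between two microphones into a delay, and then into the first frequency at which the combined signal cancels. A comb-filter null lands wherever the delay spans half a cycle.

```python
# Speed of sound, using the rough figure quoted above (illustrative numbers).
SPEED_OF_SOUND_FT_S = 1100.0

def mic_delay_ms(extra_distance_ft):
    """Extra time (ms) sound needs to reach the farther microphone."""
    return extra_distance_ft / SPEED_OF_SOUND_FT_S * 1000.0

def first_null_hz(delay_ms):
    """First comb-filter null when two equal signals sum with this delay:
    cancellation lands where the delay spans half a cycle, f = 1 / (2 * delay)."""
    return 1000.0 / (2.0 * delay_ms)

d = mic_delay_ms(1.0)           # second mic just one foot farther away
print(round(d, 2))              # 0.91 (ms)
print(round(first_null_hz(d)))  # 550 (Hz)
```

In other words, a path difference of a single foot already carves a null right into the midrange, which is why small changes in mic placement can make such an audible difference.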

3 Small Tips for Dealing with Phase Issues

1. Microphone placement – Most phase-related issues you’ll deal with come simply from how the microphones are placed. Take time to listen to all the microphones you are using in your recording session blended together. Each microphone may sound fine by itself, but phase issues appear when all the microphone signals are combined. The easiest way to check is to listen to them together in mono. Also, every additional microphone used in the session adds another chance for phase problems to occur, so it’s always best to use the fewest microphones possible to get the job done.

2. Plug-in latency – Within your recording software, pretty much any plug-in you use will induce latency, or delay, in your audio. This can cause small phase problems in your mix. If you put a plug-in with 20 ms of latency on one track and not on another, the second track will be out of phase with the first. This isn’t necessarily a big issue, but it’s something to keep in mind. If you used two microphones on your acoustic guitar, use the same plug-ins on each track so that they remain in phase with each other. Most audio software out today compensates for plug-in latency, but it is still something to be aware of.
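A quick sketch of what that 20 ms figure means in samples, and how delay compensation undoes it. This is pure NumPy with illustrative numbers, not the internals of any specific DAW:

```python
import numpy as np

sr = 44100
latency = int(0.020 * sr)   # a hypothetical 20 ms plug-in delay, in samples (882)

t = np.arange(sr) / sr
dry = np.sin(2 * np.pi * 440.0 * t)

# The "processed" track comes back late by the plug-in's latency.
wet = np.concatenate([np.zeros(latency), dry])[:len(dry)]

# Manual delay compensation: slide the late track earlier by the same amount.
aligned = np.concatenate([wet[latency:], np.zeros(latency)])

# Apart from the padded tail, the compensated track matches the dry one again.
print(np.allclose(aligned[:-latency], dry[:-latency]))  # True
```

Automatic delay compensation in a modern DAW does essentially this bookkeeping for you, shifting every track by its reported plug-in latency so they line back up.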

3. Linear-phase processing – This last entry isn’t really a tip; it’s just something to be aware of. Most plug-ins process low-frequency and high-frequency information at different speeds. The lows may come through the plug-in a little faster than the highs, for example, or vice versa. This is known as phase shift, and it can affect the overall clarity of your audio. For this reason, many plug-in manufacturers have developed “linear-phase” EQs and similar processors, which are designed to combat phase shift.
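The underlying idea can be shown with a toy one-pole filter in pure NumPy. Run the filter forward and the output lags the input; run it forward and then backward over the same audio and the two passes’ phase shifts cancel, leaving zero net phase. This offline forward-backward trick is only an illustration of the principle; real-time linear-phase EQs achieve a similar result differently (typically with symmetric FIR filters and added latency), and none of this reflects any vendor’s actual algorithm.

```python
import numpy as np

def one_pole_lp(x, alpha):
    """Toy one-pole low-pass: y[n] = alpha*y[n-1] + (1-alpha)*x[n]."""
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc = alpha * acc + (1.0 - alpha) * v
        y[i] = acc
    return y

def zero_phase_lp(x, alpha):
    """Filter forward, then filter the reversed result and flip it back:
    the two passes' phase shifts cancel, leaving zero net phase."""
    return one_pole_lp(one_pole_lp(x, alpha)[::-1], alpha)[::-1]

sr = 2000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 50.0 * t)

causal = one_pole_lp(x, 0.9)     # noticeably lags the input
linear = zero_phase_lp(x, 0.9)   # filtered tone, but still time-aligned

def similarity(a, b):
    """Cosine similarity: 1.0 means the signals are perfectly in phase."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity(x, linear) > 0.99)  # True
print(similarity(x, causal) < 0.9)   # True
```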

RECORDING HIP HOP & RAP IN CHICAGO

When it comes to Hip Hop & Rap music, many music aficionados like to think of them in a negative context. Their argument, that these genres aren’t really music because they’re absent of melodic structure and theory, might have held true if you were alive during the Baroque period. But what they forget is that rhythm is just as important as melody. Rhythm is what drives melody and harmony; it is their foundation, or backbone.

Hip Hop & Rap music, driven largely by drumbeats and rhythmically spoken word vocals called “raps”, is an exploration into the sound and syncopation of rhythm. The drumbeat usually provides the basic rhythm of the song, while the “spoken rap” provides intricately syncopated rhythms over the beat. Understanding and appreciating that these two core elements are central to the hip hop and rap genres is vitally important to choosing the correct approach and methods for recording these styles of music.

While it’s true that recording Hip Hop & Rap may not be as complicated as recording a live jazz ensemble or rock band, the process of recording a good rap vocal is equally involved. It is not just about setting up a microphone and headphones and then pressing record. First, the room in which the rapper is recorded must be taken into consideration. The sound or ambience of a room can have a major effect on a recorded vocal. When a rapper is performing, the waveforms coming from his or her voice bounce and reflect in many different directions around the room. The size of the room and the materials it is built from determine how long those waveforms will reflect, or reverberate, around the room. This affects the overall tone of the recorded vocal, sometimes dramatically. A room with a lot of reverb, or a wet room, is not preferable when recording rap vocals: it tends to smear the sound of the voice, dulling the rhythmic excitement and clarity of the performance. Recording rap vocals in a room without reflective or reverberant qualities, also known as a dry or dead room, is optimal for capturing a good rap performance.

The next thing to consider when attempting to record a good “rap” performance is the microphone that will be used to capture it. The microphone, which is a transducer, is the most important piece of the recording chain. It is the conduit that allows the rapper’s voice and message to be heard anywhere, anytime. Imagine if you could buy yourself a pair of ears: would you buy a cheap or average pair? No, you would get the best ears you could afford so you could have the best hearing possible. The same can be said for microphones: you want the best possible so the listener can really hear you. A good vocal microphone should provide a crisp and smooth high-frequency response, a warm and present midrange, and a gentle low-frequency response. Condenser microphones such as the Neumann U47, AKG C12, Telefunken ELA M 251, or the Audio-Technica 4060 (which is what we use) are great not only for Hip Hop and Rap, but for Pop, Rock, and Jazz, among many other styles of music.

The third thing to consider when recording a rap performance is the power and preamplification source for the microphone. Good condenser microphones like the U47 or AT4060 require either a dedicated power supply or phantom power, which is a power source built into your preamplifier. The best way to power your microphone, however, is a dedicated power supply. Good power supplies are built from high-quality parts and electronics, which in turn relay a clean and steady source of power to your microphone. A power supply is the better option because its sole job is to provide power to the microphone and nothing else. With a preamplifier, phantom power relays electricity to your microphone from the same circuitry that powers the amplifier, so the power is not as clean and reliable because it is divided among the different circuits within the preamp. It’s kind of like moving water through one hose and then splitting the flow off into four hoses: water will still flow out of all four, but the pressure will be less steady than if it flowed out of just the one.

Now that we understand the importance of powering your condenser microphone with a power supply, it is now time to discuss the pre amplification of the microphone signal into your recorder. Microphone signals are often too weak to be transmitted to units such as mixing consoles and recording devices with adequate quality. Preamplifiers increase the microphone signal to line level (the level of signal strength required by such devices) by providing stable gain while preventing induced noise that would otherwise distort the signal.
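As a ballpark of what that gain job looks like in numbers: a microphone signal might sit somewhere around -50 dBu, while professional line level is +4 dBu, so the preamp has to supply roughly 54 dB of clean gain. These level figures are typical textbook values, not measurements of any particular mic or preamp.

```python
def gain_needed_db(mic_level_dbu, line_level_dbu=4.0):
    """Gain the preamp must supply to lift a mic signal to line level."""
    return line_level_dbu - mic_level_dbu

# A microphone signal might sit somewhere around -50 dBu (a textbook
# ballpark, not a measurement of any particular mic on any source).
gain_db = gain_needed_db(-50.0)
print(gain_db)                       # 54.0 (dB)

# Every 20 dB of gain is a factor of 10 in voltage.
voltage_ratio = 10 ** (gain_db / 20.0)
print(round(voltage_ratio))          # 501
```

Multiplying a tiny signal by a factor of roughly five hundred is why preamp quality matters so much: any noise or instability in that stage gets amplified right along with the vocal.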

Even though the microphone is the source of most of the coloration of a recorded vocal track, the preamplifier also affects the sound quality of the signal. A preamplifier might load the microphone with low impedance, forcing the microphone to work harder and thus changing its tone. It might also add coloration through built-in characteristics or features such as vacuum tubes, equalization, and dynamics control. The preamplifier we use here at Studio 11 is the Manley VoxBox, a vacuum tube preamplifier with both equalization and dynamics control. In the 18 years or so we have spent recording Rap and Hip Hop, we haven’t come across a better preamplifier for ‘rap’ vocals. It delivers a warm sound with good midrange at around 1 kHz, and the limiter is great for controlling rappers whose performances are rather dynamic.

The last step in the process of recording Rap & Hip Hop vocals is the consideration of the recorder that will be used to capture the performance. Back at the start when Hip Hop & Rap first found its way into the music scene across Chicago, the rapper’s performances, beat, and music were all recorded to analog tape in a professional music studio. This helped provide that rich warm dirty sound that characterized such early Chicago Hip Hop artists like Common, Crucial Conflict, Ten Tray, and Twista to name a few. As time progressed, digital media like the Alesis ADAT and the Tascam DA Series began to take over the recording market because of their affordability over analog tape machines. Smaller localized studios began to open up offering cheaper rates over larger professional studios, which in turn offered more artists the chance to get into a studio and record their projects.

By the early ’90s, companies like Digidesign, Synclavier, and Sonic Solutions began to develop software and hardware for recording and editing audio on a computer-based system. At first, these DAW systems were expensive and could only record and edit. But by the late ’90s, Digidesign’s flagship system, Pro Tools, started to become the industry-wide standard by allowing engineers not only to record and edit multi-track audio in real time, but to mix, master, compose, and arrange as well. No other system offered all these options with the reliability and stability of Pro Tools at the time. Also, projects and sessions became completely recallable, which had always been a tedious chore with analog tape.

Now in 2014, the market for quality DAWs has expanded thanks to the affordability of powerful computers. Virtually any audio software out there can record audio from a microphone source. However, Pro Tools is still the only software that allows the engineer to record multiple tracks at once with near-zero latency while using plug-ins and other real-time features. No matter what professional studio you go to, it will feature a Pro Tools system 99 times out of 100. It is the industry standard when it comes to recording and editing Hip Hop & Rap.

Another small thing to be concerned about when recording a rapper is the headphone mix the performer will reference while recording. Remember, microphones pick up all sound, no matter how loud or subtle. You must be careful with how loud the headphones are when recording, as the microphone will pick up the residual sound, or bleed, from the headphones. This bleed can add up in volume when recording multiple tracks of vocals and can alter the sound of the vocal over the track by creating phase issues in the midrange, not only in the vocal but in the track itself. It is quite common for rappers to prefer a louder headphone mix so they can ‘get into’ their performance. A good way to achieve a loud headphone mix while reducing phasey headphone bleed is to have the performer wear a stocking cap over their headphones. This keeps the seal of the headphones tight to the ears and dulls any bleed that escapes. The recorded rap performances will be much cleaner and thus easier to work into your mix.

The last thing to discuss when recording Hip Hop and Rap is the beat the rapper will be performing over. For the past decade or so, most rappers have recorded their performances on top of an instrumental 2-track that they either purchased or licensed from an online beat store or a producer. Every now and then you get lucky and a client will bring in the individual stems, or track-outs, of the song they are performing over. These individual tracks are usually produced and rendered down in production software like Ableton, FL Studio, or Logic. The problem with these rendered tracks is that, because they may have been created inside cheap production software, they can take on a cold, lifeless, digital sound. Programs like Reason, GarageBand, and Acid are notorious for rendering files that do not sound as good as they originally did in the session. If there is extra time in our clients’ sessions here at Studio 11, we like to take these digital track-outs and transfer them to analog tape, which brings the digital files back to life by adding characteristics such as warmth, saturation, and even harmonics. Yes, transferring to tape might add a little noise, but noise isn’t always a bad thing. Sometimes it can be the difference maker.
