Here in Chicago, it is well known that we like our music raw and dirty. So is it really worth recording and producing music in Chicago, or anywhere else, at high resolution when it is meant to sound raw and lo-fi, like hip hop and rap? While many audio engineers and producers would say there is simply no need, more and more of them are beginning to realize the benefits of higher-resolution recording at sample rates such as 96 kHz and 192 kHz. I realized it too, after eating two Chicago-style hot dogs and a polish. You can too, so listen up!
As we wander away from the era of DADC (digital audio data compression), the question of why high sample rates could benefit the consumer delivery platform becomes increasingly important. We inside the audio community struggle with the notion of using sample rates that improve fidelity beyond the limits of our ears. Why can't we just stop at the 16-bit, 44.1 kHz digital format? 16 bits gives us the dynamic range to mask the noise floor, while 44.1 kHz gives us the full 20 Hz – 20 kHz frequency spectrum our ears can discern. Wasn't the conclusion Philips and Sony reached back in the day good enough? A CD's sampling rate and bit depth delivered the best sound possible over any other digital product of its time. The answer today, in 2014/2015, is no.
High-Resolution Audio systems offer the assurance of an extended high-frequency range. These digital systems now operate at 2 to 4 times the sample rate of the standard CD, which means they can extend the playback frequency range well above the 22.05 kHz limit of the standard CD. Does this added high-frequency range really improve the quality of our listening experience? Yes, it definitely can. The catch is that High-Resolution Audio will only arrive when every component in the playback chain can equal the resolution of the source.
For example, let's consider a digital audio playback system consisting of a CD player, preamp, power amp, and speakers. If each component has a frequency response of 20 Hz to 20 kHz, is this good enough to reproduce all the frequencies we humans can hear? The quick answer is no, and here is why.
An audio system’s real frequency response can be understood by adding together the frequency responses of each component in the system’s chain. If we look at our example above, we have four components in total: CD player, preamp, power amp, and speakers. If each component is -3 dB at 20 kHz, then in total we have a system that is -12 dB at 20 kHz. Because this combined roll-off is so steep, it begins to affect the audible high-frequency information we humans hear, measuring roughly -4 dB at 10 kHz, -0.66 dB at 5 kHz, and so on. In conclusion, this system will not even come close to matching the performance of our ears.
So, if we want to accurately reproduce audio at 20 kHz, the frequency response of each component must continue well past 20 kHz. Is this what you would call excessive and unnecessary? To test the idea, let's replace each of the four components in our example above with components that have a 200 kHz bandwidth. Combined, the audio system now measures -4 dB at 100 kHz, -0.8 dB at 50 kHz, and close to -0.2 dB at 20 kHz. This simple 4-component signal chain achieves a 100 kHz bandwidth and is consistent with the 96 kHz bandwidth of a 192 kHz digital sample rate. It can be argued that the region between 20 kHz and 100 kHz may offer little musical content, and even if it does, the only living thing in your house that could possibly hear it is the family dog or cat. The real asset is that we have preserved the entire 20 Hz to 20 kHz bandwidth after passing through four audio components in a typical playback system.
Nowadays, professional audio systems usually have analog signal chains that are much longer than four components. These operations place demanding requirements on the frequency response of each analog component in the chain. A chain of 16 analog components, each with a bandwidth of 20 kHz, will produce an overall frequency response of about -48 dB at 20 kHz, -16 dB at 10 kHz, and -3 dB at 5 kHz. That really is telephone quality at best if you look at the curve on a frequency graph! If the same system is built with 200 kHz components, the overall response will be about -3 dB at 50 kHz, about -1 dB at 20 kHz, and -0.33 dB at 10 kHz.
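For the curious, the cascade math above is easy to reproduce yourself. Here is a minimal Python sketch, assuming each component rolls off like a single-pole (first-order) low-pass filter that is -3 dB at its rated bandwidth, which is roughly the model the figures above follow:

```python
import math

def cascade_db(f_hz, cutoff_hz, n_stages):
    """Combined response in dB of n_stages identical first-order
    low-pass components, each one -3 dB at cutoff_hz."""
    per_stage_db = -10 * math.log10(1 + (f_hz / cutoff_hz) ** 2)
    return n_stages * per_stage_db

# Four 20 kHz components: about -12 dB at 20 kHz, -4 dB at 10 kHz
print(round(cascade_db(20e3, 20e3, 4), 1))   # -12.0
print(round(cascade_db(10e3, 20e3, 4), 1))   # -3.9

# Sixteen 20 kHz components: about -48 dB at 20 kHz
print(round(cascade_db(20e3, 20e3, 16), 1))  # -48.2

# Sixteen 200 kHz components keep 20 kHz within about -0.7 dB
print(round(cascade_db(20e3, 200e3, 16), 2))  # -0.69
```

Real components won't all roll off exactly like a single pole, so treat these as ballpark figures; the trend, though, is exactly the one described above.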
Altogether, very high bandwidth is required of each component in the audio chain if we want to assemble a High-Resolution system that can handle sampling rates such as 192 kHz. The real benefit of high-resolution audio is not inaudible content, but better performance of digital systems within the 20 Hz – 20 kHz range. The idea of high resolution has now branched out from recording studio playback systems to consumer playback systems as well. Neil Young’s popular consumer playback system ‘Pono’ offers response all the way up to 250 kHz.
How does this all relate to recording and producing one of these lo-fi, Chicago-sounding records on a high-resolution system? Well, if you think about it after everything mentioned above, it’s going to be a better lo-fi sound. Not because it’s going to magically sound better thanks to the high resolution, but because it’s going to translate better across the consumer market. For the first time, people will be able to hear the same sound at home that the artist heard when making the record in the studio. This is because from recording to mixing, and from mastering to final print and consumer playback, the full 20 Hz – 20 kHz spectrum will have been entirely preserved. The sound stays accurate and unaffected from the process of making the record to hearing it. To all you audio engineers out there, take it into consideration, even if you’re recording lo-fi music such as hip hop and rap, and especially hip hop and rap from Chicago.
So don’t be afraid to make those high-resolution records in 192 kHz, that is, if your system can hack it. The consumer market is finally picking up the pace on affordable high-resolution, or high-definition, playback systems. Really, all we need now for high-resolution consumer audio to take hold is a significant market progression away from the MP3 format. Let’s keep our fingers crossed, guys and girls!
To become a prosperous recording engineer in Chicago, you must possess a wide and unique set of skills in and out of the field. Nowadays, not only do you have to be a good musician, computer tech, and gear junkie, but you also have to be an extremely good salesman, business manager, psychologist, and even journalist. When it comes to the subtleties of sound, the engineer needs to have or develop a trained ear, master complicated analog and digital devices, and stay in the know on new technologies and methods that achieve specific artistic results.
It isn’t too surprising that most premier recording engineers in Chicago and elsewhere are musicians themselves. Many of them at one time were eager musicians who eventually realized their affinity for being in the studio, helping other artists make the most out of the projects they are recording.
One of the most important skills a recording engineer needs to master is a sense of balance. No, I am not talking about standing up and falling down. We are not gymnasts. We are recording engineers, specialists of audio and sound. Almost everything the recording engineer undertakes before, during, and after the recording session has to do with determining and maintaining balance relationships among all the elements or parts that make up a song. The vocal can’t be too quiet. The drums can’t be too overpowering. And so on.
One important thing to keep in mind with balance is that unless you have learned how to use the tools to properly achieve it, having a good ear is pretty useless. A professionally experienced recording engineer will commonly say that the control board or DAW system is really just an extension of himself, a third hand that invisibly manipulates and paints the sound into a three-dimensional sonic painting. Kind of like a jigsaw puzzle builder. You will also hear them say that to be a good engineer, one must be able to see or visualize the song before it is finished. This visualization is key to understanding what the level of each particular element inside a song should be relative to the rest of the elements.
With powerful digital audio workstations (DAWs) like Pro Tools and Logic, there is a temptation to really go overboard and use every engineering trick out there on every track. A professional recording engineer will not only know how to balance levels in a session, but also how to balance compression, equalization, and effects through a vast array of editing tricks and software-based plug-ins. There is, you know, that thing called being overproduced.
It is also important that a recording engineer be intimately familiar with every piece of equipment in the studio. This means understanding how each piece of equipment works, how it affects the sound of recorded audio, and what its strengths and weaknesses are. For example, certain compressors sound good on drums, while others sound better on vocals. The engineer must also be a specialist on the varieties of microphones available in the studio (condenser, dynamic, ribbon), as well as the different preamplifiers and amplifiers that will be used to amplify the signal from the microphones.
Another hugely important skill that separates the good engineers from the bad is the ability to continuously maintain a strong work ethic while paying incredibly close attention to detail. The profession of the audio engineer doesn’t always follow the usual 9-to-5 routine found in most professions. It really isn’t that uncommon for a recording engineer to endure marathon studio sessions that last several days or more, even weeks! No matter what the working conditions might be, the engineer is always expected to make the best recordings he or she can while keeping everything in the session running smoothly.
It is also important that a good recording engineer learns how to work quickly in the studio. He or she must never be a bump in the road to the overall creative process. For the client trying to record new, spontaneous ideas, if the engineer isn’t ready to go, the client could lose confidence and creativity, creating frustration and tension during the recording session. Even though the recording engineer’s job can be incredibly complex, managing many different important tasks at once, it should never dominate the focus of the actual studio session. The focus should be on the creative process at hand.
This brings us to the final and perhaps most demanding skill the Chicago recording engineer must master in and out of the studio: communication. Recording artists, who each have their own style of communication, can sometimes make the job quite difficult for the recording engineer. It is important for the recording engineer to learn when to speak out and when to be quiet, as well as learn the intricacies of the words the client speaks. A good recording engineer will establish himself or herself early in the recording session as a helpful resource in the creative process. If the client is a new one, the recording engineer will usually try to get to know the client a little before the session begins. This can take place by inviting the client to the studio for a pre-session meeting or tour, going to the client’s rehearsal space or home, or even attending one of the client’s live shows. By creating these personal relationships with your clients, you will ease the process of communication and make the overall recording session more pleasant for everyone involved.
Whether you are a recording engineer or in the process of learning to become one, chances are you have probably heard of the drum mix technique called ‘parallel compression.’ If you haven’t, let me quickly explain. This is when the recording engineer sends the drum mix out through a stereo buss to a compressor and mixes that signal back into the original stereo mix of the song. The method of ‘parallel compression’ can either be used in an extremely subtle or overt manner by modifying the extent and character of the compression and how much of that compressed signal is sent through the stereo buss. I have found that by using this technique the drive of the drum track performance really comes alive in the mix, even during quieter passages. It gives the drums that ‘in your face’ kind of sound while still retaining a smooth listenable quality.
The result differs from simply compressing the tracks, because at low levels you get both retained transients and an extra sense of loudness from the compression. When the song starts to get louder, the compression applied to the bussed signal becomes less prominent, because the uncompressed track’s dynamic swells tend to dominate the mix.
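For readers who learn by tinkering, here is a minimal Python sketch of the idea. The compressor is a deliberately crude static one (no attack or release envelope, unlike anything you would use on a real session), and the threshold, ratio, and wet level values are illustrative placeholders:

```python
def compress(samples, threshold=0.1, ratio=8.0):
    """Crude static compressor: the portion of each sample's magnitude
    above the threshold is divided by the ratio (no attack/release)."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def parallel_compress(dry, wet_level=0.5, **comp_kwargs):
    """Blend a heavily compressed copy back under the untouched dry track."""
    wet = compress(dry, **comp_kwargs)
    return [d + wet_level * w for d, w in zip(dry, wet)]

drums = [0.05, 0.9, 0.02, -0.8]
mix = parallel_compress(drums)
# The quiet sample (0.05) comes up by 50%, while the loud hit (0.9)
# gains only about 11% -- transients survive, quiet passages get denser.
```

That asymmetry is the whole trick: the quiet material is lifted proportionally more than the loud material, which is exactly the "alive at low levels, untouched transients at high levels" behavior described above.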
However, the technique of ‘parallel compression’ doesn’t have to be used just on drums. I discovered a similar technique for myself many years ago for vocals, only to learn that other recording engineers before me used this technique too. The reason why I decided to use ‘parallel compression’ on the vocal was I wanted the lead vocal track of the song to be articulate at lower levels while still retaining a listenable quality at higher levels. In essence, what I had created was my own form of dynamic equalization.
So, I wanted my lead vocal track to be brighter at low levels to help it slice through the mix. However, I knew adding top end would only cause the vocal to be really harsh at higher levels. Adding compression to the vocal didn’t create the sound I was looking for; it just sounded like I was sitting in the vocalist’s mouth. A little too intimate for what was needed in the mix of the song. Also, the combination of compression and high-frequency boost caused the vocal track to become really sibilant, which is usually not a good thing when it comes to lead vocals. Moving the compressor to the front of the vocal chain helped a little, but it still didn’t provide that magical sound I was looking for.
The solution to my problem presented itself after a few hours of mad science and experimentation. Since I work primarily on a DAW, I decided to duplicate the lead vocal track to a second track, so in essence, there were now two lead vocal tracks. On the duplicate, I first EQ’d all the bottom end out by running it through a high-pass filter. I then boosted the top end of the duplicate by about 5 or 6 dB. My goal was to create a vocal track that didn’t have a lot of tone or vowel sound to it, just consonants. I then heavily compressed the duplicate to control and push back any loud passages, allowing the softer passages to come through. Once this was done, I mixed the duplicate back in with the original lead vocal track. The end result was a lead vocal that was now easy to understand at low levels because of the extra boost in the highs, while also being pleasant to hear at louder levels because the heavy compression pulled those highs back down.
One thing you must pay attention to when using this technique on a DAW is processing delay, which can vary a lot from plug-in to plug-in. An easy way to correct this is to insert the same plug-ins on both the original and duplicate tracks, then set the plug-ins on the original track to bypass so they don’t affect its sound. If your DAW has delay compensation built into its software, make sure that feature is engaged.
This technique can also be done on an analog console. Buss your lead vocal to two channels on the console and assign both of those channels to the stereo buss. Insert a compressor with a quick attack and release onto the duplicate, or ‘articulation,’ vocal channel. Insert an EQ or filter on the articulation channel that can high-pass everything below 3 kHz. Set the compressor on the articulation channel to an extreme compression setting with the fastest attack and release possible. Listen to how the consonants sound coming from the articulation channel and make sure they are clean, without any tone or vowel sounds. Mix the articulation channel back into the stereo buss until you can noticeably hear the articulation come up at lower levels while hearing the tone warm up during louder passages. Mix the articulation signal in to taste.
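To make the articulation-track recipe concrete, here is a rough Python sketch of the whole chain. The one-pole high-pass and the static compressor are deliberate simplifications (a real articulation chain would use a much steeper filter and a compressor with attack and release), and every cutoff, threshold, and blend value is an illustrative placeholder, not a setting from a real session:

```python
import math

def one_pole_highpass(samples, cutoff_hz, sample_rate=48000):
    """Simple one-pole RC high-pass: strips low-frequency body/vowel
    energy, leaving mostly consonant content."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

def heavy_compress(samples, threshold=0.02, ratio=20.0):
    """Extreme static compression: squashes loud passages so the
    soft passages come through at a similar level."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(math.copysign(mag, s))
    return out

def articulation_mix(lead_vocal, blend=0.4):
    """Duplicate the lead, high-pass it around 3 kHz, squash it hard,
    then mix the result back under the original vocal."""
    dup = one_pole_highpass(lead_vocal, cutoff_hz=3000)
    dup = heavy_compress(dup)
    return [orig + blend * d for orig, d in zip(lead_vocal, dup)]
```

The structure mirrors the console version exactly: filter first, extreme compression second, blend to taste last.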
In the world of music production, the phrase ‘music producer’ can mean many different things to many different people. Some producers are musicians, some are just engineers, some are actually remixers, while others are all three. So what is it exactly that a music producer does?
In the simplest and most cohesive terms, the producer is the project manager of the process of composition, recording, mixing, and mastering. He or she directs and maintains the overall vision of the project, defines its sound and goals, brings a singular mindset to inspire, and helps draw the best performance out of the artist. A good producer will make the record more than the sum of its parts. In a way, you could almost say he or she is a scientist trying to create musical chemistry.
Each producer brings their own set of skills and approach to the project, so summarizing what they do can be quite difficult. In this blog, I will define several recognizable types of producers to hopefully make this clearer for you, the reader.
The Audio Engineer
The profession of the audio engineer is what usually defines the average person’s stereotypical notion of the ‘classic’ music producer. This image is aided by the visualization of the engineer perched over the mixing console, sweating over equalization and compression settings, effect combinations like chorus and reverb, track phasing, headroom, dynamics, and so on. To many in the music industry, the studio is almost an instrument, and it’s the music producer who plays it like a true virtuoso. For them, the project isn’t finished until the overall vision has been 100% realized. Whatever it takes, and however long it takes, to reach the end goal of a sonic masterpiece, they will attempt it.
The Advisor / Mentor
There are many producers in the music industry who, unlike the audio engineer, don’t have much technical expertise to speak of in the studio. They usually don’t sit at the mixing console during production of the records they make, but instead hire the best engineer to help achieve the overall vision of the specific project. These advisor/mentor producers usually focus squarely on the artist’s vision, inspiration, and performance, helping the artist produce the best sound and music they are capable of. One good example of this kind of producer is Rick Rubin, who seems to have a knack for positively inspiring and energizing the artists he works with.
The Midas Touch
There are some producers in the industry who almost seem to have a magical touch with whatever artist they work with, a kind of mysterious recipe that assures the best chance of success for the artist. Flood, with his trademark “wall of sound,” is one good example of this kind of producer; his career has dominated alternative, punk, and rock music for over 25 years. Dr. Dre, a more recent representation of the ‘Midas touch’ producer, was almost entirely responsible for the vast output of some of the biggest names in R&B and rap. It is important to keep in mind, though, that a distinctive sound is only a good thing if the style of the producer fits the artist.
A lot of people from today’s generation think the profession of ‘remix producer’ is a recent evolution in the music industry. However, the origins of the remix producer actually go back to the mid-1970s and the fusion of edits in the disco genre. These edits would be comped together to form what are known as ‘dub edits’ or ‘dub remixes.’ In the early 1980s, artists like Grandmaster Flash pioneered the sound of cutting and scratching. Shortly afterwards, sampling and MIDI took remixing to a whole new level. Now remixing has become such an essential part of the evolution and marketing of a song that the remix often becomes the top-ten hit before fans have even heard the original version.
Musicality, while one of the least recognized skills, is probably one of the most fundamental required of a producer. A good producer will add to, comment on, and counsel about the performance, songwriting, and arrangement of a song they are producing. Many producers are great musicians as well, and it is not uncommon to find them playing on the albums they produce.
Some artists take their musicality to a whole other level by actually producing the project themselves. One famous example of the artist taking on the role of producer is Chicago’s very own hit maker, R. Kelly. Not only has Kelly produced a continuous stream of hit records under his own name, he has produced hit after hit for major-label artists across the dance, pop, R&B, and soul genres. Another great example of the artist/producer is Trent Reznor of Nine Inch Nails, who does everything from writing to performing and engineering his own records.
The profession of audio engineering is not only an exciting line of work, but a very challenging one too. Yet not everyone knows what a real audio engineer does or what it takes to actually become one. On any given day, the engineer can find himself or herself working with many talented individuals who may have strong ties to the world of music. And the possibility exists to work not only with amazing vocal artists, but with movie producers and video game designers as well.
As an audio engineer, the possibilities in the audio world are endless. One of the most satisfying and rewarding parts of the job is hearing your finished work on your iPod, on your favorite radio station, or in a movie. Imagine the gratification and fulfillment of telling all your family and friends that you helped in the production of that brand-new song or movie.
If the thought has crossed your mind to pursue a career in audio engineering, it is important to know what the average salary of an engineer actually is. To a lot of people, salary is one of the most important aspects of any career, not just audio engineering. Luckily, the average salary for an audio engineer falls between $80k and $90k a year, which is 24% higher than the national average. If you reside in California or New York, the average is even higher. So, at the end of the day, choosing a profession in audio engineering can be a good decision for your finances. Many similar careers pay considerably lower salaries. For instance, a career as an audio or music producer pays an average salary of about $48k a year. That’s a little more than half of what an audio engineer would make. So it is true that audio engineering is one of the more lucrative professions in the industry.
So, the profession of audio engineering really seems to have it all: great pay, exciting work, and even pretty good benefits. But what is really stopping people from choosing this amazing career and earning the good salary that audio engineers make? For starters, even though audio engineering is a great career for many people, it isn’t necessarily the greatest job for everyone. Just because you have a passion for music and audio doesn’t necessarily mean you are going to be a great engineer. A majority of the projects that come through the studio will involve music the audio engineer might not like or prefer. This means you might have to work on rock music when your preference is only hip hop or rap. So as an audio engineer, it is important to know and understand what goes into each genre of music. This way the quality of your work isn’t based on your personal preferences, but on your skills as an audio engineer.
Another thing that turns most people away from a career in audio engineering is the amount of time it takes to become proficient. Most people currently working as audio engineers will tell you it took them a couple of years just to learn and understand the basics. Nowadays, most folks don’t have the stamina to put in the time it takes to become a great engineer. It is a 24/7 profession. The ones who become great engineers are the people whose lives revolve around their careers. If you really are prepared to go all the way and dedicate yourself 100%, then the profession of audio engineering might actually be a good fit for you. Then, just like us, one day you could be handsomely rewarded too.