As a follow-up to my previous article about using EQ in a live setting, today we're going to focus on how we can use EQ in the studio. While our EQ principles remain the same in both settings, a number of idiosyncrasies lead us to take certain approaches in one situation over the other. Right off the bat, we tend to focus more on using EQ for artistic and coloration purposes in the studio, whereas in the live world, we often use EQ as a problem-solving tool before we even consider its artistic side. With feedback issues no longer playing a role in the sonic landscape of the studio, we can really reach into a sound and begin to craft it.
That doesn't necessarily mean we won't ever use EQ in the studio for corrective purposes; it merely means that as long as we keep our GIGO (garbage in, garbage out) principle in mind, we have a little more control over achieving the right sound at the source and can rely on that to get us most of the way to our finished product. The more things you can fix when a sound is captured, the better off you'll be when you go to mix a record.
With the wonderful processing power available in all the various DAWs these days, many engineers and mixers become complacent with a "we'll just fix it in the mix" mentality. While we have almost infinite options at our fingertips with plugins and processing, there's still a finite amount of corrective EQ we can apply to a sound. Sometimes we end up mixing a project that someone else tracked, something recorded in "the heat of the moment," or a live recording where the multitrack was an afterthought; in those cases, the original material may be less than ideal, and some of those corrective moves may need to be utilized. A properly recorded source, however, will win over processing every time, hands down. What tastes better: a dish that's properly cooked and seasoned as it's prepared, or mediocre food slathered in sauce?
Many people might ask, "Where does one start with EQ in a mix?" Once you have your levels somewhat balanced and in the general ballpark of where they're going to sit, the first step in the studio tends to be carving out little homes for all the sounds in the mix. Imagine for a second that you're a sculptor, and right now, your mix is more or less a marble block in the rough shape of your perceived statue. The features and details are there but just need to be chiseled out to bring them into focus.
This can be done by removing frequencies in one sound/instrument that may be getting in the way of, or "masking," another sound. Now, this isn't so much a 1:1 ratio of "I'm emphasizing 1 kHz over here by 2 dB, therefore I must cut it here to match." Think of it more as a whittling away of certain areas so that your ear is drawn to others in that range. We'll talk about addressing specific frequency ranges for different instruments later, but when in doubt, use your ears! These frequency ranges can be a great starting point, but at the end of the day, your sound is unique to your situation and will need its own unique handling.
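If you're curious what a single "carving" move looks like under the hood, here's a minimal numpy sketch of one peaking-EQ band using the well-known Audio EQ Cookbook (Robert Bristow-Johnson) biquad formulas. The function name and the 3 dB dip at 1 kHz are purely illustrative choices, not a recommended setting.

```python
import numpy as np

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    """Apply one peaking-EQ band (RBJ cookbook biquad) sample by sample.

    A negative gain_db 'carves' a dip at f0; a positive one boosts it.
    """
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)

    b0, b1, b2 = 1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a

    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = (b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2, y1, y2 = xn, x1, yn, y1
        y[n] = yn
    return y

# Illustrative "carve": a 3 dB dip at 1 kHz to make room for another element.
fs = 48_000
t = np.arange(4096) / fs
tone = np.sin(2 * np.pi * 1000 * t)
carved = peaking_eq(tone, fs, f0=1000, gain_db=-3.0)
```

The point of the sketch is the shape of the move: a gentle, moderately wide dip, not a brick wall, is usually all it takes to let another element through.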
While this "carving" approach helps things sit better together, there are a number of newer options that combine stereo field processing and EQ to let us go even further, like Mid/Side EQ processing. It's not a completely new concept, but the ease of use of these modern plugins allows things in the center of the mix to be treated differently than those on the sides, letting us not only cut out "space" within the frequency spectrum but also play with the stereo field simultaneously. (One example is the FabFilter Pro-Q, one of my absolute favorite go-to EQ plugins.)
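To make the Mid/Side idea concrete, here's a minimal numpy sketch of the encode/decode math (function names are my own). Anything identical in both channels lands entirely in the mid signal, so processing applied only to the side signal leaves centered content untouched.

```python
import numpy as np

def ms_encode(left, right):
    """Mid = what L and R share; Side = how they differ."""
    return (left + right) / 2.0, (left - right) / 2.0

def ms_decode(mid, side):
    """Reconstruct left/right from mid/side."""
    return mid + side, mid - side

# A perfectly centered source (identical L and R) has a silent side channel,
# so EQ applied only to the side signal can't touch it.
left = np.array([0.5, -0.25, 0.1, 0.0])
right = left.copy()
mid, side = ms_encode(left, right)
l2, r2 = ms_decode(mid, side * 0.25)  # e.g., attenuating only the sides
```

This is why an M/S EQ can, say, thin out wide reverb tails without moving a centered lead vocal at all.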
Also, one thing to note when listening to your mix in order to EQ a sound: make sure you're making your EQ decisions in context. It's very easy to solo things, EQ everything in isolation, and spend a massive amount of time getting individual elements to sound their best, only to find that back in the mix, everything sounds muddy or washed out. People will be listening to your mix as a whole, so if it doesn't sound right in context, then it's not right, no matter how immaculate it sounds alone. Don't worry: it's not uncommon for individual elements in a finished mix to sound bad on their own but killer in the mix. Our ears do some crazy psychoacoustic things, hearing things that aren't actually there, so just remember that context is king.
In the studio, the number of different EQs available can sometimes be mind-numbing, especially in the plugin world. While I'm a strong believer that choosing one or a small handful to master and use as go-to pieces/plugins is a best practice, not all EQs are created equal, and those unique sonic signatures can be exploited to our advantage with some practice. Whether it be the unique EQ curves and resonances created by a Pultec, the bite and sizzle of an API module, or the musicality of the SSL console EQ, these specific sounds can be as much of an artistic choice as a functional one.
In all honesty, using your own ear to find which filter sounds are appealing to you is the best way to discover the differences in these EQ types. Despite the highly technical nature of the audio field, it's still artistic, and there aren't necessarily right and wrong answers.
Beyond its classic function, EQ can also be used to help other processors do their jobs more effectively, or to change the effect of that processing. For example, when we set up a sidechain for a compressor, or any time we're bussing a signal to trigger some form of processing, we can use EQ and filters to change the response of that processor.
Let's say we have a very heavy bass signal triggering a compressor. That compressor is going to clamp down on the signal every time the bass hits, which may be the desired effect. But if we want the compressor to respond to overall changes in the dynamics of the signal, not just those big bass hits, we can put a filter in the path of the control signal to remove the majority of the low end; the compressor will then react to the rest of the signal without being dominated by the bass.
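As a rough numpy illustration of that idea (the filter, signals, and 150 Hz cutoff here are simplified stand-ins, not a real compressor design), high-passing the detector path dramatically lowers the level the compressor "sees" from the bass:

```python
import numpy as np

def one_pole_highpass(x, cutoff_hz, fs):
    """Simple first-order RC-style high-pass filter."""
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

fs = 48_000
t = np.arange(fs // 4) / fs
bass = np.sin(2 * np.pi * 60 * t)          # big 60 Hz hits
rest = 0.2 * np.sin(2 * np.pi * 1000 * t)  # quieter midrange content
mix = bass + rest

# Average rectified level of the raw detector vs. one high-passed at 150 Hz:
raw_level = np.mean(np.abs(mix))
filtered_level = np.mean(np.abs(one_pole_highpass(mix, 150.0, fs)))
# filtered_level comes out far lower, so a compressor keyed from the
# filtered signal no longer ducks on every bass note.
```

Only the detector path is filtered; the audio you actually hear still passes through the compressor at full bandwidth.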
Thinking about the EQ's role can also help answer the constant question of many home studio engineers: what comes first, the compressor or the EQ? While the real answer, like most things in the audio world, is "it depends," you can start by asking yourself, "Do I want the compressor to react to the full spectrum of the sound I have recorded and then use the EQ to shape that output, or do I want to shape the sound a little more and then have the compressor react to that?" That will usually help sort out your proverbial chicken-or-egg situation.
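A toy numpy demonstration of why that ordering question matters (the static "compressor" and crude low-cut below are purely illustrative, not realistic processors): the two chains give different results because the compressor reacts to whatever it's fed.

```python
import numpy as np

def compress(x, threshold=0.5, ratio=4.0):
    """Toy static compressor: anything over the threshold is scaled back."""
    mag = np.abs(x)
    over = np.maximum(mag - threshold, 0.0)
    out_mag = np.where(mag > threshold, threshold + over / ratio, mag)
    return np.sign(x) * out_mag

def low_cut(x):
    """Crude first-difference high-pass, standing in for a corrective EQ."""
    return np.concatenate(([x[0]], np.diff(x)))

fs = 48_000
t = np.arange(2048) / fs
signal = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)

eq_first = compress(low_cut(signal))    # compressor reacts to the EQ'd sound
comp_first = low_cut(compress(signal))  # compressor reacts to the full spectrum
# The two orderings are not interchangeable: the outputs differ.
```

Here, cutting the lows first means the compressor barely works at all, while compressing first means it spends its gain reduction on bass that the EQ removes anyway: exactly the trade-off the question above asks you to think through.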
We can also use EQ to help shape the returns from things like reverbs and parallel processing. Especially with digital reverbs, there can be a number of anomalies in the frequency response that make the effect feel and sound unnatural, or make it too apparent in the mix (sometimes effects should be "felt" and not heard), so we can use EQ and filtering to blend the sound the way we want.
Now for a disclaimer: All of this being said, in today's age of highly graphical DAWs, make sure you're using your ears and not your eyes. With so many plugins and options, it can be easy to go absolutely crazy with EQ and other processing. Odds are your cuts and boosts don't need to be that extreme, that narrow, or that numerous. Let your ear guide you, and before you add another band of EQ, ask yourself why you're about to introduce that change to the sound. Does it need to be there, or does it just look good? Are you addressing an issue, or do you just feel like you're not "mixing" until all your insert paths are filled and every parameter has been tweaked?
One of my audio mentors early on said something that really blew my mind at the time, especially right on the heels of having so many theories and practices drilled into me. The truth is, at the end of the day, if it sounds good, it is good. It doesn't matter what the Pro Tools session looks like, how many plugins you use, which developers made the plugins, whether you used real outboard gear or a software emulation... none of that matters as long as it sounds good. The listener ultimately will have no idea what happened behind the scenes, so use your ear, and make it nice.
For more tips on getting the best sound every time, check out more from our resident "Angry Sound Guy."
Aaron Staniulis is not only a freelance live sound and recording engineer, but also an accomplished musician, singer, and songwriter. He has spent equal time on both sides of the microphone working for and playing alongside everyone from local bar cover bands to major label recording artists, in venues stretching from tens to tens of thousands of people. Having seen both sides at all levels gives him the perfect perspective for shedding light on the "Angry Sound Guy." You can find out more about what he’s up to at aaronstaniulis.com.