All Things Being EQ-ual, pt. 1

PLEASE NOTE: This article has been archived. It first appeared on ProRec.com in April 1998, contributed by then Media and Mastering Editor Lionel Dumond. We will not be making any updates to the article. Please visit the home page for our latest content. Thank you!

Part 1: eQs and As

As the competent and conscientious recording engineer that you surely are, you’ve taken great care to record your (or your client’s) latest opus. You’ve gotten your greasy little fingers on some mics and placed them more or less in the general vicinity of the instruments being played. You’ve taken care to ensure that these instruments were tuned to a scale somewhat resembling those normally heard in modern Western music. You even carefully placed some cool crash cymbals on that dodgy part where the overly-enthusiastic vocalist overloaded your A/D converters.

You’ve soloed every track and listened. The bass sounds fat. The guitar is punchy and open. The kick is round and snappy. The snare is… well, it’s very “snarey” sounding.

So, how come your mix sounds like oatmeal?

I was once asked, “If you could only use a single effect to mix a record, what would it be?” In real life, my first reaction would probably be, “Oh man, that sucks. Does this job pay scale, and where’s the Mr. Coffee machine?” But seriously, given the restrictive parameters of that hypothetical, it’s a brain-dead-simple choice. So come with me, if you will, to the magical land of EQ.

What the Heck?

Equalization, or EQ, is a process by which a specific part or parts of the audible frequency spectrum are either cut or boosted in order to change a sound. (Deceptively simple, eh?) Like so many of the technical processes applied in the modern production of music, we have the good ol’ telephone company to thank for this one. Yes, the equalizer was originally invented by the telephony industry to compensate for the extremely uneven response of telephone systems. By boosting certain frequency bands attenuated by the transmission of sound over long distances (remember, this was when digital transmission technology was but a gleam in Ma Bell’s eye), the frequency characteristics of the sound received at the other end could be made flatter, or more “equal,” and thus more natural-sounding.
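If you like seeing things in numbers, here’s a minimal sketch of what “boosting a band” actually does to a frequency response. This is modern Python using the widely-circulated RBJ “Audio EQ Cookbook” peaking-filter formulas; the 3 kHz center, 6 dB amount, and Q value are all made up for illustration, not anything this article prescribes:

```python
import cmath
import math

def peaking_biquad(fs, f0, q, gain_db):
    """Biquad coefficients for a peaking EQ, per the RBJ "Audio EQ
    Cookbook" formulas. Positive gain_db boosts the band around f0;
    negative gain_db cuts it."""
    a = 10 ** (gain_db / 40)           # square root of the linear peak gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha / a
    b = [(1 + alpha * a) / a0, -2 * math.cos(w0) / a0, (1 - alpha * a) / a0]
    den = [1.0, -2 * math.cos(w0) / a0, (1 - alpha / a) / a0]
    return b, den

def db_at(b, a, fs, freq):
    """Magnitude response of the biquad at one frequency, in dB."""
    z = cmath.exp(-2j * math.pi * freq / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# A 6 dB boost centered at 3 kHz: the full lift right at the center,
# almost nothing a few octaves away.
b, a = peaking_biquad(fs=44100, f0=3000, q=1.0, gain_db=6.0)
print(round(db_at(b, a, 44100, 3000), 2))   # 6.0 dB at the center
print(round(db_at(b, a, 44100, 100), 2))    # essentially flat way down here
```

The point of the sketch is simply that a peaking EQ touches only the neighborhood of its center frequency and leaves the rest of the spectrum alone.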

Soon, audio engineers found that equalization technology could be used to flatten the response of PA and monitoring systems. But it didn’t stop there, of course. Through the judicious application of the Number One Studio Rule — Any Process That Can Be Used To Cool Effect Can Usually Be Abused To Even Cooler Effect — many adventurous engineer-types discovered that EQ circuits could be tortured and twisted to make certain instruments leap out of a mix, to create weirded-out effects, and to even help create whole new timbres!

Today, there are no “rules” as to the proper use of EQ. I hate reading articles that purport to give you the Proper Rules on how to do this or that in the studio. Who writes these rules, anyway? I say, screw the rules! The greatest records in history were often a result of working in a creative, free-flowing environment where there were no rules.

That being said, there are still some general guidelines, developed through years of experience and often a modicum of raw talent, that most engineers find a good starting point. I urge you to read them, absorb them, learn them, apply them, and then break as many of them as you can.

Sometimes, Less is More

Having good EQ capabilities at your disposal is not an excuse to get lazy! Getting good sound on the rust is, first and foremost, a matter of choosing the right mic, placing it in just the right spot, and, of course, having a quality instrument, properly tuned, in front of that mic. Trying to EQ a kick drum at mixdown that is tuned looser than your Aunt Gertrude’s knickers can be a nightmare. Go ahead and boost 3k on that kick track all you want to — but you’ll soon learn that you can’t effectively boost what isn’t there in the first place. Good mics, proper technique, and great instruments are the ideal, and often make EQ adjustments unnecessary. If you’ve done everything right, you may very well find that the best EQ is none at all!

So much for the ideal — now let’s get practical. As we all know, time and budget constraints in the studio can create conditions that are not always ideal. You won’t always have the perfect mic at your disposal. Not every acoustic guitar you will record is going to be a $2,000 Taylor. And it can be detrimental to your client’s happiness (and thus your bottom line) to spend 90 minutes experimenting with how far off-axis you should mike that Fender Twin. In situations like this, EQ is often your only salvation. When you’ve done the best you can, yet that timbre isn’t exactly what you were going for, judicious use of EQ can mean the difference between greatness and… ugh… so-so-ness.

Musical Shoehorn

It’s often useful to think of mixing a multitrack recording as akin to putting together a giant sonic jigsaw puzzle. Your job is to take all of the “pieces” (tracks), spread them across your “desk” (mixing console) and make them all fit into a beautifully assembled, suitable-for-framing portrait of a bowl of fruit or a gorgeously-rendered reproduction of Dogs Playing Poker (a song).

When listening to a soloed track, all by its lonesome, it may sound great. A guitar track that really spreads across the spectrum can sound wonderfully cool by itself. A bass track can sound incredibly fat and punchy if it contains everything from 60Hz to 4kHz. A piano can really sparkle, and that synth patch might knock your socks off. But take all these beautiful colors and mix them together, and you’ll likely get what you’d see if you mixed all of the beautiful separate colors from a painter’s palette together: the sonic equivalent of yucky brown goop!

The idea is to allow each instrument to occupy its own “place” in the mix so that, like a great painting, it has powerful impact as a whole, yet you can “see” (or, in our case, hear) all the individual parts as well. There are generally four ways that producers and mixing engineers accomplish this on your favorite records:

Volume (the setting of relative track levels to achieve timbral balance);
Soundstaging (the use of panning and ambiance to separate timbres in physical space);
Time (the use of delay and/or performance/arrangement techniques to separate timbres in time);
EQ (the use of EQ to separate timbres across the frequency spectrum).

The next time you listen to a great record, try to see if you can figure out which of these four techniques are being used. Chances are, you’ll hear a bit of all four at the same time! But since this is an article about EQ, we’ll focus on that technique herein. (No duh!)

Perhaps at this point, a concrete example is in order. (By now you must be thinking, “Hey Lionel, it’s about time!”) Okay, let’s say that you are Roger Nichols. You are working with this hot band called Steely Dan, and you’ve just finished tracks for a great new song called “Peg”. (I know… this already stretches the bounds of imagination, because if you are Roger Nichols you probably have no need to read an article like this. I realize that, but come on… just work with me here, people.)

A lot of engineers like to build a mix from the bottom-up and from the center-out — at least that’s the way many engineers approach things at first. (I have no idea if Roger does it this way, but he’s my guinea pig here, so tough noogies for him). So let’s say you’ve got this smokin’ poppin’ Chuck Rainey bass track to play with, and you’ve also got that groovy Bernard Purdie kick-drum track. On most pop records, the bass and kick together represent the bottom-end foundation of the tune, providing the very basic rhythmic feel of the whole piece, which in turn greatly affects the feel of the song in general. The kick-bass relationship is one of the critical cues that all listeners key in on, whether they themselves realize it or not!

So it makes sense to ask yourself at this point, “Roger, what is the basic vibe that Donald and Walter want to convey here?” As a mixing engineer, you must have a very clear idea of the style of the music being played, and the overall feel that the artists are trying to put across. This is very important! As with most endeavors, if you have no idea where you are going, you are unlikely to end up where you wanted to be.

So you have this cool, breezy, funk-jazz-groove-13-bar-thang with blues changes and a neat turnaround happening under you. You like the nice, fat, round bass and all those cool slides. You also like that pop’n’snap thing that Chuck did, and you definitely want to keep that, too.

You note that the roundness of the bass track lies in the 60 Hz to 150 Hz range. And that pop’n’snap thing is up there around 2.5kHz to 3kHz or so. But you know that, on a lot of electric bass parts, the frequencies around 250Hz can mud up the sound. You decide to cut a little around 250 Hz and see what happens. Whoa! Can you hear the meat of the kick drum a little better now? The bass and drums aren’t stepping on each other so much anymore because you’ve grooved out a little part on the bass track for the kick to come through.
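That little carve-out is easy to see in numbers. Here’s a quick sketch in modern Python, again using the RBJ “Audio EQ Cookbook” peaking formulas; the 4 dB depth and the Q are invented for illustration (the article only says “a little”), not anything from the actual session:

```python
import cmath
import math

def peaking(fs, f0, q, gain_db):
    """RBJ "Audio EQ Cookbook" peaking biquad; negative gain_db is a cut."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha / a
    b = [(1 + alpha * a) / a0, -2 * math.cos(w0) / a0, (1 - alpha * a) / a0]
    den = [1.0, -2 * math.cos(w0) / a0, (1 - alpha / a) / a0]
    return b, den

def db_at(b, a, fs, freq):
    """Magnitude response of the biquad at one frequency, in dB."""
    z = cmath.exp(-2j * math.pi * freq / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# A modest 4 dB cut at 250 Hz: the mud band comes down, while the
# 60-150 Hz roundness and the pop'n'snap region up around 2.8 kHz
# are barely grazed.
b, a = peaking(fs=44100, f0=250, q=1.4, gain_db=-4.0)
for f in (100, 250, 2800):
    print(f, "Hz:", round(db_at(b, a, 44100, f), 2), "dB")
```

Run it and you’ll see nearly the full cut at 250 Hz and only fractions of a dB of change at 100 Hz and 2.8 kHz, which is exactly why the roundness and the snap survive the surgery.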

You blend in the guitar part now, but decide to apply a highpass EQ to that track to cut everything below 80Hz. This keeps the guitar’s feel intact, yet leaves plenty of room for the bass and kick to breathe. Are you starting to get it now? Cool! Your mix is starting to really come together! You continue to EQ in this manner until the song is done. Then you pop open a cool one, kick back and relax, and casually compose your Grammy acceptance speech, thinking how awesome this song will sound coming over the PA system as you stroll up to the podium…
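That highpass move can be sketched the same way. This is a second-order highpass built from the RBJ “Audio EQ Cookbook” formulas (the sample rate and the roughly-Butterworth Q are my assumptions for illustration, not from the article):

```python
import cmath
import math

def highpass(fs, f0, q=0.7071):
    """RBJ "Audio EQ Cookbook" second-order highpass with a
    Butterworth-like Q (about -3 dB right at the corner frequency)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    c = 1 + math.cos(w0)
    b = [c / (2 * a0), -c / a0, c / (2 * a0)]
    den = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, den

def db_at(b, a, fs, freq):
    """Magnitude response of the biquad at one frequency, in dB."""
    z = cmath.exp(-2j * math.pi * freq / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# Everything below the 80 Hz corner rolls off steeply; the guitar's
# actual body and jangle, well above 80 Hz, pass through untouched.
b, a = highpass(fs=44100, f0=80)
print(round(db_at(b, a, 44100, 40), 1))    # an octave below: strongly cut
print(round(db_at(b, a, 44100, 80), 1))    # about -3 dB at the corner
print(round(db_at(b, a, 44100, 400), 1))   # essentially flat up here
```

The low end the guitar never really needed gets out of the way, which is precisely the “room to breathe” the bass and kick were asking for.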

Hey… hey YOU… WAKE UP! Back to reality. You should be starting to understand now how mixing a song is like putting together a jigsaw puzzle (remember that metaphor?). EQ is one way to make all the pieces of a song fit together. I’m not exactly sure when all of this started to become standard practice, but I was once told that this EQ technique was first used at Motown, and if you listen to those great old Berry Gordy recordings you’ll definitely hear it happening.

Is That It?

Hell no, that ain’t it. It’s likely that this article has brought up as many new questions as it answers for you. What about the many other uses of EQ, such as a creative tool in things like synthesis and sound creation? What about EQ processes such as notch filtering for restorative purposes and feedback control? Heck, we haven’t even touched on the use of EQ in mastering. And how do you learn to recognize frequency ranges by ear, anyway? What’s all this “Hz” stuff? What do “Q” and “parametric” and “highpass” mean?

Well, stay tuned for Part Two of this series, where we’ll delve into more Stupid EQ Tricks, decode all the mumbo-jumbo and technical stuff, and discuss equipment alternatives, including hardware-based vs. software-based EQ. In Part Three, we’ll host a Software EQ Shootout, where we’ll compare the sound and effectiveness of various software EQ programs and plug-ins from companies like Waves, Sonic Foundry, Cakewalk, Syntrillium, and more.

Until next time… here’s hoping that your equipment always works, your clients always pay you on time, and that all of your EQ experiences are happy ones.

And don’t ever forget… the music’s the thing!