James W. Pennington
Welcome to Byte-Mix Sound Design. This guide is geared towards beginners and students who are new to audio and music production. My past guides focused on setting up the room and building a studio; this one will focus on some basic aspects of mixing a track.
DISCLAIMER: This is --NOT-- a comprehensive guide. This is strictly what I know from experience, research, listening to other home-studio musicians, and what works for me. This "guide" also assumes you are just starting out in this field.
Mixing music is a fairly complex subject. In this guide I am going to focus on the more common or basic things that go into working with a mix. A lot of factors go into getting a good mix: the quality of the source recording, eliminating phase problems, equalization across the audio spectrum, instrument panning, where each part sits in the mix, volume levels, reverb, compression and other effect levels, and the list goes on. This guide is not so much a “How do I get a good mix” as it is a “Things to watch (or listen) for”. I cannot tell you how to get a good mix. Only your own ears can tell you that.
Temper that with the fact that “you can’t mix what you can’t hear.” So having a good, treated room with good monitors is highly important. In fact, getting your room established should be your first goal, and I have covered some of that in other documents.
I: Establishing a Workflow:
Before you get to work on the mix itself, you should probably take time to establish a workflow and figure out what the mix needs. What I do, after getting the raw recording, is listen to the whole track and mentally note problem areas and overall levels. Before I start actually mixing, I’ll go through each track, hunt down anomalous noises (pops, clicks, background noise), and try to eliminate any plosives coming from the vocal track. It is important to clean up the audio before getting into mixing because these anomalies can create a false sense of where the audio level is sitting. I’ll also place a fast fade-in and fade-out at the start and end of all the tracks, so that I don’t have to remember to do it later. You want those in place in the final anyway so that you don’t get a sudden pop at the beginning or end of the track when the noise or music starts and ends.
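If you're curious what those fades actually do to the samples, here's a rough sketch in Python. This is just an illustration, not how your DAW does it internally; I'm assuming each track is a mono NumPy array of float samples, and the function name and defaults are made up for the example:

    import numpy as np

    def apply_edge_fades(audio, sample_rate, fade_ms=10):
        # Short linear fade-in/fade-out so the track doesn't start or
        # end on a non-zero sample (which is what causes the pop).
        n = int(sample_rate * fade_ms / 1000)
        faded = audio.copy()
        ramp = np.linspace(0.0, 1.0, n)
        faded[:n] *= ramp          # fade-in at the start
        faded[-n:] *= ramp[::-1]   # fade-out at the end
        return faded

A 5 to 20 ms fade is usually plenty; much shorter and you risk leaving an audible click, much longer and you may hear the fade itself.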
Once the audio cleanup is finished, I’ll go through again, listening to the track as a whole, paying attention to the overall balance, what instruments or vocals need to be improved upon, whether there are problem spots that cause genuine clipping, or sections that have a lot of sounds clashing together. Then I’ll focus on each track individually and rough in ballpark placements of EQ, compression, gating if it is needed, initial pans/placement, and reverb to add space. I’ll do this on each track, using only the effects I think it needs to correct problems or add quality and get it sounding good. I’ll generally go back and forth between individual tracks and the song as a whole until I get it to where I think it sounds good. At this point I have not touched the master channel (and I usually won’t touch it).
Once I think I’ve got something decent, I’ll burn a test CD and go listen on other devices: boom box, car stereo, other speakers, other headphones, my ear buds, etc. This is so I can gauge how well the mix translates between different systems. After all, the purpose of mixing in the first place is to get a good, balanced sound that will hold up on a multitude of audio systems. Usually my first attempt will be off a bit, so I’ll make a note of where the problems are, go back, fix them, tweak other settings a smidge, and not touch the stuff that sounds good already. I’ll usually have a finished mix that I’m happy with by the 3rd or 4th test CD.
II: EQ and Frequencies:
In my opinion, one of the more important aspects, and one of the more difficult to get an ear for, is the EQ spectrum: the range of audible frequencies from 20 Hz on up to about 20 kHz or so. I understood low end, midrange, and highs, and I understood the concepts of something sounding squawky, muddy, boomy, or hissy. But I didn’t quite have a finger on exactly where in the spectrum those qualities tend to emerge. Having a chart of frequencies and their attributes is invaluable in the learning stage. Eventually your ears will learn, and using your ears is the most important thing you can do when mixing a track. Sure, any number of meters, software, etc. can give you visual feedback, but it doesn’t mean squat if you don’t trust your ears. In the end all that matters is the sound, and to that end, listening is key.
When I first start on a piece, once everything is recorded or composed and sitting in its track and I’m happy with the result, I leave all the EQs flat at first and give the whole project a listen. I try to gauge what fits nicely, what sticks out too much, whether something is buried. Basically, I use my ears to tell me what the track needs. As a rule of thumb, I tend to roll off everything below 30 Hz on all instrument tracks, and below 20 Hz on the master channel. Granted, where you roll off depends on the type of music you’re working with. If the music is bass heavy and has a lot of rumble, you might roll off lower, around 20 or 25 Hz. Rock, you might go around 35 to 40 Hz. Classical and jazz, probably around the same. This is done to get rid of the sub-bass “mud” that can diminish the clarity of the mix if there is too much present. Though, if you’re mixing for a club scene, you may want to adjust the roll-off so that you keep a nice rumble. Your ears will tell you what the track needs.
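If you want to see what that roll-off is doing, here's a minimal sketch using SciPy's Butterworth filter as a stand-in for the high-pass band of a DAW EQ (cutoff and slope are just example values):

    from scipy.signal import butter, sosfilt

    def rolloff_low_end(audio, sample_rate, cutoff_hz=30, order=4):
        # 4th-order high-pass: attenuates everything below cutoff_hz
        # at roughly 24 dB per octave, clearing out sub-bass mud.
        sos = butter(order, cutoff_hz, btype="highpass",
                     fs=sample_rate, output="sos")
        return sosfilt(sos, audio)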
Next, I’ll basically try to adjust levels of individual instruments so that nothing is overpowering or buried. I listen to each instrument on its own to see if it sounds okay, isn’t too squawky, doesn’t “sizzle or hiss”. I’ll usually roll off or shelve the high frequencies (above 10 or 12 kHz) if something is hissing too much.
Generally, I try to leave the midrange frequencies alone unless there is an obvious problem. Sometimes you might need to boost an acoustic instrument or vocal track in the upper mids to give it more presence or clarity. Sometimes you’ll need to cut in the lower mids to reduce a “muffled” sound (sometimes around 400 Hz, sometimes around 800 to 900 Hz). The goal is to adjust things so that every instrument has its own space in the mix, and its frequencies don’t crowd or overlap too much into the frequencies of the instruments around it.
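Those boosts and cuts are what a parametric EQ band does under the hood. For the curious, here's a sketch of a bell cut based on the well-known Audio EQ Cookbook peaking filter; the function name and defaults are mine, and a 2 to 3 dB cut around 800 Hz is the kind of “muffled” fix described above:

    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(audio, sample_rate, freq_hz=800, gain_db=-3.0, q=1.4):
        # RBJ cookbook peaking (bell) filter: negative gain_db cuts,
        # positive boosts; q controls how narrow the bell is.
        a_lin = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * freq_hz / sample_rate
        alpha = np.sin(w0) / (2 * q)
        cos_w0 = np.cos(w0)
        b = np.array([1 + alpha * a_lin, -2 * cos_w0, 1 - alpha * a_lin])
        a = np.array([1 + alpha / a_lin, -2 * cos_w0, 1 - alpha / a_lin])
        return lfilter(b / a[0], a / a[0], audio)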
For example, say you finish recording with a rock band, you’ve got all your tracks into the DAW, and you are ready to get to work. Give the raw sound a good critical listen through the whole song. The first thing you’ll probably want to do is find out what frequencies jump out at you: where the bass guitar and rhythm/lead guitars cut into each other, how the instruments sound together, and how badly they clash in the mix space. Usually you’ll want a bit of separation between the guitars so they don’t push each other around too much. What I try to do is set it so the bass guitar “flows” into the rhythm/lead guitars. So I might create a low-pass filter on the bass and set it around 250 to 300 Hz, maybe up to 400, depending on the type of music and the style of bass. To separate the rhythm/lead guitars, you’ll probably just need to pan them so they have their own space in the left or right channel of the mix. If together they boost the midrange or higher frequencies too much, you might consider a slight narrow cut at the offending frequencies, or a high-shelf filter in the upper frequencies if things sound too “fizzy”.
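That bass low-pass is the mirror image of the sub-bass roll-off sketched earlier. Assuming the same NumPy/SciPy setup, it might look like this (the 300 Hz default is just one point in the range above):

    from scipy.signal import butter, sosfilt

    def lowpass_bass(audio, sample_rate, cutoff_hz=300, order=2):
        # Gentle low-pass so the bass stays out of the guitars' midrange.
        sos = butter(order, cutoff_hz, btype="lowpass",
                     fs=sample_rate, output="sos")
        return sosfilt(sos, audio)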
Next you’ll need to look at the drums. How do they sound in relation to everything else? Are the cymbals too hissy? Is the kick too boomy, or does it sound like cardboard? Does the bass guitar cut too much into the kick drum? Is anything getting buried? You’ll need to listen, and make adjustments as necessary so that nothing gets buried or interferes/muddies up the sound of other instruments.
For the vocals, I hesitate to do much, if any, EQing at all. Too strong an EQ on a vocal track can make the voice sound very unnatural. It is far better to make space for the vocals by EQing other instruments than to EQ the vocal track directly. Mostly what you want to look out for on vocals is sibilance, the hissy sound of S’s and T’s. A little bit of a high-shelf cut will often be enough to tame those a bit, along with some very light compression (which we’ll get into later) to tame any spikes. Other times, a vocal recording might have a harsh lower-mid sound, depending on the singer or the recording environment. Cutting around 800 Hz to 1000 Hz can help reduce some of that harshness/listening fatigue and bring a little more clarity to the vocal sound.
So, in a nutshell, EQing is often the first step you’ll want to take in achieving a good balanced mix. You want the instruments to all have their proper place in the spectrum, and have their own space in the mix so that frequencies from other instruments don’t clash with each other and muddy up the sound. This will also lend itself to creating a good balance that will translate to other speaker systems, and that is the ultimate goal; a mix that translates well.
III: Panning and Position:
The next major thing besides EQ and frequencies is the actual position of the instruments within the mix. Try to imagine the stage of a venue. A stage exists as a three-dimensional space: left, right, front, back, top, and bottom. Try to imagine where the instruments generally sit within the stage. Where do the drums sit? The bass? The guitars? Vocalist? Keyboards? Every instrument requires room on the stage, and so they will all need their location and position in the mix as well. Often you’ll hear the bass out of the left and treble out of the right, but that’s not always the case. Often you might hear the guitars in both left and right speakers to create a larger, thicker sound (a.k.a. the “wall of sound”). Vocals you usually want front and center (or very close to center). Overall, you want to place things where they sound best in the mix.
When you first load your recordings into tracks inside the DAW, chances are they’ll all be panned front and center. This is not an ideal situation. There are many ways to manipulate the positions of instruments in the mix, and we’ll focus on three main tools: panning, volume, and effects. Panning controls how far left or right an instrument sits: whether it comes entirely out of the left or right speaker, out of both equally, or out of one speaker more than the other. Volume helps communicate how far forward or back an instrument sits in the mix. Effects such as reverb can also affect how far forward or back a sound sits, or even the size of the room the instruments seem to be playing in.
Let’s look at panning first. Panning is a pretty simple concept. Turn the panning knob left, and the sound is focused more and more out of the left speaker the more you turn it. Same thing if you turn it to the right; more of the sound will come from the right speaker. This is used mostly to set the location of the instrument within the stereo field. However, you can also automate a panning knob to have a sound move between the left and right speaker channels as an effect. This is often done with synth sounds in electronic music to create “ear candy” and helps to hold the listener’s interest. You might hear this done with guitar tremolos, though more subtly.
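Under the hood, a pan knob is just a pair of gains, usually following a constant-power law so the sound doesn't get louder or quieter as it moves across the field. Here's a sketch of both a static pan and a slow sine-wave auto-pan (mono NumPy input; the names and rates are made up for the example):

    import numpy as np

    def pan_mono(audio, pan):
        # Constant-power pan: pan=-1 is hard left, 0 center, +1 hard right.
        # At center both gains are ~0.707, so overall power stays even.
        angle = (pan + 1) * np.pi / 4
        return np.stack([audio * np.cos(angle), audio * np.sin(angle)], axis=-1)

    def autopan(audio, sample_rate, rate_hz=0.5, depth=0.8):
        # Sweep the pan position with a slow sine LFO for "ear candy".
        t = np.arange(len(audio)) / sample_rate
        angle = (depth * np.sin(2 * np.pi * rate_hz * t) + 1) * np.pi / 4
        return np.stack([audio * np.cos(angle), audio * np.sin(angle)], axis=-1)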
Rather than focusing on volume, we’ll next talk about how forward or back an instrument sits in a mix, and different ways to manipulate that. Generally, in any music involving vocal work, you want the vocals to come out clear and easily understood, so you’ll usually want the main verses up front and center. However, you probably don’t want them dominating everything else in the mix, and you probably don’t want them sounding too dry. A little verb and compression can help them sit back in the mix and gel with the instrumentation.
I try to set the bass and the kick drum to comparable levels, and then bring them up so you can just hear them above the other instruments. Again, depending on the room, low end can be difficult to work with, and it is very easy to exaggerate a cut or boost to low end frequencies in the mix. As a sanity check, I will often listen on multiple sources until the bass and kick gel with everything else.
IV: Audio Editing:
Sometimes you will need to make edits to the waveforms themselves. This includes cross-fades, fade-ins and fade-outs, removing plosives, lining tracks up with each other to get a tighter sound, and making sure you aren’t getting phase issues between instruments. This is the nitpicky dirty work of the mix, and probably the most tedious and least fun part of the job.
You might get a singer who stays way too close to the mic, leaving a lot of plosives you have to hunt down and kill. Some DAWs allow you to redraw the waveform directly. In others, you might need to get very creative with cutting and cross-fading. A plosive is very easily recognized in that it will have a much higher peak/trough compared to the other parts of the vocal, usually with a fairly distinct sinusoidal waveform. These are usually easy to deal with, though sometimes you might get an extended plosive caused by air coming from “F”s or “S”s or other airy consonant sounds. So you may need to play with the time selection or scrub the audio track to find the start and stop points of the unwanted noise. Then select the time, split it from the rest of the track, and yank the volume down on that clip itself (not the fader, as the fader will affect the whole track).
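In code terms, that “split it and yank the volume down” move looks something like this. It's a sketch, assuming a mono NumPy array, and the short ramps at the edges stand in for cross-fades so the gain change itself doesn't click:

    import numpy as np

    def duck_region(audio, sample_rate, start_s, end_s, gain_db=-18, fade_ms=5):
        # Pull a marked region (e.g. a plosive) down by gain_db, ramping
        # in and out so the edit itself stays silent. Assumes the region
        # is at least a couple of ramps long.
        i0, i1 = int(start_s * sample_rate), int(end_s * sample_rate)
        n = int(sample_rate * fade_ms / 1000)
        gain = 10 ** (gain_db / 20)
        out = audio.copy()
        out[i0:i1] *= gain
        ramp = np.linspace(1.0, gain, n)
        out[i0:i0 + n] = audio[i0:i0 + n] * ramp        # ease in
        out[i1 - n:i1] = audio[i1 - n:i1] * ramp[::-1]  # ease out
        return out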
Pops and clicks are handled a little differently. A pop or click in a track is (usually) caused by a disconnection in the waveform. For example, if you zoom in very close on the wave, you may see a spot where a trough suddenly jumps to a peak, creating a near-vertical line. This is almost always the source of a pop or click. Sometimes you can just silence it like you would a plosive, but other times you might need to cross-fade it to get the disconnected point to, well, reconnect.
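A cross-fade is just the outgoing clip fading down while the incoming clip fades up across the seam. A minimal sketch, using equal-gain linear fades (fine for joining two halves of the same take):

    import numpy as np

    def crossfade_join(clip_a, clip_b, sample_rate, fade_ms=10):
        # Overlap the last n samples of clip_a with the first n of
        # clip_b so the join point never jumps discontinuously.
        n = int(sample_rate * fade_ms / 1000)
        fade_out = np.linspace(1.0, 0.0, n)
        blended = clip_a[-n:] * fade_out + clip_b[:n] * (1.0 - fade_out)
        return np.concatenate([clip_a[:-n], blended, clip_b[n:]])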
Sometimes you might get two tracks that aren’t quite aligned in time. For example, you might have a lead singer and backup singers that sound slightly off from each other. Fixing it can usually be done by simply lining up the tracks for tighter timing. You may need to split off the mistimed sections first; otherwise, moving the whole track forward or back risks knocking the parts that are in time out of time.
Lastly, you may have phase issues that need to be dealt with. A phase problem is created when two mics recording the same source are positioned slightly out of phase (read: disobeying the 3:1 rule). This will usually create an ugly, thinner sound that you can’t really fix no matter how much EQing you do. And God forbid those two mics were routed to the same subgroup/track during the recording phase! Usually what you can do is flip the phase (polarity) switch on one of the tracks, and then make EQ adjustments to bring back the fullness of the sound. Nudging the tracks so the waveforms line up is another way to eliminate phase problems.
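The polarity flip is literally a multiply by -1, and the right nudge can be found by cross-correlating the two tracks. A brute-force sketch, assuming two equal-length mono NumPy arrays (slow but clear):

    import numpy as np

    def flip_polarity(audio):
        # The "phase" switch on a console: invert the waveform.
        return -audio

    def best_nudge(track_a, track_b, max_lag=2000):
        # Try every offset within +/- max_lag samples and keep the one
        # where the two tracks line up best. A positive result means
        # track_b arrives that many samples late relative to track_a.
        core = track_a[max_lag:-max_lag]
        lags = range(-max_lag, max_lag + 1)
        scores = [np.dot(core, track_b[max_lag + lag:len(track_b) - max_lag + lag])
                  for lag in lags]
        return list(lags)[int(np.argmax(scores))]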
V: Effects:
I view effects as a tool to surgically fix things, and otherwise offer ear candy and flavor to the mix. However, I view use of effects in a mix as I do spices while cooking. You don’t want to overdo it. My personal rule of thumb on using an effect is to set it to where it sounds “cool” and then dial the wet/dry knob about 2 or 3 marks more towards dry. This is usually how I handle things like reverb, delay, chorus, or anything that alters the sound.
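That wet/dry knob is nothing more than a weighted blend of the processed and unprocessed signals; “dialing back towards dry” just means lowering the mix value:

    def wet_dry_mix(dry, wet, mix=0.2):
        # mix=0.0 is fully dry, 1.0 is fully wet; small values keep the
        # effect as seasoning rather than the main dish.
        return (1.0 - mix) * dry + mix * wet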
Compression is a little different. Sure, you can drive a compressor hard and use it as an effect, and that is great for certain types of music. For what I work with, though, I use it to keep peaks and dynamics under control. I want the music I work with to stay dynamic, so I tend to use small amounts just to tame peaks in the sound. I often use it on vocals and low-frequency sounds. Compression can also be a good tool to help “glue” sounds together in the mix.
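For the curious, here's what a bare-bones compressor is doing internally. This is a sketch of a generic feed-forward design; the threshold, ratio, attack, and release values are illustrative, and a sample-by-sample Python loop is slow (real plugins do this in optimized code):

    import numpy as np

    def compress(audio, sample_rate, threshold_db=-18.0, ratio=3.0,
                 attack_ms=10.0, release_ms=100.0):
        # Follow the signal level, and once it crosses the threshold,
        # reduce the overshoot by the ratio (3:1 leaves a third of it).
        atk = np.exp(-1.0 / (sample_rate * attack_ms / 1000))
        rel = np.exp(-1.0 / (sample_rate * release_ms / 1000))
        env = 0.0
        out = np.empty_like(audio)
        for i, x in enumerate(audio):
            level = abs(x)
            coeff = atk if level > env else rel
            env = coeff * env + (1 - coeff) * level      # envelope follower
            level_db = 20 * np.log10(max(env, 1e-9))
            over_db = max(0.0, level_db - threshold_db)
            gain_db = -over_db * (1 - 1 / ratio)         # gain reduction
            out[i] = x * 10 ** (gain_db / 20)
        return out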
Some reverb can help the mix have some breathing room and a sense of depth and space. I usually use verb on individual tracks (you generally don’t want your mix swamped in verb). Though sometimes, I’ll use it on the master channel if I want to give the sense of everything being in the same space (such as with a live recording of a band).
Delays and chorus can help thicken sounds or create ear candy using automation and panning effects. You might bring up a chorus effect to thicken the vocals for the chorus of the song, or use delay to beef up the lead guitar solo. There are many correct ways to use effects, and there is a lot of room for creativity and artistic sense.
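A basic delay effect is easy to picture in code: copy the signal, push it back in time, and feed a bit of the output back in so each repeat comes back quieter. A sketch under the same NumPy assumptions as before (the delay time, feedback, and mix values are just examples):

    import numpy as np

    def feedback_delay(audio, sample_rate, time_s=0.35, feedback=0.4, mix=0.25):
        # Each echo is the previous one delayed by time_s and scaled
        # by feedback, so the repeats die away naturally.
        d = int(time_s * sample_rate)
        n = len(audio) + 8 * d                 # room for the echo tail
        dry = np.zeros(n)
        dry[:len(audio)] = audio
        echoes = np.zeros(n)
        for i in range(d, n):
            echoes[i] = feedback * (dry[i - d] + echoes[i - d])
        return (1 - mix) * dry + mix * echoes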
I usually don’t use effects on the master channel; I leave that type of thing to the mastering guys. Though sometimes a band just wants a demo, or they can’t afford a mastering guy, so I’ll do some sort of “finalization” on the master channel to help bring the gain up and give the overall sound a little flavor with some very slight EQ. I’ll also usually use a brick-wall limiter set to -0.4 dBFS to stop peaks that might clip, but I usually won’t use the makeup gain unless the overall mix is too quiet after balancing everything.
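To see what that ceiling number means: -0.4 dBFS is about 0.955 in linear sample terms. A real brick-wall limiter looks ahead and rides the gain smoothly, but a crude sketch of the “nothing gets past this point” idea is just a hard clip at that value (this distorts if it works hard, so it's illustration only):

    import numpy as np

    def hard_ceiling(audio, ceiling_dbfs=-0.4):
        # Convert the ceiling to linear and clamp every sample to it.
        ceiling = 10 ** (ceiling_dbfs / 20)   # ~0.955 for -0.4 dBFS
        return np.clip(audio, -ceiling, ceiling)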
VI: Closing:
I hope this gives readers some ideas on how to better prepare their mixes. There is a lot more involved than I have covered here, and there are many situations that need to be handled individually. This guide was more about giving a list of things to keep in mind while working. Of course, starting with a good recording will help the mix practically mix itself. Your ears are your best judge, so trust them.