
Common Mistakes to Avoid When Recording Your Songs

15 Mar 2017


Home recording has become (and is still becoming) more and more affordable for any aspiring sound engineer and producer. The rise of the DAW lets you record and produce basically anything that comes to mind without having to rely on higher-budget recording studios.
Well, at least for the first part of the process: depending on what you’re looking for, home recording can be a good way to gather a fair amount of material that can then be mixed in a professional environment.
So if you’re an engineer with decent equipment for recording and producing music in your basement, here are a few mistakes you should avoid.
Mic Level
If you’re recording live instruments, correctly setting the level of each microphone and mic preamp is the first thing to keep in mind. Unless you have a high-budget audio interface with good AD/DA converters, recording a quiet source with an extremely low mic gain means recording a fair amount of background noise together with the signal.
At first sight (and hearing) you might not notice it, but once you start adding compression and raising the overall volume, that noise can become a real issue.
On the other hand, if your mic level is too high you can run into distortion and digital clipping. If you’re recording to tape this might not be a big deal, since many engineers intentionally chase that nice warm, crunchy, analog harmonic distortion.
But once you’re working in the digital domain, recording above 0 dBFS means losing and degrading part of the signal. Recording with a high gain also leaves too little headroom for further compression and processing.
Make sure your recording levels peak around -6 dBFS and don’t drop below -25 dBFS.
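As an illustration, that peak-level check can be sketched in a few lines of Python. This assumes float samples normalized so that full scale is 1.0; the function names are mine, not from any particular library:

```python
import math

def peak_dbfs(samples):
    """Peak level of float samples (full scale = 1.0) in dBFS."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(peak)

def level_ok(samples, lo=-25.0, hi=-6.0):
    """True if the recorded peak sits inside the suggested window."""
    return lo <= peak_dbfs(samples) <= hi

# A take peaking at half of full scale sits right around -6 dBFS:
print(round(peak_dbfs([0.1, -0.5, 0.3]), 1))  # -6.0
```

Anything hotter than about -6 dBFS fails the check, as does a take so quiet it peaks below -25 dBFS.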
Too Close Miking
You could reasonably argue that this point isn’t actually a “mistake”, but it’s still a tip many sound engineers don’t always consider. For various reasons, some recordists aim for maximum isolation between the performers, placing them in isolation booths and using extreme close-miking techniques that alter the instruments’ timbres.
If the room is big enough for a live recording and the musicians really know how to play together, try capturing the performance rather than the individual sources. Spill from other sources can be annoying when you listen to a soloed mic, but it can end up gluing the whole song together during the mix.
If you’re tracking a band playing live, make sure you get a good balance between the instruments directly in the live room before you start placing mics all over the place.
Remember the “3:1 rule”: when two mics pick up nearby sources, the distance between the mics should be at least three times the distance from each mic to its own source.
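The arithmetic is trivial, but a tiny sketch makes the rule concrete (the function name is just for illustration; distances are in metres):

```python
def min_mic_spacing(source_distance):
    """3:1 rule: two mics on nearby sources should be at least three
    times farther from each other than each is from its own source."""
    return 3.0 * source_distance

# A mic 20 cm from its source calls for at least 60 cm to the next mic:
print(round(min_mic_spacing(0.20), 2))  # 0.6
```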
Stereo miking techniques can be a good choice for certain sound sources, but that means being aware of other issues, such as phase distortion and mono compatibility.
Out Of Phase
When you record a sound source with multiple microphones placed in different positions (e.g. drums), the sound will arrive at each microphone with a different time delay. This means that summing the signals can cause phase interference and a certain amount of frequency cancellation.
A classic example is the mic placement on the snare drum: it’s a common practice to place a mic on the top batter head and another one on the bottom resonant head, both pointing at the center of the drum. 
The snare hit will reach both mics at almost the same time but with opposite polarity: before hitting the record button, try “flipping” the phase of one of the two mics and check whether the overall snare sound improves.
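In a DAW or preamp you’d just flip a polarity switch, but the idea behind “does it improve?” can be sketched in plain Python: sum the two mics with and without the flip and keep whichever version cancels less energy (function names are hypothetical):

```python
def rms(x):
    """Root-mean-square level of a list of float samples."""
    return (sum(s * s for s in x) / len(x)) ** 0.5

def best_polarity(top, bottom):
    """Compare the summed level with the bottom mic as-is vs. flipped;
    the flip that keeps more energy means less phase cancellation."""
    straight = rms([t + b for t, b in zip(top, bottom)])
    flipped = rms([t - b for t, b in zip(top, bottom)])
    return "flip bottom mic" if flipped > straight else "leave as is"

# Two near-opposite signals cancel badly unless one is flipped:
top    = [0.5, -0.4, 0.3, -0.2]
bottom = [-0.5, 0.4, -0.3, 0.2]
print(best_polarity(top, bottom))  # flip bottom mic
```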
Try slightly moving the mics until you find the best result, then repeat the process on other soloed parts of the kit (e.g. overheads and kick; overheads and snare).
Once you’ve finished the drum tracking and done all the editing, you can manually align the phase of the two mics: zoom in on the bottom mic track and shift the waveform so it matches the top mic’s (see picture below).
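What you do by eye when zooming in and nudging the region can be sketched as a brute-force cross-correlation over sample lags. This is a toy illustration, not a DAW feature; names are mine:

```python
def best_lag(reference, delayed, max_lag=64):
    """Find the sample offset that best aligns `delayed` with
    `reference`, by trying every lag and keeping the one whose
    overlap correlates most strongly."""
    def score(lag):
        return sum(r * d for r, d in zip(reference, delayed[lag:]))
    return max(range(max_lag + 1), key=score)

# A copy of the signal delayed by 3 samples is detected as such:
sig = [0.0, 0.2, 0.9, -0.7, 0.1, 0.0, 0.0, 0.0]
delayed = [0.0, 0.0, 0.0] + sig[:-3]
print(best_lag(sig, delayed, max_lag=5))  # 3
```

Shifting the delayed track back by the reported number of samples lines the two waveforms up, which is exactly the manual edit described above.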
If you’re using stereo techniques (AB, NOS, XY, ORTF and so on), make sure you check your mix in mono, so you don’t get too much phase cancellation when summing the left and right channels. These techniques can provide a larger-than-life, great-sounding stereo image, but at the same time they can lack a “centered presence”, making everything sound wide but out of focus.
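A rough mono-compatibility check can be sketched the same way: fold the two channels to mono and measure how much level disappears. A large drop warns of phase cancellation between left and right (names are illustrative):

```python
import math

def rms(x):
    return (sum(s * s for s in x) / len(x)) ** 0.5

def mono_fold_loss_db(left, right):
    """Level lost (in dB) when a stereo pair is summed to mono.
    Near 0 dB is safe; a big number means L and R are cancelling."""
    stereo = (rms(left) + rms(right)) / 2
    mono = rms([(l + r) / 2 for l, r in zip(left, right)])
    if mono == 0:
        return float("inf")  # total cancellation
    return 20 * math.log10(stereo / mono)

# Identical channels fold to mono with no loss at all:
print(round(mono_fold_loss_db([0.3, -0.2, 0.5], [0.3, -0.2, 0.5]), 1))  # 0.0
```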
[Image: aligning the snare mic waveforms in Cubase]
Headphone Monitoring
Whether you’re in a big recording studio or in your friend’s basement, headphone monitoring can be an issue. Especially if you’re a singer used to playing with the other band members in a room at insanely loud volumes, it can be challenging to deliver the same kind of performance in a vocal booth, on your own, with headphones.
It’s also not easy to make an engineer understand exactly what you want in your monitor mix. If you’re an engineer, don’t underestimate the value of a good foldback mix and always look for signs of unease in the performer: if they keep delivering flat, tentative takes, take a listen to their headphone mix and figure out whether something’s off in the balance.
Another tip is to ask the singer to wear the headphones on one ear only, so they can hear themselves acoustically (remember to mute the unused side).
The “Fix It Later” Approach
As Mike Senior wrote in an article for SOS: “While recording, think like a mix engineer. But rather than using EQ to shape the sound, use your choice of mic”.
With the tons of plugins available in modern DAWs, it’s easy for a so-called “bedroom producer” to get lost in the mixing process before tracking is even finished.
Producing your own music basically means mixing “on the go” while you build the song itself. There’s no sharp distinction between the different steps: if something sounds good, record it and throw it in the mix with all the processing it needs. Just make sure that processing doesn’t distract you from building the whole song, otherwise you’ll end up with a 30-second song that sounds amazing (but it’s still a 30-second song).
You’re not in someone else’s recording environment, so there’s not much of that usual pressure that you get in a professional studio. It’s easy to spend hours looking for the right reverb or the perfect delay timing even before you’ve recorded the basic song structure.
I’m not talking about recording everything flat and dry: it’s just a matter of finding the right balance and deciding what needs to be done now and what can be done later on. 
There’s no plug-in or audio editor that can fix a bad performance without losing the feeling the musician gave it.
The “fix it later” approach is perhaps the biggest mistake any engineer can make: if something doesn’t sound the way it should, or was played badly or out of time with the rest of the band (not just with the click track), record another take. Now!