Grab that guitar or horn that you haven’t touched for years and get to it.

Ain’t technology wonderful? It wasn’t that long ago that owning a professional 24-track recording studio was a very, very expensive proposition. Home recording was doable, sort of, but the technology really couldn’t compete with the pros. If you hadn’t noticed, technology has caught up.

These days you can use the very capable GarageBand that comes with your new Mac laptop, or snag Ardour or Audacity (multi-platform) for free, or go upscale with something like Logic Pro or Pro Tools. You can get a full and impressive 24-track studio (or 48, or 96 tracks, or more) right on your laptop. You can record, mix, master and post your creation on the web or sell it through iTunes, all by yourself. So technology has solved all our problems and made home recording a reality, unleashing the creativity of millions, right? Well, not exactly.

It also wasn’t that long ago that people typed messages on paper using typewriters. I know, because I found several examples from the 1980s in my filing cabinet last week. Apparently one former acquaintance of mine couldn’t be bothered with using the shift key, because all of their letters were TYPED IN UPPER CASE. SHOUTING THE WHOLE TIME. It got the message across, but it was pretty crude looking.

Of course all that went away with the advent of what we called Desktop Publishing at the time. Now the average person could produce brochures, pamphlets, letters and such all by themselves. You could mix and match fonts! In different SIZES. With images, stuck randomly on a whim. The possibilities were glorious; it got the message across, but it was pretty crude looking.

What? All that technology, marvelous fonts and images, and it still looked crappy? Yup. Just because the tools became easy to use and popular didn’t mean that the users were trained in graphic design, typography or layout. Similarly, just because now you’ve got the full horsepower of a multitrack recording studio at your fingertips, it doesn’t automatically train you in all the subtle arts required.

Now back in the day, as people shrank in horror at the wild abuse of multiple fonts, clichéd use of Old English and random images carelessly spilled on the page, articles began to appear to try and convey some of the more important design principles. Without learning everything about graphic design, we were at least advised to show some restraint on font selection, perhaps use a grid of thirds for object placement and layout, and so on. The point of layout is to help tell the story effectively, to highlight that which deserves highlighting without distracting from the message—the story you’re trying to tell.

So in this article I’d like to do something similar. I’ll try and convey some basic tips for home recording to help you tell your story more effectively and sound better. So grab that guitar or horn that you haven’t touched for years and get ready to tell your story.

Let’s start by looking at sound itself. There are two fundamental aspects of sound you want to think about: amplitude and frequency, and how both change over time.

EQ

In the frequency domain, instruments produce notes at certain frequencies, but they also produce overtones—additional frequencies stacked up at multiples of the fundamental frequency. For instance, suppose I play a concert A on the trumpet. The note ‘A’ at that octave is defined to be 440Hz. If you played a pure sine wave on a synthesizer and looked at it with a frequency spectrum analyzer on the computer, you’d see a single narrow spike at 440Hz.

sound/440sine.jpg

But on a trumpet, you’d see the fundamental frequency at 440Hz and a bunch of spikes at other frequencies as well. It’s these harmonics that give an instrument its unique timbre (no, you don’t pronounce it like wood “timber”, it’s tam-burr).

sound/440tpt.jpg
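If you’re comfortable with a little code, you can see this for yourself. Here’s a rough numpy sketch of my own (not part of any recording tool) that builds a pure 440Hz sine and a fake, overtone-rich tone, then prints where the energy shows up in each spectrum:

  # Compare the spectrum of a pure 440Hz sine with a harmonically rich tone.
  import numpy as np

  sr = 44100                      # sample rate in Hz
  t = np.arange(sr) / sr          # one second of time values

  pure = np.sin(2 * np.pi * 440 * t)

  # Fake "trumpet-ish" tone: the 440Hz fundamental plus a few overtones
  # at integer multiples, each one quieter than the last.
  rich = sum((0.5 ** n) * np.sin(2 * np.pi * 440 * (n + 1) * t)
             for n in range(6))

  for name, signal in [("sine", pure), ("overtones", rich)]:
      spectrum = np.abs(np.fft.rfft(signal))
      freqs = np.fft.rfftfreq(len(signal), 1 / sr)
      peaks = freqs[spectrum > 0.1 * spectrum.max()]
      print(name, "energy near:", np.unique(np.round(peaks, -1)))

The sine prints a single spike at 440Hz; the stacked version shows energy at 880, 1320 and 1760Hz as well. Those extra spikes are the overtones.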

Now some of these extra frequencies are critically important, and some aren’t. Some are downright undesirable, in fact, and we’d like to get rid of them completely. For this, we need to use an equalizer, or EQ.

EQ lets you sculpt the sound. You can use it to help fit sounds together in a mix, or improve different aspects of a sound. For instance, with a vocal track, the frequencies around 1-2kHz affect vocal intelligibility. If you add a slight boost to that area on a vocal track, you’ll make it easier to understand the lyrics.

A big problem many home recordists have is a muddy bottom. That is, the bass part can’t be heard clearly or distinctly; it just kind of sinks into some ill-defined audio “mud.” The problem is that all the non-bass instruments in the mix also contribute some low bass frequencies: the kick drum especially, but also guitars, piano, even voice and horns.

It’s common practice to cut these unwanted lower frequencies from all these other instruments in order to give the bass room to breathe—room to be heard. To do that, you use an EQ plug-in or piece of hardware to “roll off” (remove) the frequencies below, say, 250Hz or so.
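In code terms, that roll-off is just a high-pass filter. Here’s a minimal sketch using scipy’s Butterworth filter; the cutoff, order and sample rate are arbitrary illustrative choices, and a real EQ plug-in gives you gentler, adjustable slopes:

  # Roll off everything below the cutoff so the bass has room to breathe.
  from scipy.signal import butter, sosfilt

  def highpass(track, cutoff_hz=250, sr=44100, order=4):
      """Remove energy below cutoff_hz from a mono numpy array."""
      sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
      return sosfilt(sos, track)

  # e.g. apply to every non-bass track before summing the mix:
  # guitar = highpass(guitar); piano = highpass(piano); vocal = highpass(vocal)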

Now, as with any powerful tool, EQ can be dangerous. EQ is designed to be a small, subtle tool—not an effect. You really don’t want to use it to make a male voice sound female, or make a trombone sound like a trumpet. There are better tools for such things (hint: you might want to change the fundamental frequency as well as the formants). Also, conventional wisdom says you generally want to use EQ to cut specific frequencies, not boost them.

This is where relying on presets might get you into trouble. Virtually all the EQ presets in Logic Pro, for instance, boost certain desired frequencies instead of cutting the undesired ones. That’s backwards, and it’s not necessarily a good practice to emulate. If you want to bring out a certain frequency, you might be better off lowering all the other frequencies by the desired amount. (As a side note, you may also prefer Logic’s Linear Phase EQ over the standard EQ, or a similar linear-phase EQ in other systems. Linear-phase EQs typically have a much nicer, more natural sound, at the expense of extra CPU processing.)

You can also use EQ as a listening aid for yourself, and not as part of the final product. For instance, when you’re mixing and want to increase clarity in the midrange frequencies, it’s a great trick to slap an EQ across the whole mix that cuts out everything below 250Hz and everything above 5000Hz. It makes it sound like an old-fashioned AM radio (or really bad laptop speakers). You mix like that (in mono, for extra credit) to get a nice balance between all the instruments, then take off the EQ and listen to the extra highs and lows in all their glory.
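If you want to fake that trick outside your DAW, it’s the same filter mechanics as the roll-off sketch above, just applied as a band-pass across the whole mix (and summed to mono, for extra credit):

  # Temporary "AM radio" monitoring filter: band-limit the whole mix to
  # roughly 250Hz-5kHz while balancing levels, then bypass it.
  # Assumes the stereo mix is a 2-column numpy array; values are arbitrary.
  from scipy.signal import butter, sosfilt

  def am_radio_check(mix, sr=44100):
      sos = butter(4, [250, 5000], btype="bandpass", fs=sr, output="sos")
      mono = mix.mean(axis=1)          # mono, for extra credit
      return sosfilt(sos, mono)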

If a particular instrument sounds bad for some reason, use EQ to first make it sound worse. Emphasize the worst part of the sound by boosting those frequencies using EQ. Once you’ve identified the bad parts that way, reverse the settings and use that EQ to cut the nastiness right out.

Parts of a Sound

Now let’s move on to amplitude. Consider that the loudness of a sound might go through several phases:

  • Attack

  • Decay

  • Sustain

  • Release

The attack comes first. The first few milliseconds, or tens of milliseconds, tend to be the most interesting; a lot of our perception of whether a sound is a voice or a trumpet or a kazoo comes from the attack phase. Very brief spikes of audio energy during the attack are called transients. More on those in just a second.

Most natural sounds have some level of decay as the initial attack tends to wear off, then might sustain at a more-or-less fixed level for a while. When the player takes their fingers off the keys, stops blowing, bowing or banging, the sound typically doesn’t just stop cold; there’s a release period as the sound energy tapers off to nothing.

Many synthesizers use envelope generators to create control waveforms that go through these same phases. You can get some interesting effects by taking a natural sound and changing its volume envelope. You might create a snare drum that sustains, or a cymbal with an abrupt release, and so on.
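If it helps to see those four phases concretely, here’s a little sketch of an ADSR envelope in numpy. The times and levels are made-up values; you multiply the envelope into a sound to reshape its dynamics:

  # Build a piecewise attack/decay/sustain/release envelope.
  import numpy as np

  def adsr(n_samples, sr=44100, attack=0.01, decay=0.1,
           sustain_level=0.7, release=0.3):
      a = int(attack * sr)
      d = int(decay * sr)
      r = int(release * sr)
      s = max(n_samples - a - d - r, 0)
      env = np.concatenate([
          np.linspace(0, 1, a),                 # attack: ramp up fast
          np.linspace(1, sustain_level, d),     # decay: fall to sustain level
          np.full(s, sustain_level),            # sustain: hold
          np.linspace(sustain_level, 0, r),     # release: taper to nothing
      ])
      return env[:n_samples]

  # e.g. give a snare-like burst of noise a long, synthetic sustain:
  # snare = np.random.randn(44100) * adsr(44100, attack=0.001, release=0.5)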

Compression

It seems that all the woodworking magazines out there regularly run columns on how to hand cut dovetails. It’s a tricky skill to learn, beginners have a hard time with it, and there are many different approaches and schools of thought.

So it is with compression. One of the more powerful, and probably most misunderstood, tools in the recording toolset is the compressor. It’s a tricky tool to learn, beginners have a hard time with it, and there are many different approaches and schools of thought. I doubt I can do better than any of the many tutorials out there, but here are a few thoughts to get you headed in the right direction.

First, remember those transients that spike during the attack phase of a sound? Well, they present a problem: if they’re really loud, they’ll overload the system and we’ll have to turn the whole track down. But then you can’t hear the track as well, and it might get lost in the mix. It gets fun because actual amplitude and perceived loudness are different things; we can take advantage of that. What we need is a magic knob to turn down the volume on the loudest parts, and quickly turn it back up when needed.

In essence, that’s what the compressor does. It compresses the signal when it gets louder than a certain threshold. For instance, suppose you’ve got a vocal track. Vocals can be notoriously hard to tame, with loud shouty parts and whisper-quiet parts in the same breath. Here’s a piece of vocal:

sound/vocalraw.jpg

You can adjust the threshold so that the compressor squashes down those loud parts; this lets us turn up the overall gain (“makeup gain”). There’s also usually a ratio control to adjust how much it squishes. The result looks like this:

sound/vocalcomp.jpg
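If you’re curious about the arithmetic behind those two knobs, here’s a back-of-the-envelope sketch in Python, with made-up threshold, ratio and makeup values. Real compressors also smooth the gain change over time, which is where the attack and release controls described next come in:

  # Toy "static" compressor curve: anything over the threshold gets squashed
  # by the ratio, then the whole signal is brought back up with makeup gain.
  # All values are in decibels and are illustrative only.
  def compress_db(level_db, threshold_db=-18.0, ratio=4.0, makeup_db=6.0):
      over = max(level_db - threshold_db, 0.0)
      return level_db - over * (1.0 - 1.0 / ratio) + makeup_db

  print(compress_db(-30.0))   # below threshold: only makeup gain -> -24.0
  print(compress_db(-6.0))    # 12dB over: pulled down 9dB, plus makeup -> -9.0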

You can control how quickly the compressor activates, and how soon it deactivates, using the attack and release controls. A quick attack will compress initial transients, and a slow attack will let them through. For instance, here’s a snare drum hit:

sound/snareraw.jpg

With a very fast attack and an extreme ratio, you can get a louder, rounder body of the sound at the expense of the crack:

sound/snarecomp.jpg

You don’t want to compress the transients out completely, because they’re often the most important part of the sound’s character. In the end, the best way to learn how to use a compressor well is to play with the knobs. But be careful: it’s easy to get distracted watching the numbers and meters and graphs. You need to listen with your ears, not watch with your eyes.

Reverb

Reverb, short for reverberation, is the natural effect of a sound occurring in an acoustic environment. When a sound is generated, by whatever means, the sound waves don’t make a straight line to your ears. They bounce around, hitting walls and other objects, getting absorbed, reflected, amplified and attenuated. It’s a mess. But it’s a mess we’re used to hearing; that’s the world we live in. If you record a sound completely dry—with no reverb—it doesn’t sound right. So we’ll usually add a little reverb to sweeten it up a bit, creating a space that sounds great and highlights the tracks.

Some writer once said that using reverb had similarities to using crack. At first, everything is shiny, but then very rapidly it all gets blurry and confused. With reverb, less is definitely more; it’s very easy to use more than you need and end up with a mess.

There are two major ways of using reverb in a mix. In the first model, you set up one reverb on its own bus channel. Then you route all the other channels that need reverb through this one bus; you control how “wet” a track is (how much reverb it gets) by adjusting how much you send to the bus.

For instance, here is a simple four-channel mix with a lot of reverb on the piano, just a little bit on the drums, and none on the bass. I’m using the SpaceDesigner reverb on Bus 1 (abbreviated here as SpaceDsn).

sound/oneverb.jpg

The other model is to put a separate reverb as an insert effect on each individual track, as shown here. Notice there’s no bus, just a SpaceDsn inserted on each track.

sound/multverb.jpg

Each approach has its advantages. Using one shared reverb gives a more coherent effect that produces the illusion that everything’s in the same room, and it also conserves CPU resources. But you can get more interesting effects by using individual reverbs; it all depends on what you’re after.
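For the programmers in the audience, here’s a rough sketch of the single-bus model in Python. The track names, send levels and impulse response are all made up, and a real DAW is doing considerably more work than this:

  # One reverb on a shared bus: each track sends some signal to the bus,
  # the bus is reverberated once (by convolving with an impulse response),
  # and the wet result is mixed back in with the dry tracks.
  from scipy.signal import fftconvolve

  def mix_with_reverb_bus(tracks, sends, impulse_response):
      """tracks: dict of name -> mono numpy array (all the same length).
      sends: dict of name -> send level, 0.0 (dry) to 1.0 (very wet)."""
      dry = sum(tracks.values())
      bus = sum(sends.get(name, 0.0) * audio for name, audio in tracks.items())
      wet = fftconvolve(bus, impulse_response)[:len(dry)]   # one shared reverb
      return dry + wet

  # e.g. lots of reverb on piano, a touch on drums, none on bass:
  # mix = mix_with_reverb_bus(tracks, {"piano": 0.4, "drums": 0.1, "bass": 0.0}, ir)

The point is that the convolution, which is the expensive part, happens once for the whole bus no matter how many tracks feed it.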

In general, though, lower-frequency instruments get less reverb. Bass probably gets none at all, and instruments with a lot of natural sustain, such as cymbals and acoustic guitar, don’t need extra reverb either. If the overall mix gets too muddy, you might be better off using a short echo or delay instead of a reverb effect.

Modern convolution and algorithmic reverbs are very powerful and can do a really nice job of simulating a natural space. But don’t forget that recording in a shower stall, a hallway or a large room has its own sound as well.

Tragedy of Plenty

Modern recording technology has given you a horn of plenty. Unfortunately, that can also be a big drawback. Too much of anything can be bad.

Just because you have 48 tracks doesn’t mean you have to use them all. Remember you’re telling a story, and it’s hard to follow a story when more than one person is speaking at a time. The flow of the story can pass between instruments, but the focus needs to be on one element at a time. The Beatles, you might recall, accomplished quite a bit with just four and eight tracks.

Then there’s volume. When mixing, you want to listen at very, very low volume levels; your ears judge the balance much more reliably at low volume. Louder is not better at this stage.

In this land of plenty of tracks and plenty of effects, it’s easy to lose track of the balance of tension and release. Too much tension is noise; too much release is Kenny G. Neither is enjoyable. Loud and quiet are relative concepts; there is no loud without quiet to compare it to. It’s the contrast that generates interest and power. The worst thing to do is slap in a loop and just let it drone on for three and a half minutes.

Well, maybe it’s not the worst sin (notice I haven’t talked about AutoTune at all), but it’s an easily avoidable menace. Some folks like to set up a loop and bang away on top of it, fearful of introducing too much change in case they might hit a wrong note. But consider that perhaps a boring note is also a “wrong” note.

Why Now?

There’s no better time to record your own material. Digital recording software is inexpensive and readily available; synthesizers, convincing virtual instruments and effects are plentiful. Good quality microphones and outboard gear are the cheapest they’ve ever been. You can get the gear; all you need now is a reason.

So let me offer you the following thought: music is vital to people’s lives. A commencement address by Boston Conservatory’s Karl Paulnack reveals the ancient Greek understanding of the fundamental nature of music: that music and astronomy were “two sides of the same coin.” The Greeks defined astronomy as the study of relationships between observable, permanent, external objects.

Music, on the other hand, examines that which is far more subtle: the study of relationships between invisible, internal, hidden objects. It’s all about the moving pieces within our hearts and souls.

We’re waiting to hear what you’ve got. Knock our socks off.

Andy Hunt is a Pragmatic Programmer and amateur musician.