Okay, this is an unusual one. Please fasten your seat belts, and set your tray tables to the upright and locked positions.
Personal bias alert: With pop and rock music, for me it’s all about vocals, drums, and bass. Vocals tell the story, drums handle the rhythm, and bass holds down the low end. For a given collection of songs (formerly known as an “album”), I want all three elements to be relatively consistent from one song to the next—and that’s what this week’s tip is all about. Then the other instruments can weave in and out within the mix.
It’s fantastic that you can flip back and forth between the Project page and a Song that’s been added to the Project page, make tweaks to the Song, then migrate the updated Song back to the Project page. But it’s even better when you can make the most important changes earlier in the process, before you start down the final road of mastering.
Here’s a way to match bass and vocal levels in a collection of songs. This takes advantage of the Project page, but isn’t part of the mastering process itself. Instead, you’ll deploy this technique when the mix is in good shape—it has all the needed processing, automation, etc.—but you want a reality check before you begin mastering.
We’ll cover how to match vocal levels for the songs; bass works similarly, and in some ways, more effectively. Don’t worry, I’m not advocating robo-mixing. A mathematically correct level is not the same thing as an artistically correct level. So, you may still need to change levels later in the process—but this technique lets the voice and bass start from a “level” playing field. If you then need to go back and tweak a mix, you can keep the voice and bass where they are, and work the mix around them.
(Note that it’s important to know what the LUFS and LRA metering in the Project page represent. Rather than make this tip longer, for a complete explanation of LUFS and LRA, please check out this article I wrote for inSync magazine.)
Figure 1: The songs in an album have had only their vocal tracks bounced over to the Project page, so they can be analyzed by the Project page’s analytics.
The waveforms won’t provide any kind of visual confirmation, because you adjusted the levels to make sure the songs themselves had a consistent LUFS reading. For example, if you had to attenuate one of the songs by quite a bit, its vocal might look louder in the waveform; but remember, it was attenuated because it was part of a louder song.
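If you like seeing the arithmetic behind level-matching, here’s a minimal Python sketch of the idea. It uses plain RMS as a rough stand-in for LUFS (true LUFS adds K-weighting and gating, which Studio One handles for you), and all the numbers are illustrative:

```python
import math

def rms_db(samples):
    """Approximate level in dB (RMS). True LUFS adds K-weighting and
    gating, so treat this as a rough stand-in."""
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_sq) if mean_sq > 0 else float("-inf")

def level_offsets(stems):
    """Return the dB trim each vocal stem needs to match the first stem."""
    ref = rms_db(stems[0])
    return [ref - rms_db(s) for s in stems]

# Two toy "vocal stems"; the second is half the amplitude of the first,
# so it needs roughly +6 dB of trim to match.
loud = [0.5, -0.5] * 100
quiet = [0.25, -0.25] * 100
offsets = level_offsets([loud, quiet])
print([round(o, 1) for o in offsets])  # → [0.0, 6.0]
```

The trims this suggests are a starting point for your ears, not a mixing decision.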
Also try this technique with bass. Bass will naturally vary from song to song, but again, you may see a larger-than-expected difference, and it may be worth finding out why. In my most recent album, all the bass parts were played with keyboard bass and generated pretty much the same level, so it was easy to use this technique to match the bass levels in all the songs. Drums are a little dicier because they vary more anyway, but if the drum parts are generally similar from song to song, give it a try.
…But There’s More to the Story than LUFS
LRA is another important reading, because it indicates dynamic range—and this is where it gets really educational. After analyzing vocals on an album, I noticed that some of them had a wider dynamic range than others, which influences how loudness is perceived. So, you need to take both LUFS and LRA readings into account when looking for consistency.
For my projects, I collect all the songs I’ve worked on during a year, and release the completed project toward the end of the year. So it’s not too surprising that something mixed in February will sound different from something mixed in November. Often, something as simple as going back to a song and taking a little compression off a vocal (or adding some) is all that’s needed for a more consistent sound.
But let me emphasize this isn’t about looking for rules, but looking for clues. Your ears will be the final arbiter, because the context for a part within a song matters. If a level sounds right, it is right. It doesn’t matter what numbers say, because numbers can’t make subjective judgments.
However, don’t minimize the value of this technique, either. I stumbled on it because one particular song in my next album never seemed quite “right,” and I couldn’t figure out why. Checking it with this technique showed that the vocal was low compared to the other songs, so the overall mix was lower as well. I could have used dynamics processing to make the song reach the same LUFS reading as the other songs, but that affected the dynamics within the song itself. After going back into the song, raising the vocal level, and re-focusing the mix around it, everything fell into place.
A couple previous tips dealt with how to give mono instruments, like guitar, a stereo image that won’t degrade when collapsed to mono. Widen Your Mono Guitar—Sans Problems used delay, but in a way that minimized phase issues. Delay-Free Stereo from Mono used two Multiband Dynamics, set for no compression, to separate the audio into bands that you could then pan left or right.
This tip takes the process even further—it’s versatile, relatively simple, easily customizable, and also, has no phase issues when collapsed to mono. I’ve even used it to create a subtle, artificial stereo image from old mono records.
The Stereo Separator is particularly effective with power chords and rhythm guitar, especially as an alternative to layering parts in search of a “bigger” sound—you can obtain a stereo spread with a single track, so the sound is more defined compared to using multiple layers. And of course, if you scroll to the end there’s a downloadable FX Chain, so you can start playing with this immediately.
This example assumes a mono, distorted guitar track, like what you’d obtain by using a single mic on an amp. To create a stereo image, we first need to convert this to a stereo track. So, set the track’s Channel Mode to stereo, and bounce the track to itself (Ctrl+B) to convert it into dual mono (i.e., stereo, but with the same audio in the left and right channels). Now we can start playing with the stereo imaging.
And now, the fun begins! Play with the Pan controls to spread the different frequency bands in the stereo field—the audio example gives a good idea of the type of effect this FX Chain can produce. The first example is mono, the second widens the image a bit, and the final example creates a somewhat more radical stereo image.
Of course, you can go into the routing window, and change the levels of the various splits. Or, add FX in the splits…change the split frequencies…there’s enough to keep you busy for a while. Happy stereo!
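For the curious, here’s a bare-bones Python sketch of the band-splitting idea behind this kind of FX Chain. The filter is a simple one-pole lowpass (my choice for illustration, not the chain’s actual crossover), but it shows why the trick survives a mono collapse: the high band is defined as the input minus the low band, so the mono sum reconstructs the original exactly.

```python
def one_pole_lowpass(x, a=0.2):
    """Simple one-pole lowpass; 'a' is a smoothing coefficient
    (illustrative, not the FX Chain's actual crossover)."""
    y, out = 0.0, []
    for s in x:
        y += a * (s - y)
        out.append(y)
    return out

def split_and_pan(x):
    """Split mono audio into low/high bands and pan them hard left/right.
    Because the high band is (input - low band), the mono sum is exactly
    the original signal, so there's no comb filtering on mono collapse."""
    low = one_pole_lowpass(x)
    high = [s - l for s, l in zip(x, low)]
    left, right = low, high                    # low band left, high band right
    mono = [l + r for l, r in zip(left, right)]
    return left, right, mono

x = [0.0, 1.0, 0.5, -0.3, -1.0, 0.2]
_, _, mono = split_and_pan(x)
assert all(abs(m - s) < 1e-12 for m, s in zip(mono, x))  # mono-safe
```

With more bands (as in the Splitter), the same complementary logic keeps the mono sum intact while each band can sit anywhere in the stereo field.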
This technique dates back to when I was doing live gigs with Brian Hardgroove from Public Enemy—me on guitar, him on drums. Since there was no bass player, we needed a way to fill out the bottom end. I’ve come up with a bunch of ways to do that over the years, but the technique presented here is the easiest one yet to implement. We’ll extract a bass line from an existing guitar track, without using MIDI or virtual instruments—here’s how.
Start by copying the guitar’s audio to a new track, which will become our faux bass track. Call up the Inspector, and transpose the faux bass track down an octave by entering -12 for Transpose (Fig. 1). This technique works best with relatively articulated guitar notes, not rhythm guitar chords.
Now, it may seem like transposing down an octave is enough, and we can all go home now. No! The faux bass track needs three processors to sound right (Fig. 2).
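Before we get to those processors, here’s a hedged Python sketch of what the transposition step itself is doing. It drops pitch an octave by reading the audio at half speed; note that this naive version doubles the clip length, whereas Studio One’s transpose also timestretches so the length doesn’t change.

```python
import math

def octave_down(x):
    """Drop pitch one octave by reading the audio at half speed with
    linear interpolation. The clip comes out twice as long; Studio
    One's transpose also timestretches to preserve duration."""
    out, pos = [], 0.0
    while pos < len(x) - 1:
        i = int(pos)
        frac = pos - i
        out.append(x[i] * (1 - frac) + x[i + 1] * frac)
        pos += 0.5
    return out

def rough_pitch(x, sr):
    """Crude pitch estimate from zero crossings (two per cycle)."""
    crossings = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    return crossings * sr / (2 * len(x))

sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]  # 440 Hz "guitar"
low = octave_down(tone)
# rough_pitch(low, sr) lands near 220 Hz: one octave below the original
```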
But as they always say, the proof is in the pudding. However, since we’re not providing a recipe about how to make pudding, check out the audio example instead.
The first two measures are the guitar by itself, while the second two measures have the faux bass playing along. Pretty cool, eh? Oh…and if you’re in a Cream tribute band, this will definitely come in handy for “Sunshine of Your Love.” Have a great weekend!
Your guitar is most likely mono. But sometimes you want a wide, full, stereo image. I can relate.
One technique is to send the guitar track to an FX channel, insert a delay set for a relatively short delay (like 25 ms), and then pan the original track and FX channel oppositely. But if you sum the signals to mono, then there’s the possibility of cancellation. In fact, I saw a guy in an internet video who said this was a terrible idea, and you should just overdub the part again and pan that oppositely if you want stereo.
Well, overdubbing is an option, assuming you can play tightly enough that the parts don’t sound sloppy. But don’t forget Studio One has that wonderful Channel Mode button on the Main output, so you can test stereo tracks in mono—simply adjust the delay time for minimum cancellation. You won’t be able to avoid cancellation entirely, but tweaking the time may keep it from being objectionable (especially once the delay time gets above 25 ms or so, because that’s more into doubling range). To make any phase issues even less noticeable, lower the delayed sound’s level a little bit to weight the sound more toward the dry guitar.
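To see why delay time and level matter, here’s some quick comb-filter arithmetic in Python. Summing a delayed copy to mono puts notches at odd multiples of 1 / (2 × delay time), and the notch depth depends on the delayed copy’s level (the gains below are illustrative):

```python
import math

def notch_freqs(delay_ms, count=3):
    """Comb-filter notch frequencies (Hz) when a delayed copy is summed
    with the dry signal: odd multiples of 1 / (2 * delay)."""
    return [(2 * k + 1) * 1000 / (2 * delay_ms) for k in range(count)]

def notch_depth_db(delay_gain):
    """Worst-case level at a notch when the delayed copy has level
    'delay_gain' relative to the dry signal at 1.0."""
    return 20 * math.log10(abs(1 - delay_gain))

print(notch_freqs(25))                 # → [20.0, 60.0, 100.0]
# Lowering the delayed copy's level makes the notches shallower:
print(round(notch_depth_db(0.7), 1))   # delayed copy ~3 dB down → -10.5 dB notch
print(round(notch_depth_db(0.5), 1))   # delayed copy ~6 dB down → -6.0 dB notch
```

At 25 ms, the first notch sits way down at 20 Hz, which is part of why longer delays tend to be less objectionable.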
But I wouldn’t be writing this tip if I didn’t have a better option—so here it is.
Now, here’s where the magic happens. Set the Main output mode to mono, and you’ll hear virtually no difference between that and the “faux stereo” signal, other than the stereo imaging. The reason is that the dry guitar now sits in the center channel, so summing to mono creates a center-channel buildup. This raises the main guitar’s level above the delayed sounds, so there’s virtually no chance of audible cancellation, and it balances the level better between the stereo and mono modes.
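The center-buildup math fits in a couple of lines of Python. The gains are illustrative: dry guitar at full level in both channels, each delayed copy 6 dB down and panned hard to one side.

```python
import math

def mono_dry_to_delay_db(center_gain, side_gain):
    """With the dry guitar in the center (it appears in BOTH channels)
    and each delayed copy panned hard to one side, a mono sum carries
    the dry signal at 2 * center_gain but each delay at just side_gain."""
    return 20 * math.log10((2 * center_gain) / side_gain)

# Dry guitar at full level, delayed copies 6 dB down (gain 0.5):
print(round(mono_dry_to_delay_db(1.0, 0.5), 1))  # → 12.0 dB over each delay
```

With the dry guitar 12 dB above each delayed copy in mono, any comb-filter notches are far too shallow to hear.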
Now you have a wide guitar that sounds equally loud, and is phase-issue free, in mono or stereo—happy Friday!
The June 22, 2018 tip covered how to make mastered songs better with tempo changes, but there was some pushback because it wasn’t easy to make these kinds of changes in Studio One. Fortunately, it seems like the developers were listening, because it’s now far easier to change tempo. I’ve been refining various tempo-changing techniques over the past year (and had a chance to gauge reactions to songs using tempo changes compared to those that didn’t), so it seemed like the time is right to re-visit this topic.
WHY TEMPO CHANGES?
In the days before click tracks, music had tempo changes. With good musicians, however, these weren’t random. After analyzing dozens of songs, I found that many (actually, most) of them sped up slightly during the end of a chorus or verse, or during a solo, and then dropped back down again.
For example, many people feel James Brown had one of the tightest rhythm sections ever—which is true, but not because they were a metronome. There were premeditated, conscious tempo changes throughout a song (e.g., speeding up during the run up to the phrase “papa’s got a brand new bag” in the song of the same name, then dropping back down again—only to speed up to the next climax). Furthermore, the entire song sped up linearly from beginning to end.
Note that you didn’t hear these kinds of changes as something obvious; you felt them. They added to the “tension and release” inherent in any music, which is a key element (along with dynamics) in eliciting an emotional response from listeners.
THE PROBLEM WITH TEMPO CHANGES
It was easy to have natural tempo changes when musicians played together in a room. These days, it’s difficult for solo artists to plan out in advance when changes are going to happen. Also, if you use effects with tempo sync, not all of them follow tempo changes elegantly (and some can’t follow tempo changes at all). Let’s face it—it’s a lot easier to record to a click track, and have a constant tempo. However…
THE STUDIO ONE SOLUTION
Fortunately, Studio One makes it easy to add tempo changes to a finished mix—so you can complete your song, and then add subtle tempo changes where appropriate. This also lets you compare a version without tempo changes, and one with tempo changes. You may not hear a difference, but you’ll feel it.
As mentioned in last year’s tip, for the highest possible fidelity choose Options > Advanced > Audio, and check “Use cache for timestretched audio files.” Next, open a new project, and bring in the mixed file. Important: you need to embed a tempo, otherwise it’s not possible to change the tempo. So, open the Inspector, and enter a tempo under File Tempo. It doesn’t have to match the original song tempo because we’re making relative, not absolute, changes. Also choose Tempo = Timestretch, and Timestretch = Sound – Elastique Pro Formant.
MANIPULATING THE TEMPO TRACK
Working with the tempo track is now as easy as working with automation: click and drag to create ramps, and bend straight lines into curves if desired. You can set high and low tempo limits within the tempo track; the minimum difference between the high and low Tempo Track values is 20 BPM, but you can increase the tempo track height for more resolution. The bottom line is that it’s possible to create very detailed tempo changes, quickly and easily.
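If you’re wondering how a tempo ramp translates into actual time, here’s a small Python sketch that integrates a tempo curve, beat by beat. The ramp values are made up, but the idea is the same as what the tempo track does to your song’s timeline.

```python
def beat_times(tempo_curve, beats):
    """Map beat numbers to clock time for a tempo curve (a function from
    beat position to BPM). Each beat lasts 60/BPM seconds; summing them
    integrates the ramp, much like a tempo track warps the timeline."""
    t, times = 0.0, [0.0]
    for b in range(beats):
        t += 60.0 / tempo_curve(b)
        times.append(t)
    return times

steady = lambda b: 120.0                # constant click
ramp = lambda b: 120.0 + 6.0 * b / 16   # push from 120 toward 126 BPM
print(beat_times(steady, 16)[-1])       # → 8.0 (seconds for 16 beats)
# The ramped version lands a hair under 8.0: the push is felt, not heard.
print(round(beat_times(ramp, 16)[-1], 2))
```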
So what does it sound like? Here are two examples. The first is a hard-rock cover version of “Walking on the Moon” (originally recorded by The Police, and written by Sting).
The differences are fairly significant: the song starts at 135 BPM, goes up to 141 BPM, and drops down as low as 134 BPM.
Here’s another example, a slower song called “My Butterfly.” It covers an even greater relative range, because it goes from a low of 90 to a high of 96 BPM. You may be able to hear the speedup in the solo, not just feel it, now that you know it’s there.
Note that when possible, there’s a constant tempo at the beginning and end. It doesn’t matter so much with songs, but with dance mixes, I can add tempo changes in the track as long as there’s a constant tempo on the intro and outro so DJs don’t go crazy when they’re trying to do beat-matching.
So is it worth making these kinds of changes? All I know is that the songs I do with tempo changes get a better response than songs without tempo changes. Maybe it’s coincidence…but I don’t think so.
Mid-side (M-S) processing encodes a standard stereo track into a different type of stereo track with two separate components: the left channel contains the center of the stereo spread, or mid component, while the right channel contains the sides of the stereo spread—the difference between the original stereo file’s right and left channels (i.e., what the two channels don’t have in common). You can then process these components separately, and after processing, decode the separated components back into conventional stereo.
Is that cool, or what? It lets you get “inside the file,” sometimes to where you can almost remix a mixed stereo file. Need more kick and bass? Add some low end to the center, and it will leave the rest of the audio alone. Or bring up the level of only the sides to make the stereo image wider.
The key to M-S processing is the Mixtool plug-in, and its MS Transform button. The easiest way to get started with M-S processing is with the MS-Transform FX Chain (Fig. 1), found in the Browser’s FX Chains Mixing folder.
The upper Mixtool encodes the signal so that the left channel contains a stereo file’s center component, while the right channel contains the stereo file’s side components. This stereo signal goes to the Splitter, which separates the channels into the side and center paths. These then feed into the lower Mixtool, which decodes the M-S signal back into stereo. (The Limiter isn’t an essential part of this process, but is added for convenience.)
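Conceptually, the encode/decode math is tiny. Here’s a Python sketch of the usual M-S convention (the 0.5 scaling is one common choice; Mixtool’s internal scaling may differ):

```python
def ms_encode(left, right):
    """Encode L/R into mid (what the channels share) and side (their
    difference). This is what an MS Transform does conceptually."""
    mid = [(l + r) * 0.5 for l, r in zip(left, right)]
    side = [(l - r) * 0.5 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side, side_gain=1.0):
    """Decode back to L/R; raising side_gain widens the stereo image."""
    left = [m + side_gain * s for m, s in zip(mid, side)]
    right = [m - side_gain * s for m, s in zip(mid, side)]
    return left, right

L = [0.8, 0.2, -0.5]
R = [0.4, 0.6, -0.5]
mid, side = ms_encode(L, R)
L2, R2 = ms_decode(mid, side)
assert all(abs(a - b) < 1e-12 for a, b in zip(L, L2))  # round trip is lossless
assert side[2] == 0.0   # identical L/R content has no side component
```

Anything you do to `mid` or `side` before decoding (EQ, gain, compression) affects only the center or only the sides, which is the whole point of the FX Chain.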
Even this simple implementation is useful. Turn up the post-Splitter gain slider in the Center path to boost the bass, kick, vocals, and other center components. Or, turn up the gain slider in the post-Splitter Side path to bring up the sides, for the wider stereo image we mentioned.
Fig. 2 shows a somewhat more developed FX Chain, where a Pro EQ boosts the highs on the sides. Boosting the highs adds a sense of air, which enhances the stereo image because highs are more directional.
In addition to decoding the signal back to stereo, the second Mixtool has its Output Gain control accessible to compensate for any level differences when the FX Chain is bypassed or enabled. Also, you can disable the MS Decoder (last button, lower right) to prevent converting the signal back into stereo, which makes it easy to hear what’s happening in the center and sides.
And of course…you can take this concept much further. Add a second EQ in the center channel, or a compressor if you want to squash the kick/snare/vocals a bit while leaving the sides alone. Try adding reverb to the sides but not the center, to avoid muddying what’s happening in the center. Or, add some short delays to the sides to give more of a room sound….the mind boggles at the possibilities, eh?
If you’ve ever played a large venue like a sports arena, you know that reverb is a completely different animal than what the audience hears. You hear your instrument primarily, and in the spaces between your playing, you hear the reverb coming back at you from the reflections. It might seem that reverb pre-delay would produce the same kind of effect, but it doesn’t “bloom” the way reverb does when you’re center stage in a big acoustical space.
This week’s tip is inspired by the center stage sound, but taken further. The heart of the effect is the Expander, but unlike last week’s Expander-based Dynamic Brightener tip, the Expander is in Duck mode, and fed by a sidechain. Here’s the Console setup.
In the audio example, the source is a funk guitar loop from the PreSonus loop collection; but any audio with spaces in between the notes or chords works well, especially drums (if the cymbals aren’t happening a lot), vocals that aren’t overly sustained, percussion, and the like. I deliberately exaggerated the effect to get the point across, so you might want to be a little more tasteful when you apply this to your own music. Or maybe not…
The guitar’s channel has two sends. One goes to the FX Channel, which has a Room Reverb followed by an Expander. The second send goes to the Expander’s sidechain input. Both sends are pre-fader, so you can bring down the main guitar’s fader and hear only the processed sound. This makes it easier to edit the following Room Reverb and Expander settings, which are a suggested point of departure. Remember to enable the Expander’s Sidechain button in the header, and click the Duck button.
The reverb time is long—almost six seconds. This is because we want it going constantly in the background, so that after the Expander finishes ducking the reverb sound, there’s plenty of reverb available to fill in the spaces.
To tweak the settings, turn down the main guitar channel so you can monitor only the processed sound. The Expander’s Threshold knob determines how much you want the reverb to go away when the instrument audio is happening. But really, there are no “wrong” settings—start with the parameters above, play around, and listen to what happens.
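Here’s a stripped-down Python sketch of the ducking logic: an envelope follower tracks the sidechain (the dry guitar), and while it’s above the threshold, the reverb drops toward a floor level. A real Expander ducks smoothly via its Range and Ratio; this sketch uses a blunt two-level gain, and all the settings are illustrative.

```python
def envelope(x, attack=0.5, release=0.3):
    """Simple peak follower: fast rise, slower fall."""
    env, out = 0.0, []
    for s in x:
        coeff = attack if abs(s) > env else release
        env += coeff * (abs(s) - env)
        out.append(env)
    return out

def duck(reverb, sidechain, threshold=0.2, floor=0.1):
    """While the sidechain (dry guitar) is above the threshold, drop the
    reverb toward 'floor'; in the gaps it returns to full level, so the
    long tail can bloom."""
    return [r * (floor if e > threshold else 1.0)
            for r, e in zip(reverb, envelope(sidechain))]

guitar = [0.9] * 5 + [0.0] * 5   # a chord, then a gap
wet = [0.3] * 10                 # constant long-reverb bed
out = duck(wet, guitar)
# out starts ducked and recovers to the full reverb level in the gap
```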
This is a pretty fertile field for experimentation…as the following audio example illustrates. The first part is the guitar and the reverb effect; the reverb tail shows just how long the reverb time setting is. The second part is the reverb effect in isolation, processed sound only, and without the reverb tail.
This is a whole different type of reverb effect—have fun discovering what it can do for you!
When you play an acoustic guitar harder, it not only gets louder, but brighter. A dry electric guitar doesn’t have that quality…by comparison, the electrified sound by itself is somewhat lifeless. But I’m not here to be negative! Let’s look at a solution that can give your dry electric guitar some more acoustic-like qualities.
How It Works
Create an FX Channel, and add a pre-fader Send to it from your electric guitar track. The FX Channel has an Expander followed by the Pro EQ. The process works by editing the Expander settings so that it passes only the peaks of your playing. Those peaks then pass through a Pro EQ, set for a bass rolloff and a high frequency boost. Therefore, only the peaks become brighter. Here’s the Console setup.
The reason for creating a pre-fader send from the guitar track is so that you can bring the guitar fader down, and monitor only the FX Channel as you adjust the settings for the Expander and Pro EQ. The Expander parameter values are rather critical, because you want to grab only the peaks, and expand the rest of the guitar signal downward. The following settings are a good point of departure, assuming the guitar track’s peaks hit close to 0.
The most important edit you’ll need to make is to the Expander’s Threshold. After it grabs only the peaks, then experiment with the Range and Ratio controls to obtain the sound you want. Finally, choose a balance of the guitar track and the brightener effect from the FX Channel.
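In code terms, the brightener boils down to “gate out everything but the peaks, then tilt them toward the highs.” Here’s a rough Python sketch; the infinite-ratio gate and first-difference high emphasis are simplifications of the Expander and Pro EQ, and the numbers are made up.

```python
def brightener(x, threshold=0.6, tilt=0.8):
    """Pass only the peaks (a crude downward expander with an infinite
    ratio), then brighten them with a first-difference high emphasis.
    'threshold' and 'tilt' are illustrative, not Studio One defaults."""
    peaks = [s if abs(s) > threshold else 0.0 for s in x]
    prev, out = 0.0, []
    for s in peaks:
        out.append(s - tilt * prev)  # crude high-pass tilt on the peaks
        prev = s
    return out

guitar = [0.1, 0.9, 0.3, -0.2, -0.8, 0.1]
fx = brightener(guitar)
mix = [d + 0.3 * f for d, f in zip(guitar, fx)]  # blend FX under the dry track
assert fx[0] == 0.0   # quiet samples never reach the brightener
assert fx[1] == 0.9   # the peak passes through
```

The final blend line mirrors the last step in the tip: only a little of the FX Channel is needed under the dry guitar.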
The audio example gets the point across. It consists of guitar and drums, because having the drums in the mix underscores how the dynamically brightened guitar can “speak” better in a track. The first five measures are the guitar with the brightener, the next five measures are the guitar without the brightener, and the final five measures are the brightener channel sound only. You may be surprised at how little of the brightener is needed to make a big difference to the overall guitar sound.
Also, try this on acoustic guitar when you want the guitar to really shine through a mix. Hey, there’s nothing wrong with shedding a little brightness on the situation!
You never know where you’ll find inspiration. As I was trying not to listen to the background music in my local supermarket, “She Drives Me Crazy” by Fine Young Cannibals—a song from over 30 years ago!—earwormed its way into my brain. Check it out at https://youtu.be/UtvmTu4zAMg.
My first thought was “they sure don’t make snare drum sounds like those any more.” But hey, we have Studio One! Surely there’s a way to do that—and there is. The basic idea is to extract a trigger from a snare, use it to drive the Mai Tai synth, then layer it to enhance the snare.
Skeptical? Check out the audio example.
ISOLATING THE SNARE
If you’re dealing with a drum loop or submix, you first need to extract the snare sound.
TWEAKING THE MAI TAI
Now the fun begins! Figure 3 shows a typical starting point for a snare-enhancing sound.
The reason for choosing Mai Tai as the sound source is because of its “Character” options that, along with the filter controls, noise Color control, and FX (particularly Reverb, EQ, and Distortion), produce a huge variety of electronic snare sounds. The Character module’s Sound and Amount controls are particularly helpful. The more you play with the controls, the more you’ll start to understand just how many sounds are possible.
BUT WAIT…THERE’S MORE!
If the snare is on a separate track, then you don’t need the Pro EQ or FX Channel. Just insert a Gate in the snare track, enable the Gate’s trigger output, and adjust the Gate Threshold controls to trigger on each snare drum hit. The comments above regarding the Attack, Release, and Hold controls apply here as well.
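The trigger logic the Gate performs can be sketched in a few lines of Python: fire on each level crossing above the threshold, and ignore re-triggers during a hold window (the threshold and hold values below are arbitrary).

```python
def snare_triggers(x, threshold=0.5, hold=3):
    """Return sample indices where the level crosses the threshold,
    ignoring re-triggers during a short hold window. This is the same
    job the Gate's Threshold and Hold controls do before its trigger
    output fires the Mai Tai."""
    hits, last = [], -hold - 1
    for i, s in enumerate(x):
        if abs(s) > threshold and i - last > hold:
            hits.append(i)
            last = i
    return hits

drums = [0.0, 0.9, 0.8, 0.1, 0.0, 0.0, 0.7, 0.2, 0.0]
print(snare_triggers(drums))  # → [1, 6]
```

Note how the second loud sample (index 2) doesn’t re-trigger, because it falls inside the hold window of the first hit.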
Nor are you limited to snare. You can isolate the kick drum, and trigger a massive, low-frequency sine wave from the Mai Tai to get those car-door-vibrating kick drums. Toms can sometimes be easy to isolate, depending on how they’re tuned. And don’t be afraid to venture outside of the “drum enhancement” comfort zone—sometimes the wrong Gate threshold settings, driving the wrong sound, can produce an effect that’s deliciously “right.”