With the ideal mix, the balance among instruments is perfect, and you can hear every instrument (or instrument section) clearly and distinctly. However, getting there can take a while, with a lot of trial and error. Fortunately, there’s a simple trick you can use when setting up a mix to accelerate the process: Start your mix with all channel pan sliders set to center (Fig. 1).
Figure 1: All the pan sliders (outlined in white) are set to center for a reason.
With stereo tracks, changing the track interleave to mono isn’t adequate, because it will throw off the channel’s level in the mix. Instead, temporarily add a Dual Pan set to the -6dB Linear Pan Law, and center both the Left and Right panpots (Fig. 2). Now your stereo track will appear in the mix as mono.
Figure 2: Use the Dual Pan, set to the -6dB Linear pan law, to convert stereo channels temporarily to mono when setting up for a mix.
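To see why the -6 dB setting matters, here’s a quick numeric sketch (illustrative Python, not Studio One’s internal processing): folding identical left and right channels to mono at unity gain jumps the level by 6 dB, while a -6 dB (0.5×) gain per channel keeps the folded-down level where it was.

```python
# Sketch: why the -6 dB Linear pan law matters when folding stereo to mono.
# Illustrative numbers only — not Studio One's internal math.

def fold_to_mono(left, right, channel_gain):
    """Sum left and right samples, applying a linear gain to each channel."""
    return [channel_gain * (l + r) for l, r in zip(left, right)]

# Identical (fully correlated) left/right content peaking at 0.8:
left = [0.8, -0.8, 0.8]
right = [0.8, -0.8, 0.8]

unity = fold_to_mono(left, right, 1.0)   # naive sum: peaks jump to 1.6 (+6 dB)
panlaw = fold_to_mono(left, right, 0.5)  # -6 dB per channel: peaks stay at 0.8

print(unity)
print(panlaw)
```

With uncorrelated left/right content, the tradeoffs differ, which is why this is a temporary setup trick rather than a permanent conversion.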
Now listen carefully to your mix. Are all the instruments distinct? Monitoring in mono will reveal places where one instrument might mask or interfere with another, like kick and bass, or piano and guitar (depending on the note range).
The solution is to use EQ to carve out each instrument’s rightful place in the frequency spectrum. For example, if you want to prioritize the guitar part, you may need to reduce some of the piano’s midrange, and boost the regions above and below the guitar. For the guitar, boost a bit in the region where you cut the piano. With those tweaks in place, you’ll find it easier to differentiate between the two.
For kick/bass issues, the usual solution is to increase treble on one of them—with kick, this brings out the beater sound and with bass, string “zings” and pick noises. Another option is to add saturation to the bass, while leaving the kick drum alone. If the bass is playing relatively high notes, then perhaps a boost to the kick around 50-70 Hz will help separate the two.
Keep carving away, and adjusting the EQ until all the instruments are clear and distinct. Now when you start doing stereo placement, the sound will be open, with a huge soundstage and a level of clarity you might not obtain otherwise—or which might take a lot of tweaking to achieve.
We’re Not Done with Mono Just Yet…
Okay, now you have a great stereo mix. But it’s also important to make sure your mix collapses well to mono, because you have no control over the playback system. It might play from someone’s smartphone, which sounds mostly mono…or over speakers placed so close together that there’s little real stereo separation. Radio is another medium where the stereo reproduction might not be wonderful.
Some processors, especially ones that control stereo imaging with mid-side processing, may have phase or other issues when collapsed to mono. Short, stereo delays can also have problems collapsing to mono, and produce comb-filtering-type effects. So, hop on over to the main bus, and click the Channel Mode button to convert the output to mono (Fig. 3).
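The comb filtering from short delays is easy to quantify. The sketch below is illustrative math only (the 1 ms delay is a made-up example): summing a dry signal with a copy delayed by tau seconds reinforces the lows but carves deep notches at odd multiples of 1/(2×tau).

```python
import cmath
import math

# Frequency response of a dry signal summed with a copy delayed by tau seconds:
# H(f) = 1 + exp(-j*2*pi*f*tau). Notches fall at odd multiples of 1/(2*tau).
# Illustrative math only; the 1 ms delay is a made-up example.

def mono_sum_gain(f_hz, tau_s):
    """Linear gain of dry + delayed copy at frequency f_hz."""
    return abs(1 + cmath.exp(-2j * math.pi * f_hz * tau_s))

tau = 0.001  # 1 ms delay between the two channels
print(mono_sum_gain(0, tau))     # ~2.0: lows reinforced (+6 dB)
print(mono_sum_gain(500, tau))   # ~0.0: first notch at 1/(2*0.001) = 500 Hz
print(mono_sum_gain(1500, tau))  # ~0.0: next notch at 1.5 kHz
```

That alternating pattern of peaks and notches is exactly the “comb-filtering-type effect” you hear when such a delay collapses to mono.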
Figure 3: The Channel Mode button (circled in yellow) can switch the output between mono and stereo.
Hopefully, everything will sound correct—just collapsed to mono. But if not, start soloing channels and comparing how they sound with the Channel Mode button in stereo and mono, until you chase down the culprit. Make the appropriate adjustments (which may be as simple as tweaking the delay time in one channel of a stereo delay processor), make sure the mix still sounds good in stereo, and you’re done.
I sometimes record acoustic rhythm guitars with one mic for two main reasons: no issues with phase cancellations among multiple mics, and faster setup time. Besides, rhythm guitar parts often sit in the background, so some ambiance with electronic delay and reverb can give a somewhat bigger sound. However, on an album project with the late classical guitarist Linda Cohen, the solo guitar needed to be upfront, and the lack of a stereo image due to using a single mic was problematic.
Rather than experiment with multiple mics and deal with phase issues, I decided to go for the most accurate sound possible from one high-quality condenser mic. This was successful, in the sense that the sound in the control room was virtually identical to what I heard in the studio; but it lacked realism. Thinking about what you hear when sitting close to a classical guitar provided clues on how to obtain the desired sound.
If you’re facing a guitarist, your right ear picks up on some of the finger squeaks and string noise from the guitarist’s fretting hand. Meanwhile, your left ear picks up some of the body’s “bass boom.” Although not as directional as the high-frequency finger noise, it still shifts the lower part of the frequency spectrum somewhat to the left. Meanwhile, the main guitar sound fills the room, providing the acoustic equivalent of a center channel.
Sending the guitar track to two additional buses solved the imaging problem: one bus had a drastic treble cut and was panned somewhat left, while the other had a drastic bass cut and was panned toward the right (Fig. 1).
Figure 1: The main track (toward the left) splits into three pre-fader buses, each with its own EQ.
One send goes to bus 1. Its EQ has a lowpass filter response with a 24 dB/octave slope, set to around 400 Hz (but also try lower frequencies), to focus on the guitar body’s “boom.” Another send goes to bus 2, which emphasizes finger noises and high frequencies. Its EQ has a highpass filter response with a 24 dB/octave slope and a frequency around 1 kHz. Pan bus 1 toward the left and bus 2 toward the right, because if you’re facing a guitarist, the body boom will be toward the listener’s left, and the finger and neck noises will be toward the listener’s right.
The third send goes to bus 3, which carries the main guitar sound. Offset its highpass and lowpass filters a little more than an octave from the other two buses, e.g., 160 Hz for the highpass and 2.4 kHz for the lowpass (Fig. 2). This isn’t “technically correct,” but I felt it gave the best sound.
Figure 2: The top curve trims the response of the main guitar sound, the middle curve isolates the high frequencies, and the lower curve isolates the low frequencies. EQ controls that aren’t relevant are grayed out.
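As a rough illustration of the routing, the sketch below models the split with simple one-pole filters and equal-power panning. The filter slopes, sample values, and pan positions are all stand-ins, not the actual Pro EQ or Studio One pan math.

```python
import math

# Rough model of the three-bus split: a crude one-pole filter stands in for
# the Pro EQ's 24 dB/octave slopes, and equal-power panning places each bus.
# All sample values, cutoffs, and pan positions are illustrative stand-ins.

def one_pole_lowpass(signal, cutoff_hz, sample_rate=48000):
    """First-order lowpass (far gentler than the 24 dB/oct used in the tip)."""
    a = math.exp(-2 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in signal:
        y = (1 - a) * x + a * y
        out.append(y)
    return out

def pan(signal, position):
    """Equal-power pan: position -1 = hard left, 0 = center, +1 = hard right."""
    angle = (position + 1) * math.pi / 4
    return ([math.cos(angle) * x for x in signal],
            [math.sin(angle) * x for x in signal])

guitar = [0.5, -0.3, 0.4, -0.2]       # stand-in for the mono guitar track
lows = one_pole_lowpass(guitar, 400)  # bus 1: body "boom"
highs = [g - l for g, l in zip(guitar, one_pole_lowpass(guitar, 1000))]  # bus 2

low_l, low_r = pan(lows, -0.5)    # body boom somewhat left
high_l, high_r = pan(highs, 0.5)  # finger noise somewhat right
left_out = [a + b for a, b in zip(low_l, high_l)]
right_out = [a + b for a, b in zip(low_r, high_r)]
# Bus 3 (the main guitar sound, bandpassed and centered) would sum on top.
print(left_out, right_out)
```

The key design point survives the simplification: because all three buses derive from the same source with frequency-selective splits rather than delays, the left and right outputs stay phase-coherent when summed to mono.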
Monitor the first two buses, and set a good balance of the low and high frequencies. Then bring up the third send’s level, with its pan centered. The result should be a big guitar sound with a stereo image, but we’re not done quite yet.
The balance of the three tracks is crucial to obtaining the most realistic sound, as are the EQ frequencies. Experiment with the EQ settings, and consider reducing the frequency range of the bus with the main guitar sound. If the image is too wide, pan the low and high-frequency buses more to center. It helps to monitor the output in mono as well as stereo for a reality check.
Once you nail the right settings, you may be taken aback to hear the sound of a stereo acoustic guitar with no phase issues. The sound is stronger, more consistent, and the stereo image is rock-solid.
In this video, producer Paul Drew shows how VocALign works seamlessly inside PreSonus Studio One Professional and almost instantly aligns the timing of multiple vocal tracks to a lead using ARA2, potentially saving hours of painstaking editing time.
ARA (Audio Random Access) is a pioneering extension for audio plug-in interfaces. Co-developed by Celemony and PreSonus, ARA technology enhances communication between the plug-in and the DAW, and gives the plug-in and host instant access to the audio data. This video shows Studio One, but the workflow is very similar in Cubase Pro & Nuendo, Cakewalk by BandLab, and Reaper.
Well…maybe it actually is, and we’ll cover both positive and negative flanging (there’s a link to download multipresets for both options). Both do true, through-zero flanging, which sounds like the vintage, tape-based flanging sound from the late 60s.
The basis of this is—surprise!—our old friend the Autofilter (see the Friday Tip for June 17, Studio One’s Secret Equalizer, for information on using its unusual filter responses for sound design). The more I use that sucker, the more uses I find for it. I’m hoping there’s a dishwashing module in there somewhere…meanwhile, for this tip we’ll use the Comb filter.
Tape flanging depended on two signals playing against each other, with the time delay of one varying while the other stayed constant. Positive flanging was the result of the two signals being in phase, which gave a zinging, resonant type of flanging sound.
Fig. 1 shows the control settings for positive flanging. Turn Auto Gain off, set Mix to 100%, and set both pairs of Env and LFO sliders to 0. Adding Drive gives a little saturation for more of a vintage tape sound (or follow the May 31 tip, In Praise of Saturation, for an alternate tape sound option). Resonance is to taste, but the setting shown in Fig. 1 is a good place to start. The Gain control setting of 3 dB isn’t essential, but it compensates for a volume loss when enabling/bypassing the FX Chain.
Varying the Cutoff controls the flanging effect. We won’t use the Autofilter’s LFO, because real tape flanging didn’t use an LFO—you controlled it by hand. Controlling the flanging process was always inexact due to tape recorder motor inertia, so a better strategy is to automate the Cutoff parameter, and create an automation curve that approximates the way flanging really varied (Fig. 2)—which was most definitely not a sine or triangle wave. A major advantage of creating an automation curve is that we can make sure that the flanging follows the music in the most fitting way.
Throwing one of the two signals used to create flanging out of phase gave negative flanging, which had a hollower, “sucking” kind of sound. Also, when the variable speed tape caught up with and matched the reference tape, the signal canceled briefly due to being out of phase. It’s a little more difficult to create negative flanging, but here’s how to do it.
So is this the best flanger plug-in ever? Well if not, it’s pretty close…listen to the audio examples, and see what you think.
Both examples are adapted/excerpted from the song All Over Again (Every Day).
If you like what you hear, download the multipresets. There are individual ones for Positive Flanging and Negative Flanging. To automate the Flange Freq knob, right-click on it and choose Edit Knob 1 Automation. This overlays an automation envelope on the track that you can edit as desired to control the flanging.
And here’s a fine point for the rocket scientists in the crowd. Although most flangers do flanging by delaying one signal compared to another, most delays can’t go all the way down to 0 ms of delay, which is crucial for through-zero flanging where the two signals cancel at the negative flanging’s peak. The usual workaround is to delay the dry signal somewhat, for example by 1 ms; then if the minimum delay time for the processed signal is 1 ms, the two will be identical and cancel. The advantage of the comb filter approach is that there’s no need to add any delay to the dry signal, yet the two signals can still cancel at the peak of the flanging.
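The arithmetic behind through-zero cancellation is simple enough to sketch. In the toy model below (not the Autofilter’s actual implementation), positive flanging adds a delayed copy to the dry signal and negative flanging subtracts it, so when the delay reaches 0 ms, the negative version cancels completely.

```python
import math

# Toy model of through-zero flanging: dry signal +/- a delayed copy. Positive
# flanging adds the copy; negative flanging subtracts it, so at 0 ms delay the
# two signals cancel completely — the "through-zero" moment.
# Illustrative arithmetic, not the Autofilter's comb-filter implementation.

def flange_gain(f_hz, delay_s, negative=False):
    """Magnitude response of dry +/- delayed copy at frequency f_hz."""
    sign = -1.0 if negative else 1.0
    real = 1.0 + sign * math.cos(2 * math.pi * f_hz * delay_s)
    imag = -sign * math.sin(2 * math.pi * f_hz * delay_s)
    return math.hypot(real, imag)

# At the sweep's peak, the delay passes through 0 ms:
print(flange_gain(1000, 0.0))                 # 2.0 — positive: full reinforcement
print(flange_gain(1000, 0.0, negative=True))  # 0.0 — negative: total cancellation
```

At any nonzero delay, both variants produce the familiar comb of peaks and notches; only the through-zero point behaves this differently.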
Finally, I’d like to mention my latest eBook—More Than Compressors – The Complete Guide to Dynamics in Studio One. It’s the follow-up to the book How to Record and Mix Great Vocals in Studio One. The new book is 146 pages, covers all aspects of dynamics (not just the signal processors), and is available as a download for $9.99.
I like anything that kickstarts creativity and gets you out of a rut—which is what this tip is all about. And, there’s even a bonus tip about how to create a Macro to make this process as simple as invoking a key command.
Here’s the premise. You have a MIDI drum part. It’s fine, but you want to add interest with a fill in various measures. So you move hits around to create a fill, but then you realize you want fills in quite a few places…and maybe you tend to fall into doing the same kind of fills, so you want some fresh ideas.
Here’s the solution: Studio One 4.5’s new Randomize menu, which can introduce random variations in velocity, note length, and other parameters. But what’s of interest for this application is the way Shuffle can move notes around on the timeline, while retaining the same pitch. This is great for drum parts.
The following drum part has a really simple pattern in measure 4—let’s spice it up. The notes follow an 8th-note rhythm; applying Shuffle will retain the 8th-note rhythm, but let’s suppose you want to shuffle the fills into 16th-note rhythms.
Here’s a cool trick for altering the rhythm. If you’re using Impact, mute a drum you’re not using, and enter a string of 16th notes for that drum (outlined in orange in the following image). Then select all the notes you want to shuffle.
Go to the Action menu, and under Process, choose Randomize Notes. Next, click the box for Shuffle notes (outlined in orange).
Click on OK, and the notes will be shuffled to create a new pattern. You won’t hear the “ghost” 16th notes triggering the silent drum, but they’ll affect the shuffle. Here’s the pattern after shuffling.
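Conceptually, Shuffle redistributes the selected notes’ start times while each note keeps its pitch, which is why the silent ghost row adds 16th-note slots to the pool. The sketch below is a guess at the general idea with made-up data, not Studio One’s actual algorithm.

```python
import random

# Conceptual sketch of "Shuffle notes": each selected note keeps its pitch,
# but the pool of start times is redistributed among the notes. The silent
# ghost row contributes extra 16th-note slots to that pool. Made-up data and
# a guess at the general idea — not Studio One's actual algorithm.

def shuffle_notes(notes, seed=None):
    """notes: list of (start_beat, drum) tuples; returns a reshuffled pattern."""
    rng = random.Random(seed)
    starts = [start for start, _ in notes]
    rng.shuffle(starts)
    drums = [drum for _, drum in notes]
    return sorted(zip(starts, drums))

pattern = [(0.0, "kick"), (0.25, "ghost"), (0.5, "snare"), (0.75, "ghost"),
           (1.0, "kick"), (1.25, "ghost"), (1.5, "snare"), (1.75, "ghost")]
print(shuffle_notes(pattern, seed=1))
```

Note that the set of start times and the set of drums are both preserved; only their pairing changes, which matches how the audible hits land on different 16th-note slots after each shuffle.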
If you like what you hear from the randomization, great. But if not, adding a couple more hits manually might do what you need. However, you can also make the randomizing process really efficient by creating a Macro to Undo/Shuffle/hit Enter.
Create the Macro by clicking on Edit|Undo in the left column, and then choose Add. Next, add Musical Functions|Randomize. For the Argument, check Shuffle notes; I also like to randomize Velocity between 40% and 100%. The last step in the Macro is Navigation|Enter. Finally, assign the Macro to a keyboard shortcut. I assigned it to Ctrl+Alt+E (as in, End Boring Drum Parts).
With the Macro, if you don’t like the results of the shuffle, then just hit the keyboard shortcut to initiate another shuffle…listen, decide, repeat as needed. (Note that you need to do the first in a series of shuffles manually because the Macro starts with an Undo command.) It usually doesn’t take too many tries to come up with something cool, or that with minimum modifications will do what you want. Once you have a fill you like, you can erase the ghost notes.
If the fill isn’t “dense” enough, no problem. Just add some extra kick, snare, etc. hits, do the first Randomize process, and then keep hitting the Macro keyboard shortcut until you hear a fill you like. Sometimes, drum hits will end up on the same note—this can actually be useful, by adding unanticipated dynamics.
Perhaps this sounds too good to be true, but try it. It’s never been easier to generate a bunch of fills—and then keep the ones you like best.
Studio One’s Autofilter has a sidechain, which is a good thing—because you can get some really tight, funky sounds by feeding a drum track’s send into the Autofilter’s sidechain, and using it to modulate a track’s audio in time with the beat. Funky guitar, anyone?
But (there’s always a “but,” or there wouldn’t be a Friday Tip of the Week!), although this is a cool effect, a real wah pedal doesn’t start instantly in the toe-down position before sliding back to the heel-down position. Your foot moves the pedal forward, then back, and it takes a finite amount of time to do both.
The “decay-only” nature of autofilters in general is certainly useful with drums. After all, drums are a percussive instrument, and a percussive filter sweep is usually what you want. But the other day I was working on a song, and really wanted an attack/decay filter effect that was more like a real wah pedal—where the filter moved up to the peak, before moving back down again. Here’s the result.
On the Autofilter, Ctrl+click on the LFO sliders to zero them out, so that the LFO isn’t adding its own modulation (although of course, you can do that if you want—the 16 Step option is particularly useful if you do). The screenshot gives a good idea of a typical initial setting.
The dark blue track is the guitar, and the green track is the drum part. I often cut up tracks that modulate other tracks, and Track 3—a copy of the main drum track—is no exception. This track’s pre-fader send goes to the Autofilter’s sidechain input. The track’s channel fader is down, so that the audio doesn’t go through the mixer. We’re using this track only to provide a signal to the Autofilter’s sidechain.
Track 2 is a reversed version of the drum part. It also has a pre-fader send that goes to the Autofilter sidechain (conveniently, you don’t need to bus signals together to send signals from multiple tracks into a Studio One effect’s sidechain). Like Track 3, the track’s channel fader is down, so that the audio doesn’t go through the mixer.
The end result is that the reversed drums provide an attack time that sweeps the filter up, while the forward drums provide a decay that sweeps the filter down. So is the sound more animated than using only the forward drum part? Listen to the audio example, and decide for yourself. The first section uses the forward trigger only, while the second section adds in the attack trigger—the effect is particularly noticeable toward the end.
Comping’s goal is to piece together the best parts of multiple Takes (vocals, guitar, etc.) into a single, cohesive part. This involves Studio One’s loop recording, which repeats a section of music over and over. You record another Take during each pass, while previous Takes are muted. Doing multiple takes without having to stop lets you get comfortable and try different approaches. Once you have multiple versions, you audition them and select the best sections.
However, when auditioning the Takes to decide which sections are best, it’s helpful to compare levels that are as similar as possible. Normalization is the right tool for this. While it’s not yet possible to normalize individual Takes, there’s a simple solution.
If you’ve ever played a large venue like a sports arena, you know that reverb is a completely different animal than what the audience hears. You hear your instrument primarily, and in the spaces between your playing, you hear the reverb coming back at you from the reflections. It might seem that reverb pre-delay would produce the same kind of effect, but it doesn’t “bloom” the way reverb does when you’re center stage in a big acoustical space.
This week’s tip is inspired by the center stage sound, but taken further. The heart of the effect is the Expander, but unlike last week’s Expander-based Dynamic Brightener tip, the Expander is in Duck mode, and fed by a sidechain. Here’s the Console setup.
In the audio example, the source is a funk guitar loop from the PreSonus loop collection; but any audio with spaces in between the notes or chords works well, especially drums (if the cymbals aren’t happening a lot), vocals that aren’t overly sustained, percussion, and the like. I deliberately exaggerated the effect to get the point across, so you might want to be a little more tasteful when you apply this to your own music. Or maybe not…
The guitar’s channel has two sends. One goes to the FX Channel, which has a Room Reverb followed by an Expander. The second send goes to the Expander’s sidechain input. Both are set pre-fader so that you can turn down the main guitar sound by bringing down its fader, and that way, you can hear only the processed sound. This makes it easier to edit the following Room Reverb and Expander settings, which are a suggested point of departure. Remember to enable the Expander’s Sidechain button in the header, and click the Duck button.
The reverb time is long—almost six seconds. This is because we want it going constantly in the background, so that after the Expander finishes ducking the reverb sound, there’s plenty of reverb available to fill in the spaces.
To tweak the settings, turn down the main guitar channel so you can monitor only the processed sound. The Expander’s Threshold knob determines how much you want the reverb to go away when the instrument audio is happening. But really, there are no “wrong” settings—start with the parameters above, play around, and listen to what happens.
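The ducking logic itself is simple. The toy sketch below uses made-up threshold, range, and envelope values, and omits real attack/release ballistics: while the sidechain signal is above the threshold, the reverb is attenuated by the Range amount; in the gaps, it returns to unity.

```python
# Toy sketch of the Expander's Duck-mode behavior: while the sidechain (the
# dry guitar) is above the threshold, the reverb is attenuated; in the gaps,
# the long tail returns at full level. Threshold, range, and envelope values
# are made up, and real attack/release ballistics are omitted.

def duck_gain(sidechain_level, threshold=0.2, range_db=-24.0):
    """Return the linear gain applied to the reverb for one sidechain value."""
    if sidechain_level > threshold:
        return 10 ** (range_db / 20)  # ducked while the guitar is playing
    return 1.0                        # gaps: reverb fills the space

guitar_env = [0.9, 0.8, 0.0, 0.0, 0.7, 0.0]  # chord hits with silence between
print([round(duck_gain(x), 3) for x in guitar_env])
```

The long reverb time pairs with this gain curve: because the tail is nearly six seconds, there’s always stored reverb energy ready to swell back up the moment the gain returns to unity.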
This is a pretty fertile field for experimentation…as the following audio example illustrates. The first part is the guitar and the reverb effect; the reverb tail shows just how long the reverb time setting is. The second part is the reverb effect in isolation, processed sound only, and without the reverb tail.
This is a whole different type of reverb effect—have fun discovering what it can do for you!
When you play an acoustic guitar harder, it not only gets louder, but brighter. Dry, electric guitar doesn’t have that quality…by comparison, the electrified sound by itself is somewhat lifeless. But I’m not here to be negative! Let’s look at a solution that can give your dry electric guitar some more acoustic-like qualities.
How It Works
Create an FX Channel, and add a pre-fader Send to it from your electric guitar track. The FX Channel has an Expander followed by the Pro EQ. The process works by editing the Expander settings so that it passes only the peaks of your playing. Those peaks then pass through a Pro EQ, set for a bass rolloff and a high frequency boost. Therefore, only the peaks become brighter. Here’s the Console setup.
The reason for creating a pre-fader send from the guitar track is so that you can bring the guitar fader down, and monitor only the FX Channel as you adjust the settings for the Expander and Pro EQ. The Expander parameter values are rather critical, because you want to grab only the peaks, and expand the rest of the guitar signal downward. The following settings are a good point of departure, assuming the guitar track’s peaks hit close to 0.
The most important edit you’ll need to make is to the Expander’s Threshold. After it grabs only the peaks, then experiment with the Range and Ratio controls to obtain the sound you want. Finally, choose a balance of the guitar track and the brightener effect from the FX Channel.
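Here’s a toy model of the idea (made-up threshold, ratio, and curve, not the Expander’s actual detector or controls): signals below the threshold are pushed down sharply, so only the peaks survive to receive the treble boost.

```python
# Toy model of the brightener's first stage: a downward expander that pushes
# low-level material far down, so only the peaks pass on to the Pro EQ's
# treble boost. The threshold, ratio, and linear-domain curve are made-up
# stand-ins for the Expander's actual detector and controls.

def expand_down(level, threshold=0.6, ratio=4.0):
    """Pass levels at/above threshold; attenuate progressively below it."""
    if level >= threshold:
        return level
    return threshold * (level / threshold) ** ratio

print(expand_down(0.8))  # a peak: passes essentially intact
print(expand_down(0.3))  # body of the note: pushed way down
```

This is why the Threshold setting is so critical: set it too low and the whole signal gets brightened (defeating the dynamic effect), too high and nothing passes at all.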
The audio example gets the point across. It consists of guitar and drums, because having the drums in the mix underscores how the dynamically brightened guitar can “speak” better in a track. The first five measures are the guitar with the brightener, the next five measures are the guitar without the brightener, and the final five measures are the brightener channel sound only. You may be surprised at how little of the brightener is needed to make a big difference to the overall guitar sound.
Also, try this on acoustic guitar when you want the guitar to really shine through a mix. Hey, there’s nothing wrong with shedding a little brightness on the situation!
You never know where you’ll find inspiration. As I was trying not to listen to the background music in my local supermarket, “She Drives Me Crazy” by Fine Young Cannibals—a song from over 30 years ago!—earwormed its way into my brain. Check it out at https://youtu.be/UtvmTu4zAMg.
My first thought was “they sure don’t make snare drum sounds like those any more.” But hey, we have Studio One! Surely there’s a way to do that—and there is. The basic idea is to extract a trigger from a snare, use it to drive the Mai Tai synth, then layer it to enhance the snare.
Skeptical? Check out the audio example.
ISOLATING THE SNARE
If you’re dealing with a drum loop or submix, you first need to extract the snare sound.
TWEAKING THE MAI TAI
Now the fun begins! Figure 3 shows a typical starting point for a snare-enhancing sound.
The reason for choosing Mai Tai as the sound source is its “Character” options that, along with the filter controls, noise Color control, and FX (particularly Reverb, EQ, and Distortion), produce a huge variety of electronic snare sounds. The Character module’s Sound and Amount controls are particularly helpful. The more you play with the controls, the more you’ll start to understand just how many sounds are possible.
BUT WAIT…THERE’S MORE!
If the snare is on a separate track, then you don’t need the Pro EQ or FX Channel. Just insert a Gate in the snare track, enable the Gate’s trigger output, and adjust the Gate Threshold controls to trigger on each snare drum hit. The comments above regarding the Attack, Release, and Hold controls apply here as well.
Nor are you limited to snare. You can isolate the kick drum, and trigger a massive, low-frequency sine wave from the Mai Tai to get those car-door-vibrating kick drums. Toms can sometimes be easy to isolate, depending on how they’re tuned. And don’t be afraid to venture outside of the “drum enhancement” comfort zone—sometimes the wrong Gate threshold settings, driving the wrong sound, can produce an effect that’s deliciously “right.”