Summer may be over in the northern hemisphere, but we can still splash around. This is one of those “hiding in plain sight” tips, but it’s pretty cool.
The premise: Sometimes you don’t want reverb all the time, so you kick up the send control to push something like a snare hit into the reverb for a quick reverb “splash” (anyone who’s listened to my music knows this is one of my favorite techniques). The reverb adds a dramatic emphasis to the rhythm, but is short enough that it doesn’t wear out its welcome—listen to the audio example, which demos this technique with Studio One’s Crowish Acoustic Chorus 1 drum loop.
Although this technique is great with drums, it also works well with rhythm guitar, hand percussion, synths, you name it… even kick works well in some songs. I’m not convinced about bass, but aside from that, this has a lot of uses.
Studio One offers an easy way to produce regular splashes automatically (like on the second and fourth beats of a measure, where an emphasizing element hits). Insert X-Trem before the reverb, select 16 Steps as the “waveform,” click Sync, and choose your rhythm. The screenshot shows Beats set to 1/2 so that the reverb splash happens on 2 and 4, which in the case of the audio example, adds reverb to the snare on 2, and to the closed high-hat on 4.
And that’s pretty much it. Because the reverb is in a bus, set Mix to 100%. The 480 Hall from Halls > Medium Halls is one of my faves for this application, but hey… use whatever ’verb puts a smile on your face.
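If you think of the X-Trem step “waveform” as a gate on the reverb send, the whole trick fits in a few lines. The following sketch is illustrative only (the function name, constants, and the full-scale dry signal are all made up for the demo), but it shows the mechanism: sixteen steps per bar, with the send open only on the steps where beats 2 and 4 land.

```python
# A sketch (not Studio One code) of what the X-Trem 16-step gate does
# ahead of the reverb: the send opens only on chosen 16th-note steps,
# so only hits on those steps reach the reverb. All names and numbers
# here are illustrative.
def splash_send(dry, sr, bpm, open_steps):
    """Gate a reverb send: 16 steps per 4/4 bar; audio passes only on open steps."""
    step_len = int(sr * 60 / bpm / 4)      # 16 steps per bar = 4 steps per beat
    send = []
    for i, x in enumerate(dry):
        step = (i // step_len) % 16
        send.append(x if step in open_steps else 0.0)
    return send                             # this feeds the reverb (Mix at 100%)

# Beats 2 and 4 begin at steps 4 and 12 (counting from step 0)
sr, bpm = 44100, 120
dry = [1.0] * (sr * 2)                      # one 4/4 bar at 120 BPM
send = splash_send(dry, sr, bpm, open_steps={4, 12})
```

Setting Beats to 1/2 in X-Trem accomplishes the same thing as the `open_steps={4, 12}` choice here: the gate opens twice per bar, on 2 and 4.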
I’m not one of those people who wants to do heavy compression all the time, but I do feel bass is an exception. Mics, speakers, and rooms tend to have response anomalies in the bass range; even if you’re using bass recorded direct, compression can help even out the response for a smoother, rounder sound.
Although stereo compressors are the usual go-to for bass, I often prefer a multiband dynamics processor because it can serve simultaneously as a compressor and EQ. Typically, I’ll apply a lot of compression to the lowest band (crossover below 200 Hz or so), light compression to the low-mid bands (as well as reduce their levels in the overall mix), and medium compression to the high-mid band (from about 1.2 kHz to 6 kHz). I often turn down the level for the band above 5-6 kHz or so (there’s not a lot happening up there with bass anyway), but sometimes I’ll set a ratio below 1.0 so that the highest band turns into an expander. If there’s any hiss in the very highest band, this will help reduce it. Another advantage of using Multiband Dynamics is that you can tweak the high and low band gain parameters so that the bass fits well with the rest of the tracks.
The preset in the following screenshot gives a sound like “Tuned Thunder,” thanks to heavy compression in the lowest band. To choose a loop that’s good for demoing this sound, choose Rock > Bass > Clean, and then select 08 02 P Ransack D riff.audioloop. Insert the Multiband Dynamics processor, and start with the default preset.
As with most dynamic processing presets, the effect is highly dependent on the input level. For this preset, normalize the bass loop. Then change the L band to 125 Hz, with a ratio of 15:1, and a Low Threshold of -30 dB. Mute the LM band.
With the Multiband Dynamics processor bypassed, observe the peak value for the bass track. Now enable Multiband Dynamics, and adjust the Low band’s Gain until the peak value matches the peak value with Multiband Dynamics bypassed. You’ll hear a big, fat, round sound that sort of tunnels through a mix.
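The Low band’s 15:1 ratio at a -30 dB threshold is doing most of the work here. A quick sketch of a generic hard-knee compressor’s static curve (an assumption on my part; Studio One’s exact curve may differ) shows why it flattens the band so thoroughly, and also how a ratio below 1.0 flips the same math into the expansion mentioned earlier for the highest band.

```python
# Hedged sketch of a hard-knee compressor's static curve: levels below
# the threshold pass unchanged; anything above it rises at only
# 1/ratio. A ratio below 1.0 makes levels rise *faster* above the
# threshold, i.e., expansion.
def comp_out_db(level_db, threshold_db, ratio):
    """Output level in dB for a hard-knee compressor."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# The Low band settings above: 15:1 at -30 dB. A 20 dB input swing
# collapses to roughly 1.3 dB of output variation.
quiet = comp_out_db(-25.0, -30.0, 15.0)   # about -29.7 dB
loud = comp_out_db(-5.0, -30.0, 15.0)     # about -28.3 dB
swing = loud - quiet                       # about 1.3 dB left of a 20 dB swing
```

That near-constant low end is the “tuned thunder” character: every note lands at almost the same level.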
Now let’s go to the other extreme. A significant treble boost can help a bass hold its own against other tracks, because the ear/brain combination will fill in the lower frequencies. The next screenshot shows settings for extreme articulation so the bass really “pops,” and cuts through a track. Again, start with the default preset but set the Low band frequency to 110 Hz or so.
The only band that’s compressed is the Mid band (320 Hz to 1.2 kHz, with parameter settings shown in the screenshot). A bit of gain for the High Mid band emphasizes pick noise and harmonics—5 dB or so seems about right—and to compensate for the extra highs, add some gain to the Low band below 110 Hz. Again, about 4-5 dB seems to work well.
When adjusting the Multiband Dynamics processor, note that you can zero in on the exact effect you want for each band by using the Solo and Mute buttons on individual stages. So next time you want to both compress and equalize bass, consider using Multiband Dynamics instead—and get the best of both worlds.
The “pumping” effect is a cool EDM staple that also works with other intense forms of music. One of the best-known examples is Eric Prydz’s seminal EDM track from 2004, “Call on Me.” Usually, this technique requires sidechaining, but with the PreSonus Compressor sidechain filter, we’re covered. The effect works best if there are some sustaining sounds with which it can work—like cymbals for drum parts, or pads if you want to pump a non-drum track. Listen to the audio example to hear how the pumping effect alters a drum track.
To start, let’s try pumping some drums. Insert the Compressor in the track, and click on the Compressor’s Filter and Listen Filter buttons. To have the kick create the signal that provides the pumping, set the Lowcut frequency to off, and lower the Highcut filter until you hear pretty much nothing but kick. Once you’ve isolated the kick (or snare, or whatever you want to isolate), turn off Listen Filter but leave Filter on.
The control settings are quite crucial; the screenshot shows some potential initial settings, but you’ll need to edit the controls based on the source audio and the desired effect.
The effect’s depth, like any compression effect, depends on the Threshold and Ratio settings. For a pretty heavy-duty effect, set Threshold between -20 and -30 dB and Ratio around 10. You’ll want to tweak this depending on the program material, but it’s a good place to start.
Now for the pumping. Start with Attack at minimum, and set Release for the desired amount of pumping—you’ll probably want a time between 100 and 300 ms, depending on the song and the material. To restore some of the attack at the start of the pumping, increase the Attack time. Even a little bit, like 5 ms, restores most of the attack’s effect.
Finally, note that because this effect does in fact compress, you’ll probably want to add some makeup gain. And once you do, there you have it—the pumping sound.
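The whole pumping effect boils down to a gain envelope: a near-instant drop on each kick, followed by a slow exponential recovery set by the Release time. The sketch below is a simplified model, not the Compressor’s actual algorithm; the function name, 12 dB depth, and 200 ms release are placeholder values.

```python
# Illustrative model of the pumping behavior: each kick ducks the gain
# almost instantly (minimum Attack), and the Release time sets how
# slowly the gain climbs back. That recovery ramp is the audible
# "pump". All numbers are placeholders, not Compressor presets.
import math

def pump_envelope(n, sr, kick_samples, depth_db, release_ms):
    """Per-sample linear gain for a sidechain pumping effect."""
    rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    floor = 10 ** (-depth_db / 20.0)       # gain at full reduction
    env, out = 1.0, []
    for i in range(n):
        if i in kick_samples:
            env = floor                    # kick hits: full gain reduction
        env = 1.0 - (1.0 - env) * rel      # exponential recovery toward unity
        out.append(env)
    return out

# One second of envelope with a kick at the very start
sr = 44100
env = pump_envelope(sr, sr, kick_samples={0}, depth_db=12.0, release_ms=200.0)
```

Multiply a sustaining track by this envelope and you get the characteristic suck-and-swell; lengthening `release_ms` exaggerates the pump, just as the Release control does.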
I’ve always appreciated Studio One’s analytics—the spectrum analyzer, the dynamic range meter in older versions and the more modern LUFS metering in Studio One 4, the K-Scale meters based on Bob Katz’s research, the strobe tuner, and the ability to stretch the faders in the Mix view when you want to couple high resolution with long fader travel. But I wonder if the Phase Meter and its companion Correlation Meter get the props they deserve, so let’s look at what this combo can do for you.
Phase Meters—Not Just for Mixdowns!
Most people consider a tool like the Phase Meter as being only for checking final mixes. However, one very useful technique is putting it in the master output bus, and soloing one track at a time (remember, you can Alt+click on a track’s Solo button for an “exclusive solo” function). This gives some insights into the phase, level, and stereo spread of individual tracks in a way that’s more revealing than just looking over panpots.
Correlation Meter Basics
In brief, the Correlation meter (the bar graph at the Phase meter’s bottom) indicates a stereo signal’s mono compatibility. This was of crucial importance when mastering for vinyl, because it could indicate if there were out-of-phase audio components in the audio that could possibly cause the stylus to jump out of its groove. These days, it’s largely a stereo world but it’s still important to check for mono compatibility—after all, when listening to speakers, you don’t have perfect stereo separation. You’ll usually monitor correlation in the master bus, but for individual tracks, it can indicate whether (for example) a signal processor is throwing a track’s left and right channels out of phase.
The Correlation meter reading spans the range between -1 (the right and left channels are completely out of phase, with no correlation) and +1 (the right and left channels are identical, and correlate completely). With most mixes, the bar graph will fluctuate between 0 and +1.
If the Phase meter displays a single vertical line, then the left and right channels are identical, and the track is mono. The Correlation bar graph meter at the bottom confirms this with its reading of 1.00, which means the left and right channels correlate completely—in other words, they aren’t just similar, but identical.
Left and Right Readings
If there’s a single, diagonal line on the L axis, that means all the signal’s energy is concentrated in the left channel. Similarly, if there’s a single, diagonal line on the R axis, then all the signal’s energy is concentrated in the right channel. If you pan a track where the left and right channels are identical (as shown by the Correlation meter displaying 1.00), then the line will move from one channel to the other.
With stereo, you’ll see an excellent visual representation of how much the signal extends into the stereo field. The vertical size indicates the level. As you pan the signal left or right, the stereo field will become narrower around the line that moves from left to right until at one extreme or the other, you’ll see only a diagonal line on the L or R axis.
Note the Correlation meter is showing +0.47. This means the similarities between the left and right channels roughly balance the differences, but nothing is out of phase.
Mid-Side Encoded Audio
With Mid-Side encoded audio, you’ll see amplitude around the L and R axes, as well as along the M axis. Because the L signal is the center and the R signal the sides, you’ll see a lot more level along the L axis. Also, note the Correlation meter setting of 0.00—this means that there’s no similarity between the right and left channels, which is what you’d expect with a Mid-Side encoded signal.
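Assuming the standard sum/difference encoding (Mid = (L+R)/2 carried on one channel, Side = (L-R)/2 on the other), the 0.00 reading falls out of the math: whenever the source’s left and right channels carry equal energy, Mid and Side are uncorrelated. A minimal sketch:

```python
# Minimal M/S sketch, assuming the standard sum/difference encoding:
# Mid = (L + R)/2 and Side = (L - R)/2. When the source's left and
# right carry equal energy, Mid and Side are uncorrelated, hence the
# Correlation meter's 0.00 reading.
def ms_encode(left, right):
    mid = [(l + r) / 2.0 for l, r in zip(left, right)]
    side = [(l - r) / 2.0 for l, r in zip(left, right)]
    return mid, side

# Equal-energy example: the Mid/Side dot product lands exactly on zero
left = [1.0, 0.0, 1.0, 0.0]
right = [0.0, 1.0, 0.0, -1.0]
mid, side = ms_encode(left, right)
dot = sum(m * s for m, s in zip(mid, side))   # zero numerator = zero correlation
```

The dot product is the numerator of the correlation calculation, so a zero here means the meter reads 0.00 regardless of the channels’ levels.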
Binaural Pan Signal
Studio One’s Binaural Pan processor widens the stereo image so that there’s much more energy in the right and left sides than in the center; this image shows what happens when you set the widening to maximum. Compare this to the reading for stereo signals—you can see that in this case, the energy extends further out to the right and left. Furthermore, the Correlation meter shows that there are no significant similarities between the right and left channels, which is a result of the Binaural Pan processor being based on Mid-Side processing.
Here, the Correlation meter shows a negative number, which means there are out-of-phase elements within the stereo mix. Occasional negative blips aren’t a problem, but if the Correlation meter spends a substantial amount of time to the left of 0, then there’s a phase issue that will interfere with mono compatibility.
The Shepard Tone (aka Barberpole) is an audio illusion where a tone always seems to keep rising (or falling). You may have heard it before—to build tension in music by Swedish House Mafia, Beatsystem, Data Life, and Franz Ferdinand, as the sound effect for the endless staircase in Super Mario 64, for the sound of constant acceleration for the Batpod in The Dark Knight and The Dark Knight Rises, at the end of Pink Floyd’s “Echoes” from the Meddle album, or in the soundtrack for the film Dunkirk in sections where the goal was to produce a vibe of increasing intensity. Check out the audio example, and you’ll hear how the tone just goes on forever.
Thanks to Studio One’s Tone Generator, it’s easy to produce a Shepard Tone loop—just follow the step-by-step instructions, in a song with the tempo set to 120 BPM.
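For the curious, the illusion itself is simple to synthesize from scratch. The sketch below is independent of the Tone Generator recipe: several sine partials spaced an octave apart all glide upward together, and each fades in at the bottom of the range and out at the top, so no individual partial’s entrance or exit is audible. Every constant (sample rate, duration, partial count, base frequency) is illustrative.

```python
# Self-contained Shepard-tone sketch: octave-spaced sine partials glide
# upward in log frequency; a raised-cosine window fades each partial in
# at the bottom of the range and out at the top, so the rise sounds
# endless. All constants are illustrative.
import math

def shepard_cycle(sr=44100, seconds=4.0, partials=6, f_low=40.0):
    """One rising cycle of a Shepard tone, as a list of samples in [-1, 1]."""
    n = int(sr * seconds)
    out = [0.0] * n
    for p in range(partials):
        phase = 0.0
        for i in range(n):
            t = (i / n + p / partials) % 1.0         # cyclic position of this partial
            freq = f_low * 2.0 ** (t * partials)     # log-frequency glide upward
            amp = 0.5 * (1.0 - math.cos(2.0 * math.pi * t))  # raised-cosine fade
            phase += 2.0 * math.pi * freq / sr
            out[i] += amp * math.sin(phase) / partials
    return out

# Quick low-rate demo render (use a real sample rate for actual audio)
cycle = shepard_cycle(sr=8000, seconds=1.0, partials=4, f_low=60.0)
```

Because each partial’s amplitude reaches zero exactly where it wraps around, the returned buffer can be looped, and the rise appears to continue forever.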
This tip is for those who won’t sign off on a mix until they’ve heard it in a car. There may be a scientific reason why this is beneficial: noise tends to mask sounds, so if one instrument you want to hear gets lost in the noise while another jumps out, raise the buried instrument’s level and lower the one that jumps out.
The ear doesn’t discriminate level differences as accurately as pitch differences, so without noise masking a sound, the level may seem okay. But as soon as you mix in noise, an important sound may disappear. If you increase the level just a bit so you can hear it, when you remove the noise there’s a very good chance you’ll like the new level setting better. Think of this as doing something similar to compression, but without applying any actual dynamics. You’re just making sure that the levels that need parity have parity.
Of course this doesn’t mean you want everything jumping out of the noise—those tambourine and shaker parts are probably just fine as they are. The main sounds to listen to here are vocals, leads, drums, and bass, as well as their relationship to each other.
This also doesn’t mean you should mix consistently with noise, as it will bias your hearing (and besides, it’s truly annoying). I add noise to a mix only as a last diagnostic step. If the mix has sounded fine up until then and passes this final test, I consider it ready to master. And I don’t need to go driving anywhere, either.
Just follow the steps, and you’ll be good to go.
One very cool aspect of the Tone Generator’s noise is that it’s true stereo where the left and right channels don’t correlate, so you don’t get any center channel buildup (as would happen with a mono noise signal).
As to how much noise to add, it’s kind of like maximizing. Set it 6 dB below the mix’s peaks, and you’ll hear what occupies the upper 6 dB of dynamic range. Set it 12 dB below the mix’s peaks, and you’ll hear what’s in the upper 12 dB of dynamic range. This isn’t an exact spec per se, but it provides a rough standard of comparison.
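The calibration above is easy to sketch. The function name and seeding scheme below are hypothetical, but the two essentials from this tip are there: the two noise channels are generated independently (so they don’t correlate, avoiding center buildup), and the noise amplitude sits a chosen number of dB below the mix’s peak.

```python
# Sketch of the masking-noise calibration: two independently seeded
# noise channels (so left and right don't correlate) set a chosen
# number of dB below the mix's peak. Names and the seeding scheme are
# hypothetical.
import random

def masking_noise(mix_peak, db_below, n, seed=0):
    """Uncorrelated stereo noise peaking (mix_peak's level - db_below) dB."""
    amp = mix_peak * 10 ** (-db_below / 20.0)
    rng_l, rng_r = random.Random(seed), random.Random(seed + 1)
    left = [rng_l.uniform(-amp, amp) for _ in range(n)]
    right = [rng_r.uniform(-amp, amp) for _ in range(n)]
    return left, right

# 6 dB below a full-scale peak: the noise tops out near half amplitude,
# so it masks everything below the top 6 dB of the mix's dynamic range.
left, right = masking_noise(mix_peak=1.0, db_below=6.0, n=10000)
```

Swap `db_below=6.0` for 12.0 and you’re auditioning the upper 12 dB of the mix instead.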
As crazy as this idea sounds, try it sometime and tweak your mix. Then turn off the noise, take a short break so your ears get acclimated back to normal hearing, and then check the mix again. I won’t be surprised if you hear an improvement!
This tip is for those of you who didn’t see my Studio One workshop at Sweetwater GearFest 2018, were turned away because of that pesky fire marshal’s rules about crowds, or didn’t realize Studio One 4 has some pretty advanced looping capabilities—as well as the ability to trigger pitch transpositions for loops.
With Impact XT, you can load loops on pads, and then trigger them (on and off) in real time via MIDI notes. Assign each output from an Impact XT pad to a track input (in the screen shot, Track 5 is recording the output of Impact XT M4), set all the tracks to record, and you can record the results of your improvisations.
The following screen shot shows the results of recording the first part of a potential song. Note how some tracks have sounds that extend the length of the recording, while other tracks had their sounds brought in at specific times by triggering an Impact XT pad.
This by itself is pretty cool, because you can weave loops in and out to create an arrangement. The song goes longer than this, but the above shows what you’re hearing in the following audio example. Granted, it’s not much of a song—it just kinda drones on and on. But keep reading…this is just the start.
The process becomes far more interesting when you bring the Chord Track into play, because you can transpose the loops to create a chord progression that becomes the basis for a song. All the tracks, even the drums, were set to follow the Chord Track. Note that although only some of the original loops added a fourth to the tonic, once everything was synced to the Chord Track, all of the loops followed a tonic-to-fourth chord progression. In other words, it wasn’t just one loop adding a fourth, but the entire song transposing to the fourth. We also gained an intro; here’s the chord progression that was used.
And here’s what the chord progression sounded like after harmonic editing. The major difference is in the intro, and transposing to D to kick off the second half of each verse.
Working this way can be very inspirational because you can create a basic arrangement with loops, and then use the Chord Track to create a chord progression. Although PreSonus is careful to point out that Harmonic Editing is more for “prototyping” songs and they expect that you’ll want to replace the “scratch” parts, I’ve found that many times the scratch parts end up being keepers—and I gotta say, I love what happens when you tell drums to follow the chord track!
So there you are, with your shiny new Impact XT virtual instrument. You want to populate the pads with some fun drum sounds, and although you like the included kits, you’re itching to get creative and come up with some kits of your own. Fortunately, it’s easy to use Audioloops to populate your Impact XT with a custom selection of drum sounds.
Open the Browser, and under Loops, look for files with the .audioloop suffix. The reason .audioloop files stretch elegantly is that the loop is cut into slices, with each slice representing an individual “block” of sound—kick, snare, clap, kick and cymbal hitting at the same time, snare and high-hat hitting at the same time…whatever.
When you expand an .audioloop, you’ll see each slice listed individually. Some have only one or two slices, but others—for example, the Combo Beat loops under Electronic > Drums > Loop (Fig. 1)—are rich sources of slices.
Figure 1: The Browser’s Loop tab is loaded with slices, just waiting to be used with Impact XT to create custom kits.
Next, open up Impact XT. To audition the slices, toward the bottom of the Browser turn off the loop and metronome options, select a slice, and then click on the Play > button. Click on various slices and when you hear something you like, drag it over to an Impact XT pad (Fig. 2). You won’t have to click the Play button again while auditioning slices until after you drag a slice over.
Figure 2: Drag slices over to Impact XT pads.
The real fun begins when you start to use Impact XT’s sound-shaping options. For example, in Figure 2, one of the kick slices has been dropped in pitch, truncated, filtered, and given a new Amp decay setting to sound more like an explosion. Note that the pad’s name will be the same as the .audioloop, so if you’re using multiple slices from the same .audioloop, rename the pad to avoid confusion (right-click on the pad and choose Rename).
And remember, you’re not limited to dragging over slices from the Browser—you can split any file in the Edit window at the Bend markers, and drop those slices into Impact XT.
Sure, Impact XT comes with a lot of preset, ready-to-go kits when you just want to load something and start grooving. But you might be surprised how doing a little mixing and matching with the Browser slices can create something new and different—and each new pad sound is only a click + drag away.
A rotating speaker is an extremely complex signal processor (as most mechanical signal processors are—like plate reverbs). It combines phase shifts, Doppler shifts, positional changes, timbral variations, and more. And of course, Studio One includes the Rotor processor, which does a fine job of capturing the classic rotating speaker sound.
However, I’ve always felt that rotating speakers have a lot more potential as an effect than just emulating physical versions—hence this FX chain. By “deconstructing” the elements that make up the rotating speaker sound, you can customize it not only to tweak the rotating speaker effect to your liking, but to create useful variations that don’t necessarily relate to “the real thing.” What if you want a speed that’s between slow and fast? Or a subtler effect that works well with guitar? Or want to simulate the way the horn changes speeds faster than the woofer, because it has less inertia? This FX chain provides a useful, more subtle variation on Rotor’s rotating speaker sound—check out the audio example—but the best way to take advantage of this week’s tip is to download the multipreset, roll up your sleeves, and start playing around.
Rotating speaker basics. There are two rotating speakers—one high-frequency driver, and one low-frequency drum. A crossover splits the signal to these two paths, so we’ll start the emulation by setting the Splitter to Frequency Split mode around 800 Hz. Here’s the routing.
The high-frequency and low-frequency paths each go into a Flanger to provide Doppler and phase shifts, and an X-Trem for subtle panning to provide the positional cues. Let’s look at the individual module settings.
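The Frequency Split stage can be sketched with a simple complementary filter pair. The one-pole filter below is a deliberate simplification (the Splitter’s actual crossover is certainly steeper), but it shows the key property: subtracting the lowpass output from the input leaves a complementary highpass, so the two paths sum back to the original signal exactly.

```python
# Sketch of a frequency split via a complementary filter pair: a
# one-pole lowpass feeds the low (drum) path, and input minus lowpass
# is the complementary high (horn) path. The 800 Hz point matches the
# text; the one-pole filter itself is a simplification.
import math

def frequency_split(samples, sr, crossover_hz):
    """Split audio into complementary low and high paths at crossover_hz."""
    a = math.exp(-2.0 * math.pi * crossover_hz / sr)
    low, high, state = [], [], 0.0
    for x in samples:
        state = (1.0 - a) * x + a * state   # one-pole lowpass
        low.append(state)
        high.append(x - state)              # complement: the highpass path
    return low, high

sig = [0.0, 1.0, 0.0, -1.0, 0.5, 0.25]
low, high = frequency_split(sig, sr=44100, crossover_hz=800.0)
```

Because `low + high` reconstructs the input, you can process the two paths independently (different Flanger and X-Trem settings, per the chain above) without losing any of the source.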
The Analog Delay adds a 23 ms reflection for a bit of a room sound vibe, with some modulation to add a Doppler shift accent. Finally, an Open Air reverb (using the 480 Hall from Medium Halls) creates a space for the rotating speaker.
Knob Control. This was the hardest part of the emulation, because changing speed has to alter (of course) Flanger speed, but also the Flanger’s LFO Width because you want less width at faster LFO speeds. The X-Trem speed and Analog Delay LFO speed also need to follow the range from slow to fast.
However, the curves for the control changes are quite challenging because the controls don’t all cover the same range. Fortunately you can “bend” curves in FX Chains, but you can’t have more than one node. As a result, I optimized the knob settings for the lowest and highest speeds—besides, a real rotating speaker switches to either speed, and “glides” between the two settings as it changes from one to the other. An additional subtlety is that the high-frequency “speaker” needs to rotate just a little faster than the low-frequency one. Also, they shouldn’t track each other exactly when going from the slowest to the fastest speed because with a physical rotating speaker, the low-frequency drum has more inertia.
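The one-knob-to-many-parameters idea can be illustrated with a simple per-target remap. This is a linear simplification of what the FX chain’s bent curves do, and every Hz value here is made up; the point is only that each parameter gets its own range, with the horn ending slightly faster than the drum.

```python
# Sketch of one knob driving several parameters over different ranges.
# Linear remap per target; a bent curve (as in the FX chain) would
# replace remap() with a nonlinear one. All Hz values are illustrative.
def remap(knob, lo, hi):
    """Map a knob position in [0, 1] onto the range [lo, hi]."""
    return lo + knob * (hi - lo)

def rotor_speeds(knob):
    return {
        "horn_hz": remap(knob, 0.8, 7.0),   # high-frequency rotor: less inertia
        "drum_hz": remap(knob, 0.7, 6.2),   # low-frequency drum: lags behind
        "trem_hz": remap(knob, 0.7, 6.5),   # X-Trem panning rate
    }

slow, fast = rotor_speeds(0.0), rotor_speeds(1.0)
```

At both extremes the horn runs a little faster than the drum, matching the behavior described above.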
All these curves do complicate editing any automation, because you need to write-enable each parameter when you turn the knob. So if you need to change some automation moves you made, I recommend not trying to edit each curve—just try another performance with the knob.
Oh, and don’t forget to try this on instruments other than organ!
I’ve always been fascinated with using one instrument to modulate another—like using a vocoder on guitar or pads, but with drums as the modulator instead of voice. This kind of processing is a natural for dance music, and using a noise gate’s sidechain to gate one instrument with another (e.g., bass gated by kick drum) is a common technique.
However, the sound of gating has always seemed somewhat abrupt to me, regardless of how I tweaked a gate’s attack, decay, threshold, and range parameters. I wanted something that felt a little more natural, a little less electro, and gave more flexibility. The answer is a bit off the wall, but try it—or at least listen to the audio example.
Setup requires copying the track you want to modulate (the middle track below), and then using the Mixtool to flip the copy’s audio 180 degrees out of phase (i.e., enable Invert Phase). This causes the audio from the original track and its copy to cancel. Then, insert a compressor in the copy, and feed its sidechain with a send from the track doing the modulating. In this case, it’s the drum track at the top.
When the compressor kicks in, it reduces the gain of the audio that’s out of phase, thus reducing the amount of cancellation. However, as you’ll hear in the example, the gain changes don’t have the same character as gating.
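The arithmetic behind the trick is worth spelling out: the original plus a phase-inverted copy at gain g sums to the original times (1 - g). The function below is just an illustration of that identity, with hypothetical names and values.

```python
# Sketch of why this works: original + inverted copy at gain g equals
# original * (1 - g). Copy at unity gain cancels to silence; as the
# sidechain compressor pulls the copy's gain down, the original
# re-emerges smoothly instead of snapping open like a gate.
def cancel_gate(x, copy_gain):
    """Sum of a sample and its phase-inverted copy scaled by copy_gain."""
    return x + (-x) * copy_gain   # equals x * (1 - copy_gain)

closed = cancel_gate(0.8, 1.0)    # full cancellation: silence
partly = cancel_gate(0.8, 0.3)    # compressor ducks the copy: most of x passes
```

Because the compressor’s gain reduction follows its attack and release curves rather than a gate’s open/close logic, the (1 - g) term changes smoothly, which is exactly the less abrupt character described above.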
You can also take this technique further with automation. The screenshot shows automation that’s adjusting the compressor’s threshold; the lower the threshold, the less cancellation. The threshold setting also determines when the “gating” effect occurs. Also, it’s worth experimenting with the Auto and Adaptive modes for Attack and Release, as well as leaving them both turned off and setting their parameters manually.
Using a compressor for “gating” allows for flexibility that eluded me when adjusting a standard noise gate. If you want super-tight rhythmic sync between two instruments, this is an unusual—but useful—alternative to sidechain-based gating.