Sometimes it seems that certain recorded sounds, like acoustic guitar attacks and percussion, just don’t have the “sparkle” you hear when they’re playing live. The Sparkler is a sophisticated brightening FX Chain that adds definition—without treble equalization.
The Sparkler is a parallel effect. Referring to the FX Chain structure, a Splitter in normal mode creates a dry path through the Mixtool, which raises the level by 6 dB. This compensates for the volume drop that occurs, compared to bypassing the FX Chain, because one of the splits contributes no significant level. The other split goes to the Sparkler effect, which consists of the Pro EQ, Redlight Dist, and Dual Pan.
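The 6 dB figure is easy to verify with a little arithmetic. Here's a minimal sketch, assuming the Splitter's two paths sum back together at equal gain:

```python
import math

# Two equal splits summed back together reconstruct the full signal.
# If one split contributes essentially nothing (like the Sparkle path
# at subtle settings), only half the amplitude remains.
full = 0.5 + 0.5        # both splits contribute
dry_only = 0.5 + 0.0    # effect split contributes no significant level

drop_db = 20 * math.log10(dry_only / full)
print(round(drop_db, 2))  # -6.02, hence the Mixtool's +6 dB makeup gain
```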
How it works. First, the signal goes through the Pro EQ, set for a steep (48 dB/octave) high-pass filter that leaves only the very highest frequencies intact. The Low Cutoff control varies the cutoff from 7.6 kHz to 12.5 kHz. The Redlight Dist synthesizes harmonics from those high frequencies. (Even though it has a High Freq control, that’s not a drastic enough cutoff—hence the Pro EQ.) The Soft/Hard control chooses between one and two distortion stages; one stage is my preference because it sounds more natural, but people with anger management issues might prefer two stages, which give a nastier, more aggressive sound.
The Amount control sets the Redlight Dist output, which determines how much Sparkle gets added in parallel with the main signal. Use the Sparkle Bypass button to compare the sound with and without the Sparkle effect.
The reason for the Dual Pan module requires some explanation. The Sparkle FX Chain is intended for individual tracks, buses, and even master mixes when used subtly. Highs are very directional, so with a bus or master, if a trebly instrument like tambourine is mixed off to one side, the Sparkle effect can “tilt” the image toward the channel with more highs. The Center Highs control, when turned clockwise, brings the Left and Right “sparkle” channels more to center, until at fully clockwise the highs for both channels are centered.
Applying the Sparkle. To learn what the Sparkle effect does, it’s best to listen to the effect by itself and manipulate the controls to hear the results. Unfortunately, you can’t assign FX Chain controls to Splitter parameters, so if you want to hear the Sparkle sound in isolation, go into the FX Chain and bring the post-Mixtool level control all the way down. As you tweak the Sparkle sound in isolation, grab only the highest audible frequencies, and avoid harsh distortion—you want just a hint of breakup, and only at the highest frequencies.
When using the Sparkle effect in context with a track or bus, start with the Amount control at minimum, and bring it up slowly. Use the Bypass button for a reality check—you want just a subtle brightening, not highs that hit you over the head and make dogs run away in panic. It takes a little effort to master what this effect can do, and it’s not something you want to use all the time. But when used properly, it can really add—well, sparkle—to tracks that need it.
I did a Harmonic Tremolo as a Sonar FX Chain tip, and it was very popular—so here’s a Studio One-specific version. For those not familiar with the term, some of the older Fender “brown” amps used a variation on the standard, amplitude-oriented tremolo, which the company called “harmonic tremolo.” It splits the signal into high and low bands, and then an LFO amplitude-modulates them out of phase, so that while the highs get louder, the lows get softer, and vice-versa. The sound is quite different from a standard tremolo, and many players feel it’s “sweeter.” But unlike a guitar amp, you can sync this tremolo to the rhythm—and that makes it a useful addition to groove-oriented music as well.
Here’s the FX Chain “schematic.”
X-Trem needs to be in Pan Mode or this won’t work. As a result, this FX Chain must be inserted in a stereo track—a mono track switches X-Trem to Tremolo Mode (although a mono file inserted in a track set to stereo will work). If you switch a stereo track to mono accidentally and then switch it back to stereo, you’ll need to click on the Reset button in the FX Chain to return X-Trem to Pan Mode.
In Pan mode, while the left X-Trem channel gets louder, the other becomes softer and vice-versa. The Splitter (in Channel Split mode) sends the left split to a Pro EQ set to High Cut, while the right split goes to a Pro EQ set to Low Cut; their frequencies track to set the split point between the high and low bands.
Finally the two outputs go to a Dual Pan, which provides several functions.
Crossover links the Pro EQ HC and LC Freq controls so you can adjust the split frequency between the high and low bands. At either the full clockwise or counter-clockwise position, the Harmonic Tremolo acts like a conventional tremolo.
Reset connects to the X-Trem Mode so you can reset it to Pan if needed.
LFO Speed controls the X-Trem speed from minimum to about 15 Hz. This control is inactive if the LFO Sync switch is on.
LFO Beats chooses the X-Trem sync rhythm and requires that the LFO Sync switch be on.
LFO Depth controls the X-Trem depth.
LFO Type chooses among the four standard waveforms (triangle, sine, sawtooth, square).
Lo/Hi Balance ties to the Dual Pan’s Input Bal knob. Fully counter-clockwise gives only the low band, clockwise gives only the high band, and the settings in between set the balance between the two bands.
Width controls the Dual Pan’s Width control. For the most authentic sound, leave this centered for mono operation (the Dual Pan should have Link enabled, and Pan set to <C>).
After assigning the controls, congratulations! You have your own Harmonic Tremolo… no soldering or guitar amps required!
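To see the idea outside of an FX Chain, the signal flow above can be sketched in a few lines of NumPy. The one-pole crossover and the 800 Hz split frequency are simplifying assumptions for illustration, not what the Pro EQ pair actually does:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                       # one second of audio
x = np.sin(2 * np.pi * 110 * t) + 0.3 * np.sin(2 * np.pi * 2200 * t)

# Crude crossover: one-pole low-pass, high band = input minus low band.
fc = 800.0                                   # hypothetical split frequency
a = np.exp(-2 * np.pi * fc / fs)
low = np.zeros_like(x)
for n in range(1, len(x)):
    low[n] = (1 - a) * x[n] + a * low[n - 1]
high = x - low

# Out-of-phase amplitude modulation: lows fade up as highs fade down.
rate = 5.0                                   # tremolo speed in Hz
lfo = 0.5 * (1 + np.sin(2 * np.pi * rate * t))
out = low * lfo + high * (1 - lfo)
```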
Flanging that Actually Sounds Like Vintage Tape Flanging
Personal bias alert: I like digital flangers, but most can’t do what true, analog-based, tape flanging could do. Back in the day, the sessions for my band’s second album were booked following Jimi Hendrix’s sessions for Electric Ladyland. His flanging setup remained after the session, so we took advantage of it and used it on our album… and the sound of true, tape-based flanging was burned into my brain. This tip is about obtaining that elusive sound.
The tape flanging process used two tape recorders, one with a fixed delay and one with variable speed. As you sped up and slowed down one recorder, it could lag or lead the other recorder, and the time difference produced the flanging effect. If the audio path for one of them was out of phase, as one tape recorder pulled ahead of the other one (or fell behind after pulling ahead), the audio passed through the “through-zero” point where the audio canceled. This left a brief moment of silence when the flange hit its peak.
To nail “that sound,” first you need two delays. One has to be able to go forward in time, but since that’s not possible without violating the laws of physics (which can lead to a hefty fine and up to five years in jail), a second delay provides a fixed delay so the other can get ahead of it. Second, don’t use LFO control—if you don’t control the flanging effect manually, it sounds bogus.
In this implementation, a Splitter in normal mode feeds two Analog Delays. One of them goes through the Mixtool to flip the phase for the through-zero effect. Start with the Analog Delay settings shown in the screen shot; they’re identical for both delays, except for the Factor control on the delay that feeds the Mixtool.
To hear the tape flanging effect, move the Factor control from full counter-clockwise to clockwise. At the center point, you’ll hear the through-zero effect as the signals cancel. (Actually you can move either Factor control as long as the other one is pointing straight up.)
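The through-zero cancellation is easy to demonstrate in code. Here's a minimal NumPy sketch with one path phase-inverted, the way the Mixtool flips it; the 200-sample delay values are arbitrary, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(44100)   # one second of test noise

def delayed(sig, n):
    """Delay a signal by n samples, zero-padding the start."""
    out = np.zeros_like(sig)
    if n == 0:
        out[:] = sig
    else:
        out[n:] = sig[:-n]
    return out

fixed = 200   # the fixed delay (about 4.5 ms at 44.1 kHz)
sweep = 200   # the swept delay, shown here right at the match point

# Phase-inverted moving path: total cancellation when the delays match.
out = delayed(x, fixed) - delayed(x, sweep)
print(np.max(np.abs(out)))   # 0.0 at the through-zero point
```

Sweep the `sweep` value above and below 200 and you get comb filtering on either side of the silent moment, which is exactly what the two tape recorders did.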
Variations on a Theme
It’s also fun to make an FX Chain that allows for more variations. The left-most knob, Delay Speed, controls the Factor knob. Delay Time chooses how low the flanger goes; it’s scaled to a range from 1 ms to 13 ms, and I find 4–9 ms about right (copy this curve for the second Analog Delay, because you want their times to track). Delay Inertia uses the control on the same Analog Delay as the Factor knob being controlled; this adds a bit of “tape transport inertia vibe” when you move the Factor knob.
The Mix knob controls the mix on one of the delays from 0% to 100%. (Note that if the Mix controls on both delays are at 0%, the audio should cancel; if it doesn’t, adjust the Mixtool Gain knob until it does.) 100% gives the most dramatic flanging effect, but back in the day, cancellations weren’t “digitally perfect” so setting Mix for one delay to 60-75% gives a smoother through-zero sound. Saturation controls the Saturation parameter on both delays when you want a little more grit, and a Low Cut control for both Analog Delays reduces some of the muddiness that can occur with long Delay Time settings. The Feedback control also ties to both Analog Delays. You’ll usually want to leave this in the stereo position (full clockwise). Finally, -/+ Flange controls the Invert Left and Invert Right buttons on the Mixtool module. Enable them for through-zero (“negative”) flanging, disable for positive flanging.
So does it really sound like tape flanging? Listen for yourself. I took an excerpt from a song on my YouTube channel, applied flanging to it, and posted it as an audio example on craiganderton.com (click on the Demos tab).
Bonus fun: Stick Binaural Pan after the two splits mix back together, and set Width to 200%. If Feedback is set to stereo, this produces a variation on the flanging effect.
Vocals are the most direct form of communication with your audience—so of course, you want your vocal to be a kind of tractor beam that draws people in. Many engineers give a more intimate feel to vocals by using dynamics control, like limiting or compression. While that has its uses, the downside is often tell-tale artifacts that sound unnatural.
The following technique of phrase-by-phrase gain edits can provide much of the intimacy and presence associated with compression—but with a natural, artifact-free sound. Furthermore, if you do want to compress the vocal further, you won’t need to use very much dynamics control because the phrase-by-phrase gain edits will have done the majority of the work the compressor would have needed to do.
The top track shows the original vocal. In the second track, I used the split tool to isolate sections of the vocal with varying levels (snap to grid needs to be off for this). The next step was clicking on the volume box in the center of the envelope, and dragging up to increase the level on the sections with lower levels. Although you can make a rough adjustment visually, it’s crucial to listen to the edited event in context with what comes before and after to make sure there aren’t any continuity issues—sometimes soft parts are supposed to be soft.
The third track shows the finished vocal after bouncing all the bits back together. Compared to the top track, it’s clear that the vocal levels are much more consistent.
There are a few more tricks involved in using this technique. For example, suppose there’s a fairly loud inhale before a word. A compressor would bring up the inhale, but by splitting and changing gain, you can split just after the inhale and bring up the word or phrase without bringing up the inhale. Also, I found that it was often possible to raise the level on one side of a split but not on the other, and not hear a click from the level change. Whether this was because of being careful to split on zero crossings, dumb luck, or Studio One having some special automatic crossfading mojo, I don’t know…but it just works and if it doesn’t, you can always add crossfades.
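If you do want to place splits on zero crossings deliberately, the search is simple. This is a hypothetical helper for illustration, not anything Studio One does internally:

```python
import numpy as np

def nearest_zero_crossing(audio, index):
    """Move a proposed split point to the closest sign change,
    so a gain jump there is less likely to click. Hypothetical
    helper, not Studio One's actual behavior."""
    crossings = np.where(np.diff(np.sign(audio)) != 0)[0]
    if len(crossings) == 0:
        return index
    return int(crossings[np.argmin(np.abs(crossings - index))])

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 100 * t)   # zero crossings every ~220.5 samples
split = nearest_zero_crossing(tone, 1000)
print(abs(tone[split]))              # essentially zero: safe to split here
```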
That’s all there is to it. If you want to hear this technique in action, here’s a link to a song on my YouTube channel that uses this vocal normalization technique.
One of the main differences between guitar and keyboard is chord voicing. Guitar chords typically have six widely separated notes, whereas keyboard notes tend to cluster around two areas accessible by each hand. For example, check out the notes that make up an E major chord on guitar.
If you’re a keyboard player using chords to define a chord progression, it’s easy enough to have chords hit on, for example, the beginning of a measure. But “strumming” the chord can add interest and a more guitar-like quality. Although you can edit the notes in a chord so that successively higher notes of the chord have increasing delay compared to the start of the measure, that’s pretty time-consuming. Fortunately, there’s an easy way to do guitar voicings—and strum them.
Stepping Out. The core of this technique is step recording, which is easy to do in Studio One once you’ve inserted a virtual instrument. Steps are keyed to numbers on the screen shot. This assumes the strummed chord will start on the beat.
The moral of the story is that chord notes don’t always need to hit right on the beat—try some strumming, and add variety to your music.
The Pro EQ isn’t the only equalizer in Studio One: there’s also a very flexible graphic equalizer, but it’s traveling incognito. Although the Pro EQ can create typical graphic equalizer responses, there are still situations where a good graphic equalizer can be the quickest and easiest way to dial in the sound you want—and the one in Studio One has some attributes you won’t find in standard graphic EQs. Once you start realizing the benefits of this technique, you just may wish you had discovered it sooner.
The secret is the Multiband Dynamics processor. A multiband dynamics processor is basically a graphic EQ with individual dynamics control for each band, but we can ignore the dynamics control aspect and use just the equalization.
The reason setting up this EQ can be so fast is that you can solo and mute individual bands, and move each band’s upper and lower limits freely to focus precisely on the part of the spectrum you want to affect. Of course you can enable/disable individual bands in the Pro EQ, but you’ll still hear the unprocessed sound at all times. With the Multiband Dynamics serving as a graphic EQ, you can focus on a specific band of frequencies in a way that’s not possible with standard parametric-based EQs.
The first step is to defeat the compression, so set the Ratio for all bands to 1.0. Attack, Release, Knee, and Thresholds don’t matter because there’s no compression.
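Why does a Ratio of 1.0 defeat the compression? The standard feed-forward compressor gain law makes it obvious; this is the textbook formula, which may differ in detail from the Multiband Dynamics' exact curve:

```python
# Above a compressor's threshold, the output level follows
#   out_db = threshold + (in_db - threshold) / ratio
# so with ratio = 1.0 the transfer curve is a straight wire, and
# only the per-band Gain controls affect the sound.
def gain_db(in_db, threshold=-20.0, ratio=1.0):
    if in_db <= threshold:
        return in_db
    return threshold + (in_db - threshold) / ratio

print(gain_db(-6.0, ratio=1.0))   # -6.0: level passes unchanged
print(gain_db(-6.0, ratio=4.0))   # -16.5: the compression we're defeating
```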
Now you can adjust the frequency ranges and level for individual bands, and this is where being able to mute and solo bands is incredibly helpful. For example, suppose you want to zero in on the part of a vocal that adds intelligibility. With a parametric EQ you would need to go back and forth between the frequency, bandwidth, and gain to find the “sweet spot.” With the Multiband Dynamics processor, just solo the HM band and move the range dividers until you focus on the vocal frequencies with the most articulation, then boost that band’s gain. Simple.
Even better, the Multiband Dynamics processor has a Mix control so you can blend the processed and unprocessed sound to make the overall effect of the EQ more or less drastic. And speaking of drastic, the Gain control does ±36 dB so you have more control over level than most parametric EQs.
Being able to define individual bands, solo them, and adjust their gain and frequency ranges precisely can be a very useful technique that supplements what you can do with the Pro EQ. For general tone-shaping, try the Multiband Dynamics processor—you might be surprised at how fast you can dial in just the right sound.
Convolving white noise with audio produces reverb but frankly, the results aren’t all that inspiring compared to the impulses obtained from “sampling” real rooms. However, there are ways to make white noise impulses that provide a unique, “idealized” sound compared to standard impulses.
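As a starting point, here's one way to roll a basic decaying-noise impulse yourself, sketched in Python with NumPy and the standard-library wave module. The two-second length and the exponential decay shape are arbitrary choices to experiment with:

```python
import wave
import numpy as np

fs = 44100
seconds = 2.0                     # the impulse length sets the reverb time
rng = np.random.default_rng(1)

noise = rng.standard_normal(int(fs * seconds))
# Exponential decay envelope; exp(-5) at the tail is roughly -43 dB.
decay = np.exp(-5 * np.arange(len(noise)) / len(noise))
impulse = noise * decay
impulse /= np.max(np.abs(impulse))

# Save a 16-bit mono WAV that a convolution reverb can load.
with wave.open("noise_impulse.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(fs)
    f.writeframes((impulse * 32767).astype(np.int16).tobytes())
```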
Now bring the WAV file you just saved into Open Air, and check out the clarity and smoothness of the sustain—it has an “idealized” quality, sort of like how CGI is an idealized version of an image. Listen to the audio example processing some percussive sounds from Impulse, and you’ll hear what I mean.
Here are a few other hints:
The bottom line is this is an incredibly flexible way to come up with reverb sounds…and you can end up with different reverb sounds than any other reverb processor on your hard drive. Have fun!
We’ll use a fairly basic example of sidechaining to create this tightness. While most people understand the principles behind sidechaining, I haven’t heard very many people actually use this particular application. But with electric bass, using a drum sidechain signal to gate the bass adds a percussive overlay to the bass’s melodic character that fits perfectly with drums.
For the bass sound, in this example I’m using my bass expansion pack for Cakewalk’s Rapture Pro (I’ll be porting the samples over to Presence XT soon). The drum loop track has a send that drives a Gate inserted in the bass track, with the Gate’s sidechain set to External so it’s triggered by the drum’s audio.
Although different situations call for different Gate settings, I find the key to getting good results with electric bass is the Gate’s Release control. Because bass has a natural decay, a little release time prevents the bass from sounding too percussive—the attacks are all properly in place, but the bass note trails off gracefully, even though the drum transient may be long gone.
However with more electro-oriented material, using a sharp decay with an electric bass provides an unusual type of effect—you have the organic, natural sound of the electric bass modulated by the clipped, percussive decays caused by gating with the drums. As always, experimentation can yield interesting—and sometimes delightfully unexpected—results. Try it!
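The effect of the Release control is easy to model. Here's a minimal NumPy sketch of a gate with an exponential release; the 150 ms release time and 50 ms drum transient are illustrative assumptions, not Gate presets:

```python
import numpy as np

fs = 44100
release_s = 0.15                  # the key control: release time in seconds

t = np.arange(fs) / fs
bass = np.sin(2 * np.pi * 55 * t)         # a sustained low A
sidechain_open = t < 0.05                 # drum hit holds the gate open 50 ms

# When the sidechain closes, fade out over the release time
# instead of cutting the bass off instantly.
decay_per_sample = np.exp(-1.0 / (release_s * fs))
gain = np.ones(fs)
env = 1.0
for n in range(fs):
    env = 1.0 if sidechain_open[n] else env * decay_per_sample
    gain[n] = env
gated = bass * gain
```

With `release_s` near zero you get the clipped, percussive decays described above; lengthening it lets the note trail off gracefully after the drum transient is gone.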
I admit it…I’m very picky about kick drums. But I also like using drum loops, so I often want to replace the kick. Fortunately, it’s not hard to do with Studio One, and you don’t need a dedicated drum replacement application to do it.
In this example, the audio track with the drum loop (Acoustic Verse 2 in the screen shot, toward the left) has two pre-fader sends. One goes to the Main Drums bus, which carries the drum loop audio. The reason for having a separate bus with the drum audio (and for turning down the original drum audio track) is because we want to reduce the level of the loop’s kick as much as possible. So the loop’s audio has two Pro EQs in series—both set to 48 dB Low Cut at around 100 Hz—to create a super-steep slope and get rid of most of the kick.
The other send goes to the Impact Kick Trigger bus, which exists only to hold a Gate (that’s why the bus fader is all the way down—we don’t want to hear the audio). To isolate the kick for triggering, turn down the Gate’s HC control so that the Gate responds only to the lowest frequencies where the kick drum lives. Whenever the Gate opens, it can send out a MIDI note with your choice of note and a fixed velocity (you’ll have to add any dynamics yourself), and an instrument track can respond to that note. I set up an instance of Impact with a suitable kick drum, and assigned the Gate trigger to it. So, Impact provides the replacement kick drum sound and the Main Drums bus has the original drum loop without the kick. You can listen to the kick in real time as you trigger it, but you can also record the MIDI trigger in the instrument track.
The only caution is that the Gate’s Threshold and Attack/Release/Hold settings are critical for reliable triggering. For example, if there are 16th-note kicks, you have to make sure that Hold is short enough to allow retriggering, and you want Threshold high enough that the Gate is triggered only by the kick.
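A quick simulation shows how Threshold and Hold interact. This sketch uses a fabricated kick envelope with quarter-note kicks at 120 BPM; all the numbers are illustrative, not Gate defaults:

```python
import numpy as np

fs = 44100
threshold = 0.5        # high enough that only the kick trips it
hold_s = 0.05          # must be shorter than the fastest kick spacing

# Fake low-passed drum envelope: kicks every half second for two seconds.
t = np.arange(2 * fs) / fs
env = np.zeros_like(t)
for beat in np.arange(0, 2, 0.5):
    idx = int(beat * fs)
    env[idx:idx + 2000] = np.linspace(1.0, 0.0, 2000)

triggers = []
hold_until = -1
for n, v in enumerate(env):
    if v > threshold and n > hold_until:
        triggers.append(n)                 # here you'd emit the MIDI note
        hold_until = n + int(hold_s * fs)
print(len(triggers))                       # 4 kicks, 4 triggers
```

If `hold_s` were longer than the gap between kicks, later hits would fall inside the hold window and never retrigger; if `threshold` were too low, the envelope's tail would cause double triggers.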
There are many variations on this theme…you may want to double an existing kick with the kick replacement, rather than reduce the original kick’s level as much as possible, or use the Pro EQ Low Cut filters to take out only the very lowest frequencies, so the original kick provides the higher-frequency beater sounds…whatever sounds best.
Of course if the drums are on individual tracks, then it’s easy to replace the drum sounds. But even with a mixed drum loop, it’s often possible to isolate at least the kick and snare to give drum loops a whole new character.
In the previous Friday Tip of the Week, we covered how recording soft synths and amp sims at higher sample rates (like 96 kHz) can give higher sound quality in some situations. However, we also discussed some issues involved with recording at higher sample rates that aren’t so wonderful.
So this week, it’s time for a solution. Offline upsampling to higher sample rates can let you retain the CPU efficiencies of running at a lower sample rate, while reaping the sonic benefits of recording at higher sample rates… and you can do this in Studio One by upsampling in a separate project, rendering the file, and then importing the rendered file back into your original project.
But wait—wouldn’t you lose the benefits of upsampling when you later convert the sample rate back down to 44.1 kHz? The answer is no: rendering at the higher sample rate eliminates any foldover distortion in the audio range, sample-rate converters include an anti-alias filter so that downsampling doesn’t introduce new aliasing, and 44.1 kHz has no problem playing back sounds in the audio range.
However, note that upsampling can’t fix audio that already has aliasing distortion; upsampling audio to 96 kHz that already contains foldover distortion will simply reproduce the existing distortion. This technique applies only to audio created in the computer. Similarly, it’s unlikely that upsampling something recorded via a computer’s audio interface will yield any benefits, because the audio interface itself will have already band-limited the signal’s frequency range so there will be no harmonics that interfere with the clock frequency.
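The foldover arithmetic itself is simple. This is the textbook calculation for a single partial (real renderers and converters are more involved):

```python
fs = 44100

def alias_of(freq, fs=fs):
    """Frequency where a partial lands after naive sampling at fs."""
    f = freq % fs
    return fs - f if f > fs / 2 else f

# A synth harmonic at 30 kHz, rendered naively at 44.1 kHz, folds
# back into the audible range:
print(alias_of(30000))          # 14100 Hz
# At 96 kHz the same partial stays above the audible band, where a
# downsampler's anti-alias filter can simply remove it:
print(alias_of(30000, 96000))   # 30000 Hz
```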
UPSAMPLING IN STUDIO ONE
We’ll assume a 44.1 kHz project sampling rate, and that the virtual instrument’s MIDI track has been finalized but you haven’t transformed it to audio yet. Here’s how to upsample virtual instruments.
That’s all there is to it. If you want to upsample an amp sim, the process is similar: export the (presumably guitar) track, save the amp sim preset, render at 96 kHz, then import the rendered file into the 44.1 kHz project.
Listen to the audio example “Upsampling with Amp Sim,” which plays the sound of an amp sim at 44.1 kHz and then after upsampling to 96 kHz. The difference isn’t as dramatic as last week’s synth example, but you’ll still hear that the upsampled version is clearer, with more presence.
Do bear in mind you may not want the difference caused by upsampling. When I did an upsampling demo at a seminar with a particular synthesizer, most people preferred the sound with the aliasing because the upsampled sound was brighter than what they expected. However when I did upsampling with an amp sim, and with a different synth, the consensus was that the upsampled version sounded much better. Regardless, the point is now you have a choice—hear the instrument the way it’s supposed to be heard to decide if you like that better, or leave it as is. After all, distortion isn’t necessarily that horrible—think of how many guitar players wouldn’t have a career without it!
Although upsampling isn’t a panacea, don’t dismiss it either. Even with synths that don’t oversample, upsampling may make no audible difference. However, sometimes synths that do oversample still benefit from upsampling; with some sounds, it can take 4x or even 8x oversampling to reproduce the sound accurately. As always, use your ears to decide which sound works best in your music.