Flanging that Actually Sounds Like Vintage Tape Flanging
Personal bias alert: I like digital flangers, but most can’t do what true, analog-based, tape flanging could do. Back in the day, the sessions for my band’s second album were booked following Jimi Hendrix’s sessions for Electric Ladyland. His flanging setup remained after the session, so we took advantage of it and used it on our album… and the sound of true, tape-based flanging was burned into my brain. This tip is about obtaining that elusive sound.
The tape flanging process used two tape recorders, one with a fixed delay and one with variable speed. As you sped up and slowed down one recorder, it could lag or lead the other recorder, and the time difference produced the flanging effect. If the audio path for one of them was out of phase, then as one tape recorder pulled ahead of the other (or fell behind after pulling ahead), the audio passed through the "through-zero" point, where the two signals canceled. This left a brief moment of silence when the flange hit its peak.
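Conceptually, the effect can be sketched in a few lines of code: mix one fixed-delay copy of the signal with a second, inverted copy whose delay varies, and the output nulls completely when the two delays coincide. This is a minimal illustration in plain Python (not how any particular plug-in implements it), with integer sample delays standing in for the two tape machines:

```python
import math

def flange(signal, delay_a, delay_b, invert_b=True):
    """Mix a fixed-delay copy of 'signal' with a second, optionally
    phase-inverted copy at a different delay. When the delays match and
    one path is inverted, the copies cancel completely: the 'through-zero'
    point of tape flanging."""
    sign = -1.0 if invert_b else 1.0
    out = []
    for n in range(len(signal)):
        a = signal[n - delay_a] if n >= delay_a else 0.0
        b = signal[n - delay_b] if n >= delay_b else 0.0
        out.append(a + sign * b)
    return out

# 440 Hz test tone at 44.1 kHz
tone = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(2048)]
null_point = flange(tone, delay_a=100, delay_b=100)  # delays meet: silence
offset = flange(tone, delay_a=100, delay_b=130)      # comb-filtered flange
```

Sweeping `delay_b` past `delay_a` in real time is the "pulling ahead and falling behind" the tape machines did mechanically.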
To nail “that sound,” first you need two delays. One has to be able to go forward in time, but since that’s not possible without violating the laws of physics (which can lead to a hefty fine and up to five years in jail), a second delay provides a fixed delay so the other can get ahead of it. Second, don’t use LFO control—if you don’t control the flanging effect manually, it sounds bogus.
In this implementation, a Splitter in normal mode feeds two Analog Delays. One of them goes through the Mixtool to flip the phase for the through-zero effect. Start with the Analog Delay settings shown in the screen shot; they’re identical for both delays, except for the Factor control on the delay that feeds the Mixtool.
To hear the tape flanging effect, move the Factor control from full counter-clockwise to full clockwise. At the center point, you’ll hear the through-zero effect as the signals cancel. (Actually, you can move either Factor control, as long as the other one is pointing straight up.)
Variations on a Theme
It’s also fun to make an FX chain to allow for more variations. The left-most knob controls the Factor knob, whose parameter is called Delay Speed. Delay Time chooses how low the flanger goes. It’s scaled to a range from 1 ms to 13 ms; I find 4 – 9 ms about right (copy this curve for the second Analog Delay, because you want their times to track). Delay Inertia uses the control on the same Analog Delay as the Factor knob being controlled. This adds a bit of “tape transport inertia vibe” when you move the Factor knob.
The Mix knob controls the mix on one of the delays, from 0% to 100%. (Note that if the Mix controls on both delays are at 0%, the audio should cancel; if it doesn’t, adjust the Mixtool Gain knob until it does.) 100% gives the most dramatic flanging effect, but back in the day cancellations weren’t “digitally perfect,” so setting Mix for one delay to 60–75% gives a smoother through-zero sound. Saturation controls the Saturation parameter on both delays for when you want a little more grit, while the Low Cut control for both Analog Delays reduces some of the muddiness that can occur with longer Delay Time settings. The Feedback control also ties to both Analog Delays; you’ll usually want to leave this in the stereo position (full clockwise). Finally, -/+ Flange controls the Invert Left and Invert Right buttons on the Mixtool module. Enable them for through-zero (“negative”) flanging, or disable them for positive flanging.
So does it really sound like tape flanging? Listen for yourself. I took an excerpt from a song on my YouTube channel, applied flanging to it, and posted it as an audio example on craiganderton.com (click on the Demos tab).
Bonus fun: Stick Binaural Pan after the two splits mix back together, and set Width to 200%. If Feedback is set to stereo, this produces a variation on the flanging effect.
Vocals are the most direct form of communication with your audience—so of course, you want your vocal to be a kind of tractor beam that draws people in. Many engineers give a more intimate feel to vocals by using dynamics control, like limiting or compression. While that has its uses, the downside is often tell-tale artifacts that sound unnatural.
The following technique of phrase-by-phrase gain edits can provide much of the intimacy and presence associated with compression—but with a natural, artifact-free sound. And if you do want to compress the vocal afterward, you won’t need much dynamics control, because the phrase-by-phrase gain edits will have already done most of the work the compressor would otherwise have to do.
The top track shows the original vocal. In the second track, I used the split tool to isolate sections of the vocal with varying levels (snap to grid needs to be off for this). The next step was clicking on the volume box in the center of the envelope, and dragging up to increase the level on the sections with lower levels. Although you can make a rough adjustment visually, it’s crucial to listen to the edited event in context with what comes before and after to make sure there aren’t any continuity issues—sometimes soft parts are supposed to be soft.
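The workflow amounts to segment-by-segment gain riding toward a target level. Here's a hedged sketch in Python; the function names and the RMS-based target are my own illustration, not a Studio One feature, and in practice your ears (not an RMS number) make the final call about which phrases come up and by how much:

```python
def rms(segment):
    """Root-mean-square level of a list of samples."""
    return (sum(x * x for x in segment) / len(segment)) ** 0.5

def normalize_phrases(audio, split_points, target_rms, max_boost=4.0):
    """Split 'audio' at phrase boundaries and raise quieter segments
    toward a target RMS. Gain is boost-only (phrases that should stay
    soft are simply left out of split_points), and capped so a
    near-silent phrase isn't blasted upward."""
    bounds = [0] + sorted(split_points) + [len(audio)]
    out = []
    for start, end in zip(bounds, bounds[1:]):
        seg = audio[start:end]
        level = rms(seg)
        gain = min(target_rms / level, max_boost) if level > 0 else 1.0
        gain = max(gain, 1.0)  # never attenuate, only lift quiet phrases
        out.extend(x * gain for x in seg)
    return out

loud = [0.5, -0.5] * 100    # phrase at RMS 0.5
quiet = [0.1, -0.1] * 100   # phrase at RMS 0.1
leveled = normalize_phrases(loud + quiet, split_points=[200], target_rms=0.5)
```

The loud phrase passes through untouched; the quiet one is lifted toward the target (capped at the 4x boost limit here).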
The third track shows the finished vocal after bouncing all the bits back together. Compared to the top track, it’s clear that the vocal levels are much more consistent.
There are a few more tricks involved in using this technique. For example, suppose there’s a fairly loud inhale before a word. A compressor would bring up the inhale, but by splitting and changing gain, you can split just after the inhale and bring up the word or phrase without bringing up the inhale. Also, I found that it was often possible to raise the level on one side of a split but not on the other, and not hear a click from the level change. Whether this was because of being careful to split on zero crossings, dumb luck, or Studio One having some special automatic crossfading mojo, I don’t know…but it just works and if it doesn’t, you can always add crossfades.
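If you want to see why zero-crossing splits avoid clicks, here's an illustrative helper (hypothetical, not a Studio One function) that nudges a split point to the nearest sign change, where a gain jump introduces almost no amplitude discontinuity:

```python
import math

def nearest_zero_crossing(audio, index, search=200):
    """Return the sample index nearest 'index' where the waveform changes
    sign. A split (and gain change) placed here introduces almost no
    amplitude discontinuity, so there's no click."""
    best = None
    for i in range(max(1, index - search), min(len(audio), index + search)):
        if audio[i - 1] <= 0.0 <= audio[i] or audio[i] <= 0.0 <= audio[i - 1]:
            if best is None or abs(i - index) < abs(best - index):
                best = i
    return best

# 100 Hz tone at 44.1 kHz; snap a desired split at sample 1000 to a crossing
tone = [math.sin(2 * math.pi * 100 * n / 44100) for n in range(4410)]
split = nearest_zero_crossing(tone, 1000)
```

The sample at the snapped index is nearly zero, so changing gain on either side of it barely moves the waveform.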
That’s all there is to it. If you want to hear this technique in action, here’s a link to a song on my YouTube channel that uses this vocal normalization technique.
One of the main differences between guitar and keyboard is chord voicing. Guitar chords typically have six widely separated notes, whereas keyboard notes tend to cluster around two areas accessible by each hand. For example, check out the notes that make up an E major chord on guitar.
If you’re a keyboard player using chords to define a chord progression, it’s easy enough to have chords hit on, for example, the beginning of a measure. But “strumming” the chord can add interest and a more guitar-like quality. Although you can edit the notes in a chord so that successively higher notes of the chord have increasing delay compared to the start of the measure, that’s pretty time-consuming. Fortunately, there’s an easy way to do guitar voicings—and strum them.
Stepping Out. The core of this technique is step recording, which is easy to do in Studio One once you’ve inserted a virtual instrument. Steps are keyed to numbers on the screen shot. This assumes the strummed chord will start on the beat.
The moral of the story is that chord notes don’t always need to hit right on the beat—try some strumming, and add variety to your music.
The Pro EQ isn’t the only equalizer in Studio One: there’s also a very flexible graphic equalizer, but it’s traveling incognito. Although the Pro EQ can create typical graphic equalizer responses, there are still situations where a good graphic equalizer can be the quickest and easiest way to dial in the sound you want—and the one in Studio One has some attributes you won’t find in standard graphic EQs. Once you start realizing the benefits of this technique, you just may wish you had discovered it sooner.
The secret is the Multiband Dynamics processor. A multiband dynamics processor is basically a graphic EQ with individual dynamics control for each band, but we can ignore the dynamics control aspect and use just the equalization.
The reason setting up this EQ can be so fast is that you can solo and mute individual bands, and move each band’s upper and lower limits around freely to focus precisely on the part of the spectrum you want to affect. Of course you can enable/disable individual bands in the Pro EQ, but you’ll still hear the unprocessed sound at all times. Serving as a graphic EQ, the Multiband Dynamics lets you focus on a specific band of frequencies in a way that standard parametric-based EQs can’t.
The first step is to defeat the compression, so set the Ratio for all bands to 1.0. Attack, Release, Knee, and Thresholds don’t matter because there’s no compression.
Now you can adjust the frequency ranges and level for individual bands, and this is where being able to mute and solo bands is incredibly helpful. For example, suppose you want to zero in on the part of a vocal that adds intelligibility. With a parametric EQ you would need to go back and forth between the frequency, bandwidth, and gain to find the “sweet spot.” With the Multiband Dynamics processor, just solo the HM band and move the range dividers until you focus on the vocal frequencies with the most articulation, then boost that band’s gain. Simple.
Even better, the Multiband Dynamics processor has a Mix control so you can blend the processed and unprocessed sound to make the overall effect of the EQ more or less drastic. And speaking of drastic, the Gain control does ±36 dB so you have more control over level than most parametric EQs.
Being able to define individual bands, solo them, and adjust their gain and frequency ranges precisely can be a very useful technique that supplements what you can do with the Pro EQ. For general tone-shaping, try the Multiband Dynamics processor—you might be surprised at how fast you can dial in just the right sound.
Convolving white noise with audio produces reverb but frankly, the results aren’t all that inspiring compared to the impulses obtained from “sampling” real rooms. However, there are ways to make white noise impulses that provide a unique, “idealized” sound compared to standard impulses.
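As a starting point, here's one way to roll your own impulse, sketched with Python's standard library: exponentially decaying white noise written to a 16-bit mono WAV that you can then load into a convolution reverb. The parameter values are just reasonable defaults to experiment with, not a recipe from the original article:

```python
import math
import random
import struct
import wave

def write_noise_impulse(path, seconds=2.0, rate=44100, decay=6.0):
    """Write an exponentially decaying white-noise burst as a 16-bit
    mono WAV, for use as a convolution reverb impulse. Larger 'decay'
    values give a shorter reverb tail."""
    frames = int(seconds * rate)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(rate)
        data = bytearray()
        for n in range(frames):
            env = math.exp(-decay * n / frames)   # decay envelope
            s = random.uniform(-1.0, 1.0) * env   # enveloped white noise
            data += struct.pack("<h", int(s * 32767))
        w.writeframes(bytes(data))

write_noise_impulse("noise_impulse.wav")
```

Shaping the envelope differently (linear fades, gated cutoffs, reversed decays) is where the unique "idealized" sounds come from.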
Now bring the WAV file you just saved into Open Air, and check out the clarity and smoothness of the sustain—it has an “idealized” quality, sort of like how CGI is an idealized version of an image. Listen to the audio example processing some percussive sounds from Impulse, and you’ll hear what I mean.
Here are a few other hints:
The bottom line is this is an incredibly flexible way to come up with reverb sounds…and you can end up with different reverb sounds than any other reverb processor on your hard drive. Have fun!
We’ll use a fairly basic example of sidechaining to create this tightness. While most people understand the principles behind sidechaining, I haven’t heard very many people actually use this particular application. But with electric bass, using a drum sidechain signal to gate the bass adds a percussive overlay to the bass’s melodic character that fits perfectly with drums.
For the bass sound, in this example I’m using my bass expansion pack for Cakewalk’s Rapture Pro (I’ll be porting the samples over to Presence XT soon). The drum loop track has a send that drives a Gate inserted in the bass track, with the Gate’s sidechain set to External so it’s triggered by the drum’s audio.
Although different situations call for different Gate settings, I find the key to getting good results with electric bass is the Gate’s Release control. Because bass has a natural decay, a little release time prevents the bass from sounding too percussive—the attacks are all properly in place, but the bass note trails off gracefully, even though the drum transient may be long gone.
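The behavior described above can be sketched as a toy gate in Python. The per-sample release multiplier stands in for the Gate's Release control (0.999 per sample is roughly a 23 ms time constant at 44.1 kHz), and a real gate adds attack and hold stages as well:

```python
def gate_with_release(audio, trigger, threshold=0.2, release=0.999):
    """Gate 'audio' (the bass) from a sidechain 'trigger' (the drums).
    While the trigger exceeds the threshold the gate is fully open;
    afterward the gain decays by 'release' per sample, so the bass note
    trails off gracefully instead of being chopped to silence the
    instant the drum transient ends."""
    gain, out = 0.0, []
    for a, t in zip(audio, trigger):
        gain = 1.0 if abs(t) >= threshold else gain * release
        out.append(a * gain)
    return out

drums = [0.5] * 10 + [0.0] * 990   # a short drum transient, then silence
bass = [1.0] * 1000                # sustained bass note
gated = gate_with_release(bass, drums)
```

With `release` closer to 1.0 the tail lengthens; set it near 0 and you get the clipped, percussive decays described below for electro material.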
However with more electro-oriented material, using a sharp decay with an electric bass provides an unusual type of effect—you have the organic, natural sound of the electric bass modulated by the clipped, percussive decays caused by gating with the drums. As always, experimentation can yield interesting—and sometimes delightfully unexpected—results. Try it!
I admit it…I’m very picky about kick drums. But I also like using drum loops, so I often want to replace the kick. Fortunately, it’s not hard to do with Studio One, and you don’t need a dedicated drum replacement application to do it.
In this example, the audio track with the drum loop (Acoustic Verse 2 in the screen shot, toward the left) has two pre-fader sends. One goes to the Main Drums bus, which carries the drum loop audio. The reason for having a separate bus with the drum audio (and for turning down the original drum audio track) is because we want to reduce the level of the loop’s kick as much as possible. So the loop’s audio has two Pro EQs in series—both set to 48 dB Low Cut at around 100 Hz—to create a super-steep slope and get rid of most of the kick.
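The reason for stacking two Pro EQs is that filter slopes in series add. An idealized back-of-the-envelope calculation (ignoring real-world filter behavior near the cutoff) shows why two 48 dB/octave low cuts behave like a single 96 dB/octave filter:

```python
import math

def cascaded_lowcut_attenuation_db(freq, cutoff=100.0,
                                   slope_db_per_oct=48.0, stages=2):
    """Idealized attenuation below the cutoff for identical low-cut
    (high-pass) filters in series: the slopes add, so two 48 dB/oct
    stages act like one 96 dB/oct filter well below the cutoff."""
    if freq >= cutoff:
        return 0.0
    octaves_below = math.log2(cutoff / freq)
    return stages * slope_db_per_oct * octaves_below

print(cascaded_lowcut_attenuation_db(50.0))  # → 96.0 (one octave below 100 Hz)
```

96 dB of attenuation one octave down is why almost none of the kick's fundamental survives in the Main Drums bus.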
The other send goes to the Impact Kick Trigger bus, which exists only to hold a Gate (that’s why the bus fader is all the way down—we don’t want to hear the audio). To isolate the kick for triggering, turn down the Gate’s HC control so that the Gate responds only to the lowest frequencies, where the kick drum lives. Whenever the Gate opens, it can send out a MIDI note with your choice of note and a fixed velocity (you’ll have to add any dynamics yourself), and an instrument track can respond to that note. I set up an instance of Impact with a suitable kick drum, and assigned the Gate trigger to it. So, Impact provides the replacement kick drum sound and the Main Drums bus has the original drum loop without the kick. You can listen to the kick in real time as you trigger it, but you can also record the MIDI trigger in the instrument track.
The only caution is that the Gate parameter settings for Threshold and Attack/Release/Hold are critical for reliable triggering. For example, if there are 16th-note kicks, you have to make sure that Hold is short enough to allow retriggering, and you want Threshold high enough that the Gate is triggered only by the kick.
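A quick sanity check for the Hold setting: work out the interval between 16th notes at your tempo, and keep Hold (plus Release) comfortably below it. The arithmetic is trivial but worth having on hand:

```python
def sixteenth_note_ms(bpm):
    """Interval between 16th notes: a quarter note lasts 60000/bpm ms,
    and a 16th note is a quarter of that. The Gate's Hold (plus Release)
    must be comfortably shorter than this, or back-to-back kicks won't
    retrigger."""
    return 60000.0 / bpm / 4.0

print(sixteenth_note_ms(120))  # → 125.0
```

So at 120 BPM, 16th-note kicks arrive every 125 ms, and a Hold setting anywhere near that will swallow alternate hits.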
There are many variations on this theme…you may want to double an existing kick with the kick replacement, rather than reduce the original kick’s level as much as possible, or use the Pro EQ Low Cut filters to take out only the very lowest frequencies, so the original kick provides the higher-frequency beater sounds…whatever sounds best.
Of course if the drums are on individual tracks, then it’s easy to replace the drum sounds. But even with a mixed drum loop, it’s often possible to isolate at least the kick and snare to give drum loops a whole new character.
In the previous Friday Tip of the Week, we covered how recording soft synths and amp sims at higher sample rates (like 96 kHz) can give higher sound quality in some situations. However, we also discussed some issues involved with recording at higher sample rates that aren’t so wonderful.
So this week, it’s time for a solution. Offline upsampling to higher sample rates can let you retain the CPU efficiencies of running at a lower sample rate, while reaping the sonic benefits of recording at higher sample rates… and you can do this in Studio One by upsampling in a separate project, rendering the file, and then importing the rendered file back into your original project.
But wait—wouldn’t you lose the benefits of upsampling when you later convert the sample rate back down to 44.1 kHz? The answer is no. Rendering at the higher sample rate eliminates any foldover distortion in the audio range, the sample-rate converter’s anti-alias filter keeps the downconversion itself from introducing aliasing, and 44.1 kHz has no problem playing back sounds in the audio range.
However, note that upsampling can’t fix audio that already has aliasing distortion; upsampling audio to 96 kHz that already contains foldover distortion will simply reproduce the existing distortion. This technique applies only to audio created in the computer. Similarly, it’s unlikely that upsampling something recorded through a computer’s audio interface will yield any benefits, because the interface’s own anti-alias filtering will have already band-limited the signal, leaving no content above the audible range to fold down.
UPSAMPLING IN STUDIO ONE
We’ll assume a 44.1 kHz project sampling rate, and that the virtual instrument’s MIDI track has been finalized but you haven’t transformed it to audio yet. Here’s how to upsample virtual instruments.
That’s all there is to it. If you want to upsample an amp sim, the process is similar: export the (presumably guitar) track, save the amp sim preset, render at 96 kHz, then import the rendered file into the 44.1 kHz project.
Listen to the audio example “Upsampling with Amp Sim,” which plays the sound of an amp sim at 44.1 kHz and then after upsampling to 96 kHz. The difference isn’t as dramatic as last week’s synth example, but you’ll still hear that the upsampled version is clearer, with more presence.
Do bear in mind you may not want the difference caused by upsampling. When I did an upsampling demo at a seminar with a particular synthesizer, most people preferred the sound with the aliasing because the upsampled sound was brighter than what they expected. However when I did upsampling with an amp sim, and with a different synth, the consensus was that the upsampled version sounded much better. Regardless, the point is now you have a choice—hear the instrument the way it’s supposed to be heard to decide if you like that better, or leave it as is. After all, distortion isn’t necessarily that horrible—think of how many guitar players wouldn’t have a career without it!
Although upsampling isn’t a panacea, don’t dismiss it either. Even with synths that don’t oversample, upsampling may make no audible difference. However, sometimes synths that do oversample still benefit from upsampling; with some sounds, it can take 4x or even 8x oversampling to reproduce the sound accurately. As always, use your ears to decide which sound works best in your music.
The controversy about whether people can tell the difference between audio played back at 44.1 kHz and audio played back at a higher sample rate (such as 96 kHz) has never really been resolved. However, under some circumstances, recording at a higher sample rate can give an obvious, audible improvement in sound quality. In this week’s tip we’ll investigate why this happens, and in next week’s tip, we’ll cover how to obtain the benefits of recording at a higher sample rate in Studio One with 44.1 and 48 kHz projects.
REALLY? CONVINCE ME!
A Song’s sample rate can make a difference with sounds generated “in the box,” for instance using a virtual instrument plug-in that synthesizes a sound, or distortion created by an amp simulator. Any improvement heard with high sample rates comes from eliminating foldover distortion, also known as aliasing.
Theory time: A digital system can accurately represent audio at frequencies lower than half the sampling rate (e.g., 22.05 kHz in a 44.1 kHz project). If an algorithm within a plug-in generates harmonic content above this Nyquist limit—say, at 40 kHz—then you won’t hear a 40 kHz tone, but you will hear the aliasing created when this tone “folds down” below the Nyquist limit (to 4.1 kHz, in this case). Aliasing thus appears within the audible range, but is harmonically unrelated to the original signal, and generally sounds pretty ugly.
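The fold-down arithmetic is easy to verify. A small helper (an idealized model that ignores the anti-alias filter's rolloff) reproduces the 40 kHz example above:

```python
def alias_frequency(f, sample_rate=44100.0):
    """Frequency at which a tone is actually heard after sampling.
    Content above the Nyquist limit (half the sample rate) reflects,
    or 'folds,' back down into the audible range."""
    nyquist = sample_rate / 2.0
    f = f % sample_rate            # aliasing repeats every sample rate
    return sample_rate - f if f > nyquist else f

print(alias_frequency(40000.0))  # → 4100.0
```

A 40 kHz harmonic in a 44.1 kHz project lands at 4.1 kHz, squarely in the audible range and harmonically unrelated to the source.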
Foldover distortion can happen with synthesized waveforms that are rich in harmonics, like pulse waves with sharp rise and fall times. (Amp sims can also be problematic; although their harmonics may be weak, if you’re applying 60 dB of gain to create overdrive or distortion, the harmonics can be strong enough to cause audible aliasing).
SO IS IT A PROBLEM, OR NOT?
Not all plug-ins will exhibit these problems, for one of four reasons:
Many modern virtual instruments and amp sims oversample, and DAWs can handle higher sample rates, so you’d think that might be the end of it. Unfortunately, there can be limitations with oversampling and higher project sample rates.
Furthermore, with plug-ins that oversample, the sound quality will be influenced by the quality of the sample-rate conversion algorithms. It’s not necessarily easy to perform high-quality sample-rate conversion: check out comparisons for various DAWs at http://src.infinitewave.ca (where, incidentally, Studio One rates as one of the best), and remember that the conversion algorithms for a plug-in might be more “relaxed” than what’s used in a DAW.
So what’s a musician to do? In next week’s Friday Tip of the Week, we’ll cover how to do upsampling in Studio One to reap the benefits of high-sample-rate recording at lower sample rates. Meanwhile, if you still need to be convinced recording at different sample rates makes a difference, check out this audio example of a synthesizer recorded in Studio One first at 44.1 kHz, and then at 96 kHz:
A Sweeter, Beefier Ampire
Let’s transform Ampire’s Crunch American from a motor scooter into a Harley. Here’s our point of departure:
Insert the Multiband Dynamics before Ampire. The default patch is fine, but drag the High Mid and High gain and ratio settings down all the way. The goal here is to add a bit of compression to give more even distortion in the mids and lower mids, but also to get rid of high frequencies that, when distorted, create harsh harmonics.
After Ampire, insert the Pro EQ. The steep notch around 8 kHz gets rid of the whistling sound you’ll really notice in the before-and-after audio example, while the high-frequency shelf adds brightness to offset the reduced high frequencies going into Ampire. But this time, we’re increasing the “good,” post-distortion high frequencies instead of the nasty pre-distortion ones.
Those two processors alone make a big difference, but let’s face it—people don’t listen to an amp with their ear a couple inches from the speaker, but in a room. So, let’s create a room and give the sound a stereo image with the Open Air convolution reverb. I’ve loaded one of my custom, synthetic IR responses; these are my go-to impulses for pretty much everything I do involving convolution reverb, and may be available in the PreSonus shop someday. Meanwhile, feel free to use your own favorite impulses.
Of course, you can take this concept a lot further with the Channel Editor if you want to tweak specific parameters to optimize the sound for your particular playing style, pickup type, and the like…hmmm, seems like that might be a good topic for a future tip.
That’s it! Now all that’s left is to compare the before and after example below. Hopefully you’ll agree that the “after” is a lot more like a Harley than a motor scooter.