The How to Make Spotify Happy blog post picked up a lot of interest, so let’s take the concept a bit further. Even if you’re not dealing with a streaming service, having consistent listening levels in your music makes sense—especially with a collection of songs. But what happens if a song is compressed for artistic reasons, yet you still want to aim for a standard listening level (streaming or otherwise)?
The beauty of the LUFS specification is that it avoids penalizing those who want to take advantage of dynamic range in their music (jazz, classical, etc.). But not everyone creates music that requires maximum dynamic range. I add about 4 to 6 dB of gain reduction when mastering my music, and aim for -13.0 LUFS, because I like what a little compression does to glue the tracks together. However, I do want the music to be streaming-friendly—and the whole point of this post is that Limiter2 makes it easy to hit both LUFS and True Peak settings recommended by various streaming services.
Fig. 1 shows the screen shot for an exported, pre-mastered song. It has a -18.0 LUFS reading. My goal is -13.0 LUFS, with a true peak value below 0.0.
To get closer to -13.0 LUFS, let’s start with a Limiter2 Threshold of -5.00, because applying 5.00 dB of gain reduction to a -18.0 LUFS file should put us somewhere around -13.0 LUFS. To control True Peak, we’ll use Mode A, Fast Attack (Fig. 2).
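The arithmetic behind that starting point is simple enough to sketch as a one-liner. This is only a first guess (the function name is mine, not a Studio One feature); limiting skews the measured result, so re-measure and tweak afterward.

```python
def starting_threshold_db(measured_lufs, target_lufs):
    """First-guess limiter threshold: raising gain by N dB raises
    integrated loudness by roughly N dB, so aim the threshold at the
    difference between where you are and where you want to be."""
    return measured_lufs - target_lufs  # negative = dB below 0

# -18.0 LUFS export, -13.0 LUFS goal -> start around a -5.00 Threshold
print(starting_threshold_db(-18.0, -13.0))  # -5.0
```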
We’ve come close to -13.0, but the true peak is well above 0.0. Bringing down the Ceiling by 2 dB puts that 1.6 dB True Peak reading under 0.0 (Fig. 3).
We’ve brought down the peaks, but because the output is lower, the perceived level is lower too (-13.5 LUFS). Dropping the Threshold another 0.7 dB brought the LUFS back to -13.0, while maintaining a True Peak value under 0.0 (Fig. 4).
We can also make streaming services (like Spotify) happy, with -14.0 LUFS and -1.0 TP values (Fig. 5).
But suppose you really like to compress stuff, not because you want to win the loudness wars per se, but just because you like that sound. Fair enough—let’s give listeners music with -9.0 LUFS, and not worry about True Peak (Fig. 6).
But What About the Loudness Wars?
I’m glad you asked. If your music doesn’t meet a streaming service’s specs, they’ll turn it down to an equal perceived level. But what does that sound like?
In the audio example, the first part is of an unmastered song at -9.0 LUFS. It’s fairly loud, and could win at least a skirmish in the loudness wars. The second part is the same unmastered song at -14.0 LUFS, which sounds much quieter.
The third part turns down the -9.0 LUFS section to -14.0 LUFS. Although it has the same overall perceived level as the second part, it sounds compressed, so it has a different character. Bottom line: If you like the sound of compression, a streaming service will not change that sound; it will simply turn down the volume to match material that’s not as compressed. So feel free to use a rational amount of compression—the sound you want will still be there, just at a lower level.
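What the streaming service does can be modeled in one line, under the simplifying assumption that it applies a single static playback gain per track (real services have more nuance, but the arithmetic is the point):

```python
def normalization_gain_db(track_lufs, target_lufs=-14.0):
    """Static playback gain a loudness-normalizing service applies so
    every track lands at the same perceived level (simplified model).
    The compressed *character* of the audio is untouched—only level changes."""
    return target_lufs - track_lufs

print(normalization_gain_db(-9.0))   # -5.0: the loud master gets turned down
print(normalization_gain_db(-14.0))  #  0.0: already at target, untouched
```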
And if you want a higher level…well, that’s why the volume control was invented… right?
It’s not surprising that a lot of Studio One users also have Ableton Live, because the two programs complement each other. I’ve always felt Studio One is a pro recording studio (with a helluva backline) disguised as software, while Ableton is a live performance instrument disguised as software.
Fortunately, if you like working simultaneously with Live’s loops and scenes and Studio One’s rich feature set, Studio One can host Live as a ReWire client. Even better, ATOM SQ can provide full native integration with Ableton Live when it’s ReWired as a client—once you know how to set up the MIDI ins and outs for both programs.
Now ATOM SQ will act as an integrated controller with Ableton Live while it’s ReWired into Studio One. Cool, eh?
To return control to Studio One, reverse the process—in Live, set Control Surface to None, and toggle the MIDI Ports that relate to ATOM SQ from On to Off. Then, in Studio One’s Options > External Devices, reconnect ATOM SQ’s Receive From and Send To ports.
Note that with ATOM SQ controlling Studio One, the Transport function still controls both Live and Studio One. Also, if Live has the focus, any QWERTY keyboard assignments for triggering Clips and Scenes remain valid. So even while using ATOM SQ in the native mode for Studio One, you can still trigger different Clips and Scenes in Live. If you switch the focus back to Studio One, then any QWERTY keyboard shortcuts will trigger their assigned Studio One shortcuts.
Note: When switching back and forth between Live and Studio One, and enabling/disabling Studio One and Ableton Live modes for ATOM SQ, to return to Live you may need to “refresh” Live’s Preferences settings. Choose None for the Control Surface and then re-select ATOM SQ. Next, turn the various MIDI Port options off and on again.
Vocoders are processors that use the audio from vocals (called the modulation source, or modulator) to modulate another sound, like a synthesizer pad (called the carrier). However, no law says you have to use vocals as a modulator, and I often use drums to modulate pads, power chords, and more. While Studio One’s toolset doesn’t have enough resolution for super-intelligible vocoding with voice, it’s perfect for drumcoding, which actually benefits from the lower resolution.
This tip is for advanced users and requires a fairly complex setup. Rather than go into too much detail about how it works, simply download the Drumcoder.song file, linked below, which has a complete drumcoding setup. Load Drumcoder.song into Studio One 5, press play, and you’ll hear what drumcoding is all about. (Note that the file format isn’t compatible with previous Studio One versions. However, based on the description in this tip, you should be able to “roll your own” drumcoding setup in previous Studio One versions.)
Let’s check out an audio demo. The first half has the drumcoded sound only, while the second half mixes in the drum (modulator) sound.
But wait—there’s more! Although the drumcoder isn’t designed to be the greatest vocoder in the world (and it isn’t), you can still get some decent results. Here, the voice is saying “Even do some kinds of vocal effects with the PreSonus drumcoder—have fun!”
Next, we’ll explore how it works…or if you’re impatient, just reverse-engineer the song.
Vocoding splits the modulator (like voice or drums) into multiple frequency bands. In a traditional vocoder, each band produces a control voltage that corresponds to the audio’s level in each band. Similarly, the carrier splits into the same frequency bands. A VCA follows each carrier band, and the VCAs are fed by the modulator’s control voltages. So, if there’s midrange energy in the modulator, it opens the VCA for the carrier’s midrange audio. If there’s bass energy in the modulator, it opens the VCA for the carrier’s bass audio. With a vocoder, as different energy occurs in different bands that cover a vocal’s frequency range, the carrier mimics that same distribution of energy in its own bands. This is what generates talking instrument effects.
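The band-follows-band principle can be sketched as a toy model. This assumes NumPy and SciPy, uses the same 200/400/800/1600 Hz split points described later in this tip, and stands in synthetic signals for the drums and pad—it’s an illustration of the concept, not Studio One’s actual signal path.

```python
# Toy vocoder: each modulator band's envelope (the "control voltage")
# scales the matching carrier band (the "VCA").
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100
EDGES = [200, 400, 800, 1600]  # Hz, matching the Splitter settings below

def split_bands(x):
    sos = [butter(4, EDGES[0], 'low', fs=SR, output='sos')]
    sos += [butter(4, [lo, hi], 'band', fs=SR, output='sos')
            for lo, hi in zip(EDGES[:-1], EDGES[1:])]
    sos += [butter(4, EDGES[-1], 'high', fs=SR, output='sos')]
    return [sosfilt(s, x) for s in sos]

def envelope(x, win=512):
    # rectify + moving average: a crude per-band envelope follower
    return np.convolve(np.abs(x), np.ones(win) / win, mode='same')

def vocode(modulator, carrier):
    return sum(c * envelope(m)
               for m, c in zip(split_bands(modulator), split_bands(carrier)))

# Decaying 250 Hz burst stands in for a drum hit; noise stands in for a pad
t = np.arange(SR // 2) / SR
drum = np.sin(2 * np.pi * 250 * t) * np.exp(-t * 20)
pad = np.random.default_rng(0).standard_normal(t.size)
out = vocode(drum, pad)  # the output decays along with the drum's envelope
```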
Vocoders typically need at least eight frequency bands to make voices sound intelligible. Studio One’s Splitter can divide incoming audio into five bands, which is enough resolution for drumcoding. Fig. 1 (which takes some graphic liberties with Studio One’s UI) shows the signal flow.
Figure 1: Drumcoder signal flow.
The Drums track provides the modulator signal, and the Mai Tai synthesizer provides the carrier. The Drums track has five pre-fader sends to distribute the drum sound to five buses. As shown in Fig. 2, each of the five buses has a Splitter (but no other effects) set to Frequency Split mode, with splits at 200, 400, 800, and 1600 Hz. The < 200 Hz bus mutes all splits except 1, the 200–400 Hz bus mutes all splits except 2, the 400–800 Hz bus mutes all splits except 3, the 800 Hz–1.6 kHz bus mutes all splits except 4, and the > 1.6 kHz bus mutes all splits except 5. Now each bus output covers one of the five bands.
Figure 2: Splitter settings for the five buses.
The Mai Tai carrier has a splitter set to the same frequencies. Each split goes to an Expander, which basically acts like a VCA; see Fig. 3. We don’t need to break out the Splitter outputs, because you can access the sidechain for the Expanders located within the Splitter. (A Mixtool follows each Expander, but it’s there solely to provide a volume control for each of the carrier’s bands in the control panel.)
Figure 3: Effects used for the Mai Tai synthesizer carrier track.
As to the bus outputs, the < 200 Hz bus has a send that goes to the sidechain of the Expander in the carrier’s < 200 Hz split. The 200-400 Hz bus has a send to the sidechain of the Expander in the carrier’s 200-400 Hz split. The 400-800 Hz bus has a send to the sidechain of the Expander in the carrier’s 400-800 Hz split…you get the idea. Basically, each bus provides a “control voltage” for the corresponding “VCA” (Expander) that controls the level of the carrier’s five bands.
Fig. 4 shows the Control panel.
Figure 4: Drumcoder macro controls.
Threshold, Ratio, and Range cover the full range of Expander controls. They affect how tightly the Expander follows the modulator, which controls the effect’s percussive nature. Just play around with them until you get the sound you want. The Expander Envelope settings aren’t particularly crucial, but I find 0.10 ms Attack and 128.0 ms Release work well. Of course, you also need to enable the sidechain for each Expander, and make sure it’s listening to the bus that corresponds to the correct band.
The five knobs toward the right control the level of the individual bands by altering the Gain of the Mixtool that follows each band’s Expander. The five associated buttons enable or bypass the Expander for a particular band, which can give some really cool effects. For example, turn off the Expander on the Mid band, and with the Song’s Mai Tai preset, it almost sounds like a choir is singing along with the drumcoded drums.
Although the Drumcoder isn’t really designed for vocal effects, it still can be fun. The key is to bring up the > 1.6 kHz Bus slider, as this mixes in some of the voice’s “s” sounds, which give intelligibility. Experiment with the Expander controls to find what works well. If you really want to dig into vocal applications, edit the Splitter frequencies to optimize them for the vocal range instead of drums…or leave a comment asking me to pursue this further.
Due to the complexity, if I think I’m going to use the Drumcoder, I’ll just treat this song like a template and build the rest of the song from there. But once you understand the principle of operation, you can always add the effect into an existing song as needed. I have to say this is one of my favorite Friday Tips ever… I hope you enjoy playing with the Drumcoder!
Although Studio One 5 doesn’t have a tape emulator plug-in per se, it can emulate some of the most important characteristics that people associate with “the tape sound.” Truly emulating tape can go down a serious rabbit hole because tape is a complicated signal processor; no two vintage tape recorders sounded the same because they required alignment (influenced by the engineer’s preferences), used different tape formulations, and were in various states of maintenance. However, emulating three important characteristics provides what most people want from tape emulation.
Check out the audio example to hear what this FX Chain can do. The first part is unprocessed, while the second part uses the default FX Chain control settings with a little underbiasing and head bump. The difference is subtle, but it adds that extra “something.”
This FX Chain starts with a Splitter, which creates three signal paths: one for saturation, one for hiss, and one for hum (Fig. 1).
Figure 1: FX Chain block diagram.
After auditioning all available Studio One 5 saturation options, I liked the TriComp best for this application. The Pro EQ stage preceding the TriComp provides the head bump EQ and has a control to emulate the effect of underbiasing tape (more highs, which pushes more high-frequency level into the TriComp and therefore increases distortion in that range) or overbiasing (less highs, less distortion).
At first, I wasn’t going to include tape hiss and hum, but if someone needs to use this FX Chain for sound design (e.g., an actor starts a tape in a theatrical production), then including hiss and hum sounds more authentic. An additional knob chooses 50 or 60 Hz hum, which represents the power standards in different countries. (Note that the closest you can get to these frequencies is 50.4 and 59.1 Hz, but that’s close enough.) However, I draw the line at including wow and flutter! Good riddance to both of them.
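For the curious, a hum-plus-hiss bed like this can be sketched in a few lines. The levels, the function name, and the white-noise model for hiss are all my assumptions—the actual FX Chain shapes its noise sources further.

```python
import numpy as np

SR = 44100

def tape_noise_bed(seconds, hum_hz=60.0, hum_db=-50.0, hiss_db=-66.0, seed=0):
    """Assumed model: a sine at the mains frequency for hum, plus white
    noise for hiss, each set by an approximate dBFS level."""
    n = int(SR * seconds)
    t = np.arange(n) / SR
    hum = 10 ** (hum_db / 20) * np.sin(2 * np.pi * hum_hz * t)
    hiss = 10 ** (hiss_db / 20) * np.random.default_rng(seed).standard_normal(n)
    return hum + hiss

bed = tape_noise_bed(1.0, hum_hz=50.0)  # 50 Hz for countries on that standard
```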
Because creating three splits reduces each split’s level, the TriComp Gain control provides makeup gain.
Turning Bump on adds a boost at the specified frequency, but also adds a 48 dB/octave low-cut filter around 23 Hz to emulate the loss of very low frequencies due to the head bump. As a result, depending on the program material, adding the bump may increase or decrease the total apparent bass response. For additional flexibility, if you turn Bump Amount down all the way, the Bump On/Off switch enables or disables only the 48 dB/octave low-cut filter.
Fig. 2 shows some typical spectra from using the FX Chain.
Figure 2: The top curve shows the head bump enabled, with underbiasing. The lower curve shows minimal added bump, but with the ultra-low cut filter enabled, and overbiasing.
The controls default to rational settings (Fig. 3), which are used in the audio example. But as usual with my FX chains, the settings can go beyond the realm of good taste if needed.
Figure 3: Control panel for the Tape Emulator.
For example, I rarely go over 2-3% saturation, but I know some of you are itching to kick it up to 10%. Ditto tape hiss, in case you want to emulate recording on an ancient Radio Shack cassette recorder—with Radio Shack tape. Just remember that the Bias control is clockwise to overbias (less highs), and counter-clockwise to underbias (more highs).
There’s a lot of mythology around tape emulations, and you can find some very good plug-ins that nail the sound of tape. But try this FX Chain—it may give you exactly what you want. Best of all, I promise you’ll never have to clean or demagnetize its tape heads.
With physical audio media in its twilight, streaming has become the primary way to distribute music. A wonderful side effect has been the end of the loudness wars, because streaming services like Spotify turn levels up or down as needed to attain a specific, consistent perceived level—squashing a master won’t make it sound any louder.
However, the “garbage in, garbage out” law remains in effect, so you need to submit music that meets a streaming service’s specs. For example, Spotify prefers files with an LUFS of -14.0 (according to the EBU R128 standard), and a True Peak reading of -1.0 or lower. This avoids adding distortion when transcoding to lossy formats. If the LUFS reading is above -14.0, then Spotify wants a True Peak value under -2.0.
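Those two rules boil down to a quick sanity check, sketched here from the specs as described above (the function name is mine, and this covers only the LUFS/True Peak relationship, not the full submission requirements):

```python
def meets_spotify_specs(lufs, true_peak_dbtp):
    """Spotify's stated preference: -14.0 LUFS with True Peak at or below
    -1.0 dBTP; masters louder than -14.0 LUFS need True Peak <= -2.0 dBTP."""
    tp_limit = -2.0 if lufs > -14.0 else -1.0
    return true_peak_dbtp <= tp_limit

print(meets_spotify_specs(-14.0, -1.0))  # True
print(meets_spotify_specs(-13.0, -1.5))  # False: louder masters need -2.0 dBTP
print(meets_spotify_specs(-13.0, -2.3))  # True
```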
Fortunately, when you Detect Loudness for a track on the mastering page, you’ll see a readout of the LUFS and LRA (a measure of overall dynamic range), as well as the True Peak, RMS (average signal level), and DC offset for the left and right channels. Fig. 1 shows an example of the specs generated by detecting loudness.
Note that this hits Spotify’s desired LUFS, but the left channel’s True Peak value is higher than what’s ideal. This readout also shows that the average RMS levels for each channel are somewhat different—the left channel is 1.2 dB louder than the right one, which also accounts for the higher True Peak value. This may be the way the artist wants the mix to sound, but it could also indicate a potential problem with the mix, where the overall sound isn’t properly centered.
A simple fix is to insert a Dual Pan into the Inserts section. Use the Input Balance control to “weight” the stereo image more to one side for a better balance. After doing so and readjusting the LUFS, we can now give Spotify exactly what it wants (Fig. 2). Also note that the left and right channels are perfectly balanced.
A Crucial Consideration!
You don’t want to mix or master based on numbers, but on what you hear. If you set up Dual Pan to balance the channels, make sure that you enable/bypass the plug-in and compare the two options. You might find that balancing the left and right channels not only accommodates Spotify’s requirements, but improves the mix’s overall balance. If it doesn’t, then leave the balance alone, and lower the track’s overall output level so that True Peak is under -1.0 for both channels (or under -2.0 for LUFS values above -14.0). This will likely lower the LUFS reading, but don’t worry about it: Spotify will turn up the track anyway to reach -14.0 LUFS.
Coda: I always thought that squashing dynamic range to try and win the loudness wars made listening to music a less pleasant experience, and that’s one of the reasons CD sales kept declining. Does the end of the loudness wars correspond to the current music industry rebound from streaming? I don’t know… but it wouldn’t surprise me.
My mastering specialty is salvage jobs, which has become easier to do with Studio One. But this gig was something else.
Martha Davis’s last solo album (I Have My Standards, whose mastering challenges were covered in this blog post) has done really well. Since the pandemic has sidelined her from touring as Martha Davis and the Motels or going into the studio, she’s releasing a new song every month online. These involve excellent, but unreleased, material.
That’s the good news. The bad news is that her latest song choice, “In the Meantime,” had the drum machine kick mixed so loud the song should have been credited as “Solo Kick Drum with Vocal Accompaniment.” With a vocalist like Martha (listen to any of her many hits from the 80s), that’s a crime. She was hoping I could fix it.
Don’t tune out, EDM/hip-hop fans. What about those TR-808 “toms” that are always mixed way too high? When I was given a Boy George song to remix, those toms were like sonic kryptonite before I figured out how to deal with them. And let’s not get into those clichéd 808 claps, okay? But we have a solution.
I tried everything to deal with the kick, including EQ, iZotope RX7 spectral reduction, mid-side processing using the Mixtool, and more. The mix was mostly mono, and the kick was full-frequency—from low-frequency boom to a nasty click that was louder than the lead vocal. Multiband dynamics didn’t work because the kick covered too wide a frequency range.
In desperation, I thought maybe I could find an isolated kick sound, throw it out of phase, and cancel the kick wherever it appeared in the song. Very fortunately, the song intro had a kick sound that could be isolated as an individual sample. So instead of going directly to Studio One’s mastering page, I went into the Song page, imported the stereo mix into one track, created a second track for only the kick, and dragged the copied kick to match up with every kick instance in the song (yes, this did take some time…). It wasn’t difficult to line up the copied kicks with sample- (or at least near-sample) accuracy (Fig. 1).
Figure 1: The top track is from the original song, while the lower track is an isolated kick. After lining the sounds up with respect to timing, flipping the kick track phase removed the kick sound from the mixed tracks.
The payoff was inserting Mixtool in the kicks-only track and flipping its phase 180 degrees. It canceled the kick! Wow—this physics stuff actually works.
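The physics is easy to demonstrate with a few lines of NumPy, using synthetic signals as stand-ins for the mix and the isolated kick (alignment here is trivially perfect; in the real session it took careful sample-level editing):

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR
kick = np.sin(2 * np.pi * 60 * t) * np.exp(-t * 8)   # stand-in kick hit
music = 0.3 * np.sin(2 * np.pi * 440 * t)            # everything else
mix = music + kick

# A sample-accurate, polarity-flipped copy (what Mixtool's 180-degree
# phase flip provides) subtracts the kick right out of the mix
residue = mix + (-kick)
print(np.max(np.abs(residue - music)))  # ~0: only the "music" remains
```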
But now there was no kick. So, I added the Waves LinEQ Broadband linear-phase equalizer in the kick drum track (a minimum-phase EQ wouldn’t work in this context, because its phase shifts would spoil the cancellation). This filtered out some of the kick drum’s lower frequencies so there was less cancellation, while leaving the highs intact so they would still cancel as much as possible. Adjusting the shelving frequency and attenuation let in just enough of the original kick, without overwhelming the track. Even better, because the kick level was lower, I could bring up the low end to resurrect the bass part that had been overshadowed by the kick.
The mix traveled to the mastering page for a little more processing (Studio One’s Pro EQ and Binaural Pan, IK Multimedia’s Stealth maximizer, and Studio One’s metering). After hitting the desired readings of -13.0 LUFS with -0.2 True Peak readings, the mastering was done. Sure, I would much rather have had the individual tracks to do a remix, but it was what it was—a 28-year-old two-track mix.
To hear how this ended up, the audio example first plays an excerpt from the mastered version. Then there’s a brief pause, followed by the same section with the original file. I’m sure you’ll hear the difference in the kick drum.
Listen to an audio example from In the Meantime here:
Although it’s always better to fix issues at the source, here’s a tip to help repair recorded vocals during the mixing phase. The technique (which is featured in the new book How to Record and Mix Great Vocals in Studio One – 2nd Edition) combines multiband dynamics processing with equalization to both de-ess and reduce plosives. Although the screen shot shows the Multiband Dynamics processor in Studio One 5, this technique will work with previous Studio One versions if you duplicate the settings.
In the screen shot, the Multiband Dynamics’ Low band settings are outlined in red, and the High band settings are outlined in blue. (Note this is not the actual interface; the high band panel is pasted into the image from a different screen shot so you can see both the Low and High band settings simultaneously.)
The High band acts as a de-esser, because it applies compression to only the high frequencies. This helps tame sibilance. The Low band compresses only the low frequencies, which reduces pops. However, this preset also takes advantage of the way Multiband Dynamics combines equalization with dynamics control. Turning down the Low stage Gain all the way further reduces the low frequencies, where pops like to hang out and cause trouble.
For the High band, vary compression to taste. The compression settings are less critical for the Low band if you turn the Gain down all the way, but in either case, you’ll need to tweak the settings for your particular vocal track.
And that’s all there is to it. When a loud pop or sibilant sound hits the Multiband Dynamics, it’s compressed to be less annoying, while leaving the rest of the vocal intact. Vocal repaired!
Like being able to change what happens when one Event overlaps (covers over) a different Event.
Prior to Version 5, overlapped Events were always treated the same way. The overlapping Event became translucent, so you could see the waveform or note data of the Event underneath it. This is ideal for making audio crossfades, which is one of the main reasons for overlapping audio Events. To create a crossfade, type X, and optionally, click and drag up/down at the crossfade junction to shape the crossfade curve. Then you can shift+click on the overlapped Event, type Ctrl+B, and combine them into a single Event. With note data, overlapping Events is helpful when combining, for example, the main snare hits on one track with alternate snare hits on a different track.
Another option after overlapping Events is mixing them together. Shift+click on the overlapped Event to include it with the overlapping Event on top. Then type Ctrl+B to mix audio, or G to merGe note data.
However, if you don’t crossfade or mix, then the region below the overlapping Event is still there. The overlapping Event is grayed, which can get confusing if you have a lot of muted sections; and if you remove the overlapping Event because you want to replace it with something else, it’s not obvious where the overlap occurred.
Some programs default to deleting, not just covering over, a section that’s being overlapped by another clip. This is useful when you’re doing lots of edits, because you’re not left with vestigial pieces of regions that still exist, but don’t do anything. To accommodate this type of workflow, Studio One 5 now offers a “no overlap” mode for Events. There are three ways to access this (Fig. 1).
Figure 1: In addition to using a keyboard shortcut, Studio One can default to “No overlap when editing events,” as chosen in the Arranger view or under Options.
Selecting “No overlap when editing events” deletes the overlapped part of an Event, and the replaced section looks like it’s part of the track (i.e., not grayed). However, if you later decide you didn’t really want to delete the overlapped region, then just remove the section that overlapped it. Now you can slip-edit the edge of the underlying Event back to where it was.
(Note that if you enabled Play Overlaps in a track’s Inspector, or chose “Enable Play Overdubs for New Audio Tracks” in Options/Advanced/Audio, so that you could overdub over an existing track and hear both the original track and the overdub on playback, enabling “No overlap when editing events” overrides this setting.)
Granted, this may seem like a small change, but it accommodates more workflow possibilities—especially if you learn the keyboard shortcut, and choose the right option at the right time.
This tip is excerpted from the updated/revised 2nd Edition of How to Record and Mix Great Vocals in Studio One. The new edition includes the latest Studio One 5 features, as well as some free files and Open Air impulses, but also has 35% more content than the first edition—it’s grown from 121 to 194 pages. And as a “thank you” to those who bought the original version, you’re eligible for a 50% discount on the 2nd edition. There’s also a bundle with the book and my complete set of 128 custom impulses for Open Air…but so much for how I spent my summer vacation, LOL. Let’s get to the tip.
Suppose you’ve laid down your raw vocal—great! Now it’s time to overdub some instrumental parts and background vocals. Unfortunately, though, that raw vocal is kind of…uninspiring. So you start browsing effects, tweaking them, trying different settings—and before you know it, you’re going down a processing rabbit hole in the middle of your session.
Next time, open up the Vocal QuickStrip. Insert this vocal processing’s “greatest hits” FX Chain in your vocal track, tweak a few settings, admire how wonderful the vocal sounds, and then carry on with your project.
There’s a download link for the Vocal QuickStrip.multipreset file, so you don’t need to assemble the chain yourself. It works with Studio One 4 as well as 5 (note that the Widen button for the Doubler is functional only in Studio One 5).
The Fat Channel (Fig. 1) is the heart of the chain. Of its three available compressors, the Tube Comp model emulates the iconic LA-2A compressor—the go-to vocal compressor for many engineers.
Figure 1: Fat Channel settings for the Vocal QuickStrip FX Chain.
The Fat Channel also includes a built-in high-pass filter. You can place the EQ either before or after the compressor; here, the EQ is before the compressor because boosting certain frequencies “pushes” the compressor harder. This contributes to the Vocal QuickStrip’s character.
The EQ uses all four stages. The most interesting aspect is how the Low Frequency and Low-Mid Frequency stages interact subtly when you edit the Bottom control. The Low-Frequency stage is fixed at 110 Hz with 1 dB of gain, but its Q tracks the Low-Mid Frequency stage’s Gain control. So, when you pull the LMF Gain down, the LF stage’s bandwidth gets broader (lower Q); increase the Gain, and the Q rises somewhat.
The High-Mid Frequency stage sits at 3 kHz, because boosting in this frequency range can improve intelligibility. The High-Frequency section adds “air” around 10 kHz. However, as you increase the Top control, the frequency goes just a bit lower so that the boost covers a wider section of the high-frequency range. This makes the effect more pronounced.
The Chorus is the next processor in the chain, but it’s used for doubling, not chorusing (Fig. 2).
Figure 2: The Chorus provides a voice-doubling ADT effect.
The parameters are preset to a useful doubling effect, and there are only two control switches—one to enable/bypass the effect, the other to increase the stereo spread.
For echo/delay effects, the Analog Delay comes next (Fig. 3). Although many of the parameters are well-suited to being macro controls, there had to be a few tradeoffs to leave enough space for the crucial controls from other effects.
Figure 3: The Analog Delay is set up for basic echo functionality.
For example, the Delay Time Macro control selects only beats, rather than letting you choose between beats and sweeping through a continuous range. Feel free to change the Macro control assignment. Also, the LFO isn’t used, so if you want to modify the ping-pong effects, you’ll need to open the interface and do so manually. In any event, the Delay Beats, Feedback, and Mix parameters cover what you need for most vocal echo effects.
The final link in the chain is the Open AIR reverb (Fig. 4). Normally I use my own impulse responses (see the Friday Tip Synthesize Open AIR Reverb Impulses in Studio One for info on how to create your own impulse responses), but of the factory impulses, for vocals I’m a big fan of the Gold Plate impulse. (If you have my Surreal Reverb Impulse Responses pack that’s available from the PreSonus shop, I’d recommend using the 1.2 Fast Damped, 1.5 Fast Damped, or 2.25 Fast Damped vocal reverbs. However, note that these three files are also included for free with the second edition of the Vocals book.)
Figure 4: The Open AIR reverb plug-in’s Gold Plate impulse response is one of my favorite factory impulses for vocals.
When designing an FX Chain with so many available parameters, you need to choose which parameters (or combinations of parameters) are most important for Macro controls (Fig. 5).
Figure 5: The Vocal QuickStrip Macro controls.
Compress varies both the Peak Reduction and Gain to maintain a fairly constant output—an old trick (see the EZ Squeez One-Knob Compressor tip), but a useful one. Bottom, Push, and Top control three EQ stages. All of these, and the Compressor, have bypass switches so it’s easy to compare the dry and processed settings.
Delay also has a bypass switch, as well as controls for delay time in beats, delay feedback, and dry/wet delay mix. The only switches for the chorus-based doubling function are bypass and narrower/wider. Reverb includes a bypass button and a dry/wet mix control, because that’s really all you need when you have a gorgeous convolution reverb in the chain.
So go ahead and download the Vocal QuickStrip, use it, and have fun. But remember that an FX Chain like this lends itself to modifications—for example, insert a Binaural Pan after the Open AIR reverb, or optimize some EQ frequencies to work better with your mic or voice. Try the other two compressors in the Fat Channel (or if you’re a PreSonus Sphere member, then try the other eight compressors—they all have different characters). With a little experimentation, you can transform an FX Chain that works for me (and will hopefully work well for you) to an FX Chain that’s perfect for you. Go for it!
Full disclosure: I’m not a big fan of chorusing. In general, I think it’s best relegated to wherever snares with gated reverbs, orchestral hits, DX7 bass presets, Fairlight pan pipes, and other 80s artifacts go to reminisce about the good old days.
But sometimes it’s great to be wrong, and multiband chorusing has changed my mind. This FX Chain (which works in Studio One Version 4 as well as Version 5) takes advantage of the Splitter, three Chorus plug-ins, Binaural panning, and a bit of limiting to produce a chorus effect that covers the range from subtle and shimmering, to rich and creamy.
There’s a downloadable .multipreset file, so feel free to download it, click on this window’s close button, bring the FX Chain into Studio One, and start playing. (Just remember to set the channel mode for guitar tracks to stereo, even with a mono guitar track.) However, it’s best to read the following on what the controls do, so you can take full advantage of the Multiband Chorus’s talents.
The Splitter creates three splits based on frequency, which in this case, are optimized for guitar with humbucking pickups. These frequencies work fine with other instruments, but tweak as needed. The first band covers up to 700 Hz, the second from 700 Hz to 1.36 kHz, and the third band, from 1.36 kHz on up (Fig. 1).
Figure 1. FX Chain block diagram and Macro Controls panel for the Multiband Chorus.
Each split goes to a Chorus. The mixed output from the three splits goes to a Binaural Pan to enhance the stereo imaging, and a Limiter to make the signal “pop” a little more.
Regarding the control panel, the Delay, Depth, LFO Width, and 1/2 Voices controls affect all three Choruses. Each Chorus also has its own on/off switch (C1, C2, and C3), Chorus/Double button (turning on the button enables the Double mode), and LFO Speed control. You’ll also find on/off buttons for the Binaural Pan and Limiter, as well as a Width control for the Binaural Pan. Fig. 2 shows the initial Chorus settings when you call up the FX Chain.
Figure 2. Initial FX Chain Chorus settings.
Because chorusing occurs in different frequency bands, the effect is more even and lusher than conventional chorusing. Furthermore, setting asynchronous LFO Speeds for the three bands can give a more randomized effect (at least until there’s an option for smoothed, randomized waveform shapes in Studio One).
A major multiband advantage comes into play when you set one of the bands to Doubler mode instead of Chorus. You may need to readjust the Delay and Width controls, but using Doubler mode in the mid- or high-frequency band, and chorusing for the other bands, gives a unique sound you won’t find anywhere else. Give it a try, and you’ll hear why it’s worth resurrecting the chorus effect—but with a multiband twist.