By Craig Anderton
Calling all beats/hip-hop/EDM/hard rock fans: This novel effect starts with drums modulating the Vocoder’s white noise carrier, and takes off from there. The sound can be kind of like a strange, aggressive reverb—or not, because the best part of this tip is the crazy variety of sounds that editing or automating parameters can create.
The following audio example plays just a few of the possibilities. The first two measures are the original loop. Then, several 2-measure examples alter Vocoder parameters.
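If you’re curious what’s happening under the hood, here’s a rough numerical sketch (in Python, not Studio One’s actual DSP) of the idea: the drum loop’s per-band envelopes shape a white-noise carrier. The band count, band edges, and frame size here are arbitrary choices for illustration, and a decaying click train stands in for the drum loop.

```python
import numpy as np

def vocode(modulator, sr=48000, n_bands=8, fmin=100.0, fmax=8000.0, frame=256):
    # White-noise carrier, shaped band-by-band by the modulator's envelopes
    rng = np.random.default_rng(0)
    carrier = rng.standard_normal(len(modulator))
    edges = np.geomspace(fmin, fmax, n_bands + 1)      # log-spaced band edges
    freqs = np.fft.rfftfreq(len(modulator), 1.0 / sr)
    M, C = np.fft.rfft(modulator), np.fft.rfft(carrier)
    out = np.zeros(len(modulator))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        m_band = np.fft.irfft(np.where(mask, M, 0), len(modulator))
        c_band = np.fft.irfft(np.where(mask, C, 0), len(modulator))
        # Envelope follower: framewise RMS of the modulator's band
        env = np.repeat([np.sqrt(np.mean(m_band[i:i + frame] ** 2))
                         for i in range(0, len(m_band), frame)], frame)[:len(m_band)]
        out += c_band * env
    return out

# A click train stands in for the drum loop
sr = 48000
drums = np.zeros(sr // 2)
drums[::6000] = 1.0
vocoded = vocode(drums, sr)
```

Every parameter you change (band count, edges, envelope speed) reshapes the result, which is exactly why editing or automating the real Vocoder’s controls produces such a wide range of sounds.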
Fig. 1 shows the track layout:
Figure 1: Track layout for Tuff Beats processing.
Figure 2: Tone Generator settings.
Editing the Effect
Figure 3: Typical Vocoder settings.
The only crucial setting is that the Carrier Source must be set to Side-Chain (fig. 3). Aside from that, you have plenty of options for subverting the sound:
It doesn’t take much effort to come up with some pretty novel sounds, so…have fun!
Studio One offers several ways to “de-ess” excessive sibilants (“s” sounds). De-essing combines compression and EQ. The EQ focuses on the frequency range where sibilants are most prominent. Then, compression reduces this range’s level when sibilants are present. Prior to version 6, using Multiband Dynamics was the best way to do de-essing.
With version 6, the Pro EQ3’s new dynamic EQ functionality is excellent for reducing ess sounds. However, the equally new De-Esser (fig. 1) is designed specifically for the job of fixing excessive sibilance, quickly and effectively.
To get the most out of the De-Esser, note that some of the parameters work together as a team. So, alternating edits between some controls is often the best way to optimize the effect.
Ess sounds tend to be short. By the time you’ve started to tweak a parameter, the ess sound has likely already ended. So, for easy editing, create a short loop on a word with the prominent ess sound.
Frequency and Listen
1. Enable Listen.
2. Vary Frequency until you find the frequency where the ess sound is most prominent.
Solo and S-Reduction
3. After identifying the frequency, enable Solo. You’ll hear only the ess sound whose volume is being reduced.
4. Adjust S-Reduction to get a feel for the optimum ess reduction amount. At 0.00 dB, there’s no reduction. At ‑60.00 dB, sounds other than the ess sound will likely be reduced as well.
5. Next, turn off Solo, and adjust S-Reduction in context with the looped word. Less negative S-Reduction values concentrate on reducing the ess sound’s initial transient. More negative values reduce more of the ess sound past the initial transient.
6. Do a final Frequency parameter check to optimize the high-frequency response with de-essing. For example, you may be able to raise the frequency to retain more highs, yet still have effective ess reduction.
The metering is helpful in optimizing the De-Esser’s settings. The orange meter shows the amount of reduction. The blue meter shows the input level.
How to Use the Shape Parameter
Ess sounds cover a fairly small range of high frequencies. The Narrow Shape is best for this application because it compresses a narrow band. Frequencies above and below that band remain untouched.
In addition to ess sounds, the De-Esser can also reduce “shhh” sounds (like the shh sound in “action” or “compression”). Shh sounds cover a wider range of frequencies, and often require a lower Frequency setting. For these sounds, the Wide Shape splits the audio into high and low bands, and processes the entire high band.
You may need to do two passes, one with a Wide Shape for shh sounds, and one with a Narrow Shape for ess sounds. Be conservative with the settings for the two passes, because the changes will reinforce each other.
How to Use the Range Parameter
This parameter is the De-Esser’s unsung hero. Setting Range to Full allows the full amount of reduction dialed in by S-Reduction. Gentle Range restricts reduction to ‑6 dB.
The Gentle setting is useful for more than just guaranteeing a subtler effect. With the Gentle Range enabled, you can dial in huge amounts of S-Reduction. This allows processing as much of the ess or shh sound as possible, not just the initial transient. Meanwhile, capping the reduction at -6 dB keeps the effect from becoming objectionable.
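To see how these controls interact, here’s a toy Python sketch of the detect-and-duck idea. This is definitely not PreSonus’s actual algorithm; the detection heuristic, frame size, and band width are made-up assumptions. The point is how the Gentle range caps the duck at -6 dB no matter how far S-Reduction is pushed.

```python
import numpy as np

def de_ess(x, sr, freq=6000.0, s_reduction_db=-20.0, range_mode="gentle",
           frame=256, width=2000.0):
    # Gentle mode caps reduction at -6 dB; Full allows the whole S-Reduction amount
    floor_db = -6.0 if range_mode == "gentle" else s_reduction_db
    bins = np.fft.rfftfreq(frame, 1.0 / sr)
    band = (bins > freq - width / 2) & (bins < freq + width / 2)
    y = x.copy()
    for i in range(0, len(x) - frame + 1, frame):
        spec = np.abs(np.fft.rfft(x[i:i + frame]))
        band_rms = np.sqrt(np.mean(spec[band] ** 2))
        total_rms = np.sqrt(np.mean(spec ** 2)) + 1e-12
        if band_rms / total_rms > 0.5:                # crude "ess detected" test
            y[i:i + frame] *= 10 ** (max(s_reduction_db, floor_db) / 20)
    return y

sr = 48000
t = np.arange(sr) / sr
vowel = np.sin(2 * np.pi * 200 * t)        # low "vowel" tone
ess = 0.8 * np.sin(2 * np.pi * 6000 * t)   # sibilant-range tone
ess[:sr // 2] = 0                          # the "ess" appears in the second half
signal = vowel + ess
out = de_ess(signal, sr)
```

Running it with `range_mode="full"` would apply the full -20 dB of reduction; `"gentle"` limits the duck to -6 dB, mirroring the Range switch.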
The De-Esser is not just for singing, but also podcasts, voiceovers, and narration. It can even reduce harshness with amp sims, as described in De-Esser Meets Amp Sims. And it can probably do other things that are yet to be discovered!
In the 60s, flanging was an electro-mechanical process that involved two turntables or two tape recorders. Since then, flanging has evolved into a digitally driven effect, with a variety of cool bells and whistles. Paradoxically, though, many of today’s digital flangers can’t reproduce the period-correct sound of 60s flanging.
Five years ago, I wrote a blog post about a vintage flanger FX Chain that took advantage of Studio One Pro’s Splitter and Extended FX Chains features. Although this week’s tip takes a little more effort to set up than just loading an FX Chain, and doesn’t have Macro Controls, it gives the same authentic flanging sound for Studio One Artist—check out the audio example.
The 60s flanging sound has three important qualities:
Fig. 1 shows the track setup for the flanging effect.
1. The track you want to flange feeds two FX buses via pre-fader Sends. Turn the track’s channel fader down all the way. The Sends must have the same level (e.g., -6.0 dB).
2. Insert an Analog Delay in each FX Channel. Use the settings in fig. 2 for both of them.
3. Insert a Mixtool after the Analog Delay in the Variable Delay FX Channel. Turn on the Mixtool’s Invert Left and Invert Right buttons.
4. The channel faders settings for the Fixed Delay and Variable Delay channels need to be identical, and track each other. It’s a good idea to Group them.
How It Works
The Fixed Delay channel has a 5 ms delay. This is the “dry” channel. The Variable Delay channel’s Mixtool flips that channel’s phase. Because the Analog Delay in the Variable Delay FX Channel can be set longer or shorter than 5 ms, its delay time can sweep through the dry channel’s 5 ms delay. When the two delay times match, the signals cancel completely: the hallmark of through-zero flanging. By delaying the “dry” path, we’ve effectively allowed the Variable Delay to go into the future…at least as far as the dry audio path is concerned.
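For the numerically inclined, here’s a small Python sketch of the through-zero behavior described above. A 440 Hz test tone stands in for program material; when the variable delay sweeps through the fixed path’s 5 ms, the polarity-inverted paths cancel completely.

```python
import numpy as np

sr = 48000
t = np.arange(sr // 10) / sr
x = np.sin(2 * np.pi * 440 * t)                    # test tone

def delay(sig, ms, sr=48000):
    n = int(round(sr * ms / 1000.0))
    return np.concatenate([np.zeros(n), sig[:len(sig) - n]])

fixed = delay(x, 5.0)                              # the 5 ms "dry" path
rms = {}
for d in (2.0, 5.0, 8.0):                          # variable delay sweeps through 5 ms
    mix = fixed - delay(x, d)                      # minus = the Mixtool's polarity flip
    rms[d] = float(np.sqrt(np.mean(mix ** 2)))
```

At 5 ms the output nulls to silence; above and below 5 ms you get the comb-filtered flanging sound, so sweeping the variable delay recreates the tape-style pass through zero.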
Create the flanging effect with the Analog Delay controls in the Variable Delay channel:
This week, I wanted to give y’all a little gift: 15 “analog cab” IRs that provide alternate User Cabinet sounds for Ampire. Just hit the download link at the bottom, and unzip.
If you’re not familiar with the concept of an analog cab, it’s about using EQ (not the usual digital convolution techniques) to model a miked cab’s response curve. This gives a different sonic character compared to digitally-generated cabs. (For more information, see Create Ampire Cabs with Pro EQ2.) An analogy would be that convolution creates a photograph of a cab, while analog techniques create a painting.
The 15 impulse responses (IRs) in the Ampire Analog Cab IRs folder were made by sending a one-sample impulse through EQ, and rendering the result. This process creates the WAV file you can then load into Ampire’s User Cabinet. The IRs include the following cab configurations: 1×8, 1×10, (4) 1×12, (3) 2×12, 4×10, and (5) 4×12.
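As a rough illustration of the process (using generic RBJ Audio EQ Cookbook peaking filters rather than the Pro EQ2, and entirely made-up frequencies, gains, and Qs), here’s how a one-sample impulse run through EQ becomes a WAV you could load as a User Cabinet:

```python
import math, struct, wave

def peaking_biquad(f0, gain_db, q, sr):
    # RBJ Audio EQ Cookbook peaking filter coefficients
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [bi / a[0] for bi in b], [1.0, a[1] / a[0], a[2] / a[0]]

def run_biquad(x, b, a):
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = s, x1, out, y1
        y.append(out)
    return y

sr, n = 48000, 2048
ir = [1.0] + [0.0] * (n - 1)                       # the one-sample impulse
# Hypothetical "cab curve": low-mid bump, presence bump, fizz cut
for f0, gain_db, q in ((120.0, 4.0, 1.0), (2500.0, 6.0, 2.0), (6000.0, -12.0, 0.7)):
    b, a = peaking_biquad(f0, gain_db, q, sr)
    ir = run_biquad(ir, b, a)

with wave.open("analog_cab_ir.wav", "wb") as w:    # 16-bit mono WAV
    w.setnchannels(1); w.setsampwidth(2); w.setframerate(sr)
    w.writeframes(b"".join(struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
                           for s in ir))
```

Because the IR is just the EQ curve’s impulse response, the “painting” analogy holds: the result captures the tone shaping, not the physical reflections of a real miked cab.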
How to Use Analog Cabs
In any event, whether you go for individual impulses, layering, or creating stereo impulses, I think you’ll find that “analog” cab IRs extend Ampire’s sonic palette even further. And if you have any questions, or feedback on using analog cabs, feel free to take advantage of the Comments section!
Download the Ampire Analog Cab IRs.zip file below:
Many of these tips have their genesis in asking “What if?” That question led to the Higher-Def Amp Sim Sounds blog post, which people seemed to like. But then I thought “What about taking this idea even further?” Much to my surprise, it could go further. This week’s tip, based on the Ampire High Density pack, is ideal for increasing the definition and articulation of high-gain and metal amp sims.
Fig. 1 shows the FX Chain (the download link is available at the end of this post). The Splitter is in Channel Split mode. If your guitar track is mono (and therefore has only one channel), change the track mode to stereo and then bounce the Event to itself. This creates a dual mono track, which is optimum for this application.
With traditional multiband processing, each band represents a range of frequencies. Distorting a limited range of frequencies reduces intermodulation distortion. The result is a more defined, articulated sound quality.
Fig. 1 implements a variation on multiband processing. It has four amps, but inserts Ampire’s Ten Band Graphic Equalizer before each amp. The graphic EQ sends two narrow frequency bands into each amp. Choosing frequency bands that are as far apart as possible reduces intermodulation distortion even further than standard multiband processing.
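Here’s a quick Python experiment that shows why splitting into bands helps. Two tones distorted together generate an intermodulation product at 2 × 1000 − 1500 = 500 Hz; distorting them separately does not. The tanh curve is just a stand-in for an amp’s saturation, and the tone frequencies are arbitrary.

```python
import numpy as np

sr, n = 48000, 48000
t = np.arange(n) / sr
tone_a = np.sin(2 * np.pi * 1000 * t)
tone_b = np.sin(2 * np.pi * 1500 * t)

def drive(x):
    return np.tanh(3.0 * x)                   # stand-in for an amp's saturation

one_amp = drive(tone_a + tone_b)              # both tones through the same "amp"
per_band = drive(tone_a) + drive(tone_b)      # one "amp" per narrow band

def level(x, hz):
    # Normalized spectrum magnitude at an exact bin frequency
    return np.abs(np.fft.rfft(x))[int(round(hz * len(x) / sr))] / len(x)

imd_one_amp = level(one_amp, 500.0)           # 2*1000 - 1500 = 500 Hz IM product
imd_per_band = level(per_band, 500.0)         # essentially absent
```

The per-band path produces only each tone’s own harmonics, which is why the multi-amp chain sounds more defined and articulated.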
Referring to fig. 2, two bands in each graphic EQ are at +6 dB. The others are all at 0. Note how the various EQs offset the bands to different frequencies.
The Dual Pan plug-ins create a stereo image. With a traditional multiband setup, I tend to pan the low- and high-frequency bands to center, and spread the lower mids and upper mids in stereo. That doesn’t apply here, because there aren’t wide frequency ranges. Use whatever panning gives a stereo image you like.
A waveform is worth a thousand words, so check out the audio example. The first half is guitar going through Ampire’s German Metal amp sim. The second half uses this technique, with the same guitar track and amp sim settings. I think you’ll hear quite a difference.
Can This Be Taken Even Further?
Yes, it can—I also tried using eight splits. Because the Splitter module handles a maximum of five splits, I used Duplicate Track (complete) on the track with the FX Chain, and fed both tracks with the same guitar part. The 31.2 Hz and 16 kHz bands aren’t particularly relevant, so I ignored those and fed one band from each EQ into an amp. As expected, this asks quite a bit of your CPU. Consider transforming the track to rendered audio (and preserving the realtime state, in case you need edits in the future).
However, I’m not convinced I liked the sound better. That level of definition seemed a little too clean for a metal amp sim. Sure, give it a try—but I feel the setup in this tip is the sweet spot of sound quality and convenience.
Download the FX Chain below!
Over four years ago, the blog post Colorization: It’s Not Just about Eye Candy covered the basics of using color. However, v6.1’s Custom Colors feature goes way beyond Studio One’s original way of handling colors.
The Help Viewer describes Custom Color operations, so we’ll concentrate on the process of customizing colors efficiently for your needs. For example, my main use for colors is to differentiate track types (e.g., drums, synth, loops, voice, guitar, etc.). Then, changing the color’s brightness or saturation can indicate specific attributes within a track group, like whether a track is a lead part or background part, or whether a part is finished or needs further editing.
Opening the Custom Colors window and seeing all those colors may seem daunting. But as you’ll see, specifying the colors you want is not difficult.
What Are Hex, HSL, and RGB?
Electronic displays have three primary colors—red, green, and blue. Combining these produces other colors. For example, combining red and blue creates purple, while combining green and blue creates cyan. The three fields at the bottom of the expanded Custom Colors window (fig. 1) show the three main ways to define colors (left to right): Hex, HSL (Hue, Saturation, Lightness), and RGB (Red, Green, Blue). These are simply three different ways to express the same color.
RGB uses three sets of numbers, from 0 to 255, to express the values of Red, Green, and Blue. 255, 0, 0 would mean all red, no green, and no blue.
Hex strings together three pairs of hexadecimal digits, each pair ranging from 00 to FF. The first two digits indicate the amount of red, the second two the amount of green, and the final two the amount of blue.
HSL is arguably the most intuitive way to specify custom colors, so that’s the option selected in fig. 1.
You can think of the spectrum of colors as a circle that starts at red, goes through the various colors, and ends up back at red. So, each color represents a certain number of degrees of rotation on that wheel. The number of degrees corresponds to the Hue (color), represented by the H in HSL. Each main color is 30 degrees apart along the wheel:
S represents the amount of saturation, from 0 to 100%. This defines the color’s vibrancy—with 100% saturation, the color is at its most vibrant. Pulling back on saturation mutes the color more. L is the lightness, which is basically brightness. Like saturation, the value goes from 0 to 100%. As you turn up lightness, the color becomes brighter until it washes out and becomes white. Turn lightness down, and the color becomes darker.
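If you want to translate between the three notations yourself, Python’s standard colorsys module does the math (note that its function takes HLS argument order, not HSL, with every value scaled 0 to 1). The two greens below match the lead/background vocal values discussed in the next section.

```python
import colorsys

def hsl_to_hex(h_deg, s_pct, l_pct):
    # colorsys expects HLS argument order, with every value scaled 0-1
    r, g, b = colorsys.hls_to_rgb(h_deg / 360.0, l_pct / 100.0, s_pct / 100.0)
    return "#{:02x}{:02x}{:02x}".format(*(int(round(c * 255)) for c in (r, g, b)))

lead_vocal = hsl_to_hex(120, 100, 50)   # vivid green
bg_vocal = hsl_to_hex(120, 50, 50)      # same hue, half the saturation
```

Halving only the saturation keeps the hue and lightness identical, which is what makes the two track colors read as “the same family, different role.”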
The Payoff of Custom Colors
Here’s why it’s useful to know the theory behind choosing colors. As mentioned at the beginning, I use two color variations for each group of tracks. For example, vocal tracks are green. I wanted bright green for lead vocals, and a more muted green for background vocals. For the bright green color, I created a custom color with HSL values of 120, 100%, and 50%. For the alternate color, I used the same values except for changing Saturation to 50%.
Fig. 2 shows the custom color parameter values used for the 12 main track groups. The right-most column in fig. 1 shows the main track group colors. The next column to the left shows the variation colors, which have 50% saturation. In the future, I’ll be adding more colors to the 12 original colors (for example, brown is the 13th color down from the top in fig. 1’s custom colors). Fortunately, the custom color feature lets you save and load presets.
The brain can parse images and colors more quickly than words, and this activity occurs mostly in the brain’s right hemisphere. This is the more intuitive, emotional hemisphere, as opposed to the left hemisphere that’s responsible for analytical functions like reading words. When you’re in the right hemisphere’s creative zone, you want to stay there—and v6.1’s track icons and custom colors help you do that.
But Wait…There’s More!
Don’t forget that Studio One also has overall appearance preferences at Options > Appearance. This is kind of like a “master volume control” for colors. If you increase contrast, the letters for track names, plugins, etc. really “pop.” For my custom colors, increasing the overall Luminance emphasizes the difference between the main track color and the variation track color.
There’s nothing new about using an FX Channel to add an effect in parallel to a main track. But we can make effects even more effective by “tuning” them, to provide more focus.
This process works by inserting a Pro EQ3 before an FX Channel effect or effects (fig. 1). Then, use the EQ’s Low Cut and High Cut filters to tune a specific frequency range that feeds the effect. For example, I’ve mentioned restricting high and low frequencies prior to feeding amp sims, but we can use this focusing technique with any processor.
There are several reasons for placing the Pro EQ3 before the effect. With saturation effects, this reduces the possibility of intermodulation distortion. With other effects, reducing the level of unneeded frequencies opens up more headroom in the effect itself. Finally, with effects that respond to dynamics (autofilter, compressor, etc.), you won’t have frequencies you don’t want pushing the frequencies you do want over the processor’s threshold.
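As a crude model of this focusing idea (with brick-wall FFT filters standing in for the Pro EQ3’s Low Cut and High Cut, and a tanh curve standing in for a saturation effect like the RedlightDist), the sketch below shows how only the focused band drives the effect:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
kick = np.sin(2 * np.pi * 60 * t)          # low "thump"
snap = 0.5 * np.sin(2 * np.pi * 5000 * t)  # high "snap"
drums = kick + snap

def focus(x, low_cut, high_cut):
    # Brick-wall stand-in for the Pro EQ3's Low Cut / High Cut filters
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / sr)
    spec[(f < low_cut) | (f > high_cut)] = 0
    return np.fft.irfft(spec, len(x))

def saturate(x):
    return np.tanh(4.0 * x)                # stand-in for a saturation effect

high_focus = saturate(focus(drums, 2000.0, 12000.0))  # only the snap drives it
low_focus = saturate(focus(drums, 20.0, 200.0))       # only the kick drives it

def level(x, hz):
    return np.abs(np.fft.rfft(x))[int(round(hz * len(x) / sr))] / len(x)
```

Because the unfocused band never reaches the effect, it can’t create intermodulation with the focused band, and it doesn’t eat into the effect’s headroom.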
Here are some specific examples to help get your creative juices flowing.
Distortion or Saturation with Drums
The audio example plays four measures of drums going into the RedlightDist, with no focus. The next four measures focus on the high frequencies. This gives an aggressive “snap” to the snare. The next four measures focus on the low frequencies, to push the kick forward.
Fig. 2 shows the tunings for the high- and low-frequency focus.
Reverb with Guitar
The audio example plays four measures of midrange-frequency focus feeding reverb, then four measures using a high-frequency focus. Focusing is helpful with longer reverb times, because there are fewer frequencies to interfere with the main sound.
Fig. 3 shows the tunings for the midrange- and high-frequency focus filters.
Delay with Synth Solo
For our last example, the first five measures are synth with no focus. The next five measures focus on the lower frequencies. The difference is subtle, but it “tucks away” the delay behind the solo line. The final five measures focus on the high frequencies, for a more distant echo vibe.
Fig. 4 shows the tunings for the low- and high-frequency focus filters.
These are just a few possibilities—another favorite of mine is sending focused frequencies to a chorus, so that the chorus effect doesn’t overwhelm an instrument. Expanders also lend themselves to this approach, as does saturation with bass and electric pianos.
Perhaps most importantly, focusing the effects can give a less cluttered mix. Even tracks with heavy processing can stand out, and sound well-defined.
The March 2020 blog post, Taming the Wild Autofilter, never appeared in any of The Huge Book of Studio One Tips & Tricks eBook updates. This is because the tip worked in Studio One 4, but not in Studio One 5. However, Studio One 6 has brought the Autofilter back to its former glory (and then some). Even better, we can now take advantage of FX Bus sends and dynamic EQ. So, this tip is a complete redo of the original blog post. (Compared to a similar tip in eBook version 1.4, this version replaces the Channel Strip with the Pro EQ3 for additional flexibility.)
The reason for coming up with this technique was that although I’d used the Autofilter for various applications, I couldn’t get it to work quite right for its intended application with guitar or bass. Covering the right filter cutoff range was a problem—for example, it wouldn’t go high enough if I hit the strings hard, but if I compensated for that by turning up the filter cutoff, then the cutoff wouldn’t go low enough with softer picking. Furthermore, the responsiveness varied dramatically, depending on whether I was playing high on the neck, or hitting low notes on the low E and A strings. This tip solves these issues.
The guitar track’s audio uses pre-fader sends to go to two FX Buses (fig. 1). The Autofilter Out FX Bus produces the audio output. The Autofilter Trig FX bus processes the audio going to the Autofilter’s sidechain. By processing the Guitar track’s send to the sidechain, we can make the Autofilter respond however we want. Furthermore, if needed, you can feed a low-level signal from the Guitar track’s pre-fader send into the Autofilter, to avoid distortion with high-resonance settings. This is possible because the Autofilter Trig bus—which you don’t need to hear, and can be any level you want—controls the Autofilter’s action.
Perhaps best of all, this also means the Autofilter no longer depends on having an input signal with varying dynamics. You can insert an amp sim, overdrive, compressor, or anything else that restricts dynamic range in front of the Autofilter. The Autofilter will still respond to the original Guitar track’s dynamics, as processed by the dynamic EQ.
The Pro EQ3 (fig. 2) conditions the send to make the Autofilter happy. The dynamic EQ attenuates lower frequencies that exceed the Threshold, but amplifies higher frequencies that exceed the Threshold. So, the Autofilter’s response to the higher-output, lower strings can be consistent with the upper strings.
The Autofilter (fig. 3) sets the LFO Cutoff Modulation to 0, because I wanted only the envelope to affect the filter. The settings for the Autofilter and Pro EQ3 interact with each other, as well as with the guitar and pickups. In this case, I used a Telecaster with a single-coil treble pickup. For humbucking pickups, you may need to attenuate the low frequencies more.
Like Autofilters in general, it takes some experimenting to dial in the ideal settings for your playing style, strings, pickups, musical genre, and so on. However, the big advantage of this approach is that once you find the ideal settings, the response will be less critical, more consistent, and more forgiving of big dynamic changes in your playing.
And here’s a final tip: Processing the signal going to the Autofilter’s sidechain has much potential. Try using Analog Delay, X-Trem, and other effects. Also, although the original Guitar track and Autofilter Trig faders are shown at 0, no law says they have to be. Feel free to mix in some of the original guitar sound, and/or the equalized Autofilter Trig bus audio.
High-gain distortion is great for lead guitar sustain and tone, but it also brings up that “splat” of pick noise at the note’s beginning. Sometimes, you want the gritty, dirty feel it adds. But it can be a distraction when your goal is a gentler, more lyrical tone that still retains the sound of heavy distortion.
This technique gives the best of both worlds for single-note leads, and is particularly effective with full mixes where the lead guitar has a lot of echo. Normally the echo will repeat the pick noise, so reducing it reduces clutter, and gives more clarity to the mix.
1. Open the lead part in the Edit window.
2. Choose Action, and under the Audio Bend tab, select Detect Transients.
3. Zoom in to verify there’s a Bend Marker at the beginning of each note’s first peak (fig. 1). If you need to add a Bend Marker, click at the note’s beginning using the Bend tool. To move a Bend Marker for more precise placement, hold Alt/Opt while clicking on the marker with the Bend tool, and drag.
4. Choose Action, and under the Audio Bend tab, select Split at Bend Markers. Now, each note is its own Event (fig. 2).
5. Make sure all the notes are selected (fig. 3). The next steps show any needed edits being made to one Event. However, because all the notes are selected, any edit affects all notes equally. To show the edits in more detail, the following steps zoom in on two notes.
6. Trim the note ends to remove some of the pre-note “dirt” (fig. 4).
7. Add a fade-in and fade-out (fig. 5). This doesn’t have to be exact, because you’ll optimize the times in step 9.
8. There’s a gap between notes, so time-stretch the end of the note to cover the gap. Alt/Opt+click on the end of a note, and drag to the right until the note end is up against the beginning of the next note (fig. 6).
9. That may seem like a lot of work, but once you’ve defined the bend markers, having to edit only one note to edit all the notes speeds the process.
Start playback with all the notes still selected, listen, and vary the fade times. Also experiment with the curve shape. A concave curve can work well with attacks. I often try for the minimum amount of attack and decay that gives the desired result, but not always—when taken to extremes, being able to shape notes enables options that sound almost like a synthesizer.
The audio example shows how this tweak affects a single-note lead. The first part is as recorded, the second part uses this tip.
Some virtual instruments can accept external audio inputs. This lets you process audio through the synthesizer’s various modules like filters, VCAs, effects, and so on. Essentially, the synthesizer becomes an effects processor. To accommodate this, Version 6 introduced a sidechain audio input for virtual instruments.
Not all instruments have this capability. I’ve tested the audio sidechain input successfully with Cherry Audio’s CA2600, Miniverse, PS-30, Rackmode Vocoder, and Voltage Modular. Arturia’s Vocoder V also works. I’d really appreciate any notes in the Comments section about other instruments that work with this feature.
Is My Virtual Instrument Compatible?
Insert the synth, and click on the sidechain symbol in its header. If you see a box with Send and Output options (fig. 1), you can feed audio into the synthesizer. Check the box for either a Send from a track (pre- or post-fader), or the track output.
You’ll probably need to enable the virtual instrument’s external audio input. Fig. 2 shows how to do this with Cherry Audio’s Miniverse, which emulates how the Minimoog accepted external inputs:
Studio One Setup
Fig. 3 shows the track layout for Studio One. Ignore the Gate for now; we’ll cover it shortly.
I chose a post-fader Send from the audio track, not the track output, to drive the synth. This is because I wanted to be able to mix parallel tracks—the audio providing the input, and the audio processed by the synthesizer.
Using the Gate
You won’t hear anything from the synth unless you trigger the VCA to let the external audio signal through. You can play a keyboard to trigger the synth for specific sections of the audio track, but the Gate can provide automatic triggering (fig. 4).
With Triggering enabled, the Gate produces a MIDI note trigger every time it opens. So, insert the Gate in the audio track, and set the Instrument track’s MIDI input to Gate. Now, the audio will trigger the synth. Adjust the Gate Threshold for the most reliable triggering. This approach is particularly useful with instruments that have percussive attacks, like drums, guitar, and piano.
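Conceptually, the Gate’s Trigger mode behaves something like this toy sketch (the threshold, frame size, and synthetic “drum hits” are all made up for illustration, and the real Gate works sample-by-sample, not framewise):

```python
import numpy as np

def gate_triggers(x, sr, threshold=0.2, frame=128):
    # Emit a note-on time whenever the framewise peak crosses the threshold
    triggers, gate_open = [], False
    for i in range(0, len(x), frame):
        peak = np.max(np.abs(x[i:i + frame]))
        if peak > threshold and not gate_open:
            triggers.append(i / sr)        # would become a MIDI note-on
            gate_open = True
        elif peak <= threshold:
            gate_open = False              # gate closes, re-arming the trigger
    return triggers

sr = 48000
audio = np.zeros(sr)
for hit in (0.1, 0.4, 0.7):                # three synthetic "drum hits"
    n0 = int(hit * sr)
    audio[n0:n0 + 2000] = np.exp(-np.arange(2000) / 400.0)
hits = gate_triggers(audio, sr)
```

Each hit opens the gate once, fires one trigger, and re-arms after the level decays below the threshold, which is why setting the Threshold carefully matters for reliable triggering.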