There’s nothing new about using an FX Channel to add an effect in parallel to a main track. But we can make effects even more effective by “tuning” them, to provide more focus.
This process works by inserting a Pro EQ3 before an FX Channel effect or effects (fig. 1). Then, use the EQ’s Low Cut and High Cut filters to define the specific frequency range that feeds the effect. For example, I’ve mentioned restricting high and low frequencies prior to feeding amp sims, but we can use this focusing technique with any processor.
There are several reasons for placing the Pro EQ3 before the effect. With saturation effects, this reduces the possibility of intermodulation distortion. With other effects, reducing the level of unneeded frequencies opens up more headroom in the effect itself. Finally, with effects that respond to dynamics (autofilter, compressor, etc.), you won’t have frequencies you don’t want pushing the frequencies you do want over the processor’s threshold.
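To hear why this matters in saturation terms, consider a minimal Python sketch (numpy/scipy; the filter settings and tanh waveshaper are illustrative stand-ins, not the actual Pro EQ3 or RedlightDist algorithms):

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 48000  # sample rate

def focus(audio, low_cut_hz, high_cut_hz, sr=SR):
    """Stand-in for the Pro EQ3's Low Cut and High Cut filters:
    band-limit the signal before it reaches the effect."""
    sos = butter(4, [low_cut_hz, high_cut_hz], btype="bandpass",
                 fs=sr, output="sos")
    return sosfilt(sos, audio)

def saturate(audio, drive=4.0):
    """Simple tanh waveshaper, standing in for a saturation effect."""
    return np.tanh(drive * audio)

# Test signal: a strong low tone plus a weaker high tone.
t = np.arange(SR) / SR
signal = 0.5 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 4000 * t)

# Unfocused: the hot 60 Hz component dominates the waveshaper, and
# intermodulation products smear the 4 kHz component.
unfocused = saturate(signal)

# Focused on the highs: cutting the lows first leaves headroom, so the
# distortion emphasizes the high-frequency "snap" instead.
focused = saturate(focus(signal, 1000, 8000))
```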
Here are some specific examples to help get your creative juices flowing.
Distortion or Saturation with Drums
The audio example plays four measures of drums going into the RedlightDist, with no focus. The next four measures focus on the high frequencies. This gives an aggressive “snap” to the snare. The next four measures focus on the low frequencies, to push the kick forward.
Fig. 2 shows the tunings for the high- and low-frequency focus.
Reverb with Guitar
The audio example plays four measures of midrange-frequency focus feeding reverb, then four measures using a high-frequency focus. Focusing is helpful with longer reverb times, because there are fewer frequencies to interfere with the main sound.
Fig. 3 shows the tunings for the midrange- and high-frequency focus filters.
Delay with Synth Solo
For our last example, the first five measures are synth with no focus. The next five measures focus on the lower frequencies. The difference is subtle, but it “tucks away” the delay behind the solo line. The final five measures focus on the high frequencies, for a more distant echo vibe.
Fig. 4 shows the tunings for the low- and high-frequency focus filters.
These are just a few possibilities—another favorite of mine is sending focused frequencies to a chorus, so that the chorus effect doesn’t overwhelm an instrument. Expanders also lend themselves to this approach, as does saturation with bass and electric pianos.
Perhaps most importantly, focusing the effects can give a less cluttered mix. Even tracks with heavy processing can stand out, and sound well-defined.
The March 2020 blog post, Taming the Wild Autofilter, never appeared in any of The Huge Book of Studio One Tips & Tricks eBook updates. This is because the tip worked in Studio One 4, but not in Studio One 5. However, Studio One 6 has brought the Autofilter back to its former glory (and then some). Even better, we can now take advantage of FX Bus sends and dynamic EQ. So, this tip is a complete redo of the original blog post. (Compared to a similar tip in eBook version 1.4, this version replaces the Channel Strip with the Pro EQ3 for additional flexibility.)
The reason for coming up with this technique was that although I’d used the Autofilter for various applications, I couldn’t get it to work quite right for its intended application with guitar or bass. Covering the right filter cutoff range was a problem—for example, it wouldn’t go high enough if I hit the strings hard, but if I compensated for that by turning up the filter cutoff, then the cutoff wouldn’t go low enough with softer picking. Furthermore, the responsiveness varied dramatically, depending on whether I was playing high on the neck, or hitting low notes on the low E and A strings. This tip solves these issues.
The guitar track’s audio uses pre-fader sends to go to two FX Buses (fig. 1). The Autofilter Out FX Bus produces the audio output. The Autofilter Trig FX bus processes the audio going to the Autofilter’s sidechain. By processing the Guitar track’s send to the sidechain, we can make the Autofilter respond however we want. Furthermore, if needed, you can feed a low-level signal from the Guitar track’s pre-fader send into the Autofilter, to avoid distortion with high-resonance settings. This is possible because the Autofilter Trig bus—which you don’t need to hear, and can be any level you want—controls the Autofilter’s action.
Perhaps best of all, this also means the Autofilter no longer depends on having an input signal with varying dynamics. You can insert an amp sim, overdrive, compressor, or anything else that restricts dynamic range in front of the Autofilter. The Autofilter will still respond to the original Guitar track’s dynamics, as processed by the dynamic EQ.
The Pro EQ3 (fig. 2) conditions the send to make the Autofilter happy. The dynamic EQ attenuates lower frequencies that exceed the Threshold, but amplifies higher frequencies that exceed the Threshold. So, the Autofilter’s response to the higher-output, lower strings can be consistent with the upper strings.
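For the curious, here’s a minimal Python sketch of this kind of conditioning. It’s not the Pro EQ3’s actual algorithm; the bands, threshold, and gains are arbitrary illustrations of the idea that hot low strings get ducked while quieter high strings get lifted:

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 48000

def band(audio, lo_hz, hi_hz):
    """Split out one frequency band with a band-pass filter."""
    sos = butter(2, [lo_hz, hi_hz], btype="bandpass", fs=SR, output="sos")
    return sosfilt(sos, audio)

def envelope(audio, attack_ms=5.0, release_ms=60.0):
    """One-pole envelope follower."""
    a = np.exp(-1.0 / (SR * attack_ms / 1000))
    r = np.exp(-1.0 / (SR * release_ms / 1000))
    env, level = np.zeros_like(audio), 0.0
    for i, x in enumerate(np.abs(audio)):
        coeff = a if x > level else r
        level = coeff * level + (1 - coeff) * x
        env[i] = level
    return env

def condition_sidechain(guitar, threshold=0.2):
    """Illustrative dynamic EQ: attenuate low frequencies that exceed
    the threshold, boost high frequencies that exceed it."""
    lows, highs = band(guitar, 60, 400), band(guitar, 400, 6000)
    low_env, high_env = envelope(lows), envelope(highs)
    low_gain = np.where(low_env > threshold, threshold / (low_env + 1e-9), 1.0)
    high_gain = np.where(high_env > threshold, 1.5, 1.0)
    return lows * low_gain + highs * high_gain
```

The envelope of the conditioned signal is what would then drive the Autofilter’s cutoff, which is why the low strings no longer swamp the response.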
The Autofilter (fig. 3) has its LFO Cutoff Modulation set to 0, because I wanted only the envelope to affect the filter. The settings for the Autofilter and Pro EQ3 interact with each other, as well as with the guitar and pickups. In this case, I used a Telecaster with a single-coil treble pickup. For humbucking pickups, you may need to attenuate the low frequencies more.
As with autofilters in general, it takes some experimenting to dial in the ideal settings for your playing style, strings, pickups, musical genre, and so on. However, the big advantage of this approach is that once you find those settings, the response will be less critical, more consistent, and more forgiving of big dynamic changes in your playing.
And here’s a final tip: Processing the signal going to the Autofilter’s sidechain has much potential. Try using Analog Delay, X-Trem, and other effects. Also, although the original Guitar track and Autofilter Trig faders are shown at 0, no law says they have to be. Feel free to mix in some of the original guitar sound, and/or the equalized Autofilter Trig bus audio.
High-gain distortion is great for lead guitar sustain and tone, but it also brings up that “splat” of pick noise at the note’s beginning. Sometimes, you want the gritty, dirty feel it adds. But it can be a distraction when your goal is a gentler, more lyrical tone that still retains the sound of heavy distortion.
This technique gives the best of both worlds for single-note leads, and is particularly effective with full mixes where the lead guitar has a lot of echo. Normally the echo will repeat the pick noise, so reducing it reduces clutter, and gives more clarity to the mix.
1. Open the lead part in the Edit window.
2. Choose Action, and under the Audio Bend tab, select Detect Transients.
3. Zoom in to verify there’s a Bend Marker at the beginning of each note’s first peak (fig. 1). If you need to add a Bend Marker, click at the note’s beginning using the Bend tool. To move a Bend Marker for more precise placement, hold Alt/Opt while clicking on the marker with the Bend tool, and drag.
4. Choose Action, and under the Audio Bend tab, select Split at Bend Markers. Now, each note is its own Event (fig. 2).
5. Make sure all the notes are selected (fig. 3). The next steps show any needed edits being made to one Event. However, because all the notes are selected, any edit affects all notes equally. To show the edits in more detail, the following steps zoom in on two notes.
6. Trim the note ends to remove some of the pre-note “dirt” (fig. 4).
7. Add a fade-in and fade-out (fig. 5). This doesn’t have to be exact, because you’ll optimize the times in step 9.
8. There’s a gap between notes, so time-stretch the end of the note to cover the gap. Alt/Opt+click on the end of a note, and drag to the right until the note end is up against the beginning of the next note (fig. 6).
9. Start playback with all the notes still selected, listen, and vary the fade times. Also experiment with the curve shape. A concave curve can work well with attacks. I often try for the minimum amount of attack and decay that gives the desired result, but not always: when taken to extremes, being able to shape notes enables options that sound almost like a synthesizer.
That may seem like a lot of work, but once you’ve defined the Bend Markers, having to edit only one note to edit all the notes speeds up the process.
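For intuition about how curve shape affects the attack, here’s a minimal numpy sketch of a shapeable fade-in. The exponent-based curve is an arbitrary stand-in for Studio One’s graphical fade handles:

```python
import numpy as np

SR = 48000

def fade_in(audio, fade_ms, shape=1.0):
    """Apply a fade-in. shape=1.0 is linear; shape > 1 is concave,
    which holds the level down longer before ramping up, so more of
    the pick "splat" is suppressed."""
    n = min(int(SR * fade_ms / 1000), len(audio))
    curve = np.linspace(0.0, 1.0, n) ** shape
    out = audio.copy()
    out[:n] *= curve
    return out

# Example: a decaying noise burst as a stand-in for a distorted note.
note = np.random.randn(SR // 2) * np.exp(-np.linspace(0, 6, SR // 2))
softened = fade_in(note, fade_ms=30, shape=2.5)  # 30 ms concave fade
```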
The audio example shows how this tweak affects a single-note lead. The first part is as recorded, the second part uses this tip.
Some virtual instruments can accept external audio inputs. This lets you process audio through the synthesizer’s various modules like filters, VCAs, effects, and so on. Essentially, the synthesizer becomes an effects processor. To accommodate this, Version 6 introduced a sidechain audio input for virtual instruments.
Not all instruments have this capability. I’ve tested the audio sidechain input successfully with Cherry Audio’s CA2600, Miniverse, PS-30, Rackmode Vocoder, and Voltage Modular. Arturia’s Vocoder V also works. I’d really appreciate any notes in the Comments section about other instruments that work with this feature.
Is My Virtual Instrument Compatible?
Insert the synth, and click on the sidechain symbol in its header. If you see a box with Send and Output options (fig. 1), you can feed audio into the synthesizer. Check the box for either a Send from a track (pre- or post-fader), or the track output.
You’ll probably need to enable the virtual instrument’s external audio input. Fig. 2 shows how to do this with Cherry Audio’s Miniverse, which emulates how the Minimoog accepted external inputs.
Studio One Setup
Fig. 3 shows the track layout for Studio One. Ignore the Gate for now; we’ll cover that shortly.
I chose a post-fader Send from the audio track, not the track output, to drive the synth. This is because I wanted to be able to mix parallel tracks—the audio providing the input, and the audio processed by the synthesizer.
Using the Gate
You won’t hear anything from the synth unless you trigger the VCA to let the external audio signal through. You can play a keyboard to trigger the synth for specific sections of the audio track, but the Gate can provide automatic triggering (fig. 4).
With Triggering enabled, the Gate produces a MIDI note trigger every time it opens. So, insert the Gate in the audio track, and set the Instrument track’s MIDI input to Gate. Now, the audio will trigger the synth. Adjust the Gate Threshold for the most reliable triggering. This is particularly useful with instruments that have well-defined attacks, like drums, guitar, piano, etc.
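Conceptually, the Gate’s Trigger mode is threshold detection that fires a MIDI note whenever the gate opens. Here’s a minimal Python sketch of that idea; the threshold and hold values are hypothetical, not the Gate’s actual parameters:

```python
import numpy as np

SR = 48000

def gate_triggers(audio, threshold=0.3, hold_ms=50):
    """Return the sample positions where a gate would open and fire a
    MIDI note: the signal crosses the threshold from below, and
    retriggering is suppressed for hold_ms afterward."""
    hold = int(SR * hold_ms / 1000)
    env = np.abs(audio)
    triggers, last = [], -hold
    for i in range(1, len(env)):
        if env[i] >= threshold and env[i - 1] < threshold and i - last >= hold:
            triggers.append(i)  # here a note-on would be sent to the synth
            last = i
    return triggers
```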
This builds on last week’s tip about splitting and navigating within the Video Track, because one of the main reasons for creating splits is to import additional material. Although the Video Track can accept common video file formats, sometimes you’ll need to import static JPG or PNG images. These could be a band logo, screen shots for a tutorial video, a slide with your web site and contact information, photos from a smartphone, public domain images, etc. To bring them into the video track, you need to convert them into a compatible format, like MP4.
Many online sites offer to “convert JPEG to MP4 for free!” However, I’m skeptical of those kinds of sites. Fortunately, modern Mac and Windows operating systems include tools that can do any needed conversion.
Converting with the Mac
1. Open iMovie, click on Create New, and then choose Movie.
2. Click on Import Media. Navigate to the location of the image you want to convert, and open it in iMovie.
3. Choose File > Share > File.
4. In the window that opens, click Next…
5. Navigate to where you want to save the file, and click on Save. You now have an MP4 file.
If you need a longer video than the default 3 seconds, drag more copies of the file to the timeline before saving. Or, drag the file into Studio One multiple times.
Converting with Windows
1. Open Video Editor (it’s not necessary to install Clipchamp). Click on New Video, and name it.
2. Click on the nearly invisible Project Library button.
3. Drag the image you want to convert into the Library. Then right-click on the image, and choose Place in Storyboard. The default length is 3 seconds. If it needs to be longer, select Place in Storyboard again. Or, drag the finished file into Studio One multiple times.
4. Click on Finish video (in the upper right), then click on Export.
5. Name the file, navigate to where you want to save it, then click on Export. Done! Your image is now an MP4 video you can insert into Studio One.
What’s better than one vocoder? Two vocoders, of course 😊. This tip is more about a technique than an application, although we’ll cover an application to illustrate the technique. But the main goal is to inspire you to try stereo vocoding and come up with your own applications, so there are additional tips at the end.
Long-time blog readers may have noticed my fascination with fusing melodic and percussive components. The easiest way to do this is to have a drum track (or reverb, pink noise, hand percussion, whatever) follow the Chord Track via Harmonic Editing. Although this tip takes that concept further, it’s about more than just percussion. Inserting a Splitter in an FX Chain, and following it with two vocoders, opens sonic options you can’t obtain any other way.
The FX Chain and Track Layout
Fig. 1 shows the stereo vocoder FX Chain. This technique also works with Artist. However, because Artist doesn’t include the Splitter, it requires three tracks: the source track, plus a separate FX Channel for each vocoder.
This application uses stereo drums, so the Splitter mode is Channel Split. The Dual Pan modules at the vocoder outputs provide stereo imaging. I typically pan one vocoder full left and the other full right, but sometimes I’ll weight them more to center, or to one side of the stereo field.
Fig. 2 shows the track layout. Each Mai Tai instrument track has a Send. These feed the sidechains for the two vocoders to provide the carrier audio. Although the Mai Tai faders are at minimum, mixing in some instrument sound provides yet another character.
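To make the topology concrete, here’s a minimal Python sketch. Simple band-pass filters stand in for the two vocoders (a real vocoder is beyond a short sketch), and the carrier sends are omitted; the point is the Channel Split feeding two independent processors, re-imaged hard left and right:

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 48000

def bp(audio, lo_hz, hi_hz):
    """Band-pass filter, used below as a crude stand-in for a vocoder."""
    sos = butter(2, [lo_hz, hi_hz], btype="bandpass", fs=SR, output="sos")
    return sosfilt(sos, audio)

def stereo_split_chain(drums):
    """drums: float array of shape (num_samples, 2), a stereo mix."""
    left, right = drums[:, 0], drums[:, 1]   # Splitter in Channel Split mode
    voc1 = bp(left, 200, 2000)               # "vocoder" 1 processes the left split
    voc2 = bp(right, 800, 8000)              # "vocoder" 2 processes the right split
    out = np.empty_like(drums)
    out[:, 0], out[:, 1] = voc1, voc2        # Dual Pans: hard left / hard right
    return out
```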
Applications
This brief audio example adds a melodic component to drums. The two Mai Tai MIDI tracks are offset by an octave.
Fig. 3 shows the vocoder patch matrices. These particular settings are of no real consequence; they just emphasize that using different patch matrix settings for the left and right channel vocoders can have a major impact on the sound.
As to other applications:
You’re probably familiar with Studio One’s Micro Edit view, where you can expand/collapse an effect in the Insert Device Rack. However, Studio One 6 extends this ability to third-party plug-ins. You can treat the Insert Device Rack as your plug-in GUI, and tweak or write automation for parameters without having to open a plug-in’s interface. The Micro Edit view is ideal for quick tweaks, but also for effects that interact with each other, like EQ and compression. Instead of having to bounce back and forth between two different plug-in UIs, all the necessary parameters are laid out in front of you.
Fig. 1 shows an example of using a Micro Edit view in the master bus. The processors are Waves’ UM226 stereo-to-5.1 surround upmix plug-in, followed by IK Multimedia’s Stealth Limiter as a post-master fader effect. Although the UM226 is designed to synthesize surround, I also use it as a “secret weapon” to give more of an immersive feel, and sense of space, to stereo mixes. Its UI is relatively large, so the Micro Edit view lets me tweak it while doing a mix with other plug-ins open. Micro Edit view lets you choose the parameters you want to include from third-party plug-ins. The ones shown are the ones I edit most often.
You can choose as many or as few parameters as you want, and all of them are automatable. However, also note that the standard automation options don’t have to overlap with these parameters. For example, I use standard automation for switched functions that might need to change only once or twice during a song. There’s no need to have them take up space in the Micro Edit view.
Serious Micro Edit Automation
Opening the view in the Insert Device Rack limits the fader length. That’s okay for tweaking parameters, but for automation, it’s better to have a longer fader. Version 6’s Channel Overview takes care of that (fig. 2).
Assigning Parameters
With PreSonus plug-ins, Micro Edit View parameters are pre-assigned to the most-used functions. With third-party effects, you select the Micro Edit view parameters, similarly to how you choose automation parameters.
Right-click on the effect name in the Insert Device rack, and choose Setup Micro Edit Parameters. Click on the parameters you want to add in the right pane, and choose Add. Now the parameters will be in the left pane, and visible as Micro Edit view parameters.
Fig. 3 shows how this can save a huge amount of screen real estate when mixing. The Ampeg SVT suite bass plug-in’s UI takes up a lot of space. So, I assigned all the parameters for the amp being used as Micro Edit view parameters. Now tweaking the parameters simply requires expanding the Micro Edit view, and collapsing it again when done (expanding and collapsing are both one-click operations).
This is the kind of feature that may not seem like much of a big deal—until you start using it. Micro Edit view makes for a less cluttered, more compact, and visually streamlined mixing experience.
Heads-up: Version 1.3 of The Huge Book of Studio One Tips and Tricks is now available! This 637-page book with 230 innovative tips is a free update to owners of previous versions ($19.95 to new buyers). Download the update from your PreSonus or Sweetwater account the same way you downloaded your previous version. For more information, check out the series of Studio One eBooks. Please note: version 1.3 does not cover the new features in version 6, although there will be a free update in the future. If you have questions about the tips, suggestions for future updates, or want news about the next version, please visit the dedicated support forum.
Quantizing audio works best with short, percussive sounds—like drums. Rhythm guitar is more of a challenge, because the attacks aren’t always defined well, and volume fluctuations can add spurious transients as a chord decays. I know quite a few guitarists who set up their quantize parameters, hit “Apply,” and then wonder why the guitar part sounds worse, not better. So, they hit “Undo” and assume quantizing doesn’t work well for guitar. But it can, if you know the two tips in this week’s blog post.
To start the quantization process, select the event, open it in the Edit view, right-click on it, and choose Detect Transients. Do not choose Quantize, because we’ll need to edit some of the transient markers prior to quantizing. Then open the Audio Bend panel. The default transient detection is Standard analysis, with a Threshold of 80% (fig. 1). These are good initial settings for guitar.
Tip #1: First, Do No Harm
With complex chords and lots of strings vibrating, Studio One may dutifully identify each attack with a transient marker. However, when listening, you might not hear any attacks in those places. Applying quantization to places that don’t need to be quantized can degrade the sound.
In fig. 2, after playing back the part, the markers colored red (top view) didn’t seem necessary. Only the marker between the two chord attacks correlated to an audible transient.
Quantizing with all the markers in place created awkwardly stretched audio (middle view). After removing the markers in red and quantizing (bottom view), the quantization occurred exactly as desired.
Bottom line: If a transient marker doesn’t correlate to a transient you can hear, you’re probably better off deleting it. Sometimes, lowering the Threshold percentage in the Audio Bend panel takes care of removing unneeded markers for the entire Event.
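For intuition about what the Threshold percentage does, here’s a toy onset detector in Python. It’s not Studio One’s analysis algorithm; it just shows how the sensitivity setting trades marker count against attack prominence:

```python
import numpy as np

SR = 44100

def detect_transients(audio, threshold_pct=80, window_ms=10):
    """Toy transient detector. As in the Audio Bend panel, a higher
    threshold_pct means more sensitivity, hence more markers."""
    w = int(SR * window_ms / 1000)
    energy = np.convolve(audio ** 2, np.ones(w) / w, mode="same")
    jump = np.maximum(np.diff(energy, prepend=energy[0]), 0)
    limit = (1 - threshold_pct / 100) * jump.max()
    markers = np.flatnonzero(jump > limit)
    if markers.size == 0:
        return markers
    # Collapse runs of adjacent hits into one marker per attack.
    keep = np.insert(np.diff(markers) > w, 0, True)
    return markers[keep]
```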
Tip #2: Simplify Double Attacks
This is a big problem with rhythm guitar, because it’s physically impossible to hit all 6 strings at the same time. In fig. 3’s top view, note the doubled attack just before 3.4 (one of the attacks is colored red for clarity).
The middle view applies quantization. Studio One has dutifully shifted the two transients to 16th notes, as instructed by the quantization parameters. But this overstretches the audio between the two transients, and doesn’t sound good.
The bottom view shows the audio after removing the transient shown in red in the top view, and then applying quantization. Studio One has shifted the remaining attack transient to the nearest 16th note, which is what we want.
Here’s another tip that relates to fig. 3. Note how the quantized transients at 3.4 and 4 seem a little late, like they missed the first part of the attack. However, I’ve found that if Studio One doesn’t have to deal with double attacks, it makes good judgment calls about where to define the chord’s attack. I’ve tried placing the transient closer to where the attack starts and quantizing, but most of the time, the chord then sounds a little late. Trust the force.
In any case, a little manual tweaking prior to quantization can make the difference between a natural-sounding part that doesn’t seem quantized, and quantized audio with unwanted glitches. I never quantize a rhythm guitar part without looking over the transients first, and making sure they don’t exhibit any of the issues described here.
For me, the gold standard for sound isn’t what comes out of a studio, but live music. One of the reasons is that mono sound does not exist in live performance, because sound is always interacting with the acoustic environment.
Sure, we can add reverb to give a mono instrument like electric guitar a static position in a stereo field. However, that will always require some kind of time-based manipulation. For a recent song project, I wanted a background guitar part to have motion within a stereo field—but without using any kind of delay, reverb, panning, or EQ. In other words: dry, electric, mono-output guitar with a stereo image. Impossible? Let’s find out.
The Track Setup
The way this works is so simple I’m surprised I never figured it out before. Fig. 1 shows the track layout. The guitar (with added chorus, as required by the song) pans full left, and has two sends. One send provides the guitar’s audio to a bus. The other send controls the sidechain of a Compressor inserted in the bus. The bus is panned full right.
As the Compressor’s level changes, there’s a sense of motion as the dynamics and levels in the left and right channels change. Even better, these changes are tied to the instrument’s dynamics. This makes for a more natural effect.
This audio example plays the guitar by itself, panned to center, and going through the PreSonus chorus.
Now let’s hear the magic stereo effect. Although it’s most obvious on headphones, you’ll hear the effect on speakers as well.
The final audio example plays it in context with the mix. It adds a sense of animation to the guitar you can’t get in any other way. I also included this example because the drums are following the chord track. It has nothing to do with this tip, but I love the cool melodic quality it adds.
Compressor Settings
The Compressor settings are crucial. The ones shown in fig. 2 are a good place to start.
Adjust the settings by panning the source track full left. Increase the processed track’s level until both channels are at about the same level. Experiment with the Compressor’s Threshold to find the sweet spot between the compressor having no effect and having too much of an effect. The Release also matters: too short or too long a release neuters the effect. You’ll likely have to readjust the processed channel’s level quite a bit as you zero in on the ideal settings.
With a 15 ms Attack, the attacks in both channels hit at the same time, and with the same intensity. So, the attack sound appears in the center, has the advantage of center-channel buildup, and helps anchor the part. As the audio decays and the channels become more dissimilar, the audio “wanders” more in the stereo field.
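To see the routing in signal terms, here’s a minimal Python sketch: dry guitar hard left, and a copy compressed via its own sidechain key hard right. The threshold, ratio, and time constants are starting points, not the PreSonus Compressor’s exact behavior:

```python
import numpy as np

SR = 48000

def sidechain_comp(audio, key, threshold=0.1, ratio=4.0,
                   attack_ms=15.0, release_ms=150.0):
    """Feed-forward compressor whose gain reduction is computed from the
    key (sidechain) signal rather than the audio it processes."""
    a = np.exp(-1.0 / (SR * attack_ms / 1000))
    r = np.exp(-1.0 / (SR * release_ms / 1000))
    gain = np.ones_like(audio)
    env = 0.0
    for i, k in enumerate(np.abs(key)):
        coeff = a if k > env else r
        env = coeff * env + (1 - coeff) * k
        if env > threshold:
            gain[i] = (threshold + (env - threshold) / ratio) / env
    return audio * gain

def magic_stereo(guitar):
    """Dry guitar hard left; a copy keyed by the same guitar hard right.
    The channels match on attacks, then drift apart as notes decay."""
    left = guitar                               # source track, panned full left
    right = sidechain_comp(guitar, key=guitar)  # bus with Compressor, full right
    return np.stack([left, right], axis=1)
```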
This technique is also great with background vocals. I didn’t use it in this song because there was so much motion going on overall that I wanted the background vocals to be a constant. However, I’ve used this technique with other songs, and it’s very effective, especially if you have multiple tracks of background vocals and apply individual magic stereo processing to each one.
Who reading this wouldn’t want to make a little more money from their music? Okay, dumb question.
When people think “soundtrack,” images of Hollywood scoring stages come to mind. But soundtracks are a much broader topic now. YouTube videos, podcasters, corporate presenters, educational videos, and local businesses doing radio or TV ads all need soundtracks.
In 1992, I wrote an article called “Subtractive Sequencing” for Keyboard magazine that described filling a piece of music with loops, and then cutting out sections to make an arrangement. It didn’t get much attention. But over 20 years later (!), a blog post called “Subtractive Arranging—Novel Production Method from Danny J Lewis” presented the same technique. This time, it did get attention, to the point where the inevitable “Why I Don’t Use Subtractive Arranging” appeared in someone else’s blog.
Why? He didn’t like how this technique created static arrangements. But those “static arrangements” are actually ideal for… wait for it… soundtracks.
A good soundtrack fills space behind visuals or narration, but always plays a supporting role. So, I use subtractive arrangements to create soundtracks. I just take a song, and remove anything with a human voice, lead lines, some of the layers in layered parts, and most ear candy unless it complements the visuals (fig. 1). Then I render the mix, and compress the heck out of it—not to win the loudness wars, but to maintain a constant level that can happily sit -12 to -15 dB or so below the narration or dialogue.
In fig. 1, subtracted Events are filled in with white for clarity. The Lead Vocal and Smile (some ear candy) tracks were muted. So was a layered, Nashville-tuned guitar track, because its high frequencies stood out too much. The layered standard-tuning part stayed, but the extra tempo-synced echoes for both parts were cut. Similarly, there were two tracks of “Beat Filters,” loops from my AdrenaLinn Guitars loop library. They were panned to center and side for a cool stereo effect that, in a soundtrack, no one would care about. So, I muted one of them. Sections of a house bass loop were also removed.
The final, mixed soundtrack is exactly 1.5 minutes, and consists of drums, rhythm guitar triggered by a drum sidechain signal, bass in some parts, a beat filter loop in some parts, and a little acoustic guitar. That’s all that was needed. One last tip: You can mix bass up pretty high, because it helps drive a song and is out of the speech range.
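To sanity-check that -12 to -15 dB offset, here’s a minimal Python sketch that estimates levels with RMS (a rough stand-in for proper loudness metering) and computes the gain needed to tuck the bed under narration:

```python
import numpy as np

def rms_db(audio):
    """RMS level in dB (a rough stand-in for a loudness meter)."""
    return 20 * np.log10(np.sqrt(np.mean(audio ** 2)) + 1e-12)

def bed_gain(bed, narration, offset_db=-13.0):
    """Linear gain that places the music bed offset_db below the narration."""
    needed_db = rms_db(narration) + offset_db - rms_db(bed)
    return 10 ** (needed_db / 20)

# Usage: scale the soundtrack so it sits about 13 dB under the voice.
# soundtrack *= bed_gain(soundtrack, voiceover)
```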
Every now and then, I go through some of my older projects, strip out anything distracting, create a soundtrack, and save it into my “Soundtracks” folder. Whenever I need a soundtrack, there’s always something suitable in there. And even better, if you get into creating soundtracks, you might find some interesting opportunities to make money from them.
Here’s the finished soundtrack:
For more tips on how to get the most out of Studio One, check out the series of Studio One eBooks that cover tips & tricks, creative mixing, recording/mixing vocals, dynamics processors, and recording/mixing guitar. Remember, just like software, eBook owners can download the latest “point” updates for free from their PreSonus account (or Sweetwater account, if purchased from there). Owners are also eligible for new editions at a reduced price.