PreSonus Blog



Friday Tips: Synth + Sample Layering

One of my favorite techniques for larger-than-life sounds is layering a synthesizer waveform behind a sampled sound. For example, layering a sine wave along with piano or acoustic guitar, then mixing the sine wave subtly in the background, reinforces the fundamental. With either instrument, this can give a powerful low end. Layering a triangle wave with harp imparts more presence to sampled harps, and layering a triangle wave an octave lower with a female choir sounds like you’ve added a bunch of guys singing along.

Another favorite, which we’ll cover in detail with this week’s tip, is layering a sawtooth or pulse wave with strings. I like those syrupy, synthesized string sounds that were so popular back in the 70s, although I don’t like the lack of realism. On the other hand, sampled strings are realistic, but aren’t lush enough for my tastes. Combine the two, though, and you get lush realism. Here’s how.

  1. Create an instrument track with Presence, and call up the Violin Full preset.
  2. Drag Mai Tai into the same track. You’ll be asked if you want to Replace, Keep, or Combine. Choose Combine.

  3. After choosing Combine, both instruments will be layered within the Instrument Editor (see above).
  4. Program Mai Tai for a very basic sound, because it’s there solely to provide reinforcement—a slight detuning of the oscillators, no filter modulation, very basic LFO settings to add a little vibrato and prevent too static a waveform, amplitude envelope and velocity that tracks the Presence sound as closely as possible, some reverb to create a more “concert hall” sound, etc. The screen shot shows the parameters used for this example. The only semi-fancy programming tricks were making one of the oscillators a pulse wave instead of a sawtooth, and panning the two oscillators very slightly off-center.

 

  5. Adjust the Mai Tai’s volume for the right mix—enough to supplement Presence, but not overwhelm it.

 

That’s all there is to it. Listen to the audio example—first you’ll hear only the Presence sound, then the two layers for a lusher, more synthetic vibe that also incorporates some of the realism of sampling. Happy orchestrating!

Friday Tips: Studio One’s Amazing Robot Bassist

When Harmonic Editing was announced, I was interested. When I used it for the first time, I was intrigued. When I discovered what it could do for songwriting…I became addicted.

Everyone creates songs differently, but for me, speed is the priority—I record scratch tracks as fast as possible to capture a song’s essence while it’s hot. But if the tracks aren’t any good, they don’t inspire the songwriting process. Sure, they’ll get replaced with final versions later, but you don’t want boring tracks while writing.

For scratch drums on rock projects, I have a good collection of loops. Guitar is my primary instrument, so the rhythm and lead parts will be at least okay. I also drag the rhythm guitar part up to the Chord Track to create the song’s “chord chart.”

Then things slow down…or at least they did before Harmonic Editing came along. Although I double on keyboards, I’m not as proficient on them as on guitar. I also prefer keyboard bass over electric bass—because I’ve sampled a ton of basses, I can find the sound I want instantly. And that’s where Harmonic Editing comes in.

The following is going to sound ridiculously easy…because it is. Here’s how to put Studio One’s Robot Bassist to work. This assumes you’ve set the key (use the Key button in the transport, or select an Instrument part and choose Event > Detect Key Signature), and have a Chord Track that defines the song’s chord progression.

 

  1. Play the bass part by playing the note on a MIDI keyboard that corresponds to the song’s key. Yes, the note—not notes. For example, if the song is in the key of A, hit an A wherever you want a bass note.
  2. Quantize what you played. It’s important to quantize because presumably, the chord changes are quantized, and the note attack needs to fall squarely at the beginning of, or within, the chord change. You can always humanize later.
  3. Open the Inspector, unfold the Follow Chords options, and then choose Bass (Fig. 1).

 

Figure 1: Choose the Bass option to create a bass part when following chords.

  4. Now you have a bass part! If the bass part works, choose the Edit tab, select all the notes, and choose Action > Freeze Pitch. This is important, because the underlying endless-string-of-notes remains the actual MIDI data. So if you copy the Event and paste it, unless you then ask the pasted clip to follow chords, you have the original boring part instead of the robotized one.
  5. After freezing, turn off Follow Chords, because you’ve already followed the chords. Now is the time to make any edits. (Asking the followed chords to follow chords can confuse matters, and may modify your edits.)

The bottom line: with one take, a few clicks, and (maybe) a couple quick edits—instant bass part (Fig. 2).

Figure 2: The top image is the original part, and yes, it sounds as bad as it looks. The lower image is what happened after it got robotized via Harmonic Editing, and amazingly, it sounds pretty good.

Don’t believe me? Well, listen to the following.

 

You’ll hear the bass part shown in Fig. 2, which was generated in the early stages of writing my latest music video (I mixed the bass up a little on the demo so you can hear it easily). Note how the part works equally well for the sustained notes toward the beginning, as well as the staccato parts at the end. To hear the final bass part, click the link for Puzzle of Love [https://youtu.be/HgMF-HBMrks]. You’ll hear I didn’t need to do much to tweak what Harmonic Editing did.

But Wait! There’s More!

Not only that, but most of the backing keyboard parts for Puzzle of Love (yes, including the piano intro) were generated in essentially the same way. That requires a somewhat different skill set than robotizing the bass, and a bit more editing. If you want to know more (use the Comments section), we’ll cover Studio One’s Robot Keyboardist in a future Friday Tip.

Friday Tips: Demystifying the Limiter’s Meter Options

Limiters are common mastering tools, and they’re typically the last processor in the signal chain. Because of this, it’s important to know as much as possible about the limiter’s output signal, and Studio One’s Limiter offers several metering options.

PkRMS Metering

The four buttons at the meter’s lower left choose the type of meter scale. PkRMS, the traditional metering option, shows the peak level as a horizontal blue bar, with the average (RMS) level as a white line superimposed on the blue bar (Fig. 1). The average level corresponds more closely to how we perceive musical loudness, while the bar indicates peaks, which is helpful when we want to avoid clipping.
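If it helps to see what those two readings represent, here’s a minimal sketch (in Python, with made-up sample data; this is not PreSonus’s metering code) of how peak and RMS levels are typically computed from a block of samples:

```python
import numpy as np

def peak_and_rms_db(samples):
    """Return (peak, RMS) levels in dBFS for a block of float samples in [-1, 1]."""
    peak = np.max(np.abs(samples))             # highest instantaneous level
    rms = np.sqrt(np.mean(samples ** 2))       # average level, closer to perceived loudness
    to_db = lambda x: 20 * np.log10(max(x, 1e-12))  # guard against log(0)
    return to_db(peak), to_db(rms)

# A full-scale sine wave reads 0 dBFS peak, but about -3 dBFS RMS.
t = np.linspace(0, 1, 48000, endpoint=False)
print(peak_and_rms_db(np.sin(2 * np.pi * 440 * t)))  # ≈ (0.0, -3.01)
```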

The TP Button

Enabling the True Peak button takes the possibility of intersample distortion into account. This type of distortion can occur on playback if some peaks use up the maximum available headroom in a digital recording, and then these same peaks pass through the digital-to-analog converter’s output smoothing filter to reconstruct the original waveform. This reconstructed waveform might have a higher amplitude than the peak level of the samples, which means the waveform now exceeds the maximum available headroom (Fig. 2).

 

Figure 2: How intersample distortion occurs.

For example, you might think your audio isn’t clipping because without TP enabled, the output peak meter shows -0.1 dB. However, enabling True Peak metering may reveal that the output is as much as +3 dB over 0 when reconstructed. The difference between standard peak metering and true peak metering depends on the program material.
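To see why the sample values alone can under-report the peak, here’s a minimal sketch of true-peak estimation via oversampling, roughly in the spirit of the ITU-R BS.1770 recommendation (an illustration, not the Limiter’s actual algorithm):

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak_db(samples, oversample=4):
    """Estimate the intersample (true) peak by reconstructing the waveform
    at 4x the sample rate, then measuring the reconstructed peak."""
    upsampled = resample_poly(samples, oversample, 1)
    return 20 * np.log10(np.max(np.abs(upsampled)))

# A sine sampled just off its crests: the samples never reach the waveform's
# actual peak, so a sample-peak meter reads low.
n = np.arange(48000)
x = 0.99 * np.sin(2 * np.pi * 0.25 * n + np.pi / 4)  # fs/4 tone, 45-degree phase
sample_peak_db = 20 * np.log10(np.max(np.abs(x)))
print(sample_peak_db, true_peak_db(x))  # ≈ -3.1 dB vs. ≈ -0.1 dB
```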

K-System Metering

The other metering options—K-12, K-14, and K-20 metering—are based on a metering system developed by Bob Katz, a well-respected mastering engineer. One of the issues any mix or mastering engineer has to resolve is how loud to make the output level. This has been complicated by the “loudness wars,” where mixes are intended to be as “hot” as possible, with minimal dynamic range. Mastering engineers have started to push back against this not just to retain musical dynamics, but because hot recordings cause listener fatigue. Among other things, the K-System provides a way to judge a mix’s perceived loudness.

A key K-System feature is an emphasis on average (not just peak) levels, because they correlate more closely to how we perceive loudness. A difference compared to conventional meters is that K-System meters use a linear scale, where each dB occupies the same width (Fig. 3). A logarithmic scale increases the width of each dB as the level gets louder, which although it corresponds more closely to human hearing, is a more ambiguous way to show dynamic range.

Figure 3: The K-14 scale has been selected for the Limiter’s output meter.

 

Some people question whether the K-System, which was introduced two decades ago, is still relevant. This is because there’s now an international standard (based on a recommendation by the International Telecommunication Union) that defines perceived average levels, based on reference levels expressed in LUFS (Loudness Units referenced to digital Full Scale). As an example of a practical application, when listening to a streaming service, you don’t want massive level changes from one song to the next. The streaming service can regulate the level of the music it receives so that all the songs conform to the same perceived loudness. Because of this, there’s no real point in creating a hot master—it will just be turned down to bring it in line with songs that retain dynamic range, and the latter will be turned up if needed to give the same perceived volume.
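The arithmetic behind that adjustment is simple. Here’s a sketch of the gain a loudness-normalizing service would apply, assuming a measured integrated loudness and a -14 LUFS target (a common streaming figure, though every service sets its own):

```python
def normalization_gain_db(measured_lufs, target_lufs=-14.0):
    """Gain applied so program material matches the loudness target."""
    return target_lufs - measured_lufs

print(normalization_gain_db(-8.0))   # -6.0 dB: a hot, squashed master gets turned down
print(normalization_gain_db(-18.0))  # +4.0 dB: a dynamic master gets turned up
```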

 

Nonetheless, the K-System remains valid, particularly when mixing. When you mix, it’s best to have a standardized, consistent monitoring level because human hearing has a different frequency response at different levels (Fig. 4).

 

Figure 4: The Fletcher-Munson curves show that different parts of the audio spectrum need to be at different levels to be perceived as having the same volume. Low frequencies have to be substantially louder at lower listening levels to be perceived as having equal volume.

 

The K-System links monitoring levels with meter readings, so you can be assured that material reaching the same meter levels will be perceived at the same loudness. This requires calibrating your monitor levels to the meter readings with a sound level meter. If you don’t have a sound level meter, many smartphones can run sound level meter apps that are accurate enough.

 

Note that in the K-System, 0 dB does not represent the maximum possible level. Instead, the 0 dB point is shifted “down” from the top of the scale to either -12, -14, or -20 dB, depending on the scale. These numbers represent the amount of headroom above 0, and therefore, the available dynamic range. You choose a scale based on the music you’re mixing or mastering—like -12 for music with less dynamic range (e.g., dance music), -14 for typical pop music, and -20 dB for acoustic ensembles and classical music. You then aim to have the average level hover around the 0 dB point. Peaks that go above this point will take advantage of the available headroom, while quieter passages will go below this point. Like conventional meters, the K-System meters have green, yellow, and red color-coding to indicate levels. Levels above 0 dB trigger the red, but this doesn’t mean there’s clipping—observe the peak meter for that.
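As a quick sketch of the relationship (not how the Limiter computes it internally), a K-System reading is just the dBFS level shifted up by the scale’s headroom figure:

```python
def k_meter_reading(rms_dbfs, scale="K-14"):
    """Convert an RMS level in dBFS to a K-System meter reading: 0 dB on the
    K-meter sits 12, 14, or 20 dB below digital full scale."""
    headroom = {"K-12": 12.0, "K-14": 14.0, "K-20": 20.0}[scale]
    return rms_dbfs + headroom

print(k_meter_reading(-14.0, "K-14"))  # 0.0 -> average level right on target
print(k_meter_reading(-8.0, "K-14"))   # +6.0 -> in the red, using up headroom (not clipping)
```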

 

Calibrating Your Monitors

 

The K-System borrows film industry best practices. At 0 dB, your monitors should be putting out 85 dB SPL for stereo material. Therefore, you’ll need a separate calibration for the three scales to make sure that 0 dB on any scale has the same perceived loudness. The simplest way to calibrate is to send pink noise through your system until the chosen K-System meter reads 0 dB (you can download pink noise samples from the web, or use the noise generator in the Mai Tai virtual instrument). Then, using the sound level meter set to C weighting and a slow response, adjust the monitor level for an 85 dB reading. You can put labels next to the level control on the back of your speaker to show the settings that produce the desired output for each K-Scale.
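If you’d rather generate your own calibration noise than download it, here’s a minimal Python sketch that shapes white noise to a 1/f (pink) spectrum and writes it to a WAV file. It assumes the numpy and soundfile packages, and it’s just one of several ways to make pink noise:

```python
import numpy as np
import soundfile as sf  # assumption: pip install soundfile

def pink_noise(n_samples, seed=0):
    """Shape white noise to a 1/f spectrum (-3 dB/octave) in the frequency domain."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n_samples))
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                 # avoid dividing by zero at DC
    spectrum /= np.sqrt(freqs)          # 1/f power means 1/sqrt(f) amplitude
    pink = np.fft.irfft(spectrum, n_samples)
    return pink / np.max(np.abs(pink))  # normalize to full scale

# One minute of pink noise at 48 kHz, at half scale, to loop during calibration.
sf.write("pink_calibration.wav", 0.5 * pink_noise(60 * 48000), 48000)
```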

 

But Wait! There’s More

 

We’ve discussed the K-System in the context of the Limiter, but if you’re instead using the Compressor or some other dynamics processor that doesn’t have K-System metering, you’re still covered. There’s a separate metering plug-in that shows the K-System scale (Fig. 5).

 

Figure 5: The Level meter plug-in shows K-System as well as the R128 spec that reads out the levels in LUFS. Enabling TP converts the meter to PkRMS, and shows the True Peak in the two numeric fields.

Finally, the Project Page also includes K-System Metering along with a Spectrum Analyzer, Correlation Meter, and LUFS metering with True Peak (Fig. 6).

Figure 6: The Project Page metering tells you pretty much all you need to know about what’s going on with your output signal when mastering.

 

 

Friday Tip: MIDI Guitar Setup with Studio One

I was never a big fan of MIDI guitar, but that changed when I discovered two guitar-like controllers—the YRG1000 You Rock Guitar and Zivix Jamstik. Admittedly, the YRG1000 looks like it escaped from Guitar Hero to seek a better life, but even my guitar-playing “tubes and Telecasters forever!” compatriots are shocked by how well it works. And Jamstik, although it started as a learn-to-play guitar product for the Mac, can also serve as a MIDI guitar controller. Either one has more consistent tracking than MIDI guitar retrofits, and no detectable latency.

The tradeoff is that they’re not actual guitars, which is why they track well. So, think of them as alternate controllers that take advantage of your guitar-playing muscle memory. If you want a true guitar feel, with attributes like actual string-bending, there are MIDI retrofits like Fishman’s clever TriplePlay, and Roland’s GR-55 guitar synthesizer.

In any case, you’ll want to set up your MIDI guitar for best results in Studio One—here’s how.

Poly vs. Mono Mode

MIDI guitars usually offer Poly or Mono mode operation. With Poly mode, all data played on all strings appears over one MIDI channel. With Mono mode, each string generates data over its own channel—typically channel 1 for the high E, channel 2 for B, channel 3 for G, and so on. Mono mode’s main advantage is you can bend notes on individual strings and not bend other strings. The main advantage of Poly mode is you need only one sound generator instead of a multi-timbral instrument, or a stack of six synths.
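To make the Mono mode data stream concrete, here’s a minimal sketch that watches incoming MIDI and reports which string played which note, using the mido Python library (my tooling choice for illustration; it has nothing to do with Studio One itself), and assuming the typical string-to-channel mapping described above:

```python
import mido  # assumption: pip install mido python-rtmidi

# MIDI channels are 0-based in mido, so "channel 1 for the high E" arrives as 0.
STRING_NAMES = {0: "high E", 1: "B", 2: "G", 3: "D", 4: "A", 5: "low E"}

with mido.open_input() as port:  # opens the default MIDI input port
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            string = STRING_NAMES.get(msg.channel, f"channel {msg.channel + 1}")
            print(f"{string} string played note {msg.note} (velocity {msg.velocity})")
```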

In terms of playing, Poly mode works fine for pads and rhythm guitar, while Mono mode is best for solos, or when you want different strings to trigger different sounds (e.g., the bottom two strings trigger bass synths, and the upper four a synth pad). Here’s how to set up for both options in Studio One.

 

  1. To add your MIDI guitar controller, choose Studio One > Options > External Devices tab, and then click Add…

Figure 1: Check “Split Channels” if you plan to use a MIDI guitar in mono mode.

  2. To use your guitar in Mono mode, check Split Channels and make sure All MIDI channels are selected (Fig. 1). This lets you choose individual MIDI channels as Instrument track inputs.
  3. For Poly mode, you can follow the same procedure as Mono mode but then you may need to select the desired MIDI channel for an Instrument track (although usually the default works anyway). If you’re sure you’re going to be using only Poly mode, don’t check Split Channels, and choose the MIDI channel over which the instrument transmits.

Note that you can change these settings any time in the Options > External Devices dialog box by selecting your controller and choosing Edit.

Choose Your Channels

For Poly mode, you probably won’t have to do anything—just start playing. With Mono mode, you’ll need to use a multitimbral synth like SampleTank or Kontakt, or six individual synths. For example, suppose you want to use Mai Tai. Create a Mai Tai Instrument track, choose your MIDI controller, and then choose one of the six MIDI channels (Fig. 2). If Split Channels wasn’t selected, you won’t see an option to choose the MIDI channel.

Figure 2: If you chose Split Channels when you added your controller, you’ll be able to assign your instrument’s MIDI input to a particular MIDI channel.

Next, after choosing the desired Mai Tai sound, duplicate the Instrument track five more times, and choose the correct MIDI channel for each string. I like to Group the tracks because this simplifies removing layers, turning off record enable, and quantizing. Now record-enable all tracks, and start recording. Fig. 3 shows a recorded Mono guitar part—note how each string’s notes are in their own channel.

Figure 3: A MIDI guitar part that was recorded in Mono mode is playing back each string’s notes through its own Mai Tai synthesizer.

To close out, here are three more MIDI guitar tips.

  • In Mono mode with Mai Tai (or whatever synth you use), set the number of Voices to 1 for two reasons. First, this is how a real guitar works—you can play only one note at a time on a string. Second, this will often improve tracking in MIDI guitars that are picky about your picking.
  • Use a synth’s Legato mode, if available. This will prevent re-triggering on each note when sliding up and down the neck, or doing hammer-ons.
  • The Edit view is wonderful for Mono mode because you can see what all six strings are playing, while editing only one.

MIDI guitar got a bad rap when it first came out, and not without reason. But the technology continues to improve, dedicated controllers overcome some of the limitations of retrofitting a standard guitar, and if you set up Studio One properly, MIDI guitar can open up voicings that are difficult to obtain with keyboards.


Friday Tip: Colorization: It’s Not Just About Eye Candy

Some people think colorization is frivolous—but I don’t. I started using colorization when writing articles, because it was easy to identify elements in the illustrations (e.g., “the white audio is the unprocessed sound, the blue audio is compressed”). But the more I used colorization, the more I realized how useful it could be.

Customizing the “Dark” and “Light” Looks

Although a program’s look is usually personal preference, sometimes it’s utilitarian. When working in a video suite, the ambient lighting is often low, so that the eye’s persistence of vision doesn’t influence how you perceive the video. For this situation, a dark view is preferable. Conversely, those with weak or failing vision need a bright look. If you’re new to Studio One, you might want the labels to really “pop,” but later on, as you become more familiar with the program, you can darken them somewhat. You may want a brighter look when working during the daytime, and a more muted look at night. Fortunately, you can save presets for various looks, and call up the right look for the right conditions (although note that there are no keyboard shortcuts for choosing color presets).

Figure 1: From left to right: dark, moderate, and bright luminance settings.

You’ll find these edits under Options > General > Appearance. For a dark look, move the Background Luminance slider to the left and for a light look, to the right (Fig. 1). I like -50% for dark, and +1 for light. For the dark look, setting the Background Contrast at -100% means that the lettering won’t jump out at you. For the brightest possible look, bump the Background Contrast to 100% so that the lettering is clearly visible against the other light colors, and set Saturation to 100% to brighten the colors. Conversely, to tone down the light look, set Background Contrast and Saturation to 0%.

Hue Shift customizes the background of menu bars, empty fields that are normally gray, and the like. The higher the Saturation slider, the more pronounced the colorization.

The Arrangement sliders control the Arrangement and Edit view backgrounds (i.e., what’s behind the Events). I like to see the vertical lines in the Arrangement view, but also keep the background dark. So Arrangement Contrast is at 100%, and Luminance is the darkest possible value (around 10%) that still makes it easy to see horizontal lines in the Edit view (Fig. 2).

Figure 2: The view on the left uses 13% luminance and 100% contrast to make the horizontal background lines more pronounced.

Streamlining Workflow with Color

With a song containing dozens of tracks, it can be difficult to identify which Console channel strip controls which instrument, particularly with the Narrow console view. The text at the bottom of each channel strip helps, but you often need to rename tracks to fit in the allotted space. Even then, the way the brain works, it’s easier to identify based on color (as deciphered by your right brain) than text (as deciphered by your left brain). Without getting too much into how the brain’s hemispheres work, the right brain is associated more with creative tasks like making music, so you want to stay in that mode as much as possible; switching between the two hemispheres can interrupt the creative flow.

I’ve developed standard color schemes for various types of projects. Of course, choose whatever colors work for you; for example, if you’re doing orchestral work, you’d have a different roster of instruments and colors. With my scheme for rock/pop, lead instruments use a brighter version of a color (e.g., lead guitar bright blue, rhythm guitar dark blue).

  • Main drums – red
  • Percussion – yellow
  • Bass – brown
  • Guitar – blue
  • Voice – green
  • Keyboards and orchestral – purple
  • FX – lime green

Furthermore, similar instruments are grouped together in the mixer. So for vocals, you’ll see a block of green strips, for guitar a block of blue strips, etc. (Fig. 3)

Figure 3: A colorized console, with a bright look. The colorization makes it easy to see which faders control which instruments.

 

To colorize channel strips, choose Options > Advanced tab > Console tab (or click the Console’s wrench icon) and check “Colorize Channel Strips.” This colorizes the entire strip. However, if you find colorized strips too distracting, the name labels at the bottom (and the waveforms in the arrange view) are always colored according to your choices. Still, when the Console faders are extended to a higher-than-usual height, I find it easier to grab the correct fader with colored console strips.

In the Arrange view, you can colorize the track controls as well—click on the wrench icon, and click on “Colorize Track Controls.” Although sometimes this feels like too much color, nonetheless, it makes identifying tracks easier (especially with the track height set to a narrow height, like Overview).

Color isn’t really a trivial subject, once you get into it. It has helped my workflow, so I hope these tips serve you as well.

 

Extra TIP: Buy Craig Anderton’s Studio One eBook here for only $10 USD! 

 

Friday Tips: The Melodyne Envelope Flanger

This isn’t a joke—there really is an envelope-controlled flanger hidden inside Melodyne Essential that sounds particularly good with drums, but also works well with program material. The flanging is not your basic, boring “whoosh-whoosh-whoosh” LFO-driven flanging, but follows the amplitude envelope of the track being flanged. It’s all done with Melodyne Essential, although of course you can also do this with more advanced Melodyne versions. Here’s how simple it is to do envelope-followed flanging in Studio One.

  1. Duplicate the track or Event you want to flange.
  2. Select the copied Event, then type Ctrl+M (or right-click on the Event and choose Edit with Melodyne).
  3. In Melodyne, under Algorithm, choose Percussive and let Melodyne re-detect the pitches.
  4. “Select all” in Melodyne so that all the blobs are red, then start playback.
  5. Click in the “Pitch deviation (in cents) of selected note” field.
  6. Drag up or down a few cents to introduce flanging. I tend to like dragging down about -14 cents.

As with any flanging effect, you can regulate the mix of the flanged and dry sounds by altering the balance of the two tracks.

Note that the Pitch Deviation parameter shows an offset from the current pitch deviation, not an absolute value. For example, if you drag down to -10 cents, release the mouse button, and click on the parameter again, the display will show 0 instead of -10. So if you then drag up by +4 cents, the pitch deviation will be at -6 cents, not +4. If you get too lost, just select all the blobs, choose the Percussive algorithm again, and Melodyne will set everything back to 0 cents after re-detecting the blobs.
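Two bits of arithmetic make this clearer: how small a -14 cent detune really is, and how the relative offsets accumulate. A quick sketch:

```python
# A cent is 1/100 of a semitone, so the frequency ratio is 2 ** (cents / 1200).
def cents_to_ratio(cents):
    return 2 ** (cents / 1200)

print(cents_to_ratio(-14))  # ≈ 0.9920 -> less than 1% flat, enough to flange

# Because the field shows an offset from the current deviation, edits accumulate:
deviation = 0
for edit in (-10, +4):  # drag down 10 cents, then up 4 cents
    deviation += edit
print(deviation)  # -6 cents, even though the field read +4 on the last edit
```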

And of course, I don’t expect you to believe that something this seemingly odd actually works, so check out the audio example. The first part is envelope-flanged drums, and the second part applies envelope flanging to program material from my [shameless plug] Joie de Vivre album. So next time you need envelope-controlled flanging, don’t reach for a stompbox—edit with Melodyne.

 

Friday Tips: Studio One’s Hybrid Reverb

The previous tip on creating a dual-band reverb generated a fair amount of interest, so let’s do one more reverb-oriented tip before moving on to another topic.

Studio One has three different reverbs—Mixverb, Room Reverb, and OpenAIR—all of which have different attributes and personalities. I particularly like the Room Reverb for its sophisticated early reflections engine, and the OpenAIR’s wide selection of decay impulses (as well as the ability to load custom impulses I’ve made).

Until now, it never occurred to me how easy it is to create a “hybrid” reverb with the best of both worlds: using the Room Reverb solely as an early reflections engine, and the OpenAIR solely for the reverb decay. To review, reverb is a continuum—it starts with silence during the pre-delay phase when the sound first travels to hit a room’s surfaces, then morphs into early reflections as these sounds bounce around and create echoes, and finally, transforms into the reverb decay—the most complex component. Each one of these components affects the sound differently. In Studio One, these components don’t all have to be from the same reverb.

THE EARLY REFLECTIONS ENGINE

Start by inserting the Room Reverb into an FX Channel, and calling up the Default preset. Then set the Reverb Mix to 0.00 and the Dry/Wet Mix to 100%. The early reflections appear as discrete vertical lines. They’re outlined in red in the screen shot below.

 

If you haven’t experimented with using the Room Reverb as a reflections engine, now would be a good time to run through the following evaluation procedure and familiarize yourself with its talents before proceeding.

 

  1. From the Browser, load the loop Crowish Acoustic Chorus 1.wav (Loops > Rock > Drums > Acoustic) into a stereo track. This loop is excellent for showcasing the effects of early reflections.
  2. Create a pre-fader send from this track to the FX Channel with the Room Reverb, and bring the drum channel fader all the way down for now so you hear only the effects of the Room Reverb.
  3. Let’s look at the Room parameters. Vary the Size parameter. The bigger the room, the further away the reflections, and the quieter they are.
  4. Set the Size to around 3.00. Vary Height. Note how at maximum height, the sound is more open; at minimum height, the sound is more constricted. Leave Height around 1.00.
  5. Now vary Width. With narrower widths, you can really hear that the early reflections are discrete echoes. As you increase width, the reflections blend together more. Leave Width around 2.00.
  6. The Geometry controls might as well be called the “stand here” controls. Turning up Distance moves you further away from the sound source. Asy varies your position in the left-right direction within the room.
  7. Plane is a fairly subtle effect. To hear what it does, repeat steps 3 and 4, and then set Size to around 3.00, Dist to 0.10, and Asy to 1.00. Plane spreads the sounds a bit more apart at the maximum setting.

 

Now that you know how to set up different early reflections sounds, let’s create the other half of our hybrid reverb.

THE REVERB DECAY

To provide the reverb decay, insert the OpenAIR reverb after the Room Reverb. Whenever you call up a new OpenAIR preset, do the following.

  1. Set ER/LR to 1.00.
  2. Set Predelay to minimum (-150.00 ms).
  3. Initially set Envelope Fade-in and Envelope ER/LR-Xover to 0.00 ms.

There are two ways to make space for the early reflections so that they occur before the reverb tail: set an Envelope Fade-in time, set an Envelope ER/LR-Xover time, or combine the two. Because the ER/LR control is set to 1.00, there are no early reflections in the OpenAIR preset, so if you set the ER/LR-Xover time to (for example) 25 ms, that basically acts like a 25 ms pre-delay for the reverb decay. This opens up a space for you to hear the early reflections before the reverb decay kicks in. If you prefer a smoother transition into the decay, increase the Envelope Fade-in time, or combine it with some ER/LR-Xover time to create a pre-delay along with a fade-in.

The OpenAIR Mix control sets the balance of the early reflections contributed by the Room Reverb and the longer decay tail contributed by the OpenAIR reverb. Choose 0% for reflections only, 100% for decay only.

…AND BEYOND

There are other advantages of the hybrid reverb approach. In the OpenAIR, you can include its early reflections to supplement the ones contributed by the Room Reverb. When you call up a new preset, instead of setting the ER/LR, Predelay, Envelope Fade-In, and Envelope ER/LR-Xover to the defaults mentioned above, bypass the Room Reverb and set the OpenAIR’s early reflections as desired. Then, enable the Room Reverb to add its early reflections, and tweak as necessary.

It does take a little effort to edit your sound to perfection, so save it as an FX Chain and you’ll have it any time you want it.

Better Reverb with Frequency Splitting

It’s convenient that Studio One has three significantly different reverbs, but none of them has separate decay times for high and low frequencies. This is one of my favorite reverb features, because (for example) you can have a tight kick ambiance, but let the hats and cymbals fade out in a diaphanous haze…or have a huge kick that sounds like it was recorded in a gothic castle, with tight snare and cymbals on top. Also, with highly percussive drums, sometimes I’d like a little more diffusion than what’s available, so that reflections aren’t perceived as discrete echoes but rather as a smooth wash of sound.

So let’s build the ideal Room Reverb for drums (other instruments, too). There’s a downloadable FX Chain that provides a big drum sound template, but note that the preset control settings cover only one sound out of a cornucopia of possible effects. Once you start modifying the reverb sounds themselves, as well as some of the parameters in the Routing window itself, anything’s possible.

 

ROUTING AND MACRO CONTROLS

Here’s the FX Chain routing.

Splitter 2 provides a Normal split. One split handles the dry signal, while the other goes to the reverbs. Splitter 1 does a Frequency split, with one split going to a single Room Reverb dedicated to the low frequencies, and the other split going to two Room Reverbs in series for the high frequencies. The Split point (crossover frequency) is set around 620 Hz, but varying this parameter provides a wide variety of sounds.

You might wonder “why not just feed two reverbs, and EQ their output?” EQing before going into the reverb gives each reverb more clarity, because the low and high frequencies don’t interact with each other in the process of being reverberated.
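If you want to picture what the Frequency split is doing, here’s a minimal DSP sketch using complementary low-pass/high-pass filters, with scipy standing in for Splitter 1 (an illustration of the concept, not Studio One’s actual crossover):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def frequency_split(x, sample_rate=48000, crossover_hz=620.0, order=4):
    """Split a signal into low and high bands so each can feed its own reverb."""
    low = butter(order, crossover_hz, btype="lowpass", fs=sample_rate, output="sos")
    high = butter(order, crossover_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(low, x), sosfilt(high, x)

# Conceptually: out = dry + low_reverb(low_band) + high_reverb(high_band),
# so the low and high frequencies never interact inside the same reverb.
```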

The three Mixtool modules provide mixing for the dry, low reverb, and high reverb sounds, as represented by the first three Macro controls. The other controls modify reverb parameters, but of course, these are only some of the editable parameters available for adjustment within the Room Reverb.

 

HOW TO USE IT

Here’s one option, although I don’t claim that it’s necessarily “best practices” (suggestions are welcome in the Comments section!).

Start with the Dry, Low Verb, and High Verb controls at minimum. Bring up the Low Verb, and adjust Low Verb Balance and Low Decay for desired low end. Then turn down Low Verb, bring up High Verb, and adjust its associated controls (Hi Verb Balance, Hi Verb Decay, and Hi Verb Damping). With both Low Verb and High Verb set more or less the same, go into the Routing section and vary Splitter 1’s crossover frequency (the slider below Frequency Split). After finding the optimum crossover point, re-tweak the mix if necessary.

Finally, choose a balance of all three levels, and you’re good to go.

 

WHAT ABOUT THE REVERBS THEMSELVES?

For the default FX Chain preset, the Low Verb has a shorter decay than the High Verbs, but still gives a big kick sound.

The reason for using two Room Reverbs in series for the high reverb component is to increase the amount of diffusion, and provide a smoother sound.

You want fairly different settings for the two reverbs so that they blend, thus giving the feel of more diffusion. There’s not really a lot of thought behind the above settings; I just copied one of the reverbs and changed a few parameters until the sound was smooth.

Incidentally, running three Room Reverbs requires a decent amount of CPU, so there are switches at the bottom of the Macro Controls to enable the “eco” mode for each reverb. Choosing eco for the low-frequency reverb impacts the sound less than choosing eco for the two high-frequency reverbs.

IT’S A WRAP

Download the FX Chain and check out what it can do—I think you’ll find that when it comes to reverbs, third time’s a charm.

Click here to get the FX Chain preset!

 

 

Friday Tip: Better Vocals with Phrase-by-Phrase Normalization

Unless you have exceptional vocal control, some vocal or narration phrases will likely be softer than others—not intentionally due to natural dynamics, but as a result of sketchy mic technique, running out of breath, or not being able to hit a note as strongly as other notes. Using compression or limiting to even out a vocal’s peaks has its place, but the low-level sections might not be brought up enough, whereas the high-level ones may sound “squashed.”

A more natural-sounding solution is to edit the vocal to a consistent level first, before applying any compression or limiting, by using phrase-by-phrase gain changes that even out variations. The advantage of adjusting each phrase’s level for consistency is that you haven’t added any of the artifacts associated with compression, or interfered with a phrase’s inherent dynamics. Furthermore, if you do add compression or limiting while mixing, you won’t need to use as much as you normally would to obtain the same perceived volume and intimacy. A side benefit of phrase-by-phrase normalization is that you can define an event that starts just after an inhale, so the inhale isn’t brought up with the rest of the phrase.
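Conceptually, the edit is just a per-phrase gain calculation. Here’s a minimal sketch (hypothetical function and data, not anything inside Studio One) that brings each phrase to the same average level while leaving the dynamics within each phrase untouched:

```python
import numpy as np

def normalize_phrases(audio, phrase_bounds, target_rms_db=-20.0):
    """Apply one static gain per phrase so all phrases hit the same RMS level."""
    out = audio.copy()
    for start, end in phrase_bounds:
        phrase = out[start:end]
        rms_db = 20 * np.log10(np.sqrt(np.mean(phrase ** 2)) + 1e-12)
        out[start:end] = phrase * 10 ** ((target_rms_db - rms_db) / 20)
    return out

# phrase_bounds would correspond to your Bend Marker positions (in samples), e.g.
# normalize_phrases(vocal, [(0, 48000), (52000, 110000)], target_rms_db=-18.0)
```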

Ready to tweak that vocal to perfection? Let’s go.

  1. Open the vocal event in the Edit view, and open the Audio Bend view.
  2. Click on the Event, and choose Action > Detect Transients. Then click on Remove Bend Markers to start with a clean slate. Your event will look like the above screen shot. (Note: If the vocals have phrases that are separated by spaces, you can choose Transient Detection, Standard Mode, and then click on Analyze. Lower the threshold so that the Bend Markers fall only at the beginning of phrases. However, you may need to move, delete, or add some markers with complex parts, which is why I find it easier just to place Bend Markers where needed.)
  3. You can now close the Audio Bend view if you want more room for the waveform height. Choose the Bend tool, and click at the beginning of each phrase to add a Bend Marker. If a section that needs to be adjusted starts in the middle of a phrase, you can add a Bend Marker before the section that needs tweaking anyway, even if there isn’t silence (we’ll explain why later).
  4. Once you’ve separated the phrases with Bend Markers, select the event in the Edit view by clicking on it with the Arrow tool. Then, choose Action > Split at Bend Markers. Now each phrase is its own event.
  5. Click on an event, and then adjust the gain so the event reaches the desired level. Do this with each event that needs tweaking—done!

 

Note that if audio continues before and after the Bend Marker so the Bend Marker can’t land on silence, Studio One generally handles this well if you place the Bend Marker on a zero-crossing. But if an abrupt level change causes a click at a transition, simply crossfade over it by dragging the end of one event and the beginning of the next event over the transition, and type X to create a crossfade. Adjust the curve for the most natural sound. In extreme cases, fading out just before the click and fading in just after the click can solve any issues.
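For the curious, the crossfade that hides such clicks is typically an equal-power blend. Here’s a minimal sketch (illustrative, not Studio One’s crossfade code):

```python
import numpy as np

def equal_power_crossfade(a_tail, b_head):
    """Blend the end of one event into the start of the next; the cos/sin pair
    keeps the summed power constant, so the level doesn't dip mid-fade."""
    n = min(len(a_tail), len(b_head))
    t = np.linspace(0.0, np.pi / 2, n)
    return a_tail[:n] * np.cos(t) + b_head[:n] * np.sin(t)
```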

So why not just do this kind of operation in the Arrange View? Several reasons. First of all, the Edit view is a more comfortable editing environment. But also, sometimes detecting transients will place the Bend Markers accurately enough that all you need to do is split and change levels—it’s much easier than doing a series of splits in the Arrange view. And if you count keystrokes, clicking to drop Bend Markers that define where to split and doing all the splits at once is easier than clicking and splitting at each split. Finally, while in Edit view, you can take advantage of the Bend Markers to adjust phrasing.

While this is a highly effective technique (especially for narration), be careful not to get so involved in this process that you start normalizing, say, individual words. Within any given phrase there will be some dynamics that you’ll want to retain—never lose the human element.

Friday Tips: Keyswitching Made Easy

As the quest for expressive electronic instruments continues, many virtual instruments incorporate keyswitching to provide different articulations. A keyswitch doesn’t play an actual note, but alters what you’re playing in some manner—for example, Presence’s Viola preset dedicates the lowest five white keys (Fig. 1) to articulations like pizzicato, tremolo, and martelé.

 

Fig. 1: The five lowest white keys, outlined in red, are keyswitches that provide articulation options. A small red bar along the bottom of the key indicates which keyswitch is active.

 

This is very helpful—as long as you have a keyboard with enough keys. Articulations typically are on the lowest keys, so if you have a 49-key keyboard (or even a 61-note keyboard) and want to play over its full range (or use something like a two-octave keyboard for mobile applications), the only way to add articulations is as overdubs. Since the point of articulations is to allow for spontaneous expressiveness, this isn’t the best solution. An 88-note keyboard is ideal, but it may not fit in your budget, and it also might not fit physically in your studio.

Fortunately, there’s a convenient alternative: a mini-keyboard like the Korg nanoKEY2 or Akai LPK25. These typically have a street price around $60-$70, so they won’t make too big a dent in your wallet. You really don’t care about the feel or action, because all you want is switches.

Regarding setup, just make sure that both your main keyboard and the mini-keyboard are set up under External Devices—this “just works,” because the instrument will listen to whatever controllers are sending data via USB (note that keyboards with 5-pin DIN MIDI connectors require a way to merge the two outputs into a single data stream, or merging capabilities within the MIDI interface you’re using). You’ll need to drop the mini-keyboard down a few octaves to reach the keyswitch range, but aside from that, you’re covered.

To dedicate a separate track to keyswitching, call up the Add Track menu, specify the desired input, and give it a suitable name (Fig. 2). I find it more convenient not to mix articulation notes in with the musical notes because if I cut, copy, or move a passage of notes, I may accidentally edit an articulation that wasn’t supposed to be edited.

Fig. 2: Use the Add Track menu to create a track that’s dedicated to articulations.

 

So until you have that 88-note, semi-weighted, hammer-action keyboard you’ve always dreamed about, you now have an easy way to take full advantage of Presence’s built-in expressiveness—as well as any other instrument with keyswitching.