PreSonus Blog



Friday Tip – Create “Virtual Mics” with EQ

I sometimes record acoustic rhythm guitars with one mic for two main reasons: no issues with phase cancellations among multiple mics, and faster setup time. Besides, rhythm guitar parts often sit in the background, so some ambiance with electronic delay and reverb can give a somewhat bigger sound. However, on an album project with the late classical guitarist Linda Cohen, the solo guitar needed to be upfront, and the lack of a stereo image due to using a single mic was problematic.

Rather than experiment with multiple mics and deal with phase issues, I decided to go for the most accurate sound possible from one high-quality condenser mic. This was successful, in the sense that the recorded sound in the control room was virtually identical to what I heard in the studio; but the result still lacked realism. Thinking about what you hear when sitting close to a classical guitar provided clues on how to obtain the desired sound.

If you’re facing a guitarist, your right ear picks up some of the finger squeaks and string noise from the guitarist’s fretting hand. Meanwhile, your left ear picks up some of the body’s “bass boom.” Although not as directional as the high-frequency finger noise, it still shifts the lower part of the frequency spectrum somewhat to the left. The main guitar sound fills the room, providing the acoustic equivalent of a center channel.

Sending the guitar track to two additional buses solved the imaging problem: one bus got a drastic treble cut and was panned somewhat left, while the other got a drastic bass cut and was panned toward the right (Fig. 1).

Figure 1: The main track (toward the left) splits into three pre-fader buses, each with its own EQ.

One send goes to bus 1. Its EQ uses a lowpass filter set to around 400 Hz (but also try lower frequencies), with a 24 dB/octave slope to focus on the guitar body’s “boom.” Another send goes to bus 2, which emphasizes finger noises and high frequencies; its EQ uses a highpass filter with a 24 dB/octave slope and a frequency around 1 kHz. Pan bus 1 toward the left and bus 2 toward the right, because if you’re facing a guitarist, the body boom will be toward the listener’s left, and the finger and neck noises will be toward the listener’s right.

The third send goes to bus 3, which carries the main guitar sound. Offset its highpass and lowpass filters a little more than an octave from the other two buses, e.g., 160 Hz for the highpass and 2.4 kHz for the lowpass (Fig. 2). This isn’t “technically correct,” but I felt it gave the best sound.

 

Figure 2: The top curve trims the response of the main guitar sound, the middle curve isolates the high frequencies, and the lower curve isolates the low frequencies. EQ controls that aren’t relevant are grayed out.

Monitor the first two buses, and set a good balance of the low and high frequencies. Then bring up the third send’s level, with its pan centered. The result should be a big guitar sound with a stereo image, but we’re not done quite yet.

The balance of the three tracks is crucial to obtaining the most realistic sound, as are the EQ frequencies. Experiment with the EQ settings, and consider reducing the frequency range of the bus with the main guitar sound. If the image is too wide, pan the low and high-frequency buses more to center. It helps to monitor the output in mono as well as stereo for a reality check.
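
If you’d like to hear the idea outside of Studio One, here’s a minimal offline sketch in Python (NumPy/SciPy) of the three-bus split described above, assuming a mono guitar recording in a hypothetical guitar.wav. The filter frequencies mirror the article, but the filter orders, pan positions, and levels are illustrative guesses; in an actual Song you’d use the Pro EQ settings shown in the figures.

# Three-bus "virtual mic" split: lowpass ~400 Hz for the body "boom,"
# highpass ~1 kHz for finger/string noise, and a 160 Hz-2.4 kHz band for the
# main guitar sound. 4th-order Butterworth filters approximate 24 dB/octave.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, guitar = wavfile.read("guitar.wav")             # hypothetical mono source
guitar = guitar.astype(np.float64)
guitar /= np.max(np.abs(guitar))

def band(signal, btype, freq):
    """4th-order (roughly 24 dB/octave) Butterworth filter."""
    sos = butter(4, freq, btype=btype, fs=rate, output="sos")
    return sosfilt(sos, signal)

low_bus = band(guitar, "lowpass", 400.0)              # bus 1: body boom
high_bus = band(guitar, "highpass", 1000.0)           # bus 2: finger/string noise
main_bus = band(guitar, "bandpass", [160.0, 2400.0])  # bus 3: main guitar sound

def pan(signal, position):
    """position: -1 = hard left, 0 = center, +1 = hard right (constant-power pan)."""
    angle = (position + 1) * np.pi / 4
    return np.stack([signal * np.cos(angle), signal * np.sin(angle)], axis=1)

# Boom somewhat left, noise somewhat right, main sound centered.
mix = pan(low_bus, -0.4) + pan(high_bus, 0.4) + 0.8 * pan(main_bus, 0.0)
mix /= np.max(np.abs(mix))
wavfile.write("virtual_mics.wav", rate, (mix * 32767).astype(np.int16))

Collapsing the mix to mono is also a quick way to perform the reality check mentioned above: just average the two channels and listen for anything that thins out.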

Once you nail the right settings, you may be taken aback to hear the sound of a stereo acoustic guitar with no phase issues. The sound is stronger, more consistent, and the stereo image is rock-solid.

Video: Using VocALign in Studio One Pro!

 

In this video, producer Paul Drew shows how VocALign works seamlessly inside PreSonus Studio One Professional and, using ARA2, almost instantly aligns the timing of multiple vocal tracks to a lead vocal, potentially saving hours of painstaking editing time.

ARA (Audio Random Access) is a pioneering extension for audio plug-in interfaces. Co-developed by Celemony and PreSonus, ARA technology enhances communication between the plug-in and the DAW, giving the plug-in and host instant access to the audio data. This video shows Studio One, but the workflow is very similar in Cubase Pro, Nuendo, Cakewalk by BandLab, and Reaper.

Purchase VocALign today right out of the PreSonus Shop!

Friday Tips: The Dynamic Brightener for Guitar

When you play an acoustic guitar harder, it not only gets louder, it also gets brighter. A dry electric guitar doesn’t have that quality; by comparison, the electrified sound by itself is somewhat lifeless. But I’m not here to be negative! Let’s look at a solution that can give your dry electric guitar some more acoustic-like qualities.

How It Works

Create an FX Channel, and add a pre-fader Send to it from your electric guitar track. The FX Channel has an Expander followed by the Pro EQ. The process works by editing the Expander settings so that it passes only the peaks of your playing. Those peaks then pass through a Pro EQ, set for a bass rolloff and a high frequency boost. Therefore, only the peaks become brighter. Here’s the Console setup.

 

The reason for creating a pre-fader send from the guitar track is so that you can bring the guitar fader down, and monitor only the FX Channel as you adjust the settings for the Expander and Pro EQ. The Expander parameter values are rather critical, because you want to grab only the peaks, and expand the rest of the guitar signal downward. The following settings are a good point of departure, assuming the guitar track’s peaks hit close to 0.

 

The most important edit you’ll need to make is to the Expander’s Threshold. Once it grabs only the peaks, experiment with the Range and Ratio controls to obtain the sound you want. Finally, choose a balance of the guitar track and the brightener effect from the FX Channel.
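
For readers who like to see the concept in code, here’s a rough Python sketch of the same idea: a downward expander that keeps only the peaks, a bass rolloff plus treble emphasis on those peaks, and a small amount of the result mixed back in with the dry track. This is only an illustration of the concept, not the Expander or Pro EQ algorithms, and the file name and settings are placeholders.

# Dynamic brightener sketch (assumes a mono electric_guitar.wav).
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, dry = wavfile.read("electric_guitar.wav")
dry = dry.astype(np.float64) / 32768.0

# Downward expander: attenuate everything below the threshold.
threshold_db, ratio = -12.0, 4.0
attack_s, release_s = 0.002, 0.080
a_att = np.exp(-1.0 / (attack_s * rate))
a_rel = np.exp(-1.0 / (release_s * rate))
env = np.zeros_like(dry)
level = 0.0
for i, x in enumerate(np.abs(dry)):                   # simple peak envelope follower
    coeff = a_att if x > level else a_rel
    level = coeff * level + (1.0 - coeff) * x
    env[i] = level
env_db = 20 * np.log10(np.maximum(env, 1e-6))
gain_db = np.minimum(0.0, (env_db - threshold_db) * (ratio - 1.0))
peaks_only = dry * (10 ** (gain_db / 20))              # only the peaks survive

# "Pro EQ" stand-in: bass rolloff plus extra highs on the peaks.
rolled_off = sosfilt(butter(2, 400.0, btype="highpass", fs=rate, output="sos"), peaks_only)
extra_highs = sosfilt(butter(2, 3000.0, btype="highpass", fs=rate, output="sos"), peaks_only)
brightener = rolled_off + 0.5 * extra_highs

# A little brightener goes a long way.
mix = dry + 0.3 * brightener
mix /= np.max(np.abs(mix))
wavfile.write("brightened_guitar.wav", rate, (mix * 32767).astype(np.int16))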

The audio example gets the point across. It consists of guitar and drums, because having the drums in the mix underscores how the dynamically brightened guitar can “speak” better in a track. The first five measures are the guitar with the brightener, the next five measures are the guitar without the brightener, and the final five measures are the brightener channel sound only. You may be surprised at how little of the brightener is needed to make a big difference to the overall guitar sound.

Also, try this on acoustic guitar when you want the guitar to really shine through a mix. Hey, there’s nothing wrong with shedding a little brightness on the situation!

Friday Tips: Synth + Sample Layering

One of my favorite techniques for larger-than-life sounds is layering a synthesizer waveform behind a sampled sound. For example, layering a sine wave along with piano or acoustic guitar, then mixing the sine wave subtly in the background, reinforces the fundamental. With either instrument, this can give a powerful low end. Layering a triangle wave with harp imparts more presence to sampled harps, and layering a triangle wave an octave lower with a female choir sounds like you’ve added a bunch of guys singing along.
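
As a quick illustration of the fundamental-reinforcement idea, the NumPy sketch below renders a quiet sine at a note’s fundamental, rides it with the sample’s own envelope, and tucks it under the sampled sound. The file name and the assumption that the note is A2 (110 Hz) are placeholders; inside Studio One you’d simply layer a sine-wave synth behind the sampled instrument.

# Reinforce a sampled note's fundamental with a quiet sine (mono example).
import numpy as np
from scipy.io import wavfile

rate, piano = wavfile.read("piano_A2.wav")             # hypothetical sampled note
piano = piano.astype(np.float64) / 32768.0

t = np.arange(len(piano)) / rate
fundamental = np.sin(2 * np.pi * 110.0 * t)            # sine at the note's fundamental

# Follow the sample's rough level so the sine fades with the note, then mix low.
envelope = np.convolve(np.abs(piano), np.ones(2048) / 2048, mode="same")
layered = piano + 0.15 * fundamental * envelope

layered /= np.max(np.abs(layered))
wavfile.write("piano_A2_reinforced.wav", rate, (layered * 32767).astype(np.int16))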

Another favorite, which we’ll cover in detail with this week’s tip, is layering a sawtooth or pulse wave with strings. I like those syrupy, synthesized string sounds that were so popular back in the 70s, although I don’t like the lack of realism. On the other hand, sampled strings are realistic, but aren’t lush enough for my tastes. Combine the two, though, and you get lush realism. Here’s how.

  1. Create an instrument track with Presence, and call up the Violin Full preset.
  2. Drag Mai Tai into the same track. You’ll be asked if you want to Replace, Keep, or Combine. Choose Combine.
  3. After choosing Combine, both instruments will be layered within the Instrument Editor (see above).
  4. Program Mai Tai for a very basic sound, because it’s there solely to provide reinforcement—a slight detuning of the oscillators, no filter modulation, very basic LFO settings to add a little vibrato and prevent too static a waveform, an amplitude envelope and velocity response that track the Presence sound as closely as possible, some reverb to create a more “concert hall” sound, etc. The screen shot shows the parameters used for this example. The only semi-fancy programming tricks were making one of the oscillators a pulse wave instead of a sawtooth, and panning the two oscillators very slightly off-center.

  5. Adjust the Mai Tai’s volume for the right mix—enough to supplement Presence, but not overwhelm it.

 

That’s all there is to it. Listen to the audio example—first you’ll hear only the Presence sound, then the two layers for a lusher, more synthetic vibe that also incorporates some of the realism of sampling. Happy orchestrating!

Friday Tips—Blues Harmonica FX Chain

If you’ve heard blues harmonica greats like Junior Wells, James Cotton, Jimmy Reed, and Paul Butterfield, you know there’s nothing quite like that big, brash sound. They all manage to transform the harmonica’s reedy timbre into something that seems more like a member of the horn family.

To find out more about the techniques of blues harmonica, check out the article Rediscovering Blues Harmonica. It covers why you don’t play blues harp in its default key (e.g., you typically use a harmonica in the key of A for songs in E), how to mic a harmonica, and more. However, the secret to that big sound is playing through the distortion provided by an amp, or in our software-based world, an amp sim. I don’t really find the Ampire amps suitable for this application, but we can put together an FX Chain that does the job.

Check out the demo to hear the desired goal. The first 12 bars are unprocessed harmonica (other than limiting). The second 12 bars use the FX Chain described in this week’s tip, which you can download for your own use.


The chain starts with a Limiter to provide a more sustained, consistent sound.

 

Next up: A Pro EQ to take out all the lows and highs, which tightens up the sound and reduces intermodulation distortion. (When using an amp sim, blues harmonica is also a good candidate for multiband processing, as described in the February 1 Friday Tip.)

 

Now it’s time for the Redlight Dist to provide the distortion. For the cabinet, this FX Chain uses the Ampire solely for its 4 x 10 American cabinet—no amp or stomps.

 

After the distortion/cabinet combo, a little midrange “honk” makes the harmonica stand out more in the mix.

 

For a final touch, blues harp often plays through an amp with reverb—so a good spring reverb effect adds a vintage vibe.
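
If you’d like a feel for what the middle of the chain is doing, here’s a minimal offline Python sketch of the core stages: band-limiting, soft-clip distortion, and a midrange “honk,” with the limiter, cabinet, and spring reverb stages left out. The file name and settings are placeholder assumptions, not the actual Redlight Dist or Pro EQ parameters.

# Core of the blues harp chain (assumes a mono blues_harp.wav).
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, harp = wavfile.read("blues_harp.wav")
harp = harp.astype(np.float64)
harp /= np.max(np.abs(harp))

# 1. Pro EQ stand-in: remove lows and highs to reduce intermodulation distortion.
band = sosfilt(butter(4, [200.0, 4000.0], btype="bandpass", fs=rate, output="sos"), harp)

# 2. Distortion stand-in: drive the band-limited harp into a tanh soft clipper.
drive = 8.0
distorted = np.tanh(drive * band)

# 3. Midrange "honk": add back a peak around 1 kHz so the harp cuts through.
honk = sosfilt(butter(2, [800.0, 1500.0], btype="bandpass", fs=rate, output="sos"), distorted)
out = distorted + 0.6 * honk

out /= np.max(np.abs(out))
wavfile.write("blues_harp_dirty.wav", rate, (out * 32767).astype(np.int16))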

You can download the Blues Harp.multipreset and use it as is, but I encourage playing around with it—try different types of distortion and amps, mess with the EQ a bit, and so on. For an example of a finished song with amp sim blues harmonica in context, check out I’ll Take You Higher on YouTube.

 

Click here to download the multipreset!

 

 

Friday Tip: MIDI Guitar Setup with Studio One

I was never a big fan of MIDI guitar, but that changed when I discovered two guitar-like controllers—the YRG1000 You Rock Guitar and Zivix Jamstik. Admittedly, the YRG1000 looks like it escaped from Guitar Hero to seek a better life, but even my guitar-playing “tubes and Telecasters forever!” compatriots are shocked by how well it works. And Jamstik, although it started as a learn-to-play guitar product for the Mac, can also serve as a MIDI guitar controller. Either one has more consistent tracking than MIDI guitar retrofits, and no detectable latency.

The tradeoff is that they’re not actual guitars, which is why they track well. So, think of them as alternate controllers that take advantage of your guitar-playing muscle memory. If you want a true guitar feel, with attributes like actual string-bending, there are MIDI retrofits like Fishman’s clever TriplePlay, and Roland’s GR-55 guitar synthesizer.

In any case, you’ll want to set up your MIDI guitar for best results in Studio One—here’s how.

Poly vs. Mono Mode

MIDI guitars usually offer Poly or Mono mode operation. In Poly mode, data from all strings appears over one MIDI channel. In Mono mode, each string generates data over its own channel—typically channel 1 for the high E, channel 2 for B, channel 3 for G, and so on. Mono mode’s main advantage is that you can bend notes on individual strings without bending the other strings. Poly mode’s main advantage is that you need only one sound generator instead of a multitimbral instrument or a stack of six synths.
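
To see what Mono mode means on the data side, here’s a short sketch using the mido Python library that splits a Mono-mode recording into one track per string purely by MIDI channel—essentially what the six Instrument tracks described below do in real time. The file name and the channel-to-string mapping (channels 0–5 in mido’s zero-based numbering) are assumptions for illustration; tempo and other meta messages are ignored.

# Split a Mono-mode MIDI guitar take into one track per string, by channel.
from collections import defaultdict
import mido

source = mido.MidiFile("midi_guitar_take.mid")          # hypothetical recorded take
split = mido.MidiFile(ticks_per_beat=source.ticks_per_beat)
per_string = defaultdict(list)                          # channel -> [(abs_ticks, msg)]

for track in source.tracks:
    abs_ticks = 0
    for msg in track:
        abs_ticks += msg.time                           # .time is a delta in ticks
        if not msg.is_meta and hasattr(msg, "channel") and msg.channel < 6:
            per_string[msg.channel].append((abs_ticks, msg))

for channel in range(6):
    track = mido.MidiTrack()
    previous = 0
    for abs_ticks, msg in per_string[channel]:
        track.append(msg.copy(time=abs_ticks - previous))  # back to delta time
        previous = abs_ticks
    split.tracks.append(track)

split.save("midi_guitar_split.mid")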

In terms of playing, Poly mode works fine for pads and rhythm guitar, while Mono mode is best for solos, or when you want different strings to trigger different sounds (e.g., the bottom two strings trigger bass synths, and the upper four a synth pad). Here’s how to set up for both options in Studio One.

 

  1. To add your MIDI guitar controller, choose Studio One > Options > External Devices tab, and then click Add…

Figure 1: Check “Split Channels” if you plan to use a MIDI guitar in Mono mode.

  2. To use your guitar in Mono mode, check Split Channels and make sure All MIDI channels are selected (Fig. 1). This lets you choose individual MIDI channels as Instrument track inputs.

  3. For Poly mode, you can follow the same procedure as Mono mode, but then you may need to select the desired MIDI channel for an Instrument track (although usually the default works anyway). If you’re sure you’re going to be using only Poly mode, don’t check Split Channels, and choose the MIDI channel over which the instrument transmits.

Note that you can change these settings any time in the Options > External Devices dialog box by selecting your controller and choosing Edit.

Choose Your Channels

For Poly mode, you probably won’t have to do anything—just start playing. With Mono mode, you’ll need to use a multitimbral synth like SampleTank or Kontakt, or six individual synths. For example, suppose you want to use Mai Tai. Create a Mai Tai Instrument track, choose your MIDI controller, and then choose one of the six MIDI channels (Fig. 2). If Split Channels wasn’t selected, you won’t see an option to choose the MIDI channel.

Figure 2: If you chose Split Channels when you added your controller, you’ll be able to assign your instrument’s MIDI input to a particular MIDI channel.

Next, after choosing the desired Mai Tai sound, duplicate the Instrument track five more times, and choose the correct MIDI channel for each string. I like to Group the tracks because this simplifies removing layers, turning off record enable, and quantizing. Now record-enable all tracks, and start recording. Fig. 3 shows a recorded Mono guitar part—note how each string’s notes are in their own channel.

Figure 3: A MIDI guitar part that was recorded in Mono mode is playing back each string’s notes through its own Mai Tai synthesizer.

To close out, here are three more MIDI guitar tips.

• In Mono mode with Mai Tai (or whatever synth you use), set the number of Voices to 1 for two reasons. First, this is how a real guitar works—you can play only one note at a time on a string. Second, this will often improve tracking in MIDI guitars that are picky about your picking.
• Use a synth’s Legato mode, if available. This will prevent re-triggering on each note when sliding up and down the neck, or doing hammer-ons.
• The Edit view is wonderful for Mono mode because you can see what all six strings are playing, while editing only one.

MIDI guitar got a bad rap when it first came out, and not without reason. But the technology continues to improve, dedicated controllers overcome some of the limitations of retrofitting a standard guitar, and if you set up Studio One properly, MIDI guitar can open up voicings that are difficult to obtain with keyboards.


Friday Tips: The Melodyne Envelope Flanger

This isn’t a joke—there really is an envelope-controlled flanger hidden inside Melodyne Essential that sounds particularly good with drums, but also works well with program material. The flanging is not your basic, boring “whoosh-whoosh-whoosh” LFO-driven flanging, but follows the amplitude envelope of the track being flanged. It’s all done with Melodyne Essential, although of course you can also do this with more advanced Melodyne versions. Here’s how simple it is to do envelope-followed flanging in Studio One.

  1. Duplicate the track or Event you want to flange.
  2. Select the copied Event, then type Ctrl+M (or right-click on the Event and choose Edit with Melodyne).
  3. In Melodyne, under Algorithm, choose Percussive and let Melodyne re-detect the pitches.
  4. “Select all” in Melodyne so that all the blobs are red, then start playback.
  5. Click in the “Pitch deviation (in cents) of selected note” field.
  6. Drag up or down a few cents to introduce flanging. I tend to like dragging down about -14 cents.

As with any flanging effect, you can regulate the mix of the flanged and dry sounds by altering the balance of the two tracks.
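
If you’re wondering why a few cents of pitch deviation flanges at all: the detuned copy slips slowly against the dry copy, and summing the two creates a moving comb filter. The crude Python sketch below detunes an entire file by -14 cents as a static approximation; Melodyne applies the shift per blob, which is what makes the real effect follow the track’s envelope. The file name and cent value are placeholders.

# Static approximation of the detune-and-sum flange (mono or stereo file).
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

rate, dry = wavfile.read("drums.wav")
dry = dry.astype(np.float64) / 32768.0

cents = -14.0
ratio = 2 ** (cents / 1200.0)                          # pitch ratio for -14 cents
# Resampling to a longer length drops the pitch slightly (and stretches the copy).
detuned = resample(dry, int(round(len(dry) / ratio)))
n = min(len(dry), len(detuned))

flanged = dry[:n] + detuned[:n]                        # equal mix = strongest comb
flanged /= np.max(np.abs(flanged))
wavfile.write("envelope_flange_approx.wav", rate, (flanged * 32767).astype(np.int16))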

Note that the Pitch Deviation field shows an offset from the current pitch deviation, not an absolute value. For example, if you drag down to -10 cents, release the mouse button, and click on the parameter again, the display will show 0 instead of -10. So if you then drag up by +4 cents, the pitch deviation will be at -6 cents, not +4. If you get too lost, just select all the blobs, choose the Percussive algorithm again, and Melodyne will set everything back to 0 cents after re-detecting the blobs.

And of course, I don’t expect you to believe that something this seemingly odd actually works, so check out the audio example. The first part is envelope-flanged drums, and the second part applies envelope flanging to program material from my [shameless plug] Joie de Vivre album. So next time you need envelope-controlled flanging, don’t reach for a stompbox—edit with Melodyne.

 

Friday Tips: Humbucker to Single-Coil Conversion with EQ

Humbuckers are known for a big, beefy sound, while single-coil pickups are more about clarity and definition. If you want the best of both worlds, you can warm up a soldering iron, ground the junction of the humbucker’s two coils, and voilà—a single-coil pickup. But there’s an easier way: use the Pro EQ, which has the added benefit of not losing the pickup’s humbucking characteristics.

Fig. 1: Humbucker and single-coil response compared.

 

The main difference between humbucker and single-coil pickups is frequency response. The blue line in Fig. 1 shows a humbucker’s spectral response, while the yellow line shows the same humbucker split for single-coil operation. Unlike the single-coil’s response, which is essentially flat from 150 Hz to 3 kHz, the humbucker has a bump in the 500 Hz to 2 kHz range that contributes to the “beefy” sound. Starting at 3 kHz, the humbucker’s response drops off rapidly, while the single-coil produces more high frequencies than the humbucker from 3 kHz to 9 kHz.

Fig. 2: Bridge humbucker to single-coil conversion EQ curve.

Fig. 2 shows an equalizer curve that modifies a bridge humbucker for more of a single-coil response. Of course, different humbuckers and different single-coil pickups sound different, so this kind of EQ-based “modeling” is an inexact science. However, I think you’ll find that the faux single-coil sound delivers the distinctive, glassy character you want from a single-coil pickup. Feel free to tweak the EQ further—you can come up with variations on the single-coil sound, or “morph” between the humbucker and single-coil characteristics.

Fig. 3: Neck humbucker to single-coil conversion EQ curve.

The difference between a neck humbucker and single-coil response isn’t as dramatic, but the curve in Fig. 3 replicates the neck single-coil character, and provides yet another useful variation for your guitar tone.
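
As a rough illustration of what these curves do, here’s an FFT-based Python sketch that dips the 500 Hz to 2 kHz “beef” region a few dB and lifts everything from 3 kHz to 9 kHz. The gain values are guesses for illustration, not measurements of any particular pickup; for the real thing, use Pro EQ curves along the lines of Figs. 2 and 3.

# Static "humbucker to single-coil" EQ via the frequency domain (mono DI track).
import numpy as np
from scipy.io import wavfile

rate, guitar = wavfile.read("bridge_humbucker.wav")     # hypothetical DI recording
guitar = guitar.astype(np.float64)

spectrum = np.fft.rfft(guitar)
freqs = np.fft.rfftfreq(len(guitar), d=1.0 / rate)

gain_db = np.zeros_like(freqs)
gain_db[(freqs >= 500) & (freqs <= 2000)] = -4.0        # tame the humbucker bump
gain_db[(freqs >= 3000) & (freqs <= 9000)] = 4.0        # restore single-coil sparkle

# Smooth the gain steps so the "EQ" doesn't ring at the band edges.
gain_db = np.convolve(gain_db, np.ones(64) / 64, mode="same")

eq_spectrum = spectrum * (10 ** (gain_db / 20))
faux_single_coil = np.fft.irfft(eq_spectrum, n=len(guitar))

faux_single_coil /= np.max(np.abs(faux_single_coil))
wavfile.write("faux_single_coil.wav", rate, (faux_single_coil * 32767).astype(np.int16))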

 

The bottom line is that you don’t need to break out a soldering iron (or void your guitar’s warranty) to make your humbucker sound more like a single-coil type—all you need is the right kind of EQ.

 

Friday Tips: Studio One’s Transient Shaper for Kick and Snare

As with so many aspects of audio, the subject of compression presets polarizes people. The purists say there’s no point in having presets, because every signal is different, and the same compressor settings will sound very different on different sources. On the other hand, software comes with presets, and there are plenty of recording blogs on the web that dispense advice about typical preset settings. So who’s right?

And as with so many aspects of audio, they all are. If a preset works “out of the box,” that’s just plain luck. However, there are certain ranges of settings that work well in many cases for particular types of signals. In any case, the effects of compression are totally dependent on the input signal level anyway—if the threshold is set to -10 dB, then signals that peak at 0 dB will sound very different compared to signals that peak at -10 dB.

The most effective way to approach compression is to decide what effect you want the compression to accomplish, then adjust the compression settings accordingly. It’s also important to remember that compression isn’t just some monolithic effect that “squashes things.” For example, with kick and snare, compression can act just like a transient/decay shaper due to a drum’s rapid decay.

The usual goal for compressing kick is an even sound, yet one that doesn’t reduce punch. However, you have a great deal of latitude in deciding how to implement that goal.

Figure 1: A starting point for kick (and snare) compression.

 

The preset in Fig. 1 uses a fairly high ratio and a hard knee to even out the highest levels. You want the compression to take hold relatively rapidly, but not take away from the punch. The best option is to start with the attack time at 0, and increase it until you hear the initial hit clearly (but don’t go past that point). Because a kick decays fast, release can be fast as well.

For transient shaping, slowing the attack time softens the attack. Raising the ratio increases the sustain somewhat, while making space for the attack (assuming an appropriate attack time). Between the attack and ratio controls, you can pretty much tailor the kick drum’s attack and sustain characteristics, as well as even out the overall sound. A higher threshold is another way to emphasize the attack, by letting the decay occur naturally. Lowering the threshold reduces the level difference between the attack and decay.
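
For the technically curious, the bare-bones compressor gain computer below shows how the attack setting lets the initial hit through before gain reduction clamps down on the decay. It’s a generic hard-knee design for illustration only, not Studio One’s Compressor, and the file name and settings are placeholders.

# Hard-knee compressor gain computer on a kick (assumes a mono kick.wav).
import numpy as np
from scipy.io import wavfile

rate, kick = wavfile.read("kick.wav")
kick = kick.astype(np.float64) / 32768.0

threshold_db, ratio = -18.0, 6.0
attack_s, release_s = 0.005, 0.060
a_att = np.exp(-1.0 / (attack_s * rate))
a_rel = np.exp(-1.0 / (release_s * rate))

level_db = 20 * np.log10(np.maximum(np.abs(kick), 1e-6))
# Static curve: everything over the threshold is reduced according to the ratio.
over_db = np.maximum(level_db - threshold_db, 0.0)
target_gr_db = -over_db * (1.0 - 1.0 / ratio)

# Smooth the gain reduction: a slower attack preserves the transient,
# and a fast release lets the short kick decay recover quickly.
gr_db = np.zeros_like(target_gr_db)
state = 0.0
for i, target in enumerate(target_gr_db):
    coeff = a_att if target < state else a_rel          # moving toward more reduction = attack
    state = coeff * state + (1.0 - coeff) * target
    gr_db[i] = state

shaped = kick * (10 ** (gr_db / 20))
shaped /= np.max(np.abs(shaped))
wavfile.write("kick_shaped.wav", rate, (shaped * 32767).astype(np.int16))

Lengthening attack_s lets more of the initial hit through before the gain drops, while raising ratio pulls the decay down relative to the attack, the same trade-offs described above.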

Snare responds similarly to kick; however, with an acoustic drum kit, the kick is more isolated physically than the snare. As a result, compressing the snare has the potential to emphasize leakage. Fortunately, the snare is often the focus of a drum part, so you can simply compress the snare and accept that leakage is part of the deal. With individual, multitracked drums (including electronic drums) where leakage is not a problem, it’s still usually the snare and kick that get compression.

With snare, you may want to use a lower ratio (2:1 – 3:1) for a fuller snare sound. Or, increase the ratio to emphasize the attack more. Again, use the attack time to dial in the desired attack characteristics.

With both kick and snare, you’ll usually want a hard knee. However, the knee control is a fantastic way to fine-tune the attack—and once you have that dialed in, you’ll be good to go.

Friday Tip of the Week – Trolling for Slices

So there you are, with your shiny new Impact XT virtual instrument. You want to populate the pads with some fun drum sounds, and although you like the included kits, you’re itching to get creative and come up with some kits of your own. Fortunately, it’s easy to use Audioloops to populate your Impact XT with a custom selection of drum sounds.

Open the Browser, and under Loops, look for files with the .audioloop suffix. The reason .audioloop files stretch elegantly is that the loop is cut into slices, with each slice representing an individual “block” of sound—kick, snare, clap, kick and cymbal hitting at the same time, snare and hi-hat hitting at the same time…whatever.

When you expand an .audioloop, you’ll see each slice listed individually. Some have only one or two slices, but others—for example, the Combo Beat loops under Electronic > Drums > Loop (Fig. 1)—are rich sources of slices.

 

Figure 1:  The Browser’s Loop tab is loaded with slices, just waiting to be used with Impact XT to create custom kits.

Next, open up Impact XT. To audition the slices, toward the bottom of the Browser turn off the loop and metronome options, select a slice, and then click the Play > button. Click on various slices, and when you hear something you like, drag it over to an Impact XT pad (Fig. 2). You won’t have to click the Play button again to audition slices until after you drag a slice over.

 

Figure 2: Drag slices over to Impact XT pads.

The real fun begins when you start to use Impact XT’s sound-shaping options. For example in Figure 2, one of the kick slices has been dropped in pitch, truncated, filtered, and given a new Amp decay setting to sound more like an explosion. Note that the pad’s name will be the same as the .audioloop, so if you’re using multiple slices from the same .audioloop, rename the pad to avoid confusion (right-click on the pad and choose Rename).

And remember, you’re not limited to dragging over slices from the Browser—you can split any file in the Edit window at the Bend markers, and drop those slices into Impact XT.
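
If you want to do something similar outside Studio One, the rough Python sketch below detects drum onsets with librosa and writes each hit to its own file, as an offline stand-in for Studio One’s Bend-marker/transient analysis. The loop file name is a placeholder.

# Slice a drum loop at detected onsets and export each slice.
import librosa
import soundfile as sf

loop, rate = librosa.load("drum_loop.wav", sr=None, mono=True)

onset_frames = librosa.onset.onset_detect(y=loop, sr=rate, backtrack=True)
onset_samples = librosa.frames_to_samples(onset_frames)

boundaries = list(onset_samples) + [len(loop)]
for i in range(len(boundaries) - 1):
    start, end = boundaries[i], boundaries[i + 1]
    sf.write(f"slice_{i:02d}.wav", loop[start:end], rate)
    print(f"slice_{i:02d}.wav: {(end - start) / rate:.3f} s")

The exported slices can then be dragged onto Impact XT pads just like the Browser slices.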

Sure, Impact XT comes with a lot of preset, ready-to-go kits when you just want to load something and start grooving. But you might be surprised how doing a little mixing and matching with the Browser slices can create something new and different—and each new pad sound is only a click + drag away.