I’m still wrapping my head around PreSonus giving us amp sims that are light years beyond the original Ampire in a free update…and also making them available for use in other programs. But now that they’re here, let’s take advantage of them before PreSonus’s accounting department changes its mind.
The new amp sims do not supplement the old Ampire, but replace it. New Studio One owners will have only the new amps; existing users will find that the legacy presets have been removed. If you need to get the older presets back because you used them in pre-4.6 projects, simply install the Ampire XT Classics extension—but I’d recommend redoing any presets with the new amps, because they sound so much better. (The Ampire XT Metal Pack works with the new Ampire, but you may need to re-install it.) The PreSonus Knowledge Base has an article with everything you need to know about making the conversion from the old Ampire XT to the shiny new Ampire.
Bi-amping a guitar amp is useful for the same reason that most studio monitors are bi-amped—just as you can optimize the speakers for high and low frequencies, you can optimize the amps for high and low frequencies. For example, with heavily-distorted chords, the high strings will be equally distorted and relatively indistinct. With bi-amping, the lower notes can have a big, beefy distortion sound, while the high notes ring out on top—which is the subject of this Friday tip.
Let’s take a bi-amped preset apart to find out how it works. This preset is available on the PreSonus Exchange, so if you just want a cool, crunchy rhythm sound, go ahead and download it. But the real value here is learning how to make your own presets, because this preset was made using my guitar, pickups, strings, pick, and playing style, and it follows my musical tastes. It’s unlikely you play guitar in exactly the same way, so it’s worth tailoring any amp sim preset—not just this one—to your own playing style and gear.
Recording the Guitar
Guitars are mono, but to play stereo games, we need to convert a mono track to dual mono. This allows using processors like the Binaural Pan.
The FX Chain Multipreset
Fig. 1 shows the FX Chain “block diagram.” The Splitter is doing a Frequency split; frequencies below 924.7 Hz go to the left split, while frequencies above that go to the right split.
Figure 1: Bi-amp Multipreset block diagram.
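To see what a frequency split is doing, here’s a minimal sketch in Python (my own illustration, not how the Splitter is actually implemented): a one-pole lowpass plus a complementary highpass, chosen so the two bands always sum back to the dry signal.

```python
import math

def frequency_split(samples, crossover_hz, sample_rate=44100):
    """Split audio into complementary low and high bands.
    The high band is (input - low band), so summing the two
    splits reconstructs the original signal exactly."""
    # One-pole lowpass coefficient for the chosen crossover frequency
    a = math.exp(-2 * math.pi * crossover_hz / sample_rate)
    low, high, state = [], [], 0.0
    for x in samples:
        state = (1 - a) * x + a * state  # lowpass output
        low.append(state)
        high.append(x - state)           # complementary highpass
    return low, high

# A 110 Hz test tone, split at the preset's 924.7 Hz crossover
sig = [math.sin(2 * math.pi * 110 * n / 44100) for n in range(256)]
lo, hi = frequency_split(sig, 924.7)
```

Because the split is complementary, routing the low band to one amp and the high band to another never loses (or doubles) any part of the signal.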
Next up is choosing the amps, and setting their parameters (Fig. 2). The left split uses the MCM 800, a revered British amp that can marshall its resources to give big, beefy sounds. The right split’s VC30 amp is known amongst the vox populi for its bright, ringing high end, so the two amps are ideal for delivering the desired result.
Figure 2: Left split (on top), right split (below).
Now let’s enhance the amp sound with some EQ, reverb, and stereo imaging to spread out the reverb a bit more (Fig. 3).
Figure 3: Final touches for the bi-amp multipreset.
The EQ adds the equivalent of an amp’s “bright” switch. With a non-bi-amped preset, you have to be very careful about adding brightness, because it can emphasize any artifacts caused by intermodulation distortion. But that’s not an issue here, because of the VC30’s clean high end. The gentle low-frequency roll-off simulates more of an open-back cabinet sound. The reverb is sort of a cross between a spring and room sound, while the Binaural Pan spreads out the reverb signal for a wider stereo image. The pan setting is fairly conservative; feel free to widen things further.
This multipreset makes an excellent template for further adventures with bi-amplification. Of course, this just scratches the surface of what’s possible with these new amps—so stay tuned to the Friday Tip of the Week for more applications.
This FX Chain complements the Tightener FX Chain presented in the October 25, 2019 Friday Tip. Whereas the Tightener reduces the presence of the key center in a piece of audio, the Resonator enhances the key center by adding resonance. The download at the end includes twelve Resonators—one for each key.
The heart of each Resonator FX Chain is two delay lines whose period correlates to a particular key, and are tuned an octave apart from each other. Mixing the delayed, resonant sound with the dry sound imparts a sense of pitch, which can be useful with unpitched instruments (such as percussion) to blend in better with melodic instruments. It can also help create a sense of pitch for drums that aren’t tuned properly, as well as emphasize any instrument’s key center.
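The relationship between delay time and pitch is simple: a short delay with feedback resonates at 1/t, so tuning a Resonator to a key means setting the delay to one period of the key’s root note, with the second delay line at double that time for the octave below. A quick calculator (the note-lookup helper is my own, not part of the FX Chain):

```python
NOTE_INDEX = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
              "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def resonator_delays_ms(key, octave=2, a4=440.0):
    """Return the two delay times (ms) for a Resonator tuned to a key:
    one period of the chosen octave's root note, plus twice that
    period for the note an octave below."""
    midi_note = 12 * (octave + 1) + NOTE_INDEX[key]
    freq = a4 * 2 ** ((midi_note - 69) / 12)  # equal temperament
    short = 1000.0 / freq   # one cycle of the root note, in ms
    return short, short * 2  # octave below = double the period

# A2 is 110 Hz, so the delays land at ~9.09 ms and ~18.18 ms
short, long_ = resonator_delays_ms("A", octave=2)
```

Doubling the delay time halves the resonant frequency, which is why the two delay lines track each other perfectly at any key.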
For example, suppose it’s hard to get a shaker part to fit in a mix because its level is either too high and stands out, or is too low and sinks into the track. Adding a feeling of pitch may allow mixing the part a bit higher, while also having it blend into a mix more seamlessly. If taken to an extreme (and the FX Chain allows for that!), the Resonator has special effects potential for sounds like “cylon” voices, or tuning reverb.
Figure 1: Each Resonator has six controls; the Resonance control is the most important one.
The controls (Fig. 1) are pretty straightforward, and cover the full travel of the Analog Delay controls. There’s no need to tweak any Transform curves.
Striking the Right Chord
You can also use the Resonators to impart the sense of a chord by sending a sound to FX Channels, loaded with appropriate Resonators. Fig. 2 shows a shaker part acquiring a more melodic vibe via resonators for the keys of D, F#, and A. This produces a D major chord tonality.
Figure 2: Sending the audio from a shaker to three FX Channels “tunes” the shaker to a D major chord.
Although I came up with this mostly to process unpitched sounds, I’ve found it has other uses as well. For example, with an acoustic guitar part, a Resonator can add a vibe that’s not unlike the drone strings on a sitar. The best implementation I’ve found for this is putting the Resonator in an FX Channel, and automating a send so that the resonance is added only in certain strategic parts. And of course, if you want to get crazee, you can always turn up the resonance and sharpness, and do cylon voices. Fun stuff!
Okay, this is an unusual one. Please fasten your seat belts, and set your tray tables to the upright and locked positions.
Personal bias alert: With pop and rock music, for me it’s all about vocals, drums, and bass. Vocals tell the story, drums handle the rhythm, and bass holds down the low end. For a given collection of songs (formerly known as an “album”), I want all three elements to be relatively consistent from one song to the next—and that’s what this week’s tip is all about. Then the other instruments can weave in and out within the mix.
It’s fantastic that you can flip back and forth between the Project page and a Song that’s been added to the Project page, make tweaks to the Song, then migrate the updated Song back to the Project page. But it’s even better when you can make the most important changes earlier in the process, before you start down the final road of mastering.
Here’s a way to match bass and vocal levels in a collection of songs. This takes advantage of the Project page, but isn’t part of the mastering process itself. Instead, you’ll deploy this technique when the mix is in good shape—it has all the needed processing, automation, etc.—but you want a reality check before you begin mastering.
We’ll cover how to match vocal levels for the songs; bass works similarly, and in some ways, more effectively. Don’t worry, I’m not advocating robo-mixing. A mathematically correct level is not the same thing as an artistically correct level. So, you may still need to change levels later in the process—but this technique lets the voice and bass start from a “level” playing field. If you then need to go back and tweak a mix, you can keep the voice and bass where they are, and work the mix around them.
(Note that it’s important to know what the LUFS and LRA metering in the Project page represent. Rather than make this tip longer, for a complete explanation of LUFS and LRA, please check out this article I wrote for inSync magazine.)
Figure 1: The songs in an album have had only their vocal tracks bounced over to the Project page, so they can be analyzed by the Project page’s analytics.
The waveforms won’t provide any kind of visual confirmation, because you adjusted the levels to make sure the songs themselves had a consistent LUFS reading. For example, if you had to attenuate one of the songs by quite a bit, visually the vocal might seem quieter. But remember, it’s being attenuated because it was part of a song that was louder.
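The arithmetic behind the matching is worth spelling out: LUFS is a dB-style scale, so the gain needed to bring a track to a reference loudness is simply the difference between the two readings. A sketch (the helper names are mine, for illustration):

```python
def match_gain_db(measured_lufs, target_lufs):
    """Gain (in dB) to apply so a track's integrated loudness
    reading moves from measured_lufs to target_lufs."""
    return target_lufs - measured_lufs

def db_to_linear(db):
    """Convert a dB gain change to a linear level multiplier."""
    return 10 ** (db / 20)

# A vocal bounce reading -20.5 LUFS, matched to a -18 LUFS reference:
gain = match_gain_db(-20.5, -18.0)   # +2.5 dB
scale = db_to_linear(gain)           # the equivalent fader multiplier
```

This is also why the comparison works regardless of the absolute target you choose; only the differences between songs matter.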
Also try this technique with bass. Bass will naturally vary from song to song, but again, you may see a larger-than-expected difference, and it may be worth finding out why. In my most recent album, all the bass parts were played with keyboard bass and generated pretty much the same level, so it was easy to use this technique to match the bass levels in all the songs. Drums are a little dicier because they vary more anyway, but if the drum parts are generally similar from song to song, give it a try.
…But There’s More to the Story than LUFS
LRA is another important reading, because it indicates dynamic range—and this is where it gets really educational. After analyzing vocals on an album, I noticed that some of them had a wider dynamic range than others, which influences how loudness is perceived. So, you need to take both LUFS and LRA readings into account when looking for consistency.
For my projects, I collect all the songs I’ve worked on during a year, and release the completed project toward the end of the year. So it’s not too surprising that something mixed in February is going to sound different compared to something mixed in November, and doing something as simple as going back to a song and taking a little compression off a vocal (or adding some in) is sometimes all that’s needed for a more consistent sound.
But let me emphasize this isn’t about looking for rules, but looking for clues. Your ears will be the final arbiter, because the context for a part within a song matters. If a level sounds right, it is right. It doesn’t matter what numbers say, because numbers can’t make subjective judgments.
However, don’t minimize the value of this technique, either. The reason I stumbled on it was because one particular song in my next album never seemed quite “right,” and I couldn’t figure out why. After checking it with this technique, I found the vocal was low compared to the other songs, so the overall mix was lower as well. Even though I could use dynamics processing to make the song reach the same LUFS reading as the other songs, this affected the dynamics within the song itself. After going back into the song, raising the vocal level, and re-focusing the mix around it, everything fell into place.
A couple previous tips dealt with how to give mono instruments, like guitar, a stereo image that won’t degrade when collapsed to mono. Widen Your Mono Guitar—Sans Problems used delay, but in a way that minimized phase issues. Delay-Free Stereo from Mono used two Multiband Dynamics, set for no compression, to separate the audio into bands that you could then pan left or right.
This tip takes the process even further—it’s versatile, relatively simple, easily customizable, and also, has no phase issues when collapsed to mono. I’ve even used it to create a subtle, artificial stereo image from old mono records.
The Stereo Separator is particularly effective with power chords and rhythm guitar, especially as an alternative to layering parts in search of a “bigger” sound—you can obtain a stereo spread with a single track, so the sound is more defined compared to using multiple layers. And of course, if you scroll to the end there’s a downloadable FX Chain, so you can start playing with this immediately.
This example assumes a mono, distorted guitar track, like what you’d obtain by using a single mic on an amp. To create a stereo image, we first need to convert this to a stereo track. So, set the track’s Channel Mode to stereo, and bounce the track to itself (Ctrl+B) to convert it into a dual mono (i.e., stereo, but with the same audio in the left and right channels). Now we can start playing with the stereo imaging.
And now, the fun begins! Play with the Pan controls to spread the different frequency bands in the stereo field—the audio example gives a good idea of the type of effect this FX Chain can produce. The first example is mono, the second widens the image a bit, and the final example creates a somewhat more radical stereo image.
Of course, you can go into the routing window, and change the levels of the various splits. Or, add FX in the splits…change the split frequencies…there’s enough to keep you busy for a while. Happy stereo!
This technique dates back to when I was doing live gigs with Brian Hardgroove from Public Enemy—me on guitar, him on drums. Since there was no bass player, we needed a way to fill out the bottom end. I’ve come up with a bunch of ways to do that over the years, but the technique presented here is the easiest one yet to implement. We’ll extract a bass line from an existing guitar track, without using MIDI or virtual instruments—here’s how.
Start by copying the guitar’s audio to a new track, which will become our faux bass track. Call up the Inspector, and transpose the faux bass track down 12 semitones (Fig. 1). This technique works best with relatively articulated guitar notes, not rhythm guitar chords.
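If you’re curious about the math, transposing by semitones is a power-of-two ratio, and -12 semitones is exactly half the pitch, which is what drops guitar notes into bass register. A quick sketch:

```python
def semitones_to_ratio(semitones):
    """Pitch ratio for a transposition in equal-temperament semitones.
    Negative values transpose down."""
    return 2 ** (semitones / 12)

ratio = semitones_to_ratio(-12)   # 0.5: exactly one octave down
# A guitar low E (82.4 Hz) lands at bass-register 41.2 Hz:
faux_bass_freq = 82.4 * ratio
```

Because the ratio is exactly 0.5, every note in the part keeps its harmonic relationship to the others; only the register changes.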
Now, it may seem like transposing down an octave is enough, and we can all go home now. No! The faux bass track needs three processors to sound right (Fig. 2).
But as they always say, the proof is in the pudding. However, since we’re not providing a recipe about how to make pudding, check out the audio example instead.
The first two measures are the guitar by itself, while the second two measures have the faux bass playing along. Pretty cool, eh? Oh…and if you’re in a Cream tribute band, this will definitely come in handy for “Sunshine of Your Love.” Have a great weekend!
Your guitar is most likely mono. But sometimes you want a wide, full, stereo image. I can relate.
One technique is to send the guitar track to an FX channel, insert a delay set for a relatively short delay (like 25 ms), and then pan the original track and FX channel oppositely. But if you sum the signals to mono, then there’s the possibility of cancellation. In fact, I saw a guy in an internet video who said this was a terrible idea, and you should just overdub the part again and pan that oppositely if you want stereo.
Well, overdubbing is an option, assuming you can play tightly enough that the parts don’t sound sloppy. But don’t forget Studio One has that wonderful Channel Mode button on the Main output, so you can test stereo tracks in mono—simply adjust the delay time for minimum cancellation. You won’t be able to avoid cancellation entirely, but tweaking the time may keep it from being objectionable (especially once the delay time gets above 25 ms or so, because that’s more into doubling range). To make any phase issues even less noticeable, lower the delayed sound’s level a little bit to weight the sound more toward the dry guitar.
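The reason longer delays are less objectionable comes down to comb-filter math: summing a dry signal with an equal-level delayed copy fully cancels frequencies at odd multiples of 1/(2t), spaced 1/t apart. A 1 ms delay puts a deep, wide notch right at 500 Hz; a 25 ms delay packs narrow notches only 40 Hz apart, which the ear tends to read as ambience rather than cancellation. Here’s a sketch of the notch math (my own illustration):

```python
def comb_notches_hz(delay_ms, max_hz=20000):
    """Frequencies cancelled when a signal is summed with an
    equal-level delayed copy of itself: odd multiples of 1/(2t)."""
    t = delay_ms / 1000.0
    f0 = 1.0 / (2.0 * t)   # first (lowest) notch frequency
    notches = []
    k = 0
    while (2 * k + 1) * f0 <= max_hz:
        notches.append((2 * k + 1) * f0)
        k += 1
    return notches

# A 25 ms delay: first notch at 20 Hz, below most guitar energy
first = comb_notches_hz(25)[0]
# A 1 ms delay: first notch at 500 Hz, right in the midrange
short_first = comb_notches_hz(1)[0]
```

Lowering the delayed signal’s level (as suggested above) makes every notch shallower, since full cancellation only happens when the two signals are equal in level.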
But I wouldn’t be writing this tip if I didn’t have a better option—so here it is.
Now, here’s where the magic happens. Set the Main output mode to mono, and you’ll hear virtually no difference from the “faux stereo” signal, other than the loss of stereo imaging. The reason is that we now have guitar in the center channel, so choosing mono creates a center-channel buildup. This raises the main guitar’s level above the delayed sounds, so there’s virtually no chance of audible cancellation, and it balances the level better between the stereo and mono modes.
Now you have a wide guitar that sounds equally loud, and is phase-issue free, in mono or stereo—happy Friday!
The June 22, 2018 tip covered how to make mastered songs better with tempo changes, but there was some pushback because it wasn’t easy to make these kinds of changes in Studio One. Fortunately, it seems like the developers were listening, because it’s now far easier to change tempo. I’ve been refining various tempo-changing techniques over the past year (and had a chance to gauge reactions to songs using tempo changes compared to those that didn’t), so it seemed like the time was right to revisit this topic.
WHY TEMPO CHANGES?
In the days before click tracks, music had tempo changes. With good musicians, however, these weren’t random. After analyzing dozens of songs, I found that many (actually, most) of them sped up slightly toward the end of a chorus or verse, or during a solo, and then dropped back down again.
For example, many people feel James Brown had one of the tightest rhythm sections ever—which is true, but not because they were a metronome. There were premeditated, conscious tempo changes throughout a song (e.g., speeding up during the run-up to the phrase “papa’s got a brand new bag” in the song of the same name, then dropping back down again—only to speed up to the next climax). Furthermore, the tempo increased steadily over the course of the entire song.
Note that you didn’t hear these kinds of changes as something obvious; you felt them. They added to the “tension and release” inherent in any music, which is a key element (along with dynamics) in eliciting an emotional response from listeners.
THE PROBLEM WITH TEMPO CHANGES
It was easy to have natural tempo changes when musicians played together in a room. These days, it’s difficult for solo artists to plan out in advance when changes are going to happen. Also, if you use effects with tempo sync, not all of them follow tempo changes elegantly (and some can’t follow tempo changes at all). Let’s face it—it’s a lot easier to record to a click track, and have a constant tempo. However…
THE STUDIO ONE SOLUTION
Fortunately, Studio One makes it easy to add tempo changes to a finished mix—so you can complete your song, and then add subtle tempo changes where appropriate. This also lets you compare a version without tempo changes, and one with tempo changes. You may not hear a difference, but you’ll feel it.
As mentioned in last year’s tip, for the highest possible fidelity choose Options > Advanced > Audio, and check “Use cache for timestretched audio files.” Next, open a new project, and bring in the mixed file. Important: you need to embed a tempo; otherwise, it’s not possible to change the tempo. So, open the Inspector, and enter a tempo under File Tempo. It doesn’t have to match the original song tempo, because we’re making relative, not absolute, changes. Also choose Tempo = Timestretch, and Timestretch = Sound – Elastique Pro Formant.
MANIPULATING THE TEMPO TRACK
Working with the tempo track is now as easy as working with automation: click and drag to create ramps, and bend straight lines into curves if desired. You can set high and low tempo limits within the tempo track; the minimum difference between high and low tempo track values is 20 BPM. However, you can change the tempo track height to increase the resolution. The bottom line is that it’s possible to create very detailed tempo changes, quickly and easily.
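Behind the scenes, following the tempo track is a timestretch ratio: the audio’s duration is scaled by the embedded file tempo divided by the tempo track value. This is also why the embedded tempo only has to exist, not be accurate; only the relative change matters. A sketch (my own helper, not Studio One’s API):

```python
def stretch_factor(file_tempo_bpm, track_tempo_bpm):
    """Duration multiplier for timestretched audio: raising the
    tempo shortens the audio, lowering it lengthens it."""
    return file_tempo_bpm / track_tempo_bpm

# Speeding a song embedded at 135 BPM up to 141 BPM makes it
# roughly 4% shorter; dropping to 134 BPM makes it slightly longer.
faster = stretch_factor(135, 141)
slower = stretch_factor(135, 134)
```

Note that the same ratio results whether you embed 135 BPM or any other value, as long as the tempo track moves are expressed relative to it.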
So what does it sound like? Here are two examples. The first is a hard-rock cover version of “Walking on the Moon” (originally recorded by The Police, and written by Sting).
The differences are fairly significant, starting with a low of 135 BPM, going up to 141 BPM, and dropping down as low as 134 BPM.
Here’s another example, a slower song called “My Butterfly.” It covers an even greater relative range, because it goes from a low of 90 to a high of 96 BPM. You may be able to hear the speedup in the solo, not just feel it, now that you know it’s there.
Note that when possible, there’s a constant tempo at the beginning and end. It doesn’t matter so much with songs, but with dance mixes, I can add tempo changes in the track as long as there’s a constant tempo on the intro and outro so DJs don’t go crazy when they’re trying to do beat-matching.
So is it worth making these kinds of changes? All I know is that the songs I do with tempo changes get a better response than songs without tempo changes. Maybe it’s coincidence…but I don’t think so.
Mid-side (M-S) processing encodes a standard stereo track into a different type of stereo track with two separate components: the left channel contains the center of the stereo spread, or mid component, while the right channel contains the sides of the stereo spread—the difference between the original stereo file’s right and left channels (i.e., what the two channels don’t have in common). You can then process these components separately, and after processing, decode the separated components back into conventional stereo.
Is that cool, or what? It lets you get “inside the file,” sometimes to where you can almost remix a mixed stereo file. Need more kick and bass? Add some low end to the center, and it will leave the rest of the audio alone. Or bring up the level of only the sides to make the stereo image wider.
The key to M-S processing is the Mixtool plug-in, and its MS Transform button. The easiest way to get started with M-S processing is with the MS-Transform FX Chain (Fig. 1), found in the Browser’s FX Chains Mixing folder.
The upper Mixtool encodes the signal so that the left channel contains a stereo file’s center component, while the right channel contains the stereo file’s side components. This stereo signal goes to the Splitter, which separates the channels into the side and center paths. These then feed into the lower Mixtool, which decodes the M-S signal back into stereo. (The Limiter isn’t an essential part of this process, but is added for convenience.)
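The MS Transform itself is just a sum/difference matrix. Here’s a sketch (scale factors vary between implementations; I’ve chosen them so the encode/decode round trip is lossless, matching the behavior described here):

```python
def ms_encode(left, right):
    """Encode L/R into mid (what the channels share) and
    side (what they don't have in common)."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side, side_gain=1.0):
    """Decode back to L/R. side_gain > 1 widens the stereo image;
    side_gain = 0 collapses the signal to mono."""
    left = [m + side_gain * s for m, s in zip(mid, side)]
    right = [m - side_gain * s for m, s in zip(mid, side)]
    return left, right

# Round trip is lossless at unity side gain
L, R = [0.5, -0.2, 0.1], [0.3, 0.4, -0.1]
mid, side = ms_encode(L, R)
L2, R2 = ms_decode(mid, side)

# A mono signal (identical L/R) has zero side content
_, mono_side = ms_encode([0.7, 0.7], [0.7, 0.7])
```

The `side_gain` parameter is my stand-in for the gain slider in the Side path: turning it up widens the image without touching anything panned to the center.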
Even this simple implementation is useful. Turn up the post-Splitter gain slider in the Center path to boost the bass, kick, vocals, and other center components. Or, turn up the gain slider in the post-Splitter Side path to bring up the sides, for the wider stereo image we mentioned.
Fig. 2 shows a somewhat more developed FX Chain, where a Pro EQ boosts the highs on the sides. Boosting the highs adds a sense of air, which enhances the stereo image because highs are more directional.
In addition to decoding the signal back to stereo, the second Mixtool has its Output Gain control accessible to compensate for any level differences when the FX Chain is bypassed/enabled. Also, you can disable the MS Decoder (last button, lower right) to prevent converting the signal back into stereo, which makes it easy to hear what’s happening in the center and sides.
And of course…you can take this concept much further. Add a second EQ in the center channel, or a compressor if you want to squash the kick/snare/vocals a bit while leaving the sides alone. Try adding reverb to the sides but not the center, to avoid muddying what’s happening in the center. Or, add some short delays to the sides to give more of a room sound… the mind boggles at the possibilities, eh?