PreSonus Blog

Category Archives: Friday Tip of the Week


Hit LUFS Targets with the New Digital Release Menu

It’s a streaming world—and streaming services have their own audio standards with respect to LUFS and True Peak levels. LUFS is not the same as peak or average loudness. Instead, it measures perceived sound levels. In theory, if two songs have the same LUFS readings—whether it’s Billie Eilish whispering or hardcore 1999 Belgian techno—you won’t feel the need to get up and adjust the volume.

True peak measures the peak value after D/A conversion, which can be higher than the peak value prior to D/A conversion. Having a peak value below 0 minimizes the chance of distortion when transcoding a WAV file into a data-compressed format. For more about LUFS, see Understanding LUFS, and Why Should I Care?
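If you’re curious why the peak can rise after reconstruction, here’s a toy demonstration (an assumption-laden sketch, not anything from Studio One): a sine at 1/4 the sample rate, phased so every sample lands at ±0.707 (about -3 dBFS), even though the continuous waveform between samples reaches ±1.0 (0 dBFS). Bandlimited (sinc) interpolation recovers the overshoot, which is essentially what a true peak meter’s oversampling does.

```python
import math

N = 256
# Every sample of this sine lands at +/-0.707, but the waveform between
# samples reaches +/-1.0.
samples = [math.sin(2 * math.pi * 0.25 * n + math.pi / 4) for n in range(N)]

def sinc_interpolate(x, t):
    """Evaluate the bandlimited reconstruction of x at fractional time t."""
    total = 0.0
    for n, s in enumerate(x):
        arg = math.pi * (t - n)
        total += s if arg == 0 else s * math.sin(arg) / arg
    return total

sample_peak = max(abs(s) for s in samples)   # ~0.707, i.e. about -3 dBFS

# 4x oversampling over an interior stretch (away from edge effects):
true_peak = max(abs(sinc_interpolate(samples, n + k / 4))
                for n in range(100, 150) for k in range(4))

print(f"sample peak ~ {20 * math.log10(sample_peak):.1f} dBFS")  # ~ -3.0
print(f"true peak   ~ {20 * math.log10(true_peak):.1f} dBFS")    # ~  0.0
```

The D/A converter (or a lossy transcoder) reconstructs that in-between waveform, which is why a file whose sample peaks sit at 0 can still clip on playback.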

The Easy Song Level Matching tip tells how to use the Tricomp and Limiter to match song levels in a collection of songs or an album, and it remains viable. However, if you’ve already mastered your songs the way you like, Studio One version 5.5 has added an export function to the Digital Release menu that can export your songs to any LUFS and true peak level you want. What’s more, it includes presets for all the most popular streaming services (fig. 1).

Figure 1: To meet the specs for particular streaming services, simply choose a preset, and export. Here, YouTube has been chosen.

Another convenient aspect is that once you open the drop-down menu, you can see a particular streaming service’s recommended specs (fig. 2). This is also where you can specify custom settings for LUFS and True Peak.

Figure 2: Choosing a streaming service shows its desired file specs. This shows that Spotify wants files with a max loudness of -14.0 LUFS, and a max true peak of -1.0 dB.

Let’s do a real-world test to check the effectiveness. Consider two songs in the Project Page, one at -6.8 LUFS with a true peak reading of +0.7, and the other at -12.2 LUFS with a true peak reading of +0.1 (fig. 3). We’ll export them to Spotify’s standard mode, which wants -14.0 LUFS and -1.0 true peak, and then load them back into the Project Page to see what happens.

The -6.8 LUFS file had to be turned down a lot to hit -14.0 perceived level. Turning down the level lowers the true peak reading; in this case, it ended up at -6.5. The -12.2 LUFS file didn’t need to be turned down much at all to hit -14.0 LUFS, and its true peak reading is now -1.7. When played back, even though the waveform levels look very different, they sound like they’re at the same level.

However, it’s important to note that this process won’t raise the level if the peaks already hit 0. For example, if one of the files had been normalized and its LUFS reading was -15.0, it wouldn’t have been increased to -14.0 LUFS, because raising the level without processing (e.g., limiting) would have pushed the peaks past the available headroom.

The export function simply does what most (but not all) streaming services do—turn down levels above the target LUFS to reach the desired reading. This makes sense, because the main reason streaming services adopted LUFS targets was to prevent songs that were mastered “hot” from having a level advantage over songs mastered with more reasonable (and more listenable) dynamics.
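That turn-down-only behavior is simple enough to express in a few lines. This is a sketch of the logic just described, not PreSonus’s actual implementation:

```python
def export_gain_db(measured_lufs, target_lufs=-14.0):
    """Gain applied when exporting to a loudness target.

    Files louder than the target are turned down; quieter files are
    left alone, since boosting them could require limiting to keep
    the peaks inside the headroom.
    """
    return min(target_lufs - measured_lufs, 0.0)

print(export_gain_db(-6.8))    # -7.2 dB: the hot file gets turned down a lot
print(export_gain_db(-12.2))   # -1.8 dB: barely touched
print(export_gain_db(-15.0))   #  0.0 dB: a quieter file is never boosted
```

Note how the numbers line up with the real-world test: -7.2 dB of attenuation is also what drops the first file’s +0.7 true peak to -6.5.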

Figure 3: The two files on the left are the original files. The versions on the right now have LUFS readings of ‑14.0.

Finally, you don’t have to master everything to hit a specific service’s specs, because this new Studio One function does it for you. Master the music so it sounds good to you—go ahead and compress that rock track, or maximize an EDM set. If it has the right sound but its LUFS reading is -9 or whatever, don’t worry about it. When you export it to a specific target, the song will meet that service’s specs.

“Talk Box” Emulator for Guitar

Although this isn’t an actual talk box, it does give humanized mouth sounds with Studio One Professional. This is possible because the human mouth is a filter, and Studio One has filters…so let’s do it! Check out the audio example, then download the .multipreset to try it out for yourself.

Talk Box.multipreset – Click to download

The Talk Box works by splitting the guitar (fig. 1). One split goes to a Pro EQ2, which uses a Channel Editor knob to sweep three bandpass filters simultaneously over the vocal range. The other split goes through a Mixtool that flips the phase, so it cancels all the audio from the Pro EQ2 except for the bandpass filter peaks.

Figure 1: The Talk Box’s “block diagram.”

EQ Settings

The Pro EQ2 uses three stages (fig. 2). The LF stage, in Peaking mode, sweeps from about 250 to 500 Hz. The LMF stage sweeps from around 750 Hz to 1.5 kHz, while the MF stage sweeps from 1.5 to 3.0 kHz.

Figure 2: Pro EQ2 stages being used for the filtering.
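If it helps to picture what the sweep is doing, here’s a sketch: one 0..1 Macro value mapped into each stage’s range. The ranges come from the text above; the logarithmic scaling is my assumption (it usually feels more natural for frequency sweeps than a linear one).

```python
def sweep(knob, lo_hz, hi_hz):
    """Map a 0..1 knob position to a frequency between lo_hz and hi_hz,
    on a logarithmic scale."""
    return lo_hz * (hi_hz / lo_hz) ** knob

def talk_box_frequencies(knob):
    """One Macro value moves all three bandpass stages at once."""
    return {
        "LF":  sweep(knob, 250.0, 500.0),
        "LMF": sweep(knob, 750.0, 1500.0),
        "MF":  sweep(knob, 1500.0, 3000.0),
    }

print(talk_box_frequencies(0.0))  # each stage at the bottom of its range
print(talk_box_frequencies(1.0))  # each stage at the top of its range
```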

Channel Editor Settings

Use the Channel Editor to assign the filter frequencies to a Macro control knob (fig. 3).

Figure 3: The Macro knob’s range has been edited to cover the Mid Frequency stage’s frequencies. Edit the other targets similarly.

The second Mixtool at the output increases the level by 6 dB. This is necessary because one of the splits is out of phase. Because only the filter peaks come through, the output level is considerably lower than the input.
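In case the phase-cancellation trick seems like magic, here’s the arithmetic, band by band: the Pro EQ2 split is the dry signal plus its peaking boosts, the Mixtool split is the dry signal inverted, and summing them leaves only the boosts. The gains below are linear voltage gains, and the band values are purely illustrative.

```python
# Per-band linear gain of the Pro EQ2 split (illustrative numbers):
eq_gain = {"swept band 1": 3.0,      # boosted bandpass peak
           "swept band 2": 4.0,      # boosted bandpass peak
           "everything else": 1.0}   # passed at unity gain

# EQ'd split summed with the inverted dry split, per band:
residual = {band: g - 1.0 for band, g in eq_gain.items()}
print(residual)  # unity-gain bands cancel to 0; only the peaks remain
```

The cancellation also explains the level drop that the makeup gain compensates for: everything except the peaks sums to zero.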

Everything described so far is included in the multipreset, but assigning the Macro knob to a MIDI controller or pedal is up to you. I did a blog post on a way to do this, and the information is also in The Huge Book of Studio One Tips and Tricks. (A heads-up for current owners of the book: a free update will be available soon with more tips, presets, and content, so stay tuned.)

Just remember that you can’t automate the Macro knob per se. You’ll need to add three Automation tracks, assigned to the Low, Low Mid, and Mid frequency parameters. Then, make sure they’re in Write mode when you move the Macro knob control, and they’ll automate the “talk box” changes.

Mix Referencing with Master Matching

 

Mix referencing, where you compare your mix to well-mixed music, can be a big help if you want a reality check on what you’re doing. There are even “curve-stealing” programs that can analyze the spectral response of one song and apply it to another one, but that won’t help train your ears, and you’ll usually need to make changes anyway. So, let’s explore how to customize the process in Studio One.

As examples, I chose two songs that are about as different as can be—“Kids of Summer” by Mayday Parade (rock), and “Spinback” by Comethazine (rap). The goal was to analyze each song’s spectral response, and come up with Pro EQ2 settings that could apply one spectrum to the other. Let’s hear how it turned out.

Fig. 1 shows the original spectra (the lower of the two lines in the graph is the most relevant). Kids of Summer is heavy on the upper mids around 1 to 2 kHz, where guitars and vocals live. Bass is relatively subdued, and the highs start rolling off above 5 kHz. Spinback is heavy on the bass, lighter on the lower mids, and spices up the treble from 5 to about 7 kHz.

Figure 1: Spectra for Kids of Summer (top) and Spinback (bottom).

Let’s listen to the original songs that the spectra represent. Copyright-wise, I think we’re good to go from a fair use standpoint—the excerpts are under 30 seconds, used for educational purposes, transformative because the EQ is going to change them, and don’t diminish the value of the music.

Now let’s create an EQ to give Kids of Summer more of the Spinback spectral response. Fig. 2 shows the EQ settings.

Figure 2: To come closer to the Spinback sound, this boosts lows and highs, and trims the mids.
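The curve above amounts to “target spectrum minus source spectrum.” Here’s a minimal sketch of that idea with made-up per-band average levels in dB (the real spectra are in fig. 1; these numbers are purely illustrative):

```python
# Illustrative per-band average levels (dB), loosely in the spirit of
# the two songs' spectra:
kids_of_summer = {"lows": -18.0, "mids": -10.0, "highs": -24.0}
spinback       = {"lows": -10.0, "mids": -16.0, "highs": -19.0}

# The EQ curve that moves the source toward the target:
eq_gain_db = {band: spinback[band] - kids_of_summer[band]
              for band in kids_of_summer}
print(eq_gain_db)  # positive = boost (lows, highs), negative = cut (mids)
```

This subtraction is essentially what the “curve-stealing” programs automate; doing it by eye with the Pro EQ2 is what trains your ears.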

Fig. 3 shows the spectral response for Kids of Summer after applying the EQ.

Figure 3: Kids of Summer spectra after EQ. Note how it’s much closer to the Spinback spectra.

And here’s what it sounds like.

I really like how this brings out the low-end fullness, although I’d definitely trim the highs a bit—still, it’s the start of a cool alternative. Note that for a fair comparison, the original and EQed versions were set to the same LUFS value.

Now let’s do the reverse. Figure 4 shows the Kids of Summer EQ curve we’ll apply to Spinback.

Figure 4: To approximate the Kids of Summer sound, this cuts the lows and highs, and boosts the mids.

Fig. 5 shows Spinback’s spectral response after applying the Kids of Summer EQ.

Figure 5: We’ve managed to make the Spinback spectrum much more like the one for Kids of Summer.

Let’s hear what Spinback sounds like now.

Again, the original and EQed versions were set for the same LUFS readings. The difference isn’t as dramatic as applying Spinback to Kids of Summer, because the Spinback arrangement is more sparse. However, the EQed version makes the vocals more prominent in comparison to the attenuated low and high frequencies, so the whole song comes more “forward” in the speakers compared to the original. This is something that would cut through better on AM radio and mobile device speakers, at the expense of the low-end fullness and high-end sizzle. Even if you’re an expert mixer, this technique can give you some fresh ideas, and new ways to look at music.

And finally, to all of you, thanks for your continued support of these blog posts—and a big thanks to all the behind-the-scenes folks at PreSonus, like Houston Dragna and Ryan Roullard, who turn them into reality. Have a happy, healthy, productive 2022—and make some great music, so we all can listen to it!

The Songwriter’s Assistant

It happens to everyone: creative blocks. You want to do something musical, but…you keep falling into the same old patterns, or you need something new and different to prod you into being creative. Well, that’s when it’s time to load the Songwriter’s Assistant preset into Studio One’s Chorder Note FX. And although this uses virtual instruments, you don’t even need to know how to play keyboards. Really!

Note FX Basics

A Note FX plug-in processes MIDI data coming from an external controller or Instrument track before passing the data along to a virtual instrument. Note FX are inserted in an instrument track’s Inspector (fig. 1).

Figure 1: Insert Note FX in the Inspector, like you would insert an Event FX for audio. We’ll describe Input Mode later.

Studio One has several Note FX, but for assistance with smashing creative blocks, this tip uses Chorder as our Note FX of choice. And in the spirit of the holiday season, there’s a gift you can download—the Songwriter’s Assistant preset for Chorder.

Songwriter’s Assistant.preset

Why Chorder Is Cool

Chorder can stack chords on a single key. This is why you don’t need to play keyboards—just hit a key, and a chord of whatever complexity you want will play. The Songwriter’s Assistant uses two keyboard octaves (fig. 2).


Figure 2: The Songwriter’s Assistant preset.

The range C2 – B2 plays major chords. C3 – C4 plays minor chords. So if you don’t know how to voice chords on a keyboard, no problem! And if you don’t even know what notes correspond to C2 through C4, no worries there, either. Just hit keys that sound good, and later on, we’ll deal with how to show what chords to play on guitar or other instruments.
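Here’s a sketch of what that mapping amounts to in MIDI terms. The note numbers assume the common C3 = 60 convention, and Chorder’s actual preset format is different; this just illustrates the idea of stacking a whole chord on a single key.

```python
MAJOR = (0, 4, 7)   # root, major third, perfect fifth
MINOR = (0, 3, 7)   # root, minor third, perfect fifth

def build_map():
    """One triad per key: C2-B2 major, C3-C4 minor."""
    chords = {}
    for root in range(48, 60):        # C2..B2 -> major triads
        chords[root] = [root + i for i in MAJOR]
    for root in range(60, 73):        # C3..C4 -> minor triads
        chords[root] = [root + i for i in MINOR]
    return chords

chords = build_map()
print(chords[48])   # one key (C2) plays C major: [48, 52, 55]
print(chords[60])   # C3 plays C minor: [60, 63, 67]
```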

Studio One’s Help file has plenty of information on how to create your own Chorder presets or modify existing ones (such as adding more chord types, like 7ths, to other keyboard octaves), so we’ll just concentrate on using the Songwriter’s Assistant as a plug-and-play Note FX.

Using It

After downloading the Songwriter’s Assistant preset, import it into Chorder. Studio One saves Chorder presets in [drive]\Studio One\Presets\PreSonus\Chorder. Insert an instrument, and then insert Chorder as shown in fig. 1.

Now start playing notes, and come up with cool chord progressions. Record them in the Instrument track to build up your song’s chord progression. That’s all there is to it.

The instrument track can show the note(s) you played, or the notes Chorder generates. With Input Mode (fig. 1) set to “pre” (the same graphic as a send being pre), the track records Chorder’s output. With “post,” it records your original input, although you’ll still hear chords play back. If you selected post but want the track to include the Chorder-generated notes, right-click in the track, and choose Render Instrument Tracks. Now you’ll see what Chorder generated.

Finally, if you don’t play keyboards, just open the Chord Track and drag the instrument track up to it. Studio One will parse your chord progression, and show you the chord letters. Better yet, choose View > Chord Display to see giant chord letters marching across your screen.

And don’t overlook the factory presets, where you’ll find plenty more ideas for generating chord progressions. Is this cool, or what?

Add Lookahead to the Fat Channel Compressors

The Fat Channel is a versatile channel strip plug-in that’s easy to overlook, given all of Studio One’s other cool individual processors. But it has several outstanding features, including the ability to choose from a variety of compressors—sort of like plug-ins within a plug-in (fig. 1). Three are stock; the rest are optional at extra cost from the PreSonus shop.

Figure 1: A fully loaded Fat Channel can choose from 11 compressors.

However, none of these compressors has a lookahead feature. Lookahead delays the audio we hear, but the compressor monitors the audio in real time. Thus, the compressor knows in advance when it needs to apply compression. Without lookahead, if you’re using heavy compression (like for guitar sustain), you’ll hear a nasty pop because the compression can’t kick in until the audio exceeds the threshold—and by that time, it’s too late. Some of the audio has already passed through uncompressed, which causes the pop. The first audio example exhibits this pop. Also note that the first and second audio examples are both normalized—but this one sounds really soft, because the pops are so loud you can’t raise the level any higher without bumping against the headroom.

The solution is simple. What’s more, it applies to not only the Fat Channel, but any dynamics processor, from any manufacturer, that doesn’t have lookahead.

Create a bus, and insert the Analog Delay. Edit the parameters for 2 ms of delay, delayed sound only, and no modulation (fig. 2). Insert the Fat Channel after the Analog Delay, and choose your favorite compressor.

Figure 2: Analog Delay settings for the lookahead function.

At the audio track you want to process, create two pre-fader sends. One goes to the bus, and the other to the Fat Channel’s sidechain. Turn down the track’s fader so you hear only the audio coming from the bus. This accomplishes our goal: the sidechain receives the audio 2 ms before the delayed version enters the Fat Channel’s compressor. So, the compressor is primed and ready to go when a transient hits (fig. 3).

Figure 3: Track and bus setup for Fat Channel lookahead.

Now compare the next audio example to the first one—the nasty pop is gone. Yeah! Also notice how it’s much louder, because the headroom doesn’t have to accommodate a pop.

However, there’s a catch. Studio One plug-ins with a lookahead function delay the signal by 1.5 ms, but apply plug-in delay compensation so that all the other tracks are delayed by 1.5 ms. This keeps the tracks in sync, and you don’t really notice a delay this short. However, our “faux lookahead” doesn’t have plug-in delay compensation.

The workaround is to move the track forward on the timeline by 2 ms, to align it with the other tracks. You can do this via the Delay setting in the Event Inspector (F4).
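If you’d rather nudge by an exact sample count, the 2 ms faux-lookahead delay converts to samples like this (a trivial sketch, but handy at different session rates):

```python
def lookahead_samples(ms, sample_rate):
    """Convert a delay in milliseconds to the nearest whole sample."""
    return round(ms / 1000.0 * sample_rate)

print(lookahead_samples(2.0, 44100))  # 88 samples
print(lookahead_samples(2.0, 48000))  # 96 samples
print(lookahead_samples(2.0, 96000))  # 192 samples
```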

So…What’s the Deal with Aux Channels?

 

Hardware is making a comeback. Real-time, improvisation-based drum machines and synths are gaining popularity, and you can find occasional bargains for used synths that were top of the line only a few years ago. So, it makes sense that Studio One would want to simplify integrating external hardware synths (similarly to how Pipeline integrates external hardware effects).

When Version 5 introduced the Aux Channel, comments ranged from “So great—I’ve been wanting this for years!” to “why not just feed the instrument into audio tracks?” Well, they’re both right—Aux Channels are about workflow with external hardware synthesizers. But Aux Channels can streamline workflow and simplify setup, compared to assigning the instrument outs into audio tracks, which then route to the mixer.

In the mixer, Aux Channels monitor external audio interface inputs directly, rather than the outputs from recorded tracks. These external inputs can carry any audio. For example, when mixing, you might want to listen to a well-mixed CD for comparison. You don’t need to record it as a track; just monitor the inputs it’s feeding, as needed.

Aux Channel Benefits

My favorite feature is that you can add a hardware synthesizer to the External Instruments folder (located in the Browser’s Instruments tab), and drag and drop the hardware synth into the arrange view—just like a virtual instrument. This automatically creates the Aux Channel, and sets up the Instrument track as you saved it. Set up the external synth once, then use it any time you want.

Also, the external instrument needs only the Note data track in the Arrange view—audio tracks are unnecessary because you’re just going to mix them in the console anyway. This is consistent with Studio One’s design philosophy of dedicating the Arrange view to arranging, not mixing. Of course, you can show/hide audio tracks in the Arrange view, but it’s more convenient to have those audio tracks show up directly in the console, like your other audio sources.

How to Create an Aux Channel

First, your hardware sound generator needs to be set up as an external Instrument in the Options (Windows) or Preferences (Mac) window. Then, in the Console, click on External toward the lower left. Click the downward arrow for the desired External Device, and choose Edit. (Note that if it’s a workstation that combines a sound generator with a keyboard, you should have two entries—one for the Keyboard, and one for the Instrument. Choose the instrument.)

When the control mapping window appears, click on the Outputs button (the one with the right arrow; see fig. 1), and choose Add Aux Channel.

Figure 1: How to add an Aux Channel for an external device.

After the Aux Channel appears, assign its input to the audio input(s) being fed by the hardware synth. For example, if the synth’s audio is going to stereo input 3+4, then choose that stereo input. (If your existing Song Setup I/O doesn’t include an easily identifiable name for the inputs being used for the Aux track, consider doing some renaming.)

Next, save these default settings. If needed, click on Outputs again to bring up the controller mapping window, and click on Save Default. Saving it is what allows the hardware instrument to show up in the Browser.

All Ready!

In the Browser’s Instruments tab, look under External Instruments (toward the top, just under Multi Instruments). Drag your instrument into the Arrange view, and start playing. If you don’t hear anything, the likely causes are either that the keyboard being assigned to the synth isn’t the default keyboard (specify All Inputs for MIDI in, and it should work), or the Aux Channel input hasn’t been assigned to the correct audio interface inputs.

Also note that with workstations whose keyboard drives the internal sound generator, turn off the Local Control parameter (usually in the instrument’s MIDI setup menu). Otherwise, you’ll be playing the sound generator from the keyboard while Studio One also feeds it notes, and the result is double-triggered notes.

Bouncing

To preserve what the Instrument track plays as an audio track, choose a track’s Transform to Audio Track option, or select the Event and bounce to a new track (fig. 2).

Figure 2: The bounce menu for hardware instruments.

When bouncing, make sure that the Record Input is assigned to the Aux Channel where the Instrument’s audio appears. Finally, Studio One knows that because the bounce involves external hardware, the bounce must be done in real-time (faster than real-time bouncing is possible only with virtual instruments that live inside the computer). Happy hardware, everyone!

Amp Sims: Garbage In, Garbage Out

An astute Friday Tip reader commented that while the tip on how to level the outputs of amp sim presets was indeed useful, I should also write about the importance of input levels. Well, I do take requests—and yes, input levels are crucial with amp sims.

Physical amps are forgiving. They soak up transients, and chop off low and high frequencies. But amp sims tend to magnify the differences between guitars and playing styles. When going through the same preset, a player who uses a thin flat pick, 0.008 strings, and single-coil pickups will sound totally different compared to a player who uses a thumbpick, 0.010 strings, and humbuckers. So, let’s look at four common mistakes people make when feeding amp sims.

  1. Dialing up presets created by someone else. You have no idea what kind of input level the amp sim expects, so you’ll almost certainly need to edit at least some parameters (particularly the input drive or level).
  2. Too much gain. Excessive gain generates nasty distortion, not the “good” distortion an amp creates. You’ll also have issues with decreased definition, potential aliasing, and a sound that splatters all over a mix. Check out the audio example, using Ampire’s Painapple amp.

The first half has the input set to 5 o’clock. The sound is so distorted that the playing is indistinct. Also, listen to the very beginning, before the first note hits. All that gain is picking up noise, hum, and garbage that becomes part of your guitar signal. No wonder the amp sim sounds like garbage—it has plenty of garbage mixed in. The audio example’s second half has the input at 9 o’clock. The sound is not only more focused, but stronger.

  3. Inconsistent levels. Amp sim plug-ins are always re-amping—the guitar track is dry. Because amp sims are so dependent on levels, consistent sounds from presets require consistent track levels. I normalize my dry guitar tracks to -3 dB, and then my presets know what to expect. Also, note that Event level adjustments are before the amp sim. Sometimes all that’s needed to optimize the guitar sound is to lower the Event level in places where you want more definition, and raise it when you prefer heavier distortion.
  4. Too many low and high frequencies. Guitar amps were never about flat frequency response. Rolling off lows below the guitar’s range keeps out bass energy that has nothing to do with your playing, and rolling off the highs a bit simulates the high-frequency loss through long cables—something amp sims don’t emulate. In the next audio example, the first half is the same overly distorted sound as the previous audio example’s first half. The second half doesn’t change the input level, but rolls off the lows and highs with the Pro EQ, prior to the amp sim. Fig. 1 shows the Pro EQ settings.

Figure 1: Rolling off the lows and highs before feeding an amp sim can clean up the sound.

In particular, listen to the spaces between notes. The version without EQ has a sort of bassy mud between notes that detracts from the part’s focus.
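The roll-off-the-lows-and-highs idea can be sketched with two one-pole filters. The 80 Hz and 6 kHz cutoffs are illustrative (in Studio One, the Pro EQ does this job, with steeper slopes available):

```python
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    """Smoothing coefficient for a one-pole filter at the given cutoff."""
    return 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def pre_filter(x, sample_rate, hp_hz=80.0, lp_hz=6000.0):
    """High-pass (~80 Hz) into low-pass (~6 kHz), ahead of the amp sim."""
    a_hp = one_pole_coeff(hp_hz, sample_rate)
    a_lp = one_pole_coeff(lp_hz, sample_rate)
    hp_state = lp_state = 0.0
    out = []
    for s in x:
        hp_state += a_hp * (s - hp_state)   # low-pass tracker...
        high_passed = s - hp_state          # ...subtracted = high-pass
        lp_state += a_lp * (high_passed - lp_state)
        out.append(lp_state)
    return out

# DC (the ultimate "bassy mud") is completely rejected:
out = pre_filter([1.0] * 2000, 44100.0)
print(f"DC settles to {out[-1]:.5f}")   # ~0
```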

The bottom line is simple: If your amp sim doesn’t sound right, the quickest fix might be as simple as turning down the input level, and rolling off some lows and highs before the amp sim.

Why You Want to Level Ampire Presets

It’s great that we can store presets, trade them with other users, and download free ones. But when selecting different Ampire presets to decide how they fit with a track, you want their levels to match closely. Then, your evaluation of the sound will be based on the tone, not on whether the preset is softer or louder. However, consistent preset levels are not a given.

Having a baseline level for presets, so you don’t need to change the level every time you call up a new one, is convenient. You can’t really use VU meters for this, because you want sounds to have the same perceived level, which can be different from their measured level. For example, a brighter sound may measure as softer, but be perceived as louder because it has energy where our ears are most sensitive.

My standard of comparison is a dry guitar sound, because I want the same perceived level whether Ampire is enabled, or bypassed. You might prefer to have Ampire always be a few dB louder than your dry guitar—whatever makes you happy.

Enter LUFS

The LUFS (Loudness Unit Full Scale) measurement protocol measures perceived loudness. LUFS measurements allow streaming services like YouTube, Spotify, Apple Music, and others to adjust the volume of various songs to the same perceived level. This is why you don’t have to change the volume every time a different song shows up in a playlist. The system isn’t perfect, but it’s better than dealing with constant level variations. Fortunately, Studio One has a Level Meter plug-in that gives LUFS readings (fig. 1).

Figure 1: The Level meter can measure levels based on a variety of standards, including LUFS.

The Process

  1. Record at least two Events—one for chords, and one for single-note leads. If you use bass presets, you’ll also want an Event with a bass line. Record about a 15 second clip of continuous guitar chord progressions, and another clip with 15-30 seconds of single notes (and/or bass lines), also without pauses.
  2. Insert the Level meter after Ampire, and enable its LUFS and R128 buttons. Set the appropriate event to loop, and start playback.
  3. With Ampire bypassed, adjust the Event’s level for a nominal LUFS reading. I use -18 LUFS because that works well with Ampire, Helix, and other amp sims. After setting your standard level, it’s a good idea to lock the Events from editing (context menu > Toggle Edit Lock).

We interrupt these steps to bring you an important bulletin: the Level Meter reading that matters is the INT field. This averages the audio over time, so you’ll see the reading change at first, and then settle down to a consistent LUFS value. When you change levels, call up a different preset, or make any other changes, click the Reset button to restart the averaging process. When doing any LUFS measurements, you can’t be sure the reading is correct until you’ve a) hit Reset, and b) played the Event several times, which is why we want to loop it.

We now return to the step-by-step procedure.

  4. Enable Ampire, and call up a preset. Hit the Level Meter’s Reset button, and after the Event has played through a couple of times, check the LUFS reading.
  5. In my case, a LUFS reading of -20 would mean the preset level is about 2 dB lower than my standard -18 level. So, I’d raise Ampire’s output control (fig. 2) by 2 dB. On the other hand, if the LUFS reading was -14, it would be about 4 dB louder than my standard, so I’d instead lower the output control by 4 dB.

Figure 2: Use the output control to obtain similar levels from different presets.

  6. After adjusting the level, hit Reset on the Level Meter, and play the loop a few times. If the reading is the same as the nominal value…done! Otherwise, re-tweak.
  7. The final step is to click on Update Preset (fig. 3), so you don’t have to do this again.

Figure 3: Save your work for future sessions.

And while you’re at it, save the Song you used to do this testing. Then you can call it up again in the future, when you want to match preset levels.

Okay, so it took a little time to balance all your presets. But when deciding what preset to use in the future, you’ll be glad you set them all to a baseline level.
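The level-matching arithmetic in the procedure above boils down to a one-liner: the trim to apply to Ampire’s output control so a preset lands on the baseline (the -18 LUFS baseline is my choice, not a Studio One default):

```python
def output_trim_db(measured_lufs, baseline_lufs=-18.0):
    """How far to move the output control to hit the baseline level."""
    return baseline_lufs - measured_lufs

print(output_trim_db(-20.0))  # 2.0: preset is too quiet, raise the output 2 dB
print(output_trim_db(-14.0))  # -4.0: preset is too loud, lower it 4 dB
```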

Studio One’s Session Bass Player

Studio One includes multiple algorithmic composition tools, but I’m not sure how many users are aware of them. So, let’s look at the way some of these tools can help expedite the songwriting process.

I’ve always prioritized speed when songwriting, because inspiration can disappear quickly. But I’ve also found that good guide tracks (e.g., cool drum loops instead of metronome clicks) increase the inspiration factor. The trick is to create good guide tracks without getting distracted into editing a guide track into a “part.”

A solid bass line helps drive a song, but when songwriting, I don’t want to take the time to grab a bass and do the necessary setup. The blog post Studio One’s Amazing Robot Bassist describes a simple way to create a bass part by hitting notes on the right rhythm, and then conforming them to the Chord track. This week’s tip builds on that concept to let Studio One’s session bass player get way more creative, and generate parts even more quickly.

Fig. 1 shows a chorus being written. The process started with a rhythm guitar part (track 3), which I dragged up to the Chord track so it could parse the chord changes. I made a few changes to the Chord Track, then dragged it into a piano instrument track. This deposited MIDI notes for the chords, so now the piano played the chord progression. I then had the guitar follow the new Chord track, and the guitar and piano played together. Next up was adding a drum loop.

Figure 1: Studio One is set up for songwriting, and has just generated a bass part.

As to the bass, there are three elements to having Studio One create the part.

1: Fill With Notes

After selecting the bass’s blank Note Event, open the Edit Menu and choose Action > Fill with Notes. You’ll want to customize the settings for best results (fig. 2).

Figure 2: Studio One’s session bass player is thinking about what to play.

For example, choose a bass-friendly pitch range. For this song, I also wanted a fairly bouncy part, so the rhythm is made up of 1/4 and 1/8 notes. You might also want some half-notes in there. Click on OK, and you have a part.

Every time you invoke Fill with Notes, the notes will be different. If you don’t like the results, delete the notes, and try again. But don’t agonize over this—all you really want is notes with a rhythm you like, because the Chord Track and Scale will take care of the pitches.

Also note that if you don’t like the notes in, for example, the second half but like the first half, no problem. Delete the notes in the second half, set the loop to the second half, and try again.

2: Chord Track

Now have the part follow the Chord Track. Your follow options are Parallel, Narrow, and Bass. I usually prefer Narrow over Bass, but try them both. Now the notes follow the chord progression.

3: Choose and Apply Scale

This may not be necessary, but if the part is too busy pitch-wise, specify the scale note and choose Major or Minor Triad. Then, choose Action > Apply Scale. Now all the notes will be moved to the first, third, or fifth. Of course, you can also experiment with other scales—but remember, the object is to get an acceptable bass part down fast, so you can move on to songwriting.
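Here’s a toy version of that Fill with Notes and Apply Scale flow: random notes in a bass-friendly range, then snapped to the nearest chord tone of a triad. The range and the snapping rule are illustrative, not Studio One’s actual algorithm.

```python
import random

def fill_with_notes(n, lo=36, hi=55, seed=None):
    """Random MIDI notes in a bass-friendly range."""
    rng = random.Random(seed)
    return [rng.randrange(lo, hi + 1) for _ in range(n)]

def snap_to_triad(notes, root=0, minor=False):
    """Move each note to the nearest pitch class of the triad."""
    tones = {root % 12, (root + (3 if minor else 4)) % 12, (root + 7) % 12}
    snapped = []
    for note in notes:
        offset = 0                      # search 0, +1, -1, +2, -2, ...
        while (note + offset) % 12 not in tones:
            offset = -offset + (1 if offset <= 0 else 0)
        snapped.append(note + offset)
    return snapped

bass_line = snap_to_triad(fill_with_notes(8, seed=42))
print(bass_line)  # every note is now a root, third, or fifth of C major
```

As in the tip, the random pass supplies the rhythm and contour; the snapping pass supplies the harmony.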

Fig. 3 shows a bass part that Studio One’s session bass player “played.” This was on the second try.

Figure 3: The session bass player came up with a pretty cool part. I applied Humanize to the velocities so it didn’t sound too repetitive.

Finally, let’s hear the results. The vitally important point to remember is that all this was created, start to finish, in under four minutes (which included tuning and plugging in the guitar). The end result was some decent guide tracks, so I could start work on the ever-crucial vocal lead line and lyrics. Thank you, session bass player!


Percussion in Motion

I’m a fan of hand percussion. Tambourines, cowbells, claves, guiros…you name it. But mixing it just right is always tricky. When mixed too high, the percussion becomes a distraction. Too low, and it might as well not be there. The object is to find the sweet spot between those extremes.

My solution is simple: use X-Trem’s Autopan function to give motion to percussion. The following audio example has cowbell (no jokes, please) and tambourine. The cowbell keeps a rock-solid hit on quarter notes, and is panned to an equally rock-solid center. But the tambourine is a different story. I’ve mixed both of them higher than normal, so you can clearly hear how they interact.

In the first half, both the tambourine and cowbell are panned to center. After a brief gap, the section repeats, with X-Trem moving the tambourine back and forth in the stereo field. In a real mix, both percussion parts would be mixed lower, so the tambourine’s motion would be something you sensed rather than heard. The end result is a feeling of more motion with the percussion, because the tambourine’s wanderings keep it from becoming repetitive.

X-Trem Setup

First things first: the track must be stereo. If you recorded it in mono, set the Channel Mode to stereo, select the clip, and type ctrl+B to bounce the clip to itself. This converts it to stereo.

I prefer not to have a regular, detectable panning change. A random LFO waveform would be ideal, but X-Trem’s 16 Steps waveform, drawn with an irregular pattern, works just as well. Slower pan rates are better, because you don’t want the pan position to change so fast that a percussion hit pans while it’s still sustaining or playing. I’d recommend 2 beats (changes every 1/8th note, as in the audio example), or every 4 beats for slower tempos or percussion that sustains.

Draw a pattern that’s as close as possible to seeming random (fig. 1). This prevents the panning from becoming repetitive. 

 

Figure 1: The 16 Steps option lets you create a custom LFO waveform.
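One way to think about drawing a pattern that “seems random”: pick random step values, and re-roll whenever a step would repeat its neighbor, so no two adjacent hits land in the same pan position. This is purely a sketch of the idea; in X-Trem you draw the 16 steps by hand.

```python
import random

def random_pan_steps(n=16, positions=8, seed=None):
    """Generate n step values with no immediate repeats."""
    rng = random.Random(seed)
    steps = []
    for _ in range(n):
        step = rng.randrange(positions)
        while steps and step == steps[-1]:
            step = rng.randrange(positions)   # re-roll a repeated neighbor
        steps.append(step)
    return steps

print(random_pan_steps(seed=1))  # 16 values, 0 = hard left .. 7 = hard right
```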

Placement

If you want the panning to move around the center, fine—pan the track to center, and you’re done. But if you want the panning to move (for example) between hard left and center, remember that the pan control becomes a balance control in stereo. So if X-Trem pans the audio more toward the right, it will become quieter. To get around this, pan the channel to center, but follow the X-Trem with a Dual Pan that sets the actual panning range. Fig. 2 shows settings for panning between the left and center, while maintaining a constant level. 

Figure 2: Dual Pan settings for varying the auto-panned position between the left and center.
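The math behind why the Dual Pan step matters: a stereo pan (balance) control simply turns one side down, while an equal-power pan law keeps the overall level constant across the sweep. This is a generic equal-power sketch, not Dual Pan’s exact law; pos runs 0.0 (hard left) to 1.0 (hard right).

```python
import math

def equal_power_gains(pos):
    """Equal-power pan: left and right gains for a 0..1 position."""
    angle = pos * math.pi / 2.0
    return math.cos(angle), math.sin(angle)   # (left gain, right gain)

for pos in (0.0, 0.5, 1.0):
    left, right = equal_power_gains(pos)
    # total power stays 1.000 at every position
    print(f"pos={pos}: L={left:.3f} R={right:.3f} power={left**2 + right**2:.3f}")
```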

As to the amount of X-Trem modulation depth…it depends. If you’ve been to my seminars where I talk about “the feel factor” with drum parts, you may recall that I like to keep the kick right on the beat, and change timings around it. A similar concept holds true with panning percussion. In the audio example, the cowbell is the anchor, and the tambourine dances around it. If both are moving, the parts can become a distraction rather than an enhancement.

Now you know how to make your percussion parts tickle the listener’s ears just a little bit more…and given the audio example, I’m proud of all of you for not stooping to a “more cowbell” joke!