Perry Sorensen, Head of Mastering for NAXOS of America, shares his favorite Project Page updates in Studio One 5.5. NAXOS is the largest distributor of classical, jazz, and world music.
Perry uses Studio One to master, and in this video he gives a walkthrough of how he uses track automation and more!
Learn more about Studio One
Learn more about NAXOS of America
Listen to Perry’s musical expressions with Jessie Kol
It’s a streaming world—and streaming services have their own audio standards with respect to LUFS and True Peak levels. LUFS is not the same as peak or average loudness. Instead, it measures perceived sound levels. In theory, if two songs have the same LUFS readings—whether it’s Billie Eilish whispering or hardcore 1999 Belgian techno—you won’t feel the need to get up and adjust the volume.
True peak measures the peak value after D/A conversion, which can be higher than the peak value prior to D/A conversion. Having a peak value below 0 minimizes the chance of distortion when transcoding a WAV file into a data-compressed format. For more about LUFS, see Understanding LUFS, and Why Should I Care?
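Because true peak is defined by what happens after reconstruction, you can approximate it by oversampling before taking the peak reading. Here's a minimal Python sketch (not the full ITU-R BS.1770 measurement, which specifies a particular interpolation filter). The test signal is a sine whose samples all land at about 71% of full scale, even though the reconstructed waveform reaches full scale:

```python
import numpy as np
from scipy.signal import resample

def true_peak_db(x, oversample=4):
    """Estimate true peak (dBTP) by oversampling, approximating the
    D/A reconstruction that can exceed the stored sample values."""
    y = resample(x, len(x) * oversample)  # FFT-based interpolation
    return 20 * np.log10(np.max(np.abs(y)))

# A sine at fs/4, phased so every sample lands at +/-0.707 of full scale:
fs, n = 48000, 480
t = np.arange(n) / fs
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)

sample_peak_db = 20 * np.log10(np.max(np.abs(x)))  # about -3 dBFS
tp_db = true_peak_db(x)                            # about 0 dBTP
```

The per-sample peak reads about -3 dBFS while the oversampled estimate reads about 0 dBTP. That gap is exactly the kind of intersample peak that can turn into distortion when a WAV file is transcoded to a data-compressed format.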
The Easy Song Level Matching tip tells how to use the Tricomp and Limiter to match song levels in a collection of songs or an album, and it remains viable. However, if you’ve already mastered your songs the way you like, Studio One version 5.5 has added an export function to the Digital Release menu that can export your songs to any LUFS and true peak level you want. What’s more, it includes presets for all the most popular streaming services (fig. 1).
Another convenient aspect is that once you open the drop-down menu, you can see a particular streaming service’s recommended specs (fig. 2). This is also where you can specify custom settings for LUFS and True Peak.
Let’s do a real-world test to check the effectiveness. Consider two songs in the Project Page, one at -6.8 LUFS with a true peak reading of +0.7, and the other at -12.2 LUFS with a true peak reading of +0.1 (fig. 3). We’ll export them to Spotify’s standard mode, which wants -14.0 LUFS and -1.0 true peak, and then load them back into the Project Page to see what happens.
The -6.8 LUFS file had to be turned down a lot to hit -14.0 perceived level. Turning down the level lowers the true peak reading; in this case, it ended up at -6.5. The -12.2 LUFS file didn’t need to be turned down much at all to hit -14.0 LUFS, and its true peak reading is now -1.7. When played back, even though the waveform levels look very different, they sound like they’re at the same level.
However, it’s important to note that this process won’t raise the level if the peaks already hit 0. For example, if one of the files was normalized and its LUFS reading was -15.0, it wouldn’t have been increased to -14.0 LUFS, because that would have required processing (e.g., limiting) to raise the level; otherwise, the peaks would have exceeded the available headroom.
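To make the behavior concrete, here's a Python sketch of the "turn down, but never limit" logic the example demonstrates. This is an illustration of the observed behavior, not PreSonus's actual implementation:

```python
def export_gain_db(song_lufs, song_tp_db, target_lufs, tp_ceiling_db):
    """Export gain in dB: turn down to reach the LUFS target; only turn
    up if the true-peak ceiling leaves room, and never apply limiting."""
    gain = target_lufs - song_lufs                     # negative = turn down
    if gain > 0:                                       # turning up...
        gain = min(gain, tp_ceiling_db - song_tp_db)   # ...peaks permitting
        gain = max(gain, 0.0)                          # else leave the level alone
    return gain
```

Run against the Spotify example above (-14.0 LUFS, -1.0 dBTP), this gives -7.2 dB for the -6.8 LUFS file (true peak lands at -6.5) and -1.8 dB for the -12.2 LUFS file (true peak lands at -1.7), while the normalized -15.0 LUFS file is left untouched.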
The export function simply does what most (but not all) streaming services do—turn down levels above the target LUFS to reach the desired LUFS reading. This makes sense, because the main reason streaming services adopted LUFS targets was to prevent songs that were mastered “hot” from having a level advantage over songs mastered with more reasonable (and more listenable) dynamics.
Finally, you don’t have to master everything to hit a specific service’s specs, because this new Studio One function does it for you. Master the music so it sounds good to you—go ahead and compress that rock track, or maximize an EDM set. If it has the right sound but its LUFS reading is -9 or whatever, don’t worry about it. When you export it to a specific target, the song will meet that service’s specs.
Lillian: Isabel and I met in high school back in 2016 and became friends through our shared love of music. I can remember one day getting a ukulele and showing Isabel. We would always mess around, but one day we were like, “let’s write a song.” I can remember Isabel’s dad was picking her up and we had not finished the song. He arrived at my house and we were so excited to share it. We played it for him and he loved it. From there we knew that we could write. We began going to open mics at a local coffee shop and played our songs there. After we wrote a few songs, our friends started hearing them. We grew a following and support from our friends and family. We competed in our school’s talent show with an original and got so many positive reviews.
Being in high school, it was hard to get a gig. Our town is small and there are not many opportunities for kids like us. After we had gone to the open mic multiple times, the coffee shop owner reached out to us and wanted to book us to play our music there for two hours. This was the opportunity of a lifetime for us. When the show came, we packed the place out. Our friends and family came to support us and we had the best time ever. It was such an awesome moment to play our songs and have people sing them with us.
This was something Isabel came up with and it stuck. The phrase “Happy to be here” is something that we are whenever we go to play shows. We are so grateful for the opportunities we are given, so the name is fitting. The only difference between the phrase and our name is that the word to is spelled “two” because there are two of us!
To date, we have written about six or seven songs in total, and we plan to write many more in the future! We both have a passionate obsession with music and songwriting, so it’s always easy for us to collaborate.
The spring of 2020 was like no other. Isabel and I had been at different schools by this time, but wanted to release a song. I remember Isabel posted something on Instagram and I heard a groove with it. I sent her a demo and we got to writing. We would send voice memos back and forth because of the lockdown. In the end we created a song and music video all from home. The song, “Stay,” and its music video can be found on our individual Instagram accounts (Isabel) and (Lillian).
I have been living in Nashville for about two years now and love it. I am a student at Belmont University and have been truly blessed by PreSonus and what they have done. While it might seem small, their help has made me a better musician and opened many doors. Creating this project with PreSonus gear helped me gain more knowledge of recording software. I was able to use what I learned in my own experiences and from audio engineering classes to start a job as a touring musician. The Quantum 2626 has traveled with me around the States! I have been able to drum at multiple music festivals with headliners like Miranda Lambert and Luke Bryan, using PreSonus’ gear for all of my needs during my set. Without their help, I could have easily been passed up for the gig. I am so grateful for their support!
Isabel: Working with PreSonus was such an amazing opportunity for us. Due to the pandemic and the fact that we attend different colleges, we’ve struggled with continuing our songwriting and performing as a band. PreSonus decided to assist by providing us with audio tools (Revelator USB Mic, HD9 Headphones, Quantum 2626, DM-7 Drum Mics and Studio One recording software, via PreSonus Sphere memberships) to begin creating our music via online collaboration. We wrote “Never Met,” which was something we never finished in 2019 — and it will be available for the general public on Spotify, Apple Music, YouTube, and more soon!
Studio One 5.5 has arrived, adding a ton of new mastering power to the Project Page — including Automation — as well as plenty of other features, including a strum notes feature, .MID file support for the Chord Track, Fast Ampire switching, and more!
Here’s our Studio One 5.5 video playlist below. But if you want the full story and all the nitty-gritty details…
Although this isn’t an actual talk box, it does give humanized mouth sounds with Studio One Professional. This is possible because the human mouth is a filter, and Studio One has filters…so let’s do it! Check out the audio example, then download the .multipreset to try it out for yourself.
Talk Box.multipreset – Click to download
The Talk Box works by splitting the guitar signal (fig. 1). One split goes to a Pro EQ2, which uses a Channel Editor knob to sweep three bandpass filters simultaneously over the vocal range. The other split goes through a Mixtool that flips the phase, so it cancels all the audio from the Pro EQ2 except for the bandpass filter peaks.
The Pro EQ2 uses three stages (fig. 2). The LF stage, in Peaking mode, sweeps from about 250 to 500 Hz. The LMF stage sweeps from around 750 Hz to 1.5 kHz, while the MF stage sweeps from 1.5 to 3.0 kHz.
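If you're curious about the signal math, the split-and-cancel trick amounts to EQ(x) minus x: the dry signal cancels, leaving only what the EQ added at the bandpass peaks. Here's a rough Python model, with hypothetical center frequencies picked from the sweep ranges above, and Butterworth bandpasses standing in for the Pro EQ2's filters:

```python
import numpy as np
from scipy.signal import butter, lfilter

def talkbox(x, fs, centers=(400, 1000, 2200), width=0.3):
    """Phase-cancellation sketch: (dry + boosted bandpass peaks) minus dry
    leaves only the bandpass peaks -- the vowel-like formants."""
    boosted = x.copy()
    for fc in centers:
        lo, hi = fc * (1 - width / 2), fc * (1 + width / 2)
        b, a = butter(2, [lo, hi], btype="bandpass", fs=fs)
        boosted = boosted + 4.0 * lfilter(b, a, x)  # ~+12 dB peaks added to dry
    out = boosted - x   # the Mixtool phase flip plus summing
    return 2.0 * out    # ~+6 dB makeup, like the output Mixtool

fs = 48000
t = np.arange(int(0.2 * fs)) / fs
in_band = talkbox(np.sin(2 * np.pi * 1000 * t), fs)   # lands on a "formant"
off_band = talkbox(np.sin(2 * np.pi * 100 * t), fs)   # well below the formants
```

Feeding in a 1 kHz sine (inside a band) comes through strongly, while a 100 Hz sine (outside the bands) is heavily attenuated, which is exactly the formant-like filtering the patch relies on.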
Channel Editor Settings
Use the Channel Editor to assign the filter frequencies to a Macro control knob (fig. 3).
The second Mixtool at the output increases the level by 6 dB. This makes up for lost level: because one of the splits is out of phase, only the filter peaks come through, so the output level is considerably lower than the input.
Everything described so far is included in the multipreset, but assigning the Macro knob to a MIDI controller or pedal is up to you. I did a blog post on a way to do this, and the information is also in The Huge Book of Studio One Tips and Tricks. (A heads-up for current owners of the book: a free update will be available soon with more tips, presets, and content, so stay tuned.)
Just remember that you can’t automate the Macro knob per se. You’ll need to add three Automation tracks, assigned to the Low, Low Mid, and Mid frequency parameters. Then, make sure they’re in Write mode when you move the Macro knob control, and they’ll automate the “talk box” changes.
Mix referencing, where you compare your mix to well-mixed music, can be a big help if you want a reality check on what you’re doing. There are even “curve-stealing” programs that can analyze the spectral response of one song and apply it to another one, but that won’t help train your ears, and you’ll usually need to make changes anyway. So, let’s explore how to customize the process in Studio One.
As examples, I chose two songs that are about as different as can be—”Kids of Summer” by Mayday Parade (rock), and “Spinback” by Comethazine (rap). The goal was to analyze each song’s spectral response, and come up with Pro EQ2 settings that could apply one spectrum to the other. Let’s hear how it turned out.
Fig. 1 shows the original spectra (the lower of the two lines in the graph is the most relevant). Kids of Summer is heavy on the upper mids around 1 to 2 kHz, where guitars and vocals live. Bass is relatively subdued, and the highs start rolling off above 5 kHz. Spinback is heavy on the bass, lighter on the lower mids, and spices up the treble from 5 to about 7 kHz.
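As an aside, the "curve-stealing" approach mentioned earlier boils down to subtracting one average spectrum from another and using the difference as an EQ curve. Here's a Python sketch using synthetic stand-ins (seeded noise shaped to be "bright" vs. "bass-heavy"), since the actual songs obviously can't be included here:

```python
import numpy as np
from scipy.signal import welch, butter, lfilter

def match_curve_db(source, target, fs, nperseg=4096):
    """dB curve to apply to `source` so its average spectrum approaches
    `target`'s -- the manual version of what curve-matching tools do."""
    f, p_src = welch(source, fs, nperseg=nperseg)
    _, p_tgt = welch(target, fs, nperseg=nperseg)
    return f, 10 * np.log10(p_tgt / p_src)

# Hypothetical stand-ins: a flat/bright "source" vs. a bass-heavy "target"
rng = np.random.default_rng(0)
fs = 44100
noise = rng.standard_normal(fs * 5)
source = noise
b, a = butter(2, 500, btype="low", fs=fs)
target = 3 * lfilter(b, a, noise) + 0.05 * noise  # bass-heavy

f, curve = match_curve_db(source, target, fs)
# curve is positive below ~500 Hz (boost bass), negative up high (cut treble)
```

Translating a curve like this into a handful of Pro EQ2 bands, as in the figures here, is the ear-training version: you decide which of the differences actually matter.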
Let’s listen to the original songs that the spectra represent. Copyright-wise, I think we’re good to go from a fair use standpoint—the excerpts are under 30 seconds, used for educational purposes, transformative because the EQ is going to change them, and don’t diminish the value of the music.
Now let’s create an EQ to give Kids of Summer more of the Spinback spectral response. Fig. 2 shows the EQ settings.
Fig. 3 shows the spectral response for Kids of Summer after applying the EQ.
And here’s what it sounds like.
I really like how this brings out the low-end fullness, although I’d definitely trim the highs a bit—still, it’s the start of a cool alternative. Note that for a fair comparison, the original and EQed versions were set to the same LUFS value.
Now let’s do the reverse. Figure 4 shows the Kids of Summer EQ curve we’ll apply to Spinback.
Fig. 5 shows Spinback’s spectral response after applying the Kids of Summer EQ.
Let’s hear what Spinback sounds like now.
Again, the original and EQed versions were set to the same LUFS readings. The difference isn’t as dramatic as applying Spinback to Kids of Summer, because the Spinback arrangement is more sparse. However, the EQed version makes the vocals more prominent relative to the attenuated low and high frequencies, so the whole song comes more “forward” in the speakers compared to the original. This is something that would cut through better on AM radio and mobile device speakers, at the expense of low-end fullness and high-end sizzle. Even if you’re an expert mixer, this technique can give you some fresh ideas and new ways to look at music.
And finally, to all of you, thanks for your continued support of these blog posts—and a big thanks to all the behind-the-scenes folks at PreSonus, like Houston Dragna and Ryan Roullard, who turn them into reality. Have a happy, healthy, productive 2022—and make some great music, so we all can listen to it!
It happens to everyone: creative blocks. You want to do something musical, but…you keep falling into the same old patterns, or you need something new and different to prod you into being creative. Well, that’s when it’s time to load the Songwriter’s Assistant preset into Studio One’s Chorder Note FX. And although this uses virtual instruments, you don’t even need to know how to play keyboards. Really!
Note FX Basics
A Note FX plug-in processes MIDI data coming from an external controller or Instrument track, before passing the data along to a virtual instrument. The Note FX inserts in an instrument track’s Inspector (fig. 1).
Studio One has several Note FX, but for assistance with smashing creative blocks, this tip uses Chorder as our Note FX of choice. And in the spirit of the holiday season, there’s a gift you can download—the Songwriter’s Assistant preset for Chorder.
Why Chorder Is Cool
Chorder can stack chords on a single key. This is why you don’t need to play keyboards—just hit a key, and a chord of whatever complexity you want will play. The Songwriter’s Assistant uses two keyboard octaves (fig. 2).
The range C2 – B2 plays major chords. C3 – C4 plays minor chords. So if you don’t know how to voice chords on a keyboard, no problem! And if you don’t even know what notes correspond to C2 through C4, no worries there, either. Just hit keys that sound good, and later on, we’ll deal with how to show what chords to play on guitar or other instruments.
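Under the hood, this kind of preset is just a lookup from trigger note to chord notes. Here's a Python sketch of the mapping described above, assuming the common convention where C2 = MIDI note 36 (octave-numbering conventions vary between manufacturers, so your controller may label these keys differently):

```python
# MIDI note numbers, assuming the convention C2 = 36, C3 = 48, C4 = 60
MAJOR, MINOR = (0, 4, 7), (0, 3, 7)  # semitone offsets for the triads

def build_chord_map():
    """One-finger chord map like the Songwriter's Assistant preset:
    C2-B2 triggers major triads, C3-C4 triggers minor triads."""
    chord_map = {}
    for note in range(36, 48):   # C2..B2 -> major
        chord_map[note] = [note + i for i in MAJOR]
    for note in range(48, 61):   # C3..C4 -> minor
        chord_map[note] = [note + i for i in MINOR]
    return chord_map
```

So pressing C2 (note 36) plays a C major triad (36, 40, 43), and pressing C3 (note 48) plays a C minor triad, which is all Chorder needs to do its one-finger magic.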
Studio One’s Help file has plenty of information on how to create your own Chorder presets or modify existing ones (such as adding more chord types, like 7ths, to other keyboard octaves), so we’ll just concentrate on using the Songwriter’s Assistant as a plug-and-play Note FX.
After downloading the Songwriter’s Assistant preset, import it into Chorder. Studio One saves Chorder presets in [drive]\Studio One\Presets\PreSonus\Chorder. Insert an instrument, and then insert Chorder as shown in fig. 1.
Now start playing notes, and come up with cool chord progressions. Record them in the Instrument track to build up your song’s chord progression. That’s all there is to it.
The instrument track can show the note(s) you played, or the notes Chorder generates. With Input Mode (fig. 1) set to “pre” (the same graphic as a send being pre), the track records Chorder’s output. With “post,” it records your original input, although you’ll still hear chords play back. If you selected post but want the track to include the Chorder-generated notes, right-click in the track, and choose Render Instrument Tracks. Now you’ll see what Chorder generated.
Finally, if you don’t play keyboards, just open the Chord Track and drag the instrument track up to it. Studio One will parse your chord progression, and show you the chord letters. Better yet, choose View > Chord Display to see giant chord letters marching across your screen.
And don’t overlook the factory presets, where you’ll find plenty more ideas for generating chord progressions. Is this cool, or what?
The Fat Channel is a versatile channel strip plug-in that, because of all the other cool Studio One individual processors, is easy to overlook. But it has several outstanding features, including the ability to choose from a variety of compressors—sort of like plug-ins within a plug-in (fig. 1). Three are stock; the rest are optional at extra cost from the PreSonus shop.
However, none of these compressors has a lookahead feature. Lookahead delays the audio we hear, but the compressor monitors the audio in real time. Thus, the compressor knows in advance when it needs to apply compression. Without lookahead, if you’re using heavy compression (like for guitar sustain), you’ll hear a nasty pop because the compression can’t kick in until the audio exceeds the threshold—and by that time, it’s too late. Some of the audio has already passed through uncompressed, which causes the pop. The first audio example exhibits this pop. Also note that the first and second audio examples are both normalized—but this one sounds really soft, because the pops are so loud you can’t raise the level any higher without bumping against the headroom.
The solution is simple. What’s more, it applies to not only the Fat Channel, but any dynamics processor, from any manufacturer, that doesn’t have lookahead.
Create a bus, and insert the Analog Delay. Edit the parameters for 2 ms of delay, delayed sound only, and no modulation (fig. 2). Insert the Fat Channel after the Analog Delay, and choose your favorite compressor.
At the audio track you want to process, create two pre-fader sends. One goes to the bus, and the other to the Fat Channel’s sidechain. Turn down the track’s fader so you hear only the audio coming from the bus. This accomplishes our goal: the sidechain receives the audio 2 ms before the delayed audio reaches the compressor. So, the compressor is primed and ready to go when a transient hits (fig. 3).
Now compare the next audio example to the first one—the nasty pop is gone. Yeah! Also notice how it’s much louder, because the headroom doesn’t have to accommodate a pop.
However, there’s a catch. Studio One plug-ins with a lookahead function delay the signal by 1.5 ms, but apply plug-in delay compensation so that all the other tracks are delayed by 1.5 ms. This keeps the tracks in sync, and you don’t really notice a delay this short. However, our “faux lookahead” doesn’t have plug-in delay compensation.
The workaround: move the track forward on the timeline by 2 ms to align it with the other tracks. You can do this via the Delay setting in the Event Inspector (F4).
Hardware is making a comeback. Real-time, improvisation-based drum machines and synths are gaining popularity, and you can find occasional bargains on used synths that were top of the line only a few years ago. So, it makes sense that Studio One would want to simplify integrating external hardware synths (similarly to how Pipeline integrates external hardware effects).
When Version 5 introduced the Aux Channel, comments ranged from “So great—I’ve been wanting this for years!” to “why not just feed the instrument into audio tracks?” Well, both camps have a point: Aux Channels are about workflow with external hardware synthesizers. They can streamline workflow and simplify setup compared to assigning the instrument outputs to audio tracks, which then route to the mixer.
Aux Channels monitor external audio interface inputs directly in the mixer, rather than the outputs of recorded tracks. These external inputs can be any audio. For example, when mixing, you might want to listen to a well-mixed CD for comparison. You don’t need to record this as a track; just monitor the inputs it’s feeding as needed.
Aux Channel Benefits
My favorite feature is that you can add a hardware synthesizer to the External Instruments folder (located in the Browser’s Instruments tab), and drag and drop the hardware synth into the arrange view—just like a virtual instrument. This automatically creates the Aux Channel, and sets up the Instrument track as you saved it. Set up the external synth once, then use it any time you want.
Also, the external instrument needs only the Note data track in the Arrange view—audio tracks are unnecessary because you’re just going to mix them in the console anyway. This is consistent with Studio One’s design philosophy of dedicating the Arrange view to arranging, not mixing. Of course, you can show/hide audio tracks in the Arrange view, but it’s more convenient to have those audio tracks show up directly in the console, like your other audio sources.
How to Create an Aux Channel
First, your hardware sound generator needs to be set up as an external Instrument in the Options (Windows) or Preferences (Mac) window. Then, in the Console, click on External toward the lower left. Click the downward arrow for the desired External Device, and choose Edit. (Note that if it’s a workstation that combines a sound generator with a keyboard, you should have two entries—one for the Keyboard, and one for the Instrument. Choose the instrument.)
When the control mapping window appears, click on the Outputs button (the one with the right arrow; see fig. 1), and choose Add Aux Channel.
After the Aux Channel appears, assign its input to the audio input(s) being fed by the hardware synth. For example, if the synth’s audio is going to stereo input 3+4, then choose that stereo input. (If your existing Song Setup I/O doesn’t include an easily identifiable name for the inputs being used for the Aux track, consider doing some renaming.)
Next, save these default settings. If needed, click on Outputs again to bring up the controller mapping window, and click on Save Default. Saving it is what allows the hardware instrument to show up in the Browser.
In the Browser’s Instruments tab, look under External Instruments (toward the top, just under Multi Instruments). Drag your instrument into the Arrange view, and start playing. If you don’t hear anything, the likely causes are either that the keyboard being assigned to the synth isn’t the default keyboard (specify All Inputs for MIDI in, and it should work), or the Aux Channel input hasn’t been assigned to the correct audio interface inputs.
Also note that with workstations whose keyboard drives the internal sound generator, you should turn off the Local Control parameter (usually in the instrument’s MIDI setup menu). Otherwise, you’ll be playing the sound generator directly from the keyboard while Studio One also feeds it notes, which results in double-triggered notes.
To preserve what the Instrument track plays as an audio track, choose a track’s Transform to Audio Track option, or select the Event and bounce to a new track (fig. 2).
When bouncing, make sure that the Record Input is assigned to the Aux Channel where the Instrument’s audio appears. Finally, Studio One knows that because the bounce involves external hardware, the bounce must be done in real-time (faster than real-time bouncing is possible only with virtual instruments that live inside the computer). Happy hardware, everyone!