PreSonus Blog

Why the Performance Monitor Is Cool

Ever wonder why inserting a particular plug-in makes the latency go through the roof? Or which tracks you should transform because they require a lot of CPU power, and which ones aren’t a problem? The Performance Monitor, accessed via View > Performance Monitor, reveals all.

But this isn’t just about interesting information. The Performance Monitor will help you decide which block settings to use, whether native low-latency monitoring will work for you, what level of dropout protection is appropriate, and more. Then, you can make intelligent tradeoffs to process audio in the most efficient way, while optimizing system stability.

Start at the Top

Referring to fig. 1, the readout at the top indicates CPU and disk activity. Check it periodically to make sure you aren’t running your CPU up against its limits, which can lead to audio crackles, dropouts, and other glitches. You can also set the level of dropout protection here.

Figure 1: The Performance Monitor window (the UI has been modified somewhat to make analyzing the data easier).

The Cache readout lets you know if you’re wasting storage space. The Cache accumulates files as you work on a song, but many of these files are only temporary. If you invoke Cleanup Cache, Studio One will reclaim storage space by deleting all unused temp files in the cache. If you have lots of songs and haven’t cleaned out their caches, you might be surprised at how much space this frees up. I usually wait until I’m done with a song before cleaning it up.

Finally, there’s a list of all the plug-ins that are in use. The left-most column shows how much relative CPU power a plug-in consumes, both as a bar graph and numerically. The next column to the right shows the plug-in name and format. The Type column shows whether the plug-in is an instrument or an insert effect. The final column on the right shows the delay compensation a plug-in requires.

(Note there’s also a column that shows the plug-in “path,” which is the track where the plug-in resides. For effects, it also shows the effect’s position in the insert section. However, fig. 1 doesn’t show this column, because it takes up space, and doesn’t really apply to what we’re covering.)

Figure 2: The transport shows useful performance information.

The transport includes a performance summary (fig. 2). Toward the lower left, you’ll see meters for CPU consumption (top meter) and disk activity. Click on Performance to call up the Performance Monitor window. The circle of dots indicates writing activity to the cache. Also note the figure under the sample rate—this is the total time Studio One has added for plug-in compensation.
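
A side note for the numerically inclined: that compensation figure is just plug-in latency, reported in samples, converted to time at the current sample rate. Here’s a quick Python sketch of the arithmetic, using made-up plug-in latencies rather than values from the screenshot (and summing them is a simplification—the point is the samples-to-milliseconds conversion):

# Rough sketch: converting per-plug-in latency (reported in samples)
# into a compensation time. The sample counts are invented examples.

SAMPLE_RATE = 44100  # Hz

plugin_latency_samples = {
    "phase-linear EQ": 2048,
    "lookahead limiter": 512,
    "convolution reverb": 256,
}

total_samples = sum(plugin_latency_samples.values())
total_ms = 1000.0 * total_samples / SAMPLE_RATE
print(f"Total compensation: {total_samples} samples = {total_ms:.1f} ms")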

Analyzing the Data

I’ve altered the UI graphic a bit, by grouping plug-ins, and adding a line in between to separate the groups. This makes it easier to analyze the data.

The top group includes a variety of reverbs, which in general tend to consume a lot of CPU power. Waves Abbey Road Chambers, iZotope’s Neoverb, and Rare Signals Transatlantic Plate clearly require the most CPU. Neoverb and Abbey Road Chambers also require the most latency compensation.

HD Cart is more efficient than I would have expected, and Studio One’s reverbs give a good account of themselves. Open Air and Room Reverb in Eco mode are extremely efficient, registering only 01 on the CPU meter. However, bumping up Room Reverb to HQ mode registers 03.

Bear in mind that lots of CPU consumption doesn’t mean a poor design—it can mean a complex design. Similarly, minimal CPU consumption doesn’t mean the effect won’t be as nuanced; it can simply mean the effect has been tightly optimized for a specific set of tasks. Also note that CPU-hungry reverbs are good candidates for being placed in an FX Channel or Bus. If used in individual tracks, Transform is your friend.

The next group down compares virtual instruments. You’ll see a fair amount of variation. I didn’t include the native Studio One instruments in this group, because none of them requires delay compensation. For reference, Mai Tai and Presence typically register 01 or 02 in terms of CPU consumption, while the other native instruments don’t move the CPU meter noticeably. Bottom line: if you want to have a lot of instruments in a project, use as many native Studio One versions as possible, because they’re very efficient.

The next group down is plug-ins that use phase-linear technology. All of these require large amounts of delay compensation, because that delay is what allows the plug-in to maintain linear phase internally. The Pro EQ2’s reading (which alternates between 3 and 4) is with only the phase-linear stage enabled; the standard, non-phase-linear EQ stages draw very little CPU power. This is what you would expect from an EQ that has to be efficient enough to be inserted in lots of tracks, as is typical in a multitrack project.
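
If you’re curious why phase-linear processing and latency go hand in hand: a linear-phase (symmetric) FIR filter delays every frequency by half the filter’s length, and that constant delay is exactly what the host has to compensate for. Here’s a minimal Python/SciPy sketch using an arbitrary filter length—this is the general principle, not Pro EQ2’s actual internals:

import numpy as np
from scipy.signal import firwin

SAMPLE_RATE = 48000
NUM_TAPS = 4097  # arbitrary example; longer filters = sharper EQ, more delay

# A linear-phase FIR lowpass: its coefficients are symmetric around the center tap.
taps = firwin(NUM_TAPS, cutoff=1000, fs=SAMPLE_RATE)
assert np.allclose(taps, taps[::-1])  # symmetry is what guarantees linear phase

# The price of that symmetry: a constant group delay of (N - 1) / 2 samples,
# which the DAW must report as latency so other tracks can be compensated.
delay_samples = (NUM_TAPS - 1) // 2
print(f"{delay_samples} samples = {1000 * delay_samples / SAMPLE_RATE:.1f} ms of latency")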

The next-to-the-last group is amp sims. The figures vary a lot depending on which amps, cabinets, and effects are in use. For example, the Guitar Rig 6 preset includes two of their new amps, in HQ mode, that use a more CPU-intensive modeling process. The PRS V9 doesn’t have a lot of bells and whistles, but concentrates on detailed amp sounds—hence the high CPU consumption. Ampire is somewhat more efficient than most high-quality amp sims, but there’s no avoiding the reality that good amp sims consume a lot of CPU power—which is another reason to become familiar with the Transform function.

Finally, the last group shows why, even with today’s powerful computers, people add Universal Audio’s DSP hardware to their systems. Both the Manley Massive Passive and the Shadow Hills compressor draw a lot of processing power, and require significant delay compensation. But none of that power comes from Studio One, because these plug-ins run on UA’s DSP cards, not your computer’s processor.

It’s interesting to compare plug-ins. You’ll sometimes find that free plug-ins draw a fair amount of CPU because they’re not optimized as tightly as commercial products. You’ll also see why some plug-ins will bring your computer to its knees, while others won’t.

All this reminds me of a post I saw on a forum (not Studio One’s) where a person had just bought a powerful new computer because the old one crashed so much. However, the new one was still a “crashfest”—so he decided the DAW was the problem, and it must have been coded by incompetents. A little probing by other forum members revealed that he really liked iZotope’s Ozone, so he put it on almost every track instead of using a more standard compressor or limiter. Oh, and he also used a lot of amp sims. Ooops…I’m surprised his CPU didn’t melt. If he’d had Studio One, though, and looked at the Performance Monitor, he would have found out how to best optimize his system…and now you can, too.

The Multiband Limiter

Although Studio One offers multiple options for dynamics control, there’s no “maximizing” processor, as often implemented by a multiband limiter (e.g., Waves’ L3 Multimaximizer). The Tricomp and Multiband Dynamics come close, but they’re not quite the same as multiband limiting—so, let’s use Studio One Professional’s toolset to make one.

The concept (fig. 1) is pretty simple: use the Splitter in Frequency Split mode, and follow each split with a Limiter2. The final limiter at the end (Limiter 6) is optional. If you’re really squashing the signal, or choosing a slower response, the output limiter is there to catch any transients that make it through the limiters in the splits.

Figure 1: Each of the five parallel limiters processes a specific frequency band.
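
If you like seeing signal flow as code, here’s a rough offline approximation of the same topology in Python: split into five bands, limit each band, sum them, then catch anything left over with a final limiter. The Butterworth crossovers and the bare-bones clip-style limiter below are illustrative stand-ins, not how the Splitter or Limiter2 actually work internally:

import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100

def split_bands(x, edges=(120, 500, 2000, 6000)):
    """Split audio into five adjacent bands with simple Butterworth filters."""
    freqs = [None, *edges, None]
    bands = []
    for low, high in zip(freqs[:-1], freqs[1:]):
        if low is None:
            sos = butter(2, high, btype="lowpass", fs=SR, output="sos")
        elif high is None:
            sos = butter(2, low, btype="highpass", fs=SR, output="sos")
        else:
            sos = butter(2, [low, high], btype="bandpass", fs=SR, output="sos")
        bands.append(sosfilt(sos, x))
    return bands

def crude_limit(x, ceiling):
    """Stand-in for a limiter: clamp anything above the ceiling."""
    return np.clip(x, -ceiling, ceiling)

def multiband_limit(x, band_ceiling=0.5, output_ceiling=0.9):
    limited = sum(crude_limit(b, band_ceiling) for b in split_bands(x))
    return crude_limit(limited, output_ceiling)  # the optional catch-all output limiter

# Usage: squash a second of test noise
out = multiband_limit(np.random.randn(SR) * 0.3)
print(out.shape, float(np.max(np.abs(out))))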

 The Control Panel with the Macro controls (fig. 2) is straightforward.

Figure 2: The Macro controls adjust the controls on the five Limiter2 processors that follow the splits.

Each Macro control corresponds to the same control in the Limiter2, and varies that control over its full range, in all the Limiter2 modules that follow the splits. For example, if you vary the Threshold Macro control, it controls the Threshold for all limiters simultaneously (except for Limiter 6 at the output, which has “set-and-forget” settings), over the control’s full range. However, you can get even more out of this FX Chain by opening up the Limiter2 GUIs, and optimizing each one’s settings. For example, using less limiting in the lower midrange can tighten up the sound.

You can download the multipreset from my public PreSonus Sphere area, so you needn’t concern yourself with the details of how it’s put together (although you might have fun reverse-engineering it). And is it worth the download? Well, check out the audio example. The first and second parts are normalized to the same level, but the second one is processed by the multiband limiter. Note that it has a louder perceived level, and is also more articulated. This is because each band has its own dedicated limiter. I rest my case!

Bonus Supplementary Nerd Talk

The Splitter’s filters are not phase-linear, which colors the sound. There’s an easy way to hear the effects of this coloration: Insert a mixed, stereo file in a track, then copy it to a second track. When you play the two together, they should sound the same as either track by itself—just louder.

Next, insert a Splitter in one track, select Frequency Split, and choose 5 splits. Play the two tracks together, and you’ll hear the result of the phase differences interacting. Choosing a different number of splits changes the tonality, because the phase shifts are different.
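
You can also demonstrate the coloration numerically: split a signal with ordinary minimum-phase (IIR) filters, sum the bands, and the result is no longer identical to the original, because the bands have been phase-shifted by different amounts near the crossover. A small numpy sketch, using generic Butterworth filters as stand-ins for the Splitter’s own crossover design:

import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100
x = np.random.randn(SR)  # one second of test noise

# Two-way split with ordinary (minimum-phase) IIR crossovers as stand-ins.
lo = sosfilt(butter(4, 1000, btype="lowpass", fs=SR, output="sos"), x)
hi = sosfilt(butter(4, 1000, btype="highpass", fs=SR, output="sos"), x)
recombined = lo + hi

# The sum is not the original signal; the residue is the "coloration" you hear
# when playing the split track against the unsplit copy.
residual = recombined - x
rel_rms = np.sqrt(np.mean(residual**2)) / np.sqrt(np.mean(x**2))
print(f"residual RMS relative to the input: {rel_rms:.3f}")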

This is why, for mastering, engineers often prefer a phase-linear multiband limiter—the sound is transparent, and doesn’t have phase issues. The downside of phase-linear EQ is heavier CPU consumption and increased latency. But it’s equally important to remember that phase issues are an inherent part of vintage analog EQs, which have a “character” that phase-linear EQs don’t.

So as usual, the bottom line isn’t choosing one over the other—it’s choosing the right tool for the right job. If you’ve worked only with phase-linear multiband limiters, give this variation a try. With some material, you may find it doesn’t just give more perceived level, but also, gives a more appropriate sonic character.

Sound Design for the Rest of Us

 

Last May, I did a de-stresser FX Chain, and several people commented that they wanted more sound design-oriented tips. Well, I aim to please! So let’s get artfully weird with Studio One.

Perhaps you think sound design is just about movies—but it’s not. Those of you who’ve seen my mixing seminars may remember the “giant thud” sound on the downbeat of significant song sections. Or maybe you’ve noticed how DJs use samples to embellish transitions, and change a crowd’s mood. Bottom line: sound is effective, and unexpected sounds can enhance almost any production.

It all starts with an initial sound source, which you can then modify with filtering, delay, reverb, level changes, transposition, Chord Track changes, etc. Of course, you can use Mai Tai to create sounds, but let’s look at how to generate truly unique sounds—by tricking effects into doing things they’re not supposed to do.

Sound Design Setup

The “problem” with using the stock Studio One effects for sound design is…well, they’re too well-designed. The interesting artifacts they generate are so low in level that most of the time, we don’t even know they exist. The solution is to insert them in a channel, amplify the sound source with one or two Mixtools set to maximum gain, and then enable the Channel’s Monitor button so you can hear the weirdo artifacts they generate. Automating the effects’ parameters takes this even further.

However, now we need to record the sounds. We can’t do this in the normal way, because there’s no actual track input. So, referring to fig. 1, insert another track (we’ll call it the Record Trk), and assign its input to the Effects track’s output. Both tracks need to be the same format – either both stereo, or both mono. (Note that you can also use the Record track’s Gain Input Control to increase the effect’s level.) Start recording, and now your deliciously strange effects will be recorded in the new track.

Figure 1: Track layout for creating sound effects orgies.

The FX channel is optional, but it’s helpful because the Effects track fader needs to be turned way up. With the Effects track assigned to the Main bus, we’ll hear it along with the track we’re recording. That’s not a problem when recording, but on playback, you’ll hear both what you recorded and the Effects track. So, assign the Effects output to a dummy FX bus, turn its fader down, and now you’ll hear only the recorded track on playback. The Record Trk will still work normally when recording the sound effects.

After recording the sounds, normalize the audio if needed. Finally, add envelopes, transpose the Event (this can be lots of fun), and transform the effect’s sound into something it was never intended to do. Percussion sounds are a no-brainer, as are long transitions from one part of a song to the next. And of course, the Event can follow the Chord Track (use Universal mode).

The Rotor is a fun place to start. Insert it in the Effect track, and run through the various presets. Some DJs would just love to have a collection of these kinds of samples to load into Maschine. Here’s an audio example.

Audio Example 1 Rotor+Reverb

 

The next example is based on the Flanger.

 

Audio Example 2 Flanger

 

Now we’ll have the previous Flanger example follow a strange Chord Track progression, in Universal mode.

 

Audio Example 3 Flanger+Chord track

 

Other Effects

This is just the start…check out what happens when you automate the Stages parameter in the Phaser.

 

Audio Example 4 Phaser Loop

 

Or turn the Mixverb Size, Width, and Mix up to 100%, then vary the Damping. The Flanger is pretty good at generating strange sounds, but as with some of these effects, you’ll get the best results if you set the track mode to mono. OpenAIR is fun, too—when you want a pretty cool rocket engine, load the Air Pressure preset (under Post), set Mix to 100% wet, add some lowpass filtering…and blast off!

Studio One 5.3 has arrived

The best DAW just got better. Again.

Studio One 5.3 adds new features, enhancements, and powerful workflow improvements to Studio One 5. This is a free update for Studio One 5 users and PreSonus Sphere members.

 

1. Sound Variation improvements for composers

Musical Symbols and Dynamics Processing are now integrated with Sound Variations

Both Musical Symbols and Dynamics Processing have been added to the Sound Variations editor. The integration of Musical Symbols allows composers to add symbols to their scores in a manner already familiar to them—and virtual instruments will respond with the appropriate performance articulations. Dynamics symbols are tied to MIDI Velocity, with customizable values. 

Musical Symbols also now receive their own Lane in the Note Editor—any changes made in Score View will be reflected in Piano View, and vice-versa. The Lane is divided into note-based Articulations—such as Staccato, Accent, or Portamento; and range-based Directions like Pizzicato, Vibrato, or Col Legno. Musical Symbols are now displayed directly on Note Events, and there’s even a Variations Global Track view atop the Piano View.

Musical Symbols can be mapped by hand, or Auto-Assigned based on the Sound Variation names of the currently loaded instrument. And perhaps best of all, our friends at Vienna Symphonic Library, UJAM, and EastWest have already done the Sound Variations mapping for their robust orchestral libraries—a major time-saver!

MIDI channel support for Sound Variations and improved selection

Studio One 5.3 has extended the output mapping of Sound Variations to include MIDI channel information as part of an activation sequence. MIDI channel mapping can be used alone or in combination with other activation messages like keyswitches, controllers, or velocity—deepening Sound Variations’ usability with Kontakt instruments. 

Sound Variations are faster and easier than ever to search and apply thanks to a quick right-click menu of recently-used Variations—for quick re-application—and a one-click “apply” button to place the currently-active Sound Variation at the cursor point.

Learn more about Sound Variations and Musical Symbols here.

2. Show Page improvements

Drag ‘n’ drop more things to create Patches on the fly

Drag ‘n’ drop now works in the Show Page for virtual instrument Presets—drag a Preset from the Browser to a Player, and it will create a new instance of the associated Instrument along with a new Patch. The same works when dragging Ampire to a Real Instrument or Backing Track Player—it even works with complex FX chains.

Seamless patch changes

When playing virtual instruments live, seamlessly switching between different sounds is a must! Virtual Instrument Player Patch changes are now gapless during a performance, so long notes held across patch changes will not be cut off while the new instrument is activated. Try it!

Learn more about Show Page Improvements here.

3. Format conversion and backup options

Zip and upload to PreSonus Sphere

In 5.3, you can now save any Document (Song, Show, or Project) to a .ZIP file, with the options to convert all media to .FLAC and/or exclude any unused media—keeping your file sizes down. And with one extra click, Studio One will upload your .ZIP to PreSonus Sphere Workspace for safekeeping or collaboration. Of course, you and your collaborators can download them again straight from the Cloud tab of Studio One’s Browser—you don’t have to leave Studio One and mess about with your computer’s file explorer or Internet browser. And Studio One can also open any Zip it makes.

You’ve also got new options to quickly export .AAF, Capture Sessions, MIDI Files or Open TL via the “Convert To…” option in the File menu.

Learn more about Archive and Backup improvements here.

4. New creative applications of The Chord Track

Rapid chord progression prototyping via D’n’D

With a single drag ‘n’ drop (of course), you can now drag a Chord Event from the Chord Track into an Instrument Track to render a simple Note Event chord to play with—great for auditioning and prototyping new arrangements.

You can even drag an Audio Event directly to an Instrument Track to render its chords as a Note Event, provided the chords have already been detected.

Learn more about Chord Track updates here.

 

5. MPE support for VST3 instruments

5.3 adds MPE support for VST3 instruments using Note Controllers in Studio One. This allows MPE-compatible VST3 instruments to work with Studio One and compatible hardware controllers. It’s great for users of those instruments that automatically hide their VST2 counterpart when a VST3 version is installed.

Learn more about MPE support here.

 

Full Studio One 5.3 video playlist:

Check out the “What’s new in Studio One 5.3” playlist!

 

Learn more about Studio One 

Shop Studio One

PreSonus Studio Monitors now include Studio Magic and Studio One Prime

What some folks may call “something a little extra,” or “a bonus,” we like to call lagniappe. It’s that thirteenth beignet in a baker’s dozen, or the recipe in the back of your PreSonus manual.

And it’s in that spirit that, after the success of a limited-time promo, we decided that all PreSonus customers who henceforth purchase our qualifying studio monitors (including the subwoofers!) will get a big ol’ Studio Magic software bundle worth over $1,000 US. It includes tons of plug-in effects, virtual instruments, and even music lessons—as well as a special version of Studio One Prime that grants access to all of those aforementioned plug-ins!

Click here to learn more about what you get in Studio Magic. It’s a lot!

Qualifying monitors include:

  • Eris-series
  • R-series
  • Sceptre-series
  • Temblor-series subwoofers

Metal Guitar Attack!

They’re called “power chords” for a reason—that delightful mix of definition, sludge, and hugeness is hard to resist. But can we make them even more huge and more powerful? Of course we can, so let’s get started.

This tip gives two options: non-real-time, and real time (using the High Density Ampire pack, although other amp sims and processors can work, too). Either technique also works well for LCR mixing fans.

Non-Real-Time Hugeness

  1. Insert Ampire in your guitar track, and edit it for your ideal sound (the Default Ampire preset is a good place to start).
  2. Right-click in the track’s column, and select Duplicate Track (Complete).
  3. Repeat Step 2. Now you have three identical tracks with identical processing.
  4. The key to getting Total Hugeness is transposition. Click on one track’s Event, open the Inspector (F4), and set Transpose to -12 (fig. 1). Click on another track’s Event, and in its Inspector, set Transpose to +12. Don’t change the pitch of the remaining track. (The frequency math behind those settings is sketched just below.)
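
A throwaway Python snippet of that arithmetic, purely for illustration: a shift of n semitones multiplies every frequency by 2^(n/12). Studio One’s Transpose handles the actual pitch shifting for you.

def semitones_to_ratio(n):
    """Frequency ratio for a transposition of n semitones."""
    return 2.0 ** (n / 12.0)

for shift in (-12, 0, +12):
    print(f"{shift:+3d} semitones -> pitch ratio {semitones_to_ratio(shift):.3f}")
# -12 halves every frequency (one octave down); +12 doubles it (one octave up).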

After the next section, we’ll get into panning and EQ.

Figure 1: The guitar power chord track has been duplicated twice. The audio on the track to the right has been dropped an octave.

Real-Time Hugeness

Follow the steps above for non-real-time hugeness, but don’t do Step 4. Instead:

  1. For one of the tracks, open up Ampire. Insert the Pitch Shifter before the amp, choose “dn 1 Oct” (fig. 2), click on the top of the pedal, and then drag up until the pedal’s Tune tooltip shows 100. The audio will now be transposed an octave down. If you don’t have the Ampire High Density pack, the transposers in other amp sims will work, but the one in High Density seems better than average.

Figure 2: The Pitch Shifter processor provides real-time transposition.

  2. Do the same processing on another track, but this time choose “up 1 Oct.”

What’s Next

Whether you chose real-time or non-real-time hugeness, you now have three tracks: Standard pitch, tuned down an octave, and tuned up an octave. Let’s do panning and levels. Here are some options.

  • Standard pitch full left, +12 center, -12 full right. This gives the biggest sound and is used in the audio example.
  • Standard pitch full left, -12 full right, and mute the +12 track. This is ideal for all you LCR fans. It opens up a big hole in the center for bass, kick, snare, and vocals.
  • Standard pitch full left, +12 full right, -12 full right. Another LCR favorite. The +12 gives a more defined sense of pitch in the right channel, so something else with a strong sense of pitch (e.g., Organ of Doom) can fit comfortably in the left channel.
  • Standard pitch center, +12 full left, -12 full right. This emphasizes the main guitar track with the standard tuning.

This approach also lends itself well to automating mute on the various channels. Unmute the octave-below track when you want to fatten the sound; unmute the octave-above track when you want a more defined sense of pitch.

Applying EQ to the transposed audio can customize the sound further. If you’re doing a duo with only drums and guitar, on the octave below track, boost the bass and trim the highs. Pan it to center, and pan the other two tracks left and right. Another possibility is giving more definition to the octave higher track by rolling off the lows and highs a bit and boosting the mids around 2 kHz or so.

Let’s check out the audio example…remember, it’s only one guitar.

Kisnou: Emotional Journeys Across Sonic Landscapes

Italian musician, composer, and producer Kisnou shapes the undefined chaos generated by profound experiences while growing up into a kingdom of his own—a place that gives complete freedom to creativity and imagination.

With masterpieces such as “Alive,” “Falling Deeper,” and “Vertigo,” people from all over the world began to feel a deep connection with Kisnou’s music, which accounted for more than 7 million total streams on Spotify alone in 2020. Featured by the BBC, New Balance, TV commercials, and countless Spotify playlists, his music is often described as otherworldly: perfect for anyone who wants to experience a real sonic journey.

From ambient to electronic, from orchestral to indie, Kisnou is a never-ending adventure that explores worlds of atmospheric sounds and storytelling. Featuring bittersweet poetry, untold stories, cold atmospheres, field recordings, and broken song structures, each song is a deep cinematic experience you will not forget.

Kisnou began making music using FL Studio back in 2015, eventually working for years within the Ableton Live software environment before recently discovering Studio One and PreSonus Sphere’s creative workflow environment.


In his words:

So… at the beginning, I really had no knowledge, never played an instrument. I just jumped and went for it. I felt like I had some stories to tell.

I’m a self-taught producer. It’s pretty easy to learn so many things online. I also used to listen to a lot of music, every day—while drawing or doing homework, while coming home from school. It was a part of me and of my life, every day. Many people are surprised when I say that I’m self-taught, especially those who are musicians or producers as well. It makes me feel happy, but I have always been down to Earth and very respectful. For example, in 2020 an American writer sent me one of his books, as a thank you gift because he loved my music. The book is called Wounded Tiger, and the author is such a wonderful person. It is a book about World War II and the true stories of multiple people that lived through that moment of history. I can’t say much about it but the author is trying to find the right chance to make a movie out of it… and I might be a part of the soundtrack team. Fingers crossed!

I graduated in 2019 and got my Bachelor of Arts in Commercial Music, but since 2017 I have been making music for a good fan base online that has grown quite fast. I hit my first million streams on a song, and from there it started to get even better! I had an income, collaboration opportunities, and a licensing partnership with Marmoset Music that got me some really good placements! One of my songs was featured in a New Balance commercial and a Tomorrowland video. Now music is my full time job. I currently have around 150,000 monthly listeners on Spotify alone.

The first artist who actually truly inspired me to make music was Koda. He is a talented guy from Los Angeles who wrote some beautiful songs. His songs were just pure magic for me, they resonated like nothing else. I felt like the lyrics were talking to me. My favorite song from him is “Angel.” I loved the video as well, so much that I contacted the video artist a couple of years ago and we created the music video for my song “In The Origin, We Breathe.”

Other inspirations include: The Cinematic Orchestra, Bersarin Quartett, Sorrow (a great electronic/garage music producer), Pensees, and Owsey. I come from the Ableton world, so I am also very much into electronic music, future garage, and ambient. I am in love with atmospheres, long reverbs, evolving sounds, textures and so on.

Lately I have been listening to the YouTube channel Cryo Chamber. Some songs are a bit too dark sometimes, but you can find such incredible atmospheres. I find it very inspiring.

You know, I live in the countryside, so I am always spending time in nature. I feel like I am lucky to be living here, but at the same time you might feel isolated or lonely quite often. It depends on the mood I guess.

I used Ableton for 3-4 years, made great songs thanks to that DAW, but somehow… I wasn’t really feeling comfortable there. I was slowly getting sick of it, even if the creative tools, the stock plugins and workflow were amazing.

By chance I found out about Studio One and then I started to see what you could do with it and it slowly got my interest, until I finally decided to make the switch.

Currently, I just try to make Studio One adapt to my workflow and that was quite easy. The possibility to internally customize shortcuts and create macros is just wonderful in my opinion. I have many macros mapped around my keyboard, and have others on the buttons of my mouse. I have mapped CTRL + ALT as a hold command on one of the two main side buttons, then on the other one I have a Macro that activates the bend marker view, automatically swaps to the Bend Tool so that I can do my edits and then press it again to deactivate the bend view.

On the four lower side buttons I have mapped the editor, channel, inspector and browser for quick tasks. Though if I hold CTRL and press those buttons, or ALT, I have other sets of commands to help me out.

One more functionality that I love is the Transform to Audio Track command, which prints a MIDI file into audio, but it’s better compared to what I’ve seen in other DAWs I’ve used in the past (FL Studio, Ableton, or Pro Tools) because I can print the MIDI to audio and preserve the instrument—so that if I ever want to revert back to the plug-in, I can do that at any given moment. I can choose to render the insert FX or not, which is also great.

In other DAWs, I either had to make a copy of the plug-in, print one to audio and leave the other there, just disabled. Sometimes I printed a MIDI file into audio feeling that it was perfect, then days later, I felt like I wanted to edit the plugin… and I couldn’t do it anymore because I had not copied the plug-in instance before printing.

Lastly, I’m pleased to be a featured artist on PreSonus Sphere!

The presets I created revolve around the use of white noise, layering and distortion: aspects that I have been exploring in the last months to create a sort of vintage but modern, textured sound. Warm, lush pads and pluck sounds, distorted reverbs and atmospheres were my North Star when creating these presets.

There are 20 presets in all in this pack: FX chains, pad sounds for Presence, some Macros, Mai Tai patches, and a custom reverb of mine… enjoy!

PreSonus Sphere members can click here to get them!

 


Join PreSonus Sphere today to check out Kisnou’s exclusive Presets and those from other featured artists!

Only $14.95 per month for Studio One Professional, Notion, and so much more.

Mid-Side Processing with Artist

This is a companion piece to last week’s tip, which described how to implement Splitter functionality in Studio One’s Artist version. The Pro version has a Splitter-based, mid-side processing FX Chain that makes it possible to drop effects for the mid and side audio right into the FX chain. However, the Splitter isn’t what does the heavy lifting for mid-side processing—it’s the Mixtool, which is included with Artist.

Mid-Side Refresher

The input to a mid-side processing system starts with stereo, but the left and right channels then go to an encoder. This sends a signal’s mid (what the left and right channels have in common) to the left channel, while the sides (what the left and right channels don’t have in common) go to the right channel.

The mid is simply both channels of a stereo track panned to center. So, the mid also includes what’s in the right and left sides, but the sides are at a somewhat lower level. This is because anything the left and right channels have in common will be a few dB louder when panned to center.

The sides signal is also both channels of a stereo track panned to center, but with one of the channels out of phase. Therefore, whatever the two channels have in common cancels out. (This is the basis of most vocal remover software and hardware. Because vocals are usually mixed to center, cancelling out the center makes the vocal disappear.)

Separating the mid and side components lets you process them separately. This can be as simple as changing the level of one of them to alter the balance between the mid and sides, or as complex as adding signal processors (like reverb to the sides, and equalization to the mid).

After processing, the mid and sides then go to a decoder. This converts the audio back to conventional stereo.
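
If you’d rather see the encoder and decoder as plain arithmetic, here’s a minimal numpy sketch of the sum-and-difference math described above. The 1/2 scaling is one common convention; the Mixtool’s exact gain staging isn’t documented here, so treat this as the concept rather than the implementation:

import numpy as np

def ms_encode(left, right):
    """Mid = what L and R have in common; Sides = what they don't."""
    mid = (left + right) / 2.0
    sides = (left - right) / 2.0
    return mid, sides

def ms_decode(mid, sides):
    """Convert back to conventional stereo."""
    return mid + sides, mid - sides  # left, right

# Sanity check: encode then decode returns the original stereo signal.
left, right = np.random.randn(1000), np.random.randn(1000)
dec_left, dec_right = ms_decode(*ms_encode(left, right))
assert np.allclose(dec_left, left) and np.allclose(dec_right, right)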

Mid-Side Channel and Bus Layout

Fig. 1 shows what we need in Artist: the original audio track, a bus for the mid audio, a bus for the side audio, and a bus for the final, decoded audio.

Figure 1: Channel and Bus layout for mid-side processing in Artist.

Insert a Mixtool in the original audio track, and enable MS Transform (see fig. 2). Then, we need to send the encoded signal to the buses. Insert one pre-fader send, assign it to the Mid bus, and pan it full left. Then, insert another pre-fader send, assign it to the Sides bus, and pan it full right.

Figure 2: The Mixtool settings are the same for both Mixtools. The middle Dual Pan inserts in the Mid bus, while the lower Dual Pan inserts in the Sides bus.

Referring to fig. 2, the Mid bus has a Dual Pan inserted after any processing, with both controls panned full left. Similarly, the Sides bus has a Dual Pan inserted after any processing, with both controls panned full right. (The Pro EQ2 and OpenAIR inserted in fig. 1 are included just to show that you can insert any effects before the Dual Pan plug-ins; they’re not needed for mid-side processing.) Pan the Mid bus fader left, and the Sides bus fader right.

Assign the bus outputs to the Decoded bus. This has a Mixtool inserted, again with MS Transform enabled. And that’s all there is to it—the Decoded bus is the same as the original audio track, but with the addition of any changes you added to the Mid or Side buses.

To make sure everything is set up correctly, remove any effects from the Mid and Sides, and set all the bus levels to 0. Copy the original audio track, insert a Mixtool into it, and enable Invert Left and Invert Right. Adjust the copied track’s level, and if there’s a setting where it cancels out the decoded track, all your routing, panning, and busing is set up correctly. Happy processing!

Make a Splitter for Studio One Artist

One of my favorite Studio One Professional features is the Splitter, and quite a few of my FX Chains use it. If you own Studio One Artist, which doesn’t have a Splitter, you may look longingly at these FX Chains and think “If only I could do that…”

Well, you can implement most splitter functions in Studio One Artist, by using buses. All the following split options are based on having a track that provides the audio to be split, along with pre-fader sends to additional buses. Note that the track’s fader should be turned all the way down.

 

Normal Split

 

The Splitter’s Normal mode sends the input to two parallel paths, which is ideal for parallel processing. For Artist, we’ll duplicate this mode with two buses, called Split 1 and Split 2 (fig. 1).

 

Figure 1: How to create a Normal split in Artist.

 

The sends to the buses are pre-fader, and panned to center. One send goes to Split 1, and the other to Split 2. Now you can insert different effects in Splits 1 and 2 to do parallel processing.

 

Channel Split

 

The Channel Split mode also splits the input into two parallel paths. One path is for the left channel, while the other path is for the right channel.

 

Figure 2: How to create a Channel Split in Artist.

 

The setup is the same as for the Normal Split (fig. 2), except that each bus has a Dual Pan inserted. The Dual Pan for the left channel has the Input Balance set to <L>, while the Dual Pan for the right channel has the Input Balance set to <R>. I recommend the -6 dB Linear Pan law so that if you pan either of the buses, the level remains constant as you pan from left to right.
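
As a footnote on that pan law: with a linear (-6 dB center) law, the left and right gains always add up to 1, which is why the summed level stays constant as you sweep across the field. Here’s a quick sketch of the textbook formula (not pulled from Studio One’s documentation):

import numpy as np

def linear_pan_gains(position):
    """position: -1 = full left, 0 = center, +1 = full right (-6 dB linear pan law)."""
    p = (position + 1) / 2.0  # map to the 0..1 range
    return 1.0 - p, p         # left gain, right gain

def to_db(gain):
    return 20 * np.log10(gain) if gain > 0 else float("-inf")

for pos in (-1.0, 0.0, 1.0):
    gl, gr = linear_pan_gains(pos)
    print(f"pos {pos:+.1f}: L {to_db(gl):6.1f} dB, R {to_db(gr):6.1f} dB, summed gain {gl + gr:.2f}")
# At center each side sits at about -6 dB, and the gains always sum to 1.0.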

 

Frequency Split

 

This is tough to duplicate, because the Splitter can split incoming audio into five frequency bands. If other DAWs don’t do it, we can’t expect Artist to do it. But, we can do a three-way, tri-amped split into low, mid, and high frequencies (fig. 3).

 

Figure 3: Tri-Amp Frequency Split.

 

This split is like the Normal Split, except that there are three buses and pre-fader sends instead of two, and each bus has a Pro EQ2 inserted. Each EQ covers its own part of the frequency spectrum—low, mid, and high (fig. 4). Using 6 dB/octave slopes doesn’t provide as much separation between frequency ranges as steeper slopes, but the gentler slopes are necessary to make sure the frequency response is flat when you mix the three channels together.

Figure 4: (Top to bottom) low, mid, and high curves.

 

The only filter sections we need to use are High Cut and Low Cut—you can ignore everything else. Fig. 5 shows the settings. All bands have 6 dB/octave slopes.

 

Enable the Low band’s Pro EQ2 HC (High Cut) filter, and choose 200 Hz for the frequency. Enable the Mid band’s Pro EQ2 LC (Low Cut) filter, and set it to 200 Hz; also enable the HC filter, and set it to 4.00 kHz. Finally, enable the High band’s Pro EQ2 LC filter, and set it to 4.00 kHz. These frequencies are a good starting point, but you may want to modify the split frequencies for different types of audio sources. Just make sure that the Low band’s HC frequency is the same as the Mid band’s LC frequency, and the Mid band’s HC frequency is the same as the High band’s LC frequency.

Figure 5: Filter control settings.
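
Why do the gentle slopes matter? First-order (6 dB/octave) low-cut and high-cut filters at the same frequency are complementary, so the three bands add back up to something very close to the original. Here’s a small frequency-response check in Python, assuming the 200 Hz and 4 kHz crossovers above and generic first-order filters (the Pro EQ2’s exact implementation may differ):

import numpy as np
from scipy.signal import butter, sosfreqz

SR = 44100
LOW_X, HIGH_X = 200.0, 4000.0  # crossover frequencies from the text

def first_order(kind, fc):
    return butter(1, fc, btype=kind, fs=SR, output="sos")

w, lp200 = sosfreqz(first_order("lowpass", LOW_X), worN=8192, fs=SR)
_, hp200 = sosfreqz(first_order("highpass", LOW_X), worN=8192, fs=SR)
_, lp4k = sosfreqz(first_order("lowpass", HIGH_X), worN=8192, fs=SR)
_, hp4k = sosfreqz(first_order("highpass", HIGH_X), worN=8192, fs=SR)

low_band = lp200          # HC at 200 Hz
mid_band = hp200 * lp4k   # LC at 200 Hz plus HC at 4 kHz
high_band = hp4k          # LC at 4 kHz

# Multiplying the complex responses cascades the filters; adding them mixes the bands.
summed_db = 20 * np.log10(np.abs(low_band + mid_band + high_band))
print(f"summed response stays within {summed_db.min():+.2f} to {summed_db.max():+.2f} dB of flat")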

Granted, setting up these splits takes more effort than dragging a Splitter plug-in into a channel, but the result is the same: cool parallel processing options.

The Dynamic Brightener—Reloaded

In April 2019, I did a Friday Tip called The Dynamic Brightener for Guitar. It’s kind of a cross between dynamic EQ and a transient shaper, and has been a useful FX Chain for me. In fact, it’s been so useful that I’ve used it a lot—and in the process, wanted to enhance it further. This “reloaded” version makes it suitable for more types of audio sources (try it with drums, bass, ukulele, piano, or anything percussive), as well as less critical to adjust. It also lessens potential high-frequency “smearing” issues—the original version applied large amounts of boost and cut, with a non-linear-phase EQ.

Although the original version could have been built using a Splitter, I did a bus-based implementation so that it would work with Studio One Artist. This new version needs to use the Splitter (sorry, Artist users), but that’s what allows for the improvements.

Another interesting aspect is that by using the effects’ expanded view in the channel inserts, you don’t even need to open the effect or Splitter interfaces to do all the necessary tweaking. This makes the reloaded version much easier to edit for different types of tracks.

How It Works

Fig. 1 shows the FX Chain’s block diagram.

Figure 1: The Reloaded Dynamic Brightener’s block diagram.

 

Splitter 1 is a normal split. The left split provides the track’s dry sound, while the right split goes to Splitter 2, which is set up as a Frequency Split. The Frequency Split determines the cutoff for the high frequencies going into the right split. Splitter 2’s left split, which contains only the split’s lower frequencies, is attenuated completely. Basically, Splitter 2 exists solely to isolate the audio source’s very highest frequencies.

These high frequencies go to an Expander, which emphasizes the peaks. This is what gives both the transient shaping and dynamic EQ-type effects. Because the high frequencies aren’t very loud, the Mixtool allows boosting them to hit the desired level.
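
Conceptually, the whole chain boils down to: isolate the top of the spectrum, emphasize its peaks with expansion (quiet material gets pushed down, so the transients stand out), and blend a controlled amount back in with the dry signal. Here’s a rough offline approximation in Python; the filter and the simple downward expander are illustrative stand-ins, not the actual Splitter and Expander algorithms:

import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100

def dynamic_brighten(x, split_hz=4000.0, threshold=0.05, ratio=2.0, gain=4.0, mix=0.5):
    """Sketch of the brightener idea: dry signal + expanded, boosted high band."""
    # Stand-in for Splitter 2: keep only the content above the split frequency.
    highs = sosfilt(butter(2, split_hz, btype="highpass", fs=SR, output="sos"), x)

    # Stand-in for the Expander: push the band down when it falls below the
    # threshold, which leaves mostly the transient peaks.
    env = np.abs(highs)
    expanded = highs.copy()
    quiet = env < threshold
    expanded[quiet] *= (env[quiet] / threshold) ** (ratio - 1.0)

    # Stand-in for the Mixtool: make-up gain, then blend with the dry path.
    return x + mix * gain * expanded

# Usage: brighten a second of dull (lowpass-filtered) noise.
dull = sosfilt(butter(2, 2000, btype="lowpass", fs=SR, output="sos"), np.random.randn(SR))
brightened = dynamic_brighten(dull)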

Fig. 2 shows the initial Expander and Mixtool settings. But, you won’t be opening the interfaces very much, if at all…you don’t even need Macro Controls.

 

Figure 2: Initial parameter settings for the Expander and Mixtool.

 

Using the Reloaded Dynamic Brightener

In the short console view, open up the “sidecar” that shows the effects. Expand the effects, and make the mixer channel tall enough to see the controls shown in fig. 3.

Figure 3: The Reloaded Dynamic Brightener controls.

Here’s how to optimize the settings for your particular application:

  1. Turn off Splitter 1’s output 1 power button. This mutes the dry signal, so we can concentrate on the brightener’s settings.
  2. Adjust Splitter 2’s Frequency Split to isolate the optimum high-frequency range for brightening. This can be as low as 1 kHz or less for guitars with humbucker pickups, on up to 6 kHz (or more) to emphasize drum transients.
  3. Set the Expander’s Ratio and Threshold parameters for the desired amount of brightening and transient shaping. Higher Threshold settings pick off only the top of the boosted high-frequency peaks; the Ratio parameter controls the transient shape. The higher the ratio, the “peakier” the transient.
  4. After editing the high frequencies, re-enable the dry signal by turning on Splitter 1’s output 1 button.
  5. Mix in the desired amount of brightening with the Mixtool Gain parameter. In extreme cases you may want to increase the level control at the end of the Splitter 2 branch, or the output level from Splitter 2 output 2, but this will be needed rarely, if at all.
  6. As a reality check to determine what the brightener contributes to the sound, turn off either Splitter 1 or Splitter 2’s output 2 power button to mute the brightened signal path.

 

This is a tidier, easier-to-adjust, and better-sounding setup than the original dynamic brightener. Download the FX Chain here—the default settings are for dry guitar, and assume a normalized overall track level. With lower track levels, you’ll need to lower the Expander Threshold, or boost output 2 from Splitter 1. But feel free to tweak away, and make the Reloaded Dynamic Brightener do your bidding, for a wide variety of different audio signals.