The Studio One “Drumcoder”

 

Vocoders are processors that use the audio from vocals (called the modulation source, or modulator) to modulate another sound, like a synthesizer pad (called the carrier). However, no law says you have to use vocals as a modulator, and I often use drums to modulate pads, power chords, and more. While Studio One’s toolset doesn’t have enough resolution for super-intelligible vocoding with voice, it’s perfect for drumcoding, which actually benefits from the lower resolution.

This tip is for advanced users and requires a fairly complex setup. Rather than go into too much detail about how it works, simply download the Drumcoder.song file, linked below, which has a complete drumcoding setup. Load Drumcoder.song into Studio One 5, press play, and you’ll hear what drumcoding is all about. (Note that the file format isn’t compatible with previous Studio One versions. However, based on the description in this tip, you should be able to “roll your own” drumcoding setup in previous Studio One versions.)

Click here to get drumcoder.song

 

Let’s check out an audio demo. The first half has the drumcoded sound only, while the second half mixes in the drum (modulator) sound.

 

 

But wait—there’s more! Although the drumcoder isn’t designed to be the greatest vocoder in the world (and it isn’t), you can still get some decent results. Here, the voice is saying “Even do some kinds of vocal effects with the PreSonus drumcoder—have fun!”

 

 

 

Next, we’ll explore how it works…or if you’re impatient, just reverse-engineer the song.

Drumcoding Explained

Vocoding splits the modulator (like voice or drums) into multiple frequency bands. In a traditional vocoder, each band produces a control voltage that corresponds to the audio’s level in each band. Similarly, the carrier splits into the same frequency bands. A VCA follows each carrier band, and the VCAs are fed by the modulator’s control voltages. So, if there’s midrange energy in the modulator, it opens the VCA for the carrier’s midrange audio. If there’s bass energy in the modulator, it opens the VCA for the carrier’s bass audio. With a vocoder, as different energy occurs in different bands that cover a vocal’s frequency range, the carrier mimics that same distribution of energy in its own bands. This is what generates talking instrument effects.
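If you’d like to see the band-splitting and “VCA” idea expressed in code, here’s a minimal channel-vocoder sketch in Python (using NumPy and SciPy). This is not how Studio One implements anything; the filter order, split frequencies, release time, and function names are placeholders chosen to echo the setup described below.

```python
# Minimal channel-vocoder sketch: each modulator band's envelope
# (the "control voltage") scales the carrier's matching band (the "VCA").
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100
EDGES = [200, 400, 800, 1600]   # placeholder split points in Hz (five bands)

def split_bands(x, fs=FS, edges=EDGES):
    """Split a mono float signal into len(edges)+1 frequency bands."""
    bands, lo = [], None
    for hi in edges + [None]:
        if lo is None:                                    # lowest band: low-pass
            sos = butter(4, hi, btype='low', fs=fs, output='sos')
        elif hi is None:                                  # highest band: high-pass
            sos = butter(4, lo, btype='high', fs=fs, output='sos')
        else:                                             # middle bands: band-pass
            sos = butter(4, [lo, hi], btype='band', fs=fs, output='sos')
        bands.append(sosfilt(sos, x))
        lo = hi
    return bands

def envelope(x, fs=FS, release_ms=128.0):
    """Crude rectify-and-smooth envelope follower (instant attack, slow release)."""
    alpha = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env, level = np.zeros_like(x), 0.0
    for i, s in enumerate(np.abs(x)):
        level = max(s, alpha * level)
        env[i] = level
    return env

def drumcode(modulator, carrier):
    """Sum of carrier bands, each scaled by the matching modulator band's envelope.
    Assumes two mono float arrays of equal length; no gain normalization is attempted."""
    out = np.zeros_like(carrier)
    for m_band, c_band in zip(split_bands(modulator), split_bands(carrier)):
        out += c_band * envelope(m_band)
    return out
```

The point is simply that each carrier band gets multiplied by its matching modulator band’s envelope, which is all a vocoder really does.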

Studio One’s Implementation

Vocoders typically need at least eight frequency bands to make voices sound intelligible. Studio One’s Splitter can divide incoming audio into five bands, which is enough resolution for drumcoding. Fig. 1 (which takes some graphic liberties with Studio One’s UI) shows the signal flow.

 

Figure 1: Drumcoder signal flow.

 

The Drums track provides the modulator signal, and the Mai Tai synthesizer provides the carrier. The Drums track has five pre-fader sends to distribute the drum sound to five buses. As shown in Fig. 2, each of the five buses has a Splitter (but no other effects) set to Frequency Split mode, with splits at 200, 400, 800, and 1600 Hz. The < 200 Hz bus mutes all splits except for Split 1, the 200-400 Hz bus mutes all splits except for Split 2, the 400-800 Hz bus mutes all splits except for Split 3, the 800 Hz-1.6 kHz bus mutes all splits except for Split 4, and the > 1.6 kHz bus mutes all splits except for Split 5. Now each bus output covers one of the five bands.

Figure 2: Splitter settings for the five buses.

 

The Mai Tai carrier has a splitter set to the same frequencies. Each split goes to an Expander, which basically acts like a VCA; see Fig. 3. We don’t need to break out the Splitter outputs, because you can access the sidechain for the Expanders located within the Splitter. (A Mixtool follows each Expander, but it’s there solely to provide a volume control for each of the carrier’s bands in the control panel.)

Figure 3: Effects used for the Mai Tai synthesizer carrier track.

As to the bus outputs, the < 200 Hz bus has a send that goes to the sidechain of the Expander in the carrier’s < 200 Hz split. The 200-400 Hz bus has a send to the sidechain of the Expander in the carrier’s 200-400 Hz split. The 400-800 Hz bus has a send to the sidechain of the Expander in the carrier’s 400-800 Hz split…you get the idea. Basically, each bus provides a “control voltage” for the corresponding “VCA” (Expander) that controls the level of the carrier’s five bands.

Fig. 4 shows the Control panel.

 Figure 4: Drumcoder macro controls.

Threshold, Ratio, and Range cover the full range of Expander controls. They affect how tightly the Expander follows the modulator, which controls the effect’s percussive nature. Just play around with them until you get the sound you want. The Expander Envelope settings aren’t particularly crucial, but I find 0.10 ms Attack and 128.0 ms Release work well. Of course, you also need to enable the sidechain for each Expander, and make sure it’s listening to the bus that corresponds to the correct band.
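If the Threshold/Ratio/Range interaction seems abstract, here’s a rough Python sketch of a downward expander’s static gain law. It’s a textbook formulation, not Studio One’s Expander code, and the numbers are placeholders rather than the Drumcoder defaults.

```python
import numpy as np

def expander_gain(level_db, threshold_db=-40.0, ratio=4.0, range_db=60.0):
    """Static gain (in dB) of a downward expander: sidechain levels below
    the threshold are pushed down by (ratio - 1) dB per dB, with Range
    capping the maximum attenuation. Placeholder values throughout."""
    if level_db >= threshold_db:
        return 0.0                                   # above threshold: unity gain
    reduction = (threshold_db - level_db) * (ratio - 1.0)
    return -min(reduction, range_db)                 # Range caps the attenuation

# Example: with a -40 dB threshold and 4:1 ratio, a -50 dB sidechain level
# (10 dB below threshold) yields 30 dB of gain reduction.
print(expander_gain(-50.0))    # -> -30.0
```

The tighter the ratio and the higher the threshold, the more abruptly the carrier bands snap open and shut with the drums, which is what gives the effect its percussive character.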

The five knobs toward the right control the level of the individual bands by altering the Gain of the Mixtool that follows each band’s Expander. The five associated buttons enable or bypass the Expander for a particular band, which can give some really cool effects. For example, turn off the Expander on the Mid band, and with the Song’s Mai Tai preset, it almost sounds like a choir is singing along with the drumcoded drums.

Vocal Effects

Although the Drumcoder isn’t really designed for vocal effects, it still can be fun. The key is to bring up the > 1.6 kHz Bus slider, as this mixes in some of the voice’s “s” sounds, which give intelligibility. Experiment with the Expander controls to find what works well. If you really want to dig into vocal applications, edit the Splitter frequencies to optimize them for the vocal range instead of drums…or leave a comment asking me to pursue this further.

Due to the complexity, if I think I’m going to use the Drumcoder, I’ll just treat this song like a template and build the rest of the song from there. But once you understand the principle of operation, you can always add the effect to an existing song as needed. I have to say this is one of my favorite Friday tips ever… I hope you enjoy playing with the Drumcoder!

 

 

Studio One 5’s Tape Emulator

 

Although Studio One 5 doesn’t have a tape emulator plug-in per se, it can emulate some of the most important characteristics that people associate with “the tape sound.” Truly emulating tape can go down a serious rabbit hole because tape is a complicated signal processor; no two vintage tape recorders sounded the same because they required alignment (influenced by the engineer’s preferences), used different tape formulations, and were in various states of maintenance. However, emulating three important characteristics provides what most people want from tape emulation.

  • Tape saturates, which rounds off waveform peaks and affects dynamic range. This gives a higher average level, which is part of why tape sounds “punchy.” (There’s a small numeric sketch of this after the list.)
  • Head “bump.” The frequency of a bass range peak (around 2 dB) depends on the tape speed and the tape machine. At 15 IPS, a typical peak is in the 40-70 Hz range, and at 30 IPS, in the 70-150 Hz range. However, at 30 IPS, the bass response drops off below the bump—sometimes drastically, sometimes gently. Even though in theory 30 IPS offered better fidelity, many engineers preferred to work at 15 IPS due to the bass response characteristics (and they saved money by using half as much tape for the same recording time).
  • Tape is a flawed recording medium that trades off noise, high-frequency response, and distortion. For example, some engineers aligned their machines to underbias the tape, which increased distortion but gave more highs; other engineers did the reverse and made up for the lack of highs with subsequent equalization.
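Here’s the numeric sketch promised above. It uses a generic tanh curve as a stand-in for tape saturation; it is not how Studio One’s processors (or any particular tape machine) behave, but it shows how rounding off the peaks raises the average level relative to the peak.

```python
import numpy as np

def tape_saturate(x, drive=2.0):
    """Generic tanh soft clipper: rounds off peaks so the average level
    rises relative to the peak level. Placeholder drive value."""
    return np.tanh(drive * x) / np.tanh(drive)   # normalized so +/-1.0 still peaks near +/-1.0

t = np.linspace(0, 1, 44100)
sine = 0.9 * np.sin(2 * np.pi * 100 * t)
sat = tape_saturate(sine)
print(f"peak {sat.max():.2f}, RMS before {np.sqrt(np.mean(sine**2)):.2f}, "
      f"after {np.sqrt(np.mean(sat**2)):.2f}")   # RMS rises, peak barely moves
```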

Check out the audio example to hear what this FX Chain can do. The first part is unprocessed, while the second part uses the default FX Chain control settings with a little underbiasing and head bump. The difference is subtle, but it adds that extra “something.”

 

 

The Tape Emulator FX Chain

This FX Chain starts with a Splitter, which creates three signal paths: one for saturation, one for hiss, and one for hum (Fig. 1).

Figure 1: FX Chain block diagram.

 

After auditioning all available Studio One 5 saturation options, I liked the TriComp best for this application. The Pro EQ stage preceding the TriComp provides the head bump EQ and has a control to emulate the effect of underbiasing tape (more highs, which pushes more high-frequency level into the TriComp and therefore increases distortion in that range) or overbiasing (less highs, less distortion).
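If you want to experiment with the head-bump idea outside the FX Chain, a standard RBJ “cookbook” peaking biquad does the job. The 55 Hz center, +2 dB boost, and Q below simply follow the 15 IPS ballpark mentioned earlier; they are not the Pro EQ settings used in the FX Chain.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(f0, gain_db, q, fs):
    """RBJ cookbook peaking-EQ coefficients (b, a), normalized."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
b, a = peaking_biquad(f0=55.0, gain_db=2.0, q=1.0, fs=fs)   # "head bump" around 55 Hz
noise = np.random.randn(fs)                                  # any test signal will do
bumped = lfilter(b, a, noise)                                # signal with the bump applied
```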

At first, I wasn’t going to include tape hiss and hum, but if someone needs to use this FX Chain for sound design (e.g., an actor starts a tape in a theatrical production), then including hiss and hum sounds more authentic. An additional knob chooses 50 or 60 Hz hum, which represents the power standards in different countries. (Note that the closest you can get to these frequencies is 50.4 and 59.1 Hz, but that’s good enough.) However, I draw the line at including wow and flutter! Good riddance to both of them.

Because creating three splits reduces each split’s level, the TriComp Gain control provides makeup gain.

Turning Bump on adds a boost at the specified frequency, but also adds a 48 dB/octave low-cut filter around 23 Hz to emulate the loss of very low frequencies due to the head bump. As a result, depending on the program material, adding the bump may increase or decrease the total apparent bass response. For additional flexibility, if you turn Bump Amount down all the way, the Bump On/Off switch enables or disables only the 48 dB/octave low-cut filter.

Fig. 2 shows some typical spectra from using the FX Chain.

Figure 2: The top curve shows the head bump enabled, with underbiasing. The lower curve shows minimal added bump, but with the ultra-low cut filter enabled, and overbiasing.

Roll Tape!

The controls default to rational settings (Fig. 3), which are used in the audio example. But as usual with my FX chains, the settings can go beyond the realm of good taste if needed.

 

Figure 3: Control panel for the Tape Emulator.

For example, I rarely go over 2-3% saturation, but I know some of you are itching to kick it up to 10%. Ditto tape hiss, in case you want to emulate recording on an ancient Radio Shack cassette recorder—with Radio Shack tape. Just remember that the Bias control is clockwise to overbias (less highs), and counter-clockwise to underbias (more highs).

There’s a lot of mythology around tape emulations, and you can find some very good plug-ins that nail the sound of tape. But try this FX Chain—it may give you exactly what you want. Best of all, I promise you’ll never have to clean or demagnetize its tape heads.

Download the Tape Emulator.multipreset here!

How to Make Spotify Happy

 

With physical audio media in its twilight, streaming has become the primary way to distribute music. A wonderful side effect has been the end of the loudness wars, because streaming services like Spotify turn levels up or down as needed to attain a specific, consistent perceived level—squashing a master won’t make it sound any louder.

However, the “garbage in, garbage out” law remains in effect, so you need to submit music that meets a streaming service’s specs. For example, Spotify prefers files with an integrated loudness of -14.0 LUFS (measured per the EBU R128 standard) and a True Peak reading of -1.0 or lower. This avoids adding distortion when transcoding to lossy formats. If the LUFS reading is above -14.0, then Spotify wants a True Peak value under -2.0.
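As a worked example of that arithmetic (this is just the rules above restated in Python, not anything Spotify publishes as code), here’s a quick way to check a measured LUFS/True Peak pair and see how much trim would bring the peak into spec:

```python
def check_spotify(lufs, true_peak_db):
    """Apply the rules described above: -14 LUFS or quieter wants True Peak <= -1 dBTP;
    masters louder than -14 LUFS want True Peak <= -2 dBTP."""
    tp_limit = -1.0 if lufs <= -14.0 else -2.0
    ok = true_peak_db <= tp_limit
    # Turning the whole master down by X dB lowers LUFS and True Peak by the same X dB.
    trim_db = min(0.0, tp_limit - true_peak_db)
    return ok, trim_db

print(check_spotify(-14.0, -0.2))   # (False, -0.8): trim 0.8 dB to get the peak under -1 dBTP
```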

Fortunately, when you Detect Loudness for a track on the mastering page, you’ll see a readout of the LUFS and LRA (a measure of overall dynamic range), as well as the True Peak, RMS (average signal level), and DC offset for the left and right channels. Fig. 1 shows an example of the specs generated by detecting loudness.

Figure 1: Although the LUFS reading meets Spotify’s specs, True Peak doesn’t, and the RMS value of the left and right channels isn’t balanced.

 

 Note that this hits Spotify’s desired LUFS, but the left channel’s True Peak value is higher than what’s ideal. This readout also shows that the average RMS levels for each channel are somewhat different—the left channel is 1.2 dB louder than the right one, which also accounts for the higher True Peak value. This may be the way the artist wants the mix to sound, but it could also indicate a potential problem with the mix, where the overall sound isn’t properly centered.

A simple fix is to insert a Dual Pan into the Inserts section. Use the Input Balance control to “weight” the stereo image more to one side for a better balance. After doing so and readjusting the LUFS, we can now give Spotify exactly what it wants (Fig. 2). Also note that the left and right channels are perfectly balanced.

Figure 2: The True Peak and RMS values are now identical, so the two channels are more balanced than they were without the Dual Pan.
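For the curious, the measurement behind that judgment is nothing exotic. Here’s a tiny Python sketch of per-channel RMS in dB; it is not how Studio One’s Detect Loudness or Dual Pan work internally, just the basic math, and the signals are made up for illustration.

```python
import numpy as np

def channel_rms_db(stereo):
    """Per-channel RMS in dB for a (num_samples, 2) float array."""
    rms = np.sqrt(np.mean(stereo ** 2, axis=0))
    return 20 * np.log10(rms + 1e-12)

# Example: a mix whose left channel is about 1.2 dB hotter than the right.
left = 0.200 * np.random.randn(48000)
right = 0.174 * np.random.randn(48000)
l_db, r_db = channel_rms_db(np.column_stack([left, right]))
print(f"L {l_db:.1f} dB, R {r_db:.1f} dB, imbalance {l_db - r_db:.1f} dB")
# Nudging the Dual Pan's Input Balance by roughly this amount evens out the channels.
```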

 A Crucial Consideration!

You don’t want to mix or master based on numbers, but on what you hear. If you set up Dual Pan to balance the channels, make sure that you enable/bypass the plug-in and compare the two options. You might find that balancing the left and right channels not only accommodates Spotify’s requirements, but improves the mix’s overall balance. If it doesn’t, then leave the balance alone, and lower the track’s overall output level so that True Peak is under -1.0 for both channels (or under -2.0 for LUFS values above ‑14.0). This will likely lower the LUFS reading, but don’t worry about it: Spotify will turn up the track anyway to reach -14.0 LUFS.

Coda: I always thought that squashing dynamic range to try and win the loudness wars made listening to music a less pleasant experience, and that’s one of the reasons CD sales kept declining. Is the end of the loudness wars related to the music industry’s current streaming-driven rebound? I don’t know… but it wouldn’t surprise me.

My Craziest Mastering Salvage Job (So Far)

 

My mastering specialty is salvage jobs, which have become easier to do with Studio One. But this gig was something else.

Martha Davis’s last solo album (I Have My Standards, whose mastering challenges were covered in this blog post) has done really well. Since the pandemic has sidelined her from touring as Martha Davis and the Motels or going into the studio, she’s releasing a new song every month online. These are excellent, but previously unreleased, songs.

That’s the good news. The bad news is that her latest song choice, “In the Meantime,” had the drum machine kick mixed so loud the song should have been credited as “Solo Kick Drum with Vocal Accompaniment.” With a vocalist like Martha (listen to any of her many hits from the 80s), that’s a crime. She was hoping I could fix it.

Don’t tune out, EDM/hip-hop fans. What about those TR-808 “toms” that are always mixed way too high? When I was given a Boy George song to remix, those toms were like sonic kryptonite before I figured out how to deal with them. And let’s not get into those clichéd 808 claps, okay? But we have a solution.

What Didn’t Work

I tried everything to deal with the kick, including EQ, iZotope RX 7 spectral reduction, mid-side processing using the Mixtool, and more. The mix was mostly mono, and the kick was full-frequency—from low-frequency boom to a nasty click that was louder than the lead vocal. Multiband dynamics didn’t work because the kick covered too wide a frequency range.

What Did Work

In desperation, I thought maybe I could find an isolated kick sound, throw it out of phase, and cancel the kick wherever it appeared in the song. Very fortunately, the song intro had a kick sound that could be isolated as an individual sample. So instead of going directly to Studio One’s mastering page, I went into the Song page, imported the stereo mix into one track, created a second track for only the kick, and dragged the copied kick to match up with every kick instance in the song (yes, this did take some time…). It wasn’t difficult to line up the copied kicks with sample- (or at least near-sample) accuracy (Fig. 1).

 

Figure 1: The top track is from the original song, while the lower track is an isolated kick. After lining the sounds up with respect to timing, flipping the kick track phase removed the kick sound from the mixed tracks.

The payoff was inserting Mixtool in the kicks-only track and flipping its phase 180 degrees. It canceled the kick! Wow—this physics stuff actually works.
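If you want to convince yourself that the physics really does work, here’s a toy NumPy demo of the same trick. The “kick” and “music” signals are made up for illustration; nothing here comes from the actual song.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
kick = np.exp(-40 * t) * np.sin(2 * np.pi * 60 * t)     # stand-in "kick" sample
music = 0.3 * np.sin(2 * np.pi * 220 * t)               # stand-in for everything else

mix = music + kick                                       # the two-track mix (mono here)
cancel = -kick                                           # phase-flipped, sample-aligned copy

residual = mix + cancel
print(np.max(np.abs(residual - music)))                  # ~0: the kick is gone
# Shift the copy by even a few samples and the high-frequency content (like the
# kick's click) stops cancelling, which is why the alignment has to be near sample-accurate.
```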

But now there was no kick. So, I added the Waves LinEQ Broadband linear-phase equalizer (a non-linear-phase EQ won’t work in this context) to the kick drum track. This filtered out some of the kick drum’s lower frequencies, so there was less cancellation down low, while leaving the highs intact so they would still cancel as much as possible. Adjusting the shelving frequency and attenuation let in just enough of the original kick, without overwhelming the track. Even better, because the kick level was lower, I could bring up the low end to resurrect the bass part that had been overshadowed by the kick.

The Rest of the Story

The mix traveled to the mastering page for a little more processing (Studio One’s Pro EQ and Binaural Pan, IK Multimedia’s Stealth maximizer, and Studio One’s metering). After hitting the desired readings of -13.0 LUFS and a -0.2 True Peak, the mastering was done. Sure, I would much rather have had the individual tracks to do a remix, but it was what it was—a 28-year-old two-track mix.

To hear how this ended up, the audio example first plays an excerpt from the mastered version. Then there’s a brief pause, followed by the same section with the original file. I’m sure you’ll hear the difference in the kick drum.

Listen to an audio example from In the Meantime here: 

 

The Vocal Repair Kit

 

Although it’s always better to fix issues at the source, here’s a tip to help repair recorded vocals during the mixing phase. The technique (which is featured in the new book How to Record and Mix Great Vocals in Studio One – 2nd Edition) combines multiband dynamics processing with equalization to both de-ess and reduce plosives. Although the screen shot shows the Multiband Dynamics processor in Studio One 5, this technique will work with previous Studio One versions if you duplicate the settings.

 

In the screen shot, the Multiband Dynamics’ Low band settings are outlined in red, and the High band settings are outlined in blue. (Note this is not the actual interface; the high band panel is pasted into the image from a different screen shot so you can see both the Low and High band settings simultaneously.)

The High band acts as a de-esser, because it applies compression to only the high frequencies. This helps tame sibilance. The Low band compresses only the low frequencies, which reduces pops. However, this preset also takes advantage of the way Multiband Dynamics combines equalization with dynamics control. Turning down the Low stage Gain all the way further reduces the low frequencies, where pops like to hang out and cause trouble.

For the High band, vary compression to taste. The compression settings are less critical for the Low band if you turn the Gain down all the way, but in either case, you’ll need to tweak the settings for your particular vocal track.
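If it helps to see the concept in code, here’s a deliberately crude Python sketch of the idea: compress the highs (de-ess), compress and turn down the lows (de-pop), and leave everything else alone. The crossover frequencies, threshold, ratio, and low-band cut are placeholders, not the preset’s settings, and it applies a single static gain per band rather than tracking the signal moment to moment the way Multiband Dynamics does.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def deess_depop(vocal, fs, low_cut=250.0, high_cut=6000.0,
                thresh_db=-30.0, ratio=4.0, low_gain_db=-24.0):
    """Toy multiband treatment along the lines described above.
    All frequencies and dynamics values are placeholders."""
    sos_lo = butter(4, low_cut, btype='low', fs=fs, output='sos')
    sos_hi = butter(4, high_cut, btype='high', fs=fs, output='sos')
    low, high = sosfilt(sos_lo, vocal), sosfilt(sos_hi, vocal)
    mid = vocal - low - high                     # rough "everything else" band, fine for a sketch

    def compress(band):
        level_db = 20 * np.log10(np.sqrt(np.mean(band ** 2)) + 1e-12)
        over = max(0.0, level_db - thresh_db)
        gain_db = -over * (1 - 1 / ratio)        # static gain reduction above threshold
        return band * 10 ** (gain_db / 20)

    low = compress(low) * 10 ** (low_gain_db / 20)   # extra low-band gain cut, as in the preset
    return mid + compress(high) + low                # de-essed highs, de-popped lows, mids untouched
```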

And that’s all there is to it. When a loud pop or sibilant sound hits the Multiband Dynamics, it’s compressed to be less annoying, while leaving the rest of the vocal intact. Vocal repaired!

Why Overlap Correction is Totally Cool

 

Studio One’s Overlap Correction feature for Note data isn’t new, but it can save you hours of boring work. The basic principle: if the end of one note extends far enough to overlap the beginning of the next note, selecting both notes and applying Overlap Correction moves the first note’s end earlier, so that it no longer overlaps the next note.

My main use is with keyboard bass. Although I play electric bass, I often prefer keyboard bass because of its sonic consistency, and the ability to choose from various sampled basses as well as synth bass sounds. However, it’s important to avoid overlapping notes with keyboard bass for two main reasons:

  • Two non-consonant bass notes playing at the same time muddy the low end, because the beat frequencies associated with such low frequencies are slow and disruptive.
  • The sound is more realistic. Most bass parts are single-note lines, and with electric bass, there’s usually a finite amount of time between notes due to fingering the fret, and then plucking the string.

One option for fixing this is to zoom in on a bass part’s note data, check every note to make sure there aren’t overlaps, and shorten notes as needed. However, Overlap Correction is much easier. Simply:

 

  1. Select All Note data in the Edit View.
  2. Choose Action > Length, and then click the Overlap Correction radio button.
  3. Set overlap to -00.00.01.
  4. Click OK.

Normally I’m reluctant to Select All and do an editing function, but any notes that don’t overlap are left alone, and I haven’t yet run into any problems with single-note lines. Fig. 1 shows a before-and-after of the note data.

Figure 1: The notes circled in white have overlaps; the lower copy of the notes fixes the overlaps with the Length menu’s Overlap Correction feature.

Problem solved! The reason for setting overlap to -00.00.01 instead of 00.00.00 is that with older hardware synthesizers or congested data streams, that very slight pause ensures a note-off occurs before the next note-on appears. This prevents the previous note from “hanging” (i.e., never turning off). You can specify a larger number for a longer pause—or live dangerously, and specify no pause by entering 0.
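For anyone who likes to see the logic spelled out, here’s a minimal Python sketch of what overlap correction amounts to for a single-note line. It’s an illustration of the idea, not Studio One’s implementation; the gap argument plays the same role as the -00.00.01 setting.

```python
def correct_overlaps(notes, gap=1):
    """Shorten any note whose end runs past the start of the next note,
    leaving a small gap so a note-off always precedes the next note-on.
    `notes` is a list of (start_tick, end_tick) pairs for a single-note line."""
    notes = sorted(notes)
    fixed = []
    for (start, end), (next_start, _) in zip(notes, notes[1:] + [(None, None)]):
        if next_start is not None and end > next_start:
            end = next_start - gap               # pull the note-off just before the next note-on
        fixed.append((start, end))
    return fixed

print(correct_overlaps([(0, 500), (480, 960), (960, 1500)]))
# -> [(0, 479), (480, 960), (960, 1500)]  only the overlapping note gets shortened
```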

Also, although I referenced using this with keyboard bass, it’s useful for any single-note line, such as brass, woodwind, single-note MIDI guitar solos, etc. It can also help with hardware instruments, including electronic drums, that have a limited number of voices. By removing overlaps, it’s less likely that the instrument will run out of polyphony.

There’s some intelligence built into the overlap correction function. If a note extends past another note, there won’t be any correction. It also seems to be able to recognize pedal points (Fig. 2).

 

Figure 2: Overlap Correction is careful about applying correction with polyphonic lines.

Selecting all notes in the top group of notes and selecting Overlap Correction didn’t make any changes. As shown in the bottom group of notes, preventing the pedal point from overlapping the final chord requires selecting the pedal point, and any of the notes in the last chord with which the pedal point overlaps.

It’s easy to overlook this gem of a feature, but it can really help with instrumental parts—particularly with keyboard bass and solo brass parts.