Songs like “Suddenly Last Summer” and “Only the Lonely” were Top 10 hits that remain on playlists to this day, but her varied (and ongoing) career includes solo albums, an acting stint in “Bill & Ted’s Excellent Adventure,” creating music for several films (including “Moscow on the Hudson,” “Teachers,” “The Golden Child,” and her “Soul Man” duet with Sly Stone), and working in theater. She still tours—at least when there’s not a pandemic—and amazingly, her voice is better than it was in the ’80s. But it took Studio One’s Project Page to bring back to life a solo album that had been all but written off ten years ago.
The collection of songs on “I Have My Standards” (Fig. 1) was meant to be an album of jazz standards, with orchestration—but the twist was that Martha had written the “standards.” She cut a demo album with longtime musical collaborator Marty Jourard (piano, tenor sax), Allen Hunter (bass), Paul Pulvirente (drums), and Felix Mercer (clarinet). However, the budget to do the orchestration never materialized, and the record was never finished.
When Martha found out I did mastering, she mentioned “the album that never was” and being curious, I asked to listen to it. I was floored. The songs were deep, the vocals flawless, and the instrumentation excellent. I heard the lack of orchestration as an advantage because the sparse, emotional treatments were compelling in themselves.
Except…being a demo, there were technical problems. Among other issues, the acoustic bass overpowered the song on some of the demos, the mic had a boxy quality that was no friend to Martha’s voice, and there were mix and level issues that resulted in a lack of clarity. I asked if she could locate the multitracks so I could remix before mastering, but no one had any idea where they were. Oh, and the project had to be mastered for vinyl—she wanted to put it out on 180-gram vinyl for her nascent record label, Remarkable Records.
Fortunately, the songs were recorded with the same basic setup. Although I’m usually not a fan of “one size fits all” mastering chains, in this case, there weren’t too many variations among the songs (Fig. 2).
The Splitter separated the frequencies below 178 Hz from the rest of the audio. With vinyl, bass needs to be centered, and the dynamics need to be controlled—hence the Limiter, and the Dual Pan with both left and right channels set to center.
Another reason for the Splitter was that in some songs, the acoustic bass overpowered the mix. Note the fader at the end of the Dual Pan, set to -9.0 dB to help keep the bass under control. This setting worked for most of the songs, but for one of them, I had to pull the level down to -13.8 dB to get the right balance. The frequency splitting was crucial.
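The logic of the split is easy to sketch in code. The following is a minimal Python illustration, not Studio One’s actual DSP: a one-pole lowpass stands in for the Splitter’s 178 Hz crossover, the bass band is summed to mono (the Dual Pan set to center) and attenuated (the -9 dB fader), then recombined with everything above the split.

```python
import math

def split_center_attenuate(left, right, fs=44100, fc=178.0, bass_cut_db=-9.0):
    """Split below fc, center and attenuate the bass band, recombine.
    Minimal sketch: one-pole lowpass, complementary high band."""
    a = 1.0 - math.exp(-2.0 * math.pi * fc / fs)   # one-pole coefficient
    gain = 10.0 ** (bass_cut_db / 20.0)            # dB -> linear
    lo_l = lo_r = 0.0
    out_l, out_r = [], []
    for l, r in zip(left, right):
        lo_l += a * (l - lo_l)                     # lowpass each channel
        lo_r += a * (r - lo_r)
        hi_l, hi_r = l - lo_l, r - lo_r            # complementary highs
        bass = 0.5 * (lo_l + lo_r) * gain          # mono (centered) bass
        out_l.append(hi_l + bass)
        out_r.append(hi_r + bass)
    return out_l, out_r
```

A real crossover (like the Splitter’s) has steeper slopes than this one-pole filter; the sketch just shows the routing idea.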
For the rest of the audio, EQ was by far the most important process. Fig. 3 shows the setting that was used on most of the songs. Remember that the RIAA curve for vinyl (which boosts treble massively when cutting the disc, then cuts it on playback) isn’t a fan of high frequencies, so the highs were often cut on vinyl masters. Although the steep high-cut filter wasn’t needed on all the songs, when necessary it gave a sound that was more consistent with vinyl records.
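To see why vinyl masters go easy on the highs, here’s a short sketch that computes the standard RIAA recording pre-emphasis, normalized to 0 dB at 1 kHz. Only the time constants (3180/318/75 µs) come from the RIAA standard; the function name is mine.

```python
import math

def riaa_preemphasis_db(f, ref=1000.0):
    """Relative gain (dB) of RIAA recording pre-emphasis at frequency f,
    normalized to 0 dB at ref. Standard time constants: 3180/318/75 us."""
    t1, t2, t3 = 3180e-6, 318e-6, 75e-6
    def mag(freq):
        w = 2.0 * math.pi * freq
        # recording curve is the inverse of the playback de-emphasis
        return abs((1 + 1j * w * t1) * (1 + 1j * w * t3) / (1 + 1j * w * t2))
    return 20.0 * math.log10(mag(f) / mag(ref))
```

Run it at 20 kHz and you get a boost approaching +20 dB relative to 1 kHz, which is exactly why excess high-frequency energy makes vinyl cutting difficult.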
The substantial dip at 625 Hz removed the muffled quality by making the highs more prominent. The dips at 2.72 and 3.06 kHz were tricky—they were essential in removing a resonance on Martha’s voice that took away from the openness and intimacy. Almost all the songs needed the dip at 8 kHz, where treble energy from recording the individual tracks “bunched up.” Bass wasn’t an issue, because the split took care of that.
The Binaural pan was set between 124% and 137%, depending on the song. This mainly had the effect of spreading out the reverb more than the instruments, which enveloped the sound in an ambiance it didn’t have otherwise. This also moved the reverb a bit out of the center, so there could be more focus on the voice.
Finally, I’m not much of a believer in “special sauce” processors, but the Scheps Parallel Particles from Waves was ideal (Fig. 4).
After taming the highs to accommodate vinyl mastering, I wanted to restore a perception of high frequencies. Adding a significant amount of the Air parameter, with just a touch of Thick and Bite for a little more midrange presence, did exactly what was needed.
I was aiming for a LUFS of -12. This was a bit of a compromise between vinyl and streaming. A little compression would make it easier for the vinyl cutter to optimize the levels for vinyl, and besides, being a little “hotter” than the typical streaming target of -14 LUFS was fine. For the last stage of dynamics control, I used IK Multimedia’s Stealth Limiter (which is designed for mastering), in the Project Page’s Post slot. It’s a transparent but CPU-intensive plug-in, hence using only one instance as the final limiter. The songs were already close to the levels I wanted, so they needed only very slight tweaks to hit the Stealth Limiter a little harder or softer to reach the -12 LUFS goal.
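Because LUFS is a dB-like logarithmic scale, figuring out how much to tweak a song toward the target is simple arithmetic. A hedged sketch (the function name is mine); note that driving a limiter harder adds gain reduction, so in practice the final measured loudness lands a bit below this naive offset.

```python
def gain_to_target(measured_lufs, target_lufs=-12.0):
    """dB of gain to apply (e.g., at the limiter input) to move a song
    from its measured integrated loudness toward the target. Returns the
    dB offset and the equivalent linear multiplier."""
    offset_db = target_lufs - measured_lufs
    linear = 10.0 ** (offset_db / 20.0)
    return offset_db, linear
```

For example, a song measuring -14 LUFS needs roughly +2 dB (a linear gain of about 1.26) to reach -12 LUFS.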
It was easy to generate timings from the Project Page, so that those cutting the vinyl would know where to put the bands between cuts…and we were on our way. The vinyl hits the world in August (available from themotels.com and specialty record stores). The digital release is available now on iTunes, Apple Music, Spotify, Amazon Music, Amazon Disk on Demand, and you can hear it on Pandora, Shazam, iHeart Radio, and YouTube Music. Most of these have ways to preview the songs, and I think it’s well worth following some of the links to check out music that sounds like it’s frozen in time, yet curiously modern.
I checked out some of the customer reviews on Amazon. While they’re all over-the-top about Martha’s voice and songs, as you might expect my favorite is the one that said “The album is mastered in such a way that you would think that Martha Davis is actually in the same room with you.” Mission accomplished (Fig. 5)!
You send a drum or percussion track to three buses, each with an EQ covering a different frequency range—e.g., kick, snare, and cymbals. These provide three control signals…and here’s what we do with them.
A guitar track feeds an FX Chain with Ampire, which goes into a Splitter that splits by frequency. There’s a gate in each split, and they’re driven by the control signals. So when the kick hits, the guitar’s low frequencies come through. When snare and upper toms hit, the mids come through and when there are high-frequency sounds like percussion, they trigger the highs. You can think of the effect as similar to a mini-vocoder.
The audio example has some Brazilian rhythms triggering the gates, and you can hear the kind of animation this technique adds to the guitar part. The first four measures have the drums mixed with the processed guitar, while the second four measures are processed guitar only.
Fig. 1: The track layout for multiband gating.
The Drums track has three pre-fader sends, which go to the Lo, Mid, and Hi frequency buses. Each bus has a Pro EQ to emphasize the desired low, mid, and high frequencies. Then, each bus has a send that goes to its associated Gate sidechain in the Guitar track (Fig. 2).
Fig. 2: Splitter and Gate setup for multiband gating.
The guitar goes to Ampire, which splits into three frequency bands thanks to the Splitter’s Frequency Split magical powers. Each split goes to a Gate, and the sends from the Lo, Mid, and Hi buses feed their respective gate sidechains.
Inserting a Dual Pan after the Mid and Hi gates can enhance the sound further, by spreading these frequencies a bit to the left or right to give more of a stereo spread. You’ll probably want to keep the low frequencies centered.
You don’t have to get too precise about tuning the EQs in the buses, or setting the Splitter frequencies. I set up the Splitter frequencies by playing guitar through the Splitter, and adjusting the bands so that the guitar’s various frequency ranges seemed balanced. As for the Pro EQs in the buses, I just tuned those to the drum sounds until the guitar rhythm was rockin’ along.
This takes a little effort to set up, but multiband gating can add a unique rhythmic kick to your music. Interestingly, you may also find that you don’t need as much instrumentation when one of them is blurring the line between melody and rhythm.
If you’ve spent a couple of spare evenings at home poking around the web for tips on music and audio production, it’s really very likely that you’ve run into some posts, articles, or comments from Craig Anderton. In fact, you may have had to update your search criteria to sort by “most recent,” because it’s fairly common for Google to show you some Craig Anderton posts from the dawn of the internet age, which—while cool—may not be particularly full of insight on Studio One version 4.
Fact is, Craig is one of our industry’s most acclaimed writers; he’s spoken about Studio One in person at more events than I can count, and is of course responsible for the Friday Tips section of this very blog. In short, Craig’s contributions to the success and proliferation of Studio One can’t really be counted.
But his Studio One books? Those can be counted. There are five.
We wanted to take a minute to thank Craig for all of his hard work, broadly-reaching creative output, and continued support of PreSonus and Studio One. Let’s take a closer look at what he’s got over at shop.presonus.com. Chances are one or more of these will prove valuable to you and your process. Note that these are eBooks, not hardcover books, and will be downloaded as PDFs.
Essential reading for anyone who records guitars in Studio One, this definitive book covers invaluable production and engineering techniques.
Consolidates, updates, expands on, and categorizes 130 tips from Craig’s popular “Friday Tip of the Week” blog posts that you probably have been checking out right here. Essential reading. This massive book includes tips on how to solve problems, enhance sound quality, improve workflow, achieve greater expressiveness, create signature sounds, and much more.
A comprehensive, practical, and above all inspiring guide on how to use Studio One’s sophisticated toolset to craft the perfect mix.
The ultimate guide to becoming an expert on Studio One’s dynamics processors and dynamics-oriented features.
The ultimate guide to capturing, producing, and mixing superb vocal performances in Studio One.
This tip turns wimpy kicks into superkicks, using a different technique compared to drum replacement (see the Friday Tip for February 9, 2018). Listen to the audio example, and you’ll hear why this is cool.
Audio Example: The second four measures add the SuperKick effect to the loop in the first four measures. The added kick is 40 Hz…so don’t expect to hear anything on laptop speakers!
The basic concept is to add another track with a low-frequency sine wave, tuned to your pitch of choice. This can be a WAV file, but this example uses the highly-underrated, and extremely useful, Tone Generator plug-in set to a floor-shaking 40 Hz sine wave. A Bus “listens” to the loop, and uses EQ to filter out everything except the kick; you don’t hear this audio, but it gates the Tone Generator’s sine wave so that it tracks the kick. Fig. 1 shows the setup.
Figure 1: Setup to tune and enhance the kick in an existing loop.
With the loop fader down so you’re not distracted, play with the Tone Generator frequency, the EQ frequency that isolates the kick sound, and the Gate settings until there’s reliable kick triggering. How you set the gate provides various options: extend the Release for a “hum drum” effect, or for more expressiveness, automate the release time. Increasing the Hold time alters the character as well.
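The signal flow reduces to: gate a fixed-frequency sine wave with the kick-band envelope. A minimal Python sketch of that logic follows; the threshold and hold values are illustrative choices, not the Gate’s actual defaults.

```python
import math

def superkick(kick_env, fs=44100, freq=40.0, threshold=0.2, hold_ms=30.0):
    """Gate a 40 Hz sine with the kick-band envelope: the sine sounds
    whenever the (EQ-isolated) kick exceeds the threshold, plus a hold time."""
    hold_samples = int(fs * hold_ms / 1000.0)
    out, hold = [], 0
    for n, e in enumerate(kick_env):
        if e > threshold:
            hold = hold_samples           # retrigger the hold timer
        out.append(math.sin(2 * math.pi * freq * n / fs) if hold > 0 else 0.0)
        hold = max(0, hold - 1)
    return out
```

Lengthening `hold_ms` is the code equivalent of increasing the Gate’s Hold time: the sine sustains longer after each kick hit.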
And after everything is set up…stand back while the floors shake!
I’m not surprised. Do you ever have one of those days? Of course you do! Wouldn’t it be great to go down to the beach, listen to the waves for a while, and chill to those soothing sounds? The only problem for me is that going to the beach would involve a 7-hour drive.
Hence the De-Stresser FX Chain, which doesn’t sound exactly like the ocean—but emulates its desirable sonic effects. If you’re already stressed out, then you probably don’t want to take the time to assemble this chain, so feel free to go to the download link. Load the FX Chain into a channel, but note that you must enable input monitoring, because the sound source is the plug-in Tone Generator’s white noise option.
Figure 1: Effects used to create the De-Stresser’s virtual ocean.
Fig. 1 shows the FX Chain’s “block diagram.” The Splitter adds variety to the overall sound by feeding dual asynchronous “waves,” as generated by the X-Trems (set for tremolo mode). The X-Trem LFO’s lowest rate is 0.10 Hz; this should be slow enough, but for even slower waves, you can sync to tempo with a long note value, and set a really slow tempo.
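The noise-plus-slow-tremolo recipe is easy to model. Here’s a rough Python sketch, not the FX Chain itself: white noise split into two paths, each amplitude-modulated by its own slow sine LFO so the “waves” never sync up. The rates and depth are arbitrary illustrative choices.

```python
import math, random

def ocean(seconds=10.0, fs=44100, rates=(0.10, 0.073), depth=0.9, seed=1):
    """White noise amplitude-modulated by two slow, unsynchronized 'wave'
    LFOs (tremolos). Two asynchronous paths keep the surf from sounding
    periodic."""
    rng = random.Random(seed)
    out = []
    for n in range(int(seconds * fs)):
        t = n / fs
        sample = 0.0
        for rate in rates:                       # two independent "waves"
            lfo = 0.5 * (1 + math.sin(2 * math.pi * rate * t))
            amp = 1.0 - depth + depth * lfo      # tremolo depth
            sample += amp * rng.uniform(-1.0, 1.0)
        out.append(0.5 * sample)
    return out
```

The filtering, EQ, and Binaural Pan stages described next would then shape this raw “surf” further.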
Waves also have a little filtering as they break on the beach, which the Autofilters provide. The Pro EQs tailor the low- and high-frequency content to alter the waves’ apparent size and distance.
And of course, there’s the ever-popular Binaural Pan at the end. This helps create a more realistic stereo image when listening on headphones.
Figure 2: The Macro Controls panel.
Regarding the Macro Controls panel (Fig. 2), the two Timbre controls alter the filter type for the two Autofilters. This provides additional variety, so choose whichever filter type combination you prefer. Crest alters the X-Trem depth, so higher values increase the difference between the waves’ peaks and troughs.
The Sci-Fi Ocean control adds resonance to the filtering. This isn’t designed to enhance the realism, but it’s kinda fun. Another subtle sci-fi sound involves setting the two Timbre controls to the Comb response.
As you move further away from real waves, the sound has fewer high frequencies. So, Distance controls the Pro EQ HC (High Cut) filters. Similarly, Wave Size controls the LC filter, because bigger waves have more of a low-frequency component. The Calmer control varies the Autofilter mix; turning it up gives smaller, shallower waves.
When you want to relax, this makes a soothing background. Put on good headphones, and you can lose yourself in the sound. It also makes a relaxing environmental sound when played over speakers at a low level. If your computer has Bluetooth, and you have Bluetooth speakers, try playing this in the background at the end of a long day.
This is just one example of the kind of environmental sounds and effects you can make with Studio One, so let me know if this type of tip interests you. I’ve also done rain, rocket engines, howling gales, the engine room of an interstellar cargo ship, cosmic thuds, various soundscapes, and even backgrounds designed to encourage theta and delta brain waves. I made the last one originally for a friend of mine whose children had a hard time going to sleep, and burned it to CD. When I asked what he thought, he said “no one has ever heard how it ends.” So I guess it worked! Chalk up another unusual Studio One application.
After last week’s thrilling cliff-hanger about how to preserve your WAV files for future generations, let’s look at how to export all your stereo audio tracks with their effects processing, automation, level, and panning intact. There are several ways to do this. Although you can drag files into a Browser folder and choose Wave File with rendered Insert FX, Studio One’s feature to save stems is much easier, and it also includes any processing added by Bus and FX Channels. (We’ll also look at how to archive Instrument tracks.)
Saving as stems, where you choose individual Tracks or Channels, makes archiving processed files a breeze. For archiving, I choose Tracks because they’re what I’ll want to bring in for a remix. For example, if you’re using an instrument where multiple outputs feed into a stereo mix, Channels will save the mix, but Tracks will render the individual Instrument sounds into their own tracks.
When you export everything as stems, and bring them back into an empty Song, playback will sound exactly like the Song whose stems you exported. However, note that saving as stems does not necessarily preserve the Song’s organization; for example, tracks inside a folder track are rendered as individual tracks, not as part of a folder. I find this preferable anyway. Also, if you just drag the tracks back into an empty song, they’ll be alphabetized by track name. If this is an issue, number each track in the desired order before exporting.
Select Song > Export Stems. Choose whether you want to export what’s represented by Tracks in the Arrange view, or by Channels in the Console. Again, for archiving, I recommend Tracks (Fig. 1).
Figure 1: The Song > Export Stems option is your friend.
If there’s anything you don’t want to save, uncheck the box next to the track name. Muted tracks are unchecked by default, but if you check them, the tracks are exported properly, and open unmuted.
Note that if an audio track is being sent to effects in a Bus or FX Channel, the exported track will include any added effects. Basically, you’ll save whatever you would hear with Solo enabled. In the Arrange view, each track is soloed as it’s rendered, so you can monitor the archiving progress as it occurs.
In Part 1 on saving raw WAV files, we noted that different approaches required different amounts of storage space. Saving stems requires the most amount of storage space because it saves all tracks from start to end (or whatever area in the timeline you select), even if a track only has a few seconds of audio in it. However, this also means that the tracks are suitable for importing into programs that don’t recognize Broadcast WAV Files. Start all tracks from the beginning of a song, or at least from the same start point, and they’ll all sync up properly.
Note that the tracks will be affected by your Main fader inserts and processing, including any volume automation that creates a fadeout. I don’t use processors in the Main channel inserts, because I reserve any stereo 2-track processing for the Project page (hey, it’s Studio One—we have the technology!). I’d recommend bypassing any Main channel effects, because if you’re going to use archived files for a remix, you probably don’t want to be locked in to any processing applied to the stereo mix. I also prefer to disable automation Read for volume levels, because the fade may need to last longer with a remix. Keep your options open.
However, the Main fader is useful if you try to save the stems and get an indication that clipping has occurred. Reduce the Main fader by slightly more than the amount of clipping (e.g., if the warning says a file was 1 dB over, lower the Main channel fader by -1.1 dB). Another option would be to isolate the track(s) causing the clipping and reduce their levels; but reducing the Main channel fader maintains the proportional level of the mixed tracks.
Saving an Instrument track as a stem automatically renders it into audio. While that’s very convenient, you have other options.
When you drag an Instrument track’s Event to the Browser, you can save it as a Standard MIDI File (.mid) or as a Musicloop (press Shift to select between the two). Think of a Musicloop, a unique Studio One feature, as an Instrument track “channel strip”—when you bring it back into a project, it creates a Channel in the mixer, includes any Insert effects, zeroes the Channel fader, and incorporates the soft synth so you can edit it. Of course, if you’re collaborating with someone who doesn’t have the same soft synth or insert effects, they won’t be available (that’s another reason to stay in the Studio One ecosystem when collaborating if at all possible). But, you’ll still have the note events in a track.
There are three cautions when exporting Instrument track Parts as Musicloops or MIDI files.
The bottom line: Before exporting an Instrument track as a Musicloop or MIDI file, I recommend deleting any muted Parts, selecting all Instrument Parts by typing G to create a single Part, then extending the Part’s start to the Song’s beginning (Fig. 2).
Figure 2: The bottom track has prepped the top track to make it stem-export-friendly.
You can make sure that Instrument tracks import into the Song in the desired placement by using Transform to Audio Track. As mentioned above, it’s best to delete muted Parts, and type G to make multiple Parts into a single Part. However, you don’t need to extend the track’s beginning.
However, unlike a Musicloop, this is only an audio file. When you bring it into a song, the resulting Channel does not include the soft synth, insert effects, etc.
Finally…it’s a good idea to save any presets used in your various virtual instruments into the same folder as your archived tracks. You never know…right?
And now you know how to archive your Songs. Next week, we’ll get back to Fun Stuff.
You’re forgiven if you scoot down to something more interesting in this blog, but here’s the deal. I always archive finished projects, because remixing older projects can sometimes give them a second life—for example, I’ve stripped vocals from some songs, and remixed the instrument tracks for video backgrounds. Some have been remixed for other purposes. Some really ancient songs have been remixed because I know more than I did when I mixed them originally.
You can archive to hard drives, SSDs, the cloud…your choice. I prefer Blu-Ray optical media, because it’s more robust than conventional DVDs, has a rated minimum shelf life that will outlive me (at which point my kid can use the discs as coasters), and can be stored in a bank’s safe deposit box.
Superficially, archiving may seem to be the same process as collaboration, because you’re exporting tracks. However, collaboration often occurs during the recording process, and may involve exporting stems—a single track that contains a submix of drums, background vocals, or whatever. Archiving occurs after a song is complete, finished, and mixed. This matters for dealing with details like Event FX and instruments with multiple outputs. By the time I’m doing a final mix, Event FX (and Melodyne pitch correction, which is treated like an Event FX) have been rendered into a file, because I want those edits to be permanent. When collaborating, you might want to not render these edits, in case your collaborator has different ideas of how a track should sound.
With multiple-output instruments, while recording I’m fine with having all the outputs appear over a single channel—but for the final mix, I want each output to be on its own channel for individual processing. Similarly, I want tracks in a Folder track to be exposed and archived individually, not submixed.
So, it’s important to consider why you want to archive, and what you will need in the future. My biggest problem when trying to open really old songs is that some plug-ins may no longer be functional, due to OS incompatibilities, not being installed, being replaced with an update that doesn’t load automatically in place of an older version, different preset formats, etc. Another problem may be some glitch or issue in the audio itself, at which point I need a raw, unprocessed file for fixing the issue before re-applying the processing.
Because I can’t predict exactly what I’ll need years into the future, I have three different archives.
In this week’s tip, we’ll look at exporting raw WAV files. We’ll cover exporting files with processing (effects and automation), and exporting virtual instruments as audio, in next week’s tip.
Studio One’s audio files use the Broadcast Wave Format. This format time-stamps a file with its location on the timeline. When using any of the options we’ll describe, raw (unprocessed) audio files are saved with the following characteristics:
Important: When you drag Broadcast WAV Files back into an empty Song, they won’t be aligned to their time stamp. You need to select them all, and choose Edit > Move to Origin.
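For the curious, the time stamp lives in the Broadcast Wave “bext” chunk as a 64-bit TimeReference field: the position in samples from the timeline start, per the EBU Tech 3285 specification. This sketch builds and reads back a minimal bext payload; it’s for illustration only, not a full WAV parser.

```python
import struct

# bext fields preceding TimeReference: Description (256), Originator (32),
# OriginatorReference (32), OriginationDate (10), OriginationTime (8)
BEXT_TIMEREF_OFFSET = 256 + 32 + 32 + 10 + 8

def make_bext(time_ref_samples, description=b"archived track"):
    """Build a minimal 'bext' chunk payload carrying the 64-bit
    TimeReference (little-endian sample count = timeline position)."""
    data = bytearray(BEXT_TIMEREF_OFFSET + 8)
    data[0:256] = description[:256].ljust(256, b"\x00")
    struct.pack_into("<Q", data, BEXT_TIMEREF_OFFSET, time_ref_samples)
    return bytes(data)

def read_time_reference(bext_payload):
    """Recover the timeline position (in samples) from a bext payload."""
    return struct.unpack_from("<Q", bext_payload, BEXT_TIMEREF_OFFSET)[0]
```

Move to Origin essentially reads this field and places each Event at TimeReference ÷ sample rate seconds on the timeline.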
The easiest way to save files is by dragging them into a Browser folder. When the files hover over the Browser folder (Fig. 1), select one of three options—Wave File, Wave File with rendered Insert FX, or Audioloop—by cycling through the three options with the QWERTY keyboard’s Shift key. We’ll be archiving raw WAV files, so choose Wave File for the options we’re covering.
Figure 1: The three file options available when dragging to a folder in the Browser are Wave File, Wave File with rendered Insert FX, or Audioloop.
As an example, Fig. 2 shows the basic Song we’ll be archiving. Note that there are multiple Events, and they’re non-contiguous—they’ve been split, muted, etc.
Figure 2: This shows the Events in the Song being archived, for comparison with how they look when saving, or reloading into an empty Song.
Select all the audio Events in your Song, and then drag them into the Browser’s Raw Tracks folder you created (or whatever you named it). The files take up minimal storage space, because only the audio the Events actually use gets saved. However, I don’t recommend this option, because when you drag the stored Events back into a Song, each Event ends up on its own track (Fig. 3). So if a Song has 60 different Events, you’ll have 60 tracks. It takes time to consolidate all the original track Events into their original tracks, and then delete the empty tracks that result from moving so many Events into individual tracks.
Figure 3: These files have all been moved to their origin, so they line up properly on the timeline. However, exporting all audio Events as WAV files makes it time-consuming to reconstruct a Song, especially if the tracks were named ambiguously.
Figure 4: Before archiving, the Events in individual tracks have now been joined into a single track Event by selecting the track’s Events, and typing Ctrl+B.
After dragging the files back into an empty Song, select all the files, and then after choosing Edit > Move to Origin, all the files will line up according to their time stamps, and look like they did in Fig. 4. Compare this to Fig. 3, where the individual, non-bounced Events were exported.
When collaborating with someone whose program can’t read Broadcast WAV Files, all imported audio files need to start at the beginning of the Song so that after importing, they’re synched on the timeline. For collaborations it’s more likely you’ll export Stems, as we’ll cover in Part 2, but sometimes the following file type is handy to have around.
Figure 5: All tracks now consist of a single Event, which starts at the Song’s beginning.
When you bring them back into an empty Song, they look like Fig. 5. Extending all audio tracks to the beginning and end is why they take up more memory than the previous options. Note that you will probably need to include the tempo when exchanging files with someone using a different program.
To give a rough idea of the memory differences among the three options, here are the results based on a typical song.
Option 1: 302 MB
Option 2: 407 MB
Option 3: 656 MB
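The arithmetic behind these sizes is straightforward: uncompressed WAV costs the same per second whether a track contains audio or silence, which is why full-length files are the most expensive option. A rough estimator (the default format values are assumptions; match them to your Song’s settings):

```python
def wav_size_mb(minutes, sample_rate=44100, bit_depth=24, channels=2):
    """Approximate uncompressed WAV size: every second costs
    sample_rate * (bit_depth/8) * channels bytes, silent or not."""
    bytes_total = minutes * 60 * sample_rate * (bit_depth // 8) * channels
    return bytes_total / 1e6
```

For example, a single 3.5-minute stereo track at 24-bit/44.1 kHz comes to roughly 55.6 MB; multiply by the track count and the totals above start to make sense.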
You’re not asleep yet? Cool!! In Part 2, we’ll take this further, and conclude the archiving process.
A VCA Channel has a fader, but it doesn’t pass audio. Instead, the fader acts like a gain control for other channels, or groups of channels. In some ways, you can think of a VCA Channel as “remote control” for other channels. If you assign a VCA to control a channel, you can adjust the channel gain, without having to move the associated channel’s fader. The VCA Channel fader does it for you.
Inserting a VCA channel works the same way as inserting any other kind of channel or bus. (However, there’s a convenient shortcut for grouping, as described later.) To place a channel’s gain under VCA control, choose the VCA Channel from the drop-down menu just below a channel’s fader…and let’s get started with the applications.
APPLICATION #1: EASY AUTOMATION TRIM
Sometimes when mixing, you’ll create detailed level automation where all the moves and changes are perfect. But as the mix develops, you may find you want to increase or decrease the overall level. There are several ways to do this, like inserting a Mixtool and adjusting the level independently of the automation, or using the automation’s Trim control. However, a VCA control is sometimes easier, and besides, it can control several channels at once if desired, without having to feed them to a bus. The VCA fader can even offset automation for multiple tracks that are located within a Folder Track (Fig. 1).
If the automation changes are exactly as desired, but the overall level needs to increase or decrease, offset the gain by adjusting the VCA Channel’s fader. This can be simpler and faster than trying to raise or lower an entire automation curve using the automation Trim control. Furthermore, after making the appropriate adjustments, you can hide the VCA Channel to reduce mixer clutter, and show it only if future adjustments are necessary.
APPLICATION #2: NESTED GROUPING
One of the most common grouping applications involves drums—when you group a drum kit’s individual drums, once you get the right balance, you can bring their collective levels up or down without upsetting the balance. Studio One offers multiple ways to group channels. The traditional option is to feed all the outputs you want to group to a bus, and vary the level with the bus fader. For quick changes, a more modern option is to select the channels you want to group, so that moving one fader moves all the faders.
But VCAs take this further, because VCA groups can be nested. This means groups can be subsets of other groups.
A classic example of why this is useful involves orchestral scoring. The first violins could be assigned to VCA group 1st Violins, the second violins to VCA group 2nd Violins, violas to VCA group Violas, and cellos and double basses to VCA group Cellos+Basses.
You could assign the 1st Violins and 2nd Violins VCA groups to a Violins Group, and then assign the Violins group, Violas group, and Cellos+Basses group to a Strings group. Now you can vary the level of the first violins, the second violins, both violin sections (with the Violins Group), the violas, the cellos and double basses, and/or the entire string section (Fig. 2). This kind of nested grouping is also useful with choirs, percussion ensembles, drum machines with multiple outputs, background singers, multitracked drum libraries, and more.
Figure 2: The 1st Violins and 2nd Violins have their own group, which are in turn controlled by the Violins group. Furthermore, the Violins, Violas, and Cellos+Basses groups are all controlled by the Strings group.
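Because VCA faders adjust gain rather than passing audio, nesting is just addition: a channel’s effective level is its own fader plus every VCA offset above it in the hierarchy. A quick sketch (the function names are mine, not Studio One terminology):

```python
def effective_gain_db(channel_fader_db, *vca_faders_db):
    """A channel under nested VCAs: the dB offsets simply add.
    E.g., a 1st-violin channel under the '1st Violins', 'Violins',
    and 'Strings' VCA faders."""
    return channel_fader_db + sum(vca_faders_db)

def db_to_linear(db):
    """Convert a dB offset to a linear gain multiplier."""
    return 10.0 ** (db / 20.0)
```

So a channel at -3 dB under VCAs at -2, 0, and -1 dB plays back at -6 dB total, and pulling any single VCA down shifts every channel beneath it by the same amount.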
Although it may seem traditional grouping with buses would offer the same functionality, note that all the channel outputs would need to go through the same audio bus. Because VCA faders don’t pass audio, any audio output assignments for the channels controlled by the VCA remain independent. You’re “busing” gain, not audio outputs—that’s significant.
With conventional grouping, the faders in a group are linked: moving one moves them all. Although Studio One lets you temporarily exclude a fader from a group to adjust it, that extra step isn’t necessary with VCA grouping. You can move a fader that’s controlled by a VCA independently, and it will still be linked to the other members of the VCA group when you move the VCA fader.
Bottom line: The easiest way to work with large numbers of groups is with VCA faders.
APPLICATION #3: GROUPS AND SEND EFFECTS
A classic reason for using a VCA fader involves send effects. Suppose several channels (e.g., individual drums) go to a submix bus fader, and the channels also have post-fader Send controls going to an effect, such as reverb. With a conventional submix bus, as you pull down the bus fader, the faders for the individual tracks haven’t changed—so the post-fader send from those tracks is still sending a signal to the reverb bus. Even with the bus fader down all the way, you’ll still hear the reverb.
A VCA Channel solves this because it controls the gain of the individual channels. Less gain means less signal going into the channel’s Send control, regardless of the channel fader’s position. So with the VCA fader all the way down, there’s no signal going to the reverb (Fig. 3).
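The difference comes down to where the gain is applied relative to the send tap. This sketch models the two signal paths with simple linear multipliers (1.0 = unity, 0.0 = fader all the way down); the function names are hypothetical, chosen only to make the comparison readable:

```python
# Hypothetical sketch: why a VCA silences a post-fader send
# while a submix bus fader doesn't. Gains are linear multipliers.

def reverb_send_via_bus(channel_gain, bus_gain, send_level):
    # The bus fader sits *after* the post-fader send tap, so pulling
    # it down doesn't touch what the send feeds to the reverb.
    dry = channel_gain * bus_gain
    send = channel_gain * send_level  # still feeding the reverb
    return dry, send

def reverb_send_via_vca(channel_gain, vca_gain, send_level):
    # The VCA scales the channel's own gain, so the post-fader
    # send drops right along with it.
    effective = channel_gain * vca_gain
    dry = effective
    send = effective * send_level
    return dry, send

print(reverb_send_via_bus(1.0, 0.0, 0.5))  # (0.0, 0.5): bus down, reverb still fed
print(reverb_send_via_vca(1.0, 0.0, 0.5))  # (0.0, 0.0): VCA down, reverb silent
```

With the bus fader at zero you still hear reverb; with the VCA at zero, the dry signal and the send both go silent.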
APPLICATION #4: BUS VS. VCA
There’s a fine point to using VCAs to control channel faders. Suppose individual drums feed a bus with a compressor or saturation effect. As you change the channel gain, the input to the compressor or saturator changes, which alters the effect. If this is a problem, then you’re better off sending the channels to a standard bus. But it can also be an advantage, because pushing the levels could create a more intense sound by increasing the amount of compression or saturation. The VCA fader would determine the group’s “character,” while the bus fader acts like a master volume control for the overall group level.
And because a VCA fader can control bus levels, some drums could go to an audio bus with a compressor, and some drums to a bus without compression. Then you could use the VCA fader to control the levels of both buses. This allows for tricks like raising the level of the drums and compressing the hi-hats, cymbals, and toms more, while leaving the kick and snare uncompressed…or increasing saturation on the kick and snare, while leaving other drum sounds untouched.
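The reason pushing a VCA into a bus compressor changes the sound, and not just the level, can be seen with a toy hard-knee compressor working in dB. Everything here is an assumed, simplified model—the threshold, ratio, and function name are illustrative only:

```python
# Hypothetical sketch: one VCA driving a compressed bus (hi-hats)
# and a clean bus (kick). Simple hard-knee compressor in dB terms.

def compress_db(level_db, threshold_db=-10.0, ratio=4.0):
    """Levels above threshold are reduced by the ratio (assumed model)."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# Raise the VCA by 6 dB and compare the two buses:
for vca_db in (0.0, 6.0):
    hat = compress_db(-6.0 + vca_db)   # compressed bus: gain squashed
    kick = -6.0 + vca_db               # clean bus: gain passes through
    print(f"VCA {vca_db:+.0f} dB -> hats {hat:.1f} dB, kick {kick:.1f} dB")
```

Raising the VCA 6 dB lifts the kick a full 6 dB, but the hi-hat bus only rises about 1.5 dB—the rest becomes gain reduction, i.e., a more compressed character.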
Granted, VCA Channels may not be essential to all workflows. But if you know what they can do, a VCA Channel may be able to solve a problem that would otherwise require a complex workaround.
Most of us who work here at PreSonus are musicians or audio engineers.
And some of us are also gamers.
For those of you who can relate, check out this fun video that PreSonus Artist/Endorser Nik Jeremić just created and shared with us. He’s using an Xbox One game controller to trigger samples in Studio One:
You like to mix with mastering processors in the Main bus to approximate the eventual mastered sound, but ultimately, you want to add (or update) an unprocessed file for serious mastering in the Project page. However, reality checks are tough. When you disable the master bus processors so you can hear the unprocessed sound you’ll be exporting, the level will usually change. So then you have to re-balance the levels, but you might not get them quite to where they were. And unfortunately, one of the biggest enemies of consistent mixing and mastering is varying monitoring levels. (Shameless plug alert: my book How to Create Compelling Mixes in Studio One, which is also available in Spanish, tells how to obtain consistent levels when mixing.)
Or, suppose you want to use the Tricomp or a similar “maximizing” program in the master bus. Although these can make a mix “pop,” there may be an unfair advantage if they make the music louder—after all, our brains tend to think that “louder is better.” The only way to get a realistic idea of how much difference the processor really makes is if you balance the processed and unprocessed levels so they’re the same.
Or, maybe you use the cool Sonarworks program to flatten your headphones’ or speakers’ response, so you can do more translatable mixes. But Sonarworks should be enabled only when monitoring; you don’t want to export a file with a correction curve applied. Bypassing the Sonarworks plug-in when updating the Project page, or when exporting a master file, is essential. But in the heat of the creative moment, you might forget to do that, and then need to re-export.
The Pre-Main bus essentially doubles up the Main bus, to create an alternate destination for all your channels. The Pre-Main bus, whose output feeds the Main bus, serves as a “sandbox” for the Main bus. You can insert whatever processors you want into the Pre-Main bus for monitoring, without affecting what’s ultimately exported from the Main bus.
Here’s how it works.
Figure 1: The Pre-Main bus, outlined in white, has the Tricomp and Sonarworks plug-ins inserted. Note that all the channels have their outputs assigned to the Pre-Main bus.
With all channels selected, changing the output field for one channel changes the output field for all channels. Assign the outputs to the Main bus, play some music, and look at the Level Meter to check the LUFS reading.
Now assign the channel outputs to the Pre-Main bus. Again, observe the Level Meter in the Master bus. Adjust the Pre-Main bus’s level for the best level match when switching the output fields between the Main and Pre-Main bus. By matching the levels, you can be sure you’re listening to a fair comparison of the processed audio (the Pre-Main bus) and the unprocessed audio that will be exported from the Main bus.
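Matching the two readings is simple arithmetic: trim the Pre-Main fader by however many dB (LU) the processed path reads above the unprocessed one. A minimal sketch, with an assumed function name and made-up LUFS readings for illustration:

```python
# Hypothetical sketch: matching the Pre-Main bus level to the Main bus
# so A/B switching compares tone, not loudness.

def matching_trim_db(processed_lufs, unprocessed_lufs):
    """dB to pull the Pre-Main fader down so its reading matches the Main bus."""
    return processed_lufs - unprocessed_lufs

# Say the Tricomp raises the reading from -16 LUFS to -13 LUFS:
print(matching_trim_db(-13.0, -16.0))  # 3.0 -> lower the Pre-Main fader by 3 dB
```

Once the readings match, any difference you hear when switching output fields is the processing itself, not the “louder is better” effect.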
The only caution is that when all your channels are selected, if you change a channel’s fader, the faders for all the channels will change. Sometimes, this is a good thing—if you experience “fader level creep” while mixing, instead of lowering the master fader, you can lower the channel levels. But you also need to be careful not to reflexively adjust a channel’s level, and end up adjusting all of them by mistake. Remember to click on the channel whose fader you want to adjust, before doing any editing.
Doubling up the Main bus can be really convenient when mixing—check it out when you want to audition processors in the master bus but still be able to do a quick reality check with the unprocessed sound, to find out what difference those processors really make to the overall output.
Acknowledgement: Thanks to Steve Cook, who devised a similar technique to accommodate using Sonarworks in Cakewalk, for providing the inspiration for this post.