You’re forgiven if you scoot down to something more interesting in this blog, but here’s the deal. I always archive finished projects, because remixing older projects can sometimes give them a second life. For example, I’ve stripped the vocals from some songs and remixed the instrument tracks for video backgrounds, repurposed others, and remixed some really ancient songs simply because I know more now than I did when I mixed them originally.
You can archive to hard drives, SSDs, the cloud…your choice. I prefer Blu-ray optical media, because it’s more robust than conventional DVDs, has a rated minimum shelf life that will outlive me (at which point my kid can use the discs as coasters), and can be stored in a bank’s safe deposit box.
Superficially, archiving may seem to be the same process as collaboration, because you’re exporting tracks either way. However, collaboration often occurs during the recording process, and may involve exporting stems: single tracks that each contain a submix of drums, background vocals, or whatever. Archiving occurs only after a song is complete and mixed. This matters for details like Event FX and instruments with multiple outputs. By the time I’m doing a final mix, Event FX (and Melodyne pitch correction, which is treated like an Event FX) have been rendered into a file, because I want those edits to be permanent. When collaborating, you might not want to render these edits, in case your collaborator has different ideas about how a track should sound.
With multiple-output instruments, while recording I’m fine with having all the outputs appear over a single channel—but for the final mix, I want each output to be on its own channel for individual processing. Similarly, I want tracks in a Folder track to be exposed and archived individually, not submixed.
So, it’s important to consider why you want to archive, and what you will need in the future. My biggest problem when trying to open really old songs is that some plug-ins may no longer work: the OS may be incompatible, the plug-in may not be installed, an update may not load automatically in place of an older version, preset formats may have changed, and so on. Another problem may be a glitch in the audio itself, at which point I need a raw, unprocessed file so I can fix the issue before re-applying the processing.
Because I can’t predict exactly what I’ll need years into the future, I have three different archives.
In this week’s tip, we’ll look at exporting raw WAV files. We’ll cover exporting files with processing (effects and automation), and exporting virtual instruments as audio, in next week’s tip.
Studio One’s audio files use the Broadcast Wave Format, which time-stamps each file with its location on the timeline. When using any of the options we’ll describe, raw (unprocessed) audio files are saved as these time-stamped Broadcast WAV files.
Important: When you drag Broadcast WAV Files back into an empty Song, they won’t be aligned to their time stamp. You need to select them all, and choose Edit > Move to Origin.
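If you’re curious where that time stamp actually lives, the Broadcast Wave spec (EBU Tech 3285) stores it as a 64-bit TimeReference, in samples, inside a “bext” chunk in the WAV file. Here’s a minimal Python sketch that digs it out; the function name is mine, and this is an illustration of the file format rather than anything Studio One exposes:

```python
import struct

def bwf_time_reference(path):
    """Scan a WAV file's RIFF chunks for the Broadcast Wave 'bext'
    chunk and return its 64-bit TimeReference (the Event's origin,
    in samples), or None if the file has no bext chunk."""
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                return None  # reached end of file without finding bext
            cid, csize = struct.unpack("<4sI", header)
            if cid == b"bext":
                data = f.read(csize)
                # TimeReference follows the fixed-size text fields:
                # Description(256) + Originator(32) + OriginatorReference(32)
                # + OriginationDate(10) + OriginationTime(8) = offset 338
                return struct.unpack_from("<Q", data, 338)[0]
            f.seek(csize + (csize & 1), 1)  # RIFF chunks are word-aligned
```

When you use Edit > Move to Origin, Studio One is effectively reading this value and snapping each Event back to that sample position.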
The easiest way to save files is to drag them into a Browser folder. While the files hover over the Browser folder (Fig. 1), cycle through the three available formats (Wave File, Wave File with rendered Insert FX, or Audioloop) with the Shift key. We’ll be archiving raw WAV files, so choose Wave File for the options we’re covering.
Figure 1: The three file options available when dragging to a folder in the Browser are Wave File, Wave File with rendered Insert FX, or Audioloop.
As an example, Fig. 2 shows the basic Song we’ll be archiving. Note that there are multiple Events, and they’re non-contiguous—they’ve been split, muted, etc.
Figure 2: This shows the Events in the Song being archived, for comparison with how they look when saving, or reloading into an empty Song.
Select all the audio Events in your Song, and then drag them into the Raw Tracks folder you created in the Browser (or whatever you named it). The files take up minimal storage space, because only audio that’s actually used in the Song is saved. However, I don’t recommend this option: when you drag the stored Events back into a Song, each Event ends up on its own track (Fig. 3). So if a Song has 60 different Events, you’ll have 60 tracks. It takes time to consolidate the Events back into their original tracks, and then delete the empty tracks left behind.
Figure 3: These files have all been moved to their origin, so they line up properly on the timeline. However, exporting all audio Events as WAV files makes it time-consuming to reconstruct a Song, especially if the tracks were named ambiguously.
Figure 4: Before archiving, the Events in individual tracks have now been joined into a single track Event by selecting the track’s Events, and typing Ctrl+B.
After dragging the files back into an empty Song, select all the files, and then after choosing Edit > Move to Origin, all the files will line up according to their time stamps, and look like they did in Fig. 4. Compare this to Fig. 3, where the individual, non-bounced Events were exported.
When collaborating with someone whose program can’t read Broadcast WAV Files, all imported audio files need to start at the beginning of the Song so that after importing, they’re synched on the timeline. For collaborations it’s more likely you’ll export Stems, as we’ll cover in Part 2, but sometimes the following file type is handy to have around.
Figure 5: All tracks now consist of a single Event, which starts at the Song’s beginning.
When you bring them back into an empty Song, they look like Fig. 5. Because every audio track has been extended to span the Song from beginning to end, these files take up more storage space than the previous options. Note that you will probably need to include the tempo when exchanging files with someone who uses a different program.
To give a rough idea of the storage differences among the three options, here are the results for a typical song.
Option 1: 302 MB
Option 2: 407 MB
Option 3: 656 MB
You’re not asleep yet? Cool!! In Part 2, we’ll take this further, and conclude the archiving process.
I’m not one of those people who wants to do heavy compression all the time, but I do feel bass is an exception. Mics, speakers, and rooms tend to have response anomalies in the bass range; even if you’re using bass recorded direct, compression can help even out the response for a smoother, rounder sound.
Although stereo compressors are the usual go-to for bass, I often prefer a multiband dynamics processor because it can serve simultaneously as a compressor and EQ. Typically, I’ll apply a lot of compression to the lowest band (crossover below 200 Hz or so), light compression to the low-mid bands (as well as reduce their levels in the overall mix), and medium compression to the high-mid band (from about 1.2 kHz to 6 kHz). I often turn down the level for the band above 5-6 kHz or so (there’s not a lot happening up there with bass anyway), but sometimes I’ll set a ratio below 1.0 so that the highest band turns into an expander. If there’s any hiss in the very highest band, this will help reduce it. Another advantage of using Multiband Dynamics is that you can tweak the high and low band gain parameters so that the bass fits well with the rest of the tracks.
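That “ratio below 1.0 turns the band into an expander” trick follows directly from the static compression curve: above the threshold, the output rises 1/ratio dB for every dB of input. Here’s a minimal Python sketch of that math (the function name and dB bookkeeping are mine, not Studio One’s, and real compressors add attack/release smoothing on top of this):

```python
def static_gain_db(input_db, threshold_db, ratio):
    """Static compression curve: below the threshold the signal passes
    unchanged; above it, output rises at 1/ratio dB per dB of input.
    ratio > 1 compresses; ratio < 1 steepens the curve, so the band
    expands instead (loud material pulls further ahead of quiet hiss)."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio
```

With the 15:1 ratio and -30 dB threshold used later in this tip, a -10 dB input comes out at only about -28.7 dB, which is why the lowest band sounds so tightly clamped.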
The preset in the following screenshot gives a sound like “Tuned Thunder,” thanks to heavy compression in the lowest band. To choose a loop that’s good for demoing this sound, choose Rock > Bass > Clean, and then select 08 02 P Ransack D riff.audioloop. Insert the Multiband Dynamics processor, and start with the default preset.
As with most dynamic processing presets, the effect is highly dependent on the input level. For this preset, normalize the bass loop. Then change the L band to 125 Hz, with a ratio of 15:1, and a Low Threshold of -30 dB. Mute the LM band.
With the Multiband Dynamics processor bypassed, observe the peak value for the bass track. Now enable Multiband Dynamics, and adjust the Low band’s Gain until the peak value matches the bypassed peak value. You’ll hear a big, fat, round sound that sort of tunnels through a mix.
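If you’d rather compute the make-up gain than nudge it by ear, the required Gain is just the dB difference between the two peak readings. A tiny Python sketch, with a function name of my own invention:

```python
import math

def match_gain_db(bypassed_peak, processed_peak):
    """Make-up gain (in dB) that brings the processed track's peak
    back up to the bypassed peak, given linear peak sample values
    (e.g. 1.0 = full scale)."""
    return 20 * math.log10(bypassed_peak / processed_peak)
```

So if compression knocks the peak down from full scale to half scale, you need roughly +6 dB of Low band Gain to match levels.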
Now let’s go to the other extreme. A significant treble boost can help a bass hold its own against other tracks, because the ear/brain combination will fill in the lower frequencies. The next screenshot shows settings for extreme articulation so the bass really “pops,” and cuts through a track. Again, start with the default preset, but set the Low band frequency to 110 Hz or so.
The only band that’s compressed is the Mid band (320 Hz to 1.2 kHz, with parameter settings shown in the screenshot). A bit of gain for the High Mid band emphasizes pick noise and harmonics (5 dB or so seems about right), and to compensate for the extra highs, add some gain to the Low band below 110 Hz. Again, about 4 to 5 dB works well.
When adjusting the Multiband Dynamics processor, note that you can zero in on the exact effect you want for each band by using the Solo and Mute buttons on individual stages. So next time you want to both compress and equalize bass, consider using Multiband Dynamics instead—and get the best of both worlds.