Hardware is making a comeback. Real-time, improvisation-based drum machines and synths are gaining popularity, and you can find occasional bargains for used synths that were top of the line only a few years ago. So, it makes sense that Studio One would want to simplify integrating external hardware synths (similarly to how Pipeline integrates external hardware effects).
When Version 5 introduced the Aux Channel, comments ranged from “So great—I’ve been wanting this for years!” to “Why not just feed the instrument into audio tracks?” Both camps have a point—Aux Channels are about workflow with external hardware synthesizers. But Aux Channels can streamline workflow and simplify setup, compared to assigning the instrument outs to audio tracks, which then route to the mixer.
Aux Channels monitor external audio interface inputs directly in the mixer, rather than the outputs of recorded tracks. These external inputs can carry any audio. For example, when mixing, you might want to listen to a well-mixed CD for comparison. You don’t need to record this as a track—just monitor the inputs it’s feeding, as needed.
Aux Channel Benefits
My favorite feature is that you can add a hardware synthesizer to the External Instruments folder (located in the Browser’s Instruments tab), and drag and drop the hardware synth into the arrange view—just like a virtual instrument. This automatically creates the Aux Channel, and sets up the Instrument track as you saved it. Set up the external synth once, then use it any time you want.
Also, the external instrument needs only the Note data track in the Arrange view—audio tracks are unnecessary because you’re just going to mix them in the console anyway. This is consistent with Studio One’s design philosophy of dedicating the Arrange view to arranging, not mixing. Of course, you can show/hide audio tracks in the Arrange view, but it’s more convenient to have those audio tracks show up directly in the console, like your other audio sources.
How to Create an Aux Channel
First, your hardware sound generator needs to be set up as an external Instrument in the Options (Windows) or Preferences (Mac) window. Then, in the Console, click on External toward the lower left. Click the downward arrow for the desired External Device, and choose Edit. (Note that if it’s a workstation that combines a sound generator with a keyboard, you should have two entries—one for the Keyboard, and one for the Instrument. Choose the instrument.)
When the control mapping window appears, click on the Outputs button (the one with the right arrow; see fig. 1), and choose Add Aux Channel.
After the Aux Channel appears, assign its input to the audio input(s) being fed by the hardware synth. For example, if the synth’s audio is going to stereo input 3+4, then choose that stereo input. (If your existing Song Setup I/O doesn’t include an easily identifiable name for the inputs being used for the Aux track, consider doing some renaming.)
Next, save these default settings. If needed, click on Outputs again to bring up the controller mapping window, and click on Save Default. Saving it is what allows the hardware instrument to show up in the Browser.
In the Browser’s Instruments tab, look under External Instruments (toward the top, just under Multi Instruments). Drag your instrument into the Arrange view, and start playing. If you don’t hear anything, the likely causes are either that the keyboard being assigned to the synth isn’t the default keyboard (specify All Inputs for MIDI in, and it should work), or the Aux Channel input hasn’t been assigned to the correct audio interface inputs.
Also note that with workstations whose keyboard drives the sound generator, you should turn off the Local Control parameter (usually found in the instrument’s MIDI setup menu). Otherwise, the keyboard will play the sound generator directly while Studio One also feeds it notes. The result is note double-triggering.
To preserve what the Instrument track plays as an audio track, choose a track’s Transform to Audio Track option, or select the Event and bounce to a new track (fig. 2).
When bouncing, make sure that the Record Input is assigned to the Aux Channel where the Instrument’s audio appears. Finally, because the bounce involves external hardware, Studio One knows it must be done in real time (faster-than-real-time bouncing is possible only with virtual instruments that live inside the computer). Happy hardware, everyone!
I started my mixing career with Tinie Tempah’s triple-platinum album Disc-Overy. Prior to this I wrote and produced music and was signed to labels such as Virgin and Defected Records. Since mixing Tinie’s album I have been busy mixing records for many artists including BTS, Sigrid, Dua Lipa, Kodaline, The Disciples, Sigma, and most recently Shane Codd and his platinum single, “Get Out My Head.”
I have been mixing in Studio One for six years now and it keeps getting better and better. PreSonus really listens to user feedback and implements suggested improvements frequently; I haven’t experienced this with any other DAW.
Studio One allows for a very fast workflow, and because of its intuitive build and design I can easily focus on the mixing.
During lockdown in 2020, we decided that London was lacking in high-end, state-of-the-art podcast production facilities, so we built VOXPOD Studios. My podcast room can host up to eight people and also offers livestreaming and video recording of the shows on five video cameras placed around the room.
The PreSonus PD-70 dynamic mic has proven to be a game changer in VOXPOD studios. Its sound quality and tone set the bar above all the others on the market.
VOXPOD Studios has already started hosting shows for some big podcasts here in the UK, including the James Smith Podcast and Rugby’s leading podcast, “The Good The Bad and the Rugby.”
Lastly, another positive development of 2020 was the launch of PreSonus Sphere. It’s truly a brilliant way of connecting the rapidly growing number of Studio One users worldwide. I love being able to try out suggested Presets and shared Studio One components from other engineers, writers, and producers.
An astute Friday Tip reader commented that while the tip on how to level the outputs of amp sim presets was indeed useful, I should also write about the importance of input levels. Well, I do take requests—and yes, input levels are crucial with amp sims.
Physical amps are forgiving. They soak up transients, and chop off low and high frequencies. But amp sims tend to magnify the differences between guitars and playing styles. When going through the same preset, a player who uses a thin flat pick, 0.008 strings, and single-coil pickups will sound totally different compared to a player who uses a thumbpick, 0.010 strings, and humbuckers. So, let’s look at four common mistakes people make when feeding amp sims.
The first half has the input set to 5 o’clock. The sound is so distorted that the playing is indistinct—and listen to the very beginning, before the first note hits. All that gain picks up noise, hum, and garbage that becomes part of your guitar signal. No wonder the amp sim sounds like garbage—it has plenty of garbage mixed in. The audio example’s second half has the input at 9 o’clock. The sound is not only more focused, but stronger.
In particular, listen to the spaces between notes. The version without EQ has a sort of bassy mud between notes that detracts from the part’s focus.
The bottom line is simple: If your amp sim doesn’t sound right, the quickest fix might be as simple as turning down the input level, and rolling off some lows and highs before the amp sim.
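To make the “trim the input, then roll off lows and highs” idea concrete, here’s a minimal Python sketch of that signal chain. The trim amount and cutoff frequencies are illustrative defaults I chose for the example, not settings from Ampire or any specific amp sim, and the one-pole filters are the simplest possible stand-ins for a proper pre-amp-sim EQ.

```python
import math

def db_to_gain(db):
    """Convert a dB trim to a linear gain factor."""
    return 10 ** (db / 20.0)

def one_pole_coeff(cutoff_hz, sample_rate):
    """Feedback coefficient for a simple one-pole filter."""
    return math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def condition_guitar(samples, sample_rate=44100, trim_db=-6.0,
                     low_cut_hz=80.0, high_cut_hz=8000.0):
    """Trim the level, then roll off lows and highs before the amp sim.

    A one-pole high-pass removes rumble below low_cut_hz, and a one-pole
    low-pass tames fizz above high_cut_hz. All values here are
    hypothetical starting points, not recommended settings.
    """
    gain = db_to_gain(trim_db)
    a_hp = one_pole_coeff(low_cut_hz, sample_rate)
    a_lp = one_pole_coeff(high_cut_hz, sample_rate)
    hp_state = lp_state = 0.0
    out = []
    for x in samples:
        x *= gain
        hp_state = a_hp * hp_state + (1.0 - a_hp) * x   # running low-pass of x
        hp = x - hp_state                               # high-pass = input minus lows
        lp_state = a_lp * lp_state + (1.0 - a_lp) * hp  # low-pass to cut highs
        out.append(lp_state)
    return out
```

Feeding the conditioned signal—rather than the raw pickup output—into the amp sim is the code equivalent of backing the input knob off from 5 o’clock to 9 o’clock and cleaning up the extremes first.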
It’s great we can store presets, trade them with other users, and download free ones. But…when selecting different Ampire presets to decide how they fit with a track, you want their levels to match closely. Then, your evaluation of the sound will be based on the tone, not on whether the preset is softer or louder. However, consistent preset levels are not a given.
Having a baseline level for presets, so you don’t need to change the level every time you call up a new one, is convenient. You can’t really use VU meters for this, because you want sounds to have the same perceived level, which can be different from their measured level. For example, a brighter sound may measure as softer, but be perceived as louder because it has energy where our ears are most sensitive.
My standard of comparison is a dry guitar sound, because I want the same perceived level whether Ampire is enabled, or bypassed. You might prefer to have Ampire always be a few dB louder than your dry guitar—whatever makes you happy.
The LUFS (Loudness Units relative to Full Scale) measurement protocol measures perceived loudness. LUFS measurements allow streaming services like YouTube, Spotify, Apple Music, and others to adjust the volume of various songs to the same perceived level. This is why you don’t have to change the volume every time a different song shows up in a playlist. The system isn’t perfect, but it’s better than dealing with constant level variations. Fortunately, Studio One has a Level Meter plug-in that gives LUFS readings (fig. 1).
We interrupt these steps to bring you an important bulletin: The Level Meter reading that matters is the INT field. This averages out the audio, so you’ll see the reading change at first, and then settle down to a consistent LUFS reading. When you change levels, call up a different preset, or make any changes, click on the Reset button to re-start the averaging process. When doing any LUFS measurements, you can’t be sure the reading is correct until you’ve a) hit Reset, and b) played the event several times, which is why we want to loop it.
We now return to the step-by-step procedure.
And while you’re at it, save the Song you used to do this testing. Then you can call it up again in the future, when you want to match preset levels.
Okay, so it took a little time to balance all your presets. But when deciding what preset to use in the future, you’ll be glad you set them all to a baseline level.
Brody Tullier (aka Zeno) is a 17-year-old composer based in Baton Rouge, Louisiana, who has been composing and arranging original music for the past four years.
As he has delved deeper into advanced audio production in recent years, his music has grown into the lively, energetic, and polished arrangements that you can hear in his recent Bandcamp releases here.
Brody’s preferred style of composition leans heavily toward video-game-inspired tracks, and he aspires to one day pursue a career in the video game music industry.
We wish him great success in his ongoing growth as a musician, composer and producer!
Start recording today with this complete, all-PreSonus package! Based on the AudioBox USB 96 audio/MIDI interface and award-winning Studio One recording and production software, PreSonus AudioBox 96 Studio is great for creating multitrack recordings, demos, live recordings, podcasts, field recordings for video and sound effects, and much more. Learn more about the AudioBox 96 Studio here.
You get our best-selling AudioBox interface ever, the M7 condenser microphone, comfy HD7 headphones, Studio One Artist, the Studio Magic Suite (over $1,000 worth of software), and all the cables you need to hook it up. It’s everything you need to record and produce in a single purchase—and for a limited time it’s more affordable than ever!
This is an instant rebate, live at the point of purchase. No forms to fill out.
Start recording today with the PX-1 or a pair of PM-2s. The PX-1 large-diaphragm cardioid condenser microphone is an ideal solution for recording vocals, guitar, podcasts, and much more! Learn more about the PX-1 here. The PM-2 stereo mic set provides 2 matched, pro-quality, small-diaphragm condenser mics with XY mounting bar. Ideal for drum overheads, ensembles, etc. Learn more about the PM-2 set here.
This is an instant rebate, live at the point of purchase. No forms to fill out.
Studio One includes multiple algorithmic composition tools, but I’m not sure how many users are aware of them. So, let’s look at the way some of these tools can help expedite the songwriting process.
I’ve always prioritized speed when songwriting, because inspiration can disappear quickly. But, I’ve also found that good guide tracks (e.g., cool drum loops instead of metronome clicks) increase the inspiration factor. The trick is to create good guide tracks, without getting distracted into editing a guide track into a “part.”
A solid bass line helps drive a song, but when songwriting, I don’t want to take the time to grab a bass and do the necessary setup. The blog post Studio One’s Amazing Robot Bassist describes a simple way to create a bass part by hitting notes on the right rhythm, and then conforming them to the Chord track. This week’s tip builds on that concept to let Studio One’s session bass player get way more creative, and generate parts even more quickly.
Fig. 1 shows a chorus being written. The process started with a rhythm guitar part (track 3), which I dragged up to the Chord track so it could parse the chord changes. I made a few changes to the Chord Track, then dragged it into a piano instrument track. This deposited MIDI notes for the chords, so now the piano played the chord progression. I then had the guitar follow the new Chord track, and the guitar and piano played together. Next up was adding a drum loop.
As to the bass, there are three elements to having Studio One create the part.
1: Fill With Notes
After selecting the bass’s blank Note Event, open the Edit Menu and choose Action > Fill with Notes. You’ll want to customize the settings for best results (fig. 2).
For example, choose a bass-friendly pitch range. For this song, I also wanted a fairly bouncy part, so the rhythm is made up of 1/4 and 1/8 notes. You might also want some half-notes in there. Click on OK, and you have a part.
Every time you invoke Fill with Notes, the notes will be different. If you don’t like the results, delete the notes, and try again. But don’t agonize over this—all you really want is notes with a rhythm you like, because the Chord Track and Scale will take care of the pitches.
Also note that if you don’t like the notes in, for example, the second half but like the first half, no problem. Delete the notes in the second half, set the loop to the second half, and try again.
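The fill step above can be sketched in code. The function below is a hypothetical stand-in, not Studio One’s actual Fill with Notes algorithm: it fills a loop of a given length with random pitches in a bass-friendly MIDI range, using only quarter- and eighth-note durations, matching the settings described for this song.

```python
import random

def fill_with_notes(bars=2, beats_per_bar=4, low=28, high=47, seed=None):
    """Hypothetical sketch of a 'Fill with Notes' pass.

    Fills the selected length with random pitches in a bass-friendly
    range (MIDI 28 = E1 to 47 = B2), using quarter (1.0 beat) and
    eighth (0.5 beat) note rhythms. Returns a list of
    (start_beat, duration_beats, pitch) tuples.
    """
    rng = random.Random(seed)
    total_beats = bars * beats_per_bar
    notes, pos = [], 0.0
    while pos < total_beats:
        dur = rng.choice([1.0, 0.5])       # quarter or eighth note
        dur = min(dur, total_beats - pos)  # don't overrun the loop
        pitch = rng.randint(low, high)
        notes.append((pos, dur, pitch))
        pos += dur
    return notes
```

Just like in the tip, re-running the function gives a different part each time—only the rhythm really matters at this stage, since the Chord Track and Scale will fix the pitches.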
2: Chord Track
Next, have the part follow the Chord Track. Your follow options are Parallel, Narrow, and Bass; I usually prefer Narrow over Bass, but try them both. Either way, the notes now follow the chord progression.
3: Choose and Apply Scale
This may not be necessary, but if the part is too busy pitch-wise, specify the scale note and choose Major or Minor Triad. Then, choose Action > Apply Scale. Now all the notes will be moved to the first, third, or fifth. Of course, you can also experiment with other scales—but remember, the object is to get an acceptable bass part down fast, so you can move on to songwriting.
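Conceptually, applying a triad scale snaps each note to the nearest chord tone. Here’s a small Python sketch of that idea—a hypothetical illustration, since Studio One’s exact snapping and tie-breaking rules may differ.

```python
def snap_to_triad(pitch, root=40, minor=False):
    """Move a MIDI pitch to the nearest triad tone (root, third, or fifth).

    A conceptual sketch of 'Apply Scale' with a Major or Minor Triad;
    ties are broken downward here, which may not match Studio One.
    """
    third = 3 if minor else 4
    tones = {0, third, 7}  # triad degrees as semitones above the root
    # Search outward for the closest pitch whose degree is a chord tone.
    for offset in range(12):
        for candidate in (pitch - offset, pitch + offset):
            if (candidate - root) % 12 in tones:
                return candidate
    return pitch
```

Running every note of the filled-in part through a function like this is why the result always lands on the first, third, or fifth, no matter how random the original pitches were.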
Fig. 3 shows a bass part that Studio One’s session bass player “played.” This was on the second try.
Finally, let’s hear the results. The vitally important point to remember is that all this was created, start to finish, in under four minutes (which included tuning and plugging in the guitar). The end result was some decent guide tracks, so I could start work on the ever-crucial vocal lead line and lyrics. Thank you, session bass player!
I’m a fan of hand percussion. Tambourines, cowbells, claves, guiros…you name it. But mixing it just right is always tricky. When mixed too high, the percussion becomes a distraction. Too low, and it might as well not be there. The object is to find the sweet spot between those extremes.
My solution is simple: use X-Trem’s Autopan function to give motion to percussion. The following audio example has cowbell (no jokes, please) and tambourine. The cowbell keeps a rock-solid hit on quarter notes, and is panned to an equally rock-solid center. But the tambourine is a different story. I’ve mixed both of them higher than normal, so you can clearly hear how they interact.
In the first half, both the tambourine and cowbell are panned to center. After a brief gap, the section repeats, with X-Trem moving the tambourine back and forth in the stereo field. In a real mix, both percussion parts would be mixed lower, so the tambourine’s motion would be something you sensed rather than heard. The end result is a feeling of more motion with the percussion, because the tambourine’s wanderings keep it from becoming repetitive.
First things first: the track must be stereo. If you recorded it in mono, set the Channel Mode to stereo, select the clip, and type ctrl+B to bounce the clip to itself. This converts it to stereo.
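What that mono-to-stereo bounce does, conceptually, is duplicate the single channel into identical left and right channels. A trivial Python sketch of the idea:

```python
def mono_to_stereo(samples):
    """Duplicate a mono signal into identical left/right channel pairs—
    conceptually what bouncing a mono clip on a stereo channel produces."""
    return [(x, x) for x in samples]
```

Once both channels exist, a panner like X-Trem has something to move between.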
I prefer not to have a regular, detectable panning change. A random LFO waveform would be ideal; X-Trem’s 16 Steps waveform is the next best thing, and works just as well. Slower pan rates are better, because you don’t want the pan position to change while a percussion hit is still sustaining. I’d recommend 2 beats (changes every 1/8th note, as in the audio example), or every 4 beats for slower tempos or percussion that sustains.
Draw a pattern that’s as close as possible to seeming random (fig. 1). This prevents the panning from becoming repetitive.
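In practice you’d draw this pattern into X-Trem by hand, but as a thought experiment, here’s a hypothetical Python helper that generates a 16-step pan pattern the same way: random values, with consecutive steps forced to differ so the motion never stalls on one position.

```python
import random

def random_pan_steps(steps=16, levels=8, seed=None):
    """Generate a step pan pattern that avoids obvious repetition.

    Hypothetical helper, not an X-Trem feature. Values run from
    0.0 (hard left) to 1.0 (hard right); adjacent steps always differ.
    """
    rng = random.Random(seed)
    pattern, prev = [], None
    for _ in range(steps):
        value = rng.randrange(levels) / (levels - 1)
        while value == prev:  # redraw if it would repeat the last step
            value = rng.randrange(levels) / (levels - 1)
        pattern.append(value)
        prev = value
    return pattern
```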
If you want the panning to move around the center, fine—pan the track to center, and you’re done. But if you want the panning to move (for example) between hard left and center, remember that the pan control becomes a balance control in stereo. So if X-Trem pans the audio more toward the right, it will become quieter. To get around this, pan the channel to center, but follow the X-Trem with a Dual Pan that sets the actual panning range. Fig. 2 shows settings for panning between the left and center, while maintaining a constant level.
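The level problem above comes down to pan law. A stereo balance control just attenuates one side, while an equal-power pan keeps the total power constant as the sound moves. Here’s a minimal sketch of both pieces—the pan law, and a range-restriction step that stands in conceptually for the Dual Pan stage (the math is the standard equal-power law, not PreSonus code).

```python
import math

def equal_power_gains(pos):
    """Equal-power pan law: pos 0.0 = hard left, 1.0 = hard right.

    Returns (left_gain, right_gain) with constant total power, which is
    what keeps the perceived level steady as the sound moves.
    """
    angle = pos * math.pi / 2.0
    return math.cos(angle), math.sin(angle)

def restrict_range(lfo, lo=0.0, hi=0.5):
    """Map a full-range modulator (0..1) into a narrower pan range,
    e.g. hard left to center—conceptually what a Dual Pan stage after
    X-Trem accomplishes in the setup described above."""
    return lo + lfo * (hi - lo)
```

So a full-swing X-Trem modulation, passed through `restrict_range` and then the equal-power law, moves the tambourine between hard left and center without the level dip you’d get from a plain balance control.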
As to the amount of X-Trem modulation depth…it depends. If you’ve been to my seminars where I talk about “the feel factor” with drum parts, you may recall that I like to keep the kick right on the beat, and change timings around it. A similar concept holds true with panning percussion. In the audio example, the cowbell is the anchor, and the tambourine dances around it. If both are moving, the parts can become a distraction rather than an enhancement.
Now you know how to make your percussion parts tickle the listener’s ears just a little bit more…and given the audio example, I’m proud of all of you for not stooping to a “more cowbell” joke!