A sampled drum sound can get pretty boring. There’s even a name for triggering the same sound repeatedly—“the machine gun effect.” Sometimes you want this, but often, it’s preferable to have a sound that responds to velocity and is more expressive.
There are two ways to address this with Impact XT, depending on whether you have multiple samples recorded at different intensities (i.e., softer and harder hits), or only one sample—in which case you have to fake the sound of hits recorded at different intensities.
Multiple Drum Samples
This is the most common way to create expressive drum parts. Drum sample libraries often include multiple versions of the same drum sound—like soft, medium, and hard hits. The technique we’ll describe here works for more than three samples, but limiting it to three is helpful for the sake of illustration.
Impact XT makes it super-simple to take advantage of sounds recorded at different intensities, because you can load multiple samples on a single pad. However, note that if a pad already contains a sample and you drag a new sample onto it, the new sample will replace, not supplement, the existing one. So, you need to use a different approach.
Figure 1: Click on the + sign to load another sample on to a pad.
Figure 2: The splitter bar between samples can alter the velocity range to which a sample responds.
Now you’ll trigger different drum samples, depending on the velocity.
How to Fake Multiple Drum Samples
If you have a single drum sample with a hard hit, then you can use Impact XT’s sample start parameter to fake softer hits by changing the sample start time. (Starting sample playback later in the sample cuts off part of the attack, which sounds like a drum that’s hit softer.)
Figure 3: Click on the sample start line, and drag right to start sample playback past the initial attack. The readout toward the lower right shows the amount of offset, in samples.
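To make the idea concrete, here's a minimal Python sketch of how you might map velocity to a sample-start offset. The function name, the linear curve, and the 2,000-sample maximum are illustrative assumptions only—not Impact XT's actual internals:

```python
def sample_start_offset(velocity, max_offset=2000):
    """Map MIDI velocity (1-127) to a sample-start offset, in samples.
    Soft hits start playback later, trimming the attack so the drum
    sounds like it was hit more gently. The linear mapping and the
    2000-sample maximum are illustrative choices only."""
    v = max(1, min(127, velocity))  # clamp to the valid MIDI range
    # Hard hits (high velocity) play from the start; soft hits skip ahead.
    return round(max_offset * (1 - v / 127))
```

For example, a full-velocity hit (127) plays from the very start of the sample, while a velocity of 64 would skip roughly half the maximum offset.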
Play the drum at different velocities. Tweak sample start times, and/or velocities, to obtain a smooth change from lower to higher velocities.
But Wait…There’s More!
Let’s add two more elements to emphasize the dynamics. These parameters affect all samples loaded on the pad, and are also effective with pads that have only a single sample.
Figure 4: Assigning velocity to Pitch and Filter Cutoff can enhance dynamics even further.
At the Pitch module, turn up the Velocity to Pitch parameter by around 0.26 semitones (Fig. 4). This raises the pitch slightly when you hit the drum harder, which emulates acoustic drums (the initial strike raises the tension on the head, which increases pitch slightly, depending on how hard you hit the drum).
Similarly, back off on the Filter Cutoff slightly, and turn up the Filter’s Vel parameter a little bit (e.g., 10%). This will make the sound brighter with higher velocities.
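As a rough mental model, both tweaks scale with how hard you hit the pad. Here's a conceptual Python sketch; the linear scaling and the function name are my own assumptions, not how Impact XT computes its modulation internally:

```python
def velocity_mod(velocity, pitch_semitones=0.26, cutoff_percent=10):
    """Scale note velocity (0-127) into a pitch offset (semitones) and a
    filter-cutoff boost (percent), loosely mimicking Impact XT's
    velocity-to-pitch and filter Vel parameters. Harder hits get a
    slightly higher pitch and a brighter filter."""
    v = max(0, min(127, velocity)) / 127
    return pitch_semitones * v, cutoff_percent * v
```

A full-velocity hit gets the full 0.26-semitone rise and 10% cutoff boost; softer hits get proportionally less.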
Done! Now go forth, and give your music more expressive drum sounds.
I sometimes record acoustic rhythm guitars with one mic for two main reasons: no issues with phase cancellations among multiple mics, and faster setup time. Besides, rhythm guitar parts often sit in the background, so some ambiance with electronic delay and reverb can give a somewhat bigger sound. However, on an album project with the late classical guitarist Linda Cohen, the solo guitar needed to be upfront, and the lack of a stereo image due to using a single mic was problematic.
Rather than experiment with multiple mics and deal with phase issues, I decided to go for the most accurate sound possible from one high-quality, condenser mic. This was successful, in the sense that moving from the control room to the studio sounded virtually identical; but the sound lacked realism. Thinking about what you hear when sitting close to a classical guitar provided clues on how to obtain the desired sound.
If you’re facing a guitarist, your right ear picks up some of the finger squeaks and string noise from the guitarist’s fretting hand. Meanwhile, your left ear picks up some of the body’s “bass boom.” Although not as directional as the high-frequency finger noise, it still shifts the lower part of the frequency spectrum somewhat to the left. The main guitar sound fills the room, providing the acoustic equivalent of a center channel.
Sending the guitar track into two additional buses solved the imaging problem: one bus received a drastic treble cut and was panned somewhat left, while the other received a drastic bass cut and was panned toward the right (Fig. 1).
Figure 1: The main track (toward the left) splits into three pre-fader buses, each with its own EQ.
One send goes to bus 1. Its EQ uses a lowpass filter response set to around 400 Hz (but also try lower frequencies), with a 24 dB/octave slope to focus on the guitar body’s “boom.” Another send goes to bus 2, which emphasizes finger noises and high frequencies. Its EQ uses a highpass filter response with a 24 dB/octave slope and a frequency around 1 kHz. Pan bus 1 toward the left and bus 2 toward the right, because if you’re facing a guitarist, the body boom will be toward the listener’s left, and the finger and neck noises toward the listener’s right.
The third send goes to bus 3, which carries the main guitar sound. Offset its highpass and lowpass filters a little more than an octave from the other two buses, e.g., 160 Hz for the highpass and 2.4 kHz for the lowpass (Fig. 2). This isn’t “technically correct,” but I felt it gave the best sound.
Figure 2: The top curve trims the response of the main guitar sound, the middle curve isolates the high frequencies, and the lower curve isolates the low frequencies. EQ controls that aren’t relevant are grayed out.
Monitor the first two buses, and set a good balance of the low and high frequencies. Then bring up the third send’s level, with its pan centered. The result should be a big guitar sound with a stereo image, but we’re not done quite yet.
The balance of the three tracks is crucial to obtaining the most realistic sound, as are the EQ frequencies. Experiment with the EQ settings, and consider reducing the frequency range of the bus with the main guitar sound. If the image is too wide, pan the low and high-frequency buses more to center. It helps to monitor the output in mono as well as stereo for a reality check.
Once you nail the right settings, you may be taken aback to hear the sound of a stereo acoustic guitar with no phase issues. The sound is stronger, more consistent, and the stereo image is rock-solid.
In this video, producer Paul Drew shows how VocALign seamlessly works inside PreSonus Studio One Professional and almost instantly aligns the timing of multiple vocal tracks to a lead using ARA2, potentially saving hours of painstaking editing time.
ARA (Audio Random Access) is a pioneering extension for audio plug-in interfaces. Co-developed by Celemony and PreSonus, ARA technology enhances the communication between plug-in and DAW, and gives the plug-in and host instant access to the audio data. This video shows Studio One, but the workflow is very similar in Cubase Pro & Nuendo, Cakewalk by BandLab, and Reaper.
Spoiler alert: We’ll get into some rocket science stuff here, which probably doesn’t affect your projects much anyway…so if you prefer something with a more musical vibe, come back next week. But to dispel some of the confusion regarding an oft-misunderstood concept, keep reading.
You pan a mono signal from left to right. Simple, right? Actually, no. In the center, there’s a 3 dB RMS volume buildup because the same signal is in both channels. Ideally, you want the signal’s average level—its power—to have the same perceived volume, whether the sound is panned left, right, or center. Dropping the level when centered by 3 dB RMS accomplishes this. As a result, traditional hardware mixers tapered the response as you turned a panpot to create this 3 dB dip.
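The math behind the classic constant-power taper is easy to verify. Here's a small Python sketch (the function names and the pan-value convention are mine): the left and right gains follow cosine and sine curves, so the total power (left² + right²) stays constant at every pan position, and each channel sits about 3 dB down at center.

```python
import math

def constant_power_pan(pan):
    """-3 dB constant-power (sin/cos) pan law.
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left_gain, right_gain); left^2 + right^2 is always 1."""
    theta = (pan + 1) * math.pi / 4   # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

def to_db(gain):
    """Convert a linear gain to decibels."""
    return 20 * math.log10(gain)
```

At center, each channel's gain is cos(45°) ≈ 0.707, which is about -3.01 dB—exactly the dip described above.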
However, there are other panning protocols. (Before your head explodes, please note you don’t need to learn all this stuff—it’s just to give you an idea of the complexity of pan laws, because all we really need to do is understand how things work in Studio One.) For example, some engineers preferred more of a drop in the center, so that audio panned to the sides would “pop” more due to the higher level, and open up more space in the center for vocals, kick, and bass. You could accomplish the same result by adjusting the channel level and pan, but the additional drop was sort of like having a preference you didn’t need to think about. To complicate matters further, some mixers lowered the center signal compared to the sides, while others raised the side signals compared to the center. If a DAW does the latter, when you import a normalized file and pan it hard left or hard right, it will go above 0 and clip.
But wait! There’s more. Some engineers didn’t want equal power over the panpot’s entire travel, but a slightly different curve. Others wanted a linear change that didn’t dip the signal at all.
Fortunately, Studio One has a rational approach to pan laws, namely…
THE “WHAT-THE-HECK-DO-PAN-LAWS-DO” TEST SETUP
To see how the different panning laws affect signal levels, I created a test setup (Fig. 1) with a mono track fed by the Tone Generator set to a sine wave. Two pre-fader sends went to two buses, each with a Dual Pan inserted and linked for mono. That way, one bus’s Dual Pan could be set for hard pan and the other bus’s Dual Pan for center, to compare what happens to the signal level.
Figure 1: Test setup to determine how different pan laws affect signal levels.
In all the following test result images, Track 1 shows the mono sine wave at 0 dB, Bus 1 shows the result of panning the Dual Pan full left, and Bus 2 shows the result of panning the Dual Pan to center.
Fig. 2 uses the -3dB Constant Power Sin/Cos setting for the Dual Pans. Note that the centered version in Bus 2 is 3 dB below the same signal panned full left. This is the same setting as the default for the channel panpots. However, if you collapse the output signal to mono, you’ll get a 3 dB center-channel buildup. (A fine point: setting the Main bus mode to mono affects signals leaving the main bus; the meters still show the incoming signal. To see what’s happening when you collapse the Main out to mono, you need to insert a Dual Pan in the Main bus, click on Link, and set all controls to center.)
Figure 2: -3dB Constant Power Sin/Cos pan law.
Fig. 3 uses the -6 dB linear curve. Here, the centered signal is 6 dB below the signal panned hard left. Use this curve if the signal is going to be collapsed to mono after the main bus, because it keeps the gain constant when you collapse stereo to mono by eliminating the +3 dB increase that would happen otherwise.
Figure 3: The -6 dB linear curve is often preferable if you’re mixing in stereo, but also anticipate that the final result will end up being collapsed to mono.
Fig. 4 shows the resulting signal from the 0dB Balanced Sin/Cos setting. There’s no bump or dip in level as you move between the sides and center, so this acts like a balance control with a constant amount of gain as you pan from left to right.
Figure 4: 0dB Balanced Sin/Cos acts like a balance control.
Sharp-eyed readers who haven’t dozed off yet may have noticed we haven’t covered two variations on the curves described so far. -3dB Constant Power Sqrt (Fig. 5; Sqrt stands for Square Root) is like the -3dB Constant Power Sin/Cos, but the curve is subtly different.
Figure 5: -3dB Constant Power Sqrt bends the curve shape slightly compared to the other Constant Power curve.
In this example, the panpot is set to 75% left instead of full left. Bus 1 shows what happens with -3dB Constant Power Sin/Cos, while Bus 2 is the Sqrt version. The Sqrt version is in less of a hurry to attenuate the right channel as you pan toward the left. Some engineers feel this more closely matches the situation in a space that’s not acoustically treated, where there’s a natural acoustic center buildup.
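You can see the difference numerically. Here's a Python sketch comparing the two constant-power variants at 75% left (the function names and pan convention are my own; both laws keep left² + right² = 1, they just taper differently):

```python
import math

def sincos_gains(pan):
    """-3 dB Constant Power Sin/Cos gains; pan in [-1, 1]."""
    theta = (pan + 1) * math.pi / 4
    return math.cos(theta), math.sin(theta)

def sqrt_gains(pan):
    """-3 dB Constant Power Sqrt gains; also constant power,
    but with a subtly different taper."""
    return math.sqrt((1 - pan) / 2), math.sqrt((1 + pan) / 2)
```

At 75% left (pan = -0.75), the sin/cos law leaves the right channel at about 0.20, while the sqrt law leaves it at about 0.35—so the sqrt version really is slower to attenuate the opposite channel.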
Finally, Fig. 6 compares the 0 dB Balance variations, sin/cos and linear.
Figure 6: Comparing the two 0 dB Balance pan law options.
The difference is similar to the Constant Power examples, in that the basic idea is the same, but again, the linear version doesn’t attenuate the right channel as rapidly when you pan left.
I HAVEN’T FALLEN ASLEEP YET, SO PLEASE, JUST TELL ME WHAT I SHOULD USE!
The bottom line: if you live in a stereo mixdown world, the above is mostly of academic interest when using the channel panpot with a mono track. But if you’re mixing in stereo and know that your mix will be collapsed to mono (e.g., for broadcast), consider using the Dual Pan in mono channels, set to the -6 dB Linear pan law.
For stereo audio, again, a channel panpot works as it should—it acts like a balance control. However, if the output is going to be collapsed into mono, you might want to leave the channel panpot centered, and insert a Dual Pan control to do the panning. It should be set to -6 dB Linear, controls unlinked, and then you move both controls equally to pan (e.g., if you want the sound slightly right of center, set both the left and right panpots to 66% right). Now when you pan, the mono levels in the main bus will be constant.
ONE MORE TAKEAWAY…
And finally…I’m sure you’ve seen people on the net who swear that DAW “A” sounds better than DAW “B” because they exported the tracks from one DAW, brought them into a second DAW, set the channel faders and panpots the same, and then were shocked that the two DAWs didn’t sound identical. And for further proof, they note that after mixing down the outputs and doing a null test, the outputs didn’t null. Well, maybe that proves that DAWs are different…but maybe what it really proves is that different programs default to different pan laws, so of course, there are bound to be differences in the mixes.
It’s the DEAL of the SUMMER! For one week only, save 50% off on ALL MVP Loops right out of the PreSonus Shop!
MVP Loops provide some of our most popular, top-selling loops, and their whole collection is half off right now! Some of MVP’s greatest hits include:
For SEVEN DAYS only– Score the Ampire XT Metal Pack for 50% OFF right out of the PreSonus Shop!
The Ampire XT Metal Pack is an Extension for PreSonus’ Ampire XT Native Effects plug-in with six new roaring amp models and six new cajun-encrusted speaker-cabinet emulations designed to bleach the tats off metal guitarists. Adding to the lethal-weapon qualities of this Pack is a brand-new Metal Drumkit for the Impact virtual drum instrument.
HURRY! This offer is available from July 1 through July 7, 2019 and is offered worldwide!
Looking for some of the best-sounding pianos you can get for Studio One? Look no further than this Piano Collection from Chocolate Audio. And, lucky you, save 30% on the whole collection for the month of July 2019!
Three different pianos are available, each recorded with high-quality mics and expensive preamps. They also take advantage of Presence XT’s advanced scripting functionality to simulate the behavior of these beloved instruments as accurately as digitally possible.
Click on over to shop.presonus.com to hear audio demos of these incredible-sounding instruments. And if you’re still not sure after listening… get the combo pack of all three! The Chocolate Audio Piano Collection for Studio One is available only at shop.presonus.com.
Last but not least:
The Chocolate Audio Pianos are compatible with Studio One 3.2 or later: Prime, Artist, and Professional editions.
All of the pianos in this family have the following onboard script controls:
Well…maybe it actually is, and we’ll cover both positive and negative flanging (there’s a link to download multipresets for both options). Both do true, through-zero flanging, which sounds like the vintage, tape-based flanging sound from the late 60s.
The basis of this is—surprise!—our old friend the Autofilter (see the Friday Tip for June 17, Studio One’s Secret Equalizer, for information on using its unusual filter responses for sound design). The more I use that sucker, the more uses I find for it. I’m hoping there’s a dishwashing module in there somewhere…meanwhile, for this tip we’ll use the Comb filter.
Flanging depended on two signals playing against each other, with the time delay of one varying while the other stayed constant. Positive flanging was the result of the two signals being in phase. This gave a zinging, resonant type of flanging sound.
Fig. 1 shows the control settings for positive flanging. Turn Auto Gain off, Mix to 100%, and set both pairs of Env and LFO sliders to 0. Adding Drive gives a little saturation for more of a vintage tape sound (or follow the May 31 tip, In Praise of Saturation, for an alternate tape sound option). Resonance is to taste, but the setting shown above is a good place to start. The Gain control setting of 3 dB isn’t essential, but compensates for a volume loss when enabling/bypassing the FX Chain.
Varying the Cutoff controls the flanging effect. We won’t use the Autofilter’s LFO, because real tape flanging didn’t use an LFO—you controlled it by hand. Controlling the flanging process was always inexact due to tape recorder motor inertia, so a better strategy is to automate the Cutoff parameter, and create an automation curve that approximates the way flanging really varied (Fig. 2)—which was most definitely not a sine or triangle wave. A major advantage of creating an automation curve is that we can make sure that the flanging follows the music in the most fitting way.
Throwing one of the two signals used to create flanging out of phase gave negative flanging, which had a hollower, “sucking” kind of sound. Also, when the variable speed tape caught up with and matched the reference tape, the signal canceled briefly due to being out of phase. It’s a little more difficult to create negative flanging, but here’s how to do it.
So is this the best flanger plug-in ever? Well if not, it’s pretty close…listen to the audio examples, and see what you think.
Both examples are adapted/excerpted from the song All Over Again (Every Day).
If you like what you hear, download the multipresets. There are individual ones for Positive Flanging and Negative Flanging. To automate the Flange Freq knob, right-click on it and choose Edit Knob 1 Automation. This overlays an automation envelope on the track that you can edit as desired to control the flanging.
And here’s a fine point for the rocket scientists in the crowd. Although most flangers do flanging by delaying one signal compared to another, most delays can’t go all the way up to 0 ms of delay, which is crucial for through-zero flanging where the two signals cancel at the negative flanging’s peak. The usual workaround is to delay the dry signal somewhat, for example by 1 ms, so if the minimum delay time for the processed signal is 1 ms, the two will be identical and cancel. The advantage of using the comb filter approach is that there’s no need to add any delay to the dry signal, yet they can still cancel at the peak of the flanging.
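The rocket-science point above is easy to demonstrate with a toy model. Here's a minimal Python sketch of a feedforward comb—mixing a signal with a delayed copy of itself—showing why through-zero cancellation requires reaching a true 0 ms delay (this is a conceptual model, not the Autofilter's actual DSP):

```python
def comb(signal, delay, polarity=+1):
    """Feedforward comb: mix a signal with a delayed copy of itself.
    polarity=+1 models positive flanging (copies in phase);
    polarity=-1 models negative flanging (copies out of phase).
    At delay == 0 with polarity -1, the two copies cancel completely—
    the 'through-zero' moment."""
    out = []
    for n, x in enumerate(signal):
        delayed = signal[n - delay] if n >= delay else 0.0
        out.append(x + polarity * delayed)
    return out
```

With polarity -1 and a delay of zero samples, every output sample is exactly zero—total cancellation—whereas a conventional flanger whose delay bottoms out at 1 ms can never quite get there without delaying the dry signal too.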
Finally, I’d like to mention my latest eBook—More Than Compressors – The Complete Guide to Dynamics in Studio One. It’s the follow-up to the book How to Record and Mix Great Vocals in Studio One. The new book is 146 pages, covers all aspects of dynamics (not just the signal processors), and is available as a download for $9.99.
We were recently introduced to Denny White via his artist bro and Studio One fan Josh Cumbee. Denny combines pop and electronic beats, soulful blues vocals, and a singer/songwriter style that takes listeners on a trip! Living in Los Angeles has awarded him opportunities to play alongside acts such as Young the Giant, Dawes, and Tove Styrke. He JUST released some vocal sample packs with our friends at Splice, and he’s currently working on a collection of singles leading up to his debut full-length album coming out soon! We recently had the opportunity to chat with him about his career and his gear.
Give us some background on yourself. How long have you been making music?
I grew up in a sleepy California suburb called Hemet, and music was always at the centerpiece of everything we did. I fell slowly into making music as a career, and still find it crazy that I call this my “job.” My freshman year of college, I met my good buddy Brent Kutzle, who produced my first solo EP and mentored me early on. I saw firsthand how he wrote and produced for other artists, while also being a full-time one himself in his band OneRepublic. Releasing that first EP led to me meeting a manager, doing hundreds of co-writes, moving to LA, and eventually signing a publishing deal with Warner Chappell.
How has the music industry changed since your early days?
It’s such a catch-22… everything’s changed while nothing has at the same time. I was technically streaming music in high school with Limewire and MySpace, but couldn’t have dreamed it would morph into streaming as we know it today. On the recording side, I’m still producing on a laptop like I was in college, but everything is light years better and faster than anything I could have imagined then. One of the biggest changes is the vast amounts of knowledge and resources available to everyone now. The industry once sounded like some mysterious faraway place that only a few had access to, but now that glass ceiling has been shattered. I’ve written with kids who know about publishing, licensing, producing, and even their own frequency preferences on a vocal, thanks to amazing resources like Pensado’s Place, or podcasts like Ross Golan’s And the Writer is.
My first song was written for a school talent show, and I hope to find a dusty VHS tape someday with a little me on it, most likely singing a mid-tempo Ben Folds-esque piano tune.
Who has been an influence in your life?
Hands down, my wife’s been the biggest influence in my life. Musically, I’ve been the beneficiary of so many talented friends and collaborators who’ve influenced me over the years—Brent Kutzle, Michael Brun, David Hodges, Alex Delicata, Steve Wilmot, and Jeff Sojka, to name a few!
Have you ever wanted to give up on music? What keeps you going?
I’ve never wanted to give up on music per se, but have definitely contemplated other career paths, as this one has the propensity to drive you mad; you really have to love it despite the wild ebb and flow of the industry and embrace the process daily. My faith and family keep me going on days I don’t want to.
How did you first hear of PreSonus?
I’ve always known about PreSonus but knew little about the products until really hearing about Studio One from my freakishly talented friend Josh Cumbee last year.
What do you like about PreSonus? What caught your eye?
I remember being in Josh’s studio and was immediately intrigued when I saw the Start Page of Studio One. It felt so unique and custom to Josh. The first feature that caught my eye was the window in the middle where you can upload your own art, that prints on every mixdown. Also, the organization of seeing all recent files on the left, without having to scroll through a list or search your hard drive immediately spoke to my OCD-ness.
What PreSonus products do you use?
What features are you most impressed with in Studio One?
I really dig Console Shaper, and the immediate vibe it can give to any blank start. The hybrid dual buffer engine is insane and makes it possible to work in large projects that historically would have been a cluster cuss, and allows me to use instances of soft synths that are taxing on CPU, like Kontakt or Vengeance Avenger, up until the finish line. Tracklist organization, Fat Channel, and “Candleblower” bass in Mai Tai are a few of the other million things I love in it.
Any user tips or tricks or interesting stories based on your experience with Studio One?
Recently I released a vocal pack on Splice, and Sample One XT made all my vocal chops feel so much more creative and important-sounding than anything I could have accomplished in my sad old DAW’s sampler. First I’d record a pass of ad-libs, tune with the integrated Melodyne (insanely fast), then map individual samples across 3–5 keys and quickly explore new melody ideas. Another huge lifehack is that I have “W” set to “Locate Mouse Cursor.” It’s insane how much time these things have saved me, and now I’m able to be creative almost immediately.
How easy/difficult was Studio One to learn?
The transition was so easy. I was very reluctant at first, thinking it’d take way too much time, but after doing a few sessions in it I was back at full speed with a whole new perspective on producing.
Where do you go for support?
From the Knowledgebase to millions of videos on YouTube, or texting one of my friends about Studio One, there’s never a shortage of support.
Recent projects? What’s next for you?
Last week I released my first Vocal Sample Pack on Splice that I’m really proud of. Currently, I’m in the middle of writing for my album, while also producing a record for Gabriel Conte!
Shakers, tambourines, eggs, maracas, and the like can add life and interest to a song by complementing the drum track. But it’s not always easy to play this kind of part. It has to be consistent, but not busy; humble enough to stay in the background, but strong enough to add impact…and this sounds like a job for version 4.5’s new MIDI features.
We’ll go through the process of creating a cool, 16th-note-based percussion part, but bear in mind that this is just one approach. Although it works well, there are many ways you can modify this process (which we’ll touch on at the end).
First, Choose Your Sound
Ideally, you’ll have a couple of different samples of the percussion instrument you want to use. But if you don’t, there’s a simple workaround. I use Impact for these kinds of parts, and if there’s only one sample of something like a shaker, I’ll drag it to two pads, and detune one of the pads by -1 semitone so they sound different. In the following example, we’ll call the original sample Sound 1, and the detuned sample Sound 2.
Let’s create a two-bar percussion loop to start. Grab the Draw tool, and set the Quantize value to 1/4. Drag across the two measures to create a hit at every quarter note for Sound 1 (Fig. 1).
Next, set the Quantize value to 1/16. Drag across the two measures to create a hit at every 16th note for Sound 2 (Fig. 2). Hit Play, so you can marvel at how totally unmusical it sounds.
Now let’s make the part sound good. The key here is not to alter the 1/4 note hits—we want them rock solid, so that the rhythm won’t get pulled too far astray when we start adding variations to the 16th notes.
Select only the 16th notes for Sound 2, and let’s use version 4.5’s new Thin Out Notes command. I’m a fan of Delete notes randomly, and we’ll delete 40% of the notes. Choose the 1/16 grid, since that matches the part. Click OK, and now the part isn’t quite so annoyingly constant (Fig. 3).
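Conceptually, "Delete notes randomly" just removes a random subset while preserving the order of what remains. Here's a rough Python sketch of the idea (a simplification, not Studio One's actual algorithm):

```python
import random

def thin_out(notes, fraction=0.4):
    """Randomly delete a fraction of the notes, keeping the survivors
    in their original order. A sketch of the 'Delete notes randomly'
    idea, not Studio One's actual implementation."""
    n_keep = len(notes) - round(len(notes) * fraction)
    keep_indices = sorted(random.sample(range(len(notes)), n_keep))
    return [notes[i] for i in keep_indices]
```

Running this on two bars of 16th notes (32 hits) with the default 40% leaves 19 hits, scattered unpredictably across the grid.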
But we still need to do something about the velocity, which is way too consistent—the real world doesn’t work that way. Select the string of 16th notes again, and this time, choose Humanize. Set a Velocity range and Note start range (like -40/40% and -0.0015/0.0015, respectively), and then click OK (Fig. 4). Now look at the velocity strip: it’s a lot more interesting. The timing changes are also helpful, but they don’t have the “drunken percussion player” quality you often get with randomized timings, because those rock-solid quarter-note hits are still establishing the beat.
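The Humanize step boils down to adding a small random offset to each note's velocity and start time. Here's an approximate Python sketch (my own simplification—the clamping and ranges are assumptions, not Studio One's actual algorithm):

```python
import random

def humanize(notes, vel_range=40, time_range=0.0015):
    """Randomly offset each note's start time and velocity, loosely
    mimicking a Humanize action. notes is a list of
    (start_time, velocity) tuples; velocities are clamped to the
    valid MIDI range and start times never go negative."""
    result = []
    for start, velocity in notes:
        new_start = max(0.0, start + random.uniform(-time_range, time_range))
        new_vel = max(1, min(127, velocity + random.randint(-vel_range, vel_range)))
        result.append((new_start, new_vel))
    return result
```

Each note lands within ±0.0015 of its original position and within ±40 of its original velocity, which is enough variation to make the velocity strip look alive without losing the groove.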
So now we have an interesting two-measure loop, but let’s not loop it—instead, we’ll create a part that lasts as long as we want, and it will still be interesting. Here’s how.
Duplicate the two measures for as long as you want. Select all the notes in the Sound 2 row, and choose Randomize Notes. Uncheck everything except Shuffle Notes, then click OK. All the notes will stay in the same position, and because there are no other candidate notes for shuffling, the timing won’t change. What will shuffle is velocity. If you created a Shuffle macro for the May 24 tip, End Boring MIDI Drum Parts, it will come in handy here—keep hitting that macro until the pattern is the way you want. After you de-select the notes, if you’ve chosen Velocity for note color, you’ll have a pretty colorful velocity strip (Fig. 5).
Now you have a part that sounds pretty good, and once you become familiar with the process, you’ll find it takes less time to generate a part than it does to read this. Here are some variations on this technique.
The bottom line: there are a lot of possibilities!