Heartcast Media is a dedicated full-service studio in Washington, D.C. that works with clients to create high quality, authentic podcast content that inspires, educates and connects. Molly Ruland and her team specialize in working with entrepreneurs, visionaries, and businesses of all sizes who have an impactful point-of-view.
Woman-owned Heartcast Media is the vision of Molly Ruland, who is dedicated to helping individuals and organizations bring their authentic, original content to life through podcasts. A sister company to One Love Massive, Heartcast Media serves clients ranging from go-go bands to conservative political commentators.
They’re also PreSonus users—and have recorded 85 bands and 150 podcasts in the past 11 months alone!
We think Molly’s business idea is genius, and of course we’re glad that they’ve chosen the StudioLive 16 for their time-sensitive workflow. From the Heartcast website:
We have fully embraced technology and have figured out how to eliminate post production with real time video editing and audio mastering. We deliver all files within 48 hours of recording, typically within 3-4.
We’re proud to be a part of their process, so we wanted to hear more about how this whole operation works. Read all about Molly and Heartcast Media…
Tell us about your background. How long have you been in the audio industry?
I have owned and operated a multimedia company for the last 20 years. I was primarily focused on artist bookings and events. Creating an aesthetic has always been my passion.
How has the audio industry changed since your early days?
Everything is so streamlined now, and the gatekeepers have been removed. I love the idea of accessibility and practicality. Information is readily available which has opened doors for people who weren’t always welcome at the table, and I think that’s great.
How did Heartcast Media come about?
After recording 85 bands and 150 podcasts in 11 months, I realized that my passion and vision align perfectly through podcast production. I love amplifying voices, I always have. I saw a need in the market for high-quality turnkey podcast production, so I created the business to solve that problem. We do things differently—we embrace technology, and by doing so we are able to eliminate the need for a lot of post-production. This saves people time and money and our clients love that.
What’s your favorite podcast right now? Are you allowed to have a favorite?
Tom Bilyeu’s Impact Theory. No question, hands down. Game changer for me.
Tell us about your podcast. Where did the idea for your podcast come from? How does your first podcast compare to your most recent?
I have just launched The Lower Third Podcast because I know so many amazing people from whom I garner so much inspiration, and I wanted to interview them and talk about mindset and passion. It’s a work in progress, and I am looking forward to producing more episodes. However, my passion is producing other people’s podcasts and helping them be successful.
There are so many podcasts these days. How do you stand out?
Having a plan for your podcast is imperative. Every podcaster should examine how and if their podcast is providing value. If there isn’t a clear answer, you don’t have a podcast yet.
What challenges do you face recording a podcast?
I am positive that most people don’t understand how much work goes into creating and producing a podcast. It’s a lot of work. It’s not cheap either, and anyone who tells you that you can start a podcast for $100 is delusional. If you are going to start a podcast, you have to have a lot of resilience and a strong sense of self, because it will be a heavy rock to push uphill until you get momentum. It will not happen overnight.
What advice do you have for someone who wants to start a podcast?
Have a plan, understand the workload, and always be open to being wrong.
How did you first hear of PreSonus?
I learned about PreSonus through Adam Levin at Chuck Levin’s Music Center in Wheaton, Maryland. It’s legendary.
I have the StudioLive 16 in my studio, and we love it. It’s a little more than we need for podcasts, but we also produce live music events so it’s great to have a board that can do both. It’s a solid piece of equipment with really great features that fit our needs. It’s a beautiful board, what’s not to love?
Recent projects? What’s next for you?
My goal is to produce the best podcasts coming out of the East Coast by elevating and amplifying voices in my community that will make the world a better place, one conversation at a time. Every city should have a Heartcast Media.
With the ideal mix, the balance among instruments is perfect, and you can hear every instrument (or instrument section) clearly and distinctly. However, getting there can take a while, with a lot of trial and error. Fortunately, there’s a simple trick you can use when setting up a mix to accelerate the process: Start your mix with all channel pan sliders set to center (Fig. 1).
Figure 1: All the pan sliders (outlined in white) are set to center for a reason.
With stereo tracks, changing the track interleave to mono isn’t adequate, because it will throw off the channel’s level in the mix. Instead, temporarily add a Dual Pan set to the -6dB Linear pan law, and center both the Left and Right panpots (Fig. 2). Now your stereo track will appear in the mix as mono.
Figure 2: Use the Dual Pan, set to the -6dB Linear pan law, to convert stereo channels temporarily to mono when setting up for a mix.
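To see why the -6dB Linear pan law is the right choice here, consider the math: with a linear law, the left and right gains always sum to 1, so a mono sum stays at a constant level no matter where the panpot sits. Here’s a quick numpy sketch (the pan-range convention and sample values are illustrative, not Studio One internals):

```python
import numpy as np

def linear_pan_6db(signal, pan):
    """Linear -6 dB pan law: pan in [-1, 1]; center (0) sends 0.5 to each side.
    Left and right gains always sum to 1, so a mono sum keeps constant level."""
    gain_left = (1 - pan) / 2
    gain_right = (1 + pan) / 2
    return gain_left * signal, gain_right * signal

# A stereo track run through a Dual Pan with both panpots centered:
# each channel contributes equally to both sides, i.e., appears as mono.
rng = np.random.default_rng(0)
left_ch = rng.standard_normal(1000)
right_ch = rng.standard_normal(1000)

l_from_left, r_from_left = linear_pan_6db(left_ch, 0.0)
l_from_right, r_from_right = linear_pan_6db(right_ch, 0.0)
mono_left_out = l_from_left + l_from_right    # 0.5 * (L + R)
mono_right_out = r_from_left + r_from_right   # 0.5 * (L + R)

assert np.allclose(mono_left_out, mono_right_out)  # both sides identical: mono
```

With both Dual Pan panpots centered, each side of the stereo file contributes half its level to both outputs, which is exactly the mono-at-the-correct-level behavior described above.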
Now listen carefully to your mix. Are all the instruments distinct? Monitoring in mono will reveal places where one instrument might mask or interfere with another, like kick and bass, or piano and guitar (depending on the note range).
The solution is to use EQ to carve out each instrument’s rightful place in the frequency spectrum. For example, if you want to prioritize the guitar part, you may need to reduce some of the piano’s midrange, and boost the regions above and below the guitar. For the guitar, boost a bit in the region where you cut the piano. With those tweaks in place, you’ll find it easier to differentiate between the two.
For kick/bass issues, the usual solution is to increase treble on one of them—with kick, this brings out the beater sound and with bass, string “zings” and pick noises. Another option is to add saturation to the bass, while leaving the kick drum alone. If the bass is playing relatively high notes, then perhaps a boost to the kick around 50-70 Hz will help separate the two.
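As a rough numerical sketch of this kind of complementary carving, here’s a standard RBJ-cookbook peaking biquad in Python. The 2 kHz center frequency, gain amounts, and Q are illustrative assumptions, not prescriptions — use your ears:

```python
import numpy as np

def peaking_eq(sr, f0, gain_db, q=1.0):
    """RBJ Audio EQ Cookbook peaking biquad: boost (or cut) gain_db at f0.
    Returns unnormalized (b, a) coefficient arrays."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b, a

# Complementary carving: cut the piano a few dB in the guitar's core range,
# and give the guitar a small boost in the same region.
piano_cut = peaking_eq(sr=48_000, f0=2_000, gain_db=-4, q=1.2)
guitar_boost = peaking_eq(sr=48_000, f0=2_000, gain_db=+3, q=1.2)
```

Run each instrument through its own filter (e.g., with `scipy.signal.lfilter`) and the two parts stop fighting over the same band.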
Keep carving away, and adjusting the EQ until all the instruments are clear and distinct. Now when you start doing stereo placement, the sound will be open, with a huge soundstage and a level of clarity you might not obtain otherwise—or which might take a lot of tweaking to achieve.
We’re Not Done with Mono Just Yet…
Okay, now you have a great stereo mix. But it’s also important to make sure your mix collapses well to mono, because you have no control over the playback system. It might play on someone’s smartphone, which sounds mostly mono, or over speakers placed so close together that there’s little real stereo separation. Radio is another scenario where the stereo image might not be wonderful.
Some processors, especially ones that control stereo imaging with mid-side processing, may have phase or other issues when collapsed to mono. Short, stereo delays can also have problems collapsing to mono, and produce comb-filtering-type effects. So, hop on over to the main bus, and click the Channel Mode button to convert the output to mono (Fig. 3).
Figure 3: The Channel Mode button (circled in yellow) can switch the output between mono and stereo.
Hopefully, everything will sound correct—just collapsed to mono. But if not, start soloing channels and comparing what they sound like with the Channel Mode button in stereo and mono, until you chase down the culprit. Make the appropriate tweaks (which may be as simple as tweaking the delay time in one channel of a stereo delay processor), make sure the mix still sounds good in stereo, and you’re done.
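The comb-filtering problem with short stereo delays is easy to verify numerically. When a mono sum adds a signal to a copy of itself delayed by τ, the response is |1 + e^(−j2πfτ)|, with complete nulls at odd multiples of 1/(2τ). A small sketch (the 1 ms delay is just an example):

```python
import numpy as np

def comb_magnitude(freq_hz, delay_s):
    """Magnitude response of x(t) + x(t - delay): |1 + e^{-j 2 pi f tau}|.
    Nulls fall at odd multiples of 1 / (2 * delay)."""
    return np.abs(1 + np.exp(-2j * np.pi * freq_hz * delay_s))

delay = 0.001  # a 1 ms left/right delay offset, collapsed to mono
freqs = np.array([0, 250, 500, 1000, 1500])
mags = comb_magnitude(freqs, delay)
# 0 Hz reinforces (gain 2); 500 Hz and 1500 Hz cancel completely (gain 0)
```

This is why nudging one side’s delay time (changing τ) moves the nulls, and can rescue a mix that falls apart in mono.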
I sometimes record acoustic rhythm guitars with one mic for two main reasons: no issues with phase cancellations among multiple mics, and faster setup time. Besides, rhythm guitar parts often sit in the background, so some ambiance with electronic delay and reverb can give a somewhat bigger sound. However, on an album project with the late classical guitarist Linda Cohen, the solo guitar needed to be upfront, and the lack of a stereo image due to using a single mic was problematic.
Rather than experiment with multiple mics and deal with phase issues, I decided to go for the most accurate sound possible from one high-quality condenser mic. This was successful, in the sense that the sound in the studio and the sound in the control room were virtually identical; but the sound lacked realism. Thinking about what you hear when sitting close to a classical guitar provided clues on how to obtain the desired sound.
If you’re facing a guitarist, your right ear picks up on some of the finger squeaks and string noise from the guitarist’s fretting hand. Meanwhile, your left ear picks up some of the body’s “bass boom.” Although not as directional as the high-frequency finger noise, it still shifts the lower part of the frequency spectrum somewhat to the left. Meanwhile, the main guitar sound fills the room, providing the acoustic equivalent of a center channel.
Sending the guitar track to two additional buses solved the imaging problem: one bus had a drastic treble cut and was panned somewhat left, while the other had a drastic bass cut and was panned toward the right (Fig. 1).
Figure 1: The main track (toward the left) splits into three pre-fader buses, each with its own EQ.
One send goes to bus 1, which emphasizes the guitar body’s “boom.” Its EQ has a lowpass filter response with a 24 dB/octave slope and a frequency around 400 Hz (but also try lower frequencies). Another send goes to bus 2, which emphasizes finger noises and high frequencies. Its EQ has a highpass filter response with a 24 dB/octave slope and a frequency around 1 kHz. Pan bus 1 toward the left and bus 2 toward the right: if you’re facing a guitarist, the body boom will be toward the listener’s left, and the finger and neck noises toward the listener’s right.
The third send goes to bus 3, which carries the main guitar sound. Offset its highpass and lowpass filters a little more than an octave from the other two buses, e.g., 160 Hz for the highpass and 2.4 kHz for the lowpass (Fig. 2). This isn’t “technically correct,” but I felt it gave the best sound.
Figure 2: The top curve trims the response of the main guitar sound, the middle curve isolates the high frequencies, and the lower curve isolates the low frequencies. EQ controls that aren’t relevant are grayed out.
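For the curious, the three-bus routing can be approximated offline with scipy. The sample rate, filter orders, and mix gains here are assumptions for illustration — in practice the work happens in Studio One’s sends and EQs:

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 48_000  # sample rate; an assumption for this sketch

def bus_filters(guitar, sr=SR):
    """Split one guitar track into the three buses described in the text:
    bus 1: lowpass ~400 Hz, 24 dB/oct (body 'boom', panned left)
    bus 2: highpass ~1 kHz, 24 dB/oct (finger noise, panned right)
    bus 3: 160 Hz - 2.4 kHz band (main guitar sound, centered)"""
    b1, a1 = butter(4, 400, btype="low", fs=sr)     # 4th order = 24 dB/oct
    b2, a2 = butter(4, 1000, btype="high", fs=sr)
    b3, a3 = butter(2, [160, 2400], btype="band", fs=sr)
    return (lfilter(b1, a1, guitar),
            lfilter(b2, a2, guitar),
            lfilter(b3, a3, guitar))

def mix(low, high, main, low_pan=-0.5, high_pan=0.5):
    """Pan the low bus left and the high bus right; main stays centered."""
    left = low * (1 - low_pan) / 2 + high * (1 - high_pan) / 2 + main * 0.5
    right = low * (1 + low_pan) / 2 + high * (1 + high_pan) / 2 + main * 0.5
    return left, right
```

Feeding a mono guitar recording through `bus_filters` and then `mix` reproduces the pseudo-stereo image: lows lean left, finger noise leans right, and the main sound anchors the center.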
Monitor the first two buses, and set a good balance of the low and high frequencies. Then bring up the third send’s level, with its pan centered. The result should be a big guitar sound with a stereo image, but we’re not done quite yet.
The balance of the three tracks is crucial to obtaining the most realistic sound, as are the EQ frequencies. Experiment with the EQ settings, and consider reducing the frequency range of the bus with the main guitar sound. If the image is too wide, pan the low and high-frequency buses more to center. It helps to monitor the output in mono as well as stereo for a reality check.
Once you nail the right settings, you may be taken aback to hear the sound of a stereo acoustic guitar with no phase issues. The sound is stronger, more consistent, and the stereo image is rock-solid.
I like anything that kickstarts creativity and gets you out of a rut—which is what this tip is all about. And, there’s even a bonus tip about how to create a Macro to make this process as simple as invoking a key command.
Here’s the premise. You have a MIDI drum part. It’s fine, but you want to add interest with a fill in various measures. So you move hits around to create a fill, but then you realize you want fills in quite a few places…and maybe you tend to fall into doing the same kind of fills, so you want some fresh ideas.
Here’s the solution: Studio One 4.5’s new Randomize menu, which can introduce random variations in velocity, note length, and other parameters. But what’s of interest for this application is the way Shuffle can move notes around on the timeline, while retaining the same pitch. This is great for drum parts.
The following drum part has a really simple pattern in measure 4—let’s spice it up. The notes follow an 8th note rhythm; applying shuffle will retain the 8th note rhythm, but let’s suppose you want to shuffle the fills into 16th-note rhythms.
Here’s a cool trick for altering the rhythm. If you’re using Impact, mute a drum you’re not using, and enter a string of 16th notes for that drum (outlined in orange in the following image). Then select all the notes you want to shuffle.
Go to the Action menu, and under Process, choose Randomize Notes. Next, click the box for Shuffle notes (outlined in orange).
Click on OK, and the notes will be shuffled to create a new pattern. You won’t hear the “ghost” 16th notes triggering the silent drum, but they’ll affect the shuffle. Here’s the pattern after shuffling.
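Conceptually, Shuffle keeps each note’s pitch but permutes the start positions among the selected notes — which is exactly why the muted 16th-note “ghosts” matter: they contribute extra time slots for the permutation. Here’s a toy Python model (the note/tick representation is invented for illustration, not Studio One’s internal format):

```python
import random

def shuffle_notes(notes, seed=None):
    """Shuffle-style randomization sketch: each note keeps its pitch, but
    the set of start positions (including 'ghost' notes on a muted pad)
    is randomly permuted among the notes."""
    rng = random.Random(seed)
    starts = [n["start"] for n in notes]
    rng.shuffle(starts)
    return [{"pitch": n["pitch"], "start": s} for n, s in zip(notes, starts)]

# Kick/snare hits on 8ths, plus ghost 16ths on a muted pad to open up
# 16th-note positions for the shuffle (start values in 16th-note ticks).
pattern = ([{"pitch": 36, "start": s} for s in (0, 8)] +        # kick
           [{"pitch": 38, "start": s} for s in (4, 12)] +        # snare
           [{"pitch": 60, "start": s} for s in (2, 6, 10, 14)])  # ghosts
fill = shuffle_notes(pattern, seed=1)
```

Every shuffle keeps the same rhythmic grid (the same set of start times) but deals the drums onto different slots, which is why re-running it generates fill after fill.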
If you like what you hear from the randomization, great. But if not, adding a couple more hits manually might do what you need. However, you can also make the randomizing process really efficient by creating a Macro to Undo/Shuffle/hit Enter.
Create the Macro by clicking on Edit|Undo in the left column, and then choose Add. Next, add Musical Functions|Randomize. For the Argument, check Shuffle notes; I also like to randomize Velocity between 40% and 100%. The last step in the Macro is Navigation|Enter. Finally, assign the Macro to a keyboard shortcut. I assigned it to Ctrl+Alt+E (as in, End Boring Drum Parts).
With the Macro, if you don’t like the results of the shuffle, just hit the keyboard shortcut to initiate another shuffle…listen, decide, repeat as needed. (Note that you need to do the first in a series of shuffles manually, because the Macro starts with an Undo command.) It usually doesn’t take too many tries to come up with something cool, or at least something that needs only minimal modification to do what you want. Once you have a fill you like, you can erase the ghost notes.
If the fill isn’t “dense” enough, no problem. Just add some extra kick, snare, etc. hits, do the first Randomize process, and then keep hitting the Macro keyboard shortcut until you hear a fill you like. Sometimes, drum hits will end up on the same note—this can actually be useful, by adding unanticipated dynamics.
Perhaps this sounds too good to be true, but try it. It’s never been easier to generate a bunch of fills—and then keep the ones you like best.
You never know where you’ll find inspiration. As I was trying not to listen to the background music in my local supermarket, “She Drives Me Crazy” by Fine Young Cannibals—a song from over 30 years ago!—earwormed its way into my brain. Check it out at https://youtu.be/UtvmTu4zAMg.
My first thought was “they sure don’t make snare drum sounds like those any more.” But hey, we have Studio One! Surely there’s a way to do that—and there is. The basic idea is to extract a trigger from a snare, use it to drive the Mai Tai synth, then layer it to enhance the snare.
Skeptical? Check out the audio example.
ISOLATING THE SNARE
If you’re dealing with a drum loop or submix, you first need to extract the snare sound.
TWEAKING THE MAI TAI
Now the fun begins! Figure 3 shows a typical starting point for a snare-enhancing sound.
Mai Tai is the sound source of choice because of its “Character” options, which, along with the filter controls, noise Color control, and FX (particularly Reverb, EQ, and Distortion), produce a huge variety of electronic snare sounds. The Character module’s Sound and Amount controls are particularly helpful. The more you play with the controls, the more you’ll start to understand just how many sounds are possible.
BUT WAIT…THERE’S MORE!
If the snare is on a separate track, then you don’t need the Pro EQ or FX Channel. Just insert a Gate in the snare track, enable the Gate’s trigger output, and adjust the Gate Threshold controls to trigger on each snare drum hit. The comments above regarding the Attack, Release, and Hold controls apply here as well.
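If you want to see what the Gate’s trigger output is doing under the hood, here’s a minimal threshold-plus-hold trigger detector in Python. The threshold and hold values are placeholders, and a real Gate also applies Attack/Release smoothing:

```python
import numpy as np

def gate_triggers(audio, sr, threshold=0.3, hold_s=0.05):
    """Emit a trigger each time the signal crosses above `threshold`, then
    ignore further crossings for `hold_s` seconds (the Gate's Hold), so
    one drum hit produces exactly one trigger."""
    hold = int(hold_s * sr)
    triggers, last = [], -hold
    above = np.abs(audio) >= threshold
    for i in np.flatnonzero(above):
        if i - last >= hold:
            triggers.append(i)
            last = i
    return triggers
```

Each returned sample index corresponds to one note-on you’d send to the synth — the same role the Gate’s trigger output plays when it drives Mai Tai.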
Nor are you limited to snare. You can isolate the kick drum, and trigger a massive, low-frequency sine wave from the Mai Tai for those car-door-rattling kick drums. Toms can sometimes be easy to isolate, depending on how they’re tuned. And don’t be afraid to venture outside of the “drum enhancement” comfort zone—sometimes the wrong Gate threshold settings, driving the wrong sound, can produce an effect that’s deliciously “right.”
Some instruments, when compressed, lack “sparkle” if the stronger, lower frequencies compress high frequencies as well as lower ones. This is a common problem with guitar, but there’s a solution: the Compressor’s internal sidechain can apply compression to only the guitar’s lower frequencies, while leaving the higher frequencies uncompressed so they “ring out” above the compressed sound. (Multiband compression works for this too, but sidechaining can be a faster and easier way to accomplish the same results.) Frequency-selective compression can also be effective with drums, dance mixes, and other applications—like the “pumping drums” effect covered in the Friday Tip for October 5, 2018. Here’s how to do frequency-selective compression with guitar.
The compression controls are fairly critical in this application, so you’ll probably need to tweak them a bit to obtain the desired results.
If you need more flexibility than the internal filter can provide, there’s a simple workaround.
Copy the guitar track. You won’t be listening to this track, but using it solely as a control track to drive the Compressor sidechain. Insert a Pro EQ in the copied track, adjust the EQ’s range to cover the frequencies you want to compress, and assign the copied track’s output to the Compressor sidechain. Because we’re not using the internal sidechain, click the Sidechain button in the Compressor’s header to enable the external sidechain.
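Here’s a bare-bones numerical sketch of the idea: compute the gain reduction from a lowpassed copy of the signal, then apply that gain to the full-band signal. The cutoff, threshold, and ratio are illustrative, and a real compressor adds attack/release smoothing:

```python
import numpy as np

def onepole_lowpass(x, sr, cutoff_hz):
    """Simple one-pole lowpass used on the sidechain 'control' copy."""
    a = np.exp(-2 * np.pi * cutoff_hz / sr)
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1 - a) * s + a * acc
        y[i] = acc
    return y

def lf_sidechain_compress(audio, sr, cutoff_hz=250, thresh=0.2, ratio=4.0):
    """Derive gain reduction from the lowpassed sidechain level, then apply
    that gain to the full-band signal: lows get squeezed, highs ring out."""
    env = np.abs(onepole_lowpass(audio, sr, cutoff_hz))
    gain = np.ones_like(env)
    over = env > thresh
    gain[over] = (thresh + (env[over] - thresh) / ratio) / env[over]
    return audio * gain
```

Because the detector only “hears” the lows, a big open low string pulls the level down while high-frequency sparkle sails over the compression untouched.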
The bottom line is that “compressed” and “lively-sounding” don’t have to be mutually exclusive—try frequency-selective compression, and find out for yourself.
Jeff Timmons is a singer, songwriter, producer, and founding member of the Grammy-nominated, iconic ’90s pop group 98 Degrees! The group has six studio albums, has sold over 10 million records worldwide, and has eight Top 40 singles in the US. Additionally, Jeff has worked on numerous other projects, including two solo albums, and has continued to establish himself in the music industry as a producer. We connected with Jeff on Twitter and discovered his love for Studio One. We recently had the opportunity to catch up with him and ask a few questions about his work and Studio One.
Give us some background on yourself. How long have you been making music?
We started 98 Degrees way back in 1995. We signed to Motown in ‘96, then were upstreamed to Universal shortly after. I’ve been producing and engineering since ‘99.
How has the music industry changed since your early days?
The obvious is the digital streaming component. It’s completely changed the game. There is a lot less artist development, unfortunately. But, your ability to be virally prolific is exponential and amazing.
Do you ever get sick of talking about 98 Degrees?
Not at all. Being a part of something like that has been a complete blessing. We’re very fortunate to have an amazing fanbase and to still be selling out shows 25 years later.
Describe the first time you wrote a song?
I first started writing songs in high school and I didn’t get into production until I built my first rig in the late 90s. I had all of this massive hardware in a road case and would cart it around from city to city, and back and forth from the tour bus. Wow, how times have changed!
Who has been an influence in your life?
From a production standpoint, everyone from Babyface, Max Martin, Anders Bagge, Dr. Luke, Benny Blanco, Timbaland… it’s a long list!
Have you ever wanted to give up on music?
A million times. Everyone knows it’s a hard business.
What keeps you going?
My love and passion for creating and playing with sounds won’t let me give up on it.
What do you like about Studio One?
The ease of use and GUI is amazing. The drag and drop of synths and VSTs, the new key detection feature, sequencing… these are all incredible features.
When did you first hear about Studio One?
I heard about it when it first came out. I’m always looking to get better, and my friend Dominic Rodriguez, whom I really trust and who is prolific in the K-pop space, suggested I try it. I didn’t waste any time and joined. He was right! Learning a new DAW is like learning a new language.
What features are you most impressed with in Studio One?
The ease of use, and how quickly I can get things laid out. Again, the new key detection is amazing. The fact that I can then change the key to match on all of the tracks in a non-destructive way is just mind-blowing. I recommend it to everyone.
Any user tips or tricks or interesting stories based on your experience with Studio One?
I love how you can combine virtual instruments on single tracks. That’s incredible to me.
How easy/difficult was Studio One to learn?
I’m still learning all of the tricks and features because there are so many, but it didn’t take me long to start flying with it.
Where do you go for inspiration?
I get inspired by a lot of things. I’ll hear a new song, a riff or beat or melody of an old one, or a new idea will just pop into my head.
Recent projects? What’s next for you?
I’m working with a number of projects. I did all of the music for a show on Discovery Science called “Droned.” I’m working with a new hip-hop artist, a male vocal group called Overnight, and a young female pop sensation named Nicole Michelle.
Starting today and ending April 30, 2019, buy one AIR Loudspeaker and get the second for half off!
This offer includes the following:
This offer begins today, ends April 30, and is available in the US and Canada ONLY!
Of course we’ll always go with PreSonus. But you don’t have to take our word for it. Watch John Tendy explain why the AIR 10s are perfect for him.
I was never a big fan of MIDI guitar, but that changed when I discovered two guitar-like controllers—the YRG1000 You Rock Guitar and Zivix Jamstik. Admittedly, the YRG1000 looks like it escaped from Guitar Hero to seek a better life, but even my guitar-playing “tubes and Telecasters forever!” compatriots are shocked by how well it works. And Jamstik, although it started as a learn-to-play guitar product for the Mac, can also serve as a MIDI guitar controller. Either one has more consistent tracking than MIDI guitar retrofits, and no detectable latency.
The tradeoff is that they’re not actual guitars, which is why they track well. So, think of them as alternate controllers that take advantage of your guitar-playing muscle memory. If you want a true guitar feel, with attributes like actual string-bending, there are MIDI retrofits like Fishman’s clever TriplePlay, and Roland’s GR-55 guitar synthesizer.
In any case, you’ll want to set up your MIDI guitar for best results in Studio One—here’s how.
Poly vs. Mono Mode
MIDI guitars usually offer Poly or Mono mode operation. With Poly mode, all data played on all strings appears over one MIDI channel. With Mono mode, each string generates data over its own channel—typically channel 1 for the high E, channel 2 for B, channel 3 for G, and so on. Mono mode’s main advantage is you can bend notes on individual strings and not bend other strings. The main advantage of Poly mode is you need only one sound generator instead of a multi-timbral instrument, or a stack of six synths.
In terms of playing, Poly mode works fine for pads and rhythm guitar, while Mono mode is best for solos, or when you want different strings to trigger different sounds (e.g., the bottom two strings trigger bass synths, and the upper four a synth pad). Here’s how to set up for both options in Studio One.
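The string-to-channel convention and the split-sounds idea can be summarized in a few lines of Python. The patch names and the router function itself are hypothetical, purely for illustration — the actual routing happens in Studio One’s track input settings:

```python
# Conventional Mono-mode mapping: one MIDI channel per string,
# channel 1 = high E down to channel 6 = low E.
STRING_TO_CHANNEL = {"E4": 1, "B3": 2, "G3": 3, "D3": 4, "A2": 5, "E2": 6}

def route_note(string_name, note,
               low_string_patch="bass", high_string_patch="pad"):
    """Toy router: the bottom two strings drive a bass patch, the upper four
    a synth pad, as in the split example from the text. Patch names are
    placeholders, not real instrument presets."""
    channel = STRING_TO_CHANNEL[string_name]
    patch = low_string_patch if channel >= 5 else high_string_patch
    return {"channel": channel, "note": note, "patch": patch}
```

In Studio One terms, each returned channel number corresponds to one Instrument track whose input is filtered to that MIDI channel.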
Note that you can change these settings any time in the Options > External Devices dialog box by selecting your controller and choosing Edit.
Choose Your Channels
For Poly mode, you probably won’t have to do anything—just start playing. With Mono mode, you’ll need to use a multitimbral synth like SampleTank or Kontakt, or six individual synths. For example, suppose you want to use Mai Tai. Create a Mai Tai Instrument track, choose your MIDI controller, and then choose one of the six MIDI channels (Fig. 2). If Split Channels isn’t selected, you won’t see an option to choose the MIDI channel.
Next, after choosing the desired Mai Tai sound, duplicate the Instrument track five more times, and choose the correct MIDI channel for each string. I like to Group the tracks because this simplifies removing layers, turning off record enable, and quantizing. Now record-enable all tracks, and start recording. Fig. 3 shows a recorded Mono guitar part—note how each string’s notes are in their own channel.
To close out, here are three more MIDI guitar tips.
MIDI guitar got a bad rap when it first came out, and not without reason. But the technology continues to improve, dedicated controllers overcome some of the limitations of retrofitting a standard guitar, and if you set up Studio One properly, MIDI guitar can open up voicings that are difficult to obtain with keyboards.
In Mono mode with Mai Tai (or whatever synth you use), set the number of Voices to 1 for two reasons. First, this is how a real guitar works—you can play only one note at a time on a string. Second, this will often improve tracking in MIDI guitars that are picky about your picking.
Some people think colorization is frivolous—but I don’t. I started using colorization when writing articles, because it was easy to identify elements in the illustrations (e.g., “the white audio is the unprocessed sound, the blue audio is compressed”). But the more I used colorization, the more I realized how useful it could be.
Customizing the “Dark” and “Light” Looks
Although a program’s look is usually personal preference, sometimes it’s utilitarian. When working in a video suite, the ambient lighting is often low, so that the eye’s persistence of vision doesn’t influence how you perceive the video. For this situation, a dark view is preferable. Conversely, those with weak or failing vision need a bright look. If you’re new to Studio One, you might want the labels to really “pop” but later on, as you become more familiar with the program, darken them somewhat. You may want a brighter look when working during daytime, and a more muted look at night. Fortunately, you can save presets for various looks, and call up the right look for the right conditions (although note that there are no keyboard shortcuts for choosing color presets).
You’ll find these edits under Options > General > Appearance. For a dark look, move the Background Luminance slider to the left and for a light look, to the right (Fig. 1). I like -50% for dark, and +1 for light. For the dark look, setting the Background Contrast at -100% means that the lettering won’t jump out at you. For the brightest possible look, bump the Background Contrast to 100% so that the lettering is clearly visible against the other light colors, and set Saturation to 100% to brighten the colors. Conversely, to tone down the light look, set Background Contrast and Saturation to 0%.
Hue Shift customizes the background of menu bars, empty fields that are normally gray, and the like. The higher the Saturation slider, the more pronounced the colorization.
The Arrangement sliders control the Arrangement and Edit view backgrounds (i.e., what’s behind the Events). I like to see the vertical lines in the Arrangement view, but also keep the background dark. So Arrangement Contrast is at 100%, and Luminance is the darkest possible value (around 10%) that still makes it easy to see horizontal lines in the Edit view (Fig. 2).
Streamlining Workflow with Color
With a song containing dozens of tracks, it can be difficult to identify which Console channel strip controls which instrument, particularly with the Narrow console view. The text at the bottom of each channel strip helps, but you often need to rename tracks to fit in the allotted space. Even then, the way the brain works, it’s easier to identify based on color (as deciphered by your right brain) than text (as deciphered by your left brain). Without getting too much into how the brain’s hemispheres work, the right brain is associated more with creative tasks like making music, so you want to stay in that mode as much as possible; switching between the two hemispheres can interrupt the creative flow.
I’ve developed standard color schemes for various types of projects. Of course, choose whatever colors work for you; for example, if you’re doing orchestral work, you’d have a different roster of instruments and colors. With my scheme for rock/pop, lead instruments use a brighter version of a color (e.g., lead guitar bright blue, rhythm guitar dark blue).
Furthermore, similar instruments are grouped together in the mixer. So for vocals, you’ll see a block of green strips, for guitar a block of blue strips, etc. (Fig. 3)
To colorize channel strips, choose Options > Advanced tab > Console tab (or click the Console’s wrench icon) and check “Colorize Channel Strips.” This colorizes the entire strip. However, if you find colorized strips too distracting, the name labels at the bottom (and the waveforms in the arrange view) are always colored according to your choices. Still, when the Console faders are extended to a higher-than-usual height, I find it easier to grab the correct fader with colored console strips.
In the Arrange view, you can colorize the track controls as well—click on the wrench icon, and then click “Colorize Track Controls.” Although this sometimes feels like too much color, it makes identifying tracks easier (especially with the track height set to something narrow, like Overview).
Color isn’t really a trivial subject, once you get into it. It has helped my workflow, so I hope these tips serve you as well.