PreSonus Blog

Monthly Archives: April 2021


LCR Mixing and Panning Explained

Lately, it seems there’s an increasing buzz about “LCR” mixing. LCR stands for Left, Center, and Right, and it’s a panning technique where all panpots are set to either left, center, or right—nothing in between. Look it up on the internet, and you’ll find polarized opinions that vary from it’s the Holy Grail of mixing, to it’s ridiculous and vaguely idiotic. Well, I’m not polarized, so I’ll give you the bottom line: it can work well in some situations, but not so well in others.

Proponents of this style of mixing claim several advantages:

  • The resulting mixes sound very wide without having to use image processing, because there’s so much energy in the sides.
  • It simplifies mixing decisions, because you don’t have to agonize over stereo placement.
  • Mixes translate well for those not sitting in stereo’s “sweet spot,” because the most important material is panned to the center.
  • It forces you to pay attention to EQ and the arrangement, to make sure there’s good differentiation among instruments panned hard left and hard right.
  • If an LCR mix leaves “holes” in the stereo field, then you can use reverb or other stereo ambience to fill that space. As one example, stereo overhead mics on drums can pan hard left and hard right, yet still fill in a lot of the space in the middle. Or, place reverb in the channel opposite of where a signal is panned.
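
For the technically inclined, the difference between conventional panning and LCR is easy to sketch in code. Below is a minimal numpy illustration (not anything from Studio One): a standard constant-power pan law, with LCR as the special case where the pan position snaps to hard left, center, or hard right. The ±0.5 snap threshold is an arbitrary choice for the example.

```python
import numpy as np

def pan(mono, position):
    """Constant-power panning.

    position: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns an (n, 2) stereo array.
    """
    theta = (position + 1.0) * np.pi / 4.0  # map -1..+1 onto 0..pi/2
    return np.column_stack([mono * np.cos(theta), mono * np.sin(theta)])

def lcr_pan(mono, position):
    """LCR panning: snap any pan position to L, C, or R before panning."""
    snapped = -1.0 if position < -0.5 else (1.0 if position > 0.5 else 0.0)
    return pan(mono, snapped)
```

With conventional panning, a position of 0.3 places a track slightly right of center; with LCR, it lands dead center.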

There are plenty of engineers who prefer LCR mixes for the reasons given above. However, LCR is not a panacea, nor is it necessarily desirable. It also may not fit an artist’s goal. For those who think of music in more symphonic terms—as multiple elements creating a greater whole, to be listened to under optimal conditions—the idea of doing something like panning the woodwinds and brass far left and the violins full right, with orchestral percussion and double bass in the middle, makes no sense. Conversely, if you’re doing a pop mix where you want every element to be distinct, an LCR approach can work well, if done properly.

Then again, some engineers consider a mix to be essentially a variation on mono, because the most important elements are panned to center. They don’t want distractions on the left and right; those elements exist to provide a “frame” around the center.

Another consideration: according to all the stats I’ve seen, more people these days listen on headphones than on component-system speakers. LCR mixing can sound great at first on headphones due to the novelty, but eventually becomes unnatural and fatiguing. Then again, as depressing a thought as this may be, a disturbingly large part of the population listens to music on computer speakers. Any panning nuances are lost under those conditions, whereas LCR mixing can sound direct and unambiguous.

 

Help Is on the Way!

So what’s a mix engineer to do? Well, a good way to get familiar with LCR is to load up some of your favorite songs into Studio One, and listen to the mid and sides separately. Hearing instruments in the sides tends to imply an LCR mix; Madonna’s “Ray of Light” comes to mind. For a “pure” LCR mix, listen to the original version of Cat Stevens’ “Matthew and Son” on YouTube. It was recorded in 1966 (trivia fans: John Paul Jones, later of Led Zeppelin, played bass). Back then, the limited number of tracks and mixing-console limitations almost forced engineers into LCR. In case you’ve ever wondered why some songs of that era had the drums in one channel and the bass in the opposite channel…now you know.

Anyway, it’s easy to do mid-side analysis in Studio One (Fig. 1).

Figure 1: Setup for analyzing mid and side components of music.

The Mixtool, with MS Transform selected, encodes a stereo signal into mid (left channel) and sides (right channel). However, it’s difficult to do any meaningful analysis with the mid in one ear and the sides in the other. So, the Dual Pan’s Input Balance control chooses either the mid <L> or sides <R>. The panpots place the chosen audio in the center of the stereo field.

Once you start finding out whether your favorite songs are LCR or mixed more conventionally, it will help you decide what might work best for you. If you decide to experiment with LCR mixing, bear in mind that it kind of lives by its own rules, and it takes some experience to wrap your head around how to get the most out of it.

 

And the Verdict Is…

Well, you can believe whatever you like from what you see on the internet, and more importantly, choose what sounds best to you…but this is my blog post, so here’s what I think. Any and all comments are welcome!

As mentioned in a previous blog post, I always start mixes in mono. I feel this is the best way to find out if sounds mask each other, whether some tracks are redundant because they don’t contribute that much to the arrangement, and which tracks need EQ so they can carve out their own part of the frequency spectrum. That way, whether instruments are on top of each other or spread out, they’ll work well together.

But from there on, I split my approach. I still favor the center and use the sides as a frame, but also selectively choose particular elements (usually rhythm guitar, keyboards, and percussion) to pan off to the left or right so there’s a strong presence in the sides. For me, this gives the best of both worlds: a wide mix with good separation of various elements, but done in service of creating a full mix, without holes in the stereo field. Those who listen on headphones won’t be subjected to an over-exaggerated stereo effect, while those who listen over speakers will have a less critical “sweet spot” than if there was nuanced panning.

I came up with this approach simply because it fits the kind of music I make, and the way I expect most people will listen to it. Only later did I find out I had combined LCR mixing with a more traditional approach, and that underscores the bottom line: all music is different, and there are few—if any—“one-size-fits-all” rules.

Well, with the possible exception of “oil the kick drum pedal before you press record.”

StudioLive and Studio One at the Oscars… In Iceland!

As part of this year’s Academy Awards, the song “Húsavík” from Eurovision Song Contest: The Story of Fire Saga (nominated for the Best Music (Original Song) award) was featured during the pre-show event.

This breathtaking performance, set in Húsavík, Iceland (the same setting and filming location as the movie), featured Swedish pop star Molly Sandén, the original singer of the track, backed by a girls’ choir from the town of Húsavík. On top of that, the stage and performance were set on the actual docks of beautiful, mountain-swathed Húsavík, with a full crew, rigging, and lighting. The sound for the performance, including routing, playback, recording, and monitoring, came courtesy of a StudioLive 24 Series III mixer and Studio One 5. After four days of setup, the performance was finally recorded live the day before airing at the awards ceremony. Oh, and all of this took place outside… in 4°C weather—not accounting for windchill.

– Big thanks to Trausti M. Ingólfsson who works for Tónabúðin, our distributor in Iceland, for giving us the report on this wonderful event.

Check out the StudioLive Series III mixer and Studio One 5 in action along with a couple of screenshots of the performance:

Watch the full performance here:

Exploring PreSonus Sphere Membership Products with Jacob Lamb

When you join PreSonus Sphere, your membership comes with Studio One Professional and Notion, plus the native software instruments, effects, and plug-ins that PreSonus offers.

Studio One Professional is a powerful and intuitive DAW that works for you – made only more powerful by the full catalog of plugins – while Notion is an easy way to create full scores, sheet music for individual instruments, or guitar tabs and chord charts. Sphere members can also enjoy ongoing software upgrades when new versions are released.

In this Sphere episode, Jacob takes us through a demo of the “Products” tab, and all that is included.

Join PreSonus Sphere today! Only $14.95 per month for Studio One Professional, Notion, and so much more.


Follow Jacob on Instagram

Mid-Side Meets Reverb

The post on using mid-side processing with the CTC-1 garnered a good response, so let’s follow up with one of my favorite mid-side techniques: M-S reverb.

 

To recap, mid-side processing separates sounds in the center of a stereo file from sounds panned to the sides, processes them individually, then puts them back together again into stereo. It isn’t a perfect separation, because the mid is the sum of the left and right channels. Although this boosts the center somewhat, the mid still includes the sides. However, the side channel is quite precise, because it’s derived from putting the right and left channels out of phase—so the center cancels.
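
In code, the recap above looks like this. Here’s a minimal numpy sketch (not what Studio One’s Mixtool does internally, though the math is the same idea): mid is the sum of left and right, side is the difference, and summing/differencing again reconstructs the original channels. The 0.5 scale factor is one common convention; implementations vary.

```python
import numpy as np

def ms_encode(left, right):
    """Encode L/R into mid (sum) and side (difference)."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side):
    """Decode back to L/R: summing and differencing again undoes the encode."""
    return mid + side, mid - side
```

Note that a signal panned dead center (identical in both channels) cancels completely out of the side channel, which is why M-S gives such a clean handle on the sides.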

 

Applying Mid-Side Reverb

 

Before getting into how to make M-S reverb, here’s why it’s useful. Some productions have an overall reverb to provide ambiance, and a second reverb (often plate) dedicated to the vocal. The vocal is usually mixed to center, so it’s competing for space with the bass, snare, and kick. If they’re contributing to the overall reverb, and the vocal is creating its own reverb, that’s a lot of reverb in the center.

 

One popular fix is adding a highpass filter prior to the overall reverb, set to around 300 Hz. This keeps the bass and kick from muddying the reverb. However, it doesn’t take care of midrange or high-frequency sounds that are panned to center, like snare. These can compete even more with the vocal if they’re in the same frequency range.
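
To illustrate the idea, here’s a bare-bones one-pole highpass sketch in numpy (this is not Studio One’s filter, just an illustration): it passes the midrange and highs while rolling off content below the cutoff, so the lows never reach the reverb.

```python
import numpy as np

def one_pole_highpass(x, sample_rate=44100, cutoff_hz=300.0):
    """Very simple one-pole highpass: y[n] = a * (y[n-1] + x[n] - x[n-1]).

    Rolls off at 6 dB/octave below cutoff_hz. Inserted before a reverb
    send, it keeps bass and kick energy out of the overall reverb.
    """
    a = 1.0 / (1.0 + 2.0 * np.pi * cutoff_hz / sample_rate)
    y = np.zeros_like(x, dtype=float)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y
```

A real pre-reverb filter would typically be steeper, but even this gentle slope keeps sustained low end out of the reverb tail.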

 

While some reverbs let you tailor high- and low-frequency reverb times with a crossover, this doesn’t cover all the processing you might want to do, nor does Studio One’s Room Reverb include these parameters. Mid-side reverb, with different reverbs on the mid and sides, is a more flexible solution for customizing an overall reverb ambiance.

 

 

Assembling the Mid-Side Reverb

 

Download the FX Chain, or if you want to roll your own, start by dragging the MS-Transform FX Chain into a bus (of course, this also works for individual channels). Then drag a Room Reverb into each split (Fig. 1). The default reverb preset is a good place to start, but if the FX Chain is in a bus, remember to set the Mix controls for 100% wet sound. I also like to insert a Binaural Pan after the second MixTool to widen the overall stereo image.

 

Figure 1: Mid-Side Reverb FX Chain, which adds two Room Reverbs and a Binaural Pan to the MS-Transform FX Chain.

 

The reverb on the left handles the center, while the reverb on the right processes the sides. Lower the fader after the left reverb; Fig. 1 shows -6 dB, but adjust to taste. This alone will open up some space in the center for your vocal and its reverb. However, where this effect really comes into its own is when you tweak the reverb parameters for each reverb. For example…

 

  • If you still want reverb on the kick and low end, vary the mid reverb’s Length parameter. Shorter lengths tighten the kick more, while longer lengths give that Kick of Doom reverb sound.
  • Increase Length on the sides for a more atmospheric reverb sound.
  • Increase pre-delay on the sides, to make space for attacks on the vocal track. Consonants benefit from the extra clarity.
  • For this application, Eco mode sounds fine but try HQ as well.
  • Turn up the Binaural Pan after the second Mixtool. I often turn it up all the way, because it sounds great in stereo, and there aren’t any phasey issues if the output collapses to mono.

 

By adjusting the two reverbs, you can sculpt them to give the desired overall reverb sound. If you then place a vocal in the center with a sweet plate, I think you’ll find that the vocal and overall reverb create a smooth, differentiated, and conflict-free reverb effect.

Learning More in PreSonus Sphere with Jacob Lamb

With a PreSonus Sphere membership you get access to exclusive masterclasses in the “Learn” section.

Here you can dive into practical recording topics from industry professionals, covering recording tips, manipulating compression, perfecting EQ on a track, general mixing/mastering techniques, and more! Beyond the recording side, you can explore PreSonus Sphere product-specific videos like dialing in your guitar tone with Ampire, navigating Studio One and learning Notion.

In this episode, Jacob shows us the layout of the “Learn” tab, and how to navigate this “one stop shop” of classes.

Join PreSonus Sphere today! Only $14.95 per month for Studio One Professional, Notion, and so much more.


Follow Jacob on Instagram

No Downtime with Studio One and Yang Tan

Yang Tan is the Founder and Engineer for Absolute Magnitude Entertainment (AME) Records, based in Los Angeles, California.

Her client list includes: YG, Jackson Wang, Nipsey Hussle, Bia, Kris Wu, Migos, A$AP Ferg, J Cole, Kanye West, Maddi Jane and Kid Cudi… to name a few. Let’s find out more about this rising young creative who paints with sound, and also happens to be a PreSonus Sphere member—and a featured artist as well!


I am from Guangzhou, China, a metropolitan city close to Hong Kong where many imports and exports occur. As a child, I didn’t have the luxury of accessing music at my fingertips, like I do now. I remember going to secret spots on the weekends to pick out records among piles and piles of CDs with broken cases, which were smuggled in from overseas and damaged by customs. My mom had a Sony stereo set with a CD player and two cassette slots… it was pretty fancy in the ’90s. I was obsessed with recording my favorite songs to cassette tapes. And then my mom bought a Walkman that could record through its built-in mic—I figured out how to play music in the background with my mom’s stereo and record bedtime stories I wrote. I paid for all of those CDs, but none of the profits went to the creators.

My family wanted me to follow in their footsteps and become a visual artist or a designer, but I was already obsessed with music. I always wanted to play the piano. So at the age of 16, I decided to pursue music secretly. I found two incredible music teachers on the Internet and started taking lessons, unbeknownst to my family. I learned how to read, play, and study music with strict and intense classical training. It was really difficult at the time because I didn’t know if anything would come from it, and I had to make money on the side to pay for the lessons. Looking back, I’m glad I took that risk. It was totally worth it. A year later, I was accepted to the Communication University of China, the best music and technology program in China, to study music. My music career began. 

The next part of my journey called for a relocation to the States, so I moved to Los Angeles after college. I started at Paramount Recording Studios and climbed the career ladder there. The learning never stops in Los Angeles; every day I pick up something new and practice until it becomes a habit. I am so inspired by the music culture in L.A.; everyone I meet is just so talented, driven, and inspiring. You don’t have to learn how to read music to be able to create music. How it sounds and how it connects with people is the most important part of the business.

The PreSonus audio products that I’ve been using are the StudioLive 16.0.2 digital mixer and their award-winning Digital Audio Workstation (DAW) software Studio One Professional, which I mostly use for producing.

Its ease of use, flexibility and Macros are among the top features that led me to choose working creatively in this environment. Other DAWs usually require third-party software to program Macros, whereas with Studio One it is integrated natively as part of the DAW workflow itself.

Another particularly useful Studio One feature is the ARA integration (with Melodyne pitch correction) in the Studio One engine. It saves so much time, and I can edit vocal audio clips in real time at any stage of the process.

I love the quick-nudge capability inside of an audio clip. It’s a fast workflow and I don’t have to clean up the edit point or cross-fades every time I make an edit.

In short, Studio One flows really well, it’s quick and intuitive. No downtime for creativity. Truly amazing!


PreSonus Sphere Members: check out Yang Tan’s exclusive Vocal FX Chain Presets here

Visit her Website | Instagram

Recording ReWired Programs

I had a bunch of legacy Acid projects from my pre-Studio One days, as well as some Ableton Live projects that were part of my live performances. With live performance a non-starter for the past year, I wanted to turn them into songs, and mix them in Studio One’s environment.

 

Gregor’s clever video, Ableton Live and Studio One Side-by-Side, shows how to drag-and-drop files between Live and Studio One. But I didn’t want individual files; I needed entire tracks…including ones I could improvise in real time with Live. The obvious answer is ReWire, since both Acid and Live can ReWire into Studio One. However, you can’t record what comes into the Instrument tracks used by ReWire. Nor can you bounce the ReWired audio, because there’s nothing physically in Studio One to bounce.

 

It turned out the answer is temporarily messy—but totally simple. First, let’s refresh our memory about ReWire. 

 

Setting Up ReWire

 

Start by telling Studio One to recognize ReWire devices. Under Options > Advanced > Services, make sure ReWire Support is enabled. In Studio One’s browser, under the Instruments tab, open the ReWire folder. Drag in the program you want to ReWire, the same way you’d drag in an instrument. (Incidentally, although you’re limited to dragging in one instance of the same ReWire client, you can ReWire two or more different clients into Studio One. Suitable clients include Live, Acid Pro, FL Studio, Renoise, Reason before version 11, and others.)

 

After dragging in Ableton Live, open it. ReWired clients are supposed to open automatically, but that’s not always the case.

 

Now we need to patch Live and Studio One together. In Ableton Live, for the Audio To fields, choose ReWire Out, and a separate output bus for each track. In my project, there were 9 stereo tracks (Fig. 1).

Figure 1: Assign Ableton Live’s ReWire outputs to buses. These connect to Studio One as track inputs.

 

Then, expand the Instrument panel in Studio One, and check all the buses that were assigned in Ableton Live. This automatically opens up mixer channels to play back the audio (Fig. 2). However, the mixer channels can’t record anything, so we need to go further.

Figure 2: Ableton Live loaded into Studio One, which treats Ableton Live like a virtual instrument with multiple outputs. 

 

Recording the ReWired Program

 

As mentioned, the following is temporarily messy. But once you’ve recorded your tracks, you can tidy everything up, and your Live project will be a Studio One project. (Note that I renamed the tracks in Studio One as 1-9, so I didn’t have to refer to the stereo bus numbers in the following steps.) To do the recording:

 

  1. In each Studio One track, go to its Send section and choose Add Bus Channel. Now we have Buses 1-9—one for each track.
  2. Our original instrument tracks have served their purpose, so we can hide them to avoid screen clutter. Now Studio One shows 9 buses (Fig. 3).

Figure 3: The buses are carrying the audio from Ableton Live’s outputs.

 

  3. Create 9 more tracks in Studio One (for my project, these were stereo). Assign each track input to an associated bus, so that each of the 9 buses terminates in a unique track. Now we can hide the bus tracks, and record-enable the new tracks to record the audio (Fig. 4).

Figure 4: Studio One is set up to record the audio from Ableton Live.

 

  4. Now you’re ready to record whatever is in Ableton Live over to Studio One, in real time.
  5. Fig. 5 shows the results of unhiding everything, narrowing the channels, and hitting play. At this point, if everything transferred over correctly, you can delete the ReWired tracks, remove the buses they fed, close Ableton Live, and you’re left with all the Live audio in Studio One tracks. Mission accomplished!

Figure 5: The Ableton Live audio has completed its move into Studio One. Now you can delete the instrument and bus channels you don’t need any more, close Ableton Live, return the U-Haul, and start doing your favorite Studio One stuff to supplement what you did in Live. Harmonic Editing, anyone?

 

Bonus tip: This is also the way to play Ableton Live instruments in real time, especially through Live’s various tempo-synched effects, while recording them in Studio One. And don’t forget about Gregor’s trick of moving Studio One files over to Live—this opens up using Live’s effects on Studio One tracks, which you can then record back into Studio One, along with other tracks, using the above technique.

 

Granted, I use Studio One for most of my multitrack projects. But there’s a lot to be gained by becoming fluent in multiple programs.   

 

How to use PreSonus Sphere Workspaces with Jacob Lamb

If you’re a musician working with other artists, or working alone and trying to keep your folders organized and neat, the PreSonus Sphere Workspaces tab is an indispensable tool for file sharing and organization.

Collaboration has never been easier; share whole songs or individual instrument stems, with quick listening right inside the Workspaces page—no downloading needed! In this Sphere episode, Jacob Lamb takes us through some of his thoughts on how Workspaces can be utilized in his studio, for both song creation and teaching students.

 

Join PreSonus Sphere today! Only $14.95 per month for Studio One Professional, Notion, and so much more.


Follow Jacob on Instagram

Mid-Side Meets the CTC-1

I’ve often said it’s more fun to ask “what if I…?” than “how do I?” “What-if” is about trying something new, while “how do I” is about re-creating something that already exists. Well, I couldn’t help but wonder “what if” you combined the CTC-1 with mid-side processing, and sprinkled on a little of that CTC-1 magic? Let’s find out. (For more information on mid-side processing, check out my blog post Mid-Side Processing Made Easy. Also, note that only Studio One Professional allows using Mix Engine FX.)

 

One stumbling block is that the CTC-1 is designed to be inserted in a bus, and the Mid-Side Transform FX Chain won’t allow inserting Mix Engine FX. Fortunately, there’s a simple workaround (see Fig. 1).

 

  1. Copy the stereo track you want to process, so you have two tracks with the same stereo audio. One will provide the Mid audio, and the other, the Sides audio.
  2. Insert an MS-Transform FX Chain into each track (you’ll find this FX Chain in the Browser’s Mixing folder, under FX Chains).
  3. Create a bus for each track.
  4. Assign each track output to its own bus (not the main out). However, the bus outputs should go to the Main out.
  5. Add a CTC-1 Mix Engine FX in each bus.

Figure 1: Setup for adding mid-side processing with the CTC-1 to a mixed stereo file.

 

  6. To dedicate one bus to the mid audio, and the other to the sides, open up the Splitters in the MS-Transform FX Chains.
  7. Mute the sides output for the Mid track (top of Fig. 2, outlined in orange). Then, mute the mid output for the Sides track (bottom of Fig. 2, also outlined in orange).

Figure 2: One bus is Mid only, the other is Sides only.

 

Now you can add the desired amount of CTC-1 goodness to the mids and sides. And of course, you can vary the bus levels to choose the desired proportion of mid and sides audio.

 

Audition Time!

 

The following example is an excerpt from the original file, without the CTC-1.

Next up, CTC-1 with the Custom option on the Mid, and the Tube option on the Sides. Fig. 3 shows their settings—a fair amount of Character, and a little bit of Drive.

Figure 3: CTC-1 settings for the audio example.

If you didn’t hear much difference, try playing Audio Example 1 again after playing Audio Example 2. Sometimes it’s easier to tell when something’s missing, compared to when something’s been added.

 

The more you know about the CTC-1, the more effectively you can use it. The bottom line is I now know the answer to my “what if” question: get some buses into the picture, and the CTC-1 can be hella good for processing mid and sides!

 

Notion iOS 2.6 Release Notes

Notion iOS 2.6 Maintenance Release

A maintenance update is now available for Notion iOS, the best-selling notation app on iOS. This is a free update for Notion iOS owners that can be obtained by visiting Notion in the App Store on your device, or checking your available updates in the App Store.

All the changes are below. And while you’re here, please join us at our new official Facebook user group for news, tips and community support: https://www.facebook.com/groups/PreSonusNotionUsers

 


All Fixes and Enhancements:

Improvements:

  • Automatic rest groupings improved for:
    • MIDI import
    • Realtime MIDI record
    • Fill with Rests tool
  • MusicXML import of verse information from Sibelius improved
  • User Guide and Handwriting Help .PDFs now shown in device browser

Fixes:

  • Spacebar in Text Box now works as expected if lyrics have been previously entered
  • Fix for occasional issue when changing guitar tab numbers
  • Shown ranges for Viola (Section), Cello (Section) and Bass (Section) have been corrected
  • Slash chord playback of enharmonic chords e.g. G#, D#, E#, A# now sounds as expected
  • Issue fixed with cross-staff beamed triplets that have glissandi
  • Fixed crash when cross-staff beaming the first pitch of the first chord of a tuplet
  • Final barline no longer breaks multi-measure rest

General:

  • General stability fixes
  • Minimum requirements: Apple iOS 9 or higher (please note, Notion 2.x will be the final version to support iOS 9 and iOS 10)