PreSonus Blog

Track Matching with the Project Page

Okay, this is an unusual one. Please fasten your seat belts, and set your tray tables to the upright and locked positions.


Personal bias alert: With pop and rock music, for me it’s all about vocals, drums, and bass. Vocals tell the story, drums handle the rhythm, and bass holds down the low end, while the other instruments weave in and out of the mix. For a given collection of songs (formerly known as an “album”), I want all three elements to be relatively consistent from one song to the next—and that’s what this week’s tip is all about.


It’s fantastic that you can flip back and forth between the Project page and a Song that’s been added to the Project page, make tweaks to the Song, then migrate the updated Song back to the Project page. But it’s even better when you can make the most important changes earlier in the process, before you start down the final road of mastering.


Here’s a way to match bass and vocal levels in a collection of songs. This takes advantage of the Project page, but isn’t part of the mastering process itself. Instead, you’ll deploy this technique when the mix is in good shape—it has all the needed processing, automation, etc.—but you want a reality check before you begin mastering.


We’ll cover how to match vocal levels for the songs; bass works similarly, and in some ways even more effectively. Don’t worry, I’m not advocating robo-mixing: a mathematically correct level is not the same thing as an artistically correct level, so you may still need to change levels later in the process. But this technique lets the voice and bass start from a “level” playing field. If you then need to go back and tweak a mix, you can keep the voice and bass where they are, and work the mix around them.


(Note that it’s important to know what the LUFS and LRA metering in the Project page represent. Rather than make this tip longer, I’ll refer you to this article I wrote for inSync magazine for a complete explanation of LUFS and LRA.)


  1. Create a test folder, and copy all your album’s Songs into it. Because this tip is about a diagnostic technique, you don’t want to overwrite your work-in-progress songs.
  2. Create a new test Project.
  3. Open a copied Song, remove any master bus processing, and choose Add to Project for the test Project. Add all the other songs on the album to the test Project. Do not normalize the songs within the test Project.
  4. Open the Loudness Information section for each song, and select the Post FX tab. Adjust each song’s individual level fader (not the master fader) so all songs have the same LUFS reading, then save the Project. The absolute LUFS value doesn’t matter; just choose a target, like -20 LUFS. When adjusting levels, 1 dB of level change alters the LUFS reading by 1 LU. For example, if a song registers -18.4 LUFS, decrease its level by 1.6 dB to reach -20 LUFS. Click Update Loudness and re-check as needed until the readings match. (To double-check this math outside the DAW, see the sketch after this list.)
  5. Choose a Song to edit (click on the wrench next to the song title). When the Song opens, solo only the vocal track. Then choose Song > Update Mastering File. Note: If a dialog box says the mastering file is already up to date, just change a fader on one of the non-soloed tracks, and try again. After updating, choose View > Projects to return to the test project.
  6. Repeat step 5 for each of the remaining Songs.
  7. Select all the tracks in the Project page, then click on Update Loudness.
  8. Check the Loudness Information for each song, which now consists of only the vocal (Fig. 1). For example, suppose the readings for six songs are (1) -24.7, (2) -23.8, (3) -24.5, (4) -22.7, (5) -23.1, and (6) -24.3. Those are all pretty close; we’ll treat -24.5 as a baseline. The vocals on songs (1), (3), and (6) have consistent levels, and (2) and (5) are a tad hot, but the vocal on song (4) is quite a bit hotter. That doesn’t necessarily mean there’s a problem, but when you go back to the original (not the copied) Songs and Project, try lowering the vocal on that song by 1 or 2 dB, and decide whether it fits in better with the other songs.
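
If you’d like to sanity-check the math in steps 4 and 8 outside of Studio One, here’s a minimal Python sketch of the same arithmetic. It assumes the pyloudnorm and soundfile packages and hypothetical WAV exports of each song (or soloed vocal); the file names and the -20 LUFS target are placeholders.

```python
# Sanity-check of the loudness-matching math in steps 4 and 8 (outside the DAW).
import soundfile as sf          # reads the hypothetical WAV exports
import pyloudnorm as pyln       # ITU BS.1770 loudness meter

TARGET_LUFS = -20.0             # the arbitrary common target from step 4
files = ["song1.wav", "song2.wav", "song3.wav"]   # placeholder names

readings = {}
for path in files:
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)
    lufs = meter.integrated_loudness(data)
    readings[path] = lufs
    # 1 dB of fader change moves the LUFS reading by 1 LU, so the
    # required adjustment is just the difference from the target.
    print(f"{path}: {lufs:.1f} LUFS -> move fader {TARGET_LUFS - lufs:+.1f} dB")

# Step 8's comparison: flag any vocal that strays from the average.
average = sum(readings.values()) / len(readings)
for path, lufs in readings.items():
    if abs(lufs - average) > 1.0:    # the 1 LU threshold is a judgment call
        print(f"{path} is {lufs - average:+.1f} LU from average -- worth a listen")
```

In the six-song example above, a 1 LU threshold would flag only song (4), which matches the conclusion you’d reach by eye.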

Figure 1: The songs in an album have had only their vocal tracks bounced over to the Project page, so they can be analyzed with the Project page’s loudness metering.


The waveforms won’t provide any kind of visual confirmation, because you adjusted the levels so that the songs themselves had a consistent LUFS reading. For example, if you had to attenuate one of the songs by quite a bit, its vocal might look louder in the waveform; remember, it was attenuated only because it was part of a song that was louder overall.


Also try this technique with bass. Bass will naturally vary from song to song, but again, you may see a larger-than-expected difference, and it may be worth finding out why. In my most recent album, all the bass parts were played with keyboard bass and generated pretty much the same level, so it was easy to use this technique to match the bass levels in all the songs. Drums are a little dicier because they vary more anyway, but if the drum parts are generally similar from song to song, give it a try.


…But There’s More to the Story than LUFS


LRA is another important reading, because it indicates dynamic range—and this is where it gets really educational. After analyzing vocals on an album, I noticed that some of them had a wider dynamic range than others, which influences how loudness is perceived. So, you need to take both LUFS and LRA readings into account when looking for consistency.
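
For the curious: LRA comes from EBU Tech 3342, and is essentially the spread between the 10th and 95th percentiles of gated short-term loudness measurements across the program. Here’s a ballpark sketch in Python, again assuming pyloudnorm and soundfile plus a hypothetical exported file. It approximates short-term loudness by metering 3-second slices, so treat it as a rough estimate, not a spec-exact implementation.

```python
# Ballpark LRA estimate, loosely following EBU Tech 3342 (not spec-exact).
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("vocal_bounce.wav")   # hypothetical export
meter = pyln.Meter(rate)

# Approximate short-term loudness: 3-second windows, hopped 1 second at a time.
win, hop = 3 * rate, rate
short_term = []
for start in range(0, len(data) - win, hop):
    st = meter.integrated_loudness(data[start:start + win])
    if st > -70.0:                         # absolute gate from the spec
        short_term.append(st)

st = np.array(short_term)
st = st[st > st.mean() - 20.0]             # crude stand-in for the relative gate
lra = np.percentile(st, 95) - np.percentile(st, 10)
print(f"Estimated loudness range: {lra:.1f} LU")
```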


For my projects, I collect all the songs I’ve worked on during a year, and release the completed project toward the end of the year. So it’s not too surprising that something mixed in February will sound different from something mixed in November, and doing something as simple as going back to a song and taking a little compression off a vocal (or adding some) is sometimes all that’s needed for a more consistent sound.


But let me emphasize: this isn’t about looking for rules, it’s about looking for clues. Your ears will be the final arbiter, because the context for a part within a song matters. If a level sounds right, it is right. It doesn’t matter what the numbers say, because numbers can’t make subjective judgments.


However, don’t minimize the value of this technique, either. I stumbled on it because one particular song in my next album never seemed quite “right,” and I couldn’t figure out why. Checking it with this technique showed that the vocal was low compared to the other songs, so the overall mix was lower as well. I could have used dynamics processing to make the song reach the same LUFS reading as the other songs, but that would have affected the dynamics within the song itself. After going back into the song, raising the vocal level, and re-focusing the mix around it, everything fell into place.


Comments

  • This article really needs updating (when it’s updated, I’ll post it on http://www.craiganderton.org), but it’s an okay place to dig deeper: https://www.harmonycentral.com/articles/uncategorized/dc-offset-the-case-of-the-missing-headroom-r31/.

  • Danielson

    If it could be useful, look at this article: https://blog.presonus.com/index.php/2018/11/16/friday-tips-easy-song-level-matching/ . In that article you can see that the use of dynamics plug-ins in the Project page (but I think also in the Song page) increases DC values. I agree with you that there is not enough documentation about the meaning of the DC value in the Loudness Information tab. In addition, there is no documentation about “Block DC Offset” in the Mixtool plug-in; I tried about a million times to place Mixtool everywhere in order to block DC, without any success. I think that your last question could be the key to understanding why the “Block” doesn’t work. Anyway, after Craig’s recommendation, I will be less obsessed by DC in the future, and will use this value only as a simple indication of recording quality.

  • I’m not sure how Block DC Offset works. I assume it looks for silence, and moves that to 0. There are also some plug-ins with subsonic filters that reduce or remove DC offset. But again…it’s not that much to worry about. It’s mostly a problem with hardware that has DC offset feeding an interface with response down to DC. Some MOTU and PreSonus interfaces can go down that low.
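
    For anyone curious what that kind of filtering amounts to, here’s a minimal Python sketch of the two usual approaches: subtracting the mean (removes a constant offset) and a gentle subsonic high-pass (also catches offset that drifts). The file name is hypothetical, and this is just the general idea, not a description of how Mixtool works internally.

    ```python
    # Two common ways to remove DC offset (illustrative; not Mixtool's internals).
    import numpy as np
    import soundfile as sf
    from scipy.signal import lfilter

    data, rate = sf.read("offset_track.wav")   # hypothetical file with DC offset

    # 1) Subtract the mean: removes a constant offset and nothing else.
    centered = data - np.mean(data, axis=0)

    # 2) One-pole high-pass at roughly 5 Hz (a subsonic filter), i.e.
    #    y[n] = x[n] - x[n-1] + pole * y[n-1]; also handles drifting offset.
    fc = 5.0
    pole = np.exp(-2.0 * np.pi * fc / rate)
    highpassed = lfilter([1.0, -1.0], [1.0, -pole], data, axis=0)

    sf.write("offset_track_fixed.wav", highpassed, rate)
    ```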

  • Caspar David Friedrich

    Thanks for the article, Craig. After twenty-odd years on the video side of post-production, I have been getting very seriously into audio for video. Your posts are most informative, BTW, and I have bought your three latest PDFs in the PreSonus store.
    In any case, I was intrigued by Danielson’s comments, because I have also bumped into mentions of DC offset recently and couldn’t find much proper info about it online.
    I fell in love with Studio One Pro all over again this afternoon when I read in your post that Studio One can provide in-depth loudness information for an audio file. My file was a 90-minute movie soundtrack, so it was excellent not having to run it in real time through some loudness plug-in to get the IL etc.
    (BTW, I own and am getting my head around iZotope’s Insight 2 and RX 7, as well as SPL’s Hawkeye and the Waves WLM Plus Loudness meter.)
    Initially, the Loudness Information in Studio One said that my DC offset was minus infinity, which is good, as I understand it.
    The True Peak values were JUST under 0, the Integrated Loudness was -29 LUFS, and the LRA was 18 LU or so. For indie cinema projection, I think those values are OK, but for a Blu-ray mix I thought I could push the IL up by a few LU and possibly get the True Peak value down to about -2 dBFS, basically compressing it ever so slightly.
    So as an experiment I added a McDSP ML1 mastering limiter, set the limiter target to -2 dB with a very soft knee, and set the threshold to -5 dB in order to raise everything by 3 dB.
    When I updated the Loudness Info again, I noticed that the post-FX DC offset had become -110 dB.
    I found that a bit perplexing, since I really don’t quite understand DC offset.
    It’s a big jump from minus infinity to -110 dB, but I was heartened to hear you say that a DC offset of -96 dB was basically irrelevant.
    I guess raising the level of a file by 3 dB will also raise the noise floor… but more in-depth information from your good self about DC offset would be greatly appreciated.
    Thanks for all your interesting and illuminating posts… sorry for my essay.

  • Thank you for commenting! I wondered if anyone would find this interesting. Discovering DC offset is usually difficult, because the levels are so low. Besides, if the DC offset is around -96 dB or so, it won’t make any practical difference. However, if your mix in the Project page shows a lot of DC offset, add the Mixtool plug-in to various tracks and check “Block DC Offset.” Update the song in the Project page, and see if the DC offset is gone. I hope this helps; let me know if you have any follow-up questions.
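
    If you want a quick numerical check that the offset is gone after re-rendering, the DC reading is essentially the average sample value expressed in dB. Here’s a rough Python sketch with a hypothetical exported file; exactly how Studio One computes its DC figure isn’t documented, so treat this as an approximation.

    ```python
    # Rough check of a file's DC offset in dB (approximates the DC reading).
    import numpy as np
    import soundfile as sf

    data, rate = sf.read("rendered_mix.wav")   # hypothetical re-rendered file
    dc = np.abs(np.mean(data, axis=0))         # per-channel average sample value
    with np.errstate(divide="ignore"):         # log10(0) -> -inf, i.e. no offset
        dc_db = 20.0 * np.log10(dc)
    print("DC offset per channel (dBFS):", dc_db)
    ```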

  • Danielson

    Very interesting. But I need a little clarification, not too technical because I’m not a pro.

    In your article about LUFS you say:
    “DC shows the track’s DC offset. Typically, it’s -infinity, but sometimes I’ll run into a tiny bit of offset, which isn’t too terrible. If there is significant DC offset, knowing this is important because it will mess up the track’s headroom, and you’ll have a hard time matching levels properly. You can go back to the original track and eliminate the DC offset, which will then allow any limiting or maximization to work more efficiently.”
    My question is:

    – How can I discover DC offset during the mix?
    I noticed, after much trouble, that DC seems related to various factors:

    1) the level of sub-bass frequencies

    2) phase shifts during equalization

    3) recording levels that are too low (noise floor issue)

    4) too many delays (phase issues)

    5) the use of stereo plug-ins on mono tracks (a mistake)

    Can you suggest something about this question?
    Thanks, and my compliments on your work.