Compression and bass go together like ham and eggs, red beans and rice, or peanut butter and jelly (or gin and tonic, if you prefer something a little stronger). A lot of engineers plug in a compressor within milliseconds of turning up the bass track’s fader. Some “pro tips” on the web even recommend inserting lots of compressors in series. Hey, if one is good, then four must be better—right? Well, I’m not convinced.
Lately with electric bass (synth bass, too) I’ve been tossing compressors aside, and using Limiter2 when I want to get a solid sound down fast. And I mean fast—that 15 seconds is actually a bit misleading. I’ve clocked myself at under 12 seconds from drag-and-drop to pressing play, including editing the Limiter2 settings.
Check out the audio example. The drums are using my Bigness of Huge Drum Sound FX Chain. The first four measures are the bass sound as recorded, using the Limiter2. The next four measures are the same, but with the Limiter2 bypassed. Note that when the limiter is in play, the bass isn’t overwhelmed by the drums.
Fig. 1 shows the Limiter2 settings.
Figure 1: Settings for bass with the Limiter2.
That’s all there is to it. (But if you’re a 5-string bass fan, I do recommend changing the Release time to 300.0 ms.)
Granted, this isn’t necessarily a “one-size-fits-all” tip. You might want to add some EQ, some AutoFilter funk in parallel, or whatever. But this punchy, full sound will hold its own in the rhythm section, and get you through the tracking session. What’s more, if the bass player has a good touch and properly adjusted pickups, it may even take you to the final mix.
IR-driven cabs are often the weak link with amp sims. Fortunately, cab emulations have improved dramatically over the years. Yet like samples, they remain “frozen” to a particular cab—they have their sound, and that’s it.
Although some guitar players think that a cab is a magical device, it’s really just a filter. To be sure, it can be a magical filter…but it’s still a filter. So, we can use filters to create our own cabs. They won’t be able to replicate a specific cabinet down to the smallest detail, but that’s not the point. Using the Pro EQ2 filter to create your own cabinet can give responses that IRs can’t give, with a different sound that can be satisfyingly smooth, and…well, “analog.”
I analyzed the frequency response of several cabs, using the Tone Generator’s pink noise along with the Spectrum Analyzer plug-in, then tried to replicate the response as closely as possible with the Pro EQ2. Although sometimes I was shocked at how close this could come to the cab, more often than not I couldn’t help but make some tweaks—it’s almost like I had taken that cab, brought it into a woodworking shop, and made specific changes for my needs.
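The measure-then-match idea above isn't specific to any DAW. As a rough sketch of the same process, the snippet below pushes noise through a hypothetical "cab" filter (a stand-in for a real cabinet or IR) and estimates its frequency response by comparing output and input spectra; white noise is used instead of pink simply because its flat spectrum keeps the math simple. The filter shape and frequencies here are invented for illustration, not taken from any actual cabinet.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 48000

# White-noise test signal (the article uses pink noise; white noise's
# flat spectrum makes the response estimate easier to read).
noise = rng.standard_normal(fs * 4)

# Hypothetical "cab": a band-pass filter standing in for a real cabinet.
b, a = signal.butter(2, [80 / (fs / 2), 5000 / (fs / 2)], btype="bandpass")
through_cab = signal.lfilter(b, a, noise)

# Estimate the cab's magnitude response from output vs. input spectra.
f, p_in = signal.welch(noise, fs, nperseg=4096)
f, p_out = signal.welch(through_cab, fs, nperseg=4096)
response_db = 10 * np.log10(p_out / p_in + 1e-12)
```

Once you have `response_db`, replicating (or deliberately bending) the curve with a parametric EQ like the Pro EQ2 is a matter of matching the peaks and rolloffs by ear and eye.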
If you want to experiment…be my guest! Insert Ampire, choose your amp (I particularly like the following curves with the VC30), select no cab in Ampire (important!), insert the Pro EQ2 afterward, and rock out. Here are some ideas to get you started. Note that the white curve is the sum of all the other curves, so that’s the curve you actually hear.
This curve is based on a 1 x12 cabinet that’s designed for leads, but works with rhythm parts as well (Fig. 1).
Here’s a curve that’s more “Tweedish” (Fig. 2).
This curve (Fig. 3) is based on an amp by a company that no longer makes amps, but whose name I’d better not mention, so I don’t have to deal with lawyers. Suffice it to say they’re known mostly for making guitars that are popular with rock guitarists.
And here’s one more…just because we can (Fig. 4)! It’s based on a 2 x 12 cab.
These all have two elements in common: high-frequency rolloffs, and interesting resonances. Although “EQ cabs” may not replace IRs, they’re not supposed to—this is about augmenting your options. Nonetheless, in many of my current productions, I prefer using the Pro EQ2-based cabs because it’s easier to tailor them to fit in with a specific mix.
For this week’s tip, I’m not providing presets because this isn’t about presets—it’s about editing an “analog” cab to give the sounds you need for your productions. So, the “best” curve will depend on what works best with your guitar, playing style, and production goals. In any event, I think you’ll find that constructing your own cabinet can provide a musically useful, and novel, way to expand on what IR-based cabinets can do.
It may be a stereo world, but we still have a lot of mono signal sources. Although some people use delay to convert mono to stereo, this can be fraught if the stereo signal eventually needs to collapse back to mono. EQ can do an effective, albeit less dramatic, job, and I wrote a blog post about How to Create Delay-Free Stereo from Mono using two Multiband Dynamics processors. This is a very flexible setup because you can automate the Multiband Dynamics parameters, as well as add in compression selectively if desired.
However, it’s also possible to convert mono into stereo within a single track—no buses needed—with a Splitter and some pan controls. While not as editable as the previous approach, it does the job, is simple to use, requires virtually no CPU power, and the stereo signal collapses back to mono with no problems.
The mono-to-stereo conversion process works by splitting the incoming signal into five frequency bands (Fig. 1). A Dual Pan follows each band, with Link enabled and Width set to 0%. So, you can use the Pan controls (which are brought out to the Macro Controls panel) to place each band wherever you want in the stereo field. This is what creates the stereo image.
Figure 1: Block diagram for the Super-Simple Mono-to-Stereo Converter.
Figure 2: Macro Control knob panel.
There’s a downloadable FX Chain, which takes care of the parameter assignments for the Macro Controls (Fig. 2). However, it would be pretty easy to do it yourself. The five Pan knobs vary the Pan parameters in the five Dual Pans (one for each frequency band). The two right-most controls are tied to a Dual Pan at the output, and serve as “master pan” controls.
The track being processed has to be in Stereo channel mode. It’s okay if you recorded the track in mono; just make sure Channel Mode is stereo when you play back, or convert the mono to “dual mono” by selecting the Event while the Channel Mode is stereo, and typing Ctrl+B.
The five left-most knobs control panning for the five bands. Pan the frequency bands as desired in the stereo field. Also note the Stereo/Mono switch. When dimmed, it de-activates all the Dual Pans, so it’s easy to compare the synthesized stereo and original mono sounds.
For the full stereo effect, set the Left Pan fully counter-clockwise, and Right Pan fully clockwise. To “tilt” the image more to one side or the other, bring the appropriate Pan control more toward center. For example, if you want to tilt the stereo spread toward the right channel, turn the Left Pan knob more clockwise.
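The reason this technique collapses cleanly to mono is simple arithmetic: each band is panned with complementary left/right gains, so summing the channels reconstructs the original band, and summing the bands reconstructs the original signal. The sketch below demonstrates the principle with idealized FFT-mask band splits and a linear pan law; the band edges and pan positions are arbitrary illustration values, not the FX Chain's actual settings.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
# Mono test signal with energy in three distinct frequency ranges.
mono = (np.sin(2 * np.pi * 110 * t)
        + np.sin(2 * np.pi * 1000 * t)
        + np.sin(2 * np.pi * 6000 * t)) / 3

# Split into three bands with complementary FFT masks (an idealized
# stand-in for the Splitter's crossover filters).
spec = np.fft.rfft(mono)
freqs = np.fft.rfftfreq(len(mono), 1 / fs)
masks = [freqs < 400, (freqs >= 400) & (freqs < 3000), freqs >= 3000]
bands = [np.fft.irfft(spec * m, len(mono)) for m in masks]

# Pan each band with a linear pan law; 0.0 = hard left, 1.0 = hard right.
pans = [0.2, 0.8, 0.5]
left = sum((1 - p) * b for p, b in zip(pans, bands))
right = sum(p * b for p, b in zip(pans, bands))

# Mono fold-down: because each band's L and R gains sum to 1, and the
# band masks are complementary, L + R reconstructs the original signal.
folded = left + right
```

The left and right channels differ (that's the stereo image), yet their sum is the untouched mono source, which is exactly why this approach is safe for mono playback systems.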
One of my favorite applications is creating a stereo image from an acoustic guitar that was miked with a single mic (to avoid potential phase issues) or taken direct—you can still have a stereo “feel.” It’s also fun to follow Ampire with this FX Chain when you want to splatter a distorted rhythm guitar part across the stereo field, or give an old mono synth a stereo facelift.
As I said in the beginning, it’s a stereo world…but now your mono signal sources can be part of it a little more easily.
Several of the comments have mentioned wanting me to do some video tips, and this week’s tip is well-suited to a video treatment—so here you go.
Gain envelopes have many uses, but one of my favorites is using them to bring down peaks in vocals and narration, which allows boosting the overall level. This is a further refinement of the phrase-by-phrase normalization technique I’ve mentioned in the past, which is basically like compressing without a compressor. As a result there are none of the artifacts associated with compression or limiting, so the resulting sound is totally natural.
Showing this with a video makes it easy to see how placing nodes strategically simplifies taming peaks, and how clicking and dragging on a single node can control your dynamics, quickly and efficiently.
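The underlying math of phrase-by-phrase normalization is easy to see in code: instead of one gain for the whole track (where the loudest phrase dictates everything), each phrase gets its own gain so its peak lands at the same target. This sketch uses noise bursts as hypothetical "phrases" at different levels; the 0.7 target is an arbitrary illustration value.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 48000

# Three "phrases" at noticeably different levels (stand-ins for vocal takes).
phrases = [amp * rng.standard_normal(fs // 2) for amp in (0.9, 0.3, 0.6)]

# Phrase-by-phrase normalization: scale each phrase so its own peak hits
# the same target level, rather than letting one loud phrase set the gain
# for the entire track.
target_peak = 0.7
normalized = [p * (target_peak / np.max(np.abs(p))) for p in phrases]
```

Because each phrase is scaled by a single constant, the dynamics within the phrase are untouched, which is why the result sounds natural compared to compression.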
If you like the video, let me know in the comments section, and I’ll do more videos in the future if I think they can convey a concept better or more efficiently than text.
And while we’re on the subject of videos…if you’re not aware of Gregor Beyerle’s playlist of Studio One video tips, they’re well worth watching. Unlike so many YouTube “tips” videos, after watching his videos you’ll feel you learned useful techniques that will help you use Studio One more efficiently—not wasted several minutes of your life that you’ll never get back. Even when I’ve known much of what he covers, there are always some little gems I hadn’t discovered before.
First, thank you for your continued support of the Studio One eBooks. The goal was to make sure that the books remain current—so there are revisions, as well as new editions.
Revisions are like software “point” updates. They’re free to registered users of the original book, and also make sure new buyers get the latest information. A revision for “How to Make Compelling Mixes with Studio One” will be available next week. New editions expand substantially on the original (like how software advances from one version to the next). The latest is More than Compressors: The Complete Guide to Dynamics in Studio One – 2nd Edition, available now in the PreSonus shop (available to owners of the first edition for half-price).
Second, remember that if you have any questions, comments, corrections, or additional ideas about the books, there’s a support thread where you can ask questions and I’ll answer them. The thread also announces when revisions and new editions are available.
And now…on to the tip!
Creating steel or slide guitar sounds with keyboards is difficult, because few soft synths have polyphonic glide. If they do, sometimes the results are unpredictable.
For my first, admittedly pathetic attempt at “steel synth,” I tried setting the synth bend range to 12 semitones and using the pitch bend wheel to slide entire chords up or down in pitch. However, hitting an exact pitch with the wheel is really difficult. I tried editing the parts to have correct tuning…but that took forever.
Fortunately, there’s a simple answer. It’s not a real-time solution (you’ll need to use the note data edit view), but it works really well—check out the audio example.
The basic idea for slide emulations is you sustain a note, and then use pitch bend to slide the sustained note(s) up (or down). In Fig. 1, a C major chord is gliding up to F and then G, to create the ever-popular I-IV-V progression.
Figure 1: A C major chord is sliding up to an F major, and then a G major.
To ensure correct tuning, create a pitch bend node where you want the new pitch to begin. Right-click on it, and then enter a number that corresponds to the number of semitones you want to “glide” (see the table below). This assumes the synth’s pitch bend range is set to 12 semitones. If you want to bend down by a certain number of semitones, use the same pitch bend amount—just make it negative.
Remember that pitch bend is entered as a fraction of the bend range, so in Fig. 1, the first pitch bend node (circled in white to make it more obvious) is set to 0.417 (5 semitones). The second node, for the fifth (7 semitones), is set to 0.583. Lines from one node to the next create the actual glide.
When you right-click on a node to enter a number, the resolution appears to be only two digits to the right of the decimal point, which isn’t good enough for accurate tuning. However, you can enter a three-digit number, as shown above. Even though it won’t be displayed, if you enter that third digit, the dialog box accepts it and Studio One will remember it—so now, you can glide to the exact right pitch.
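The arithmetic behind those node values is just the glide interval divided by the bend range. With the synth set to a ±12-semitone range, a tiny helper like this (a sketch, not anything built into Studio One) reproduces the values above:

```python
# Convert a glide interval in semitones to a normalized pitch-bend node
# value, assuming the synth's bend range is set to +/-12 semitones.
# Rounded to three digits, matching the precision the dialog box accepts.
def bend_value(semitones, bend_range=12):
    return round(semitones / bend_range, 3)

up_a_fourth = bend_value(5)    # C to F
up_a_fifth = bend_value(7)     # C to G
down_a_fourth = bend_value(-5) # negative values bend downward
```

This confirms the figures in the text: a fourth (5 semitones) is 0.417, and a fifth (7 semitones) is 0.583.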
One of the complaints about electronic music instruments and controllers is that they lack the expressiveness of acoustic instruments. Although future instruments will take advantage of MIDI 2.0’s enhanced expressiveness, two options are available right now: polyphonic pressure, and MPE (MIDI Polyphonic Expression). Studio One 5 can record/edit both, and ATOM SQ generates polyphonic pressure…so let’s dig deeper.
First, there’s some confusion because people call the same function by different names. Channel Aftertouch = Channel Pressure = Mono Aftertouch = Mono Pressure. Polyphonic Aftertouch = Polyphonic Pressure = Poly AT = Poly Aftertouch = Poly Pressure. Okay! Now we’ve cleared that up.
Aftertouch generates a control signal when you press harder on a keyboard key that’s already down, or continue pressing on a percussion pad after striking it. Aftertouch is a variable message, like a mod wheel or footpedal—not a switch. A typical application would be changing filter cutoff, adding modulation, or doing guitar-like pitch bends by pressing on a key.
There are two aftertouch flavors. Mono pressure has been around since the days of the Yamaha DX7, and sends the highest controller value of all keys that are currently being pressed. Polyphonic pressure sends individual pressure messages for each key. For example, when holding down a chord for a brass section, by assigning poly pressure to filter cutoff, you can make just one note brighter by pressing down on its associated key. The other chord notes remain unaffected unless they’re also pressed.
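The distinction between the two flavors boils down to how many values travel down the wire. This little sketch (with a hypothetical snapshot of held keys) shows the difference:

```python
# Hypothetical snapshot of held keys and their current pressure values.
pressed = {"C4": 40, "E4": 96, "G4": 12}

# Channel (mono) aftertouch: a single value for the whole channel --
# the highest pressure among all currently held keys.
mono_aftertouch = max(pressed.values())

# Polyphonic aftertouch: an independent pressure value per key, so one
# note in a chord can respond without affecting the others.
poly_aftertouch = dict(pressed)
```

With mono aftertouch, leaning on G4 does nothing unless it becomes the hardest-pressed key; with poly aftertouch, G4 carries its own value regardless of its neighbors.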
Controllers with polyphonic aftertouch used to be fairly expensive and rare, but that’s changing—as evidenced by ATOM SQ.
As expected, you need a synth that responds to poly pressure. Many hardware synths respond to it, even if they don’t generate it. As to soft synths, although I haven’t tested all of the following, they reportedly support poly pressure: several Korg Collection synths, Kontakt, Reaktor, all Arturia instruments, all U-He instruments, XILS-Lab synths, TAL-Sampler, AAS synths, Albino 3, impOSCar2, Mach5, and Omnisphere. If you know of others, feel free to mention them in the comments section below. (Currently, Studio One’s bundled instruments don’t respond to polyphonic aftertouch.)
Figure 1: ATOM SQ being set up to generate Poly Pressure messages.
With ATOM SQ, press the Setup button. Hit the lower-left “pressure” button below the display, then spin the dial to choose Poly (Fig. 1). Note that if ATOM SQ outputs poly pressure, most instruments that respond only to channel (mono) aftertouch will ignore these messages.
Record poly pressure in Studio One 5 as you would any MIDI controller. To edit pressure messages, use the Edit window’s Note Controller tab. Select Pressure for the Type, and then the Pitch of the note you want to edit. Or, click on a note to select its corresponding note Pitch automatically. You can then edit that note’s poly pressure controller as you would any other controller (Fig. 2).
Figure 2: The selected Note’s data is white; unselected notes of the same pitch are blue. The gray lines in the background show the poly pressure controller messages for notes with other pitches.
It may seem that editing data for individual notes would be tedious, and it can be. However, because poly pressure allows for more expressive real-time playing, you might not feel the need to do as much editing anyway—you won’t need to use editing to add expressiveness that you couldn’t add while playing.
A fine point is that it’s currently not possible to copy Note Controller data from one note, then paste it to a note of a different pitch (probably because the whole point of poly AT is for different notes to have different controller data). However, if you copy the note itself to a different pitch, the Note Controller data will go along with it.
Although ATOM SQ can adopt a layout that resembles a keyboard, it would be a mistake to see it as a stripped-down version of a standard keyboard. Controllers with polyphonic pressure tend to think outside the usual keyboard box, by incorporating pads or other transducers that are designed for predictable pressure sensitivity. Poly pressure has been around for a while, but a new generation of MIDI controllers (like ATOM SQ) are making the technology—and the resulting expressiveness—far more accessible for those who want to wring more soul out of their synths.
It’s not surprising that a lot of Studio One users also have Ableton Live, precisely because the two are so different. I’ve always felt Studio One is a pro recording studio (with a helluva backline) disguised as software, while Ableton is a live performance instrument disguised as software.
Fortunately, if you like working simultaneously with Live’s loops and scenes and Studio One’s rich feature set, Studio One can host Live as a ReWire client. Even better, ATOM SQ can provide full native integration with Ableton Live when it’s ReWired as a client—once you know how to set up the MIDI ins and outs for both programs.
Now ATOM SQ will act as an integrated controller with Ableton Live while it’s ReWired into Studio One. Cool, eh?
To return control to Studio One, reverse the process: in Live, set Control Surface to None, and toggle the MIDI Ports that relate to ATOM SQ from On to Off. Then, in Studio One’s Options > External Devices, reconnect ATOM SQ to Receive From and Send To.
Note that with ATOM SQ controlling Studio One, the Transport function still controls both Live and Studio One. Also, if Live has the focus, any QWERTY keyboard assignments for triggering Clips and Scenes remain valid. So even while using ATOM SQ in the native mode for Studio One, you can still trigger different Clips and Scenes in Live. If you switch the focus back to Studio One, then any QWERTY keyboard shortcuts will trigger their assigned Studio One shortcuts.
Note: When switching back and forth between Live and Studio One, and enabling/disabling Studio One and Ableton Live modes for ATOM SQ, to return to Live you may need to “refresh” Live’s Preferences settings. Choose None for the Control Surface and then re-select ATOM SQ. Next, turn the various MIDI Port options off and on again.
Vocoders are processors that use the audio from vocals (called the modulation source, or modulator) to modulate another sound, like a synthesizer pad (called the carrier). However, no law says you have to use vocals as a modulator, and I often use drums to modulate pads, power chords, and more. While Studio One’s toolset doesn’t have enough resolution for super-intelligible vocoding with voice, it’s perfect for drumcoding, which actually benefits from the lower resolution.
This tip is for advanced users and requires a fairly complex setup. Rather than go into too much detail about how it works, simply download the Drumcoder.song file, linked below, which has a complete drumcoding setup. Load Drumcoder.song into Studio One 5, press play, and you’ll hear what drumcoding is all about. (Note that the file format isn’t compatible with previous Studio One versions. However, based on the description in this tip, you should be able to “roll your own” drumcoding setup in previous Studio One versions.)
Let’s check out an audio demo. The first half has the drumcoded sound only, while the second half mixes in the drum (modulator) sound.
But wait—there’s more! Although the drumcoder isn’t designed to be the greatest vocoder in the world (and it isn’t), you can still get some decent results. Here, the voice is saying “Even do some kinds of vocal effects with the PreSonus drumcoder—have fun!”
Next, we’ll explore how it works…or if you’re impatient, just reverse-engineer the song.
Vocoding splits the modulator (like voice or drums) into multiple frequency bands. In a traditional vocoder, each band produces a control voltage that corresponds to the audio’s level in each band. Similarly, the carrier splits into the same frequency bands. A VCA follows each carrier band, and the VCAs are fed by the modulator’s control voltages. So, if there’s midrange energy in the modulator, it opens the VCA for the carrier’s midrange audio. If there’s bass energy in the modulator, it opens the VCA for the carrier’s bass audio. With a vocoder, as different energy occurs in different bands that cover a vocal’s frequency range, the carrier mimics that same distribution of energy in its own bands. This is what generates talking instrument effects.
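To make the band/VCA idea concrete, here's a minimal vocoder sketch in code: the modulator and carrier are each split into the same bands, an envelope follower on each modulator band produces the "control voltage," and multiplying the carrier band by that envelope plays the role of the VCA. The signals here (a decaying noise burst for a drum, a sawtooth for a pad) and the filter choices are illustration-only stand-ins, not the settings from the Drumcoder song.

```python
import numpy as np
from scipy import signal

fs = 48000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)

# Modulator: a percussive decaying noise burst (stand-in for a drum hit).
modulator = rng.standard_normal(len(t)) * np.exp(-t * 20)

# Carrier: a sustained sawtooth "pad."
carrier = signal.sawtooth(2 * np.pi * 110 * t)

edges = [200, 400, 800, 1600]  # the split frequencies used in this tip
bands = list(zip([20] + edges, edges + [fs / 2 * 0.99]))

out = np.zeros_like(t)
for lo, hi in bands:
    b, a = signal.butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    mod_band = signal.lfilter(b, a, modulator)
    car_band = signal.lfilter(b, a, carrier)
    # Envelope follower: rectify, then smooth -- the "control voltage."
    env = signal.lfilter(*signal.butter(1, 50 / (fs / 2)), np.abs(mod_band))
    out += car_band * env  # the envelope acts as this band's VCA
```

Because the carrier only sounds where the drum has energy, the sustained pad takes on the drum's rhythm, which is the essence of drumcoding.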
Vocoders typically need at least eight frequency bands to make voices sound intelligible. Studio One’s Splitter can divide incoming audio into five bands, which is enough resolution for drumcoding. Fig. 1 (which takes some graphic liberties with Studio One’s UI), shows the signal flow.
Figure 1: Drumcoder signal flow.
The Drums track provides the modulator signal, and the Mai Tai synthesizer provides the carrier. The Drums track has five pre-fader sends to distribute the drum sound to five buses. As shown in Fig. 2, each of the five buses has a Splitter (but no other effects) set to Frequency Split mode, with splits at 200, 400, 800, and 1600 Hz. The < 200 Hz bus mutes all Splits except for 1, the 200-400 bus mutes all Splits except for 2, the 400-800 bus mutes all splits except for 3, the 800 – 1.6 kHz mutes all splits except for 4, and the > 1.6 kHz bus mutes all splits except for 5. Now each bus output covers one of the five bands.
Figure 2: Splitter settings for the five buses.
The Mai Tai carrier has a splitter set to the same frequencies. Each split goes to an Expander, which basically acts like a VCA; see Fig. 3. We don’t need to break out the Splitter outputs, because you can access the sidechain for the Expanders located within the Splitter. (A Mixtool follows each Expander, but it’s there solely to provide a volume control for each of the carrier’s bands in the control panel.)
Figure 3: Effects used for the Mai Tai synthesizer carrier track.
As to the bus outputs, the < 200 Hz bus has a send that goes to the sidechain of the Expander in the carrier’s < 200 Hz split. The 200-400 Hz bus has a send to the sidechain of the Expander in the carrier’s 200-400 Hz split. The 400-800 Hz bus has a send to the sidechain of the Expander in the carrier’s 400-800 Hz split…you get the idea. Basically, each bus provides a “control voltage” for the corresponding “VCA” (Expander) that controls the level of the carrier’s five bands.
Fig. 4 shows the Control panel.
Figure 4: Drumcoder macro controls.
Threshold, Ratio, and Range cover the full range of Expander controls. They affect how tightly the Expander follows the modulator, which controls the effect’s percussive nature. Just play around with them until you get the sound you want. The Expander Envelope settings aren’t particularly crucial, but I find 0.10 ms Attack and 128.0 ms Release work well. Of course, you also need to enable the sidechain for each Expander, and make sure it’s listening to the bus that corresponds to the correct band.
The five knobs toward the right control the level of the individual bands by altering the Gain of the Mixtool that follows each band’s Expander. The five associated buttons enable or bypass the Expander for a particular band, which can give some really cool effects. For example, turn off the Expander on the Mid band, and with the Song’s Mai Tai preset, it almost sounds like a choir is singing along with the drumcoded drums.
Although the Drumcoder isn’t really designed for vocal effects, it still can be fun. The key is to bring up the > 1.6 kHz Bus slider, as this mixes in some of the voice’s “s” sounds, which give intelligibility. Experiment with the Expander controls to find what works well. If you really want to dig into vocal applications, edit the Splitter frequencies to optimize them for the vocal range instead of drums…or leave a comment asking me to pursue this further.
Due to the complexity, if I think I’m going to use the Drumcoder, I’ll just treat this song like a template and build the rest of the song from there. But once you understand the principle of operation, you can always add the effect into an existing song as needed. I have to say this is one of my favorite Friday tips ever… I hope you enjoy playing with the Drumcoder!
Although Studio One 5 doesn’t have a tape emulator plug-in per se, it can emulate some of the most important characteristics that people associate with “the tape sound.” Truly emulating tape can go down a serious rabbit hole because tape is a complicated signal processor; no two vintage tape recorders sounded the same because they required alignment (influenced by the engineer’s preferences), used different tape formulations, and were in various states of maintenance. However, emulating three important characteristics provides what most people want from tape emulation.
Check out the audio example to hear what this FX Chain can do. The first part is unprocessed, while the second part uses the default FX Chain control settings with a little underbiasing and head bump. The difference is subtle, but it adds that extra “something.”
This FX Chain starts with a Splitter, which creates three signal paths: one for saturation, one for hiss, and one for hum (Fig. 1).
Figure 1: FX Chain block diagram.
After auditioning all available Studio One 5 saturation options, I liked the TriComp best for this application. The Pro EQ stage preceding the TriComp provides the head bump EQ and has a control to emulate the effect of underbiasing tape (more highs, which pushes more high-frequency level into the TriComp and therefore increases distortion in that range) or overbiasing (less highs, less distortion).
At first, I wasn’t going to include tape hiss and hum, but if someone needs to use this FX Chain for sound design (e.g., an actor starts a tape in a theatrical production), then including hiss and hum sounds more authentic. An additional knob chooses 50 or 60 Hz hum, which represents the power standards in different countries. (Note that the closest you can get to these frequencies is 50.4 and 59.1 Hz, but that’s good enough.) However, I draw the line at including wow and flutter! Good riddance to both of them.
Because creating three splits reduces each split’s level, the TriComp Gain control provides makeup gain.
Turning Bump on adds a boost at the specified frequency, but also adds a 48 dB/octave low-cut filter around 23 Hz to emulate the loss of very low frequencies due to the head bump. As a result, depending on the program material, adding the bump may increase or decrease the total apparent bass response. For additional flexibility, if you turn Bump Amount down all the way, the Bump On/Off switch enables or disables only the 48 dB/octave low-cut filter.
Fig. 2 shows some typical spectra from using the FX Chain.
Figure 2: The top curve shows the head bump enabled, with underbiasing. The lower curve shows minimal added bump, but with the ultra-low cut filter enabled, and overbiasing.
The controls default to rational settings (Fig. 3), which are used in the audio example. But as usual with my FX chains, the settings can go beyond the realm of good taste if needed.
Figure 3: Control panel for the Tape Emulator.
For example, I rarely go over 2-3% saturation, but I know some of you are itching to kick it up to 10%. Ditto tape hiss, in case you want to emulate recording on an ancient Radio Shack cassette recorder—with Radio Shack tape. Just remember that the Bias control is clockwise to overbias (less highs), and counter-clockwise to underbias (more highs).
There’s a lot of mythology around tape emulations, and you can find some very good plug-ins that nail the sound of tape. But try this FX Chain—it may give you exactly what you want. Best of all, I promise you’ll never have to clean or demagnetize its tape heads.
With physical audio media in its twilight, streaming has become the primary way to distribute music. A wonderful side effect has been the end of the loudness wars, because streaming services like Spotify turn levels up or down as needed to attain a specific, consistent perceived level—squashing a master won’t make it sound any louder.
However, the “garbage in, garbage out” law remains in effect, so you need to submit music that meets a streaming service’s specs. For example, Spotify prefers files with an LUFS of -14.0 (according to the EBU R128 standard), and a True Peak reading of -1.0 or lower. This avoids adding distortion when transcoding to lossy formats. If the LUFS reading is above -14.0, then Spotify wants a True Peak value under -2.0.
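Those two rules (target loudness, plus a peak ceiling that tightens for hot masters) are easy to express as a quick sanity check. The sketch below is a hypothetical helper based on the figures quoted above, not an official Spotify tool; it reports how much gain change would hit -14 LUFS and whether the true peak is within the relevant limit.

```python
# Sanity-check a master against the Spotify preferences quoted above:
# target -14 LUFS with true peak <= -1 dBTP, or <= -2 dBTP if the
# master is louder than -14 LUFS. (Illustrative helper, not an official
# Spotify tool; values come from Detect Loudness.)
def spotify_check(lufs, true_peak_db):
    peak_limit = -1.0 if lufs <= -14.0 else -2.0
    gain_to_target = -14.0 - lufs  # dB change needed to reach -14 LUFS
    peak_ok = true_peak_db <= peak_limit
    return gain_to_target, peak_ok
```

For example, a -16 LUFS master with a -1.2 dBTP peak passes (Spotify would simply turn it up 2 dB), while a -12 LUFS master needs its peak under -2 dBTP to pass.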
Fortunately, when you Detect Loudness for a track on the mastering page, you’ll see a readout of the LUFS and LRA (a measure of overall dynamic range), as well as the True Peak, RMS (average signal level), and DC offset for the left and right channels. Fig. 1 shows an example of the specs generated by detecting loudness.
Note that this hits Spotify’s desired LUFS, but the left channel’s True Peak value is higher than what’s ideal. This readout also shows that the average RMS levels for each channel are somewhat different—the left channel is 1.2 dB louder than the right one, which also accounts for the higher True Peak value. This may be the way the artist wants the mix to sound, but it could also indicate a potential problem with the mix, where the overall sound isn’t properly centered.
A simple fix is to insert a Dual Pan into the Inserts section. Use the Input Balance control to “weight” the stereo image more to one side for a better balance. After doing so and readjusting the LUFS, we can now give Spotify exactly what it wants (Fig. 2). Also note that the left and right channels are perfectly balanced.
A Crucial Consideration!
You don’t want to mix or master based on numbers, but on what you hear. If you set up Dual Pan to balance the channels, make sure that you enable/bypass the plug-in and compare the two options. You might find that balancing the left and right channels not only accommodates Spotify’s requirements, but improves the mix’s overall balance. If it doesn’t, then leave the balance alone, and lower the track’s overall output level so that True Peak is under -1.0 for both channels (or under -2.0 for LUFS values above -14.0). This will likely lower the LUFS reading, but don’t worry about it: Spotify will turn up the track anyway to reach -14.0 LUFS.
Coda: I always thought that squashing dynamic range to try and win the loudness wars made listening to music a less pleasant experience, and that’s one of the reasons CD sales kept declining. Does the end of the loudness wars correspond to the current music industry rebound from streaming? I don’t know… but it wouldn’t surprise me.