Humbuckers are known for a big, beefy sound, while single-coil pickups are more about clarity and definition. If you want the best of both worlds, you can warm up a soldering iron, ground the junction of the humbucker’s two coils, and voilà: a single-coil pickup. But there’s an easier way: use the Pro EQ, which has the added benefit of preserving the pickup’s hum-canceling operation.
The main difference between humbucker and single-coil pickups is frequency response. The blue line in Fig. 1 shows a humbucker’s spectral response, while the yellow line shows the same humbucker split for single-coil operation. Unlike the single-coil response, which is essentially flat from 150 Hz to 3 kHz, the humbucker has a bump in the 500 Hz to 2 kHz range that contributes to the “beefy” sound. Starting at 3 kHz, the humbucker’s response drops off rapidly, while the single-coil produces more high-frequency output from 3 kHz to 9 kHz.
Fig. 2 shows an equalizer curve that modifies a bridge humbucker for more of a single-coil response. Of course, different humbuckers and different single-coil pickups sound different, so this kind of EQ-based “modeling” is an inexact science. However, I think you’ll find that the faux single-coil sound delivers the distinctive, glassy character you want from a single-coil pickup. Feel free to tweak the EQ further; you can come up with variations on the single-coil sound, or “morph” between the humbucker and single-coil characteristics.
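If you want to experiment with this idea outside of the Pro EQ, the same approach can be sketched in code. Here is a minimal Python sketch of two peaking-EQ bands built from the standard RBJ biquad recipe: one dips the humbucker’s midrange bump, one lifts the highs. The center frequencies, gains, and Q values below are illustrative guesses, not the exact curve from Fig. 2.

```python
import cmath
import math

FS = 44100  # assumed sample rate

def peaking_eq(f0, gain_db, q, fs=FS):
    """RBJ-cookbook peaking-EQ biquad coefficients, normalized so a0 = 1."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def mag_db(b, a, f, fs=FS):
    """Filter magnitude response in dB at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# Illustrative settings: dip the 500 Hz-2 kHz "beefy" bump, lift the glassy top.
cut_b, cut_a = peaking_eq(1000, -4.0, 0.8)     # midrange dip
boost_b, boost_a = peaking_eq(5000, 3.0, 0.7)  # high-end lift
```

Running the two filters in series over the guitar signal approximates the mid scoop plus high lift; in practice you would tune the bands by ear against a real single-coil.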
The difference between a neck humbucker and single-coil response isn’t as dramatic, but the curve in Fig. 3 replicates the neck single-coil character, and provides yet another useful variation for your guitar tone.
The bottom line is that you don’t need to break out a soldering iron (or void your guitar’s warranty) to make your humbucker sound more like a single-coil type—all you need is the right kind of EQ.
Jonathan Penson is the Norwegian Refugee Council Regional Education Adviser for East Africa and Yemen. We recently learned about his experience with the Mtendeli refugee camp in Tanzania, where a youth music program is using Studio One Prime to teach digital music skills to young Burundian refugees. Jonathan was able to spend some time with us to answer some questions about Mtendeli, its beneficiaries, and the students involved in this incredible story.
First of all, for readers not familiar with the situation affecting the youth in your program, please tell us how Music for Mtendeli came to be.
Mtendeli refugee camp in western Tanzania hosts close to forty thousand refugees fleeing political violence in Burundi. A high proportion of the refugees are youth. But, without the opportunity to work or leave the camp, there is very little to do, and frustrations run high. In addition, many youths experienced traumatic events in their home country. So we started a music project as part of a creative arts program, with the aim of providing both a positive outlet for youths’ energies and music therapy.
The Norwegian Refugee Council started the youth center in the camp in 2017. We run the creative arts program alongside vocational skills training and literacy and numeracy classes. We train youth in the creative use of ICT, as well as for business, and the youth organize clubs for music, theatre, and modern and traditional dance. We are hoping to start photography courses.
The music program combines traditional skills such as drumming – for which Burundians are famed – with cutting-edge technology. We want to nurture refugee youths’ talents, and the plan is to start a modest music studio that will help them to record and share their music.
How many of the youth are involved in the Digital Music program? How many staff? How long has the program been running?
More than 200 youths have benefited from the program since it began last year. This includes music students using virtual instruments and those who have learned to use DAW software to produce and edit music. We have one ICT instructor, Deo, who has a keen interest in digital music production.
What styles of music are the most popular for them to produce and record?
We find there are two distinct music ‘camps’ within the camp: gospel and non-gospel. In the non-gospel camp, R ‘n’ B and hip-hop are very popular, but you will also hear a lot of popular African artists, especially from Congo.
Is there any cross-pollination between the programs at Mtendeli? For example, do the music students produce music for the dance program?
There is direct cross-pollination between the producers and dancers, with the modern dance club using some music from the music class. They also use popular music downloaded from the Internet and other sources.
We would love to hear some of the music that has come from the program if you can send us links or files! Can you?
We would certainly like to do this in the future. For the moment, we need to build the skills of the students. They currently don’t have keyboards, headphones or mice to operate the DAW, making composition very difficult. We’re working on getting these to them – we’re talking to a music technology company about donating them. (The center has been generously funded by humanitarian donors, but this project might be a bit too left-field for them.)
What opportunities are there for our readers to support the program?
We currently have a fund-raising page open. We’ll use the funds to pay for shipping the music equipment to the camp; logistically, this is complex and costly. We’d love for your readers to contribute that way.
Is there anything else you would like our readers to know?
We asked this to the dancers and music students in Mtendeli. They’d love to produce professional-sounding music and share it, but they need more musical equipment and facilities before they can really get going. They’d also like more teaching staff, and a more comprehensive DAW–they’re currently using the free version of PreSonus Studio One. All we have at the moment is the laptops, the DAW, the space–and the energy, creativity and enthusiasm of 200 young people!
But this isn’t about the fundraising – it’s about fulfilling people’s potential, raising awareness about refugees, and linking musicians. In time, we’d like a platform for showcasing our youths’ talents–so if any readers can support that, get in touch!
With kind regards,
This signal processing setup is optimized for single-string guitar solos where you want a lot of sustain—but it has a secret ingredient that puts it ahead of typical guitar stompbox sustainers.
The compression aspect is pretty straightforward. A sustainer is all about a high compression ratio and low threshold, which are set to 20:1 and -35 dB, respectively. The sharp knee keeps the sustain going as long as possible, and a short attack time clamps down the attack. The release time isn’t too critical, although this depends on your playing style; a relatively long one (300-500 ms) usually works best.
This is one of those rare instances where you don’t want to enable the compressor’s Auto or Adaptive feature, because the goal here isn’t the most natural sound—it’s an effect. However, enable Lookahead because it helps to tame the attack.
Because of the extreme amount of compression, you’ll need about 30 dB of makeup gain to compensate for the gain reduction due to compression.
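The 30 dB figure falls straight out of the static compressor curve. A minimal sketch (hard knee, all levels in dB) shows why: with a -35 dB threshold and a 20:1 ratio, a 0 dBFS peak comes out at about -33 dB, so roughly 30 dB of makeup gain restores the original level.

```python
def compressed_level(in_db, threshold_db=-35.0, ratio=20.0):
    """Static (hard-knee) compressor curve: below the threshold the signal
    passes unchanged; above it, the output rises at only 1/ratio the rate
    of the input."""
    if in_db <= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) / ratio

peak_out = compressed_level(0.0)   # 0 dBFS in -> -33.25 dB out
reduction = 0.0 - peak_out         # ~33 dB of gain reduction to make up
```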
And now, the secret ingredient! With most sustainers, after the release time ends, if there’s a pause between notes you’ll hear a loud “pop” when you play a new note, because of the compression kicking back in. A fast attack and lookahead help, but it’s almost impossible to avoid some kind of nasty transient. If you follow the compressor with an amp sim, the distortion will hide the pop somewhat, but it can still lead to an ugly attack.
Enter the noise gate. This doesn’t just remove hum, noise, and other low-level signals from being sustained, but the 55 ms attack time (coupled with the enabled lookahead button) means that when you hit a note after a pause, the note attack ramps up more slowly, so the compressor can “grab” the note without creating a pop (or if it does, the pop will be greatly reduced). If there’s an amp sim involved, you’ll hear a cleaner attack, and better overall sound. Note that depending on how fast you play and the compressor’s release time, you may need to shorten the Noise Gate’s Release and Hold times. In any event, when you want serious ssssssuuuuussssstttttaaaaaiiiiinnnnn for your single-note guitar solos, this is the ticket.
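The gate’s slow 55 ms attack is what does the real work here. As a rough sketch (a simple linear ramp; real gates may use exponential curves), the envelope the gate applies when it opens looks like this:

```python
def gate_attack_envelope(n_samples, attack_ms=55.0, fs=44100):
    """Gain envelope a gate applies when it opens: instead of jumping to
    full level, the note ramps up linearly over the attack time, giving
    the downstream compressor a chance to grab it without a pop."""
    attack_n = max(1, int(fs * attack_ms / 1000.0))
    return [min(1.0, i / attack_n) for i in range(n_samples)]

env = gate_attack_envelope(4410)  # first 100 ms of a note at 44.1 kHz
```

Because the note’s level rises gradually instead of all at once, the compressor’s gain reduction can settle in before the signal reaches full level, which is why the pop disappears (or is at least greatly reduced).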
If you’ve been on the fence about getting a StudioLive Series III Mixer, you should know there’s never been a better time than now. Until the end of 2018, StudioLive Series III Mixers (both console and rack versions) include the Classic Studio Fat Channel Bundle, a $249 USD value.
StudioLive Fat Channel Plug-ins work in both your StudioLive Series III mixers AND Studio One. They’re state-space modeled after real vintage hardware and sound like the genuine article. The only thing you miss out on is paying eBay prices for old dusty hardware that will likely have a few expensive issues to work out.
All you have to do is buy a mixer and register it to your account at my.presonus.com, and we’ll add the downloadable Fat Channel Plug-ins to your account.
Here’s what you get:
Check out this video series to see and hear the Fat Channel Plug-ins in action!
MVP Loops from Los Angeles has released a steady stream of killer hip-hop, EDM, and instrumental loop content for producers since 2009, and we’re ecstatic to offer all of them for 30% off for the month of November right from the PreSonus Shop!
Here’s a little more about MVP Loops:
November 2018 only… Save 30% on Fat Channel Plug-ins for StudioLive Series III mixers and Studio One!
Fat Channel Plug-ins work in both StudioLive Mixers AND Studio One. These plug-ins are virtual signal processors that load in your StudioLive Series III console or rack mixer’s Fat Channel, expanding your Fat Channel processor library much like plug-ins do in a DAW. Each plug-in comes in both StudioLive Series III format and Studio One format so you can use your new processor in both mixer and DAW Fat Channels.
PreSonus Fat Channel plug-ins are state-space modeled by world-class engineers with Ph.D.’s in analog signal processing to faithfully produce the sound and response of the original hardware processors. Now you can have a wide variety of fresh DSP for live and studio sound. No other mixer anywhere near this price class has expandable processing—only PreSonus StudioLive Series III.
zplane is a provider of audio processing and music analysis technology operating out of Berlin. Studio One takes advantage of several powerful zplane technologies—our new Chord Detection feature takes advantage of zplane’s KORT, and our Harmonic Editing features leverage a combination of KORT and reTune. Furthermore, their élastiquePro time stretch has been in Studio One since the very beginning!
We were able to get some time with Tim Flohrer, CTO and founder, and ask him a few questions about zplane.
How and when did zplane come to be?
We started off as a three-man company in 2000, straight out of university. Martin, Alexander, and I all studied electronic engineering, but we were always into music. Our start was kind of naive, to say the least: our business plan was basically ‘let’s have a company that offers development services for something with computers and music’. So, as you might imagine, it took us a while to find our business model. Back in 2003 we had a customer who needed high-quality, performant time stretching but didn’t want to pay the whole development cost. So we developed the time stretching for him at a much lower rate but kept the rights to the algorithm. That could be considered the start of our current licensing business. We still had a lot to learn about the business, but the foundation was laid back then. From that point on, we continued to develop more and more audio algorithms for licensing, such as beat tracking, key and chord recognition, auto segmentation, and monophonic as well as polyphonic pitch manipulation.
Later we started our own consumer product line so we could make our algorithms available in a way that we thought they should be used.
Tell me a bit about the current zplane team.
In 2013, Alexander retired from active business at zplane after he was offered a professorship at Georgia Tech in Atlanta and moved to the USA. Now Martin and I are left as the original founders, along with four employees and currently one intern.
Martin mainly handles the business side of things while I keep track of the technical stuff. Maik and Daniel do the hardcore programming and application development. Holger and Till take care of the research and science. Jonas – our current intern – helps the latter two. And everybody does customer support and internal testing.
All our employees either did their master’s thesis at our company or at least had an internship with us first.
The Partner List on zplane.de is quite impressive. I see a few names of companies that a musician or producer using Studio One may not immediately associate with music, like Konami and Vinyl Dreams. What do you feel are some of the lesser-known applications of zplane technology that some users might not expect? I can see a use case for audio restoration in forensic audio, for example.
Thanks! It certainly took a while to grow this partner list. The first thing that comes to mind when thinking about less obvious applications is games, board games to be precise. Last year, Harmonix and Hasbro released Dropmix, a board game that uses ELASTIQUE. It lets you construct music played by your smartphone by dropping playing cards on a game board. So, as you might guess, ELASTIQUE is running on the smartphone, but the use case is still kind of unusual.
Concerning forensic audio we currently have no customer using it for something comparable but we’re certainly open for that. Besides audio restoration even time stretching or pitch shifting may be very useful in that area.
Electronic music in particular is rife with happy accidents, where a technology developed for one purpose eventually finds its home in another. Perhaps the most notable examples are the Roland TB-303, or Auto-Tune’s unlikely origins in interpreting seismic data. Has the zplane team experienced anything like this in the development and subsequent application of its own technologies? Any surprising uses of the technologies that were not in mind during the development cycle?
Well, to be honest, I was always waiting for something like this to happen: people doing something completely different from what they were supposed to do. It hasn’t happened so far, not the way it did in those examples. But I was seriously impressed when I saw what people did with the chord tracks in Studio One: they applied the chord progressions to the reverb tails of other tracks. I really didn’t expect that use case. However, it is truly amazing how musicians adapt new technology and creatively convert it into some new kind of expression.
How has your experience been working with the PreSonus Software team?
We’ve been working with the PreSonus Software team for a long time, even before they belonged to PreSonus, so we have a very good and open relationship. Especially when implementing new technologies, you need a lot of communication in order to get the most out of the technology, but also to explain the limits of what is possible technology-wise. This back-and-forth doesn’t only help on the implementation side; it also gives us a lot of feedback and, in the end, improves the technology. So, especially with the implementation of the chord track and the harmonic editing, this has been a very fruitful cooperation, and it doesn’t end here. We’re already working on the future.
Can you talk about the difference between élastique as a plug-in and the different versions of élastique used in DAWs? Anything special in Studio One?
In fact, as long as DAWs are using the latest version of ELASTIQUE PRO, they are all using the same algorithm. Sometimes people still think it sounds different in different hosts. This is partly voodoo and partly due to implementation differences: in one host you have a fixed time-stretch factor, while in others it is adapted continuously, which will obviously cause audible differences. But the base algorithm is the same everywhere. Our plug-in Elastique Pitch uses the algorithm to do only pitch shifting, which is mostly due to the limitations of a real-time plug-in interface, which is unable to handle time stretching.
Where do you see audio chord detection and harmonic editing go in the future?
Of course, I see that the accuracy of the chord detection will go up as well as the quality of the polyphonic pitch manipulation. It’s all already in the making. We’re currently moving away from classic signal processing and experimenting with modern deep learning and artificial intelligence approaches. And our first results for the chord detection look very promising. Currently, we see the most potential in the chord detection to improve the overall performance of the harmonic editing – wrong assumptions on the original chord have a major negative impact on the result. That being said, one should always be aware that this will never be perfect – even professional musicians disagree on certain chord transcriptions and so will computers generate debatable results from time to time.
Also, the RETUNE algorithm still has a long way to go but improving the internal pitch detection using the above-mentioned techniques will most probably take us giant steps ahead. But we haven’t started to work on this yet.
One should never forget though that all of this is just a tool to help people to get creative.
There was a time when making edits to pitch and length independently of one another seemed impossible. The same could be said about isolating a single voice from a chord for independent editing. As barriers like these continue to be broken down… what’s the next big sonic quandary for a company like zplane to solve?
The holy grail of music signal processing has always been ‘demixing’, and that hasn’t been solved yet. There are new approaches, but none of them come close to studio quality. So that will stay on our roadmap.
However, we think that taking it step by step rather than only searching for the holy grail is the way to go. So, we’ll continue to improve the existing algorithms using new techniques and learn from that ourselves.
Also, it is not always about a new algorithm; sometimes it’s ‘just’ combining existing algorithms into something new. That’s similar to what we did with PreSonus in Studio One: we had the chord detection and the RETUNE algorithm, and we put them together to create harmonic editing.
Similarly, we’re currently working on a new product combining a lot of our technologies to support musicians with their daily practice, train their ear and eventually help them become better musicians. So, keep your eyes and ears open for that next year.
Big thanks to Tim for taking the time for this interview! Learn more about zplane at the following links:
As with so many aspects of audio, the subject of compression presets polarizes people. The purists say there’s no point in having presets, because every signal is different, and the same compressor settings will sound very different on different sources. On the other hand, software comes with presets, and there are plenty of recording blogs on the web that dispense advice about typical preset settings. So who’s right?
And as with so many aspects of audio, they all are. If a preset works “out of the box,” that’s just plain luck. However, there are certain ranges of settings that work well in many cases for particular types of signals. In any case, the effects of compression are totally dependent on the input signal level anyway: if the threshold is set to -10 dB, then signals that peak at 0 dB will sound very different compared to signals that peak at -10 dB.
The most effective way to approach compression is to decide what effect you want the compression to accomplish, then adjust the compression settings accordingly. It’s also important to remember that compression isn’t just some monolithic effect that “squashes things.” For example, with kick and snare, compression can act just like a transient/decay shaper due to a drum’s rapid decay.
The usual goal for compressing kick is an even sound, yet one that doesn’t reduce punch. However, you have a great deal of latitude in deciding how to implement that goal.
The preset in Fig. 1 uses a fairly high ratio, and hard knee, to even out the highest levels. You want the compression to take hold relatively rapidly, but not take away from the punch. The best option is to start with the attack time at 0, and increase it until you hear the initial hit clearly (but don’t go past that point). Because a kick decays fast, release can be fast as well.
For transient shaping, slowing the attack time softens the attack. Raising the ratio increases the sustain somewhat, while making space for the attack (assuming an appropriate attack time). Between the attack and ratio controls, you can pretty much tailor the kick drum’s attack and sustain characteristics, as well as even out the overall sound. A higher threshold is another way to emphasize the attack, by letting the decay occur naturally. Lowering the threshold reduces the level difference between the attack and decay.
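The interplay between attack time and transient shaping can be sketched numerically. In this illustrative Python model (a simple one-pole attack smoother with made-up numbers, not Studio One’s actual compressor algorithm), a short attack clamps the gain almost immediately, while a longer attack lets the initial hit through before the gain reduction catches up:

```python
import math

def gain_reduction_curve(target_db, attack_ms, dur_ms=50.0, fs=44100):
    """One-pole attack smoothing: gain reduction moves toward its target
    exponentially, so a longer attack time lets more of the initial
    transient pass at full level before the compressor clamps down."""
    coeff = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    gr, curve = 0.0, []
    for _ in range(int(fs * dur_ms / 1000.0)):
        gr = coeff * gr + (1.0 - coeff) * target_db
        curve.append(gr)
    return curve

fast = gain_reduction_curve(-12.0, attack_ms=1.0)   # clamps almost instantly
slow = gain_reduction_curve(-12.0, attack_ms=20.0)  # lets the hit punch through
```

Comparing the two curves a couple of milliseconds into a hit shows the difference: the 1 ms attack has already applied most of its gain reduction, while the 20 ms attack has barely started, leaving the drum’s transient intact.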
Snare responds similarly to kick; however, with an acoustic drum kit, the kick is more isolated physically than the snare. As a result, compressing the snare has the potential to emphasize leakage. Fortunately, the snare is often the focus of a drum part, so you can simply compress the snare and accept that leakage is part of the deal. With individually multitracked drums (including electronic drums), where leakage is not a problem, it’s still usually the snare and kick that get compression.
With snare, you may want to use a lower ratio (2:1 – 3:1) for a fuller snare sound. Or, increase the ratio to emphasize the attack more. Again, use the attack time to dial in the desired attack characteristics.
With both kick and snare, you’ll usually want a hard knee. However, the knee control is a fantastic way to fine-tune the attack—and once you have that dialed in, you’ll be good to go.
We’re proud to announce that we’ve partnered with Splice to make it easier for you to backup and share your Studio One project files. Splice Studio now supports Studio One project files—meaning once you’ve downloaded the Splice Studio Desktop application, you’ll be able to automatically back up your project files, access any saved version, and collaborate with others on your tracks!
Summer may be over in the northern hemisphere, but we can still splash around. This is one of those “hiding in plain sight” kind of tips, but it’s pretty cool.
The premise: Sometimes you don’t want reverb all the time, so you kick up the send control to push something like a snare hit into the reverb for a quick reverb “splash” (anyone who’s listened to my music knows this is one of my favorite techniques). The reverb adds a dramatic emphasis to the rhythm, but is short enough that it doesn’t wear out its welcome—listen to the audio example, which demos this technique with Studio One’s Crowish Acoustic Chorus 1 drum loop.
However, although this technique is great with drums, it also works well with rhythm guitar, hand percussion, synths, you name it… even kick works well in some songs. I’m not convinced about bass, but aside from that, this has a lot of uses.
Studio One offers an easy way to produce regular splashes automatically (like on the second and fourth beats of a measure, where an emphasizing element hits). Insert X-Trem before the reverb, select 16 Steps as the “waveform,” click Sync, and choose your rhythm. The screenshot shows Beats set to 1/2 so that the reverb splash happens on 2 and 4, which in the case of the audio example, adds reverb to the snare on 2, and to the closed high-hat on 4.
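Conceptually, the 16-step “waveform” is just a per-beat gain pattern on the reverb send. A trivial sketch (the helper name and beat numbering are made up for illustration):

```python
def splash_pattern(n_beats=4, splash_beats=(2, 4)):
    """Per-beat reverb-send level for one 4/4 bar: full send on the
    splash beats (here 2 and 4), zero everywhere else."""
    return [1.0 if beat in splash_beats else 0.0
            for beat in range(1, n_beats + 1)]

pattern = splash_pattern()  # [0.0, 1.0, 0.0, 1.0]
```

X-Trem’s step sequencer draws exactly this shape as its modulation “waveform,” so only the signal landing on beats 2 and 4 makes it into the reverb.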
And that’s pretty much it. Because the reverb is in a bus, set Mix to 100%. The 480 Hall from Halls > Medium Halls is one of my faves for this application, but hey… use whatever ’verb puts a smile on your face.