Selah Weekes, aka “selah you did that?” is a 20-year-old music producer from Toronto. Raised in a family of musicians, Selah’s life was intertwined with music from the start. Now, with seven years of dedicated music production experience, Selah has honed his craft through programs like the Remix Project 16.0 and Toronto Metropolitan University’s professional music program, which helped him get to the heart of music production and the broader industry. As an active educator, Selah has garnered more than 100,000 views with his online tutorials, inspiring aspiring music producers. Although he’s just getting started, Selah has amassed more than 30 million cumulative streams across all platforms, and his love for music reverberates through his every endeavour. In 2023, as a summer student working at SOCAN Foundation, Selah led a series of group and one-on-one workshops, educating SOCAN members in music production, beat-making, mixing, and sound design. Below are some of the tips and tricks he shared in the art and craft of music production.

Compression
Compression is probably the most important effect, yet it’s often very hard to hear in a recording. It glues sounds together, levels out the dynamic range of a given track, and can make vocals sound more natural. It’s the number one tool for getting professional-sounding mixes. There are different styles of compression, and different layers of it; it can balance your mix, enhance it, glue it, or fix it. If you want to learn more about compression, and how to hear it, there’s a free 10-hour course on YouTube about how to use it.

In compression, the threshold sets where your gain reduction starts. A low threshold compresses more of the signal, while a high threshold compresses less of it. The ratio determines how strongly the signal is compressed once it goes above the threshold. For example, a 4-to-1 ratio means that for every 4 decibels (dB) the signal rises above the threshold, the output only rises by 1 dB. An infinite ratio blocks the signal from crossing the threshold at all – this is also known as a limiter. Attack and release are used to shape the compression, giving it some character. Attack sets how fast the compressor reaches its full gain reduction after the signal passes the threshold. For example, if you have a slow attack, it’s like a slope moving gradually from 1-to-1 toward whatever your ratio is, such as 4-to-1. Release sets how quickly the gain reduction stops after the signal falls back below the threshold. You can compress the entire track, or use multi-band compressors to treat each frequency band individually.
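To make the threshold and ratio concrete, here’s a minimal Python sketch of a compressor’s static gain curve. It leaves out attack and release smoothing, and the function name and default values are just illustrative:

```python
def compressed_level_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Static compressor curve (attack/release smoothing ignored):
    below the threshold the level is untouched; above it, every
    `ratio` dB of input above the threshold yields only 1 dB of
    output above it."""
    over_db = level_db - threshold_db
    if over_db <= 0:
        return level_db  # below threshold: no gain reduction
    return threshold_db + over_db / ratio  # e.g. 8 dB over at 4:1 -> 2 dB over

# A peak 8 dB above a -18 dB threshold comes out only 2 dB above it:
print(compressed_level_db(-10.0))  # -> -16.0
```

Raising the ratio toward infinity makes the second branch return the threshold itself, which is exactly the limiter behaviour described above.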

A de-esser takes out harsh “S” sounds, around the 4,500 Hz range, but you need to use it after the compressor, because compression can bring out these harsh frequencies even more.

Reverb
Reverb is probably one of the most over-used and wrongly used effects. It sounds good, but it usually muddies up the mix. Modern reverbs have a knob to keep them from affecting the low end. Use reverb to add space to an empty mix, but don’t clutter it up too much, and try to stick to one type of reverb in a mix. Types of reverb include Hall (replicating the sound of a concert hall, good for strings, can muddy up the mix); Chamber (similar to Hall but with more clarity, good for vocals and guitars); Room (emulates the reverb of a room, much smaller, good for most things, doesn’t muddy up the mix very much); Plate (doesn’t copy a real space – an artificial reverb created with a vibrating metal plate, very warm); and Spring (an artificial reverb that has a spring inside it, rather than a plate, brings a clean and bright tone, the kind of reverb found inside most guitar amplifiers).

Delay
Delay repeats the audio signal to which you apply it. Delay time sets how long after the original signal each repeat arrives. Feedback sets how many times the repeats will happen before fading out. You can mix between the “wet” signal (100% wet is all of the signal put through the delay) and the “dry” signal (100% dry is the untouched signal before the delay). Some delays have a ping-pong button, which bounces the delayed signal between both ears. You can also set the timing of the delay, usually by the sub-division of beats: half-note, quarter-note, eighth-note. Types of delay include Slap (short, single repeat); Doubling Echo (doesn’t add an audible delay, but thickens vocals, making it sound like two vocal takes – though recording two takes is better); Looping (delays a sound long enough to create a loop); and Modulated (effects like chorus, flanger, and phase shifters are technically delays).
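The delay-time, feedback, and wet/dry controls described above can be sketched as a simple feedback delay line in Python – a hypothetical toy implementation for illustration, not any particular plug-in:

```python
import numpy as np

def feedback_delay(dry, delay_samples, feedback=0.5, mix=0.5):
    """Toy feedback delay: each repeat is fed back into the delay
    line, scaled by `feedback`, so repeats decay over time.
    `mix` blends the dry (0.0) and wet (1.0) signals."""
    out = np.zeros(len(dry))
    buf = np.zeros(len(dry))  # running wet signal with feedback
    for n in range(len(dry)):
        delayed = buf[n - delay_samples] if n >= delay_samples else 0.0
        buf[n] = dry[n] + feedback * delayed  # feed the echo back in
        out[n] = (1.0 - mix) * dry[n] + mix * delayed
    return out
```

Feeding a single impulse through it, you get echoes at every `delay_samples` interval, each one `feedback` times quieter than the last; a ping-pong delay would simply alternate those echoes between the left and right channels.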

Buses
Buses, or “send” tracks, are tracks to which you send other sounds. Buses often have reverb and delays on them. You can route multiple tracks’ audio to one bus track. For example, you can have a pre-master bus to try things before adding to the actual master recording. Buses are used to add effects to a group of tracks – for example, having a drum bus. They help to mix the dry and wet signal with less muddiness. They free up the CPU (Central Processing Unit) on your system by not using as many CPU-intensive plug-ins, especially reverbs, on every individual track. Buses also allow for greater complexity in layering effects.

Plug-In Order
Plug-in order – the sequence in which you apply your plug-ins, or effects – really matters. Different plug-ins will have different relationships with each other. So it’s a good idea to learn what each plug-in does, and determine what order works for you, and why – for example, why it’s best to use a de-esser after a compressor. There’s no set rule, so you can determine your own plug-in order. Use your own ears, and your own discretion, and play around to find out what works best for you.

My plug-in order for vocals – though it’s subjective, and depends on the mix – is Melodyne, Autotune (or an alternative tuner), Subtractive EQ, Compression, De-Esser, EQ, Multi-Band Compression (for specific frequencies), Saturation (which makes the vocals warmer), then Reverb and Delay (on buses).

As a SOCAN member, is your music being used in an on-screen production, broadcast on TV, in movies, or on audiovisual streaming services? Are you already collecting performance royalties for that screen production use? If so, it’s time to start collecting AV post-sync (audiovisual post-synchronization) royalties to add to the value of your music – and increase your royalty payments.

To better understand and benefit from your AV post-sync rights, we asked Martin Lavallée, SOCAN’s Head of Reproduction Rights, to answer some key questions.

What are AV post-sync (audiovisual post-synchronization) rights?
AV post-sync (audiovisual post-synchronization) rights are those for the digital server copies of movies, TV shows, and other productions, made by broadcasters or online platforms for offline viewing, and where on-demand functions like rewind and fast-forward are used. They refer to any reproductions that are made by any digital AV platform or TV broadcaster of the screen production in which your music is already embedded. Essentially, it’s the reproduction rights for any copy they make of any screen production in order to be able to broadcast or stream it digitally.

What’s the difference between synchronization (sync) and post-synchronization (post-sync) rights?
They are very different rights, and the licenses are paid by different organizations.

Sync rights are usually handled by the producer of a movie or TV show. This is the team that’s sync-ing (synchronizing) music to images in a TV show or movie – whether it’s a new song or composition created for a specific scene, or a pre-existing song or composition being used in the production. The money for this usually comes from the production budget of the movie or TV show.

Post-sync rights come into play once the final screen production, including all of the music embedded in it, is given to the broadcaster or audiovisual streaming platform – like Netflix, AppleTV+, Disney+, Illico, Crave, etc. Because this right occurs after the production is complete, a music rights collective, like SOCAN, is the entity that licenses them – not the producer. The audiovisual streaming services put the production onto a server, where their customers access it for viewing. Customers are also able to make temporary copies onto their devices for later viewing, with on-demand functions like pause, rewind, and fast-forward. For all of this to happen, in each case, the audiovisual streaming platform has to make a digital copy of the production. You can use those functions because the movie or TV show has been copied into a memory cache. It’s not a download, but it’s a kind of quasi-permanent copy. Because of the advocacy of SOCAN, that copy was recognized and affirmed at the Supreme Court of Canada as being subject to reproduction rights. This right doesn’t exist in the U.S., but it is recognized all over the world.

Are publishers already collecting AV post-sync royalties?
Even though the AV post-sync right has been around since 1992, and the Supreme Court of Canada re-affirmed it in 2014, a lot of our rights holders are only beginning to realize and understand the value of this complementary revenue stream, one that exists in addition to their sync business. SOCAN has now concluded more than 30 licence agreements with different TV stations and platforms, and is regularly distributing AV post-sync royalties to those who joined SOCAN for reproduction rights. Don’t forget: being a member of SOCAN allows us to manage your performance rights, but you must sign up separately to become a reproduction rights client. One does not create the other automatically.

Why are there concerns about AV post-sync rights among the screen composer community?
Screen composers often believe that raising the prospect of AV post-sync rights will deter producers from working with them, thinking that producers are responsible for paying this royalty, and might be unwilling to do so. But the producers aren’t responsible for paying AV post-sync royalties; the broadcaster or streaming service pays them. This is important, because when a producer is selling their movie or TV show to a broadcaster or streaming platform, the producer wants to ensure that there are no added or outstanding rights considerations. They want to deliver with a one-time licence fee and no strings. However, rights collectives like SOCAN have direct licence agreements with the broadcaster or streaming platform for AV post-sync, so they’ve already agreed to pay for AV post-sync rights. The producer can still provide the production with no strings, understanding that the broadcaster or platform has already agreed to pay post-sync.

We’ve normalized the fact that a contract clause reserving future AV post-sync royalties won’t jeopardize the sale of a project to a platform. We’ve set up post-sync deals with all broadcasters in Québec, along with the largest digital AV platform in Canada, and more.

With whom does SOCAN have agreements to collect for AV Post-Sync rights?
We collect from major streaming services and commercial TV networks. We’ve negotiated settlements with CBC, and the largest audiovisual platform in Canada, and SOCAN is in ongoing negotiations with virtually all the others. Because they’re already licensed with SOCAN for performing rights, our established relationship makes it easier to sit down and negotiate AV post-sync rights.

Why does using a collective like SOCAN make sense for AV post-sync rights?
SOCAN has filed an AV post-sync tariff going back to 2015 and is negotiating AV post-sync with all major screen platforms. Deals already concluded create an immediate revenue stream for reproduction rights clients. And SOCAN already manages cue sheets, which identify the repertoire of our members being used in screen productions. Distribution occurs four times a year, and our recently enhanced royalty statements are more granular, to better meet the needs of our publishers. Our à-la-carte service offer allows publishers and self-published writers to join solely for AV post-sync rights, with no long-term commitment. We have therefore grown our repertoire for this right by more than 250 percent in the last year, which in turn allows us to negotiate better deals.

Without SOCAN, rightsholders would have to personally negotiate with every broadcaster and audiovisual streaming platform for every music use for AV post-sync rights. Time-consuming doesn’t even begin to describe the effort that would require. It’s far more efficient to have us do it on a collective basis, for all their repertoire, engaging organizations with whom we already have contractual relationships.




Plug-Ins
TAL software has some great effects plug-ins, including some great reverbs. Valhalla VintageVerb is also good. Those are my favourite go-to reverbs. For instrument plug-ins, u-he has some great synths; Podolski is my favourite. BBC Symphony Orchestra, and a lot of the Spitfire plug-ins, have some great orchestral sounds; it’s like a whole orchestral library. Kontakt is pretty good. LABS, by Spitfire, is a great one. And PanCake is another great one – it’s an emulation of a pretty good Soundtoys plug-in.

Mixing
You want to start with leveling and panning. In leveling, you’re using volume sliders to get each sound to an appropriate level in the mix, so nothing is too loud or too quiet. With panning, you’re placing each sound in the stereo field, so the mix is neither too wide nor too narrow.

For leveling, you can mute all of the instruments, turn them down to zero, and start by leveling the loudest one (usually the kick/bass drum) first, then bringing other instruments up around it. If you want your track to sound more “high-end,” you can turn up the volume of a high-end instrument in your track. You don’t have to have everything at the same level; everything just has to be at the level you want to hear it. There’s usually an instrument in the track that’s the main thing, or the special thing, so if that’s the guitar, make sure the guitar is turned up more than everything else. If it’s the vocal, then make sure the vocal is the most present thing in the mix.

Panning is a tool very much under-used by beginners. This is how you get a wider mix. Back in the old days, a lot of people panned everything 100% right or 100% left, but with modern stereo playback and in-ear headphones (earbuds), I don’t think that’s as necessary anymore. You can look at how the instruments in a band recording are placed in the mix, as a guide. Or if you look at photographs of orchestras onstage, see how the instruments are located – where the bass instruments, percussion, horns, and woodwinds are – to get a sense of how you might want to pan your equivalents in the mix. You can try to emulate how it works in real life. You can do the same with a four-piece jazz band.
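One common way to place a sound anywhere between hard left and hard right is a constant-power pan law. This Python sketch uses the standard sin/cos mapping found in audio textbooks – it isn’t how any specific DAW necessarily implements its pan knob:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Constant-power pan law: `pan` runs from -1.0 (hard left)
    through 0.0 (centre) to +1.0 (hard right). The sin/cos pair
    keeps total power roughly constant as a sound moves across
    the stereo field, so it doesn't get quieter in the middle."""
    angle = (pan + 1.0) * np.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return left, right
```

At centre, each channel carries the signal at about 0.707 of full scale (a 3 dB drop per side), which is what keeps the perceived loudness steady as you sweep the pan.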

Pitch Correction
A lot of people are kind of “iffy” on pitch correction. But virtually all recordings of modern music released at a professional level are pitch-corrected. It does give them a more professional, cohesive sound. If all your instruments are on-key, and the vocals are just a little off-key, it might be something you can’t consciously hear at all – but when everything’s on-key, it just sounds a lot better. I usually use Melodyne and Waves Tune for pitch correction.

EQ (Equalization)
After leveling, EQ is the next step in mixing. It helps to understand the frequency spectrum. 0 to 80 Hz (hertz) is sub-bass, 80 to 250 is bass, 250 to 500 is low-mids, 500 to 2,000 is mids, 2,000 to 4,000 is high-mids, 4,000 to 6,000 is sibilance (where a lot of presence can be), and 6,000 to 20,000 is the high end. There are different types of “standard” EQ curves that many people use: low cut, high cut, bell curve, band pass.
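As a quick reference, the band boundaries above can be captured in a small lookup table – a hypothetical helper built from exactly the ranges listed in this paragraph:

```python
# Upper bound of each band in Hz, taken from the ranges above.
BANDS = [
    (80, "sub-bass"), (250, "bass"), (500, "low-mids"),
    (2_000, "mids"), (4_000, "high-mids"),
    (6_000, "sibilance"), (20_000, "high end"),
]

def band_name(freq_hz):
    """Return the name of the band a frequency falls into."""
    for upper_hz, name in BANDS:
        if freq_hz <= upper_hz:
            return name
    return "above audible range"

print(band_name(4500))  # -> sibilance, where a de-esser works
```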

Additive EQ boosts certain frequencies. Subtractive EQ takes frequencies out of the mix. Standard practice is to take out frequencies below 100 Hz. The idea is to let each sound have its own space in the frequency spectrum. Subtractive EQ is usually what you should be using, taking certain frequencies out. For example, if I take out some of the low end, it’ll make the high end sound more present. Before boosting, which can add unwanted frequencies, try cutting – usually, cutting is better than boosting. For melodies, I’m usually cutting from the lower end to give more space. You can apply a low cut on the entire melody, or the entire track, so you don’t have to do it with each individual instrument. It can save you time.
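A low cut like the 100 Hz one described above can be sketched with a standard Butterworth high-pass filter – here via SciPy, with the filter order and default cutoff chosen purely for illustration:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def low_cut(audio, cutoff_hz=100.0, sample_rate=44100):
    """Subtractive EQ: a high-pass (low-cut) filter that rolls off
    everything below `cutoff_hz`, clearing out sub-bass rumble so
    the rest of the mix has more room."""
    sos = butter(2, cutoff_hz, btype="highpass",
                 fs=sample_rate, output="sos")
    return sosfilt(sos, audio)
```

Running a whole melody bus through one call like this is the time-saver mentioned above: one filter on the group instead of one per instrument.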

Mid-Range
To allow mixes to translate to multiple playback devices – cellphones, AirPods, headphones, laptops, home sound systems, car sound systems – focus on the mid-range when mixing. Mixing headphones (designed for mixing) mostly focus on the mid-range when you listen to the playback. This is because most sound systems don’t have as wide a frequency range as your recording software. Some car sound systems have no high end – it really drops off after 1,000 Hz. AirPods also don’t have a lot of presence, or low end. But their mid-range is similar to that of a car sound system, as it is for most listening devices. So you want to mix to that. If you don’t have mixing headphones, you can cut the low and high ends on your EQ, and listen to the playback like that, to hear how it’ll sound on most devices.
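That “cut the low and high ends” trick can be approximated in code with a band-pass filter. The 200 Hz and 4,000 Hz cutoffs below are illustrative guesses at a mid-range window, not a standard:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def midrange_check(audio, low_hz=200.0, high_hz=4000.0,
                   sample_rate=44100):
    """Band-pass the mix to roughly the mid-range, approximating
    how a phone speaker or car system will reproduce it. Cutoff
    values are illustrative, not a calibrated device model."""
    sos = butter(2, [low_hz, high_hz], btype="bandpass",
                 fs=sample_rate, output="sos")
    return sosfilt(sos, audio)
```

Listening to the mix through this narrowed window quickly reveals whether the important elements – especially vocals – still come through when the lows and highs are gone.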