Selah Weekes, aka “selah you did that?” is a 20-year-old music producer from Toronto. Raised in a family of musicians, Selah has had his life intertwined with music from the start. Now, with seven years of dedicated music production experience, Selah has honed his craft through programs like the Remix Project 16.0 and Toronto Metropolitan University’s professional music program, which helped him get into the heart of music production and the broader industry. As an active educator, Selah has garnered more than 100,000 views with his online tutorials, inspiring aspiring music producers in a captivating way. Although he’s just getting started, Selah has amassed more than 30 million cumulative streams across all platforms, and his love for music reverberates through his every endeavour. In 2023, as a summer student working at SOCAN Foundation, Selah led a series of both group and one-on-one workshops, engagingly educating SOCAN members in music production, beat-making, mixing, and sound design. Below are some of the tips and tricks he shared in the art and craft of music production.

Plug-Ins
TAL Software has some great effects plug-ins, including some great reverbs. Valhalla VintageVerb is also good. Those are my favourite go-to reverbs. For instrument plug-ins, u-he has some great synths – Podolski is my favourite. BBC Symphony Orchestra, and a lot of the Spitfire plug-ins, have some great orchestral sounds; it’s like a whole orchestral library. Kontakt is pretty good. LABS, by Spitfire, is a great one. And PanCake is another great one – it’s an emulation of a pretty good Soundtoys plug-in.

Mixing
You want to start with leveling and panning. In leveling, you’re using the volume sliders to set each sound to an appropriate level in the mix, so nothing is too loud or too quiet. In panning, you’re placing each sound left or right in the stereo field, so the mix isn’t too wide or too narrow.

For leveling, you can mute all of the instruments, turn them down to zero, and start by leveling the loudest one (usually the kick/bass drum) first, then bringing other instruments up around it. If you want your track to sound more ‘high-end,’ you can turn up the volume of a high-end instrument in your track. You don’t have to have everything at the same level; everything just has to be at the level you want to hear it. There’s usually an instrument in the track that’s the main thing, or the special thing, so if that’s the guitar, make sure the guitar is turned up more than the rest. If it’s the vocal, then make sure the vocal is the most present thing in the mix.
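If it helps to see the arithmetic, here is a minimal Python sketch of what leveling amounts to under the hood – each fader value in dB becomes a linear gain that scales its track before everything is summed into the mix. The track names, fader values, and noise-burst “audio” are purely illustrative, not a recommended recipe.

```python
import numpy as np

# Hypothetical fader settings in dB (0 dB = unchanged); illustrative only.
fader_db = {"kick": 0.0, "bass": -3.0, "guitar": -2.0, "vocal": -1.0, "pads": -8.0}

def db_to_gain(db: float) -> float:
    """Convert a fader value in decibels to a linear gain multiplier."""
    return 10.0 ** (db / 20.0)

# Stand-in 'tracks': one second of noise each at 44.1 kHz.
sr = 44100
rng = np.random.default_rng(0)
tracks = {name: 0.1 * rng.standard_normal(sr) for name in fader_db}

# Leveling = scaling each track by its fader gain, then summing into the mix.
mix = sum(db_to_gain(fader_db[name]) * audio for name, audio in tracks.items())
```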

Panning is a very under-used tool by beginners. This is how you get a wider mix. A lot of people, back in the old days, usually panned everything 100% right or 100% left. But with stereo, and in-ear headphones (earbuds), I don’t think that’s as necessary anymore. You can look at how the instruments in a band recording are placed in the mix, as a guide. Or if you look at photographs of orchestras onstage, see how the instruments are located – where the bass instruments, percussion, horns, and woodwinds are – to get a sense of how you might want to pan your equivalents in the mix. You can try to emulate how it works in real life. You can do the same with a four-piece jazz band.
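As a rough illustration of what a pan knob does, here is a short Python sketch of a constant-power pan law (one common choice – your DAW may use a different law). The “guitar” and “shaker” signals are just stand-in tones, and the pan positions are arbitrary.

```python
import numpy as np

def pan(mono: np.ndarray, position: float) -> np.ndarray:
    """Constant-power pan: position -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
    angle = (position + 1.0) * np.pi / 4.0             # map [-1, 1] onto [0, pi/2]
    return np.stack([np.cos(angle) * mono,             # left channel
                     np.sin(angle) * mono], axis=-1)   # right channel

# Stand-in sources, placed a little left and well right, the way you might
# place them by picturing where they'd sit on a stage.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
guitar = 0.2 * np.sin(2 * np.pi * 196 * t)                    # G3-ish tone
shaker = 0.05 * np.random.default_rng(1).standard_normal(sr)  # noise burst
stereo_mix = pan(guitar, -0.3) + pan(shaker, 0.6)             # shape (sr, 2)
```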

Pitch Correction
A lot of people are kind of “iffy” on pitch correction. But recordings of modern music being released at a professional level are pretty much all pitch-corrected, and it does give them a more professional, cohesive sound. If all your instruments are on-key and the vocals are just a little off-key, it might be something you can’t consciously pick out – but when everything is on-key, it just sounds a lot better. I usually use Melodyne and Waves Tune for pitch correction.

EQ (Equalization)
After leveling, EQ is the next step in mixing. It helps to understand the frequency spectrum. 0 to 80 Hz (hertz) is sub-bass, 80 to 250 is bass, 250 to 500 is low-mids, 500 to 2,000 is mids, 2,000 to 4,000 is high-mids, 4,000 to 6,000 is sibilance (where a lot of presence can be), and 6,000 to 20,000 is the high end. There are different types of “standard” EQ curves that many people use: low cut, high cut, bell curve, band pass.
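For quick reference, those ranges can be kept on hand as a simple lookup – here sketched as a Python dictionary. The band edges are the approximate figures given above, and the labels are informal.

```python
# Approximate frequency bands, in Hz, as described above.
EQ_BANDS_HZ = {
    "sub-bass":  (0, 80),
    "bass":      (80, 250),
    "low-mids":  (250, 500),
    "mids":      (500, 2_000),
    "high-mids": (2_000, 4_000),
    "sibilance": (4_000, 6_000),   # where a lot of presence can be
    "high end":  (6_000, 20_000),
}

def band_of(freq_hz: float) -> str:
    """Return the informal band name a frequency falls into."""
    for name, (low, high) in EQ_BANDS_HZ.items():
        if low <= freq_hz < high:
            return name
    return "out of range"

print(band_of(300))   # -> low-mids
```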

Additive EQ boosts certain frequencies. Subtractive EQ takes frequencies out of the mix. Standard practice is to take out frequencies below 100 Hz. The idea is to let each sound have its own space in the frequency spectrum. Subtractive EQ is usually what you should be using: for example, if I take out some of the low end, it’ll make the high end sound more present. Before adding stuff, which can add unwanted frequencies, try taking stuff out – usually, cutting is better than boosting. For melodies, I’m usually cutting from the lower end to give more space. You can apply a low cut on the entire melody, or the entire track, so you don’t have to do it with each individual instrument. It can save you time.
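To make the low-cut idea concrete, here is a minimal sketch using SciPy’s Butterworth filter as a stand-in for the high-pass/low-cut you would normally reach for in your DAW’s EQ. The 100 Hz cutoff follows the standard practice mentioned above; the filter order and the noise “melody bus” are arbitrary choices for the example.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def low_cut(audio: np.ndarray, cutoff_hz: float = 100.0, sr: int = 44100) -> np.ndarray:
    """Subtractive move: high-pass the signal so everything below cutoff_hz rolls off."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, audio)

# Example: clear the low end on a whole melody bus in one pass,
# instead of filtering each instrument individually.
sr = 44100
melody_bus = 0.1 * np.random.default_rng(2).standard_normal(sr)
melody_bus = low_cut(melody_bus, cutoff_hz=100.0, sr=sr)
```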

Mid-Range
To allow mixes to translate to multiple playback devices – cellphones, AirPods, headphones, laptops, home sound systems, car sound systems – focus on the mid-range when mixing. Mixing headphones (headphones designed for mixing) mostly focus on the mid-range when you listen to the playback. This is because most sound systems don’t have as wide a frequency range as your recording software. Some car sound systems have no high end – it really drops off after 1,000 Hz. AirPods also don’t have a lot of presence, or low end. But their mid-range is similar to that of a car sound system, as it is for most listening devices. So you want to mix to that. If you don’t have mixing headphones, you can cut the low and high ends on your EQ, and listen to the playback like that, to hear how it’ll sound on most devices.
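If you want to approximate that “cut the low and high ends” check in code rather than on an EQ, a band-pass filter does roughly the same job. This sketch assumes SciPy is available; the 200 Hz–5 kHz band edges are my own assumption about what counts as mid-range here, so adjust them by ear.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def midrange_check(mix: np.ndarray, sr: int = 44100,
                   low_hz: float = 200.0, high_hz: float = 5000.0) -> np.ndarray:
    """Crude small-speaker preview: keep only the mid-range of the mix."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, mix)

# Stand-in mix; listening back to the filtered version gives a rough idea
# of how the balance will come across on phones, earbuds, and small speakers.
sr = 44100
demo_mix = 0.1 * np.random.default_rng(3).standard_normal(sr)
preview = midrange_check(demo_mix, sr=sr)
```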




Compression
Even though it’s probably the most important effect, it’s often very hard to hear in a recording. It glues the sounds together. It levels out the dynamic range of a given track, and can make vocals sound more natural. This is the number one tool to get professional-sounding mixes. There are different styles of compression, different layers of it, and it can balance your mix, enhance it, glue it, or fix it. If you want to learn more about compression, and how to hear it, there’s a free 10-hour course on YouTube about how to use it.

In compression, the threshold sets where your gain reduction starts. A low threshold compresses more of the signal, while a high threshold compresses less of it. The ratio determines how much gain reduction is applied once the signal goes above the threshold. For example, a 4-to-1 ratio means that for every 4 decibels (dB) the signal goes over the threshold, only 1 dB comes out above it – the rest is pulled back down. An infinite ratio stops the signal from crossing the threshold at all – this is also known as a limiter. Attack and release are used to shape the compression, giving it some character. Attack sets how fast the compressor reaches its full gain reduction after the signal passes the threshold. For example, if you have a slow attack, the gain reduction ramps up gradually toward whatever your ratio is, from 1-to-1 through to 4-to-1. Release sets how quickly the gain reduction stops after the signal drops back below the threshold. You can compress the entire track, or use a multi-band compressor to compress individual frequency bands separately.
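Here is a small Python sketch of just the “gain computer” part of a compressor – threshold and ratio only, with attack and release left out – to show how the numbers work. The threshold and ratio values are arbitrary examples.

```python
import numpy as np

def compress_db(level_db: np.ndarray, threshold_db: float = -18.0,
                ratio: float = 4.0) -> np.ndarray:
    """Static compression curve: anything above the threshold is scaled by 1/ratio.
    (A real compressor smooths this with attack and release times.)"""
    over = np.maximum(level_db - threshold_db, 0.0)   # dB above the threshold
    gain_reduction = over - over / ratio              # how much gets pulled down
    return level_db - gain_reduction

# A -6 dB peak is 12 dB over a -18 dB threshold; at 4:1 only 12/4 = 3 dB
# comes out above the threshold, so the peak lands at -15 dB.
print(compress_db(np.array([-6.0, -20.0])))   # -> [-15. -20.]
```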

A de-esser takes out harsh “S” sounds, around the 4,500 Hz range, but you need to use it after the compressor, because compression can bring out these harsh frequencies more.

Reverb
Reverb is probably one of the most over-used and wrongly used effects. It sounds good, but it usually muddies up the mix. Modern reverbs have a knob to keep them from affecting the low end. Use reverb to add space to an empty mix, but don’t clutter it up too much. Try to use one type of reverb in a mix. Types of reverb include Hall (replicates the sound of a concert hall, good for strings, can muddy up the mix); Chamber (similar to Hall but with more clarity, good for vocals and guitars); Room (emulates the reverb of a much smaller room, good for most things, doesn’t muddy up the mix very much); Plate (doesn’t copy a real space, artificial, very warm); and Spring (an artificial reverb that has a spring inside it, rather than a plate, brings a clean and bright tone – the kind of reverb found inside most guitar amplifiers).

Delay
Delay repeats the audio signal to which you apply it. Delay time sets how long after the original signal each repeat arrives. Feedback sets how many times the delay will repeat. You can mix between the “wet” signal (100% wet is all of the signal put through the delay) and the “dry” signal (100% dry is the normal signal before it goes into the delay). Some delays have a ping-pong button, which makes the delayed signal bounce between both ears. You can also set the timing of the delay, usually by beat sub-divisions: half, quarter, eighth. Types of delay include Slap (a short, single repeat); Doubling Echo (doesn’t add an audible echo, but thickens vocals, making it sound like two vocal takes – though recording two takes is better); Looping (delays a sound long enough to create a loop); and Modulated (effects like chorus, flanger, and phase shifters are technically delays).
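As an illustration of how delay time, feedback, and the wet/dry mix interact, here is a bare-bones single-tap feedback delay in Python. The 0.25-second delay time in the example corresponds to an eighth note at 120 BPM; the feedback and mix amounts are arbitrary.

```python
import numpy as np

def feedback_delay(dry: np.ndarray, sr: int = 44100, delay_s: float = 0.375,
                   feedback: float = 0.4, mix: float = 0.3) -> np.ndarray:
    """Single-tap delay: 'delay_s' is the delay time, 'feedback' controls how many
    repeats you hear (and how slowly they fade), and 'mix' blends dry (0.0)
    against the fully wet, delayed signal (1.0)."""
    n = int(delay_s * sr)                       # delay time in samples
    out_len = len(dry) + n * 8                  # room for the repeats to tail off
    dry_padded = np.pad(dry, (0, out_len - len(dry)))
    wet = np.zeros(out_len)
    # Each wet sample is the input plus a scaled copy of the wet signal from
    # n samples ago; that recirculation is what creates the string of repeats.
    for i in range(out_len):
        wet[i] = dry_padded[i] + (feedback * wet[i - n] if i >= n else 0.0)
    return (1.0 - mix) * dry_padded + mix * wet

# Example: an eighth-note delay at 120 BPM is 0.25 s (60 / 120 / 2).
sr = 44100
t = np.linspace(0, 0.5, sr // 2, endpoint=False)
blip = 0.3 * np.sin(2 * np.pi * 880 * t) * np.exp(-t * 30)   # short decaying tone
echoed = feedback_delay(blip, sr=sr, delay_s=0.25, feedback=0.5, mix=0.4)
```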

Buses
Buses, or “send” tracks, are tracks to which you send other sounds. Buses often have reverbs and delays on them. You can route the audio from multiple tracks to one bus track. For example, you can have a pre-master bus to try things out before committing them to the actual master. Buses are used to add effects to a group of tracks – for example, a drum bus. They help you mix the dry and wet signals with less muddiness. They free up the CPU (Central Processing Unit) on your system, because you’re not running as many CPU-intensive plug-ins – especially reverbs – on every individual track. Buses also allow for greater complexity in layering effects.
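The routing logic is simple enough to sketch in Python: sum whatever each track sends into the bus, run the shared effect once on that sum, and add the result back alongside the dry tracks. Everything below – the track names, send levels, and the placeholder “effect” – is hypothetical.

```python
import numpy as np

def send_to_bus(tracks: dict, send_levels: dict, bus_effect) -> np.ndarray:
    """Sum each track into the bus at its send level, then run the shared
    effect once on the bus instead of once per track."""
    bus_input = sum(send_levels.get(name, 0.0) * audio
                    for name, audio in tracks.items())
    return bus_effect(bus_input)

def toy_effect(x: np.ndarray) -> np.ndarray:
    return 0.5 * x   # stand-in for the reverb or delay a real bus would host

sr = 44100
rng = np.random.default_rng(4)
tracks = {"snare": 0.1 * rng.standard_normal(sr),
          "vocal": 0.1 * rng.standard_normal(sr)}
# Send the snare into the shared effect harder than the vocal.
wet_bus = send_to_bus(tracks, {"snare": 0.8, "vocal": 0.3}, toy_effect)
mix = sum(tracks.values()) + wet_bus   # dry tracks plus the single wet bus
```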

Plug-In Order
Plug-in order – the sequence in which you apply your plug-ins, or effects – really matters. Different plug-ins will have different relationships with each other. So it’s a good idea to learn what each plug-in does, and determine what order works for you, and why – for example, why it’s best to use a de-esser after a compressor. There’s no set rule, so you determine your own plug-in order: use your own ears and your own discretion, and play around with it to find out what works best for you.

My plug-in order for vocals – though it’s subjective, and depends on the mix – is Melodyne, Autotune (or an alternative tuner), Subtractive EQ, Compression, De-Esser, EQ, Multi-Band Compression (for specific frequencies), Saturation (which makes the vocals warmer), then Reverb and Delay (on buses).
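Conceptually, a plug-in chain is just one stage feeding the next, which a few lines of Python can make explicit. The stage names below mirror the vocal chain above, but the “plug-ins” themselves are empty placeholders – this is only about the ordering, not the processing.

```python
def apply_chain(audio, stages):
    """Run audio through each (name, effect) stage in order; the order matters."""
    for _name, effect in stages:
        audio = effect(audio)
    return audio

placeholder = lambda audio: audio   # stands in for a real plug-in

vocal_chain = [
    ("pitch correction", placeholder),
    ("subtractive EQ",   placeholder),
    ("compression",      placeholder),
    ("de-esser",         placeholder),   # after the compressor, as noted above
    ("EQ",               placeholder),
    ("multi-band comp",  placeholder),
    ("saturation",       placeholder),
]

# processed = apply_chain(vocal_take, vocal_chain)   # reverb and delay live on buses
```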

 



What are the differences between song rights and master rights?

“Song” refers to the rights in a song or musical composition itself – the music and the words (if any). These rights are generated in the writing of a song, before any recorded performance occurs. “Master” refers to the rights in the sound recording of the song or composition. The sound recording is what you hear on radio or streaming services – the recorded performance of the song by the artist. Because the writer isn’t always the owner of the sound recording (that owner may be the artist or a label), these two sets of rights ensure everyone is fairly compensated for their contribution.

Songwriters, composers, lyricists, and music publishers are typically the owners of the rights of a song or composition. When a songwriter, composer, or lyricist enters into an agreement with a music publisher, the music creator and the publisher usually share ownership of the songs. The publisher then works to maximize the earning power of the songs by placing them with recording artists, in movies and TV shows, in commercials, or videogames, or any other medium or situation where the songs can earn as much in royalties as possible.

In Canada, usually the maker of a sound recording initially owns the master rights. When an artist enters into a recording agreement with a record label, the recordings they produce together are usually owned by the record label. The label then works to maximize the earning power of the sound recording by selling as many copies of it as possible, and/or licensing or distributing it as widely as possible (including in other territories).

When artists working with a record label produce a new song, the original recording will have at least two owners (or sets of owners) – the owner(s) of the song or composition (the song rights) and the owner(s) of the sound recording (the master rights). For example, co-writers TOBi, Alex Goose, and Hannah Vasanth own the rights to their song “Family Matters” (there’s no publisher on the song), but RCA Records owns the master rights to the recording of the song, as performed by TOBi on his album Elements Vol. 1.

Alternatively, if a song is written, performed, and also recorded by one person, without any publisher or record company, then that person gets copyright ownership to both the song (the song rights) and the sound recording of it (the master rights). For example, Julian Taylor both wrote and recorded his song “Wide Awake,” so he owns both the song rights and the master rights (though he owns the master rights through his self-owned record company, Howling Turtle).

Synchronization royalties generate income for copyright-protected music that’s paired or “synced” with visual media. Sync licenses grant the right to use songs or compositions in any visual media – films, television, commercials, videogames, online streaming, music videos, etc. Any use of copyright-protected music in a screen project will require a master license from the owner(s) of the sound recording in order to use that recording, and a sync (synchronization) license from the owner(s) of the song itself, for the use of that song, or even a portion of it. For example, you need both a master and sync license before using the latest track from Loud Luxury in a commercial video.

The song right (to the song or composition) is divided into two kinds of rights: performance and reproduction (or “mechanical”) rights.

Performance rights generate royalties when a song or musical composition that you wrote, or co-wrote, or publish, is played in public (on TV, radio, digital media, streaming media, in background music, live performance, in movies, etc.). SOCAN’s role is to collect and distribute the royalties you’ve rightfully earned when your song is played in public.

The reproduction (or “mechanical”) right is the right to authorize the reproduction of your musical composition or song on various media, including streaming, downloads, CDs, vinyl records, cassettes, DVDs, etc. When copies of your music are made in these media, reproduction royalties are owed to you, and SOCAN ensures that you receive the royalties you’ve earned.

SOCAN can represent both your performance rights, and your reproduction rights – the latter, on almost every type of audio, audio-visual, digital, or physical media in existence. We negotiate licensing agreements on behalf of our members and clients for the use of their works, and ensure that they’re fairly compensated for their extraordinary talent. SOCAN is essential to receiving all the royalties you’ve earned when your music is performed or reproduced around the world.

For more information: https://iconcollective.edu/how-music-royalties-work/