I first noticed this so-called advice touted by producers on social media, as if it were some kind of gospel for the New Age of the Music Industry. The Loudness Wars are over! Spotify is the new king, and in their great wisdom THEY have decided that all music submitted to their platform will be turned down to -14 LUFS. The gospel of the New Age then continues: if you are a heretic and commit the cardinal sin of mastering above this holy number, you will be punished by the Spotify gods and turned back down to where it ought to be. -14 LUFS. Therefore, you waste the potential dynamic range of your master if you even attempt to go above this magical level of eternal bliss. This is horribly misguided advice, by the way, if you didn't get that from my tone already.
Of course, I am exaggerating and being sarcastic here, but this myth seems to be deeply rooted among producers who just don't know any better. It's not too far off base, to be fair, but people are often very misguided about what all of this actually means and how you should approach your masters in relation to this practice.
I embarked on a little journey and studied this matter. Rather than parrot the gospel, I wanted to find some facts behind this myth, think for myself, analyze the matter and then shed a little light of my own upon the whole thing, from my perspective. I mean not to impose my own views upon you, but rather to inform you of my findings and thoughts and, most of all, to encourage you to dig a little deeper and think about these things yourself.
First of all, once submitted, Spotify handles your audio file in the following way:
- Spotify converts your audio material to WAV 44.1 kHz (keeping bit depth).
This right here shows you that there is no reason to master audio files for Spotify (or most streaming services, for that matter) at higher sample rates than what they will downsample to anyway. The most you can achieve by running higher rates is aliasing in your audio material. If you don’t know what aliasing means in audio, this video explains the concept very well.
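To make the aliasing risk concrete, here is a minimal sketch of where a tone above the Nyquist frequency "folds" to after sampling, assuming the worst case of no anti-aliasing filter. The function name and the example frequencies are my own for illustration; a good resampler will filter this content out before it folds.

```python
def alias_frequency(f_hz: float, sample_rate_hz: float) -> float:
    """Return the frequency a pure tone appears at after ideal sampling
    at sample_rate_hz with NO anti-alias filtering (worst case)."""
    nyquist = sample_rate_hz / 2.0
    f = f_hz % sample_rate_hz              # sampling is periodic in the sample rate
    return f if f <= nyquist else sample_rate_hz - f  # mirror around Nyquist

# A 30 kHz component (inaudible, but easily present in a 96 kHz master)
# folds down into the audible range at 44.1 kHz if it isn't filtered out:
print(alias_frequency(30_000.0, 44_100.0))  # 14100.0
```

This is exactly why converting a high-rate master down to 44.1 kHz yourself, with a resampler you trust and can audition, is safer than leaving it to whatever the ingestion pipeline does.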
- Spotify transcodes the file into the following delivery formats for the quality options available to listeners:
– Ogg/Vorbis [96, 160, 320 kbps]
– AAC [128, 256 kbps]
– HE-AACv2 [24 kbps]
The recommendation to limit true peak (TP) to -1.0 dB makes sense for the more heavily compressed delivery formats. I wouldn’t be too concerned about the higher-bitrate Ogg/Vorbis options, though. Keep in mind that some distortion happens with lossy formats regardless; the aim is to minimize the perceived distortion.
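For a sense of what that -1.0 dBTP ceiling actually buys you: lossy codecs can reconstruct inter-sample peaks slightly above the digital peaks of the source, so the ceiling leaves a margin for that overshoot. A quick conversion (my own helper, just the standard dB formula) shows how much linear headroom it corresponds to:

```python
def db_to_linear(db: float) -> float:
    """Convert a dB value to a linear amplitude ratio (20*log10 convention)."""
    return 10 ** (db / 20)

# A -1.0 dBTP ceiling keeps the reconstructed waveform at ~89% of full scale,
# leaving roughly 11% of linear headroom for codec overshoot:
print(round(db_to_linear(-1.0), 3))  # 0.891
```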
- Spotify calculates the loudness using ReplayGain.
First of all, what is “ReplayGain”?
Wikipedia says the following of the concept:
“ReplayGain is a proposed standard published by David Robinson in 2001 to measure and normalize the perceived loudness of audio in computer audio formats such as MP3 and Ogg Vorbis. It allows media players to normalize loudness for individual tracks or albums. This avoids the common problem of having to manually adjust volume levels between tracks when playing audio files from albums that have been mastered at different loudness levels.”
Then how is this loudness calculated by ReplayGain? I took a look into it and, like everything else so far, this isn’t the simple, cut-and-dried process you might want it to be, namely a target number that it simply aims for. It is a three-part process that utilizes both psychoacoustics and “statistical processing.”
The first part of the process, the “loudness filter,” applies an inverted Fletcher–Munson curve to counter the imbalance between loudness as perceived by the human ear and the physical signal. This part of the process also rolls off frequencies below 150 Hz.
The second part of the process chops the entire track into separate 50 ms blocks of samples. The RMS (root mean square) value of each of these blocks is then calculated and stored.
The third and final part sorts these blocks from quietest to loudest on a scale of 1% to 100%, with 1% being the softest, gentlest 50 ms of the track and 100% being the loudest. The block at the 95% mark is then chosen to represent the loudness of the entire file.
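The blocking-and-percentile part of the process above is simple enough to sketch in a few lines. This is a toy version for illustration only: it skips the loudness filter from the first step entirely, and the function name and defaults are mine, not from the ReplayGain spec.

```python
import math

def replaygain_style_loudness(samples, sample_rate=44_100,
                              block_ms=50, percentile=0.95):
    """Toy sketch of the statistical part of ReplayGain (loudness filter omitted):
    chop into 50 ms blocks, take the RMS of each, pick the 95th-percentile block."""
    block_len = int(sample_rate * block_ms / 1000)
    rms_values = []
    for start in range(0, len(samples) - block_len + 1, block_len):
        block = samples[start:start + block_len]
        rms_values.append(math.sqrt(sum(s * s for s in block) / block_len))
    rms_values.sort()  # quietest to loudest
    index = min(int(percentile * len(rms_values)), len(rms_values) - 1)
    return rms_values[index]

# One second of a constant 0.5-amplitude signal: every block has RMS 0.5,
# so the 95th-percentile representative is 0.5 as well.
print(replaygain_style_loudness([0.5] * 44_100))  # 0.5
```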
ReplayGain adapts the SMPTE calibration for music playback. Under ReplayGain, audio is played back so that its loudness, as measured by the procedure described above, matches the loudness of a pink noise signal with an RMS level of -14 dB relative to a full-scale sinusoid.
Remember that RMS isn’t LUFS. RMS (root mean square) is the average power of your audio signal, while LUFS stands for Loudness Units relative to Full Scale, a perceptual loudness measure. These are not the same thing.
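A quick worked example of why the distinction matters: RMS is a pure average-power measurement, blind to frequency content. A full-scale sine wave peaks at 0 dBFS but measures about -3 dB RMS, while its LUFS value would additionally depend on its frequency, because LUFS (per ITU-R BS.1770) applies K-weighting and gating before averaging.

```python
import math

# RMS of one full cycle of a 0 dBFS (full-scale) sine wave.
n = 1000
sine = [math.sin(2 * math.pi * k / n) for k in range(n)]
rms = math.sqrt(sum(s * s for s in sine) / n)

# A full-scale sine has RMS = 1/sqrt(2), i.e. about -3 dB relative to full scale:
print(round(20 * math.log10(rms), 2))  # -3.01
```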
Keep in mind that in this kind of statistical processing, the block at value 95 (on a scale of 0 to 100) is used as the reference for how ReplayGain applies loudness normalization to the entire track. Theoretically, an individual track can have its louder parts play louder if there is more quiet material within the audio, since statistically this pushes the absolutely loudest parts further into the top 4–5% ignored by the algorithm. I’ve seen one example of this done with simple pink noise, where extending the quieter part of the noise by 2 seconds within a 100-second clip made a whopping 7.4 dB difference to the algorithm’s result. Granted, this was done with pink noise, so take it for what it’s worth. It’s a proof of concept though.
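The threshold behavior behind that pink-noise demonstration is easy to reproduce with toy numbers. In the sketch below (my own made-up RMS levels and segment lengths, not the actual signal from that example), a clip sits just below the point where quiet material makes up 95% of its blocks; extending the quiet section by only 2 seconds tips the 95th-percentile statistic from a loud block to a quiet one, and the measured loudness jumps by 20 dB.

```python
import math

def percentile95(rms_blocks):
    """Pick the 95th-percentile block from a list of per-block RMS values."""
    blocks = sorted(rms_blocks)
    return blocks[min(int(0.95 * len(blocks)), len(blocks) - 1)]

loud, quiet = 0.5, 0.05  # hypothetical block RMS levels (20 dB apart)

# 100 s at 50 ms/block = 2000 blocks: a 6 s loud burst, then quiet material.
clip_a = [loud] * 120 + [quiet] * 1880
# Same length, but the quiet part extended by 2 s at the burst's expense.
clip_b = [loud] * 80 + [quiet] * 1920

for clip in (clip_a, clip_b):
    print(round(20 * math.log10(percentile95(clip)), 1))
# -6.0   (clip_a: the 95% mark still lands on a loud block)
# -26.0  (clip_b: quiet blocks now exceed 95%, so the statistic collapses)
```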
But wait! That’s not the whole story. Spotify Premium users can choose if they want to turn the normalization on or not.
As of today when this article is written, Spotify states that premium users can choose between the following volume normalization levels in their app settings:
Loud – equalling ca -11 dB LUFS (+6 dB gain multiplied to ReplayGain)
Normal (default) – equalling ca -14 dB LUFS (+3 dB gain multiplied to ReplayGain)
Quiet – equalling ca -23 dB LUFS (-5 dB gain multiplied to ReplayGain)
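The arithmetic behind those presets can be sketched as a simple difference between the preset target and the track's measured loudness. This is a hypothetical illustration of the math only; Spotify's actual player logic isn't public, and in practice it also accounts for peak headroom (applying positive gain only with a limiter, per their published guidelines).

```python
# Hypothetical preset targets, matching the approximate figures above.
TARGETS_LUFS = {"loud": -11.0, "normal": -14.0, "quiet": -23.0}

def playback_gain_db(track_lufs: float, preset: str = "normal") -> float:
    """Gain the player would apply to bring a track to the preset's target."""
    return TARGETS_LUFS[preset] - track_lufs

# A hot -9 LUFS master gets turned DOWN on every preset:
print(playback_gain_db(-9.0, "normal"))   # -5.0
print(playback_gain_db(-9.0, "loud"))     # -2.0
# A quiet -20 LUFS master would need +6 dB on "normal":
print(playback_gain_db(-20.0, "normal"))  # 6.0
```

Note what the first two lines of output imply: even on the "Loud" setting, a master pushed well past the target is simply turned down, which is the real argument against chasing extreme loudness for streaming.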
As ReplayGain is an option within the Spotify player, it doesn’t first turn the audio material down to the -14 RMS value derived from the process described above; instead it adapts the level based on user preference and sets the playback volume accordingly, just as a normal volume knob on your amplifier would. This does not introduce distortion to the audio material.
SO THERE WE HAVE IT.
What should we keep in mind after learning this?
- Try not to stress about the loudness of your master. Master to the level that fits the musical style and the artistic expression of the song. Various styles work in various ways, and there is no magic number for any of them. Unless you are making tracks to be played at clubs etc., you don’t need them to play at super loud levels.
- Think of loudness optimization as finding a good balance between the versatility of the track across various media and how the loudness, or lack of it, affects the final track.
- Don’t simply listen to me about this issue. Think for yourself. Don’t let anyone dictate sets of rules you shouldn’t ever break, without actually studying them yourself and thinking for yourself about the matter. This applies to much more than simply mastering.
- Don’t ignore what Spotify and other streaming platforms are doing to the tracks you send to them. It is useful to know what the process is and their recommendations have value as a reference for your work.
- But I think it is important to keep in mind that your distributor typically sends the same master you deliver to various other streaming platforms AND digital stores. It is up to you which platform or digital service you’d like to prioritize, if you are going to optimize the master for one of them rather than aim for versatility.
Article written by:
August 10, 2020