Should I keep the dBFS level on vocals relatively constant?
While recording vocals, should I adjust the gain while whispering so the dBFS meter sits at a similar level as when I scream? Should it always sit at, say, about -15 dBFS, or is it fine if whispers are much quieter than screams in the mix?
How much dynamic range is reasonable here? Is there a rule of thumb for that?
When humans sing, they naturally have a wide dynamic range. You want to capture that range, so a finished vocal track should have a good bit of dynamics in it. It is close to impossible to record vocals that don't have widely varying levels. Better singers keep their voice intensity more even and may use mic technique to even it out further, and you can use compression and limiting afterwards to even things out, but you can never keep the level pinned to exactly one spot, and you don't want to.
These days, it is pretty popular to capture all the dynamic range you can get when you record, and then if you want to reduce it for the mix, you would use processing afterwards to do that.
When mixing, there is no answer to the question of what the dynamic range should be. It depends on the mix. So there's no rule of thumb, but you'll never be able to get it to be at exactly one level the whole time. Even if you could do that, that's not music. Music has dynamics.
dBFS, or "decibels relative to full-scale", measures how close to the maximum possible sample value your digital audio is. In other words, 0 dBFS means some peaks of your audio are hitting the numeric limit of what your audio format can represent. So, you never want to get too close to 0 dBFS or you'll start distorting.
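To make that definition concrete, here is a minimal Python sketch (assuming float samples normalised to ±1.0, which is how most DAWs and audio libraries represent audio internally) that computes a buffer's peak level in dBFS:

```python
import math

def peak_dbfs(samples, full_scale=1.0):
    """Peak level in dBFS for float samples normalised to [-1.0, 1.0]."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(peak / full_scale)

# A sine wave peaking at half of full scale sits at about -6 dBFS:
sine = [0.5 * math.sin(2 * math.pi * n / 100) for n in range(100)]
print(round(peak_dbfs(sine), 1))  # prints -6.0
```

A sample at exactly full scale gives 0 dBFS, which is why meters count down from 0 into negative numbers.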
That said, the relationship between dBFS and how "loud" something sounds is somewhat loose. Audio with the same dBFS measurement can sound louder or softer depending on the waveform. So, when you're trying to balance one signal with another in an ear-pleasing way, dBFS isn't the unit you should be using; dBu (relative to some standard level) would probably be the way to go.
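To illustrate how loose that relationship is, compare a sine wave and a square wave with identical peaks: both read the same on a peak dBFS meter, but the square wave carries about 3 dB more energy (RMS) and sounds noticeably louder. A rough Python sketch:

```python
import math

def peak_db(x):
    """Peak level in dB relative to full scale (1.0)."""
    return 20 * math.log10(max(abs(s) for s in x))

def rms_db(x):
    """RMS (average energy) level in dB, a crude proxy for loudness."""
    return 20 * math.log10(math.sqrt(sum(s * s for s in x) / len(x)))

n = 1000
sine = [math.sin(2 * math.pi * k / n) for k in range(n)]
square = [1.0 if s >= 0 else -1.0 for s in sine]

# Both peak at 0 dBFS...
print(round(peak_db(sine), 1), round(peak_db(square), 1))  # prints 0.0 0.0
# ...but the square wave carries ~3 dB more energy, so it sounds louder:
print(round(rms_db(sine), 1), round(rms_db(square), 1))    # prints -3.0 0.0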
And, THAT said, you're asking a pretty subjective question. What do you want your whispering to sound like, relative to your screaming? What else is happening in the mix? There may be times when you want your whispering to sound like God speaking to all mankind; there may be times when you want your screaming to sound like it's coming from a sealed tomb.
I would in most cases recommend sticking to a single gain setting per instrument/voice during tracking. Technically, there is a benefit to increasing the gain when recording a quiet passage: unused headroom means you're wasting SNR. But if you have good preamps and ADCs in your interface (and record at ≥24 bit!), their SNR should be more than enough for almost all recordings anyway. Manually changing the gain is easy to get wrong: you may end up clipping the input on an unexpectedly loud note, which is much worse than slightly more preamp noise than optimal. And even if you get it essentially right, it may not be obvious from the track where you changed the gain, yet such manual "step" changes can stand out surprisingly jarringly in the final mix, and then you need to go back and figure out where exactly the change happened and how to "undo" it.
The proper way to reduce excessive dynamic range is to do it smoothly. This can be done pretty well with volume automation in any modern DAW, but even better and much more simply with an ordinary compressor plugin. A compressor can do more than just bring out a whispered passage over a loud mix; it can also bring out single syllables that would otherwise get swallowed, e.g. between loud plosives. Doing that with automation would be hard, and with manual gain it's basically impossible.
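As an illustrative sketch of what a compressor's gain computer does (not any particular plugin; real compressors add attack/release smoothing and make-up gain, which this deliberately omits), a minimal static version in Python might look like:

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0):
    """Minimal static compressor: attenuate samples above the threshold.

    Above the threshold, every `ratio` dB of input overshoot is reduced
    to 1 dB of output overshoot. Floor of -120 dB stands in for silence.
    """
    out = []
    for s in samples:
        level_db = 20 * math.log10(abs(s)) if s != 0 else -120.0
        if level_db > threshold_db:
            # Gain reduction grows with the overshoot above the threshold.
            gain_db = (threshold_db - level_db) * (1 - 1 / ratio)
            s *= 10 ** (gain_db / 20)
        out.append(s)
    return out
```

With these settings, a quiet sample at 0.05 (about -26 dBFS) passes through untouched, while a full-scale sample (0 dBFS, 20 dB over the threshold) is pulled down by 15 dB, squeezing the loud and quiet passages closer together.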
Many engineers prefer having an analogue hardware compressor before the audio interface. This is like having the compressor automate the input gain instead of the playback volume, so in principle it has the advantage of better SNR. (To put it another way: compressing in the digital domain effectively throws away part of the ADC's SNR, and a hardware compressor at the input circumvents this.) Frankly, though, I don't think this advantage is objectively measurable nowadays. When I set up such a compressor, I do it mostly for the singer's monitor signal: many singers prefer having a compressor there, as it lets them sing more dynamically in the first place because they'll hear themselves well even when singing quietly. And because it's always easy to reduce dynamic range after the fact but much harder to increase it, you should encourage as much dynamics as possible in a studio performance.
(Good singers can do that by themselves, though, by changing their distance to the microphone according to dynamic level. Whether that's better than a compressor is a matter of taste.)
As for how much dynamic range the vocal track should have in the final mix: that depends heavily on the musical setting. In a metal or pop mix, you'll usually need to compress the vocals pretty heavily; in classical, folk, or jazz, much less. Apply as much compression as needed for the vocals to sit nicely in the mix, and no more. Don't worry about absolute numbers; those should be the mastering engineer's concern, if anyone's.