Suggestion To Unify Gain Readouts And Adjustments When Rendering, Etc.

Scenario A:

  • I make a new song and set the master volume to 0dB.
  • I load/create a sample which is then normalised to 0dB.
  • The instrument amplification is also set to 0dB.
  • My “song” contains one single sustained note of this instrument.
  • The track which contains the note is also set to 0dB.

Result during live playback:
The master track shows the output peaking at -6dB (instead of the expected 0dB).

Result after using “render selection to sample”:
The newly generated sample peaks at 0dB.

Result after rendering the song to .wav:
The song .wav peaks at -6dB (instead of the expected 0dB).
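
For what it’s worth, here is my guess at the arithmetic behind Scenario A, sketched out in Python. The -6dB “headroom” value is purely an assumption on my part, but a hidden fixed offset applied during live playback and song rendering, yet not by “render selection to sample”, would reproduce exactly these numbers:

```python
# Hypothetical sketch of Scenario A, assuming a hidden fixed headroom offset.
# None of these names or values come from Renoise itself.

sample_db     = 0.0   # normalised sample peak
instrument_db = 0.0   # instrument amplification
track_db      = 0.0   # track level
master_db     = 0.0   # master volume
headroom_db   = -6.0  # the suspected hidden offset (my assumption)

chain_db = sample_db + instrument_db + track_db + master_db   # 0.0

# Live playback and the rendered song .wav appear to include the offset:
print(chain_db + headroom_db)   # -6.0
# "render selection to sample" appears not to (or compensates for it):
print(chain_db)                 # 0.0
```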

Scenario B:
Same as above, except…

  • The instrument amplification is now set to +6dB.
  • I have also enabled the Auto Gain feature on the master track.

Result during live playback:
The master track shows the output peaking at 0dB (instead of the expected +6dB).
Auto Gain does not detect that the output is now in fact boosted by +6dB, so the master volume is not automatically reduced to compensate.

Result after using “render selection to sample”:
The newly generated sample is clipped and distorted (instead of being fine, as I would expect, since Auto Gain did not detect anything too loud).

Result after rendering the song to .wav:
The song .wav peaks at 0dB.
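
Carrying the same hypothetical sketch over to Scenario B (again, the -6dB offset is just my assumption):

```python
# Same hypothetical sketch, extended to Scenario B.

def db_to_linear(db: float) -> float:
    """Convert a dB gain to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

instrument_db = 6.0    # the boosted instrument amplification
headroom_db   = -6.0   # the same assumed hidden offset

playback_peak_db = instrument_db + headroom_db   # 0.0 -> the meter shows 0dB,
                                                 # so Auto Gain sees nothing to reduce
rendered_sample_peak_db = instrument_db          # +6.0 -> over full scale

clipped = db_to_linear(rendered_sample_peak_db) > 1.0
print(playback_peak_db, rendered_sample_peak_db, clipped)   # 0.0 6.0 True
```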

Now… I do of course understand some of the reasons behind what is happening here, and that things have been changed recently due to user requests around “render selection to sample”, etc., but I think there’s a bit of a disconnect in the logic, and that it could really confuse people who are not expecting it. The peak detection / Auto Gain certainly seems buggy, or there is 6dB of gain not being compensated for somewhere in the system.

Anyway, it would be nice if all of this could be unified a little better, so that “render selection to sample” and song export produce identical results (for better or worse)?


I think the only way to clean this up is by introducing a global headroom factor, which will be -6dB by default. If you don’t want the headroom and know what you are doing, you could set it to 0dB and save the song as a template.

We should then also roll back the render selection changes so that it renders at the headroom level and does not add it magically back.
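
To make the proposal a bit more concrete, here is a minimal sketch of how I imagine a single song-wide headroom factor behaving. The names and the default value are mine, not an actual Renoise API:

```python
# Minimal sketch of the proposal: one song-wide headroom factor, applied
# identically to every output path. All names and defaults are hypothetical.

class SongSettings:
    def __init__(self, headroom_db: float = -6.0):
        # Stored with the song; set it to 0dB and save the song as a
        # template if you don't want any headroom.
        self.headroom_db = headroom_db

def output_peak_db(chain_db: float, settings: SongSettings) -> float:
    """Identical gain maths for live playback, song export and
    'render selection to sample'; no path adds the headroom back."""
    return chain_db + settings.headroom_db

print(output_peak_db(0.0, SongSettings()))      # -6.0 everywhere, consistently
print(output_peak_db(0.0, SongSettings(0.0)))   #  0.0 if headroom is disabled
```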

I understand the need to have a buffer zone, so that new users (who may tend to push the levels really high) are not constantly frustrated when their music clips like hell, but I personally think that introducing new concepts to handle this is not the correct approach. Attempting to create these additional safety nets to protect people from the evils of clipping ultimately just makes the issue more confusing and unclear.

In my opinion, it seems more logical that whatever master level you use should be exactly what is heard/rendered, regardless of whether it results in clipping or not. The default master level should then simply be set to a sensible starting point such as -6dB (or probably even lower, just to be really safe).

I think that by simplifying the whole process, it would actually help people to learn and get a much better understanding of what they are doing with their sounds, and to realise one very important and obvious fact: if you have a lot of loud instruments playing and you crank it up to 11, then it’s gonna sound like crap!

That would be my “tough love” approach anyway. :D

Really though, my main point in all of this is the confusing disconnect in logic: the levels you set in Renoise are not always what you actually get in your end results. That, to me, is the most important thing to fix here.

If a headroom factor is introduced, then I’d like to make one important request: the gain readouts in Renoise should actually change to reflect this factor. In other words, if the headroom is set to -6dB, then the very upper limit of the master volume level should also display -6dB, etc. It should be very clear that such a feature is enabled and what it is currently set to, so that it doesn’t create yet another confusing disconnection between visible settings and rendered results.
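
Just to illustrate what I mean about the readouts, here is a tiny hypothetical sketch (none of these names exist in Renoise):

```python
# Hypothetical readout logic: if a headroom factor is active, the visible
# gain values should already include it, so the upper limit of the master
# volume readout moves from 0dB down to the headroom value.

def master_readout_top_db(headroom_db: float, fader_top_db: float = 0.0) -> float:
    return fader_top_db + headroom_db

print(master_readout_top_db(-6.0))   # -6.0: upper limit shown with the default headroom
print(master_readout_top_db(0.0))    #  0.0: headroom disabled, nothing changes
```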

What we need is some mixing and mastering basics in the manual, to at least help users understand how mixing and mastering can be done to get a decent sound out of the mix without messing up the audio spectrum.
This should be real monkey talk, with lots of self-explanatory pictures.

I’d like to help in some way if possible on this subject. It really bugs me to see Renoisers using the softclip function or dither inappropriately. Also some basics on master chain usage and monitoring practices. I’m good with words, etc., but not good with screenshots.

Something like a ‘best practice’ workflow?

Hehe, to avoid clipping I used to have a default track with channels set to -12dB. I know it can sound quiet, but it’s better than moving your master volume up and down. I suggest the default volume for new tracks be -6 or -12dB. Even better would be if this could be user adjustable, along with a correct master view.
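
A rough back-of-the-envelope on why -12dB per track is not a bad starting point, assuming the simplified worst case where the peaks of several tracks line up:

```python
# Simplified worst case: N tracks whose peaks happen to line up add their
# linear amplitudes. At -12dB per track, roughly four such tracks reach
# full scale; at 0dB per track, summing them clips badly.

import math

def db_to_linear(db: float) -> float:
    return 10.0 ** (db / 20.0)

def worst_case_sum_db(n_tracks: int, per_track_db: float) -> float:
    return 20.0 * math.log10(n_tracks * db_to_linear(per_track_db))

print(worst_case_sum_db(4, -12.0))   # ~0.0dB: right at the edge
print(worst_case_sum_db(4, 0.0))     # ~+12.0dB: clipping hard
```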

If you provide the words, i’ll provide the screenshots.

Ok you’re on vV. But it’ll take me a week or two… Hey I might use gdocs so the writing can be shared…

Ok I’m making a document on Google Documents called:
Renoise Mixing and Mastering - A Best Practice Model

If anyone has gmail and would like to help with editing/contributions let me know.

Ahm. Wait. We should first decide how to continue with this problem, shouldn’t we? Give me some time to answer dblue’s reply first, please…

Yes, it’s a bit of a tennis match, no offense intended. Whenever I “render selection to sample” now, the process for me is:
• Master Track: -6dB
• Render Selection to Sample
• Rename and trim/cut if necessary
• Sample Properties: +6dB
• Master Track: back to 0dB and continue working
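
In other words, the workaround is nothing more than a manual round trip that a unified headroom setting would make unnecessary (using the ±6dB figures from my steps above):

```python
# The workaround is just a manual round trip: -6dB at the master before
# rendering, then +6dB back on the resulting sample, for a net 0dB change.

master_offset_db = -6.0
sample_compensation_db = 6.0

net_db = master_offset_db + sample_compensation_db
print(net_db)   # 0.0 -> the sample ends up back at the level I actually wanted
```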

Go ahead and take your time, taktik. It’ll take me weeks to properly draft something. I’m a slow, perfectionist writer. Whatever the end result is, I’ll write something that represents it.

Hiya,

I’m quite sure this thread was created at least partially as a result of my question here. Thing is, I still haven’t received a proper explanation for my issue.

Simply put, if the VST itself doesn’t create clipping, how come the sample rendered from it does? I find no logic in that at all… or am I being naïve?