I have a question regarding the default output volume at 0dB in Renoise:
If I play an mp3 in an mp3 player like Vox at 100% volume (so no volume modification is applied), and then load this track into Renoise and play it at 0dB (0dB in the track, 0dB in the master, 0dB in the output), then the track will be much, much quieter in Renoise… Why is this? What am I doing wrong? And if this is on purpose, with what exact dB boost will I achieve exactly the same volume as in the mp3 player (it seems to be something like 5.6dB)?
Oh ^_^! I never realized this option existed (or completely forgot about it)! I always wondered why my songs are always quite quiet :) Thanks for the info!
One more question: if I remove this -6dB headroom and play this mp3 in Renoise, it shows clipping… Why is this? The mp3 is actually an m4a with 16-bit and 44.1kHz… How can this clip at 0dB?
EDIT:
This clipping only appears if the interpolation is set to Cubic or Sinc. No clipping appears in linear mode.
Hm, so a smooth interpolation will require a bit more headroom, especially if the sample is already clipped or has harsh transients, right?
The mp3 is actually an m4a with 16-bit and 44.1kHz… How can this clip at 0dB? (…) EDIT: This clipping only appears if the interpolation is set to Cubic or Sinc. No clipping appears in linear mode.
Resampling will always take place if the waveform’s sample rate does not match the sample rate set in your Renoise audio preferences, or when the waveform is played back at a different pitch/frequency for any other reason.
Are you perhaps running Renoise at 48kHz or 96kHz? The 44.1kHz waveform must be resampled to match your audio output in that case.
Cubic and Sinc interpolation both take multiple adjacent data points from the waveform, and then reconstruct a curve based on those points. One side effect of this reconstruction is that the curve’s peak may occasionally overshoot the maximum amplitude present in the original waveform data. This is why a waveform that is fully normalized to 0dB may overshoot and clip when resampled using one of these methods. (Some older/cheaper DACs in CD players even suffered from a similar problem, where 0dB mastered CDs would clip during playback)
Linear interpolation does not suffer from this problem, because it simply blends from one fixed data point to the next.
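If you want to see the overshoot for yourself, here's a rough Python sketch. It uses a common 4-point Catmull-Rom cubic purely as an illustration (it is not Renoise's actual interpolator), and compares how far the cubic and linear curves go when the surrounding samples are already sitting at full scale:

```python
# Illustration only (not Renoise's actual interpolator): why a 4-point cubic
# interpolator can overshoot a fully normalized signal, while linear cannot.

def linear(y0, y1, t):
    # Straight blend between two neighbouring samples: never exceeds them.
    return y0 + (y1 - y0) * t

def cubic_hermite(ym1, y0, y1, y2, t):
    # Catmull-Rom style 4-point cubic, a common choice for resampling.
    c0 = y0
    c1 = 0.5 * (y1 - ym1)
    c2 = ym1 - 2.5 * y0 + 2.0 * y1 - 0.5 * y2
    c3 = 0.5 * (y2 - ym1) + 1.5 * (y0 - y1)
    return ((c3 * t + c2) * t + c1) * t + c0

# Four samples around a hard, clipped-looking transient, all within [-1, +1]:
samples = [-1.0, 1.0, 1.0, -1.0]

cubic_peak = max(cubic_hermite(*samples, i / 100.0) for i in range(101))
linear_peak = max(linear(samples[1], samples[2], i / 100.0) for i in range(101))

print("cubic peak: ", cubic_peak)    # ~1.25 -> overshoots 0dB and would clip
print("linear peak:", linear_peak)   # 1.0   -> stays within the original range
```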
Personally, I’d recommend that you forget all about this 0dB crap, and always work with a small amount of headroom just to be safe. Even if you master to -1dB or -2dB instead, this is still more than enough to get a loud sound, while providing you with a nice safety buffer against resampling, crappy MP3 encoding, and so on.
Hey dblue, one more question regarding interpolation:
If I play a sample recorded at 48kHz at C-4, with the device output sample rate set to 44.1kHz, where will the interpolation occur? Does Renoise do the interpolation or CoreAudio then? And if it's Renoise, will this interpolation be affected by the instrument's interpolation setting (linear/cubic/sinc)? Or only by the render settings?
Ah, and one more question: VSTis in general do their own interpolation, and Renoise only does the summation here, so the master interpolation settings have no effect on these?
Does Renoise do the interpolation or CoreAudio then?
All resampling/interpolation is done internally by Renoise, all tracks are mixed together, and then the final “master” signal is delivered to your audio driver at the sample rate you’ve set in your Renoise audio preferences. What happens after that point is pretty much beyond our control, but if you have Renoise set to match the native sample rate of your audio interface, then things should be fine, I guess.
will this interpolation be affected by the instrument’s interpolation setting (linear/cubic/sinc)?
Correct. The resampling is dictated by the interpolation method you choose for each sample, and is applied everywhere, during playback or rendering, whenever the frequency of the played sample does not precisely match the sample rate set in your audio preferences (or the sample rate chosen when rendering).
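To illustrate when that kicks in, here's a small Python sketch (purely illustrative, not how Renoise is implemented internally): think of playback as a per-output-sample "step" through the source waveform. Whenever that step isn't exactly 1.0, the chosen interpolation method has to be used:

```python
# Illustration only: the per-output-sample step through the source waveform.
# Whenever the step is not exactly 1.0, the sample must be interpolated
# with the chosen method (linear/cubic/sinc).

def playback_step(sample_rate_hz, output_rate_hz, semitones_from_base=0):
    pitch_ratio = 2.0 ** (semitones_from_base / 12.0)   # equal-tempered transpose
    return (sample_rate_hz / output_rate_hz) * pitch_ratio

# 48kHz sample played at its base note (e.g. C-4) into a 44.1kHz output:
print(playback_step(48000, 44100))        # ~1.088 -> resampling/interpolation needed

# 44.1kHz sample at its base note into a 44.1kHz output:
print(playback_step(44100, 44100))        # exactly 1.0 -> no interpolation needed

# The same 44.1kHz sample transposed up one semitone:
print(playback_step(44100, 44100, 1))     # ~1.059 -> interpolation needed again
```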
Ah, and one more question: VSTis in general do their own interpolation, and Renoise only does the summation here, so the master interpolation settings have no effect on these?
Correct. VSTs do whatever they need to do internally, and then simply deliver an audio stream to Renoise at precisely the same sample rate you’ve set in your audio preferences. No resampling is necessary, because the sample rates are identical. Absolutely no resampling/interpolation is ever applied to the output of a VST instrument or effect within Renoise, because there’s simply no need for it.
If I choose interpolation “precise” in export, will this also trigger the high quality mode of each VSTi, if this is available/supported there? And if I switch to normal, what rendering mode is then suggested to the VSTis? Which rendering mode ensures that the result will sound exactly like it does live? Only realtime mode?
BTW, the manual is missing some info about dithering in the “Render Song to Audio File” section. Also, wouldn't it be most logical if dithering appeared in the export dialogue? Or is dithering also activated/used if my playback device only supports 24-bit integer? EDIT: OK, I read it; it's only applied for 8-bit/16-bit output. So will dithering be applied when exporting to 16-bit? Plugins like L2 from Waves can also dither for 24-bit… Cubase does too, if I remember correctly. Then again, maybe this is only really necessary for classical music…
If I choose interpolation “precise” in export, will this also trigger the high quality mode of each VSTi (…)
No. The interpolation only applies to samples. Nothing else.
Technically, there’s no such “high quality mode” defined by the VST 2 spec. However, if the plugin chooses to query Renoise about its current processing state, then we do let the plugin know that we’re running in either realtime or offline mode, as these basic states are defined/recognised by VST. It’s entirely up to the plugin if it enables some higher quality internal processing during offline mode.
Which rendering mode ensures that the result will sound exactly like it does live? Only realtime mode?
Correct. (Unless you can precisely control what the plugin is doing at all times through some internal options)
will dithering be applied when exporting to 16-bit?
Yes, if dithering is enabled in your audio preferences.
It may indeed be useful to have a dithering on/off switch available in the render dialog as well.
Generally speaking, if you find the dithering process to be very important and you wish to handle it in a specific way, then simply render 32-bit from Renoise and perform your dithering elsewhere, in another third-party tool specially designed for the task.
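For what it's worth, here's a tiny Python sketch of what a typical dither-and-quantise step down to 16-bit looks like. This is plain TPDF dither as an example, not necessarily what Renoise uses internally:

```python
import random

# Example TPDF dither (not Renoise's exact dither): add triangular noise of
# roughly +/-1 LSB before rounding a float sample (-1.0..1.0) down to a
# 16-bit integer, so the quantisation error becomes noise instead of
# correlated distortion on quiet material.

def dither_to_16bit(sample):
    lsb = 1.0 / 32768.0                               # one 16-bit quantisation step
    tpdf_noise = (random.random() - random.random()) * lsb
    quantised = round((sample + tpdf_noise) * 32767.0)
    return max(-32768, min(32767, quantised))         # clamp to the 16-bit range

# Quantise a very quiet fade with and without dither:
quiet_fade = [0.00002 * i for i in range(10)]
print([dither_to_16bit(s) for s in quiet_fade])       # varies run to run due to the dither noise
print([round(s * 32767.0) for s in quiet_fade])       # plain rounding: a hard staircase
```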
It’s the headroom. By default, 6dB is subtracted from the master, to avoid newbs who overdrive everything and then complain about bad sound quality.
Seriously, though. You can change this setting from the Song → Playback Options dialog.
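For reference, here's a quick Python sketch of what that -6dB actually does to the signal level. It's just the standard dB/amplitude conversion, nothing Renoise-specific:

```python
import math

# Standard dB <-> linear amplitude conversion, to show what -6dB headroom does.

def db_to_gain(db):
    return 10.0 ** (db / 20.0)

def gain_to_db(gain):
    return 20.0 * math.log10(gain)

print(db_to_gain(-6.0))   # ~0.501 -> the default headroom roughly halves the amplitude
print(gain_to_db(0.5))    # ~-6.02 dB -> which is why the "missing" loudness measures as roughly 6dB
```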
Hey, I just wanted to correct this for readers: this “newbie headroom” is applied to all tracks, not the master.
EDIT:
And hey, what about allowing positive values in the track headroom (and naming it track amplification instead)? No, seriously, this would be an interesting tool to boost all track values at once. E.g. if you use compression on the tracks, you will get completely different compression.
Btw, wasn’t there a tool with which I could increase all / selected track pre-volumes at once? Or was it Cubase? EDIT: Here it is: http://www.renoise.com/tools/multi-volumes
Shift selecting multiple volume sliders in the mixer would be a dream.
Sorry to leave you hanging in any way, but let’s be fair here: it was only ~12 hours between your posts. No answer from me (or anyone else) in that short time does not mean anything in particular, except that you are perhaps slightly impatient.
Do Renoise’s internal FX have different quality modes for realtime vs. offline rendering, or do they always render the same?
(…)
I assume the R3 internal FX have no special HQ rendering mode, e.g. for reverbs, or oversampling in filters, etc.
Renoise native DSP devices do not do anything especially different “quality”-wise during rendering vs live playback. The idea here is that the devices should sound the same rendered as they did during live playback, with no major surprises to throw you off.
They should also — hopefully — sound good enough during live playback, so that an alternate “high quality” rendering mode is not actually necessary in the first place. This is of course highly subjective, and past conversations do suggest that some people are not happy with the filters (for example). Can’t please everyone, sadly!