Sample-rate & bit depth noobisms

For years I’ve been using the 44100 Hz setting in the audio tab of Renoise’s preferences and life is good. However, there are questions.

This setting is just the playback resolution, right? It doesn’t affect the quality when exporting a song render?

For example, if you’ve set the playback engine to 44100 Hz, have used solely 48000 Hz 24-bit samples in the song, and export the song at a sample rate of 48000 Hz in the ‘Render to Disk’ dialog, will the render sound identical to if I had set the playback engine to 48000 Hz and rendered at 48000 Hz?

Also, is there a benefit to rendering at a higher sample rate & bit depth setting in the ‘Render to Disk’ dialog, even if you’ve only used 16-bit 44100 Hz samples in the project?

I guess I’m looking for an upgrade in sound quality; the CPU limitations that forced working at 44100 Hz aren’t there anymore. I’m just not sure what settings make sense when using different rates in the same song.

I probably should a/b this myself lol.

How do you guys work with samples of different bit depths & sample rates in the same project, playback resolution vs render-to-disk settings?

The sample rate set in the audio driver has nothing to do with the ‘Render to Disk’ sample rate. The audio driver’s sample rate is just for realtime playback. If you render to disk at a higher quality than the playback settings, then you get a rendered audio file at exactly that higher quality.

There’s nothing wrong with exporting at a higher quality than the samples you used have, as long as you don’t downsample the exported file later to lower sample rates and/or bit depths.
But if your samples all have e.g. 16 bit and 44100 Hz, then it is better to render with the same bit depth and sample rate, because rendering with higher settings than the samples use can affect their quality when you downsample the exported audio file later. Exporting at higher rates mainly has an advantage if you (also) use VST plugins, because a VST synth (or a VST effect) processes the sound in realtime and can have better quality at higher rates. But normally 44100 Hz and 16 bit is enough. Higher rates are only necessary if you plan professional mastering for the file, because the sensitive mastering processors of professional studios can work better with higher rates.
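To make the “downsampling later can hurt” point concrete, here is a small Python sketch (using NumPy; the crude linear-interpolation converter is my own stand-in, real converters in DAWs and tools like SoX do far better) that resamples a 5 kHz tone from 44.1 kHz to 48 kHz and back, then measures the error the round trip leaves behind:

```python
import numpy as np

sr_a, sr_b = 44100, 48000
t_a = np.arange(sr_a) / sr_a          # one second of time stamps at 44.1 kHz
x = np.sin(2 * np.pi * 5000 * t_a)    # a 5 kHz test tone

# Round trip through a deliberately simple converter (linear interpolation).
t_b = np.arange(sr_b) / sr_b
up = np.interp(t_b, t_a, x)           # 44.1 kHz -> 48 kHz
back = np.interp(t_a, t_b, up)        # 48 kHz -> 44.1 kHz

err = np.sqrt(np.mean((back - x) ** 2))
print(f"round-trip RMS error: {err:.2e}")
```

A high-quality sinc-based converter keeps this error far smaller, but it never quite reaches zero; that residue is the reason behind the advice above.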

About listening and rendering sample rates: you are right, and your choices make sense.

Moreover, you can set the rendering’s interpolation quality to ‘precise’, so sample rate conversion for samples that are not at the same rate as the render will always be the best possible, regardless of the interpolation settings used while playing the song (which are set per sample, in the sampler tab, I think; could someone please confirm this?).

Note that your samples’ rates matching your playing/rendering rates only matters if you don’t pitch your samples, because if you do, they are not read at the same rate they were recorded anyway. Even when they match, this matters very little, since today’s interpolation algorithms are really good.
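As a rough illustration of how small the interpolation error can be, here is a NumPy sketch (plain linear interpolation, chosen only because it is easy to write down; Renoise’s cubic and sinc interpolators do better still) that pitches a 1 kHz sine up by a factor of 1.5 and compares the interpolated read-out against the exact continuous sine:

```python
import numpy as np

sr = 44100
idx = np.arange(sr)
tone = np.sin(2 * np.pi * 1000 * idx / sr)   # a 1 kHz sine, sampled at 44.1 kHz

# Pitch up by a factor of 1.5: read the sample faster than it was recorded.
read_pos = np.arange(0, sr - 1, 1.5)
linear = np.interp(read_pos, idx, tone)      # linear interpolation between samples

# Ground truth: the continuous sine evaluated at the same read positions.
exact = np.sin(2 * np.pi * 1000 * read_pos / sr)

err = np.sqrt(np.mean((linear - exact) ** 2))
print(f"linear interpolation RMS error: {err:.2e}")
```

Even this crude method stays well under a thousandth of full scale for a mid-frequency tone; errors grow toward the top of the spectrum, which is where better interpolators earn their keep.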

Note that some VST plugins sound better in 96k than in 48k. So this could justify an extra step: render in 96k and then convert the rendered file to 48k. It might sound a tiny bit better than rendering directly in 48k. At worst, it won’t make any difference.

Now about bit depth: more is better, but matching doesn’t matter. At all.

The only bad thing that could happen with bit depth is if you had a bottleneck between your source and your final destination, but in reality, it’s rather the opposite: modern DAWs always mix in very high bit depths (like 32 or 64). I don’t know what bit depth Renoise uses for mixing exactly, but it’s certainly enough for you not to worry about it.
Have all your samples in at least 16 bits (24 bits is better, 32 bits is probably overkill). While working, listen at whatever bit depth your soundcard supports. Export your song in 24 bits, and release your final master in either 16 or 24 bits (again, 24 is better, though few people are capable of, or equipped for, hearing the difference. And 32 bits might cause more compatibility problems than increases in sound quality).
Bit depth will affect file sizes, but I don’t think it will affect CPU load (since the mixing is done at high bit depth anyway).
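The usual rule of thumb is roughly 6 dB of signal-to-noise per bit. A quick NumPy check of that (the 997 Hz tone and the simple round-to-nearest quantizer are illustration choices of mine, not anything Renoise-specific):

```python
import numpy as np

def quantize(x, bits):
    # Round to the nearest step of a signed integer grid with `bits` bits.
    scale = 2 ** (bits - 1)
    return np.round(x * scale) / scale

sr = 48000
t = np.arange(sr) / sr
tone = 0.9 * np.sin(2 * np.pi * 997 * t)   # a near-full-scale test tone

snrs = {}
for bits in (16, 24):
    noise = quantize(tone, bits) - tone
    snrs[bits] = 10 * np.log10(np.mean(tone ** 2) / np.mean(noise ** 2))
    print(f"{bits}-bit SNR: {snrs[bits]:.1f} dB")
```

Both noise floors sit far below anything audible in a normal mix, which is why *matching* bit depths doesn’t matter; headroom for later processing is the real argument for 24 bits.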

Now to be clear: all this makes only a very tiny bit of difference. Especially the parts about sampling rate.

It is mostly about oversampling. And you need to look separately at Renoise instruments (i.e. the Renoise realtime sampler engine) vs. VSTis, which provide their own engines. If you have a quite old VSTi, it may not use oversampling (e.g. play NI Pro-53 at very high notes and you will hear aliasing noise, which I actually like). If you then double the sample rate, the VSTi will render as if it were using 2x oversampling, since the time resolution is doubled during the actual calculation, and the result is resampled back down afterwards, outside the synth, with plenty of oversampling.
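That mechanism can be sketched numerically. In the NumPy toy below, a naive non-band-limited sawtooth stands in for the “old VSTi” and an ideal FFT brickwall stands in for the final resampler (both are my simplifications): the same oscillator is rendered at 48 kHz directly, and at 96 kHz followed by 2:1 downsampling, then each result is compared against a properly band-limited sawtooth.

```python
import numpy as np

sr = 48000
f0 = 6001.0  # high enough that a naive sawtooth aliases strongly

def naive_saw(rate, length):
    # Trivial oscillator with no band-limiting, like an old non-oversampling synth.
    phase = (np.arange(length) * f0 / rate) % 1.0
    return 2.0 * phase - 1.0

def downsample_2x(x):
    # Ideal 2:1 decimation: FFT brickwall at the new Nyquist, then shorten.
    spec = np.fft.rfft(x)
    new_n = len(x) // 2
    return np.fft.irfft(spec[:new_n // 2 + 1], n=new_n) * 0.5

def bandlimited_saw(rate, length):
    # Reference: additive sawtooth using only the harmonics below Nyquist.
    t = np.arange(length) / rate
    out = np.zeros(length)
    k = 1
    while k * f0 < rate / 2:
        out -= (2 / np.pi) * np.sin(2 * np.pi * k * f0 * t) / k
        k += 1
    return out

ref = bandlimited_saw(sr, sr)                           # the "clean" target
direct = naive_saw(sr, sr)                              # rendered straight at 48 kHz
oversampled = downsample_2x(naive_saw(2 * sr, 2 * sr))  # rendered at 96 kHz first

err_direct = np.sqrt(np.mean((direct - ref) ** 2))
err_over = np.sqrt(np.mean((oversampled - ref) ** 2))
print(f"aliasing error, direct 48 kHz render:   {err_direct:.3f}")
print(f"aliasing error, 96 kHz then downsample: {err_over:.3f}")
```

The oversampled render lands measurably closer to the band-limited ideal: the aliases that the doubled rate pushed above 24 kHz simply get cut off on the way back down.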

If you look at Renoise instruments, it may behave differently. I think dblue once stated that a render in realtime mode should sound exactly like realtime playback (not sure though; I sometimes imagine slight differences, but that could be a VSTi instead), while the other, offline modes also try to optimize the resampling quality of the samples, so they can sound slightly different (usually a bit brighter and more precise at the high end). If you used drum samples sampled and played at 44.1 kHz and now switch to 48 kHz output, it could even result in “worse mathematical quality”, since now the sample needs to be resampled instead of just being played in its original state. Depending on your sample’s resample setting, the sound can change more or less drastically. In the scenario vice versa, like you described it, you were already used to the resampled sound of your drum samples, so switching to 48 kHz may result in a somewhat duller yet more precise and less noisy sound.

A modern VSTi usually provides oversampling and other techniques, so I don’t think there will be much audible difference when switching sample rates, as long as you are not going below 44.1 kHz.

If you composed your song at 44.1 kHz, I would also render it at 44.1 kHz, since it is meant that way. 48 kHz usually sounds a tiny bit brighter, I guess due to more precise resampling (e.g. compare Renoise cubic vs. sinc on a hi-hat not played at its sampled rate/output rate). I also think people are still used to the 44.1 kHz sound; nothing wrong with it. I normally use 48 kHz btw, mainly because my hardware also uses that as its standard. So do a lot of streaming portals, too. There is less resampling involved if you use 48 kHz.

Even if you only used 44.1 kHz/16-bit samples, you will benefit greatly from a 24-bit render, because you usually applied fx and volume modulation, and you also replayed samples at other than their sampled rate, so the resampling engine was used.
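A tiny NumPy sketch of that point: take a 16-bit source, apply a fader gain (0.3 here, an arbitrary value of mine; any fx would do), and compare the noise added by storing the processed result back at 16 bits versus 24 bits:

```python
import numpy as np

def quantize(x, bits):
    # Round to the nearest step of a signed integer grid with `bits` bits.
    scale = 2 ** (bits - 1)
    return np.round(x * scale) / scale

sr = 44100
t = np.arange(sr) / sr
sample16 = quantize(np.sin(2 * np.pi * 440 * t), 16)  # a 16-bit source sample

mixed = sample16 * 0.3   # any gain/fx creates values between the 16-bit steps

rms16 = np.sqrt(np.mean((mixed - quantize(mixed, 16)) ** 2))
rms24 = np.sqrt(np.mean((mixed - quantize(mixed, 24)) ** 2))
print(f"extra noise of a 16-bit render: {rms16:.2e}")
print(f"extra noise of a 24-bit render: {rms24:.2e}")
```

The 24-bit file keeps the processed signal roughly 48 dB (8 bits’ worth) closer to what the mixer actually computed, even though the sources were only 16 bit.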

Some older synths and fx have a buggy implementation, so switching the sample rate will also affect pitch.
If you want a 1:1 render of your song, you can use a VST recorder like Melda MRecorder (free).

I couldn’t hear any quality problem in your productions btw.

Oversampling will not necessarily sound “better”; it will sound “less noisy”. Noise can be beautiful, though. Resampling obviously sounds different in Renoise, but I am also not sure you can say it sounds “worse” than the original sample rate.

I work at 44.1 or 48, and render at 192k. The result is then converted to 44.1 or so. High rendering rates are like oversampling everything…

There is a difference in quality: the very high-res render is like oversampling everything and using better interpolation. It will be audible in good listening environments, i.e. high-grade headphones etc. The highs will be much more clear and defined, and the depth will be much more apparent. A low system rate can result in coarse, gritty, and unclear highs.

Ofc there will be a difference in sound. I find the translation is mostly OK for me, but when EQing the highs one has to learn a bit to anticipate the additional clarity. One could as well just render a pattern and a/b the results of the different rendering rates to see how drastic the changes are. Sometimes it is a surprise for me, but until now it has been a rather nice surprise. As things have been balanced before in the mix, the balance will still be there, just on a slightly different scale. If you render the mix in hi-res you can do the mastering afterwards in hi-res too, so you could correct any unwanted shift of overall balance in the highs at the mastering stage, i.e. if the highs are too bright after rendering in hi-res.

Even when converting to 44.1 afterwards, rendering at a higher quality will give a better result, because not only the final result is oversampled, but also all the sample manipulation, all the synths and effects. There will be no or only a little aliasing, and the post-conversion to 44.1 will not bring the aliasing back.

As for using high-res samples: well, the better the sample quality, the better the result. Samples make sense in hi-res because all the manipulations done to them (pitching, resampling) will produce fewer artefacts, even when working with a low system rate like 44.1. This cannot be fully related to the main system sample rate; it is rather an additional quality factor imho.

Interesting, I just gave it a try between 192 and 44.1 and I think I can hear a difference.

Thanks for all the insights, feedback.

Haven’t had time to make music recently, but I have set the playback engine sample rate to 48000 Hz and will try different settings when exporting. The thing I’m curious about: once you have exported a 48000 Hz 32-bit mixdown of a song and use multiband limiting / compression / dithering in an external wave editor for additional editing, what would be the best settings (dithering/quantize settings) for ultimately converting to a 44100 Hz 16-bit sound file (16 bit as most uploading services seem to require this(?))?
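On the dithering question: plain TPDF dither applied right before the 16-bit rounding is the standard answer. A small NumPy demo of why it matters (the 440 Hz tone at under one 16-bit step is an extreme test signal I chose to make the effect obvious, e.g. a reverb tail fading out):

```python
import numpy as np

rng = np.random.default_rng(1)
sr = 44100
t = np.arange(sr) / sr
lsb = 1.0 / 2 ** 15                      # one 16-bit quantization step
ref = np.sin(2 * np.pi * 440 * t)
tone = 0.4 * lsb * ref                   # a tone quieter than one step

def to_16bit(x, dither=False):
    if dither:
        # TPDF dither: sum of two uniform noises, 2 LSB peak-to-peak.
        x = x + (rng.uniform(-0.5, 0.5, len(x)) + rng.uniform(-0.5, 0.5, len(x))) * lsb
    return np.round(x / lsb) * lsb       # snap to the 16-bit grid

def tone_content(x):
    # Fraction of the original tone that survives quantization.
    return np.dot(x, ref) / np.dot(tone, ref)

tc_plain = tone_content(to_16bit(tone))
tc_dith = tone_content(to_16bit(tone, dither=True))
print(f"undithered: {tc_plain:.2f} of the tone left")
print(f"dithered:   {tc_dith:.2f} of the tone left")
```

Without dither, the sub-LSB tone is rounded away completely (and slightly louder material turns into harmonic distortion); with dither it survives, buried in a benign noise floor. Whether you add noise shaping on top is a matter of taste and tooling.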

Shouldn’t the conversion from 192 to 44.1 afterwards give you a lot more artifacts than the initial benefits offer?
If you use some superior software to do this, which is it?
Also, this page compares a lot of software in this matter; it’s quite nice to browse: http://src.infinitewave.ca/

Yes, the downsampling will introduce something (most probably something like a strong sinc lowpass filter to band-limit the signal), but I feel it is very subtle and different from what would happen when rendering everything at the lower rate (all synths, samplers, and effects could then be subject to stronger aliasing). Ofc some of the extra quality is also lost going from 192 to 44.1, resulting in a different sound, but I feel that certain qualities in the highs are still preserved.

I did tests with good headphones and found that Renoise tunes rendered at higher rates always had superior sound. Maybe if you only use certain top-notch VSTs that yield high quality with little aliasing by default, the differences might be less apparent. I mostly use the Renoise sampler and DSP, and found the quality boost substantial.

The graphs show how the resamplers do their job in different software? Renoise is not quite on par with some of the big shots… You can imagine rendering at higher rates as shifting the upper border of those graphs way up into the inaudible range; downsampling the result will then chop off the (inaudible) grit in a hard manner.

As for “superior” software for rate conversion, I had never really thought about it. From the site with the graphs it seems that “SoX” does a really clean job compared to ffmpeg, and thus will be my (free) software of choice for the rate conversion task in future.

According to this article: http://www.lavryengineering.com/pdfs/lavry-white-paper-the_optimal_sample_rate_for_quality_audio.pdf , converting at 60 kHz should be enough to prevent audible degradation (that is, degradation in the audible spectrum) from the brickwall lowpass applied when downsampling. Since 60 kHz is not a standard, the closest standards are 48 kHz and 96 kHz (unless you count 64 kHz, which kinda failed at becoming a standard). Personally I go for 48 kHz. Best quality/size ratio.