Tip: Faster Rendering with HQ Sinc Interpolation

Limit the FPS of your master volume meter on top to 5, then open the Plugins section rather than the Song View section (leaving Song View open used to be my mistake). Then render: I did a 96000 Hz, 32-bit offline render with all samples set to Sinc, and it sped rendering up massively; what used to take hours now takes 10 minutes. I hope this helps someone. I use a Dell Studio 1735 laptop, which is kind of ancient, but it’s all I have at the moment.


Hello,
In my opinion, 96 kHz is unnecessary; 48 kHz is satisfactory everywhere.
You can turn off sinc interpolation if all your samples use the same sample rate.
Yes, sometimes a higher rate can render faster, because sinc interpolation is also involved when converting 44.1 to 48 kHz. That is not an even ratio; 48 to 96 is.
Use the same sample rate everywhere and render at that same rate.
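The "even ratio" point is easy to check with Python's standard-library `fractions` module; this is just a sketch to show why 44.1 → 48 is awkward while 48 → 96 is trivial:

```python
from fractions import Fraction

def resample_ratio(src_hz, dst_hz):
    # Reduce the conversion ratio to lowest terms: an L:M ratio means
    # the resampler conceptually upsamples by L and decimates by M.
    r = Fraction(dst_hz, src_hz)
    return r.numerator, r.denominator

# 44.1 kHz -> 48 kHz: the awkward 160:147 ratio mentioned below, so
# nearly every output sample falls between input samples and must be
# interpolated.
print(resample_ratio(44100, 48000))   # (160, 147)

# 48 kHz -> 96 kHz: a clean 2:1 ratio.
print(resample_ratio(48000, 96000))   # (2, 1)
```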


Lol, what a good hack… I always marveled at the spectrum view reacting to and displaying the render result. If you render at 96 kHz while the display assumes 44100, the FFT graph is shifted in frequency… So I guess rendering will be much faster if you disable the scope and reduce the FPS? Great idea! It should be possible to disable this processing altogether and just render a progress bar…

Hell, no. Try and hear the difference. If you render at 96 kHz, you get much better quality in some regards (everything is oversampled!). The result may even be slightly different from your working version. When downsampling to 44.1 kHz, some of the extra clarity is preserved. I always render at 96 or even 192. People who want music as .flac and have good gear will be glad about HQ lossless versions, and the MP3 should also sound a little better on good gear.

Only if you run exclusively 44.1 samples at 44.1 and compose like a Magix Music Maker guru. The sinc rendering bandlimits the whole Renoise resampler, including the tracker sampler; all sample-based instruments should sound more or less smoother in the highs with this option.

No matter what you convert, you need a proper bandlimiting conversion when downsampling, or a proper interpolation when upsampling. Even if you downsample from 96 to 48 kHz, you cannot just drop every second sample; you must compute a bandlimited average over the surrounding samples. Such even ratios can be optimized, but it’s not a big win. The sinc option does not just resample your output audio, but all sample-based instruments; pitch-modulated sample instruments in particular will be clearer with sinc interpolation (no more HF grit, no reflections in the highs).
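A minimal numpy sketch of why you can't just drop every second sample: a 30 kHz tone (above the new 24 kHz Nyquist) folds back as an audible 18 kHz alias under naive decimation, while a bandlimited version removes it first. Real resamplers use windowed-sinc FIR filters rather than a whole-signal FFT; the brick-wall filter here is only for illustration.

```python
import numpy as np

def decimate_naive(x):
    # WRONG for full-band signals: dropping every 2nd sample folds
    # everything above the new Nyquist back into the audible band.
    return x[::2]

def decimate_bandlimited(x, fs):
    # Sketch: brick-wall low-pass at the new Nyquist (fs/4) via FFT,
    # then drop every 2nd sample.
    X = np.fft.rfft(x)
    cutoff_bin = len(x) // 4          # bins at/above fs/4 get zeroed
    X[cutoff_bin:] = 0.0
    return np.fft.irfft(X, len(x))[::2]

fs = 96000
t = np.arange(fs) / fs                 # exactly 1 second of audio
tone = np.sin(2 * np.pi * 30000 * t)   # 30 kHz: above 24 kHz Nyquist

naive = decimate_naive(tone)           # 30 kHz aliases down to 18 kHz
clean = decimate_bandlimited(tone, fs) # tone is (correctly) filtered out

print(np.abs(naive).max())   # ~1.0: alias at full level
print(np.abs(clean).max())   # ~0.0: bandlimited result is silent
```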

Maybe you want a 1:1, clear WYSIWYG result from your VST plugin outputs? Then you can do that, of course, and prevent your audio from being resampled at all. I mostly use Renoise native instruments, so for me the gain from both offline sinc interpolation and rendering at a higher rate is probably greater than for you. By oversampling you will also oversample your VSTs… with commercial ones it might be overkill with no audible difference, as they may already use bandlimiting and oversampling internally, but cheaper or free ones may also benefit from rendering at a higher rate, by the way.

For the final result, by the way, I just use a tool like ffmpeg to make the different encodings and versions. I really dig the difference of 96 kHz, 24/32-bit music; it’s something for audiophiles, but with good headphones you can usually hear the difference. As a producer with neutral studio monitoring gear, you should also be able to hear this difference and prepare for people with acute hearing. It’s an art to exploit the greater quality, so some people may want to do it but still work at 44.1 to save CPU cycles. A simple conversion with a tool that properly resamples the output should not diminish the quality, and static resampling of audio is very fast. It should not kill any quality, unless you intentionally restrict it with your new format (conversion to a lesser-quality format).

Remember that the offline sinc rendering option affects the whole Renoise sampler with all its pitch modulations, not just the final resampling. It does not affect VSTs, though, and that’s why people here make different choices in that regard.

I agree, but we’re talking about samples.
The DSP is somewhere else and resamples internally; there always has to be some reserve for filters, etc. But 96 kHz is really for DSP. At 48 kHz, Nyquist is 24 kHz.
Sorry, but I don’t think everyone can hear like a dog.
I haven’t measured anything, but I think that at my age I’ll be happy to hear 18 kHz.
I can say, however, that even with quality headphones,
I can’t really tell the difference between 48 and 96.
Only if the mix was worth it…

Just my unbiased opinion as a listener.


Hey… well, I am also talking about samples. Please look at the rendering section of the manual. The offline sinc resampler really resamples all cubic- or realtime-sinc-interpolated samples with an offline, non-realtime sinc algorithm. So the whole Renoise sampler, and all instruments that are supposed to be bandlimited, sound much cleaner than with the realtime alternatives. Look in the Renoise manual and you will see it affects the noise floor of all cubic- or sinc-interpolated samples.

I do a lot of Renoise native sound design, and it really affects the sound and makes it much cleaner. Some instruments might genuinely change their character if there is very heavy DSP processing (boost/distortion/feedback), and then need to be rendered at their original rate to work with the song. I haven’t encountered this in any critical way yet, though… usually all my instruments just sound slightly to considerably cleaner, more defined, and more transparent than with realtime audio. So it improves my sound rather than changing it in a critical way.

The precise/HQ mode under “Interpolation” is not a general resampling of the output audio stream, but really the core of all cubic/sinc sample-based Renoise instruments. The output is not resampled at all; Renoise delivers it at the rate it rendered at. You’ll have to use other tools to resample the whole output if you need a different rate for the result file.

Yeah, well, there is a market, and I agree that electronic music is not often made for audiophiles. Some artsy artists really do cater to them, though, and people listen to their music on very expensive equipment.

Yeah, it’s hard, but in my work I can clearly tell which render is clearer. Like I said, it depends on the software you use. Renoise is really not completely up to industry standards and benefits from this practice; some VSTs won’t.

Also, if you have a low-quality 48 kHz render and another high-quality 96 kHz render, a trained person with good gear can tell which is the better quality in comparison. Play back only one of the tracks and the difference fades, due to the lack of comparison. When rendering at 96 and then converting to 48, the difference between the render and the conversion will be much smaller; the better rendering also survives resampling. You will still lose some quality, and this will be audible in comparison on good gear, but only to a small extent. Hence I render at 96 or even 192, downsample it, and use that master for all purposes; even for CD I just convert it with ffmpeg to 44.1/16. The quality will be superior to another 44.1 render, but that’s just my own experience with my own (Renoise-native-heavy) production style.

The optimal rate for analog sound is 100,000 Hz at 24 bit. I think 192000 Hz may be too much, unless you record a live instrument from an external source like vocals. I believe my computer is only capable of 96000 Hz. If you generate a sine wave in Audacity with the HQ Tone plugin at 96000 Hz, it sounds super clean; at 192000 Hz it produces noisy artifacts. 96000 Hz is the sampling rate, which is how many slices per second the audio holds within the given spectrum. What you’re talking about is the audible frequency spectrum, not the sampling rate. I hope you learned something new. Also, 32-bit depth may be too clean; I tried a sine wave at 32 bit in Audacity, and I can hear the frequency fluctuate up and down. 24 bit puts in the right amount of noise for a more linear sound.

Oops,
You can’t judge analog sound by sample rate and bit depth. Those are digital. 100 kHz, I’m hearing that for the first time and I don’t know what it means.
A sine tone should be clean at every sampling frequency.
Is there noise at 192 kHz?
Then the sound card probably can’t handle it, or the shielding is bad.
Bit depth doesn’t affect the frequency but the dynamic range, so if you hear frequency fluctuations, the signal generator is bad.
But my point at the beginning was that I recommend using the same sample rate for samples if possible, because sinc interpolation, although good, can cause artifacts with an inexact ratio like 160:147 (44.1 vs. 48).
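The bit-depth-to-dynamic-range relationship is a quick back-of-the-envelope calculation: each bit of linear PCM adds roughly 6.02 dB of dynamic range. A small Python sketch:

```python
import math

def dynamic_range_db(bits):
    # Theoretical dynamic range of linear PCM with `bits` of depth:
    # 20 * log10(2^bits) = bits * 6.02 dB.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB  (CD)
print(round(dynamic_range_db(24), 1))  # 144.5 dB
```

Dynamic range, not frequency response, is what grows with bit depth; the frequency axis is governed solely by the sample rate.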

100 kHz is not usable by sound cards with ordinary software, so the only practical option would be 96000 Hz, with the exception of Linux being able to play at a 100 kHz sampling rate, which is truly analog, like what’s heard in the real world. I read on Wikipedia that anything above 100 kHz is overkill. I’m just saying that generating a sine wave in Audacity at 192 kHz makes it asymmetrical, causing audible artifacts to pop up. At 96000 Hz, the sine wave is symmetrical.

100 kHz is probably a theoretical limit; due to buffer-size optimizations, sound-card vendors etc. have settled on 96 kHz. This is only 4 kHz lower, so it should yield essentially the same result.

About 192: well, you can record ultrasonics with it, lol. If you apply heaps of processing, you may find a benefit. The same goes for 32 bits: with 24 bits you have fixed-point precision, which is good for the resolution of a final result. With 32 bits you suddenly have floating-point numbers, so in reality you only have a 23-bit mantissa plus some extra.

The special thing about floating point is that you can amplify or attenuate the signal heavily without really losing precision when attenuating, and without clipping when amplifying too much, because floating point has a greater range and higher precision around zero. That’s why DAWs with in-line processing would rather use floating point, with only the final result converted to 24- or 16-bit fixed-point precision.
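A small numpy sketch of this headroom argument: boosting a float sample by a power of two only changes the exponent, so attenuating it back is exact, while the same boost applied to a (simulated, hypothetical) 24-bit fixed-point stream clips irreversibly at full scale.

```python
import numpy as np

def to_fixed24(x):
    # Simulate a 24-bit fixed-point stream: quantize and clip at
    # full scale, the way an integer pipeline would.
    q = 1 << 23
    return np.clip(np.round(x * q), -q, q - 1) / q

x = np.float32(0.5)

# Float: +48 dB of gain and back is exactly recoverable,
# since multiplying by 256 only touches the exponent bits.
boosted = x * np.float32(256.0)
print(boosted / np.float32(256.0) == x)   # True

# Fixed point: the same boost pins the value at full scale,
# so attenuating afterwards cannot restore the original 0.5.
clipped = to_fixed24(float(x) * 256.0)
print(clipped / 256.0)                    # ~0.0039, not 0.5
```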

If you have artifacts or fluctuations at 192 kHz, maybe your sound card or driver has problems. It should sound clean, just like 96 kHz. Maybe some software just can’t cope with it, or the hardware/driver is lacking somehow.

Yeah, there really are DACs with a free choice of sample rate, I believe, so you could drive them at exactly 100 kHz. But believe me, you would not notice the difference from 96 kHz at all; it’s too close, and both resolutions are far too fine.


Yes, 24 bit can handle everything.
32-bit float is fine, but there is a fine line.
Really small numbers close to zero start to cause problems, which is why most DSP code cuts off denormals at 1e-15, sometimes even 1e-12.
Personally, in Lua scripts, even though I use the “ffi” C interface, I cut off at 1e-8 (-160 dB), because my CPU just can’t handle more.
And that can’t affect the audio signal; the rest is not audio but numerical residue.

This is more of a relic problem. You can actually set flags in the CPU so it processes your DSP code with denormal handling set to flush: it simply treats anything below the normal range as zero and no longer slows down. Any such code of course needs testing. Maybe some plugins can’t handle it well; then it’s the developers’ fault for using the FPU this way. Denormal handling is really only necessary for some ultra-precise operations, and will of course improve the precision of certain calculations, but it’s usually not needed for audio…
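For illustration, here is the flush-to-zero idea done by hand in numpy; real DSP code would instead set the CPU's FTZ/DAZ bits (e.g. via compiler intrinsics) so the hardware does this for free. This is just a sketch of what those flags mean:

```python
import numpy as np

TINY = np.finfo(np.float32).tiny       # smallest normal float32, ~1.18e-38

def flush_denormals(buf):
    # Manual flush-to-zero: anything below the normal range becomes
    # exact zero, which is what the CPU's FTZ mode does in hardware.
    buf[np.abs(buf) < TINY] = 0.0
    return buf

# A decaying feedback tail eventually sinks into the subnormal range...
tail = np.float32(1e-30) * np.float32(1e-10)   # ~1e-40: subnormal
buf = np.array([tail, 0.25, -0.5], dtype=np.float32)

flushed = flush_denormals(buf.copy())
print(buf[0] != 0.0)       # True: the subnormal value survived
print(flushed[0] == 0.0)   # True: flushed to exact zero
print(flushed[1:])         # normal samples are untouched
```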

If you try to do DSP on a fixed-point (24-bit) stream, you’ll have problems in the calculations. I’m not saying it’s impossible, but you may have a hard time keeping precision in certain algorithms. SSE and other vector operations can have controlled denormal handling, and even though some interpreted languages sadly can’t control it, that’s how DSP code is supposed to run when you’re after performance.

Yes, that’s how it should be for compiled programs.
I write my applications in LuaJIT, and with the help of the FFI I can achieve good results. Unfortunately, I have to take care of the denormals myself. But the CPU will thank me.
Other techniques, like avoiding the garbage collector, are just icing on the cake :slight_smile: