It’s important to understand that interpolation only applies to samples being played at something other than their base pitch. For example, if a sample recorded at 44.1kHz has its base note set to C-4, and you play a D#4 note in your song, the sample must be resampled to sound at that new pitch. The same is true if you have a 44.1kHz sample and render the song at 48kHz: the sample must be resampled from 44.1kHz to 48kHz.
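To make that concrete, here's a minimal sketch (not Renoise's actual code; playback_step() and its inputs are hypothetical illustrations) of how the resampling ratio falls out of the pitch offset and the sample-rate mismatch. Any step other than 1.0 means output frames land between source frames and must be interpolated:

```c
/* Minimal sketch, not Renoise's actual code: playback_step() and its
 * inputs are hypothetical illustrations of the resampling ratio. */
#include <math.h>
#include <stdio.h>

/* Source frames consumed per output frame; 1.0 means no resampling. */
static double playback_step(double semitones_from_base,
                            double sample_rate_hz, double output_rate_hz)
{
    double pitch_ratio = pow(2.0, semitones_from_base / 12.0); /* equal temperament */
    double rate_ratio  = sample_rate_hz / output_rate_hz;      /* e.g. 44100/48000 */
    return pitch_ratio * rate_ratio;
}

int main(void)
{
    /* A 44.1kHz sample based at C-4, played as D#4, rendered at 48kHz:
     * 3 semitones up, combined with the 44.1k -> 48k conversion. */
    double step = playback_step(3.0, 44100.0, 48000.0);
    printf("step = %.4f source frames per output frame\n", step);
    /* Because step != 1.0, each output frame falls between source
     * frames, and an interpolator (cubic, sinc, ...) must fill it in. */
    return 0;
}
```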
The output from DSP devices and VST/AU plugins is never resampled, so the interpolation mode has absolutely no effect on them.
If you want to benchmark the different interpolation modes, your test song should consist of sample-based instruments played at a variety of different pitches. That is where the resampling comes into play, and where you will see a clear difference in rendering speed between cubic and sinc. I would also advise you to create the test song in 2.7, so that it can be rendered in both 2.7 and 2.8 for a proper comparison.
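For a sense of why sinc is so much heavier per output frame than cubic, compare the inner loops below. This is an illustrative sketch only: Renoise's real kernels are not public, and TAPS is a hypothetical half-width. A 4-point cubic Hermite touches 4 neighbouring samples with a handful of multiplies, while a windowed sinc touches 2*TAPS samples and evaluates transcendentals per tap:

```c
/* Illustrative sketch only; Renoise's actual kernels are not public,
 * and TAPS is a hypothetical half-width chosen for demonstration. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* 4-point, 3rd-order Hermite interpolation around s[i], frac in [0,1). */
static float cubic_hermite(const float *s, long i, float frac)
{
    float c0 = s[i];
    float c1 = 0.5f * (s[i + 1] - s[i - 1]);
    float c2 = s[i - 1] - 2.5f * s[i] + 2.0f * s[i + 1] - 0.5f * s[i + 2];
    float c3 = 0.5f * (s[i + 2] - s[i - 1]) + 1.5f * (s[i] - s[i + 1]);
    return ((c3 * frac + c2) * frac + c1) * frac + c0;
}

#define TAPS 16

/* Hann-windowed sinc over 2*TAPS neighbours; far more work per frame. */
static float windowed_sinc(const float *s, long i, float frac)
{
    float sum = 0.0f;
    for (int k = -TAPS + 1; k <= TAPS; k++) {
        float x = (float)k - frac;
        float sinc = (x == 0.0f) ? 1.0f
                                 : sinf((float)M_PI * x) / ((float)M_PI * x);
        float hann = 0.5f + 0.5f * cosf((float)M_PI * x / TAPS);
        sum += s[i + k] * sinc * hann;
    }
    return sum;
}

int main(void)
{
    float buf[64];
    for (int n = 0; n < 64; n++)
        buf[n] = sinf(0.2f * (float)n); /* toy source signal */
    printf("cubic: %f  sinc: %f\n",
           cubic_hermite(buf, 30, 0.5f),
           windowed_sinc(buf, 30, 0.5f));
    return 0;
}
```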
If you’re trying to do a more general benchmark of 2.7 vs 2.8 rendering, it’s probably still a good idea to use a song that was designed for 2.7, i.e. no groups or other new 2.8 features. Try to establish a good baseline measurement first, just to see if the two are even remotely similar.
pfff little, as usual, … the kind of aggressive/arrogant answers I already know from you…, as opposed to your shares… as usual, with _ or - or comma or whatever between bit and arts…
When you convert the sample rate of an audio file from one frequency to another frequency on a computer that is running Windows 7 or Windows Server 2008 R2, the new audio file sounds distorted during playback.
The audio file sounds distorted when you play it on any audio record, capture, or encoder application if the following conditions are true:
The application uses the Multimedia Extensions (MME) Wave I/O API.
The application relies on the Audio Resampler or an audio sample rate converter.
maybe that's why the (argurus) sinc algo is having a hard time…