The Perils Of Interpolation (versus Byte For Byte Downmix)

I had a surprise when I tried to record a pure C64-style square wave and downmix it with Cubic or Arguru interpolation.

Compare:

ORIGINAL SAMPLE:

ARGURU INTERPOLATION (through “Render song to Disk”):

(EDIT +8 hours: One can (almost) disregard this thread now. This sample is 10 kHz, so the interpolation effect is more dramatic than usual. Also, interpolation can be turned off after all by using Taktic’s advice further below.)

The interpolated one looks and definitely SOUNDS different. The original has a clarity that the interpolated one lacks. I’m not too bothered about how good these interpolation types supposedly are for some mixes (I’m sure they’re fine for general use) - but I really want to keep the original signal where possible.

Numerous reasons:

1: For archival/integrity purposes. No information is changed - the sound is mixed as is (if one wants ‘higher quality’ and a smoother sound, one can always use sounds at an arbitrarily high sample rate).

2: Comparison reasons. I’d love to hear how no interpolation sounds compared to Arguru interpolation - for curiosity, scientific, and practical reasons. I’d like to see linear interpolation as another type for comparison too (see the sketch after this list).

3: To check the integrity of a VST (seeing its sample rate, if any, more clearly).

4: For C64 buffs ;)
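On point 2, here’s a quick sketch in plain NumPy/SciPy (nothing to do with Renoise’s actual resamplers - the rates just mirror my sample) of what that comparison looks like numerically. A hold-style, byte-for-byte upsample keeps only the original two levels, while linear and cubic interpolation smear every edge with in-between values:

```python
# Illustrative only: upsample a 10 kHz square wave to 44.1 kHz with
# sample-and-hold ("no interpolation") vs. linear and cubic interpolation.
import numpy as np
from scipy.interpolate import interp1d

src_rate, dst_rate = 10_000, 44_100
t_src = np.arange(200) / src_rate                                    # 20 ms of source audio
square = np.where(np.sin(2 * np.pi * 250 * t_src) >= 0, 1.0, -1.0)   # crude 250 Hz square wave

t_dst = np.arange(int(len(square) * dst_rate / src_rate)) / dst_rate
t_dst = t_dst[t_dst <= t_src[-1]]                                    # stay inside the source range

hold   = interp1d(t_src, square, kind="nearest")(t_dst)              # byte-for-byte style
linear = interp1d(t_src, square, kind="linear")(t_dst)
cubic  = interp1d(t_src, square, kind="cubic")(t_dst)

# The hold version still contains only the two original levels; the interpolated
# versions fill the edges with in-between values, and cubic even overshoots.
print("levels (hold)  :", np.unique(hold).size)    # 2
print("levels (linear):", np.unique(linear).size)  # many
print("peak (cubic)   :", np.abs(cubic).max())     # > 1.0
```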

I’m guessing interpolation was useful back when sample rates were low and smoothing needed to be applied. But with 44 kHz samples/VSTs becoming the standard, that’s surely not necessary now.
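Here’s the rough arithmetic behind that, by the way (just an illustration of resampling ratios - the function name is made up, this is not Renoise internals):

```python
# Illustrative only: how many source frames the play position advances per
# 44.1 kHz output frame. Anything other than a whole number means the mixer
# has to produce values that fall between the original samples.
def frames_per_output_frame(source_rate_hz, output_rate_hz=44_100):
    return source_rate_hz / output_rate_hz

print(frames_per_output_frame(10_000))   # ~0.227 -> every output frame lands between source frames
print(frames_per_output_frame(44_100))   # 1.0    -> one-to-one, interpolation has nothing to do
```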

Please let’s have the option to disable interpolation, along with the mono mixing mentioned in this thread (only stereo at the mo).

Yes!


I don’t get it. There already is an Interpolation setting for each sample in the Instrument → Sample Properties.

Do you just want to set this for ALL samples at once?

I think he actually means the interpolation setting for the final render (the entire song).

Okay, confession time - that section is one of the few areas of Renoise I was only semi-aware of. I’m really pleased to see that samples can be rendered without interpolation if necessary. I’m curious now - if this setting exists, why does an interpolation option also exist in the ‘Render Song to Disk’ window? Does it apply a second layer of interpolation? I looked in the online manual, but it wasn’t clear on this.

I also noticed that the square wave I used was at a 10,000 Hz rate. When I used a 44,100 Hz rate instead, the interpolation made very little difference at all, unless the sample was played at low octaves. That’s normal of course - good stuff.
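Same arithmetic as the little sketch earlier, but with transposition added (again just an illustration, not a Renoise API): playing a 44,100 Hz sample below its root note shrinks the step through the source, so the interpolator is back to filling in in-between values.

```python
# Hypothetical helper for illustration: step through the source per output frame,
# including a note offset in semitones from the sample's root pitch.
def frames_per_output_frame(source_rate_hz, semitones_from_root=0, output_rate_hz=44_100):
    return source_rate_hz / output_rate_hz * 2 ** (semitones_from_root / 12)

print(frames_per_output_frame(44_100, 0))    # 1.0  -> no audible interpolation effect
print(frames_per_output_frame(44_100, -12))  # 0.5  -> one octave down: halfway values must be invented
print(frames_per_output_frame(44_100, -24))  # 0.25 -> two octaves down: the difference is obvious again
```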

I take it that VSTs don’t use the interpolation in the “Instrument Settings → Sample Properties” bit.

That would still be nice, but I’m quite happy even now :)

-1

At first I was confused about this, because there is a setting for interpolation in both the sample properties and the song render dialog. But I think I get it now – can anyone confirm this?

  • If interpolation in the sample properties area is set to none or linear, those samples will be interpolated like that no matter what.

  • If interpolation is set to cubic both in the sample properties area and at song render time, those samples will have cubic interpolation.

  • If interpolation in the sample properties area is set to cubic, and Arguru’s sinc in the song render dialog, that sample will be rendered with the more complex algorithm (Arguru).

So there’s no “master interpolation” happening to the mixdown at render time - it’s only per-sample… Does this seem right?
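If that’s right, the precedence would boil down to something like this (purely a sketch of my guess above - the names are made up, this isn’t from the Renoise source):

```python
# Sketch of the per-sample vs. render-dialog precedence described above (unconfirmed).
from enum import IntEnum

class Interp(IntEnum):
    NONE = 0
    LINEAR = 1
    CUBIC = 2
    SINC = 3          # "Arguru's sinc"

def effective_interpolation(sample_setting: Interp, render_setting: Interp) -> Interp:
    # None/Linear on the sample are respected no matter what;
    # only samples set to Cubic get promoted by the render dialog's choice.
    if sample_setting in (Interp.NONE, Interp.LINEAR):
        return sample_setting
    return max(sample_setting, render_setting)

assert effective_interpolation(Interp.NONE, Interp.SINC) == Interp.NONE
assert effective_interpolation(Interp.CUBIC, Interp.CUBIC) == Interp.CUBIC
assert effective_interpolation(Interp.CUBIC, Interp.SINC) == Interp.SINC
```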