Understanding the point of interpolation (and a little bit of the Nyquist-Shannon sampling theorem to boot)

Thanks to Nyquist and Shannon, analog-to-digital (and vice versa) conversion of audio has been a thing for a while now. Renoise has this implemented as an instrument feature called interpolation (or resampling). Obviously you can read up on it in the wiki, but what I want to talk about is one particular application: resampling lo-fi audio samples to reproduce analog-sounding variants.

Let’s take an A-7 triangle wave generated from the Custom Wave Generator:
lofi_triangle

If you apply linear interpolation:
linear_triangle

It’s very clear that resampling the audio sample straightens out the waveform and produces a close representation of a triangle wave. Sometimes, due to the nature of the wave being resampled, the signal may not look as pleasing at higher frequencies.
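To make the "straightening out" concrete, here is a minimal sketch of linear interpolation in Python. It is not Renoise's actual implementation, just the textbook idea: draw a straight line between each pair of neighbouring sample points. The 8-point `coarse` cycle is a made-up stand-in for a lo-fi triangle from the Custom Wave Generator.

```python
def lerp_resample(samples, factor):
    """Upsample by an integer factor using linear interpolation.

    Hypothetical helper for illustration; between each pair of original
    points we emit `factor` values lying on the straight line joining them.
    """
    out = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        for step in range(factor):
            t = step / factor          # 0.0 .. just under 1.0
            out.append(a + (b - a) * t)  # point on the line from a to b
    out.append(samples[-1])            # keep the final sample
    return out

# One coarse (8-point) triangle cycle, stand-in for a lo-fi sample:
coarse = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
smooth = lerp_resample(coarse, 4)
print(len(smooth))  # 4 * 7 + 1 = 29 points, with the staircase steps filled in
```

Because a triangle wave really is piecewise linear, linear interpolation reconstructs it almost perfectly; for curvier waves (sine, etc.) the straight segments are only an approximation, which is where cubic or sinc modes do better.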
So why is this important? Space saving! If you’re trying to save space in your project or on your computer, this is a great application. The only trade-off is that it costs a little CPU, especially on low-end PCs, but these days CPUs are capable enough that I don’t think there’s much of a trade-off in using Linear or Cubic interpolation. Also, depending on how the lo-fi sample is shaped, interpolation can reshape it into something you don’t expect, so experiment and try the different lo-fi samples at your disposal!

Good to know that you can save some space on those chip samples :smiley: Also… isn’t it preferable to have samples start at 0 DC?

Also good to remember: the D/A converter in your audio interface will do a sophisticated interpolation of all the points, but only for the sample rate it is set to.

Sometimes! If you want to achieve unique sounds, use different variants of it to get that “analog” feel.