Samples Sound Darker/Less Vibrant In Renoise?

oh again I've been absent… as I came back to refresh myself on this thread I realized I said something stupid… I said I sometimes disable "resampling" in trackers to avoid the effect of muddying the frequencies present… duh me. I meant I disable interpolation! embarrassing …

also, I wish to state right away that I have no pretensions that an expensive tool sounds better than an inexpensive tool :slight_smile: that's not why I mentioned Kontakt. this also brings me back to a related question I'd like to ask: is it possible that interpolation is really the sum total of the problem I'm experiencing? does the "render plugin to samples" feature sound 100% identical to the plugin if I disable playback interpolation? is a waveform more accurate if interpolation is off? I'm fully acknowledging that I'm not understanding some of the technical details about interpolation. I've assumed that it exists solely to deal with low quality sampling artifacts, but maybe I missed the point. is interpolation a necessary feature of resampling itself regardless of sample quality (to play back samples at different pitches/speeds)? I have assumed it is a feature intended only for "cleaning up" low qual samples… maybe my Kontakt instrument rendered to samples would sound 1:1 if the playback engine had interpolation disabled? I know … I should be testing these theories myself … :wink:

i took a listen by drag-&-dropping all of them into Sound Forge (i ignored the file names so i listened without bias). i hear no difference. but i didn't expect to, as these are pure and basic sounds at a relatively low frequency. i think the changes i'm noticing are impossible to detect in such pure, low frequency samples. maybe a better test would be to try something that's richer in frequencies/harmonics/etc. if the frequencies that i find are affected are not present to begin with, the simple sawtooth wave test won't be able to prove anything one way or the other. for example, i noticed a lot of change between Kontakt as a plug-in hosted in Renoise (playing the Kontakt 3 pizzicato string around C3 to C4) and the same instrument after being converted to samples (with Renoise's "render plugin to samples" tool). that was a dramatic difference to me between plugin and samples. it's what drove me to move my project to a different tool outside of Renoise.

I love the smell of my C64 when it heats up.

The interpolation in this case refers to the method used when resampling, so you're talking about the same thing anyway.

Even when interpolation is 'disabled', in some cases linear interpolation will still be used, because although it sounds pretty crappy, it's quick and easy to compute and will still sound a lot better than nothing at all.

For the resampling to be completely disabled, a method like sample and hold or nearest-neighbour interpolation would be used, but it's quite rare to see this because it's pretty useless from a musical point of view when compared to something even as basic as linear interpolation. If interpolation is totally disabled like this, then playing samples at different pitches results in a 'blocky' lofi sound with a lot of nasty aliasing, which can be nice for creative purposes, but is quite useless for high quality resampling. For what it's worth, Renoise currently uses the sample and hold method when interpolation is set to 'None', so it's there if you really want it.

(Small update: Edited the above paragraph a little bit just to clarify some things)
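To make the difference concrete, here's a minimal Python sketch of both read-out methods, assuming a mono sample stored in a numpy array (just an illustration, not Renoise's actual code):

```python
import numpy as np

def repitch(samples, semitones, mode="linear"):
    """Read through `samples` at a new rate, with or without interpolation."""
    step = 2.0 ** (semitones / 12.0)       # playback rate ratio for the pitch change
    pos = np.arange(0, len(samples) - 1, step)
    idx = pos.astype(int)                  # integer part of each read position
    frac = pos - idx                       # fractional part between two samples
    if mode == "none":                     # sample and hold: repeat the last sample
        return samples[idx]
    # linear: blend the two neighbouring samples by the fractional position
    return samples[idx] * (1.0 - frac) + samples[idx + 1] * frac

tone = np.sin(2 * np.pi * 440.0 * np.arange(44100) / 44100.0)
blocky = repitch(tone, +3, mode="none")    # aliased, 'blocky' lofi result
smooth = repitch(tone, +3, mode="linear")  # crude, but much more listenable
```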

This is a tricky question to answer. Many plugin synths or samplers will apply a lot of subtle variations to the sound each time a new note is played. There might be many different types of modulations being applied, in order to vary the tone/timbre and create a more natural sound, etc. In the case of a synth, every note is being synthesised uniquely. In the case of a sampler, there may be several multi-sampled layers or different velocities per note, or even multiple different samples of each note which are selected randomly or in a round robin style, all of which combine to help give the instrument a more natural and authentic sound across the entire range.

If you render the plugin to samples in Renoise, then at most you are only getting one captured sample per note. You may be losing a lot of variations that you'd get from the plugin itself when playing in realtime, and this could very easily change the whole feeling of the instrument. Depending on the options you choose in the 'render plugin to instrument' dialog, you might be capturing an even smaller number of samples, like one sample for every 3 notes, for example. In this case, interpolation will have to be used in order to play the notes in between each captured sample. If the instrument only has captured samples for C-4 and F-4, but you want to play a D#4 note, then one of those samples must be resampled/interpolated at some point. It sort of depends on how you configure things in the instrument settings tab, and how you arrange all the multisamples across the note range.
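To illustrate that last point, here's a hypothetical sketch of the mapping (the note numbers, with C-4 = 48, the file names, and the nearest-root rule are my own simplification; real instruments use explicit key zones):

```python
# Captured multisamples: note number -> sample (only C-4 and F-4 captured)
CAPTURED = {48: "C-4.wav", 53: "F-4.wav"}

def pick_sample(note):
    """Choose the closest captured sample and the repitch ratio it needs."""
    root = min(CAPTURED, key=lambda n: abs(n - note))
    ratio = 2.0 ** ((note - root) / 12.0)   # != 1.0 means interpolation is needed
    return CAPTURED[root], ratio

# Playing D#4 (note 51): F-4 must be resampled down 2 semitones
print(pick_sample(51))   # ('F-4.wav', 0.8909...)
```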

Bottom line: rendering a plugin instrument to samples (which you then intend to play back as a new instrument) probably won't sound 100% identical to the original plugin, but there are so many other factors at play here that you cannot simply blame it on the interpolation.

If we're talking about simply playing back a sample in Renoise at its original pitch (ie. C-4), then the interpolation method does not matter here. There is no interpolation necessary when playing samples at their original pitch/speed, so there is no interpolation being applied. You are getting a 1:1 copy of the original data. Interpolation is only applied when playing the sample at a different pitch/speed.
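Put another way, the sampler walks through the stored data with a read step, and only a step of exactly 1.0 gives a bit-for-bit copy. A tiny sketch of the arithmetic (my own formulation, not Renoise internals):

```python
def read_step(sample_rate, output_rate, semitones=0):
    """How far the read position advances per output sample."""
    return (sample_rate / output_rate) * 2.0 ** (semitones / 12.0)

print(read_step(44100, 44100, 0))   # 1.0    -> every read lands on a stored sample
print(read_step(44100, 44100, 3))   # ~1.189 -> fractional positions, interpolation needed
```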

It's not strictly necessary, as I've explained above, but it will dramatically improve the quality of samples played at different pitches/speeds.

The quality of the original sample is not really the point here. High quality samples still need to be resampled with a good quality interpolation method if you want to accurately play them back at different speeds. In fact, you could argue that high quality samples probably benefit more from good interpolation, because there's a lot more detail and/or frequency content that could be damaged and distorted if the signal is not properly reconstructed.

I chose a sawtooth waveform because it is one of the most harmonically rich sounds that exist. Even though the fundamental frequency of my example sound was 110Hz, the sawtooth itself has strong harmonics that reach all the way up through the frequency spectrum. This sharp, rich sound is what makes the sawtooth an excellent basis for sound design, since you can use filtering to shape it into a huge range of diverse tones and timbres. Because it contains such rich harmonics in its raw form, any changes (ie. dulling) of the frequencies would be very easy to hear, so if there was a real difference in tone between playing the raw sound in Sound Forge vs playing the sound in Renoise, you would be able to identify it immediately.
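You can see this directly from the sawtooth's Fourier series: harmonic n has amplitude proportional to 1/n, so even a 110Hz saw at 44.1kHz has around 200 partials below the Nyquist frequency. A quick numpy sketch that builds one additively:

```python
import numpy as np

rate, f0 = 44100, 110.0
t = np.arange(rate) / rate                 # one second of time values
n_max = int((rate / 2) // f0)              # highest harmonic under Nyquist (200 here)
saw = sum(np.sin(2 * np.pi * n * f0 * t) / n for n in range(1, n_max + 1))
saw *= 2.0 / np.pi                         # scale to roughly +/-1
```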

If you really want to satisfy your own curiosity, then you could try the same test but with a sample of white noise instead. Generate some white noise in Sound Forge, then compare how it sounds in Renoise. Since white noise contains all frequencies distributed evenly, then any change in tone in Renoise should also be pretty easy to spot this way.
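If Sound Forge isn't handy, a few lines of Python can generate the test file too (numpy plus the standard-library wave module assumed; the file name is just a placeholder):

```python
import numpy as np
import wave

rate = 44100
noise = np.random.uniform(-1.0, 1.0, rate * 5)     # 5 seconds of white noise
pcm = (noise * 32767).astype(np.int16)             # 16-bit PCM

with wave.open("white_noise_test.wav", "wb") as f:
    f.setnchannels(1)        # mono
    f.setsampwidth(2)        # 16-bit
    f.setframerate(rate)
    f.writeframes(pcm.tobytes())
```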

This could be related to what I've already said, regarding the fact that the plugin could be applying all sorts of subtle nuances and changes to the sound, compared to the sampled version in Renoise which is very static and could very easily sound different because of that. I don't have Kontakt 3 or the instrument you're talking about, so unfortunately I really cannot test this myself.

I've already asked a few times in this thread, but could you please, please, please just upload some example sounds that we can listen to? Render a sequence of notes played by the Kontakt plugin itself, then render the same sequence of notes played by the Renoise sampled version. Or render out any other kind of test you think is appropriate. Show me the differences you are experiencing. This is the only way that I can possibly hear exactly what you're talking about.

If you don't upload some examples that I can listen to and compare directly, then we'll be going round and round in circles forever! :)

Sorry to nitpick, but you have already pointed out yourself that nearest neighbour is a form of interpolation. It still has to check the position against both values and calculate which it is closest to. The alternative (no interpolation) would be to say "We don't have a value here, I'm not going to look at what's closest, I'm going to put in a default value (0)/repeat the last value." This would result in nothing but an output of silence with glitches on it, or a change of level each time an old sample position coincides with a new one, unless you have pitched by a nice round division, such as an octave.

So yes, some form of interpolation is 100% required.

Also, I don't agree that a saw wave is a good test bed. Our ears very quickly adjust and get used to any steady tone. Transients are very important when trying to pick out finer detail without making the ears tired. It doesn't matter how harmonically rich your source is if it's a constantly repeated cycle (IMHO obviously.) Especially when using such a low base note, where the harmonics will be very close together in terms of perceived pitch in the upper registers, and thus much more susceptible to the auditory masking effect.

No interpolation means it does not calculate/synthesize the missing samples, but replaces them with the previous sample:

Original: 1, 2, 3, 4

Twice slower:
No interpolation: 1, 1, 2, 2, 3, 3, 4
Linear interpolation: 1, 1.5, 2, 2.5, 3, 3.5, 4
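For anyone who wants to check those numbers, here is the same example as runnable Python (numpy assumed):

```python
import numpy as np

orig = np.array([1.0, 2.0, 3.0, 4.0])
pos = np.arange(0.0, 3.5, 0.5)            # read positions 0, 0.5, 1.0, ..., 3.0
idx = pos.astype(int)                     # stored sample to the left of each position
frac = pos - idx                          # distance into the gap between two samples
nxt = np.minimum(idx + 1, len(orig) - 1)  # clamp so the last read stays in range

print(orig[idx])                                  # [1. 1. 2. 2. 3. 3. 4.]    no interpolation
print(orig[idx] * (1 - frac) + orig[nxt] * frac)  # [1. 1.5 2. 2.5 3. 3.5 4.] linear
```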

Something has to be done in order to reconstruct the signal, I will agree with you there. But as Suva points out, no interpolation could mean simply repeating the last known sample, which is what Renoise currently does when interpolation is set to "None". Nearest-neighbour is of course a very primitive form of interpolation, but for practical purposes it will produce almost identical results to simply repeating the last known sample, ie. not very good results at all for resampling. In my mind, neither method is useful except for novelty usage, ie. when specifically aiming for a very lofi sound.

I only used a sawtooth because of what dysamoria said in his original post:

You can't really get much more harmonically rich than the sawtooth, so based on his own words I thought that would be a good place to start. The base frequency doesn't have a lot to do with it in this particular instance, imho, as I believe it's quite easy to spot a 'dulling' effect on a sawtooth over quite a wide range of base frequencies.

I don't know exactly what constitutes a 'dulling' effect, but to hopefully give a very quick demonstration here's another .wav to listen to:
test_saw_880hz_vs_110hz.wav

This is what you'll hear:

  • sawtooth at 880Hz, unfiltered
  • sawtooth at 880Hz, filtered at 10kHz with a butterworth 8n lowpass
  • sawtooth at 110Hz, unfiltered
  • sawtooth at 110Hz, filtered at 10kHz with a butterworth 8n lowpass

Can we agree that a 'dulling' effect is observable in both instances?
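If you want to reproduce a similar test yourself, here's a rough scipy sketch (an 8th-order Butterworth lowpass stands in for Renoise's Butterworth 8n; the exact curves won't match, but the dulling is the same idea):

```python
import numpy as np
from scipy.signal import butter, sosfilt

rate = 44100
t = np.arange(rate * 2) / rate                 # two seconds

def naive_saw(freq):
    return 2.0 * ((t * freq) % 1.0) - 1.0      # plain (non-bandlimited) ramp

sos = butter(8, 10000, btype="low", fs=rate, output="sos")

for freq in (880.0, 110.0):
    raw = naive_saw(freq)
    dulled = sosfilt(sos, raw)    # everything above ~10kHz removed
    # compare `raw` vs `dulled`: the 880Hz saw loses far more audible content
```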

Edit:
Anyway… until I actually hear some example sounds from dysamoria, this is all getting a bit silly. :)

Yeah, sorry, I was being a bit blinkered there: I meant the last sample that would have played, rather than the last sample that was in the correct position, which in all likelihood would be the same as nearest neighbour 50% of the time (the next sample the other 50%). In your example, No Interpolation would also be the same as Nearest Neighbour (I should have done a third example to illustrate it better.)

I have personally admitted to very rarely being able to hear any difference, and what I heard wasn't a deadening of the sound (which I still believe is often psychoacoustic and related to the -6dB drop in level.) Although so much of the time these days I'm on laptop sound and headphones anyway…

Trying to upload something that does have some noticeable differences when comparing inverted/mixed waves but the ftp access to my site is making me want to throw hammers at people!

Right, I give up with my server; it seems OVH are having serious FTP upload issues (I can browse and delete fine but not a chance of uploading), so here it is on Mediafire.

http://www.mediafire.com/?4oiz51iext0i7zc

Process:

  • Select as high-quality a break loop as possible, with good top-end rides (24-bit, but unfortunately 44.1kHz is all I had.)
  • Load it into Renoise.
  • Add a Gainer on the Master set to +6.021dB (can we get the preset as +6.021 rather than +6.00 dB please?)
  • Play the sample at +3 semitones.
  • Render the song using both Cubic and Arguru's Sync.
  • Load the rendered files, play them at -3 semitones to return to the original pitch, and export again using both interpolation methods.

So Arguru's Sync is used twice in the AS file, Cubic is used twice in the C file; both have a double run of a 3-semitone change.

Personally I can't hear the difference, and doubt I would even if I had better than onboard chipset sound from my laptop through headphones, but you can see some differences when inverting and mixing with the original. This shows Arguru's Sync is definitely closer to the original waveform than Cubic. There may be some additional differences caused by the fact that 6.021dB isn't quite exact, but it is pretty close.
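(For what it's worth, +6.021dB is 20 × log10(2) ≈ 6.0206dB, i.e. an exact doubling of amplitude, which is why +6.00 isn't quite right.) And if anyone wants to put numbers on the invert-and-mix comparison, here's a small sketch, assuming two equal-length mono 16-bit wavs; the file names are placeholders:

```python
import numpy as np
import wave

def load(path):
    with wave.open(path, "rb") as f:
        data = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)
    return data.astype(np.float64) / 32768.0

a = load("render_arguru_sync.wav")
b = load("render_cubic.wav")
n = min(len(a), len(b))
residual = a[:n] - b[:n]          # same as mixing one file with the other inverted

print("peak difference:", np.abs(residual).max())
print("RMS difference :", np.sqrt(np.mean(residual ** 2)))
```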

So can anybody actually hear much subjective difference in the samples? (I should probably have given them random names…)

i just want to say, i don't really care if the issue gets resolved or not, i just enjoy reading all this stuff from people who understand way more of the technicalities of sound than i do, and understanding half of it, and hoping to understand it all when people stop replying to this thread :D

@dblue: your explanations are awesome and detailed and you should consider writing an article for Renoise:In Depth, as articles get posted there way too rarely (because WHEN an article is posted, it is always a great read, thanks to mr_mark_dollin for that most of the time i think)

@suva: thank you for the visual explanation of this interpolation stuff. i was going to ask for this and you beat me to it. i completely understand that now.

keep up the discussion guys! valuable information here :)

Basically this is correct; both are sub-par quality by today's standards, although there is a real harmonic difference between no interpolation and nearest neighbor interpolation. The former produces some wrong harmonics in the sound, because the up-ramps will get unnaturally sharp.

EDIT: Okay, maybe I was wrong. I sketched the idea and the only thing I came up with was some slight phase shift. Otherwise they seem to be identical. Can't say for sure, don't have time to dig in more deeply at the moment. :)

Audio Myths Workshop - http://www.youtube.com/watch?v=BYTlN6wjcvQ

Strongly recommended

Just for fun, I made this graph that shows the basics of each interpolation method:
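In the same spirit, here's a quick matplotlib sketch that plots the different methods through one set of stored samples (scipy's CubicSpline stands in for whatever cubic variant Renoise actually uses):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import CubicSpline

x = np.arange(8)
y = np.array([0.0, 0.8, -0.4, 0.9, 0.2, -0.7, 0.5, -0.1])   # arbitrary stored samples
xf = np.linspace(0, 7, 700)                                  # fine-grained read positions

plt.plot(xf, y[np.floor(xf).astype(int)], label="sample & hold")
plt.plot(xf, y[np.floor(xf + 0.5).astype(int)], label="nearest neighbour")
plt.plot(xf, np.interp(xf, x, y), label="linear")
plt.plot(xf, CubicSpline(x, y)(xf), label="cubic spline")
plt.plot(x, y, "ko", label="stored samples")
plt.legend()
plt.show()
```

(Note how the sample & hold and nearest neighbour traces are the same staircase, just offset by half a sample.)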

thaaaaaaaaaaaaaaaaaaaaaaaaaank you!

Yeah, as you can see from the sketch, sample-and-hold and nearest neighbor are identical except for a slight phase shift on the first one.

I'd have to start a completely different thread with a poll for it to really be useful then. Willing to, with a similar but different break, if people want. Admittedly, I personally couldn't hear the difference, especially on the lower quality set-up I use the majority of the time these days (and honestly doubt I would on my better, but still budget, monitoring system.)

Thanks for the graph dblue. Do we know how many samples Renoise compares against for Sinx/x (sinc) interpolation? (That is the same as Arguru's Sync, isn't it? Couldn't find any definitive answer.) Unlike the other interpolations, the full maths doesn't rely only on the sample points either side, but cascades out through decreasing positive and negative weights. Although I have to admit I've forgotten quite a lot of what I knew from when I studied it for video signals (it forms the basis for calculating colour and upscaling in pretty much all half-decent video systems out there; don't know if any of you know the terms 4:2:2 or 4:2:0 and what they mean…)
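I don't know Renoise's tap count either, but to illustrate why it matters, here's a generic windowed-sinc interpolator where the number of neighbouring samples is a parameter (Hann-windowed, edges clamped; real implementations differ in window choice and edge handling):

```python
import numpy as np

def sinc_interpolate(samples, pos, taps=16):
    """Estimate the value at fractional position `pos` from `taps` neighbours."""
    n0 = int(np.floor(pos))
    offsets = np.arange(-taps // 2 + 1, taps // 2 + 1)   # neighbours around pos
    idx = np.clip(n0 + offsets, 0, len(samples) - 1)     # crude edge handling
    x = pos - (n0 + offsets)                             # distance to each neighbour
    kernel = np.sinc(x) * np.hanning(taps)               # decaying +/- sinc weights
    return float(np.dot(samples[idx], kernel))

data = np.sin(np.linspace(0.0, 20.0, 200))
print(sinc_interpolate(data, 41.37, taps=8))    # fewer taps: rougher estimate
print(sinc_interpolate(data, 41.37, taps=64))   # more taps: closer to the true value
```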

i don't believe round robin sample playing is an issue here. velocity technically could be, as i know samplers tend to use bright samples and filter them with a low pass, opening it up more as the note velocity increases to simulate the brighter sound of a harder plucked/struck string… but i did not change the default velocity setting in the plug-in renderer dialog (127), which should bypass such an effect. also, this wouldn't account for the difference i find between trackers and basic sound editors (say, between a sample in Sound Forge and the same sample in Renoise or MOD Plug Tracker). comparing the plug-in to a rendering of the plug-in merely gives me the same problem i've seen for years in trackers. i'm trying to understand all the whys and hows. the first thing i thought could be the problem is that my samples needed to be high quality (to be able to represent the higher frequencies), but this seems to have less effect than i expected (high sample rate samples don't eliminate the problem, so i'm trying to get a wider understanding of everything that might contribute).

i tried a few different settings. even capturing a wide range of unique notes/samples didn't help. the difference wasn't in the note playback (performance) so much as the actual samples themselves sounding different. where in the audio chain does the renderer capture the source from? is it right before the final output to the sound card or somewhere else (ie: is the plug-in audio out different from a tracker instrument audio out)?

my understanding is that "getting different notes" was a matter of changing playback rate. why is resampling done? is it because the samples need to be matched to a consistent final sound card output? ie: if a sample is at 44.1KHz, and i play a note at 40KHz, will that note's sample data be resampled to output to a sound card running at 44.1KHz, or will the sample stay as is? i don't know the technical programming details here, so this is my guess. i see that the samplers in Reason have switches for "High Quality Interpolation" on them, so that's the same as choosing different interpolation settings in Renoise, right?

the reasoning seems sound, but my experience appears to be different. i think the fundamental frequency matters in this case. the "dulling", as i've been calling it, seems most noticeable on sounds that are at relatively high notes, or noisy ones like hi-hats and snares.

that's a really good idea!

i'm very sorry i've not posted examples. i know i really should. i'm dealing with a lot of cognitive interference issues from getting off a medication that was screwing me up, so it's very difficult for me to switch between different kinds of tasks… the idea of doing all this work to make demonstrations has been daunting. i know that in the end it's the only way to move this conversation forward. thanks VERY much for your patience with me thus far. i greatly appreciate it! :)

absolutely. this is a great example of what i'm dealing with, only this demonstration is much more exaggerated than "the real thing." but it's safe to say that what i'm perceiving is very similar to a low-pass filter effect. it is definitely much more noticeable to me on the 880Hz sample, as well.

i perceive no difference at all, though i see slight shifting in the waveform display. so whatever the difference, i'm not hearing it ^_^

you ROCK!! Thank you!! i'm a kinesthetic learner. that said, i'm an extremely visual person. if someone can make a visual representation of a concept, i am FAR more likely to comprehend it. thanks so much! now i have an idea of what each of these interpolation techniques really is all about!! :) B) :D :lol: :yeah:

ok, so i did the white noise test. 16-bit 44.1KHz, stereo. no difference at all between Sound Forge and MOD Plug Tracker or Renoise. i'm REALLY STUMPED now. <_< if it's not based on attenuation of frequencies… what the heck am i experiencing?? as soon as i run into an example of a difference that i can produce demo WAVs of, i will do so. this is bugging the crap out of me. you're all being very great about this conversation and i really appreciate it. i hope to eventually answer this whole thing for myself once and for all!