I think you’ve basically answered the question here yourself.
(Forgive the rambling here, haha… it gets a bit awkward when you constantly have to mention samples and sample rates in the same sentence.)
During live playback, the DSP effects are being processed at whatever sample rate Renoise’s audio engine is set to. During WAV render/export, the DSP effects are processed at whatever sample rate you choose there. In both of these situations, the instruments/samples in the song are being resampled/interpolated so that their overall output is at the correct sample rate and in sync with the audio engine, and then the DSPs are applied to that resampled output. When using the “Process Track DSPs” function, this resampling does not take place (as far as I can tell), and so the raw sample data simply gets processed as if it did match the sample rate of the audio engine, when in fact it may be totally different. (Your 44100Hz sample may get processed at 88200Hz, for example)
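Not Renoise’s actual code, obviously - just a little Python sketch of the two paths as I understand them, with a naive linear resampler and a toy single-tap delay standing in for the real DSP chain (all of the names here are made up):

```python
import numpy as np

def linear_resample(data, from_rate, to_rate):
    """Naive linear-interpolation resampler, for illustration only."""
    n_out = int(round(len(data) * to_rate / from_rate))
    return np.interp(np.linspace(0, len(data) - 1, n_out),
                     np.arange(len(data)), data)

def toy_delay(data, rate, delay_seconds=0.5, mix=0.5):
    """Toy single-tap delay; the tap length is a number of sample frames,
    computed from whatever rate the caller *claims* the data is at."""
    tap = int(delay_seconds * rate)
    out = np.zeros(len(data) + tap)
    out[:len(data)] += data
    out[tap:] += mix * data
    return out

def live_playback_path(sample, sample_rate, engine_rate=44100):
    # Playback / WAV render: resample the sample to the engine rate first,
    # then run the DSPs on that resampled output.
    return toy_delay(linear_resample(sample, sample_rate, engine_rate), engine_rate)

def process_track_dsps_path(sample, sample_rate, engine_rate=44100):
    # "Process Track DSPs" (as far as I can tell): the raw sample data goes
    # straight into the DSPs, which assume it is already at engine_rate.
    # sample_rate is never used here, which is exactly the problem.
    return toy_delay(sample, engine_rate)
```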
Let’s say that Renoise’s audio engine is set to play at 44100Hz. Then we have a sample which is also 44100Hz, and we make 2 extra copies of it: one copy adjusted to a sample rate of 22050Hz, and the other adjusted to 88200Hz. If we now use the “Process Track DSPs” function to apply an effect such as Delay to all 3 versions of the sample, the 44100Hz version is of course delayed correctly, but the 88200Hz version ends up with a delay which is twice as fast, and the 22050Hz version with a delay which is twice as long. This is simply due to the mismatch between the audio engine’s sample rate and the sample rate of the sample itself.
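To put some rough numbers on it (the 500ms delay time is just an example I picked): the delay tap gets written into the data as a fixed number of sample frames, calculated at the engine rate, but how long those frames take to play back depends on the sample’s own rate:

```python
engine_rate = 44100           # Hz - what the DSP assumes it is working at
delay_seconds = 0.5           # example delay time set on the Delay device
tap_frames = int(delay_seconds * engine_rate)   # 22050 frames baked into the data

for sample_rate in (22050, 44100, 88200):
    # Those same 22050 frames are then played back at the sample's own rate.
    audible_delay_ms = 1000 * tap_frames / sample_rate
    print(f"{sample_rate} Hz copy: delay sounds like {audible_delay_ms:.0f} ms")

# 22050 Hz copy: delay sounds like 1000 ms  (twice as long)
# 44100 Hz copy: delay sounds like 500 ms   (correct)
# 88200 Hz copy: delay sounds like 250 ms   (twice as fast)
```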
You would also run into this issue with a sample (such as your kick drum) which does match the audio engine’s sample rate - where both the audio engine and the sample are at 44100Hz, for example - but where your song plays notes at pitches which do not match the sample’s base note. The sample data is only truly 44100Hz when you play a C4 note, but your song may contain a G4, or a D5, or whatever. In effect, this also creates a sample rate mismatch. You might use “Process Track DSPs” to modify your original 44100Hz sample data, but then you play it back at G4 or D5 and it sounds weird, because you’re now playing the modified sample at the wrong effective sample rate.
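The arithmetic is the same for pitch: transposing by n semitones multiplies the effective playback rate by 2^(n/12), so the DSP’s assumption about the rate is wrong all over again (I’m assuming the usual C4 base note here):

```python
def effective_rate(semitones_from_base, base_rate=44100):
    """Effective playback rate of the sample data when transposed."""
    return base_rate * 2 ** (semitones_from_base / 12)

print(effective_rate(0))    # C4:  44100.0 Hz - matches the engine
print(effective_rate(7))    # G4: ~66075 Hz  - effectively a 66kHz sample
print(effective_rate(14))   # D5: ~99001 Hz  - more than double
```

So even if the stored data was processed “correctly” at 44100Hz, a G4 note plays that processed data back as if it were a ~66kHz file.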
It’s tricky to say what could be done here. To process the DSPs correctly when using the “Process Track DSPs” function, the sample itself must first match the sample rate of the audio engine. Should Renoise automatically resample the sample before processing the DSP effects? Or should it temporarily run the DSPs at the sample’s own rate instead, so that the sample data remains in its original format? Either way, there are going to be mismatches at some point, so I’m not sure there’s a perfect solution.
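For what it’s worth, the first option (resample the sample to the engine rate, process, then resample back so the file keeps its original rate) could look roughly like this - again just a sketch with a naive resampler, not a claim about how Renoise would actually do it:

```python
import numpy as np

def linear_resample(data, from_rate, to_rate):
    """Naive linear-interpolation resampler, for illustration only."""
    n_out = int(round(len(data) * to_rate / from_rate))
    return np.interp(np.linspace(0, len(data) - 1, n_out),
                     np.arange(len(data)), data)

def process_with_matched_rate(data, sample_rate, engine_rate, dsp):
    # Convert to the engine rate, run the DSPs there, then convert back so
    # the sample keeps its original sample rate. Both conversions lose a
    # little quality, which is the unavoidable trade-off mentioned above.
    at_engine_rate = linear_resample(data, sample_rate, engine_rate)
    processed = dsp(at_engine_rate, engine_rate)
    return linear_resample(processed, engine_rate, sample_rate)
```

The other option would flip this around: leave the sample data alone and tell the DSPs to run at the sample’s rate instead of the engine’s - but then the DSPs themselves have to be reconfigured for that rate, so the mismatch just moves somewhere else.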
However you choose to approach it, the bottom line is that the sample rate (and base note) of the sample needs to match the sample rate of the audio engine, in order for the DSP effects to be applied “correctly”.
Perhaps in the future, Renoise could take the sample’s base note and original sample rate into account, and somehow compensate for them so that the result matches the output of the audio engine. It may not be perfect for every situation, but it would probably feel a bit more correct for most uses.
Why did you wait so long to bring it up?