FX Rendering Not Same As Real-Time

Hey all, I'm hoping for either a fix, or for enough people to have noticed this to prompt a change.
I've been using Renoise for years; I'm a registered customer and use it professionally, but I've had one major issue with it from the start. If I have, say, a Lo-Fi effect and maybe some EQ and other plug-ins running on a kickdrum sample, the playback in real-time sounds one way. If I then try to render that same set of FX to the WAV, it never sounds the same, no matter what bit depth or resolution I select. It wouldn't be an issue, but as I'm about to take my new album on the road I'm bouncing unnecessary FX down to static WAVs to free up CPU time, and I lose elements in the transition.
If anyone has any similar feedback on this, please feel free to reply. I have 26 days until I play support for Autechre on my album launch night, 23rd April, and am currently optimising my live set.

Also, lately the new versions of Renoise seem to be creating much larger files and loading/saving much more slowly.

If I have posted to the wrong part of the forum, I apologise in advance. Long-time user, first-time poster.

anodyne / puresthatred
Psychonavigation / Mantrap / Skam / Ultramack Records

Do you mean render to selection or final render?
I have not experienced this behaviour. One way you could work around it is to just record the track (sounds) into a wave editor, if your soundcard allows this.

Does this happen all the time, no matter what plugins you use? For example, does it also happen when you're just using Renoise's native DSPs and no VST(i)s?

I bet a VST(i) plugin is the culprit here.

Sadly, this happens with Renoise's built-in FX as well as with external ones.
If you'd like to try it, load up a kickdrum sample and run the Lo-Fi effect on it.
Put one single note into the pattern and listen to how it sounds.
Now apply the FX to the sample directly using the FX button in the sample editor.
It never sounds the same, regardless of the bit depth or resolution I use in the original sample.

I'm guessing the FX are run through a different engine in real-time, one that gets bypassed when rendering to the sample. As someone mentioned, it's tedious to have to record and re-record externally when all I want to do is bounce a single effect down to a single sample, say a kickdrum through a static Lo-Fi effect, and have it sound the same. The rendered version always has a different sound to the real-time one. It's like it's sampled at a different rate.
I'm not new to audio editing and recording; I've been a producer for 18 years. I'm open to suggestions, but hoping others have noticed this issue, as it happens on multiple machines. I've tested it on an iBook, a Sony Vaio, my quad core, etc., all with different soundcards and operating systems, and it's the same problem on every version of Renoise since I started using it.

I'm using the kickdrum / Lo-Fi FX as an example; it's not the only plug-in this affects. Some are more noticeable than others.

Just hoping for a solution someday, as it's been bugging me for about 5 years.

Can't confirm here; I just did a test and everything sounds the same. I have had experiences in the past where there were discrepancies, but that was because of internal LFOs in the effect that changed over time (like with flanger effects or whatnot).

Hope you are aware that after you have pressed FX in the sample editor, you need to switch off the DSPs/VSTs in the Track DSPs tab to hear the rendered-in effects as intended. Otherwise they'll 'double up' and you'll indeed get a different result than expected!

I think you’ve basically answered the question here yourself.

(Forgive the rambling now, haha… it gets a bit weird when constantly mentioning samples and sample rates in the same sentence)

During live playback, the DSP effects are being processed at whatever sample rate Renoise’s audio engine is set to. During WAV render/export, the DSP effects are processed at whatever sample rate you choose there. In both of these situations, the instruments/samples in the song are being resampled/interpolated so that their overall output is at the correct sample rate and in sync with the audio engine, and then the DSPs are applied to that resampled output. When using the “Process Track DSPs” function, this resampling does not take place (as far as I can tell), and so the raw sample data simply gets processed as if it did match the sample rate of the audio engine, when in fact it may be totally different. (Your 44100Hz sample may get processed at 88200Hz, for example)
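To make that concrete, here's a minimal NumPy sketch (my own illustration, not Renoise's actual code): a one-pole low-pass filter derives its coefficient from whatever sample rate it is told it's running at, so feeding it data that is really at a different rate shifts the effective cutoff by exactly that ratio.

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, fs):
    # The coefficient depends on fs, i.e. on the sample rate the
    # filter *believes* the incoming data is at.
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)
    y = np.zeros_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)
        y[i] = acc
    return y

x = np.random.randn(44100)  # one second of noise at 44100Hz

# Correct: coefficient computed for the rate the data is really at.
y_ok = one_pole_lowpass(x, 1000.0, fs=44100.0)

# Mismatched: a 1kHz cutoff computed for an 88200Hz engine, applied
# to 44100Hz data, behaves like a 500Hz cutoff on playback - every
# per-sample time constant is stretched by the rate mismatch.
y_bad = one_pole_lowpass(x, 1000.0, fs=88200.0)
```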

Let’s say that Renoise’s audio engine is set to play at 44100Hz. Then we have a sample which is also 44100Hz, and we make 2 extra copies of it: one copy which we adjust to have a sample rate of 22050Hz, and the other copy adjusted for 88200Hz. If we now use the “Process Track DSPs” function to apply an effect such as Delay to all 3 versions of the sample, the result is that the 44100Hz version is of course delayed correctly, but the 88200Hz version has a delay which is twice as fast, and the 22050Hz version has a delay which is twice as slow. This is obviously due to the mismatch between the audio engine sample rate and the sample rate of the sample itself.
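Put into numbers (using a hypothetical 250ms delay time, nothing Renoise-specific): the DSP converts time to samples using only the engine rate, so the same buffer length represents a different amount of time at each file's own rate.

```python
ENGINE_RATE = 44100   # what the audio engine is set to
DELAY_MS = 250.0      # hypothetical delay time inside the DSP

# The DSP sees only the engine rate when sizing its buffer:
delay_samples = int(ENGINE_RATE * DELAY_MS / 1000.0)   # 11025 samples

# Those 11025 samples last a different time at each file's own rate:
for file_rate in (22050, 44100, 88200):
    actual_ms = 1000.0 * delay_samples / file_rate
    print(f"{file_rate}Hz file: delay plays back as {actual_ms:.0f}ms")

# 22050Hz file: delay plays back as 500ms  (twice as slow)
# 44100Hz file: delay plays back as 250ms  (correct)
# 88200Hz file: delay plays back as 125ms  (twice as fast)
```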

You would also run into this issue if you have a sample (such as your kick drum) which does match the audio engine sample rate - where both audio engine and sample are set to 44100Hz, for example - but in your song you’re actually playing notes at pitches which do not match the sample’s base note. The sample data itself is only 44100Hz when you play a C4 note, but your song may contain a G4, or a D5, or whatever. In effect, this is also creating a sample rate mismatch. You might use “Process Track DSPs” to modify your original 44100Hz sample data, but then you play it back at a G4 or D5 note and it sounds weird, because you are then playing the modified sample at the incorrect sample rate.
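The pitch case is the same ratio in disguise. Assuming standard equal-temperament repitching (each semitone scales playback speed by 2^(1/12) - a general fact, not anything Renoise-specific), a quick sketch:

```python
BASE_RATE = 44100.0   # sample's own rate, matching the engine

def playback_rate(semitones_above_base):
    # Equal temperament: each semitone multiplies speed by 2**(1/12).
    return BASE_RATE * 2.0 ** (semitones_above_base / 12.0)

# C4 is the base note; G4 is +7 semitones, D5 is +14:
for note, semis in (("C4", 0), ("G4", 7), ("D5", 14)):
    print(f"{note}: sample data consumed at {playback_rate(semis):.0f}Hz")

# C4: sample data consumed at 44100Hz  (baked-in FX timing correct)
# G4: sample data consumed at 66075Hz  (baked-in FX ~1.5x too fast)
# D5: sample data consumed at 99001Hz  (baked-in FX ~2.24x too fast)
```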

It’s tricky to say what could be done here. To process the DSPs correctly when using the “Process Track DSPs” function, the sample itself must first match the sample rate of the audio engine. Should Renoise automatically modify and resample the sample first before processing the DSP effects? Should it temporarily process the actual DSPs at a different sample rate instead, so that the sample data itself remains in its original format? Either way, there are going to be mismatches at some point, so I’m not sure if there’s a perfect solution.

However you choose to approach it, the bottom line is that the sample rate (and base note) of the sample needs to match the sample rate of the audio engine, in order for the DSP effects to be applied “correctly”.

Perhaps in the future, Renoise can take into account the sample’s base note and original sample rate, and somehow compensate these in order to match the output of the audio engine. It may not be perfect for every situation, but maybe it would feel a bit more correct for most uses.
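For what it's worth, here is roughly what that compensation could look like as an offline workaround, sketched with SciPy's resample_poly (the apply_fx_safely helper is hypothetical, not a Renoise API): resample the sample to the engine rate before applying any time-dependent effect, so the baked-in timing matches real-time playback.

```python
import numpy as np
from math import gcd
from scipy.signal import resample_poly

ENGINE_RATE = 44100

def apply_fx_safely(data, sample_rate, fx, engine_rate=ENGINE_RATE):
    # Hypothetical helper: resample to the engine rate *before*
    # applying a time-dependent effect; fx is any offline effect
    # that assumes engine-rate input.
    if sample_rate != engine_rate:
        g = gcd(engine_rate, sample_rate)  # integer up/down ratio
        data = resample_poly(data, engine_rate // g, sample_rate // g)
    return fx(data), engine_rate  # result is now tagged at the engine rate

# e.g. a 22050Hz sample is upsampled to 44100Hz before processing:
raw = np.random.randn(22050)  # one second at 22050Hz
processed, new_rate = apply_fx_safely(raw, 22050, lambda x: x)
assert new_rate == ENGINE_RATE and len(processed) == 44100
```

Base-note compensation would work the same way with a fractional ratio; in practice, simply keeping samples at the engine rate and playing them at their base note sidesteps the whole problem.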

Why did you wait so long to bring it up? :)


I think dblue’s post goes some way towards explaining why I’ve experienced the exact same problem in EVERY instance, and have simply got right out of the habit of trusting the ‘Apply FX’ button. I’ve sadly just got used to rendering to sample, which is a considerable workflow disruption…

Good to know why, anyway :)