Per-voice Sample FX chain

Currently we have a per-sample effect chain which applies across all simultaneous “instances” of that sample.

Would we be able to have a separate effect chain which applies to each one of those instances independently?

One important use case I imagine would be adding distortion to a sample without introducing interference between different notes of the same sample playing at once. (Sure, you could just distort the underlying sample, but then you’d lose dynamic/macro control over it.)
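To make the interference point concrete, here is a toy sketch (not Renoise's actual DSP; the tanh waveshaper, sample rate, and note frequencies are made up for illustration). Because distortion is nonlinear, distorting the mix of two notes is not the same as distorting each note and then mixing:

```python
import math

def distort(x, drive=4.0):
    # Simple tanh waveshaper -- a stand-in for any nonlinear effect (hypothetical)
    return math.tanh(drive * x)

# Two simultaneous "instances" of the same sample, playing different notes
sr = 1000
n = 64
voice_a = [0.5 * math.sin(2 * math.pi * 220 * t / sr) for t in range(n)]
voice_b = [0.5 * math.sin(2 * math.pi * 277 * t / sr) for t in range(n)]

# Shared chain: the distortion sees the summed signal,
# so the two notes intermodulate inside the waveshaper
shared = [distort(a + b) for a, b in zip(voice_a, voice_b)]

# Per-voice chain: each note is distorted independently, then mixed
per_voice = [distort(a) + distort(b) for a, b in zip(voice_a, voice_b)]

# The outputs differ because tanh(a + b) != tanh(a) + tanh(b)
max_diff = max(abs(s - p) for s, p in zip(shared, per_voice))
print(f"max sample difference: {max_diff:.3f}")
```

The difference is exactly the "interference between different notes" described above: only the per-voice version sounds like each note was distorted on its own.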

instrument effects are per-voice already, otherwise you wouldn’t get an alternating delay as in this example:

4472 per voice delay.xrns

or am I missing something?

Marty’s description is correct, in that FX chains do not care about the polyphony of a signal and simply receive an audio stream to process.
Even meta devices like the Key Tracker or Velocity Tracker can only represent a single value at any given time.

So, the suggestion is basically that an instrument should produce multiple audio streams, and that each FX chain would then be capable of dynamically “cloning” itself to handle each voice independently. Actually, I think I’ve heard talk about the internal structure in Renoise being able to handle this, but that it would make people’s computers explode?
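The “cloning” idea could be sketched like this (a hypothetical illustration, not how Renoise is actually structured internally; the class names and `copy.deepcopy` approach are my own invention). The chain keeps one private copy of its effects per active voice, processes each voice through its own copy, and the outputs are mixed afterwards:

```python
import copy
import math

class Distortion:
    """Example effect with per-instance state/settings (hypothetical)."""
    def __init__(self, drive=4.0):
        self.drive = drive

    def process(self, sample):
        return math.tanh(self.drive * sample)

class PolyChain:
    """Sketch of an FX chain that dynamically clones itself per voice."""
    def __init__(self, template_fx):
        self.template = template_fx
        self.per_voice = {}  # voice id -> that voice's private effect copy

    def process(self, voice_id, sample):
        # Lazily clone the chain the first time a voice appears
        if voice_id not in self.per_voice:
            self.per_voice[voice_id] = copy.deepcopy(self.template)
        return self.per_voice[voice_id].process(sample)

# Two voices each get their own Distortion instance; mixing happens afterwards
chain = PolyChain(Distortion())
mixed = chain.process(0, 0.5) + chain.process(1, 0.5)
```

The “computers explode” worry is visible here too: every active voice multiplies the CPU and memory cost of the whole chain, which is presumably why this isn’t done by default.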

Still, I think the concept has been proven to work in software like Audulus; there, I believe nodes transparently adapt to a monophonic or polyphonic signal.

PS: just noticed that the video is by afta8. Nice work m8 :slight_smile: