This is my attempt to define what I would consider the perfect implementation of the Renoise sampler, in light of its current problems and limitations.
While there are many ideas floating around in several threads, I thought it might be a good idea to write them all up in one place, since I think they also depend on each other in one way or another.
Output routing
This seems to be a bit of a tricky subject. In the past, Renoise allowed an instrument to be played on every track, i.e. wherever a note was placed in the sequencer, that is where the audio was routed. So basically, there was entirely free audio routing for the sampler.
In 3.0 this behaviour changed, but only for instruments that include FX: an instrument with FX can only output audio to one track. Which track that is, is not well defined at the moment.
This could be solved by allowing instruments to route audio to different tracks when FX are used (if not, the whole thing behaves like it did in 2.8). So: not using FX → old 2.8 behaviour; using FX → VSTi behaviour (see also the next point)!
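Just to make the idea concrete, here is a rough Python sketch of the proposed rule. All names here are made up by me for illustration; this is not how Renoise is actually implemented.

```python
from dataclasses import dataclass, field

# Hypothetical model: an instrument with zero or more FX chains and a
# fixed output track that only matters when FX chains are present.
@dataclass
class Instrument:
    fx_chains: list = field(default_factory=list)  # names of FX chains, if any
    assigned_output_track: int = 0                 # fixed output when FX are used

def resolve_output_track(instr: Instrument, note_track: int) -> int:
    """Proposed rule: no FX -> follow the note's track (old 2.8 behaviour);
    FX present -> fixed, VSTi-like routing, independent of the note's track."""
    if not instr.fx_chains:
        return note_track
    return instr.assigned_output_track
```

So a dry instrument played on track 7 would sound on track 7, while the same instrument with an FX chain would always sound on its assigned output track.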
Note data routing
Crucially, this is different from the point above. Irrespective of how the sound from the instrument is routed, the note data can be placed on different tracks.
As mentioned above, in 2.8 there was a one-to-one correspondence between note data and audio output.
Since this is not the case anymore for 3.0 (when FX are used), I think the behaviour should at least be changed to mimic that of VSTis: notes can be placed anywhere, but the audio routing is independent (see above).
Mute groups
This seems to be partly related to the output routing. In the current implementation, mute groups only function within a track. That is, when not using FX, mute groups will work if you keep the samples to be muted within the same track.
When using FX, however, things should work as follows: we should be able to route each FX chain to a different output (see also two points above). As soon as one or more output routings exist (i.e. one FX chain is in use), the mute group behaviour should switch to a global mode (because a limited number of track outputs is now defined, the computational overhead problem is gone!) and samples in one mute group should be muted irrespective of where the notes are placed (EDIT: …and irrespective of where the audio is output to!).
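To illustrate the two modes, here is a small Python sketch (again, all names are hypothetical): per-track mute groups when no FX chains are used, global mute groups as soon as output routings exist.

```python
# Hypothetical sketch of the proposed mute-group behaviour.
def voices_to_mute(active_voices, new_voice, instrument_has_fx):
    """Return the active voices that a newly triggered voice should cut.

    Without FX chains, mute groups act per track (current behaviour);
    with FX chains (i.e. fixed output routings), they act globally,
    irrespective of which track the notes were placed on.
    """
    cut = []
    for v in active_voices:
        if v["mute_group"] != new_voice["mute_group"]:
            continue  # different mute group, never cut
        if instrument_has_fx or v["track"] == new_voice["track"]:
            cut.append(v)
    return cut
```

The classic example would be a closed hi-hat choking an open hi-hat: in global mode it would work even if the two notes sit on different tracks.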
“Tracker” vs. “Live” mode
This is how danoise seems to see the current sampler problem:
http://forum.renoise.com/index.php?/topic/40793-nna-action-doesnt-work-when-playing-from-keyboard/page__view__findpost__p__310513
Basically, all the points I listed so far have one thing in common: they are all somehow related to the output routing problem. Thus, the difference between old behaviour and new behaviour (or tracker vs. live mode) is, in my opinion, whether output routings (via FX chains) are defined or not. As explained above, the mute groups, for instance, would work differently depending on this “mode”. Also, Redux would of course only work with output routings.
Modulation concept
Not much to say. Bit_arts explained this very well here:
Maybe “sample groups” would be a solution here, i.e. the possibility to group together a bunch of samples which share the same modulation (see Shortcircuit 1 for an example).
Another thing that should be modulatable is the sample/loop start and end points.
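A rough sketch of what I mean by sample groups, in Python (purely hypothetical data model, not Renoise's): samples in one group share a modulation set, and loop points are plain parameters, so an envelope or LFO could target them like any other destination.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical data model for "sample groups".
@dataclass
class Sample:
    name: str
    loop_start: int = 0  # in frames; exposed as a modulation target
    loop_end: int = 0

@dataclass
class SampleGroup:
    samples: List[Sample] = field(default_factory=list)
    # which parameters the group's shared modulation set drives
    modulation_targets: List[str] = field(default_factory=lambda: ["volume"])

def modulate_loop_start(group: SampleGroup, offset: int) -> None:
    """Apply one modulation value to every sample in the group at once,
    clamping so the loop start never goes below zero."""
    for s in group.samples:
        s.loop_start = max(0, s.loop_start + offset)
```

The point is that one modulation source moves the loop start of the whole group together, instead of configuring each sample separately.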
Optional: DFD (direct from disk streaming of samples)
It would generally be nice to have, since it allows big instruments to be loaded without keeping all their sample data in RAM. This would be in addition to the default from-memory mode, of course.
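The usual DFD idea, as I understand it, is to keep only a small preload buffer of each sample in RAM so playback can start instantly, and stream the rest from disk on demand. A toy Python sketch of that scheme (file handling only, no real-time concerns):

```python
# Hypothetical sketch of direct-from-disk streaming: yield the preloaded
# head of the sample first, then read the remainder in chunks.
def stream_sample(path, preload, chunk_size=4096):
    """'preload' is the first part of the file, already held in memory."""
    yield preload  # plays immediately, no disk latency
    with open(path, "rb") as f:
        f.seek(len(preload))  # continue where the preload buffer ends
        while chunk := f.read(chunk_size):
            yield chunk
```

A real implementation would of course read ahead in a background thread and decode audio rather than raw bytes; this only shows the preload-then-stream split.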
Optional: SFZ support
It seems to be a widely used (and open!) format, so supporting it would certainly be a plus.
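Part of SFZ's appeal is that it is just a plain-text format of headers like <region> plus key=value opcodes. A minimal Python sketch of parsing that shape (heavily simplified: it ignores values containing spaces, such as sample paths with spaces, and only handles a fraction of what a real importer would need):

```python
# Minimal sketch of parsing SFZ's text format: <header> markers and
# key=value opcodes, with // comments stripped.
def parse_sfz(text):
    sections = []
    current = None
    for raw in text.splitlines():
        line = raw.split("//", 1)[0].strip()  # drop comments
        if not line:
            continue
        # tokens are either <header> markers or key=value opcodes
        for token in line.split():
            if token.startswith("<") and token.endswith(">"):
                current = {"header": token[1:-1]}
                sections.append(current)
            elif "=" in token and current is not None:
                key, value = token.split("=", 1)
                current[key] = value
    return sections
```

Even this much is enough to read simple drum-kit mappings, which shows how approachable the format is compared to proprietary sampler formats.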
Well, I hope this gives a good idea of my thoughts on the topic (in one place, rather than scattered across many threads). I hope they are helpful to taktik and his crew in some way.