[done 3.1] Instruments with FX are tied to one track?

I just realized that instruments which include FX can only be played on one track at a time!!! What the deuce??? This makes them completely useless for drums (unless of course you never intend to properly mix your drums), doesn’t it? Is this a bug, or is it (broken) by design?
I must clearly be overlooking something obvious here. Could someone please clarify this for me?

I don’t see a way to route the instrument FX chains to the mixer tracks… but you can still program your drums on multiple tracks as normal and then group them. Is that what you mean?

You can mix your drums within the instrument using different FX chains, which is pretty much the same as it was before. It would be nice if multi-outs were implemented as well.

it is a limitation by design (primarily to avoid excessive CPU overhead).

if you put all effects inside the instrument on different chains for different drum pieces (e.g. one chain for the bass drum, one for the snare, one for the hi-hats, and so on), then you don’t need to add any other effect outside in the track.

if you instead keep working as you did before in 2.8 (i.e. putting all effects in tracks/groups), then there is no difference from the past.

the current design is not “broken”: it is the same limitation you have with VST instruments; multitimbral VST instruments are similar to an XRNI with multiple FX chains inside.

if the first way of working explained above does not fit your needs (for example because you want to share some effect such as a reverb among more than one chain), then just keep doing as you did before, organizing with tracks and group sends.
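To make the first workflow above concrete, here is a minimal sketch (plain Python with made-up names, not the Renoise scripting API) of a drum kit that is mixed entirely inside the instrument: every drum piece gets its own FX chain, so the pattern track that plays it needs no devices of its own.

```python
# Hypothetical model of an XRNI drum kit mixed entirely inside the instrument.
# Names are illustrative only; this is not the Renoise scripting API.

drum_kit = {
    # FX chain name -> DSP devices applied inside the instrument
    "fx_chains": {
        "Kick":  ["EQ", "Compressor"],
        "Snare": ["EQ", "Reverb"],
        "Hats":  ["EQ"],
    },
    # sample -> FX chain it is routed through
    "samples": {
        "kick.wav":  "Kick",
        "snare.wav": "Snare",
        "hat.wav":   "Hats",
    },
}

# Because every drum piece already has its own chain, the pattern track
# that plays this instrument needs no DSP devices of its own.
```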

of course this would solve the problem, but it would make instruments more dependent on songs.

I noticed this from the start… and it is a bit of a letdown…
I mostly spread kick, hi-hat, snare etc. across multiple tracks to keep an overview.
Sadly enough, when using the new instrument design this isn’t possible anymore…
The thing is, with multitimbral VSTis it doesn’t matter on which track you put the note data, as long as you define an output; if you don’t define an output, the track where the data is placed = the output.
SO +1 for defining outputs

This is not completely true… take Microtonic, for example: it has eight drum sounds, starting from C2 up to G#2 (they can also be routed to different MIDI channels to play each part chromatically, but that’s irrelevant in this example).
I can place the notes across as many tracks as I like, as long as I define an output, for example output 1: all sound will then be routed to mixer channel 1.
This is currently not possible with the new instrument design: spreading notes across multiple tracks will give unexpected behaviour.
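For illustration, here is a rough sketch of the multitimbral routing being described; the names and the exact note layout are hypothetical, not Microtonic’s or Renoise’s actual API. The point is that the pattern track only carries note data, while the audio always goes to the output defined on the plugin.

```python
# Rough sketch of multitimbral plugin routing (hypothetical names and note layout).
# Eight drum sounds mapped to consecutive notes, all routed to one chosen output.

DRUM_NOTES = ["C-2", "C#2", "D-2", "D#2", "E-2", "F-2", "F#2", "G-2"]  # 8 sounds

OUTPUT_CHANNEL = 1  # the single mixer channel the plugin output is assigned to

def route(note, pattern_track):
    """The pattern track only carries note data; the audio always goes to the
    output defined on the plugin, never to the track the note happens to sit on."""
    drum_index = DRUM_NOTES.index(note)
    return drum_index, OUTPUT_CHANNEL

print(route("D-2", pattern_track=5))  # -> (2, 1), regardless of the pattern track
```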

This shouldn’t happen. What do you mean by “unexpected behaviour”? Could you provide an example XRNS?

It might be OK to protect the user with good default behaviour, but imposing artificial limitations by design on advanced users makes the software feel … artificially limited by design :)

I think a setting, “Max simultaneous tracks” (default=1) on instruments with FX would relieve the tension a little. For me at least. I have strange needs, folks.

It’s a bit more complicated than that, and not artificial. To play the same set of DSPs simultaneously on X tracks, you would need X instances of the DSPs under the hood to play this back correctly.

An example: let’s say you have a reverb DSP in an instrument. If you play this instrument on 2 tracks at once and want to route each output of the instrument into a separate track again, there must actually be 2 instances of the reverb running in parallel within the instrument to do so. Otherwise there’s no way to split the signal that has been fed through the reverb back into the two tracks’ inputs.

In theory this could be done by creating as many DSP chains under the hood as there are tracks in the song, just in case you’re going to play the instrument on multiple tracks. In practice this won’t work, because it means that when you add 1 single DSP to one instrument, you’re actually instantiating 10 or more of them (as many as there are tracks in the song). This eats up way too many resources, and it also means all those 10 (or more) DSPs have to be magically kept in sync somehow.
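A back-of-the-envelope sketch of why that naive approach explodes; the numbers and names are purely illustrative.

```python
# Purely illustrative estimate of the naive approach: pre-instantiate a private
# copy of every instrument DSP chain for every track, "just in case" the
# instrument gets played there.

def naive_dsp_instances(num_tracks, dsps_in_instrument):
    """One hidden copy of all instrument DSPs per track in the song."""
    return num_tracks * dsps_in_instrument

# Adding a single reverb to one instrument in a 10-track song:
print(naive_dsp_instances(num_tracks=10, dsps_in_instrument=1))   # -> 10

# A modest kit with 6 devices across its chains, in a 32-track song:
print(naive_dsp_instances(num_tracks=32, dsps_in_instrument=6))   # -> 192

# Every one of those hidden copies would also have to be kept in sync
# (parameters, automation, internal state) with the one the user actually edits.
```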

What we need instead is to make it very clear that there is such a limitation. You’re not the first, and surely not the last, to stumble upon this problem - or limitation.

I understand the concept of mono/poly processing of voices, and indeed polyphonic processing adds an effect instance for each voice, up to the given number of poly/effect voices, thus increasing CPU usage.
But why can’t we use multiple tracks solely for overview, and still route the output to the master instrument track?
The added tracks would just be there to spread the notes across multiple tracks (no audio stream).

I understand how taking an unoptimized/naive approach would result in a crazy amount of redundant DSPs. I was thinking “Auto Suspend” (as per the existing VSTi feature) might mitigate that, but perhaps there are prohibitive overheads even with that :-S

Anyway, thanks for taking the time to explain, I appreciate it.

I think a simpler approach would be to allow FX chains to be routed to different tracks. Currently, FX chains have a dropdown to select which output on the sound card an effect chain is routed to; this could instead be a list of available tracks (sound card routing would then be done on the mixer). If a sample is not assigned to an FX chain, it would just play out of whatever pattern track it is played on.

This approach could also be applied to Redux in the future (assuming it will be a multitimbral plugin).

Currently Renoise offers better audio routing for multi-out VST plugins than it does for its own native sampler.
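A minimal sketch of how the per-chain routing table proposed above might look; the structure and names are hypothetical, not an existing Renoise feature.

```python
# Hypothetical per-chain routing table for the proposal above: each FX chain in
# the instrument picks a target track instead of a sound-card output, and any
# sound-card routing is left to the mixer.

fx_chain_routing = {
    # FX chain name -> destination pattern/mixer track
    "Kick":  "Track 01",
    "Snare": "Track 02",
    "Hats":  "Track 03",
}

def output_track(sample_chain, pattern_track):
    """Samples assigned to a chain follow the chain's routing; samples without
    a chain simply play out of whatever pattern track triggered them."""
    if sample_chain is None:
        return pattern_track
    return fx_chain_routing[sample_chain]

print(output_track("Snare", pattern_track="Track 07"))  # -> 'Track 02'
print(output_track(None, pattern_track="Track 07"))     # -> 'Track 07'
```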

Yeah, I only figured this out yesterday, and it’s sad, because when I create a drum kit I want to spread it across tracks because of the pattern matrix (so I can simply switch the hi-hat tracks, or turn them off, for example). But I also want to have reverb on the snares, etc.
I used to create a drum group from these tracks and then add a bus compressor on its output, which is something I cannot do even in the instrument editor (I can’t route the different FX chains into one chain where the bus compressor would sit). So for now I have to program the drums in a single track that holds the bus compressor, which is more uncomfortable than creating separate tracks for the different sounds, so I can’t use these nice features of 3.0. I have to decide between the more advanced sound possibilities of 3.0 and the great matrix editing of R2.8.
I understand why it is a complicated problem, but it should be solved, for example in the way afta8 suggests.

(It is funny how the advanced features in the great 3.0 raise more problems and suggestions than the more limited R2.8 did; what 2.8 did, it did perfectly, while R3 is somehow more controversial in this respect.)

as I said above, this would certainly solve the problem. I favour this approach.

What if I want to use the bundled drum kits that now come with Renoise in a mix? I cannot! They only play on one single track, so all the drums have to go into one track. If I then get to the mixing stage, I see all my drums on one single track in the mixer. This just does not seem right to me.

Fully agree with this! I actually expected this to be the case, as the beta announcement already mentioned that the effect chains double as a routing matrix…

The funniest thing is that now, two weeks after release, some of those who couldn’t cope with the criticism Ren. 3 received (to them it was all pure awesomeness) are finally starting to see the little cracks.

This design somehow reminds me of those drum tracks in Energy XT2… they sound nice on paper (self-contained and everything), but in reality (that is, when actually making music) they don’t help at all; quite the contrary, they prevent me from mixing properly. You end up splitting all the single drums into different instruments again… just to make mixing work. But that of course doesn’t make drum kits nicely distributable. So I am afraid we won’t see any good (or even commercial) drum kits in Renoise format in the future.

Really? I thought Renoise could do multi-outs for VSTis? So you are saying I cannot even use another VSTi sampler to get around this?

I think It-Alien is wrong…
If you have a multitimbral VST, it does not matter on which track you put your data, as long as you specify a MIDI channel and an output, assuming the VST supports multiple outputs.
VSTis still have multi-out…
Nothing changed from 2.8 in that regard.
You can still use your multitimbral/multi-output VSTs.