I am satisfied with mute-group only applying on a per-track basis. I think taktik said it well: if we for some reason would want to sidestep the mute group, hey - that’s a good thing.
Then we can still play our open hihat sample without getting it cut, and also, organizing each mute-group in a track seems to make sense from a purely “organizational” point of view.
Of course, that still leaves the following bit to be desired:
If we imagine that we got actual sample->track routing done in 3.x, then you could continue to use tracks to organize your “mute-grouped” samples, but use the track routing to apply mixing to the various samples in different (well, other) tracks.
Much like how any multi-output plugin works: no matter where your note is located, the actual audio will arrive in the tracks assigned within the instrument FX/routing panel.
I think initially, the decision to limit routing within the native instrument was done because of the freely roaming nature of the native sample instruments: put them anywhere, and they will generate audio in that track.
And keeping this level of flexibility would basically mean that every instrument's FX section would be applied to every track in the song (BOOM!! your CPU usage just went through the roof).
But really, even if having a fixed track routing is a lot less flexible, I think it makes sense - Redux already has this, and Renoise could get it too.
I don’t really understand this statement… If we, for some reason, would want to sidestep the mute group, we would simply not assign the sample to one…right?
Of course, that still leaves the following bit to be desired:
Well, that statement is a bit misleading, isn’t it? More correct would be: every instrument’s FX section would be applied to every track in the song THAT ACTUALLY PLAYS THAT INSTRUMENT (which is exactly what you would expect, since you placed the note there explicitly because you wanted the audio to be processed separately… I really don’t get the problem here… would you mind helping me out?).
Ideally, yes. In practice, it’s more complicated… Imagine you have just written/recorded a good hi-hat pattern and want an extra layer on top? “Sorry, need to copy the entire instrument to achieve that”… seems like a waste to have to do that.
Generally speaking, I am not entirely certain how I am going to use my sounds when a song is started, so keeping the mute group a little flexible is a good thing, IMHO. Achieve some kind of routing of instrument FX to tracks, and we suddenly have different ways to work with these things.
One thing I don’t really understand - how would you proceed with recording notes into separate tracks anyway? It has been suggested that notes are routed, once instr. FX are routed to a track. That doesn’t seem very intuitive to me?
Nope, as I understand it those resources would need to be preallocated for every track. Not sure how much the CPU is hit for non-playing tracks, but there are definitely some penalties involved here (will link to a source if I can find it). Predefined routings, on the other hand, should hardly be a problem.
Mute groups are most convenient for correctly behaving complex instruments (entire drumsets, etc.). Those will need some work and also planning to set up properly. That is a good thing.
Right now my approach is to record everything into one track, then use my “split into separate tracks” tool to split the track, once I am finished and I am entering mixing stage. The way mute groups work now will break this workflow, since the groups will stop working once they are split.
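The breakage can be illustrated with a toy voice model (purely hypothetical names and structure, not Renoise’s actual engine): mute groups only choke voices within the same track, so once the “split into separate tracks” tool moves the notes apart, the open hi-hat is no longer cut by the closed one.

```python
# Toy sketch of per-track mute-group behavior. Illustrative only;
# all names here are made up and do not reflect Renoise internals.
from dataclasses import dataclass


@dataclass
class Voice:
    track: int
    mute_group: int  # 0 = no mute group
    name: str
    playing: bool = True


class ToyEngine:
    def __init__(self):
        self.voices = []

    def trigger(self, track, mute_group, name):
        if mute_group:
            for v in self.voices:
                # Per-track rule: only voices in the SAME track
                # and the SAME mute group get choked.
                if v.playing and v.track == track and v.mute_group == mute_group:
                    v.playing = False
        voice = Voice(track, mute_group, name)
        self.voices.append(voice)
        return voice


# All hi-hats recorded into one track: closed hat chokes the open hat.
e = ToyEngine()
open_hat = e.trigger(track=1, mute_group=1, name="open hat")
e.trigger(track=1, mute_group=1, name="closed hat")
assert not open_hat.playing

# After splitting the notes into separate tracks: no choke happens,
# the open hat keeps ringing.
e2 = ToyEngine()
open_hat2 = e2.trigger(track=1, mute_group=1, name="open hat")
e2.trigger(track=2, mute_group=1, name="closed hat")
assert open_hat2.playing
```

If the engine instead scoped the choke to the instrument rather than the track (drop the `v.track == track` check in the sketch), the split-into-tracks workflow would keep working.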
Again, I am afraid I don’t understand. Renoise knows exactly on which tracks there are notes of each instrument at any point in time, right?
Well, I think we shouldn’t have the need to mix in the sampler anyway…
This is how I would prefer to do it: one track with four columns (one for each hand and foot) - this track holds the note data. Then a track each for kick, snare, toms, hi-hats, ride, and crash cymbals - these tracks hold the audio data. This is similar to how I work in Sonar - one MIDI track holds the note data, while a bunch of audio tracks carry audio from a VST plugin routed to separate tracks for mixing. The main difference is that I find representing each limb in the tracker interface more efficient than standard drum notation or piano rolls.
As far as binding instruments to tracks, that is my preference anyway (and I’ve posted elsewhere about being able to hide the instrument column and assign instruments to tracks instead of notes), but there needs to be support for routing audio in an instrument to multiple tracks for mixing purposes in the same way multi-out VST plugins are supported.
I mentioned it that way because I discovered, while scripting multi-layered instruments, that when you record a multi-layered instrument into one track (all notes land in that track) yet have its output linked to another track, the audio streams of the empty tracks are not rendered when you export your song to wave.
If that is fixed, I do not really see many problems, aside from perhaps having too few note columns to record all notes into the same track (which is also a reason to cast notes for sample groups to the track they are bound to: you evade that limit if you would e.g. create a zillion-voices pad).
I just tested mute groups again while setting up a simple layered drumset. Apparently envelopes are still triggered (well, the attack, sustain and decay stages), even when the sample is set to one-shot. This is good news! Mute groups work quite nicely now, actually - even after recording, everything still seemed correct. Pretty cool! The only remaining limitation is being tied to one track…
I miss a way to check, via Lua, the mute group of a given note and instrument. This will be essential when bringing my “Split into separate tracks” tool to 3.0.