The current track delay seems to range only from -100 ms to 100 ms, but some libraries require more: Cinematic Studio Strings and Audiobro Genesis Children's Choir, for example, both need around -350 ms for their slowest legatos and look-ahead to work correctly. It would be nice if you could set the track delay freely.
I kinda like this idea for your purpose AND for abstract pattern work. The entire track within the pattern could be shifted by an LFO or envelope. @Raul, regarding this idea posted by @retrothruster: if @taktik could add a greater delay adjustment range for a track in a pattern, could that be automated?
Only @taktik can extend that range internally, for example to -500 to 500. But I suppose the choice of -100 to 100 has a reason.
On the other hand, through Lua it is possible to assign new pattern effect commands that execute anything available in the API, for example ZExx (Z?xx is the only command space available for this kind of thing; the "?" is a free letter).
Inside my Piano Roll Editor tool I use MBxy and MLxy, but in practice these commands can conflict with automated track device parameters if a track has enough devices to reach those values.
- A: the device index in the track's effects chain (1 to Y): 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, …, X, Y. "Z" is reserved for special commands.
- B: the index of the device parameter (0 to Z): 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, …, X, Y, Z.
- xy: the value assigned to the parameter (00 to ZZ).
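To make the idea concrete, here is a rough Lua sketch of how a tool could interpret such a command and drive the API from it. The "ZE" command and the 00-FF to milliseconds mapping are assumptions for illustration, not an existing Renoise feature:

```lua
-- Sketch: map a hypothetical ZExx effect command to a track's
-- output delay. 0x00..0xFF is mapped linearly onto -100.0..100.0 ms.
local function apply_ze_command(track_index, line)
  for _, fx in ipairs(line.effect_columns) do
    if fx.number_string == "ZE" then
      local amount = fx.amount_value -- 0 to 255
      local delay_ms = (amount / 255) * 200.0 - 100.0
      renoise.song().tracks[track_index].output_delay = delay_ms
    end
  end
end
```

A tool would call this from a playback or line notifier, which is also why the execution cannot be perfectly real-time.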
The execution would be near-instantaneous (with a very small time delay), but it would not be truly real time. In this case:
-- Delay.
renoise.song().tracks[].output_delay, _observable
  -> [number, -100.0 to 100.0]
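For reference, this is how that property is used from a tool. Since the API is limited to -100.0..100.0 ms, a larger request has to be clamped; the -350 ms target here is just the CSS legato figure from the first post:

```lua
-- Shift the selected track earlier via the scripting API.
-- Values outside -100.0..100.0 ms are not accepted, hence the clamp.
local song = renoise.song()
local track = song.selected_track
local wanted = -350.0 -- what the slowest CSS legato would need
local clamped = math.max(-100.0, math.min(100.0, wanted))
track.output_delay = clamped -- only -100.0 is reachable today
```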
I also remember this other matter:
-- Constants.
renoise.Track.MUTE_STATE_ACTIVE
renoise.Track.MUTE_STATE_OFF
renoise.Track.MUTE_STATE_MUTED

-- Mute and solo states. Not available for the master track.
renoise.song().tracks[].mute_state, _observable
  -> [enum = MUTE_STATE]
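A minimal usage sketch for that property, guarding against the master track as the docs require:

```lua
-- Toggle mute on the selected track (not valid for the master track).
local song = renoise.song()
local track = song.selected_track
if track.type ~= renoise.Track.TRACK_TYPE_MASTER then
  if track.mute_state == renoise.Track.MUTE_STATE_ACTIVE then
    track.mute_state = renoise.Track.MUTE_STATE_OFF
  else
    track.mute_state = renoise.Track.MUTE_STATE_ACTIVE
  end
end
```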
As you can see, these are properties of the track (do not confuse "track" with "channel").
Great news! I hope these values can be increased in the future; that would be a very interesting way to perform more algorithmic manipulations of sound. Even at a small range, say -100/100, it could create some flamming, similar to making a chord sound more natural by having the fingers touch the keys at very slightly different times. We can already do that in the sequences, but we have no way to automate it so that it is slightly different each time. Would be fun!
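As a thought experiment, that "slightly different each time" flamming could already be approximated with a small tool script. This is only a sketch, assuming a notifier-driven approach; the +/-5 ms jitter range is an arbitrary choice:

```lua
-- Sketch: re-randomize each sequencer track's output delay a little
-- whenever the playback position moves to a new pattern, humanizing
-- the timing. The +/-5 ms jitter is an arbitrary illustrative value.
local function humanize_track_delays()
  local song = renoise.song()
  for _, track in ipairs(song.tracks) do
    if track.type == renoise.Track.TRACK_TYPE_SEQUENCER then
      track.output_delay = (math.random() * 2.0 - 1.0) * 5.0
    end
  end
end

renoise.song().selected_sequence_index_observable:add_notifier(
  humanize_track_delays)
```

A wider delay range would just make the same trick usable for much bigger, more audible offsets.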