Moving towards the next XRNI generation, I thought of this little step along the way.
In short, I’m asking for this feature: an option to randomize instrument-envelope values only when a new note is triggered, not continuously over the timeline.
Natural instruments always have a slightly different color whenever a new note is played on them. Not only does their loudness change, but also their brightness and a couple of other acoustic characteristics. The instrument envelopes fortunately let us access not only a sample’s volume but also its pitch, cutoff and resonance. Furthermore, there is an option to randomize these values, which is a great feature for some very basic acoustic modeling. However, once this option is turned on, the random value changes all the time, much like an LFO would on a DSP chain. In fact, one can get more control with the LFO, by turning it on and off at will, than with the random feature in the instrument envelopes.
Now think of a hi-hat. It would sound so much more authentic if its cutoff changed slightly every time it is triggered. Note: this is different from changing its cutoff over the timeline, because then it would also change during playback: in the middle, shortly after the trigger, shortly before the end, and so on, which is not natural for a single-shot instrument.
Now, if we had an option to randomize the volume/pitch/cutoff/resonance of an instrument only when it is triggered, we could do far more complex things. Think of a single-shot guitar sample. Playing the same note a couple of times would give a different color every time. Playing chords with this sample on the subtracks of a track and humanizing their delays (thanks to the delay columns) would sound much more authentic.
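To make the idea concrete, here is a minimal Python sketch of the per-trigger behavior described above. This is a hypothetical toy voice, not Renoise code; the names (`OneShotVoice`, `base_cutoff`, `cutoff_spread`, etc.) and the ±10%/±5% spreads are illustrative assumptions:

```python
import random

class OneShotVoice:
    """Toy voice: picks randomized parameters once at note-on and
    holds them for the whole playback (sample-and-hold humanization).
    Illustrative sketch only -- not the Renoise API."""

    def __init__(self, base_cutoff=8000.0, cutoff_spread=0.1,
                 base_volume=1.0, volume_spread=0.05):
        self.base_cutoff = base_cutoff
        self.cutoff_spread = cutoff_spread
        self.base_volume = base_volume
        self.volume_spread = volume_spread
        self.cutoff = base_cutoff
        self.volume = base_volume

    def note_on(self):
        # Randomize ONLY here; the values then stay fixed during playback.
        self.cutoff = self.base_cutoff * (
            1.0 + random.uniform(-self.cutoff_spread, self.cutoff_spread))
        self.volume = self.base_volume * (
            1.0 + random.uniform(-self.volume_spread, self.volume_spread))

    def render_block(self):
        # During playback the held values are reused; no re-randomization.
        return (self.cutoff, self.volume)
```

Each `note_on()` yields a slightly different color, but within one note the sound stays constant, exactly the difference from a free-running random LFO.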
I think this feature offers great benefits while requiring only a rather small change to Renoise.
Although I’m not a big fan of “blind” humanization (when you don’t know at compose time the exact value a parameter will have on playback), a configurable humanizer for XRNI parameters would indeed be interesting.
automatable xrni parameters! +4
Thanks for the input guys.
I was actually asking for something simpler, with a different use than automatable XRNI parameters. Let me try to explain it again:
The thing is, no matter how you change or control your instrument settings - be it with future automation or with the current instrument LFO - you’ll have a hard time getting it to sound like a relatively natural instrument (though I use the word ‘natural’ very carefully).
Consider this suggestion a very small step towards more intelligent instruments. For the first time, Renoise instruments would show slightly intelligent behaviour by deciding on their own how to sound each time they are triggered. They can already do this right now, but unfortunately the instrument-LFO values also keep changing over the timeline. That is, their sound doesn’t only change at the moment they are triggered but also during playback. This is what I want to be able to turn off.
If you load a VSTi like Realguitar, the instrument will do some things on its own without any influence from you. If it has a large sample bank, it may pick “sample a” one time and “sample b” the next, even though you hit the same note with the same velocity twice in a row. If it has some DSPs, it changes their values each time you hit a note so it doesn’t sound identical every time. But once a note is triggered, its color/settings won’t change for the rest of its playback.
I’m not saying we should rebuild Realguitar, nor am I suggesting randomly selecting multilayered samples. And my suggestion is far from automating XRNI parameters, because those still wouldn’t give our instruments autonomous behaviour. I’m suggesting something that can easily be realized with the currently available features:
A checkbox that disables the instrument LFO while a note is playing. In other words, the instrument LFO would only change values while no note is sounding. The instrument would then have a different sound each time you trigger it, but that sound wouldn’t change until it is triggered again.
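The checkbox behavior can be sketched as a gated, sample-and-hold LFO: it runs freely between notes, and its output is frozen at note-on. Again, this is an illustrative Python toy (class and parameter names are my own assumptions, not anything from Renoise):

```python
import math

class GatedLFO:
    """Toy LFO that only advances while no note is held; during note
    playback its output is frozen (the proposed checkbox behavior).
    Illustrative sketch only -- not the Renoise API."""

    def __init__(self, freq_hz=1.0, sample_rate=100.0):
        self.phase = 0.0
        self.inc = freq_hz / sample_rate  # phase increment per tick
        self.note_playing = False
        self.held = 0.0

    def note_on(self):
        # Freeze the LFO's current value for the duration of the note.
        self.note_playing = True
        self.held = math.sin(2.0 * math.pi * self.phase)

    def note_off(self):
        self.note_playing = False

    def tick(self):
        if self.note_playing:
            return self.held  # frozen: the note keeps one color
        self.phase = (self.phase + self.inc) % 1.0
        return math.sin(2.0 * math.pi * self.phase)
```

Because the phase keeps a different value at each note-on, every trigger samples a different point of the waveform, giving the "different color per hit, stable during playback" behavior.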
@Itty: You can get slightly better control over the “blind” humanization by choosing the sine/saw/pulse oscillator in the instrument LFO instead of the random one. I know this is still not high-end, but in my opinion it has very high value for making single-shot instruments sound more natural.