Best practices for Humanization

Hi,

Since I’m using Renoise on Linux most of the time, I’m looking for the most promising techniques that help create the psychoacoustic impression that the instrument was played by a human being.

Basically it can be broken down to setting attack values (plus randomization, different attack envelopes, etc.).

Where do you apply humanization? In the pattern editor, in the automation section, or via plug-ins?

Thanks for the tips!

Edit:

The intention of this thread is to find out whether implementing my own Lua plug-in is necessary or not.

Where do you apply humanization? In the pattern editor, in the automation section, or via plug-ins?

All of the above, mostly using tools. Now that MIDI plug-ins can be used, there are some pretty nifty VSTis for manipulating the timing of note events. The swing dial in Kirnu, for example, or this one: http://www.mucoder.net/en/hypercyclic/ (also available for Linux).
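The swing-dial idea those plug-ins implement is simple enough to sketch. Here is a rough, self-contained Python version (my own names and data layout, nothing to do with the actual plug-in code): every off-beat eighth note gets delayed by a swing ratio, everything else stays on the grid.

```python
def apply_swing(note_times, beat_len=1.0, swing=0.6):
    """Delay off-beat eighth notes by a swing ratio.

    note_times are in beats. swing=0.5 is straight;
    around 0.66 approximates a triplet shuffle.
    """
    swung = []
    for t in note_times:
        step = round(t / (beat_len / 2))   # nearest eighth-note step
        if step % 2 == 1:                  # off-beat eighth: push it late
            t = (step - 1) * beat_len / 2 + swing * beat_len
        swung.append(t)
    return swung
```

So straight eighths at 0, 0.5, 1, 1.5 come out as 0, 0.6, 1, 1.6 with the default swing.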

I use this tool a lot for randomizing/generating volume column values between set limits in a pattern: https://forum.renoise.com/t/new-tool-2-8-velocity-randomizer/38684

I still hope to figure out Lua one day and hack it so you can generate more patterns/a whole song at once; having to start the tool up for every pattern gets tedious quickly. Having it work on the pan and delay columns would be great, too.
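For what it’s worth, the batch part is mostly just a loop. A minimal Python sketch of “randomize velocities between set limits across several patterns at once” (the list-of-tuples layout is a stand-in of my own, not Renoise’s pattern API):

```python
import random

def randomize_velocities(patterns, lo=0x30, hi=0x7F, seed=None):
    """Replace each note's velocity with a random value in [lo, hi].

    patterns: list of patterns, each a list of (note, velocity) tuples --
    purely illustrative data, standing in for pattern/note-column access.
    """
    rng = random.Random(seed)
    return [[(note, rng.randint(lo, hi)) for note, _ in pattern]
            for pattern in patterns]
```

The same loop shape would cover the pan or delay column: swap which field of the event gets the random value.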

Thanks man, I didn’t know that MIDI plug-in support was implemented in 3.1 (haven’t upgraded yet). I’ll take a look at that Lua plug-in.

A lot of times there are multiple ways to achieve similar results. For example, instead of using tools to randomize volume values, you could also link up an LFO with a random setting to a gainer slider in the track DSP tab, or achieve it with an LFO in the instrument editor’s modulation tab. The Yxx command is also handy for setting up a probability of how often a note event will trigger; it works with note-off events as well.

There have been more threads on humanization in this forum. I once asked about different ways of achieving less static-sounding snares, which might be helpful for your quest as well :) https://forum.renoise.com/t/how-do-you-variate-the-sound-of-your-snares-if-at-all/33213

I highly recommend modulation via aftertouch. I’ve been getting into it with my QuNexus, and now in Renoise, even without a controller, I’ll assign various things to aftertouch, and it completely changes the sound in subtle ways. I programmed a quick patch in Massive the other day that sounded like a damn sitar, and it was surprisingly easy and very organic!

Thanks, well appreciated. Am I wrong, or has there been a huge increase in features (commands) from 3.0 to 3.1? Feels like there is a ton of new opportunities to explore (not that I was that well versed with Renoise before).

Is this common minor-release practice for Renoise, or did it just happen because of the Redux merge?

Do you “old-school” Renoisers make use of all the new features, or do you keep your old production style?

Well, my best practice is to play everything possible myself and then look closely at the recorded piano-roll data. If possible, ask someone who is at least a decent instrumentalist with rhythmic feeling to play a few drum or keyboard patterns with no automatic quantization at all, and then analyse their playing in the piano roll.

Take a close look at the little, subtle shifts and velocities near the beats. That’s the difference between human players and precise-to-the-bone sequencers.
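Those subtle shifts can be approximated with small random jitter on timing and velocity. A quick Python sketch of the idea (the event layout and names are illustrative, not any sequencer’s API):

```python
import random

def humanize(events, time_jitter=0.01, vel_jitter=6, seed=None):
    """Nudge note times (in beats) and velocities by small Gaussian
    amounts -- a crude stand-in for a player's micro-timing.

    events: list of (time, velocity) tuples. Velocities are clamped
    to the usual 1..127 MIDI range.
    """
    rng = random.Random(seed)
    out = []
    for t, vel in events:
        t2 = t + rng.gauss(0.0, time_jitter)
        v2 = max(1, min(127, round(vel + rng.gauss(0.0, vel_jitter))))
        out.append((t2, v2))
    return out
```

The sigmas matter a lot: too large and you get the “drunk monkeys” effect mentioned further down the thread; a few milliseconds and a handful of velocity steps is usually where it starts sounding like a player rather than a malfunction.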

I also found that randomization mostly just makes the music sound like it was played by drunk monkeys. Either that, or it is so subtle that it won’t have much real effect when put next to a “straight” version. The real difference comes from actually implementing some “groove”, in tempo as well as in velocity or other pronounced parameters (filter envelope depth, whatever). Now, my live playing sucks big time, and it is somewhat time-consuming and frustrating to manually delay notes to make them groovy after studying some breakbeat’s groove or whatever. I wish there was some automagic tool to extract grooves from audio samples after marking the beats, and then apply them to your mechanical pattern data. Or to go through the notes step by step and bang on a drum pad to create good velocity data for humanized accentuation.
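The “extract groove from marked beats, then apply it” part is actually the easy half — marking the beats in the audio is the hard part, and it’s assumed done here. A rough Python sketch of my own (not any existing tool’s algorithm):

```python
def extract_groove(marked_beats, grid_step=0.25):
    """Offsets of marked beat positions (in beats) from the nearest
    straight-grid step -- the groove 'template' of the source audio."""
    return [t - round(t / grid_step) * grid_step for t in marked_beats]

def apply_groove(grid_times, offsets):
    """Shift mechanical grid times by the groove template, repeating
    it cyclically over the pattern."""
    return [t + offsets[i % len(offsets)] for i, t in enumerate(grid_times)]
```

So marking beats at 0, 0.27, 0.5, 0.78 against a 16th-note grid yields offsets of roughly 0, +0.02, 0, +0.03, which can then be stamped onto a stiff programmed pattern (in Renoise that would translate to delay-column values).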