Chain State Blender Metadevice

One of my biggest pet peeves with Renoise is my inability to affect several parameters through one. I’ve been bitching about this for ages, asking for scripting, parameter connections, generic slider metadevices, bla de blah blah.

Most of this bitching comes from my work in 3D animation; Maya is just such a crazy versatile tool in terms of making your own scripts and shortcuts to make your job easier. This isn’t Maya-specific, but it’s one of the greatest things in 3D animation today, and I just started to think there’s no good reason the methodology isn’t applicable to Renoise DSP chains: blend shape animation.


I spend a lot of time making f****ed up sounds, especially kick drums. This can involve DSP chains that get absurdly long, and I refuse to render down to a sample because it kills my ability to tweak. The most fun I have in Renoise (aside from the obvious) is putting a sound through a hellish amount of tweaks, then changing parameters early in the chain to hear the effect cascade down it. So baking/freezing/rendering is not an option for me.

A problem arises when I hit that sweet spot that produces an excellent sound, often involving minuscule parameter changes in a bunch of devices, and I get afraid that if I change the params around I’ll get lost and never find my way back to my sweet spot. In other cases, tweaks done by typing in parameter values can’t be reproduced via the effect column due to hexadecimal resolution issues (and I loathe using automation for rapid-fire parameter changes). A good example is how the Gainer can’t be set to 0 dB gain through the effect column; the closest you get is 0.034.

My suggestion to solve both the getting-lost problem and the multiple parameter tweak problem? A Chain state blender metadevice:

So what’s the idea here?
This is an exclusive device, as in every track can only contain one (1) of them. Its position in the chain has no importance. Every time you click “New state”, it takes a “snapshot” of the parameters of every DSP in the chain and adds a new item to the state list. This item is renamable to whatever you feel describes the state best. From this point on, the device takes that snapshot and “mixes it” with whatever parameters are currently set on the DSPs, by a percentage. So at 0% State A, no parameter change is applied to your DSPs. At 100%, your DSPs are set to what they were at when State A was stored, no matter the automation or pattern commands. At 50%, each parameter sits halfway between its current (automated) value and its value in State A.

In short, you have “wet” parameters and “dry” parameters.

As several states are stored, their influences are combined. The state higher up in the list takes priority, so if State A (above State B) is at 100%, setting State B to 100% will make no difference. If State B is at 100% and State A is raised above 0%, A again takes precedence, so State A at 25% and State B at 100% means State B is effectively at 75%.
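To make the priority rule concrete, here’s a rough sketch in Python. This is my guess at the intended arithmetic, not anything from Renoise: states are ordered top to bottom, each one claims its percentage out of whatever weight budget the states above it left over, and the “dry” (current) parameter values get the remainder. All names (`blend`, `dry`, `snapshot`) are made up for illustration.

```python
def blend(dry, states):
    """Blend current parameter values with stored snapshots.

    dry:    dict of param name -> current ("dry") value, 0.0..1.0
    states: list of (weight, snapshot) pairs, highest priority first,
            where snapshot is a dict of param name -> stored value
    """
    result = {}
    for param, dry_value in dry.items():
        remaining = 1.0  # weight budget not yet claimed by a state
        value = 0.0
        for weight, snapshot in states:
            share = min(weight, remaining)  # higher states take precedence
            value += share * snapshot[param]
            remaining -= share
        # whatever budget is left goes to the dry value
        result[param] = value + remaining * dry_value
    return result

# State A at 25% (top) over State B at 100%: B is effectively at 75%,
# and the dry value is fully masked out.
dry = {"cutoff": 0.5}
state_a = {"cutoff": 1.0}
state_b = {"cutoff": 0.0}
print(blend(dry, [(0.25, state_a), (1.0, state_b)]))  # {'cutoff': 0.25}
```

With no states engaged, `blend` just returns the dry values unchanged, which matches the “0% means no parameter change” behaviour described above.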

I’m sure much of my “maths” here is wonky but you catch my drift :stuck_out_tongue_winking_eye: Just thought this up in the shower for chrissakes.

Uses? Well, aside from the obvious, this also allows for some primitive parameter linking: save one state with a filter’s cutoff at 0 and resonance at 100%, and another with the opposite settings. Set the second (lower-priority) state to 100%, and the first state’s slider now automates both cutoff and resonance at once.

I haven’t thought much about filtering out certain devices, but perhaps a checkbox on all other devices (defaulting to On) would serve as a filter?

Finally, the “flatten” button takes the current blend of states and applies them to the current “dry” values.

Any thoughts?

Just to clarify: This would mean you could automate every single parameter on your DSP chain with one slider :P

I think I kinda like your idea, and I think it might be useful, but your description of it confuses the shit outa me… you might wanna try summarizing it in point form :P


Consider 3 images. One is your face. The other is a monkey. The third is a horse.

You put the 3 images in a fictitious image morphing program. Your face is the “dry image”. The monkey and horse are “wet images”, and have a slider each from 0 to 100%. If the monkey slider is at 100%, the final image is, yep, a monkey. If it’s at 50%, the final image is your face combined with the monkey. If the monkey and horse sliders are at 50% each, the resulting image is a combination of the monkey and the horse, with no trace of your face. Drag the monkey slider down to 0% and leave the horse slider at 50%, and the resulting image is your face combined with the horse.

Monkey (50%) + Horse (50%) = monkey/horse
Monkey (0%) + Horse (50%) = you/horse
Monkey (25%) + Horse (50%) = you/monkey/horse
Monkey (0%) + Horse (0%) = you
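The weight arithmetic in the table above can be sketched like this (a hypothetical Python illustration of one reading of the rule: the monkey slider claims its share first, the horse claims its share of what’s left, and the dry image gets the remainder):

```python
def weights(monkey, horse):
    """How much of each image ends up in the final blend (0.0..1.0 each)."""
    remaining = 1.0
    w_monkey = min(monkey, remaining)   # top "wet image" claims its share first
    remaining -= w_monkey
    w_horse = min(horse, remaining)     # next image claims from what's left
    remaining -= w_horse
    return {"monkey": w_monkey, "horse": w_horse, "you": remaining}

print(weights(0.50, 0.50))  # {'monkey': 0.5, 'horse': 0.5, 'you': 0.0}  -> monkey/horse
print(weights(0.00, 0.50))  # you/horse
print(weights(0.25, 0.50))  # you/monkey/horse
print(weights(0.00, 0.00))  # just you
```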

The device, in short, would let you create a bunch of such “images” of the device chain it’s in, and blend them together in the way described above.

Let’s say you store a state where a slider is at 100% and another where it’s at 0%; call them “Max” and “None”. By default, their sliders in the device are at 0%. After doing this, you move your slider to 50%. This “dry”, unaffected state is what Renoise would normally consider.

Max (0%) + None (0%) = slider at 50% (where we put it, separate from our stored states)
Max (50%) + None (0%) = slider at 75%
Max (0%) + None (50%) = slider at 25%
Max (100%) + None (0%) = slider at 100%
Max (100%) + None (100%) = slider at 50%
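One way to reproduce every row of this table is a weighted mix where each stored state claims its slider’s share, the dry value gets the remainder, and weights are scaled down if they total more than 100%. This is only a guess at the intent (the author admits the maths is informal, and this normalizing rule differs slightly from strict top-down priority); all names here are made up for illustration:

```python
def mix(dry, states):
    """Blend one parameter.

    dry:    current ("dry") value, 0.0..1.0
    states: list of (weight, stored_value) pairs
    """
    total = sum(w for w, _ in states)
    scale = 1.0 / total if total > 1.0 else 1.0  # normalize if over 100%
    wet = sum(w * scale * v for w, v in states)
    return wet + (1.0 - min(total, 1.0)) * dry   # dry gets the leftover weight

MAX, NONE = 1.0, 0.0  # the two stored snapshot values for our slider
print(mix(0.5, [(0.0, MAX), (0.0, NONE)]))  # 0.5  -- dry value untouched
print(mix(0.5, [(0.5, MAX), (0.0, NONE)]))  # 0.75
print(mix(0.5, [(0.0, MAX), (0.5, NONE)]))  # 0.25
print(mix(0.5, [(1.0, MAX), (0.0, NONE)]))  # 1.0
print(mix(0.5, [(1.0, MAX), (1.0, NONE)]))  # 0.5  -- weights normalized 50/50
```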

The thing to remember here is that these states are “frozen”, as in they will always be unaffected by automation, and as such are combined with the dry values after automation. They will not animate in any way.

To put things into context, this is how you animate facial geometry in 3D. You’d make a bunch of copies of your character’s head, make geometry changes to make him smile/cry/scowl/wink etc, and then assign these blend shapes (or morph targets as they’re also known) to the original geometry, letting you change how much of the altered shape will affect the original geometry. Common face animation rigs have sliders for “smile”, “angry”, “blink” and vowels, AEIOU.

It’s tough as nails to explain :) But let me tell you, this would be a ridiculously powerful tool. No other sound package out there has this, if I’m not mistaken, but it’s been used to simplify complex 3D animation since forever.

Here’s a Maya video tutorial that explains it a lot better than I can :P
When he says “CVs”, those are the points in the geometry. Think of CVs = device parameters, and the DSP chain as the geometry.

I really like that idea, sunjammer.

That’s a thing I’d use a lot. Simple but clever!

This idea is simply awesome!!! Gimmie gimmie!!

Mighty useful in a live situation, as you’d only need to keep track of a few MIDI knobs to tweak seamlessly between these “presets”.

But I was thinking that maybe it doesn’t have to be a one-per-track exclusive device. All you’d need is a list of all devices on the track with checkboxes. That way you could “automate” a chosen number of devices in several effect groups. So one group might control, say, the spatial effects, another filters and modulation, and a third distortion, etc.

If one wanted to complicate things further, there could also be some choices in that list, like logarithmic or exponential morphing.

Just want to add: Buzz has BTD’s PeerState (a modular morphing machine inside Buzz), so you can morph states of chosen parameters with a selected inertia. That’s possible with VSTs and Buzz machines.

It would work almost like the Combinator in Reason then, where you can assign several parameters of other devices to the different knobs on the Combinator?

I would love a DSP device like that, and I would use it a lot. It’s one of the things I use most in Reason to make nice blends fast and easy. =)

I’d sign for this feature! =)

I still don’t understand WHAT you would be morphing in Renoise… sound channels? If so, isn’t that just called “Fading”?

… or are we talking device parameters? And if so, how exactly would this device work logistically? Do you want to have two possible values for a parameter so you can “morph” between them? If so, isn’t this still just fading parameter values? … I understand the application of such a concept in 3D, just not with Renoise. :P

It’s the weighting of dozens of parameter values among several stored states with a set of faders, not just interpolating between two.

You’d be morphing parameter values.

So your diagram of it there is a bit minimalist then? :P

My diagram is 100% accurate

I find myself making such state changes all the time with verb and delay times by copy-pasting automation curves from before, during, and after the state changes.

This would make that process so much easier!


Yup. With this metadevice, coupled with the VST and CC automate devices, you could approximate VST preset morphs as well.

I’ve started working on a cut-up project which involves heavy effecting and different effect variations, and I realise how tedious it is to automate everything now, without this metadevice. I’ve mostly done this on a smaller scale before. The workload difference for the same result would be tremendous.

This is definitely at the top of my wishlist!


What happens if you morph something that you are automating at the same time?

Perhaps you are automating the morphing device and at the same time automating something the morphing device is morphing.

What should happen?

Should automations not be allowed on morphed parameters? Or should automations be applied after the morphing?

Another thing that could be a problem: what if you changed some button from on to off in a VSTi and created one morph state with the button on and another with it off?

When would it switch the button? Should it interpolate, so that in the middle of the morph the button switches? Or should button switches not be included? Though they are included in the VSTi meta device as sliders…

Good questions!

One option is to simply let automation override morphing, just like the LFO device overrides automation.

Another would be to morph the automations too. So if a slider is automated to move a bit back and forth around 50% and a snapshot in the chainstate device has that slider set to 100%, the automated slider would move around 50% as automated with the chainstate device wet/dry at 0%. With the chainstate device at 50%, the slider would move back and forth around 75% instead.
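That “morph the automations too” idea boils down to a plain crossfade between the moving (automated) value and the frozen snapshot value. A hypothetical sketch, with made-up names:

```python
def morphed(automated_value, snapshot_value, chainstate_amount):
    """Crossfade the live automated value toward a frozen snapshot value.

    chainstate_amount: the chainstate device's wet/dry slider, 0.0..1.0
    """
    return (1.0 - chainstate_amount) * automated_value \
        + chainstate_amount * snapshot_value

# Automation wiggling around 50%, snapshot stored at 100%:
print(morphed(0.5, 1.0, 0.0))  # 0.5  -- chainstate dry: automation as-is
print(morphed(0.5, 1.0, 0.5))  # 0.75 -- now wiggles around 75% instead
```

Because the automated value is an input, the output still moves whenever the automation moves; the chainstate slider just shifts the range it moves in.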

But I guess this might introduce some more problems… haven’t thought through every detail. ;)

As buttons are represented by sliders, I think they should simply be handled normally. One slider can also represent several buttons, so I don’t have any good ideas besides not using the chainstate device to morph them. Or use one chainstate device slider for buttons and automate it without interpolation. :S

In my mind, automated morphs override automation, much like automation overrides pattern effects (iirc).