Combine AHDSR and other modulation devices

I really like the new XRNI system and it’s great that we can combine AHDSR and other devices to modulate vol/pan/pitch/filter per-sample. But when it comes to simulating how an instrument behaves, the overarching factor that should influence everything is a per-instrument polyphonic AHDSR for volume. The parameters of this one main envelope should respond dynamically to velocity and/or note value, and also allow different modulation and FX changes to be applied at each stage. This method would go a long way in allowing us to create not only ‘realistic’ instruments, but also convincingly complex unreal ones.

You can somewhat approximate the first part by combining macros and Track DSPs, but it’s not polyphonic, wastes 5 macros, and isn’t a self-contained instrument. The second part would be excellent for things like adding reverb during Sustain, then extending it for Release.

I’ve spent some time figuring out how this could work and sketched it out in the image above. Firstly, a Modulation and FX* set can be assigned to the instrument via the Instrument Properties panel**. Secondly, the concept of AHDSR has been rethought to take advantage of the options that Renoise provides, freeing us from the standard constraints of instrument creation.

There’s no real reason the volume envelope has to be five stages long; it could be two, nine or sixteen. Neither do the various stages have to be arranged in their typical order with their traditional properties. ‘Attack’ could start off at any level of volume and fall like a Decay instead of rising. The Sustain can occur within any stage, or not at all. The Release will always be last, but could end at any volume instead of fading to silence.

The shape of the volume envelope is constructed with the Instrument Envelope device where each stage has Volume, Length and Curve*** parameters. The results are displayed graphically, highlighting the optional sustain-loop, which can be assigned to any stage. If you need to add more stages, use the arrows at the bottom right to expand the device. Clicking on a stage’s name box will select and highlight it, while double-clicking allows you to rename it.
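To make the idea concrete, here’s a minimal sketch of how such a free-form envelope could be represented (Python purely for illustration; the Stage/InstrumentEnvelope names, the units, and the power-law curve are my assumptions, not Renoise internals):

```python
# Illustrative sketch only -- not Renoise's actual data model.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Stage:
    name: str       # renamable label, e.g. "Attack", "Swell", "Tail"
    volume: float   # level this stage moves towards, 0.0..1.0
    length: float   # stage duration (unit assumed, e.g. ms)
    curve: float    # 0 = linear; non-zero bows the slope (law assumed below)

@dataclass
class InstrumentEnvelope:
    stages: List[Stage]            # any count: two, nine or sixteen stages
    sustain_index: Optional[int]   # stage the sustain loop holds on, or None

def level_within(env: InstrumentEnvelope, idx: int, t: float, start: float) -> float:
    """Volume inside stage idx, where t is 0..1 progress through the stage
    and start is the level the previous stage ended on. The last stage acts
    as the Release and may end at any volume, not just silence."""
    s = env.stages[idx]
    shaped = t ** (2.0 ** s.curve)   # curve 0 -> plain linear interpolation
    return start + (s.volume - start) * shaped
```

Because the stages are just an ordered list, things like an ‘Attack’ that falls instead of rises, or a sustain loop sitting on any stage, fall out naturally.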

Using the second device you can assign velocity and key responses for the currently selected Stage (displayed in brackets at the top-left). The Destination can be set to parameters in the Instrument Envelope or from devices in the assigned Mod and FX* sets. If you need to add more responses, use the arrows at the bottom right to expand the device.
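And a rough sketch of how those per-stage responses might resolve at note-on (again my own illustration; the Response record, apply_responses, and the normalized 0..1 inputs are all assumptions):

```python
# Illustrative sketch only -- the Response record and routing are assumed.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Response:
    source: str        # "velocity" or "key"
    stage: str         # stage this response is attached to
    destination: str   # e.g. "envelope.Attack.length" or a Mod/FX set parameter
    amount: float      # -1.0..1.0 scaling applied to the source value

def apply_responses(responses: List[Response], stage: str,
                    velocity: float, key: float) -> Dict[str, float]:
    """Sum per-destination offsets for one stage; velocity and key are the
    incoming note's values normalized to 0..1."""
    offsets: Dict[str, float] = {}
    for r in responses:
        if r.stage != stage:
            continue
        value = velocity if r.source == "velocity" else key
        offsets[r.destination] = offsets.get(r.destination, 0.0) + r.amount * value
    return offsets
```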

  * I’m aware that effects chains do not currently work per-voice. This option could be eliminated, or simplified to work with: 1) no routing options, 2) a limited number of devices, 3) both.
  ** To make room for these options on lower resolutions, the Samples panel has been moved down and the Instrument Properties line now spans the full GUI width.
  *** I don’t know if using Curves like this is feasible, but having Linear as the only option isn’t ideal.

Excellent mockup as usual Duncan. The ability to deal with NNA from a modulation set has my vote. I’m interested in the velocity/key response matrix. Did you think about some “meta-modulators” (like the meta devices, but working in the sampler), able to control parameters located beyond their own sets, for example parameters located in some FX chains?

There would be no need for extra meta-modulators. You still set up a Modulation and FX set with the existing v3.0 system, then assign it to the instrument. So you’d use meta devices in the instrument FX chain to alter devices in any other sample FX chain. But from what I understand, if effects were processed polyphonically per-voice of an instrument, it would be very CPU-intensive, and the cost would multiply further once you start linking chains across multiple samples. So this may not be something that could realistically happen.
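For a sense of scale, a back-of-the-envelope cost model (my own illustration and numbers, not anything from the devs):

```python
# Rough illustration of the per-voice FX cost concern; not measured data.
def fx_device_instances(voices: int, devices_per_chain: int,
                        linked_chains: int = 1) -> int:
    """If every voice ran its own copy of each chain it touches, the number
    of live device instances would multiply with voices and linked chains."""
    return voices * devices_per_chain * linked_chains

# e.g. 8 voices x 4 devices x 3 cross-linked chains = 96 running devices
print(fx_device_instances(8, 4, 3))
```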

It happens. I’ve successfully linked a Hydra device in one FX chain to another device located in a different FX chain of the same instrument. Some cross-chain modulations between different samples of an instrument are already possible. What’s not possible is to have, for example, a Hydra modulator in modulation set no. 1 that controls the LFO of modulation set no. 2…

Concerning the CPU overload risk: in the past, taktik explained that for each new instrument note to be played, a dedicated “hidden track” has to be created on demand and work hard in the background. He also said that blindly attaching FX chains to these instruments (which could be played freely on any track at any time) would quickly lead to CPU problems, and that this was worked around by introducing a rule: instruments with chains/FX can only be played in one track at a given time.

If it’s possible to modulate a parameter between FX chains of the same instrument without a problem, why couldn’t a modulator located in modulation set 1 target a parameter located in modulation set 2? We’re not talking about FX like the Exciter or the Cabinet Simulator here, just “modulations” of those effects. How much CPU does a modulator take? Is it as intensive as, for example, the Convolver? By extension, why not build a Hydra modulator that also targets the FX chains and modulates something there?

If we can get meta devices in modulation, then I’m all for it, but that will be worked out separately from what we’re discussing within this topic.

I’m just putting forth ideas for XRNI extension. Whether it’s feasible to allow for potentially excessive routing through polyphonic instruments, I’ll defer to the team members with technical knowledge of the situation.

Some good ideas in this thread.

I’ve said it before in another thread along with a bunch of other ideas, but here might be a good spot to repeat it:

What I would like to see is an option to have track-based instruments (in addition to normal instruments), i.e. instruments that are loaded on a per-track basis rather than being attached to the note value, along with the ability to hide the instrument column in the pattern editor and use a ‘null’ instrument, i.e. enter notes without any instrument attached. The ability to copy/paste and drag/drop selections of notes without instrument data would also be nice.

And also support for .sfz files :)

A couple more thoughts. First, being able to re-order (drag’n’drop) the entries in both devices would be of great benefit. Re-ordering the Stages would obviously be used to change the envelope, while re-ordering the responses would be very useful for keeping things organised.

Secondly, if it is indeed necessary to limit the abilities of instrument Mod and/or FX sets, then we can’t just allow them to be arbitrarily assigned, as this would become confusing. If they were automatically assigned as ‘Instrument Sets’ used only by the instrument, that would solve the problem and also remove the need to assign them manually via Instrument Properties (and remove those options from the image example above). To make things absolutely clear to users, these Instrument Sets would appear at the top of each list and could not be renamed, moved or deleted.

No disrespect, but why is this pinned?

Yay. The idea was to collect other related XRNI things in this topic. Didn’t happen. Unpinning…

Yes, a curve setting for the Decay stage of the ADSR would be nice, without having to use the fader operator.
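For example, here’s what a curve setting on a Decay stage could mean in practice (an assumed power-law curve, purely my own sketch, not how Renoise would necessarily implement it):

```python
# Sketch of a curved Decay: curve = 0.0 gives today's linear ramp; positive
# values drop faster at first, like a natural decay. The law is assumed.
def decay_level(t: float, curve: float = 0.0) -> float:
    """t is 0..1 progress through the Decay stage; returns volume from 1 to 0."""
    return 1.0 - t ** (2.0 ** -curve)
```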