Instrument modulation concept

The only problem with redundancy is that if I want to change something on a “class” of redundant items, I have to change it as many times as there are “instances” of that “class” in the set, and this can be a pain.

Well, what I take from this discussion about redundancy are mainly two things:

  1. It seems that setting up an instrument will take a multiple of the time it would take to set up the same instrument in any other software. So who is going to do this? Are you counting on commercial third-party instruments? Convincing a company to sell instruments in a rather uncommon format seems complicated enough, let alone if setting this up takes them far more time than setting it up for more common samplers. The last time a third-party vendor stopped producing instruments, it was because of the limitations of the Renoise sampler; I would rather not see this happen again in the future.

  2. CPU usage! Envelopes seem to use a huge amount of resources, so having unnecessary duplicates seems rather sub-optimal.

Straying a bit off-topic …

You mean PureMagnetik? These nice folk were assimilated by the Ableton collective and are no longer allowed to support other DAWs. Most likely a good business decision.
But on the contrary, I have been in contact with a number of sample/sound design companies and got rather good feedback from them on Renoise 3.0

Btw: I’m currently writing a little article for the Renoise blog on how to create a layered instrument. It will take you through the whole process, with screen captures and all :slight_smile:

Hear, hear!

Oh, sorry, I did not know that. I only remember them dropping XRNI support and suggesting that people use the EXS files via the additional file format tool.

Good to hear!

Cool, looking forward to reading it!

Not completely correct. After a lot of testing I found that it is the overlaid, antialiased envelope that consumes a lot of CPU, even when a normal one is maximized; the overlaid envelope is always processing its input while the instrument is focused (even when the overlaid envelope is not visible). So if this is fixed we will see a huge CPU improvement.

I really like the posted idea to share modulation sources by the means of color coding

Maybe it’s because I’m a tad drunk, but it’s hard to understand the complicated and somewhat philosophical problems people have.

If Renoise’s method is “chained stompboxes for everything”, then being able to instance/reuse such a stompbox in another location and have it follow the settings of its master is a good thing for me, no matter what is wired before or after the instanced stompbox. Even better if instances could be whole chains.
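A minimal sketch of this master/instance idea (all class and parameter names here are hypothetical, purely to illustrate the concept): an instance keeps a reference to its master device instead of a copy, so editing the master once updates every instance automatically — exactly the fix for the “change it as many times as there are instances” pain mentioned earlier.

```python
# Hypothetical sketch of an "instanced stompbox": the instance reads the
# master's parameters by reference, so master edits propagate instantly.

class Stompbox:
    def __init__(self, name, **params):
        self.name = name
        self.params = params

class StompboxInstance:
    """A lightweight alias that always reflects its master's settings."""
    def __init__(self, master):
        self.master = master

    @property
    def params(self):
        return self.master.params  # no copy: changes propagate instantly

master = Stompbox("LFO", rate_hz=2.0, amount=0.5)
alias = StompboxInstance(master)

master.params["rate_hz"] = 4.0   # edit the "class" once...
print(alias.params["rate_hz"])   # ...and every "instance" follows
```

Whole chains could be instanced the same way, with a chain object holding a list of such references.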

In my opinion there were already some pretty nice and feasible ideas in this thread (like modulation matrix).

Interesting. You’re writing that article, while still playing “let’s discuss concepts” in this thread. To me this first of all proves you never had any intention to change anything. It also perfectly fits your behavior in this thread. I guess that must be the professionalism Taktik was talking about. That tells enough.

Free advice: you’d better not ask them again once they become aware of what’s going on in the background.

Please note that what I proposed some time ago, though it seems more modest than Bit_Arts’ idea, would also provide, I believe, quite a powerful toolset, without a big rearrangement of the current workflow logic and without any layout changes:

Maybe a limitation, similar to what we have with track DSPs, would have to be introduced: if a given modulation chain mix-in device does not refer to its own modulation set, the referred set must be above the current one in the list of modulation sets. I’m not sure if such a limitation is necessary, but maybe it is (e.g. for easier management…).
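The proposed ordering rule could be sketched like this (the function and set names are illustrative assumptions, not Renoise API): a mix-in device may reference its own set, or any set above it in the list, which as a side effect rules out reference cycles.

```python
# Sketch of the proposed ordering rule, mirroring track-DSP routing:
# a mix-in device may only reference a modulation set at or above its own
# position in the list of modulation sets.

def reference_allowed(current_index, referred_index):
    """Allow self-references and references to earlier (higher) sets only."""
    return referred_index <= current_index

# Hypothetical list of modulation sets, top to bottom:
sets = ["Volume Mod", "Pitch Mod", "Filter Mod"]

print(reference_allowed(2, 0))  # Filter Mod -> Volume Mod: allowed
print(reference_allowed(0, 2))  # Volume Mod -> Filter Mod: rejected
```

Because every reference points upward, no chain of references can ever loop back to itself, so the sets can always be evaluated top to bottom in one pass.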

And if, in addition, some of the “meta-modulation” devices taktik referred to (i.e., using a modulation signal to modulate the parameters of modulation devices) were added to the equation, we would have a really flexible toolset.

Oh yeah, danoise is part of the team…Mmh, that is indeed a bit telling then.

As I read danoise’s post, it seemed that they already gave good feedback on the current implementation?

Interesting. You’re writing that article, while still playing “let’s discuss concepts” in this thread

Of course I’m writing that article. It is a brief introduction to a host of new features, which - among other things - teaches you how to insert a volume operand (!!!)

If the implementation is changed, cool. Ever written a piece of documentation? Many aspects of it will also need a refresh very soon…

But actually, this discussion made it painfully clear to me that I do NOT want to go into details yet, so skimming - not swimming.

They might have done so based on what they’ve seen so far. If they did. That still doesn’t mean they understood what works in which way, and it also doesn’t mean they have any experience with true sound design in it yet. The surface looks fine at first view; it did to me as well. You run into the issues once you dive deep into things. There are good reasons to doubt any professional sound designer would give positive feedback on this implementation once they really understand it. Especially since these guys choose effective platforms, because time is money. And setting up redundant envelopes and modulations means wasted time and wasted money.

this would certainly be a big plus


Why not? Indeed, there are some things that you can’t modulate at all through the new instrument tabs.

Sampler: Modulation tab

There are some “static parameters”, available from the sample properties panel, that you can’t modulate at all:

¤ modulating the default sample loop start/end markers is not possible
¤ modulating the default sample loop type (forward, backward, ping pong, none), and the ability to “exit the loop on note off” is not possible
¤ modulating the NNA behaviour (cut, note off, continue) is not possible

Those “sample properties” could be “moved” just under the volume, pitch, pan, and filter cutoff/resonance buttons.

Plugins tab

While you can add devices and FX chains in the Sampler tab, it is not possible to add “devices” such as *Instr. Automation and *Instr. MIDI Control in the instrument’s Plugin tab, and logically…


¤ It is not possible to use macros to modulate any *Instr.-type device in the Plugin tab (since those devices are not available there)
¤ It is not possible to use macros to modulate the global instrument scale type, quantization level, etc., available from all 3 new instrument tabs.
¤ It is not possible to use macros to modulate anything in the MIDI tab, including MIDI input and MIDI output params (for example: channel, bank, program, …)

Phrase Editor
¤ It is not possible to modulate the LPB through a ZLxx command in a Phrase; instead of modulating the global song LPB, ZLxx only changes the default Phrase LPB.

¤ modulating the default sample loop type (forward, backward, ping pong, none), and the ability to “exit the loop on note off” is not possible
¤ modulating the NNA behaviour (cut, note off, continue) is not possible

Can you tell me of any DAW or sampler that is capable of this? I don’t ask because I have anything against it, I’m just curious how it could be useful and work fluently. It seems a little weird to me (like the automation of scales, MIDI input, etc.).
Thank you.

But I agree that loop points should be automatable! It would be very handy…

EDIT: forget about the beginning of my post.

Additional points:

  1. It’s hardly possible to add some kind of “analog” or “human” touch to the sounds of the sampler without using track DSPs: virtual analog synthesizers like Korg’s simply have a parameter called “analog tune”. It varies the pitch randomly within a very small range, just like real analog synthesizers do.

So the pitch LFO frequency should be able to go all the way down to 0.0000 Hz. And maybe a finer amplitude range between 0.000 and 0.020.

Also, the AHDSR parameters should be directly influenced by randomness in very small portions. This should be possible without using track FX. Just my two cents.

  2. The AHDSR module needs a “curve” parameter: Log, Lin, Exp, etc.

  3. Possibility to control the loop position offset / sample play position using an LFO.
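The “analog tune” idea from point 1 can be sketched in a few lines (the function name and the ±2-cent range are illustrative assumptions): each new note trigger gets a tiny random pitch offset, mimicking component drift in a real analog oscillator.

```python
# Minimal sketch of an "analog tune" parameter: every note trigger receives
# a small random detune. Range and names are illustrative assumptions.

import random

def analog_tune(base_pitch_semitones, drift_cents=2.0):
    """Return the note pitch with a tiny random detune applied."""
    offset = random.uniform(-drift_cents, drift_cents) / 100.0  # cents -> semitones
    return base_pitch_semitones + offset

random.seed(42)  # fixed seed only so the example is repeatable
pitches = [analog_tune(60.0) for _ in range(4)]  # four triggers of the same note
print(pitches)  # each trigger lands within +/- 0.02 semitones of 60.0
```

Driving the AHDSR times through the same kind of small random offset would give the “humanized” envelopes point 1 asks for.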


So that is why we are having the debate now: beta time is also for that discussion.
The current structure is not fit for duty yet, but that doesn’t mean it will still be like this when Renoise enters the Release Candidate stage.
I’m not sure whether Renoise 3.0 will have state-of-the-art modulation chains by then, but perhaps the foundations will be a good starting point to extend from into the future.

What I would really like to see is an additional curve setting for the ADSR “decay” stage.
At the moment we have to insert a slider mod for that.

Isn’t it a bit weird to talk about redesigning a whole structure that is already implemented? It’s like an upside-down process; this should all have been addressed in the design stage, with some people from the community involved (like Bit_Arts) who know their way around modular environments.

Let’s compare Renoise to one of Roland’s old S+S romplers: the JD-990 or JV-1080/2080, etc. Each patch could have 4 layers/elements (user-defined samples), and each element could have its own cutoff, pitch, amp envelope and LFO. Renoise can do this too, but in Renoise’s case everything is still happening at control-rate level, meaning there is no possibility to do ring mod, FM or AM between samples; this is, IMHO, a must.
At the moment the new sampler is just capable of basic modulation, with no in-depth synthesis methods… really sad.
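To make the control-rate vs. audio-rate distinction concrete, here is a tiny illustration (pure Python, names and values are only for demonstration): ring modulation multiplies the two elements’ sample streams per audio sample, which a system that only evaluates modulators at control rate cannot do.

```python
# Ring modulation between two "elements" requires per-sample (audio-rate)
# processing: each output sample is the product of the two input samples.

import math

SR = 44100  # sample rate in Hz (illustrative)

def sine(freq, n):
    """Generate n samples of a sine wave at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

carrier = sine(440.0, 512)     # element A
modulator = sine(110.0, 512)   # element B

# Per-sample product; the result contains the sum and difference
# partials (550 Hz and 330 Hz) instead of the original pitches.
ring = [a * b for a, b in zip(carrier, modulator)]
print(len(ring))
```

A control-rate modulator updating, say, every 256 samples could only scale the carrier’s volume in steps; it could never produce these sidebands.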

I confess that those two modulations, if not limited, could quickly lead to some crazy and not-so-clear results. For example, modulating the NNA behaviour with a fast random LFO would just sound like a mess. However, modulating it with a keytracker or with a macro could allow some very interesting levels of control. So indeed, that is unusual, and probably no DAW allows it, but if you play with a very limited selection of allowed modulators it won’t look so crazy in the end.