Instrument modulation concept

I’ve been fiddling with the instruments for several days now and have to say, there is basic conceptual stuff that should seriously be thought over, not least regarding the upcoming Redux. It is common synthesizer architecture to set up modulators & envelopes independently and then bind parameters to them. Renoise atm handles it exactly the other way around and therefore, for a lot of sounds, requires setting up redundant modulators. That really doesn’t make much sense. It’d not only be way more efficient to provide independent sets of modulation sources, it’d also make sound design way easier.
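To make the contrast concrete, here’s a minimal Python sketch of the “modulator-first” architecture described above. All names here are made up for illustration; this has nothing to do with Renoise’s actual internals. The point is simply that one envelope object exists once and several parameters are bound to it.

```python
class Envelope:
    """A single modulation source: linear segments between breakpoints."""

    def __init__(self, points):
        # points: list of (time, value) pairs, sorted by time
        self.points = points

    def value_at(self, t):
        if t <= self.points[0][0]:
            return self.points[0][1]
        for (t0, v0), (t1, v1) in zip(self.points, self.points[1:]):
            if t0 <= t <= t1:
                # linear interpolation inside this segment
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        return self.points[-1][1]


# "Modulator-first": ONE envelope, many parameters bound to it.
env = Envelope([(0.0, 1.0), (1.0, 0.0)])
bindings = {"volume": env, "cutoff": env, "resonance": env, "accent": env}

# Evaluating all bound parameters needs a single envelope lookup.
t = 0.25
level = env.value_at(t)
params = {name: level for name in bindings}  # every target gets 0.75 here
```

The parameter-first alternative would instead require one `Envelope` instance per bound parameter, all holding identical breakpoints, which is the redundancy the post complains about.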

This imo really should be thought over again.

I don’t think this will happen, since it’s already in beta.
But who knows…

Can you elaborate on what you’re trying to do, how it currently needs to be done and how it should be changed to make it easier?

I know that’d be an unusual step and would mean a hard cut, serious work and serious changes. But all that would be better than carrying over the wrong concept to later versions and things like Redux.

During the alpha phase, it was suggested to allow devices in both fx chains and modulation chains to feed back into existing or parallel chains, which would sort of have allowed basic FM synthesis and would also have resolved a lot of the double-device requirement problems. The alternative compromise suggestion was to allow linking (docking) devices directly to each other, so that one device can influence the parameters of the device directly docked to it, letting devices control each other.

A pretty simple example:

One of the things I was going to set up was my 303 emulation. We’re talking about the simplest of monophonic synthesizers here.

A 303 uses the same exponential envelope for volume, filter cutoff and resonance, plus the accent. That means I’d actually be able to trigger all affected params with a single envelope. I’d just have to assign all parameters to the same envelope. Additional modulation would be the length of the envelope + its depth. I’m dropping the description of additional stuff to set up here, to keep it simple.

So, summary for the common setup: 1 envelope, 2 envelope modulators, 4 assigned params
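That common setup can be sketched in a few lines of Python. This is an illustration of the idea only; `exp_env`, `modulate` and the numbers are invented for the sketch, not Renoise or actual 303 internals:

```python
import math


def exp_env(t, length):
    """Exponential decay, the classic 303-style envelope shape (sketch)."""
    return math.exp(-t / length)


# One envelope, two envelope modulators (length and depth),
# four parameters bound to the same source.
length = 0.3   # modulator 1: decay time in seconds
depth = 0.8    # modulator 2: modulation amount (0..1)


def modulate(base, t):
    """Scale a parameter's base value by the one shared envelope."""
    return base * (1.0 - depth + depth * exp_env(t, length))


targets = {"volume": 1.0, "cutoff": 0.7, "resonance": 0.5, "accent": 0.9}
values = {name: modulate(base, 0.1) for name, base in targets.items()}
```

Changing `length` or `depth` here affects all four targets at once, which is exactly what the two macro knobs are supposed to do with a single shared envelope.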

The current Renoise way:

  • To achieve the exponential curve I need, it takes two Modulation Faders.
  • Those two Faders have to be set up for volume, +2 for cutoff, +2 for resonance and +2 for accent.

That’s already 8 actual envelopes. Those now also need 2 envelope modulators. Still, I can do that via 2 Macro buttons. But I have to assign those to all the envelopes. This means, in Renoise it needs:

8 envelopes, 16 assigned parameters and 2 envelope modulators

Well, this is only part of the whole setup, but I guess it makes things clear. Remember, we were talking about a monophonic sound here. If this were a synth for playing chords, with 8 notes we’d easily have reached 64 envelopes, 128 params and 16 envelope modulators used in the background.
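The scaling claimed above is plain arithmetic; a throwaway Python check of the numbers (no Renoise API involved):

```python
voices = 8

# Common architecture: shared sources, counts don't multiply per target.
common = {"envelopes": 1, "modulators": 2, "assigned_params": 4}

# Described per-parameter setup, per voice: 2 faders x 4 targets.
per_voice = {"envelopes": 8, "modulators": 2, "assigned_params": 16}

# With 8-note polyphony the per-voice counts scale linearly.
polyphonic = {k: v * voices for k, v in per_voice.items()}
print(polyphonic)  # {'envelopes': 64, 'modulators': 16, 'assigned_params': 128}
```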

It’s not about what is allowed where. The entire concept imo is a maldevelopment, for whatever reason. As uncomfortable as it might be, NOW is still the time to change this. Carrying this over is not gonna make anything better.

It sounds like the current implementation adds a lot of unnecessary CPU overhead indeed, looking at bitarts’s example…

Thanks for the info, the current way seems redundant indeed. I guess you’d need something like this: having the modulation before the options, and being able to link the same envelope to multiple destinations;

edit: the mock-up sucks :), because both lines should come out of one output, but you catch my drift.

Yeah, I’d imagine it a bit differently, but that’d basically be it.

I agree that the current system isn’t perfect, and most of the points you raise are perfectly valid.
The whole building-block approach to modulation is a pretty unique one, and could be abstracted/optimized even more (as you suggest).

But, IMHO the most important thing for us to focus on right now isn’t that the modulation system involves some redundancy (it certainly does), but rather that we actively seek out and identify things that simply aren’t possible (a couple of examples: smooth movement, asynchronous LFOs).

To even extend a maldevelopment? This isn’t “only” about (imo massive) redundancy. This is also a massive usability issue, about basic conception. A sound designer has to spend a multiple of his usual time here. Things are organized confusingly, because they can’t easily be kept in sync. This might be no issue for someone just dropping a sample in. But for someone going to use the potential of the new instruments, this is going to become a huge pain in the ass in no time. And it’s going to be the same for each and every new sound he starts to work on.

I’d also prefer to be fiddling with sounds now, instead of writing on the forum. And I completely understand that no one is going to applaud my suggestion. But sorry, I really can’t agree with you here.

Well, as I said, I agree with what you said. My primary concern is that we think about all possible features, and it helps to have some clear usage cases then.

As for “maldevelopment”, I think this is overdramatizing it a bit. We don’t end up painting ourselves into a corner if we can upgrade the system transparently. I think this would be the case with your single-envelope 303 example?

When a concept works exactly the opposite way of the one that makes the most sense, I’d clearly call it a maldevelopment. I don’t think that’s overdramatizing, but just realistic. Anyway, it’s not my development and it’s not my job to convince anyone here.

I don’t know if I have the right to say anything, since I don’t have the beta yet and haven’t had any real hands-on experience with Renoise 3.0.
The way Bit Arts describes the modulation setup sounds like a real pain in the butt, compared to djeroek’s mock-up of how it should be (and how it is in most other programs).

The result of keeping this is gonna be that no one is gonna use the potential of the new instruments. New Renoise is gonna sound like old Renoise. So be it.


Don’t be too pessimistic, you can’t expect the devs to change something they have been working on for 2 years. While I totally agree with you… I say hold on… maybe in a future update.

I’m mainly with Danoise here that the current system isn’t perfect, but at least it is a great way to do a lot of stuff that wasn’t possible before. As weird (uncommon) as it is, it may be just as helpful for doing interesting new things.

Reading your description you mainly seem to worry about how many envelopes are used internally. This is not a bottleneck at all regarding CPU usage. Other things are way more relevant here.
It also should not be the goal of any “modulation concept” to use as few CPU cycles as possible, but to do a lot of interesting stuff in not “too many” CPU cycles.

You are also right that to emulate some of the classic synths, you need more devices and more parameters in Renoise. But the good part of this is that you actually CAN set up individual modulation for all this in the first place?

It doesn’t matter how long they’ve been working on this. It is a wrong concept and it is bad. Imo all alpha testers involved in this should consider maybe taking up painting or drawing instead. No one who really knows their stuff would have let that pass alpha testing. If this concept is kept and there is no statement about changing it before January 2nd, I’m gonna take the current FL Studio offer and walk off.

When Redux hits the shelves with this concept and anyone in the specialized press becomes aware of it, it’ll be “Good night, mum!”. For Redux AND Renoise. Why do I even care!? Screw it…