Instrument modulation concept

This “debate” is still up only because every day in this thread is Groundhog Day. “The current system is better, because I just make up some fantasy stuff.” “The current system is easier, because I just make up some fantasy stuff.” “I don’t care about some redundancy, because striking 4 keys only eats up 30% of a Q9300’s CPU power.” It’s the same each and every day.

What’s going on here is not a debate. It’s whistling in the dark as a form of failure management. It’s a lot of blah-blah, talking around things without making any sense. And meanwhile a development team is making a laughing stock of itself without even noticing.

You know, things like maldevelopments can happen anytime and anywhere, because we’re all human. It might even happen after 2 years of development. What makes a real pro, then, is identifying the problem and working on it to solve the issues directly. But I can’t see that happening here. All I see is still “The current system is better, because I just make up some fantasy stuff.”, “The current system is easier, because I just make up some fantasy stuff.”, “I don’t care about some redundancy, because striking 4 keys only eats up 30% of a Q9300’s CPU power.” again and again and again. Groundhog Day. Just like the current concept, I’d call that idiotic. I know, that’s awfully unprofessional. But I fail to find a better description.

It’s not all black and white and it’s not “them” against “us”.

It’s indeed very unlikely that we’ll rewrite the modulation features in R3 within the beta stage, because we simply don’t have the resources for this. We are a small team. There is a whole bunch of other things we’d like to do for Renoise 3 as well, apart from working on the new modulation features. So even if we wanted to, we couldn’t. But we’re also not making up stories here just to avoid changing anything in the current system.

But we indeed have to make compromises and find stuff that “works”, while you can relax and simply insist on your idea. We’re trying to find a solution, looking for ideas on how to improve things - that’s what this whole beta is about, apart from bug fixing. Compromises have to be made in this process. Lots of ideas have to be weighed against each other. Saying that we’re simply the bad guys and are fooling people is not true, and also not fair. We obviously care a lot about this Renoise thing.

You also have quite a deep understanding of how such things may work under the hood, yes. But you’re still guessing a lot and trying to sell those guesses as facts. The notes about CPU usage and “redundancy”, for example, are not true. I tried to explain in a different thread that the main bottleneck here is not the processing of the modulation devices, but the individual per-voice processing of the filters (when filters are used). Of course running an LFO only once and then applying the output to 10 targets is faster than processing 10 LFOs individually, but compared to everything else that happens within the instrument, this overhead simply is not relevant - not the main problem, not the main bottleneck, not what makes the whole thing either efficient or slow. So if CPU usage is one of your main concerns, removing this redundancy does not solve the problem.
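To illustrate the point, here is a minimal sketch of the two evaluation strategies (all names here are made up for illustration, not actual Renoise engine code):

```python
import math

class Target:
    """A modulated parameter (e.g. cutoff) - purely illustrative."""
    def __init__(self):
        self.value = 0.0
    def apply(self, v):
        self.value = v

def lfo_value(phase):
    """One sine LFO evaluation."""
    return math.sin(2.0 * math.pi * phase)

def shared_lfo(phase, targets):
    """Evaluate the LFO once, then fan the result out to all targets."""
    value = lfo_value(phase)        # 1 evaluation
    for target in targets:
        target.apply(value)         # N cheap assignments

def per_voice_lfos(phases, targets):
    """One LFO per target: N evaluations instead of 1."""
    for phase, target in zip(phases, targets):
        target.apply(lfo_value(phase))
```

The difference between the two is N-1 cheap evaluations per control step, while a per-voice filter has to run for every audio sample of every voice - which is why the filters, not the modulation devices, dominate the CPU cost.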

And once again, could we please stop with all this personal “I’m right, you’re talking bullshit” attitude, collect ideas and look for solutions to the problems together, instead of building broad-based fronts between “them” and “us”?

I’m getting tired of this attitude. People fill their mouths with “professional” bullshit while speaking and behaving more like moaning teenage girls.
Instead of getting on our nerves, try approaching it the other way round. Then it will be easier and more positive for us to discuss constructive criticism.
Period.

Self-perception. Some have it, some don’t. This thread, btw, is full of constructive criticism. You can easily find it once you stop waiting with closed eyes for someone to crawl up your ass.

I’ll get back to this once I’ve calmed down from reading the last poster’s shit.

The problem is that the criticism in here is buried under your shit, so it’s not comfortable for us to find.

There’s a German saying: “Hit dogs bark.”

Great, guys, let’s have an old-fashioned bitch-slap fight… NOT.
Bit Arts has great ideas - the thread proves it - but Bit Arts should also understand that the devs have their hands full fixing bugs. I think he was under the illusion that the modulation system would be overhauled by the next official release, which is clearly not possible (and that’s understandable).

Probably we should all spend more time thinking about a new way of brainstorming, discussing, planning and explaining such features than discussing the features themselves. This thread is one big frustrating mess.

I’d consider this indeed a very important point, as a lot of this mess is caused by a lack of communication. None of “us” is here to fight “them”, I guess. But it’s often hard to remember that, because of missing (background) information and the course things take as a result.

At the moment the new instrument design is only capable of basic modulation, with no in-depth synthesis methods… really sad.

I think this is symptomatic of what goes wrong in a thread like this.

Synthesis? Yes, that is definitely interesting and relevant to R3, but only slightly related to the actual topic at hand.

I think Bit Arts has clearly demonstrated that he wants to keep the discussion focused on one particular area, namely the potential redundancy of the modulation system.

Speaking of which: I personally do not think we have reached the best possible implementation yet. I voiced some criticism of Bit Arts’ latest concept because, once I understood it, I felt that the restriction to single modulation sets seemed unnecessary. I want to see aliased devices as a way to link the device and its parameters only, not to include the actual computed values flowing through the device (so basically, prioritizing flexibility and ease of use over a potential for CPU optimizations).
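As a minimal sketch of what I mean (the names and structure are invented for illustration, not engine code): the parameter set is shared between aliases, while each instance keeps its own computed values.

```python
import math

class LFOParams:
    """Shared (aliased) parameters - editing one alias edits them all."""
    def __init__(self, frequency=1.0, amplitude=1.0):
        self.frequency = frequency
        self.amplitude = amplitude

class LFOInstance:
    """Per-voice state: phase and computed output stay independent."""
    def __init__(self, params):
        self.params = params   # a reference, not a copy
        self.phase = 0.0

    def step(self, dt):
        self.phase = (self.phase + self.params.frequency * dt) % 1.0
        return self.params.amplitude * math.sin(2.0 * math.pi * self.phase)

# Two aliases of the same device: tweaking shared.frequency affects both,
# but their phases and outputs remain per-voice.
shared = LFOParams(frequency=2.0)
voice_a, voice_b = LFOInstance(shared), LFOInstance(shared)
```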

Unfortunately, during these past 8 pages or so, it seems that my approach has angered Bit Arts a lot. But one thing is sure: being called all manner of names doesn’t improve things.

Btw: here are some discussions related to other aspects of the modulation system

Better editing workflow for the modulation matrix - “Pattern matrix” alike drag and drop, alt+drag functionality

Global/shared instrument modulation, enhanced ADHSRs (with custom curves)

Routing of modulation devices (modulation meta-devices)

Grouped modulation processing (a.k.a. bracket devices, also mentioned in this thread)

Free-running LFO devices

Routing modulation to FX devices

danoise, thanks for this detailed link collection. IMO the forum is missing some kind of topic-tag functionality, so you could easily find threads related to a specific topic.

@danoise
I criticized the lack of audio modulation possibilities because I think that if these are going to be implemented sooner or later, they should be done correctly from the beginning.
And since audio modulation is a big and important part of the modulation system, I thought this thread was the best place to post future suggestions!

Indeed. I’ll try and expand this into a pinned topic. Thanks for the heads-up.

Edit: done

Good point. I think Renoise development would benefit from more openness and a wider user base during feature planning (like a blog post about a feature with an example implementation, followed by open discussion). I think the beta phase is too late a point to start rethinking features.

Regarding modulation (synthesis-style), one idea suggested previously (years ago) would be to add AM, FM and related effects as track DSPs, so anyone could modulate anything without first preparing an instrument for it. Phrases would have the exact same DSPs, to implement such synthesis inside an instrument.
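Roughly, such DSPs would compute something like the following (a sketch only; the function names and parameters are invented here, not an actual Renoise API):

```python
import numpy as np

SR = 44100  # sample rate, assumed

def am(carrier, mod_freq, depth=1.0):
    """Amplitude modulation: scale the carrier block with a sine LFO."""
    t = np.arange(len(carrier)) / SR
    modulator = 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_freq * t))
    return carrier * (1.0 - depth + depth * modulator)

def fm_osc(carrier_freq, mod_freq, mod_index, n_samples):
    """Simple FM (phase modulation) oscillator."""
    t = np.arange(n_samples) / SR
    return np.sin(2.0 * np.pi * carrier_freq * t
                  + mod_index * np.sin(2.0 * np.pi * mod_freq * t))
```

As a track DSP, something like `am` would run on whatever audio passes through the track, which is what would make it usable without preparing an instrument first.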

To my limited understanding, the colorized modulation matrix mockup afta8 posted earlier made sense. Add single-instance modulation devices to it and it might be enough for now (a fixed number of chains that could share instanced components between them).

I’d still consider that a mistake, because instruments are a core competence of a sample player. But I’m not able to change it anyway, so it’s pointless to discuss this any further.

If that were the case, everybody else and I would have stopped thinking after the first suggestions, waiting for the implementation. Everyone contributing on topic in this thread is trying hard to find a way that solves the problems, or at least helps to, AND causes the least effort for the dev team. Every one of these guys, including me, is seriously trying to support you and the rest of the dev team and to meet you halfway. No one is just leaning back and waiting for results.

Well, speaking for myself, I have to say it did indeed appear that way here and there, and quite often in this thread especially. But as I said before, the simple lack of communication alone might be the cause of that. And I guess a more open conversation about what’s going on and/or causing problems on the devs’ side would be an important contribution to preventing things like this. Users aren’t clairvoyants. But at the moment they often kind of have to be when they’re trying to help.

And you’re very obviously not the only ones.

Well, that’s half of the truth. While I’m absolutely willing to believe you, you didn’t account for the combined redundancy caused by both redundancy levels multiplying each other. But it’s there, and it becomes huge in no time. (For those who don’t know about the issue with redundant sample processing, this is the related thread.)

I’ve posted about this earlier in this thread:

An 8-voice unison8 instrument with a fairly basic envelope setup (taken from the 303 example) would need, with both redundancy levels (modulation & sample level) multiplying each other:

  • 512 envelopes, 1024 assigned parameters, 128 envelope modulators.

whereas a proper concept would only require

  • 8 envelopes, 32 assigned parameters and 16 envelope modulators.

10 redundant LFOs might not affect the performance. Maybe 20 don’t either. But no one is going to tell me that 512 processed envelopes, 1024 parameter movements and 128 envelope modulators for a single 8-voice instrument don’t affect the performance. This is not a bottleneck caused by filters alone. Those might cause the main hit on the CPU. But with each sample layer I use in an instrument, the redundancy of the whole thing multiplies further - and it does so on the modulation level. That’s happening BESIDES the redundant filtering. When you use several instruments like that in an arrangement, you easily reach a few thousand envelopes and parameters. And those WILL affect the performance. These are facts, not something I’m making up.
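To make the arithmetic behind those totals explicit (the per-set breakdown below is a reconstruction from the totals, not a confirmed description of the 303 example):

```python
voices        = 8                       # keys held
unison_layers = 8                       # samples per voice (unison8)
instances     = voices * unison_layers  # 64 copied modulation sets

# Assumed contents of one modulation set, reconstructed from the totals:
envelopes_per_set  = 8
params_per_set     = 16
modulators_per_set = 2

print(instances * envelopes_per_set)    # 512 envelopes
print(instances * params_per_set)       # 1024 assigned parameters
print(instances * modulators_per_set)   # 128 envelope modulators
```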

Actually it’s not about WHO is right, but WHAT is right. I doubt on-topic discussions can always be held like a carnival in rainbow land. But I agree, “more together” would be nice.

That indeed seemed to be the original main point of the discussion, and I am very curious to hear the devs’ opinion on this in particular.

I have to get up early tomorrow, so it’s already a bit too late to respond to the whole thing on topic. I’ll do that tomorrow. But I wanted to pick up one point now:

Yeah, I’m pretty pissed. I guess that’s no news or surprise. I don’t think I called you all manner of names, but I did use the words “idiotic” and “stupid”. And you’re right: that doesn’t belong here, doesn’t make anything better or help anyone. Sometimes the Bit Arts unit overheats. I want to apologize for that, especially because you stayed respectful the whole time. I hope you can accept the apology. Just know I wouldn’t offer it if it weren’t meant honestly.

512?

So… samples assigned to a particular mod set are not first mixed together before modulation and filtering are applied? I hope one doesn’t need 448 or so filters to mock up a simple 8-voice poly supersaw, even if they do deserve it for making supersaws ;).

Every voice is processed separately. So, yes… when you hit 8 keys on a unison8 supersaw at the moment, 64 parallel filters will kick in, along with 64 complete modulation sets.
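In other words, the processing structure looks roughly like this (an illustrative sketch with trivial stand-in helpers, not actual engine code):

```python
def run_modulation_set(voice):
    """Stand-in for evaluating one complete modulation set."""
    return {"cutoff": 0.5}

def run_filter(voice, block, mods):
    """Stand-in for one per-voice filter pass."""
    return [s * mods["cutoff"] for s in block]

def process_block(held_keys, unison_layers, block):
    out = [0.0] * len(block)
    for key in held_keys:                    # e.g. 8 held keys...
        for layer in range(unison_layers):   # ...times 8 unison samples
            voice = (key, layer)             # = 64 active voices
            mods = run_modulation_set(voice)           # 64 full mod sets
            filtered = run_filter(voice, block, mods)  # 64 filters
            out = [o + f for o, f in zip(out, filtered)]
    return out
```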