We should probably all spend more time thinking about a better way of brainstorming, discussing, planning and explaining such features than discussing the features themselves. This thread is one big frustrating mess.
I’d indeed consider this a very important point, as a lot of this mess is caused by a lack of communication. None of “us” is here to fight “them”, I guess. But it’s often hard to remember that, because of missing (background) information and the turns things take because of that.
At the moment the new instrument design is only capable of basic modulation, with no in-depth synthesis methods… really sad
I think this is symptomatic of what goes wrong in a thread like this.
Synthesis? Yes, that is definitely interesting and relevant to R3, but only slightly related to the actual topic at hand.
I think Bit Arts has clearly demonstrated that he wanted to keep the discussion focused on one particular area, namely the potential redundancy of the modulation system.
Speaking of which: I personally don’t think we have reached the best possible implementation yet. I voiced some criticism of Bit Arts’ latest concept because, once I understood it, the restriction to single modulation sets seemed unnecessary. I want to see aliased devices as a way to link the device and its parameters only, not to include the actual computed values flowing through the device (so basically, prioritizing flexibility and ease of use over potential CPU optimizations).
Unfortunately, during these past 8 pages or so, it seems that my approach has angered Bit Arts a lot. But one thing is sure: being called all manner of names doesn’t improve things.
Btw: here are some discussions related to other aspects of the modulation system
Better editing workflow for the modulation matrix - “Pattern matrix”-like drag and drop, alt+drag functionality
- http://forum.renoise…950#entry306950
- http://forum.renoise…post__p__306967
- http://forum.renoise…y-device-chain/
- http://forum.renoise…single-sub-set/
- http://forum.renoise…oard-behaviour/
Global/shared instrument modulation, enhanced AHDSRs (with custom curves)
Routing of modulation devices (modulation meta-devices)
Grouped modulation processing (a.k.a. bracket devices, also mentioned in this thread)
Free-running LFO devices
Routing modulation to FX devices
danoise, thanks for this detailed link collection. IMO the forum is missing some kind of topic-tag functionality, so you could easily find threads related to a specific subject.
@danoise
I criticized the lack of audio modulation possibilities because I think, if these will be implemented sooner or later, it should be done correctly from the beginning.
And since audio modulation is a big and important part of the modulation system, I thought this thread was the best place to post future suggestions!
Good point. I think Renoise development would benefit from more openness and a wider user base during feature planning (like a blog post about a feature with an example implementation, followed by an open discussion). I think the beta phase is too late a point to start rethinking features.
Regarding modulation (synthesis-style), one idea suggested previously (years ago) would be to add AM, FM and related effects as track DSPs, so anyone could modulate anything without first preparing an instrument for it. Phrases would have exactly the same DSPs, to implement such synthesis inside an instrument.
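To make that concrete, here is a minimal sketch of what such an AM track DSP could do to whatever audio is already on a track; the function and parameter names are made up for illustration and are not an existing Renoise API:

```
# Sketch of amplitude modulation as a generic track DSP: the carrier is
# the track's existing audio, the modulator an internal sine.
# Illustrative only - none of this is Renoise API.
import numpy as np

def am_dsp(buffer, sample_rate=44100, mod_freq=30.0, depth=0.5):
    """Amplitude-modulate a buffer: y = x * (1 - depth + depth * sin(2*pi*f*t))."""
    t = np.arange(len(buffer)) / sample_rate
    modulator = np.sin(2.0 * np.pi * mod_freq * t)
    return buffer * (1.0 - depth + depth * modulator)

# Any track signal could be processed this way, without preparing an instrument:
track_audio = np.random.default_rng(1).standard_normal(44100)
wobbled = am_dsp(track_audio, mod_freq=4.0, depth=0.8)
```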
To my limited understanding, the colorized modulation matrix mockup afta8 posted earlier made sense. Add single-instance modulation devices to it and it might be enough for now (a fixed number of chains that could share instanced components between them).
I’d still consider that a mistake, because instruments are a core competence of a sample player. But I’m not able to change it anyway, so it’s pointless to discuss this any further.
If that was the case, I and everybody else would have stopped thinking after the first suggestions, waiting for the implementation. Everyone contributing on-topic in this thread is trying hard to find an approach that solves the problems, or at least helps to, while causing the least effort for the dev team. Every one of these guys, including me, is seriously trying to support you and the rest of the dev team and to work towards you. Nobody is just leaning back and waiting for results.
Well, speaking for myself, I have to say it did indeed appear like that here and there, and quite often especially in this thread. But as I said before, the simple lack of communication alone might be the cause of that. And I guess more open talk about what’s going on and/or causing problems on the devs’ side would be an important contribution to preventing things like that. Users aren’t clairvoyant. But at the moment they often kind of have to be when they’re trying to help.
And you’re very obviously not the only ones.
Well, that’s half of the truth. While I’m absolutely willing to believe you, you didn’t think of the exponential redundancy caused by both redundancy levels in combination. But it’s there, and it becomes huge in no time. (For those who don’t know about the issue with redundant sample processing, this is the related thread.)
I’ve posted about this here earlier in this thread:
An 8-voice unison8 with a quite basic envelope setup (taken from the 303 example) would need, with both redundancy levels (modulation & sample level) multiplying each other:
- 512 envelopes, 1024 assigned parameters, 128 envelope modulators.
while a proper concept would only require
- 8 envelopes, 32 assigned parameters and 16 envelope modulators.
10 redundant LFOs might not affect the performance. Maybe 20 don’t either. But nobody’s going to tell me that 512 processed envelopes, 1024 parameter movements and 128 envelope modulators for a single 8-voice instrument don’t affect the performance. This is not a bottleneck caused by filters only. Those might cause the main hit on the CPU, but with each sample layer I use in an instrument, the whole thing becomes more redundant, and it does so on the modulation level. That’s happening BESIDES the redundant filtering. When you use several instruments like that in an arrangement, you easily reach a few thousand envelopes and parameters, and those WILL affect the performance. These are facts, not something I’m making up.
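To make the arithmetic explicit, here is a minimal sketch. The per-set device counts are assumptions chosen so the redundant totals match the figures above; the exact “proper concept” totals additionally depend on how much sharing across voices that concept assumes:

```
# Sketch of how the two redundancy levels multiply. The per-set counts
# below are assumptions for illustration; only the scaling matters.
VOICES = 8   # 8-note polyphony
LAYERS = 8   # unison8: 8 sample layers per note

ENVELOPES_PER_SET = 8
PARAMETERS_PER_SET = 16
MODULATORS_PER_SET = 2

# Current behavior: a full modulation set instance per voice AND per layer.
lanes = VOICES * LAYERS  # 64 parallel processing lanes
print(lanes * ENVELOPES_PER_SET,    # 512 envelopes
      lanes * PARAMETERS_PER_SET,   # 1024 assigned parameters
      lanes * MODULATORS_PER_SET)   # 128 envelope modulators

# Sharing one set instance per voice removes the "x LAYERS" factor;
# sharing across voices as well (as the lower figures above seem to
# assume) reduces the counts even further.
print(VOICES * ENVELOPES_PER_SET)   # 64 envelopes instead of 512
```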
Actually it’s not about WHO is right, but WHAT is right. I doubt on-topic discussions can always be held like a carnival in rainbow land. But I agree, “more together” would be nice.
That indeed seemed to be the original main point of the discussion, and I am very curious to hear the devs’ opinion on this in particular.
I have to get up early tomorrow, so it’s already a bit late to respond to the whole thing properly. I’ll do that tomorrow. But I wanted to pick up one point now:
Yeah, I’m pretty pissed. I guess that’s no news or surprise. I don’t think I called all manner of names, but I did use the words “idiotic” and “stupid”. And you’re right: that doesn’t belong here, doesn’t make anything better or help anyone. Sometimes the bit arts unit overheats. I want to apologize for that, especially because you stayed respectful the whole time. I hope you can accept that apology. Just know I wouldn’t offer it if it wasn’t meant honestly.
512?
So… every sample assigned to a particular mod set is not first mixed together before modulation and filtering are applied? I hope one doesn’t need 448 or so filters to mock up a simple 8-voice poly supersaw, even if they do deserve it for making supersaws.
Every voice is processed separately. So, yes… when you hit 8 keys on a unison8 supersaw at the moment, 64 parallel filters will kick in, along with 64 complete modulation sets.
That frankly seems a bit overkill. It seems to me that each processing ‘lane’ should be treated as one voice, and each sample thought of as one ‘oscillator’. Outside of a few exceptions and modular setups with separate oscillator outputs, synths of all sorts mix all oscillators together before sending the signal forward. You only need one filter per voice of polyphony; more than one would be better spent on parallel and serial filtering features. I wonder what the reasoning for diverging from the norm is here.
[I’m not talking about the other layers. Of course they need their own filters and modulation options too, but I equate that to normal multitimbral stuff.]
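For identical linear filters, mixing first doesn’t even change the sound. A minimal sketch with a toy one-pole lowpass (illustrative code, not Renoise internals):

```
# The two voice architectures compared with a toy one-pole lowpass.
import numpy as np

def one_pole_lp(signal, coeff=0.1):
    """Toy one-pole lowpass: y[n] = y[n-1] + coeff * (x[n] - y[n-1])."""
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += coeff * (x - y)
        out[i] = y
    return out

rng = np.random.default_rng(0)
layers = [rng.standard_normal(512) for _ in range(8)]  # 8 sample "oscillators" of one voice

# Per-sample architecture: one filter per layer -> 8 filter runs per voice.
per_sample = sum(one_pole_lp(layer) for layer in layers)

# Per-voice architecture: mix the layers first, then filter once.
per_voice = one_pole_lp(sum(layers))

# A linear, time-invariant filter commutes with summation, so both paths
# produce the same audio - but the per-voice path runs the filter once
# instead of 8 times.
print(np.allclose(per_sample, per_voice))  # True
```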
I don’t think this happened by intention; it just slipped through. The point is how much effort it takes to fix, because it most likely goes down to basic structures and the voice architecture, I guess. Modulation sets and processing are applied to individual samples, not to layers. Still, I have no clue how huge the problem behind this really is or isn’t. The devs will know.
Yes, exactly. The case of a supersaw with 8 layered samples, all using the same modulation set is a special case which definitely should be addressed. Already have this on my list.
The reason why it isn’t yet is that samplers (well, the Renoise sampler) work differently from a synth here: in synths you usually mix down [and cross-modulate] all oscillators first, creating a voice, then apply modulation and filters to this single voice. In our system every !sample! creates a voice, because you may want to route it differently or process it individually. So you could, for example, use a different modulation set for every individual sample, which is a pretty cool thing to do in the supersaw example.
When all samples share the same modulation set, this indeed is not necessary. At least the filters should then be processed per “layer” instead of individually.
I’ve mentioned a few times before that the filters are actually the real bottleneck here. Running individual modulation sets per sample is of course also less efficient, but compared to the filter processing it’s nearly irrelevant.
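A rough back-of-envelope illustrates the proportions, assuming (purely for illustration, not a statement about Renoise internals) that modulation devices update once per 64-sample control block while a filter does a handful of multiply-adds on every audio sample:

```
# Back-of-envelope for why filters can dominate. All constants here are
# assumptions for illustration, not measured Renoise figures.
SAMPLE_RATE = 44100
CONTROL_BLOCK = 64            # assumed control-rate block size
OPS_PER_FILTER_SAMPLE = 9     # rough biquad cost: 5 mults + 4 adds
OPS_PER_MOD_UPDATE = 20       # rough cost of one envelope/LFO update

lanes = 64  # 8 voices x 8 layers, as in the supersaw example
filter_ops = lanes * SAMPLE_RATE * OPS_PER_FILTER_SAMPLE
mod_ops = lanes * (SAMPLE_RATE // CONTROL_BLOCK) * OPS_PER_MOD_UPDATE
print(f"filter ops/s: {filter_ops:,}")   # ~25 million
print(f"mod ops/s:    {mod_ops:,}")      # ~0.9 million
```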
You cannot state such things if you don’t know how the inner code is done. I don’t assume anything here code-wise. I could guess all functions are built up modularly and can easily be shifted to connect differently and work in a different fashion. If that is the case, then the UI is usually a small thing to change, as is shifting functions. If you know in advance that something is not going to be fully finished, but you concentrate on a specific element, I personally would design it so that I can easily change functional behavior if the later implementation of specific features requires it. You always have to play chess like grandmaster Kasparov when you do programming. If you translate each hour that can be spent in a master chess match to a year, then go figure how long it takes to develop a good tool for anything (including music production).
There was no planning for this in the current release anyway. You may perhaps not be glad about it, and I would have considered it nice if at least parallel modulation and feedback had been possible, but currently none of that is put up by the devs for debate for Renoise 3.0. If it would be an option for 3.1, then there is plenty of room to consider a good design for it, and especially now, during 3.0, is the best time to look ahead at whether everything built so far will do, or needs changes under the hood to make it happen later. So that is why I frankly said that now is a good time to discuss it.
Yes, exactly. The case of a supersaw with 8 layered samples, all using the same modulation set is a special case which definitely should be addressed. Already have this on my list.
Wow, this is really good news!
I am personally not that concerned about CPU usage, having had no such issues since I purchased my (now-aging) computer in 2009, but I know it is an important factor for many people who use massive amounts of plugins / voices / samples / whatnot, or who are just concerned about efficiency in general.
But perhaps more importantly than raw performance: the onus should not be on the person using the software (in terms of needlessly complicated and/or limited features) when it can be addressed quite smartly in the background. I just wasn’t sure if such a thing was going to materialize in 3.0 or not.
Does this mean that there should be more flexibility in being able to arrange order of modulation sets and fx chains?
Or some sort of routing abilities?
Well, I am afraid this is just not true!
Take the Mau5chors example Bit Arts posted here at some point. It uses 1 modulation set (Volume, Cut, Res) shared by 4 samples. If I play his demo song, Renoise (RC2) takes 61% CPU on my i7 quad-core at 2.2 GHz. If I disable this modulation set on the four samples, CPU usage goes down to 20%!!!
So yes, the modulations take A LOT of CPU!
It’s the filters which eat up most of the CPU and not the modulation. Try deleting all modulation devices in that example but keep the filters enabled.