Every voice is processed separately. So, yes… when you hit 8 keys on a unison8 supersaw atm, 64 parallel filters and 64 complete modulation sets will kick in.
That frankly seems a bit overkill. To me, each processing ‘lane’ should be treated as one voice, and each sample thought of as one ‘oscillator’. Apart from a few exceptions and modular modules with separate oscillator outputs, synths of all sorts mix all their oscillators together before passing the signal on. You only need one filter per voice of polyphony; anything beyond that would be better spent on parallel and serial filtering features. I wonder what the reasoning for diverging from the norm is here.
[I’m not talking about the other layers. Of course, they need their own filters and modulation options too. But I equate that to normal multitimbral stuff]
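To make the contrast concrete, here is a rough C++-style sketch of the two processing orders. All names here (Filter, renderVoicePerSample, …) are purely illustrative and have nothing to do with Renoise’s actual internals:

```cpp
#include <cstddef>
#include <vector>

// Toy one-pole lowpass, standing in for whatever filter a voice runs.
struct Filter {
    float z1 = 0.0f;                 // filter state
    float process(float x) {
        z1 += 0.2f * (x - z1);       // fixed coefficient, just for the sketch
        return z1;
    }
};

// Per-sample filtering (the current behavior described above):
// N unison samples means N filters running for every voice.
float renderVoicePerSample(const std::vector<float>& samples,
                           std::vector<Filter>& filters) {
    float out = 0.0f;
    for (std::size_t i = 0; i < samples.size(); ++i)
        out += filters[i].process(samples[i]);  // 8 filters for unison8
    return out;
}

// Per-voice filtering (the suggested norm): mix the 'oscillators'
// first, then run a single filter on the combined voice signal.
float renderVoicePerVoice(const std::vector<float>& samples,
                          Filter& filter) {
    float mix = 0.0f;
    for (float s : samples) mix += s;
    return filter.process(mix);                 // 1 filter per voice
}
```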
I don’t think this was intentional; it most likely just slipped through. The point is how much effort it takes to fix, because it probably goes down to basic structures and voice architecture, I guess. Modulation sets and processing are applied to individual samples, not to layers. Still, I have no clue how big or small the problem behind this really is. The devs will know.
Yes, exactly. The case of a supersaw with 8 layered samples, all using the same modulation set is a special case which definitely should be addressed. Already have this on my list.
The reason it’s not addressed yet is that samplers (well, the Renoise sampler) work differently from synths here: in synths you usually mix down [and cross modulate] all oscillators first, creating a voice, then apply modulation and filters to this single voice. In our system every !sample! creates a voice, because you may want to route it differently or process it individually. So you could, for example, use a different modulation set for every individual sample, which is a pretty cool thing to do in the supersaw example.
When all samples share the same modulation set, this indeed is not necessary. At least the filters should then be processed per “layer” instead of individually.
I’ve mentioned a few times before that the filters are actually the real bottleneck here. Running individual modulation sets per sample is of course also less efficient, but compared to the filter processing it’s nearly irrelevant.
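A minimal sketch of that “per layer” optimization, assuming hypothetical ModulationSet/Sample/Filter types (this is just the idea, not Renoise’s code): when several samples reference the same modulation set, mix them first and run the expensive filter once per group instead of once per sample:

```cpp
#include <map>
#include <vector>

struct ModulationSet { int id = 0; /* envelopes, LFOs, filter settings… */ };

struct Sample {
    const ModulationSet* modSet = nullptr;  // which set this sample uses
    float out = 0.0f;                       // current output value
};

struct Filter {
    float z1 = 0.0f;
    float process(float x) { z1 += 0.2f * (x - z1); return z1; }
};

// Group active samples by the modulation set they reference, then run
// the (expensive) filter once per group instead of once per sample.
float renderVoice(std::vector<Sample>& samples,
                  std::map<const ModulationSet*, Filter>& filters) {
    std::map<const ModulationSet*, float> groupMix;
    for (const auto& s : samples)
        groupMix[s.modSet] += s.out;        // mix samples sharing a set
    float voiceOut = 0.0f;
    for (auto& [modSet, mix] : groupMix)
        voiceOut += filters[modSet].process(mix);  // one filter per group
    return voiceOut;
}
```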
You can’t state such things if you don’t know how the inner code is done. I don’t assume anything here code-wise. I could guess that all functions are built up modularly and can easily be rewired to connect differently and work in a different fashion. If that is the case, then the UI is usually a small thing to change, as is moving functions around. If you know in advance that something is not going to be fully finished but you are concentrating on a specific element, I personally would design it so that I can easily change the functional behavior if the later implementation of specific features requires it. You always have to play chess like grandmaster Kasparov when you program. If you translate each hour that can be spent in a master chess match into a year, then go figure how long it takes to develop a good tool for anything (including music production).
There was no planning for this in the current release anyway. You may not be happy with that, and I would have considered it nice if at least parallel modulation and feedback had been possible, but currently none of that has been put up for debate by the devs for Renoise 3.0. If it were an option for 3.1, there would be plenty of room to consider a good design for it, and especially now, during 3.0, is the best time to look ahead at whether everything built so far will do, or needs changes under the hood to make it happen later. So that is why I frankly said that now is a good time to discuss it.
Yes, exactly. The case of a supersaw with 8 layered samples, all using the same modulation set is a special case which definitely should be addressed. Already have this on my list.
Wow, this is really good news!
I am personally not that concerned about the CPU usage, having had no such issues since I purchased my (now-aging) computer in 2009, but I know that it is an important factor for many people who are using massive amounts of plugins / voices / samples / whatnot, or just concerned about efficiency in general.
But perhaps more important than raw performance: the onus should not be on the person using the software (in the form of needlessly complicated and/or limited features) when this can be handled quite smartly in the background. I just wasn’t sure whether such a thing was going to materialize in 3.0 or not.
Does this mean that there should be more flexibility in being able to arrange order of modulation sets and fx chains?
Or some sort of routing abilities?
Well, I am afraid this is just not true!
Take the Mau5chors example Bit Arts posted here at some point. It uses one modulation set (Volume, Cut, Res) shared by 4 samples. If I play his demo song, Renoise (RC2) takes 61% CPU on my 2.2 GHz i7 quad-core. If I disable this modulation set on the four samples, CPU usage goes down to 20%!!!
So yes, the modulations take A LOT of CPU!
It’s the filters that eat up most of the CPU, not the modulation. Try deleting all modulation devices in that example but keeping the filters enabled.
I disabled not only the filters but the whole FX chain of that instrument before testing. So the CPU usage I reported is without any filters involved.
But doing it the other way around, as you suggested (all modulations disabled, as in the testing I already did, but with the whole FX chain enabled): 20.6%.
So, on my machine it is the modulation, not the FX.
Filters = instrument modulation filters. Not talking about FX chains here.
But in this case disabling the volume modulation indeed also reduces the CPU load a lot, because it reduces the overall number of playing voices by removing the release phase from the volume AHDSR as well.
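To spell out that mechanism with a hypothetical sketch (none of these names are Renoise’s): a voice with a volume envelope has to keep rendering through the release stage after note-off, while a voice without one can be freed immediately, so dropping the volume AHDSR directly lowers the number of simultaneously playing voices:

```cpp
// Hypothetical voice-lifetime check, just to illustrate the point above.
struct Voice {
    bool noteOff = false;            // key has been released
    bool hasVolumeEnvelope = false;  // volume AHDSR assigned?
    bool releaseFinished = false;    // set once the release stage ends

    bool isActive() const {
        if (!noteOff) return true;             // key still held
        if (!hasVolumeEnvelope) return false;  // can be freed right away
        return !releaseFinished;               // keep rendering the tail
    }
};
```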
Well, it is a combination of the volume and the cut and res modulations. Volume takes about 21% of CPU, cut and res together about 19%.
Here the CPU is around 28%. When I set the LP Moog filter to None in the modulation, the CPU is around 9%.
The modulation system still uses the old filter 2 types, right?
Perhaps when using the new filter 3 types (Butterworth etc.) the CPU performs better, because those filters are optimized?
Or is this just a silly suggestion?
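For some intuition on why the filter type matters so much: a Moog-style ladder runs four cascaded one-pole stages plus a nonlinearity for every single sample, several times the work of a plain one-pole. A generic textbook-style sketch (not Renoise’s actual implementation):

```cpp
#include <cmath>

// Simplified digital Moog-style ladder lowpass, for illustration only.
struct LadderLP {
    float s[4] = {0.0f, 0.0f, 0.0f, 0.0f};  // four stage states
    float g = 0.5f;   // cutoff coefficient (precomputed elsewhere)
    float k = 3.0f;   // resonance feedback amount

    float process(float in) {
        float x = std::tanh(in - k * s[3]);  // nonlinear feedback path
        for (int i = 0; i < 4; ++i) {        // four one-poles in series
            s[i] += g * (x - s[i]);
            x = s[i];
        }
        return x;
    }
};
```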
So…now is the moment to bitch again?
Has this actually been addressed in 3.0 final?
Nah, wait a couple of weeks until Taktik gets back from his holiday. Otherwise your post could get snowed under the pile of time-oblivion.
Well, we certainly don’t want that to happen. Thanks for the heads up!
Yes, exactly. The case of a supersaw with 8 layered samples, all using the same modulation set is a special case which definitely should be addressed. Already have this on my list.
The reason it’s not addressed yet is that samplers (well, the Renoise sampler) work differently from synths here: in synths you usually mix down [and cross modulate] all oscillators first, creating a voice, then apply modulation and filters to this single voice. In our system every !sample! creates a voice, because you may want to route it differently or process it individually. So you could, for example, use a different modulation set for every individual sample, which is a pretty cool thing to do in the supersaw example.
When all samples share the same modulation set, this indeed is not necessary. At least the filters should then be processed per “layer” instead of individually.
I’ve mentioned a few times before that the filters are actually the real bottleneck here. Running individual modulation sets per sample is of course also less efficient, but compared to the filter processing it’s nearly irrelevant.
Seeing that this has not been solved in 3.1 yet, I would like to mention (again) that the introduction of a “sample group” could help with this redundancy issue as well, since you could then assign the modulation set to the group instead of to the individual samples within it.
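Something along these lines, with purely hypothetical structures just to illustrate the idea: the modulation set reference would move from the individual sample to the group, so all members share it by construction and the engine knows up front that it can process modulation and filters once per group:

```cpp
#include <string>
#include <vector>

struct ModulationSet { std::string name; };

struct Sample { std::string file; };

// Hypothetical sample group: the modulation set is assigned once, on
// the group, instead of on each sample inside it.
struct SampleGroup {
    std::vector<Sample> samples;          // e.g. the 8 supersaw layers
    ModulationSet* modulation = nullptr;  // shared by all group members
};
```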