Processing Predicted Silence In Another Thread

I just saw the thread about using cores efficiently.

Maybe my post here is of no use, or Renoise already does this internally; that's stuff I cannot know about, so I'm guessing a lot here.

I asked myself what Renoise could do in the following scenario:

(to make it easier to picture, I'll say core instead of thread)

core 1: needs to produce A first, then it can go on with the dependent block B (like: generator machine (A) => effect machine (B) => output)
core 2: has nothing to do

Could Renoise do the following?

core 1: produce A
core 2: produce B with silence as input (call the result silence_B)

  
when core 1 has produced A:
    if A is silence and silence_B is complete:
        output = silence_B
    else:
        stop producing silence_B
        produce B with input A
        output = B
  

PS: I just realized that producing silence_B would need a copy of the machine's context/state, so the idea is most probably not possible with closed-source machines/plugins.
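
To make the idea more concrete, here is a minimal sketch in C++. All names (is_silent, run_generator, run_effect, the block size, the threshold) are made up by me; Renoise's real internals aren't public, and as the PS above says, the speculative run would need a copy of the effect's state:

#include <cmath>
#include <future>
#include <vector>

using Block = std::vector<float>;

// silence check: every sample below a tiny threshold (threshold is my choice)
bool is_silent(const Block& b)
{
    for (float s : b)
        if (std::fabs(s) > 1e-6f)
            return false;
    return true;
}

// stubs standing in for the two machines
Block run_generator()                 // machine A (e.g. a synth)
{
    return Block(256, 0.0f);          // stub: a silent block
}

Block run_effect(const Block& in)     // machine B (e.g. an effect), stateful!
{
    return in;                        // stub: pass-through
}

Block process_block()
{
    // core 2: speculatively run the effect with a silent input block
    std::future<Block> silence_b = std::async(std::launch::async, [] {
        return run_effect(Block(256, 0.0f));
    });

    // core 1: produce A as usual
    Block a = run_generator();

    if (is_silent(a))
        return silence_b.get();       // prediction was right: reuse silence_B

    // prediction was wrong: discard silence_B and process B with the real
    // input. this only works if the speculative run used a copy of the
    // effect's state (or the effect can be rewound), since it advanced it.
    silence_b.wait();
    return run_effect(a);
}

std::async is just the simplest way to show the hand-off to a second core; a real audio engine would use its own worker threads rather than spawning a task per block.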

You aim for plugin or instrument inactivity?
I guess the "Autosuspend plugin when silent" option should already reduce any load that isn't required. (And it seems to be the only way to somewhat control the CPU consumption.)
Though some plugins that rely on time schedules and positioning behave erratically when they are autosuspended. (Plugins that synchronize their timeline to that of Renoise.)
It is not just about silence, but also about what the plugin is doing and for which reasons. And you are right that this is not something Renoise can predict. Certainly not based on silence alone.

Plugin inactivity. (And instrument inactivity, if there's no way to be sure whether it will render silence or not.)

Autosuspend will stop processing a plugin (or some signal source) that's producing silence. (Is this right?)

My idea: guessing that the input of a Renoise effect could be silence.

I think silence can indicate two different desired reactions, both of which can improve overall performance:

  1. Omit an operation
  2. Parallelize two operations

So if Renoise effects had:

  renoise_effect->save_state()
  renoise_effect->rewind_to_saved_state()

then it would be possible to parallelize the processing (instead of waiting for the input) and to rewind the effect if the input was assumed to be silence but was not.
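
A sketch of how those two calls might be used. Only save_state and rewind_to_saved_state are the names proposed above; Effect, Block and the rest are my stand-ins:

#include <cmath>
#include <cstddef>
#include <vector>

using Block = std::vector<float>;

bool is_silent(const Block& b)
{
    for (float s : b)
        if (std::fabs(s) > 1e-6f)
            return false;
    return true;
}

// hypothetical effect interface carrying the two proposed calls
struct Effect
{
    virtual void save_state() = 0;             // snapshot internal state
    virtual bool rewind_to_saved_state() = 0;  // restore it; false = can't rewind
    virtual Block process(const Block& in) = 0;
    virtual ~Effect() = default;
};

// shown single-threaded for clarity; in the real scheme the speculative
// process() call would run on a second core while the actual input is
// still being produced
Block speculate(Effect& fx, std::size_t block_len, const Block& actual_input)
{
    fx.save_state();                                       // checkpoint first
    Block silence_b = fx.process(Block(block_len, 0.0f));  // speculative run

    if (is_silent(actual_input))
        return silence_b;             // lucky case: the result is valid

    // unlucky case: undo the speculative run, then process the real input
    // (an effect that returns false here should never be speculated on)
    fx.rewind_to_saved_state();
    return fx.process(actual_input);
}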

example:

  
synthie.dll => native renoise reverb => output


without predicting:
-------------------

C-5
...
OFF
... here Renoise processes synthie.dll and then uses its output as input for the reverb, in series, all on only one core
... here
... here
D-5
...
...
...
OFF
... here
... here


with predicting:
----------------

C-5
...
OFF
... here Renoise processes synthie.dll on core 1, and on core (thread) 2 it predicts silence as input for the reverb, processes
    the reverb, and when synthie.dll is ready: checks whether the prediction was true, etc. (see the code snippet in my first post)
... here
... here
D-5
...
...
...
OFF
... here
... here
  

“here” marks a line (a block of samples) where I expect a performance improvement due to silence from the plugin, when predicting is active.

The overall processing would of course still happen at all lines; there would even be slightly more (but not too many/big) operations, just spread across more cores.

PS: vV: At first I was not sure whether a different processing of plugins would be involved, so I too thought that a method like this could run into the third-party plugin problems you mentioned. But if the only machines that change behaviour (by rewinding) are Renoise effects, then there will be none of these problems.

Autosuspend is a method where Renoise asks the plugin to cease all possible activity, based on the fact that no output activity is measured.
Plugins will then go idle and use very low CPU resources. This means that CPU time can be spent on other things.
Taktik explained once how multicore CPUs are used and divided, but there are also limitations in how these CPU cores or threads can be used.
I suspect that your described methods are already being used to some extent where possible, but I don't think you can simply dump processes from one core to another if that would be more effective.

Using a core to predict silence, why would that be useful? Renoise simply asks the plugin to stop if no output comes from it.
Yet if the autosuspend option is not enabled, Renoise doesn't touch anything. If you have a plugin with internal LFOs running in sync with the song, and you want to enable and disable it at strategic points in the song, you don't want autosuspend to kick in for these plugins.

Ok.

Well, I assumed that (unless thread affinity is used) the OS handles the cores transparently. So I just tried to find out how the number of threads could be increased when doing a certain operation (namely processing two machines, where the second one depends on the output of the first one).

If done properly, without incurring overhead or whatever other multi-threading issues, doing A in thread 1 and B in thread 2 (instead of A and B both in thread 1) should increase performance, AFAIK.

Ok. Well, I mean specifically the possible 2-threads-instead-of-1-thread improvement.

Machines A and B are processed in series, since B depends on the output of A. If B (e.g. a reverb) knew the output of A (a synth), both machines could be processed in parallel.
For a given block to be processed: does the reverb know the output of the synth? No, it can't know it, but it can guess it. Guessing (predicting) silence is useful here, because machines often produce silence (a naive guessing heuristic is sketched after the conclusion below).

Prediction was true:

  • Great, B is ready earlier, and the overhead was no worse than for any other multi-threaded signal chain.

Prediction was false:

  • It ran on a free core (one not used by other threads)? Then no CPU time was wasted.
  • It ran on a busy core? Then the performance of the other operations decreased.

Conclusion: the overall performance can increase.
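
How could Renoise guess that a machine's next block will be silent? A naive heuristic (purely my assumption, nothing Renoise actually does) is to predict silence once the last few blocks were silent:

#include <cmath>
#include <cstddef>
#include <vector>

using Block = std::vector<float>;

// predict that the next block will be silent once the last `needed` blocks
// were silent; any non-silent block resets the run
class SilencePredictor
{
public:
    explicit SilencePredictor(std::size_t needed = 4) : needed_(needed) {}

    bool predict_silence() const { return silent_run_ >= needed_; }

    void observe(const Block& b)  // call with each block the machine produced
    {
        bool silent = true;
        for (float s : b)
            if (std::fabs(s) > 1e-6f) { silent = false; break; }
        silent_run_ = silent ? silent_run_ + 1 : 0;
    }

private:
    std::size_t needed_;
    std::size_t silent_run_ = 0;
};

With such a heuristic, mispredictions would mostly happen right at note-ons, exactly the moments where the output stops being silent.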

PS: I've just run Renoise and had a look at autosuspend using Synth1, and I want to add some thoughts:

Silencepredict and autosuspend would be very different.
As you described, autosuspend stops (among other things) processing the chain (after a number of blocks have been checked and found silent).
Silencepredict cannot replace autosuspend. Rather, it would try, for each audio block (!), to parallelize something that would otherwise be serial.

The outcome would look like this:

  
no silencepredicting:
thread 1: 5 ms loud plugin, then 3 ms reverb => 8 ms

silencepredicting, unlucky case:
thread 1: 5 ms loud plugin, then 3 ms reverb => 8 ms
thread 2: 3 ms reverb (dismissed)

silencepredicting, lucky case:
thread 1: 5 ms silent plugin
thread 2: 3 ms reverb (used) => 5 ms
  

As you can see, core usage would increase while time usage would decrease.
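
To put a rough number on that, assuming the 5 ms / 3 ms figures above and writing p for the fraction of blocks where the silence prediction turns out to be true (purely illustrative):

expected wall-clock time per block = p * 5 ms + (1 - p) * 8 ms
                                   = 8 ms - p * 3 ms

e.g. p = 0.5:  8 - 0.5 * 3 = 6.5 ms  (vs. a constant 8 ms without predicting)

This is wall-clock time only; in the unlucky case the 3 ms spent on thread 2 is still wasted CPU work if that core was needed for something else, as noted earlier.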

Also, for the unlucky case a rewind function would be necessary.

  
clipping_effect->rewind()
{
    // probably no code in here: a clipper has no memory of past samples
    return ok;
}

delay_effect->rewind()
{
    // lots of code
    // ...
    return ok;

    // or

    return cant_rewind;
}
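
For a concrete picture of why the delay is the hard case, here is how it might implement the save/rewind pair from above by snapshotting its delay line. This is a sketch with made-up types; a real implementation would presumably avoid copying the whole buffer every block:

#include <cstddef>
#include <vector>

using Block = std::vector<float>;

class DelayEffect
{
public:
    explicit DelayEffect(std::size_t delay_samples)
        : buffer_(delay_samples, 0.0f) {}

    void save_state()
    {
        saved_buffer_ = buffer_;   // snapshot the whole delay line
        saved_pos_    = pos_;
    }

    bool rewind_to_saved_state()
    {
        buffer_ = saved_buffer_;   // restore the snapshot
        pos_    = saved_pos_;
        return true;               // a delay can always rewind this way
    }

    Block process(const Block& in)
    {
        Block out(in.size());
        for (std::size_t i = 0; i < in.size(); ++i)
        {
            out[i]        = buffer_[pos_];  // read the delayed sample
            buffer_[pos_] = in[i];          // store the incoming one
            pos_          = (pos_ + 1) % buffer_.size();
        }
        return out;
    }

private:
    Block buffer_, saved_buffer_;
    std::size_t pos_ = 0, saved_pos_ = 0;
};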