Process slicing of a single api call?

Hi,

Is there any way to split up a single API call into processing slices, so the script can't run into the scripting timeout? More precisely, reading a plugin's active preset data can take a really long time for some plugins, and if architecture bridging is involved on top of that, it gets even slower. Some plugins are so slow at providing their inline preset data that the Renoise API runs into a timeout. You get the same effect when loading a project that contains such a plugin. Do you have an idea how to solve this problem?

Is there a way to start a separate process, similar to a web worker?

If not, could the Renoise API be clever enough to pause the timeout countdown while an API call is in progress? The current behavior seems unfairly weighted, since in the end you can't predict how long a single API call will take.

ProcessSlicer.
xrnx/tools/com.renoise.ExampleToolSlicedProcess.xrnx at master · renoise/xrnx · GitHub

Thanks, but that is not what I meant, and not what I wrote. I am using the ProcessSlicer, of course; you have to for long-running processing. But I was writing about a single API call that takes many seconds. The ProcessSlicer won't split that up into pieces, as far as I understand it, or can it? The problem also seems to be the way the timeout in Renoise is calculated: it should subtract the time each API call itself takes from the measured delta, or something along those lines.
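To illustrate the limitation being described here, this is a rough sketch of a ProcessSlicer-style coroutine loop (following the pattern from the linked example tool; the surrounding device-collection code is hypothetical). The key point is that `coroutine.yield()` can only run *between* Lua statements, so a single blocking property access cannot itself be sliced:

```lua
-- Hypothetical sketch of a sliced processing loop. coroutine.yield()
-- hands control back to Renoise only *between* iterations -- the slow
-- active_preset_data access on each iteration still blocks as one
-- indivisible API call.
local function process_devices(devices)
  for _, device in ipairs(devices) do
    -- This single call may block for seconds; it cannot be sliced.
    local data = device.active_preset_data
    -- ...work with data here...
    coroutine.yield()  -- timeout relief happens only at this point
  end
end
```

So slicing helps across many calls, but not within one.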

Please break down what the "single API call" is, so we can get a fuller picture.

I wrote that in the first post already, but I guess it wasn't clear enough. I think the slowdown simply happens at a line like this:

local activePresetData = device.active_preset_data

So it is simply accessing active_preset_data… Now that I think about it, I accessed it multiple times directly, instead of saving it into a buffer variable first. Caching it might already speed things up by a factor of 3… I have to test that.
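The caching idea above could look like this minimal sketch (the device selection is an assumption for illustration; the point is to pay the slow property access only once):

```lua
-- Hypothetical sketch: read the potentially slow property once and
-- reuse the cached value, instead of hitting the API on every access.
local device = renoise.song().selected_device  -- assumed selection
local preset_data = device.active_preset_data  -- the single slow call
-- From here on, only touch the local copy:
if preset_data and #preset_data > 0 then
  -- parse or inspect the cached preset XML string here
end
```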

Still, Renoise or some plugins are very slow when providing their preset data. I mean around 10 seconds on a high-speed ARM powerhouse machine. So this single API call alone could already outrun the Lua scripting timeout, which makes no sense.

My point here was: either there should be a way to split even a single call into process slices (which I doubt is possible, due to the single-threaded nature of the scripting engine), or, conceptually, the timeout measurement should be extended by the time any API call takes.