Brainstorming how to implement a parallel container mechanism

Maybe the best approach for this kind of parallel routing would be to use the current instrument structure. The automation device could also act as a generator placeholder, so that the audio is generated at this position, similar to dragging the generator itself to that position.

Benefits:

  • Already accessible through the Lua API, no breaking changes
  • Redux wouldn’t be affected, since Redux has no automation device
  • We could use the parallel chains in the instrument section for various routings
  • The whole VSTi setup would be saveable and easily swappable, for example in multiple projects (e.g. setting up a guitar with proper distortion fx)
  • Layering of generators would be possible
  • Makes the instrument preset system much more attractive to use

Drawbacks:

  • Effect routing is always bound to the instrument

Effort:

  • Refactoring of the automation device
  • Make MIDI / pattern input available in each chain of the instrument structure

If you have any ideas on how to implement parallel containers from a technical perspective, please continue…


What if the line input device could accept audio routing internally from any VSTi? Then any xrni fx section could be as deep a parallel container as you wished. Seems like a simple enough solution.

What I would ideally wish to see would be a doofer that functions as a collapsible/expandable instrument fx section, with options for routing audio and modulation in and out. Then the sky would indeed be the limit. I’m ignorant of the practical and technical concerns here, however


This seems like a simple solution at first glance, but I think it would lead to priority issues during processing. I imagine that the instruments are processed by their position in the tracks, from top left to bottom right. Now imagine you would like to route audio out of an instrument on track #2 into the fx chains of instrument 1, which is already on track #1. Your idea could be a good approach, but it first needs to be thought through more.
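The ordering problem can be sketched as a dependency graph: once cross-instrument routing exists, the engine can no longer process strictly in track order but has to derive an order from the routing itself, and it must reject cycles. A toy illustration in Python (assumed model, not actual Renoise internals):

```python
# Hypothetical sketch of the ordering problem, not Renoise code.
# Routing instrument 2 (on track #2) into the fx chain of instrument 1
# (on track #1) means inst2 must render BEFORE inst1, contradicting the
# assumed left-to-right, top-to-bottom processing order.
from graphlib import TopologicalSorter, CycleError

# node -> set of nodes that must be processed before it
deps = {
    "inst1_fx": {"inst2_out"},  # inst1's fx consume inst2's audio
    "inst2_out": set(),
}
print(list(TopologicalSorter(deps).static_order()))
# -> ['inst2_out', 'inst1_fx']: the later track renders first

# Adding the reverse route creates a feedback loop the engine
# would have to detect and refuse:
deps["inst2_out"] = {"inst1_fx"}
try:
    list(TopologicalSorter(deps).static_order())
except CycleError:
    print("feedback loop detected")
```

This is why "can't create feedback loops" and "respect the processing graph" show up as hard requirements later in the thread.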

If instrument 1 had only fx but no generator, it would kind of abuse the instrument structure and would be incomplete. On the other hand, maybe there could be a subset of an instrument, an “fx container” only. So you could convert an instrument to an fx container, now containing only the fx chain page. This could also be interesting for Redux, since it can already process audio through the line-in device; here it could actually also be a Redux fx plugin. I think the VST3 format allows a single plugin to be loadable as both an fx and a generator.

But this idea still looks somewhat incoherent; the units are no longer nicely encapsulated, and instead there is now wild routing going on.

This thread is supposed to present ideas that are already technically coherent. Please elaborate in a bit more detail before you post. Try to imagine the processing paths in Renoise, and distinguish between audio and MIDI/note/meta signals, etc. Also think of Redux, which almost completely mirrors Renoise’s instrument structure.

Maybe only post ideas which:

  • can’t create feedback loops
  • respect the processing graph and cannot lead to decision conflicts
  • encapsulate processing units nicely
  • do not feel like a spaghetti structure
  • do not feel like a workaround

Some small thoughts about an aspect that I think is very important.

One thing that Renoise doesn’t need more of is “hidden workflows”. Coming from S1, even the splitter device, which seems very simple and obvious, doesn’t seem too easy for beginners to understand. You gotta take into account that people are pretty stupid/lazy in general. Even with some stuff in Renoise, I tend to forget now and then how to achieve it if I haven’t used the feature/“workaround” for a long time.

From this standpoint alone, my spontaneous suggestion would be a “Doofer” with two lanes and one gain knob for each lane. Of course, one could be placed inside another. Later on, split modes like M/S and frequency could be added for convenience.

I don’t claim this is the technically correct way to do it, but it seems most accessible and comprehensible to me.

You had some use case objections to this, if I remember correctly?


Maybe you are right. Yes, that’s what I thought at the beginning, too. It would also be just like Ableton then. Though if I understand it correctly (maybe I am wrong), it would mess with the API structure?

I think doofer contents simply can’t be accessed via the API at all, is this correct? Maybe then the whole parallel container could be invisible to the API, too? But then you can’t automate the parameters anymore; it would also require macro parameters plus all the mix amounts. I think the doofer’s contents can stay invisible, hence the macro parameters, which are mapped by taste.

Also, a container currently lacks sidechain support. This also seems to be a conceptual problem, since, again, you can’t access the contents directly.

Imagine now the contents were accessible; then they would have to be reflected in the Lua API, too. But how do you manage that, using those numeric indices and the very linear structure? A better solution would then be a tree structure with nodes, no numeric indices at all, but instead pointers to children, siblings, parents, etc. Now the API is totally broken and no tool would work anymore… I hope I have described the problem that I see here properly.
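The tree-shaped API being worried about here might look something like this toy model (Python, all names invented for illustration; nothing here is the actual Renoise API):

```python
# Illustrative sketch only: a tree-shaped device API with pointer-style
# navigation, as opposed to Renoise's flat numeric device lists.

class DeviceNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []  # contained devices (e.g. inside a container)
        if parent:
            parent.children.append(self)

    def siblings(self):
        if not self.parent:
            return []
        return [c for c in self.parent.children if c is not self]

chain = DeviceNode("track_dsp_chain")
container = DeviceNode("parallel_container", parent=chain)
eq = DeviceNode("eq10", parent=container)
comp = DeviceNode("compressor", parent=container)

# Navigation is via parent/children pointers, not stable numeric indices --
# which is exactly why tools that address devices[2] etc. would break.
print(comp.siblings()[0].name)  # -> "eq10"
print(comp.parent.name)         # -> "parallel_container"
```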

If I compare the suggestion from post #1 with yours, #1 seems much easier to implement and wouldn’t break anything.

Parallel container as container device with all parameters accessible:

Benefits:

  • Fits into dsp chain nicely, most intuitive
  • Similar to Ableton, Bitwig and so on

Drawbacks:

  • Breaks the API, if the parameters of each contained device should be available for automation and API
  • API changes most certainly would break any available tool
  • Huge effort

Parallel container as container device with parameters accessible via macros:

Benefits:

  • Fits into dsp chain nicely, most intuitive
  • Medium effort; seems to be an extended version of the current doofer device, but new processing logic would need to be implemented, too

Drawbacks:

  • Only very few parameters can be automated
  • No sidechain at all

I don’t quite see how such a structure would be more difficult to implement or make accessible in the API. The structure would be very similar to the one dealing with tracks and group tracks, with similar API calls. All devices would be accessible in a flat DocumentList in track.devices imo, just like tracks are in a song even when they’re part of a group. Hence all parameters/automation would be accessible for all devices.

Regarding what would be happening under the hood (this is where the “problems are solved”), I assume the signal flow is just stored and represented by structs within structs that have pointers to the devices. Such a struct is a container which is either the track dsp chain itself or a splitter. Some slight overhead for the “tree” audio rendering and ad-hoc updating of the tree, I suppose.


Can you give me an example in pseudo code? Do you mean something like this:

renoise.song().tracks[].devices[].devices(lanes)[].devices[] ?

This only if the device type is a container?

Or do you mean that the lanes of the container are actual tracks? But then, which indices do they have? Also the track object has a lot more stuff than just dsp devices.

All devices would be flat in renoise.song().tracks[].devices[], just like today. If the second device is a splitter, the structure is interpretable via renoise.song().tracks[].devices[2].members_1 and _2

This together with add/remove-methods is all you need to make whatever abstraction you need even in native LUA. Similar to how tracks in relation to groups are available today.

Well, it wouldn’t be wise to limit the container to just two members.

I think this then would be a proper structure:

renoise.song().tracks[].devices[].device_chains[] ( .devices[] )

Similar to the instrument chains. Maybe once the parallel container device was opened, it would show exactly the same view as the instrument dsp chains, just for that container device instead. A dedicated container device wouldn’t even really be necessary: as soon as device_chains exist on a device, the chains would be processed first.

I think you misunderstood. members_1 and members_2 are DocumentLists of the effects for lane A and lane B. As long as a splitter can contain a splitter, the signal flow is very flexible with just two lanes.

I’m assuming the absolutely most common use cases will be parallel compression/dist/etc, m/s-processing… and delay design. Maybe you’re thinking about much more complex routings?
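The nesting argument can be shown with a toy model (purely illustrative, nothing Renoise-specific): two-lane splitters compose into any number of parallel lanes.

```python
# A two-lane "splitter" runs both lanes on the same input and sums the
# results. Nesting one splitter inside another yields three (or more)
# effective parallel lanes.
def splitter(lane_a, lane_b):
    return lambda x: lane_a(x) + lane_b(x)

dry = lambda x: x
half = lambda x: 0.5 * x
quarter = lambda x: 0.25 * x

three_lanes = splitter(dry, splitter(half, quarter))
print(three_lanes(1.0))  # 1.0 + 0.5 + 0.25 = 1.75
```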

Yeah, but container-in-container is the opposite of convenient. That’s why I’d want as many lanes as you like, just as in Ableton and Bitwig…

Sure, but I’m also imagining the GUI. I’m not fond of the idea of a separate frameview for this, but rather just something very similar to the doofer. (could have a vertical “vb:switch” to select lane).

So I guess what I’m suggesting is just lanes in the doofer.

I haven’t used ableton, but I’m quite familiar with bitwig… are we talking “modular canvas” in both cases?

Oh you have Bitwig now? Nice!

Look how it’s done in Bitwig and Ableton. This is the way.


So basically “doofers” with parallel lanes(?). I believe that in Ableton you can have a maximum of 12 lanes? And in Bitwig even more.

The DSP chain has room for a device having a vertical switcher with maybe six buttons (“A-F”). In case more are needed, maybe it’s ok to put a splitter inside a splitter?

It seems a pretty clear concept. (And the API+internal signal flow doesn’t seem like a big hurdle… famous words).

There could also be a small gain slider next to each switch item, for convenience. Or even mute and solo buttons.


Mute, solo, gain or mix knob, assignable macros would be a must as well, imo


Why assignable macros if everything is normally automatable? (Or you can put the splitter inside a doofer if you need the controls)

Because each lane can contain multiple devices, and if the architecture is comparable to a doofer, the only way to interact with nested devices’ parameters is through assigning macros, afaik

I.e. you can’t automate contained device parameters within a doofer except through an assigned macro

…Still, I find this idea more attractive:

There is a container device with a single “open” button and macro controls. If you press the button, it opens a dsp chain view. For each chain, you can choose whether it receives direct input or not, so there is a mute button for each lane. This view could appear as a new tab above. The channel selectors of the output devices in each chain would of course be removed. You can also use the send device as you like.

Maybe then even a lot of code could be reused?

The Ableton / Bitwig approach maybe isn’t conceptually the best ever created. Renoise already has a very comfortable view for parallel processing. Reuse it.

The specific “track fx” tab view could also contain a select box and a prev/next stepper, so it could still be there even if no container device was selected; it would then show the last selected one, or the first one.


Yes, not a bad idea, but minus the tab, imo. Consider the LFO and its “Ext. Editor” button. It brings up a detachable view without any need for a tab. It’s easy to envision a “DSP chains” device doing something similar, and macro rotaries would fit well in the dsp gui if needed.


Yes, if a doofer/splitter could essentially function as a modular instrument fx section, but with potential for audio I/O and modulation input/output through macros, that would be ideal, quite powerful and game-changing. I like the idea of a pop-up editor window comparable to the LFO external editor. That would be elegant enough, and functional.

Presumably some code could be reused.