New Tool (3.0,3.1): xStream

██╗  ██╗███████╗████████╗██████╗ ███████╗ █████╗ ███╗   ███╗
╚██╗██╔╝██╔════╝╚══██╔══╝██╔══██╗██╔════╝██╔══██╗████╗ ████║
 ╚███╔╝ ███████╗   ██║   ██████╔╝█████╗  ███████║██╔████╔██║
 ██╔██╗ ╚════██║   ██║   ██╔══██╗██╔══╝  ██╔══██║██║╚██╔╝██║
██╔╝ ██╗███████║   ██║   ██║  ██║███████╗██║  ██║██║ ╚═╝ ██║
╚═╝  ╚═╝╚══════╝   ╚═╝   ╚═╝  ╚═╝╚══════╝╚═╝  ╚═╝╚═╝     ╚═╝ v1.55

xStream is back!

Since the last version, which I released almost a year ago, I’ve used this tool - but not as much as I’ve wanted to.
Too often, the party got spoiled by a number of annoying, recurring issues - all of which should now be gone with this release (well, hopefully).

When it comes to this, I’d like to thank pat for reporting everything he came across in just a couple of days of xStreaming :wink:

Also, some much-needed internal reorganization had to happen. Joule and others have made it clear that multi-track operation is the future for this tool.
And considering how much extra work this would involve, xStream development was put on the back burner for a while.

But now, the tool is back, and I dare say, better than ever. It’s pretty much the same featureset as the previous release, but with a more solid foundation.

Download from the tool page

http://www.renoise.com/tools/xstream

As the featureset has stabilized since that “sprint” of last year, I’ve also had time to update the documentation.
It’s now spread across topics, with a better introduction to xStream coding than before, and generally just better organized.

You can check out the new documentation here (on GitHub). Some of the pictures are a little outdated; I’m working on fixing that.
https://github.com/renoise/xrnx/tree/master/Tools/com.renoise.xStream.xrnx

Changelog for this release:

- Core: refactored several internal classes
- Core: more solid, simpler streaming implementation
- Fixed: loading favorites.xml was broken
- Fixed: selecting [no argument] would throw an error
- Fixed: table constants (e.g. EMPTY_XLINE) are now returned as a copy
- Fixed: read-only value arguments are not MIDI assignable
- Fixed: error when trying to create argument with just one "item"
- Fixed: failure to export presets when arguments are tabbed
- Fixed: setting custom userdata folder is now applied immediately
- Added: ability to migrate userdata to a custom folder
- Added: xLFO class + demonstration model
- Added: RandomScale model 
- Added: Updated documentation
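A note on the table-constant fix above: Lua tables are passed by reference, so handing out a shared constant lets any caller mutate it for everyone else. Here is a minimal sketch of the copy approach (the actual fields of EMPTY_XLINE are assumptions here, not the real definition):

```lua
-- Tables are reference types in Lua: returning a shared constant
-- means one model's edits leak into every other user of it.
-- A simple recursive copy avoids that.
local EMPTY_XLINE = { note_columns = {}, effect_columns = {} }

local function deep_copy(t)
  local copy = {}
  for k, v in pairs(t) do
    copy[k] = (type(v) == "table") and deep_copy(v) or v
  end
  return copy
end

local line = deep_copy(EMPTY_XLINE)
line.note_columns[1] = { note_string = "C-4" }
-- the shared constant is untouched:
assert(#EMPTY_XLINE.note_columns == 0)
```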

So great to see an update! I’m trying the Euclidean Rhythms 1.2 model and it’s awesome :slight_smile:

I really like the preset system. With this model, I notice it would be helpful to be able to copy/paste the individual abcde tabs. I guess these are ad hoc for the Euclidean model, and that it would be difficult to achieve. How are they even defined? (by a “tabs” variable in the sandbox?)

Was switching models on the fly while running through patterns in playback mode, also with ‘auto-clone patterns’ enabled, and got this:

'C:\Users\pluge\AppData\Roaming\Renoise\V3.1.0\Scripts\Tools\com.renoise.xStream.xrnx\main.lua' failed in one of its notifiers.
The notifier will be disabled to prevent further errors.
Please contact the author (danoise [bjorn.nesby@gmail.com]) for assistance...

std::logic_error: 'ViewBuilder: invalid index for popup: '0'. value must be [1 - 4].'

stack traceback:
  [C]: in function 'popup'
  .\source/xStreamUIArgsPanel.lua:348: in function 'build_args'
  .\source/xStreamUI.lua:1479: in function 'on_idle'
  .\source/xStream.lua:352: in function 'on_idle'
  .\source/xStream.lua:202: in function <.\source/xStream.lua:201>

How are they even defined? (by a “tabs” variable in the sandbox?)

Haha, guess it’s time to check the documentation:

When there are too many arguments to fit on the screen, you can organize them in a tabbed interface, simply by prefixing the name with the tab name.

For example, “voice1.volume” and “voice2.volume” will create two tabs, labelled voice1 and voice2, and add a volume argument inside each one.

Behind the scenes, the tab name and argument name are indeed kept separate. But from the main method, such arguments can be accessed as a table structure.

-- accessing a tab with variable name
local tab_name = "voice1"
print(args[tab_name].pulses)

Especially for something like the euclidean model, this has simplified things a lot.
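To make the name-prefixing idea concrete, here is a hypothetical sketch of how “tab.argument” style names could be resolved into nested tables (this is an illustration of the concept, not xStream’s actual implementation):

```lua
-- Sketch: split "voice1.volume"-style names into tab + argument tables.
-- (Hypothetical helper; xStream does this internally, API may differ.)
local function build_args_table(flat_args)
  local args = {}
  for name, value in pairs(flat_args) do
    local tab, key = name:match("^([%w_]+)%.([%w_]+)$")
    if tab then
      args[tab] = args[tab] or {}  -- create the tab table on first use
      args[tab][key] = value
    else
      args[name] = value           -- un-prefixed arguments stay top-level
    end
  end
  return args
end

local args = build_args_table({
  ["voice1.volume"] = 0.8,
  ["voice2.volume"] = 0.5,
  steps = 16,
})
print(args.voice1.volume) -- 0.8
```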

And you’re right, presets deal with the entire model. At least, that’s true for newly saved presets - because, if you add additional arguments to a model, these will only be saved along with a newly made preset. If you load an older preset, it will still work, but obviously won’t be able to update the newly added arguments (the scripting console even tells you if this is the case).

So, let’s say that you were to create a special preset manually, by editing the model .lua file… then you could load that preset and it would apply only the specified values. Of course, this is far from an optimal solution, but it should work…

Was switching models on the fly while running through patterns in playback mode, also with ‘auto-clone patterns’ enabled, and got this:

Pretty sure the auto-clone tool isn’t to blame :slight_smile:

I can’t replicate it here … perhaps you remember which model caused the error? Or are you able to repeat it?

Edit: OK, I believe “ChordMemory” is the culprit, as it reads directly from the pattern and updates arguments.

And you’re right, presets deal with the entire model. At least, that’s true for newly saved presets - because, if you add additional arguments to a model, these will only be saved along with a newly made preset. If you load an older preset, it will still work, but obviously won’t be able to update the newly added arguments (the scripting console even tells you if this is the case).

Thanks for clarifying! I guess what I’m waiting for then is stacked models, and then handling my need by stacking single-layer euclideans. It’s the combination of trial’n’error + preset system that I’m after.

Alright - I just updated the tool page with v1.57.

- Fixed: error when updating view with out-of-range value #99
- Fixed: error when trying to select data from editor popup #100
- Fixed: error is thrown when entering "return" into main method #97
- Fixed: model doesn't work when last line is comment #95
- Fixed: expecting "models" to be present in custom user-folder #87
- Fixed: favorite icons and preset highlighting (got broken in 1.55)
- Fixed: ChordMemory model had a few flaws
- Added: "Apply to Line", for quick single-line output
- Added: additional keyboard shortcuts and midi mappings 
- Changed: more compact, cleaned up GUI

Seems to work all right, but then I did refactor things a bit more… there might be new bugs lurking?!

The GUI was tweaked too. I got sort of triggered by joule explaining how to remove the focus border around the textfield, and I also made some other things cleaner and more compact.

Looks like this now:
[attachment: v1.57-expanded.png]

A bit retro B)

Got this when installing:

'C:\Users\pluge\AppData\Roaming\Renoise\V3.1.0\Scripts\Tools\com.renoise.xStream.xrnx\main.lua' failed in one of its notifiers.
The notifier will be disabled to prevent further errors.
Please contact the author (danoise [bjorn.nesby@gmail.com]) for assistance...

main.lua:155: attempt to index field 'preferences' (a nil value)

stack traceback:
  main.lua:155: in function <main.lua:153>

Opening the tool after install gives:

'C:\Users\pluge\AppData\Roaming\Renoise\V3.1.0\Scripts\Tools\com.renoise.xStream.xrnx' failed to execute in one of its menu entry functions.
Please contact the author (danoise [bjorn.nesby@gmail.com]) for assistance...

.\source/xStream.lua:42: attempt to index field 'prefs' (a nil value)

stack traceback:
  .\source/xStream.lua:42: in function <.\source/xStream.lua:28>
  [C]: in function 'xStream'
  main.lua:116: in function 'show'
  main.lua:142: in function <main.lua:141>

Got this when installing:

Opening the tool after install gives:

I’m not getting that error, but maybe I have an idea - try replacing main.lua with this version?

[attachment: main.lua]

I’m not getting that error, but maybe I have an idea - try replacing main.lua with this version?

[attachment: main.lua]

that seems to have fixed it :slight_smile:

that seems to have fixed it :slight_smile:

Cool - but also a bit “scary”, because I didn’t actually fix any errors, just left out the whole keybinding/midi-mapping bits. Which is 100% working here.

Seems I’m running against some internal limitation of the lua runtime here. I’ve been known to do that…

Did you know that our Lua engine allows you to create no more than 300 local variables per function? Neither did I, until recently B)
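For the curious, the limit can be probed empirically. This sketch compiles ever-larger chunks until the compiler refuses (plain Lua; `loadstring` is the 5.1 name, `load` the newer one - stock Lua sets this limit via the compile-time constant LUAI_MAXVARS, so an embedded build like Renoise’s may differ):

```lua
-- Probe the per-function local-variable limit of the running interpreter
-- by compiling "local v1; local v2; ..." until compilation fails.
local function max_locals()
  local compile = loadstring or load  -- Lua 5.1 vs 5.2+
  local n = 1
  while true do
    local decls = {}
    for i = 1, n do
      decls[i] = ("local v%d"):format(i)
    end
    local chunk = compile(table.concat(decls, "; "))
    if not chunk then
      return n - 1  -- n locals failed to compile, so n-1 is the limit
    end
    n = n + 1
  end
end

print(max_locals()) -- typically 200 for stock Lua 5.1
```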

Cool - but also a bit “scary”, because I didn’t actually fix any errors, just left out the whole keybinding/midi-mapping bits. Which is 100% working here.

Seems I’m running against some internal limitation of the lua runtime here. I’ve been known to do that…

Did you know that our Lua engine allows you to create no more than 300 local variables per function? Neither did I, until recently B)

^I have anywhere from no to a very slight idea what all that means :slight_smile: , but as long as you manage to turn out cool, useful tools I can’t complain. Are these limitations beatable with a new Renoise, or intrinsic to Lua?

Are these limitations beatable with a new Renoise, or intrinsic to Lua?

Beatable, for sure. We have options when compiling the Renoise-flavored Lua.

It’s just that I’ve not come across this before. But I have experienced similar problems in other embedded languages.

Teasing what might be the most important step so far for this tool:

[attachment: xstream-stack-sneak-preview.gif]

The idea has been around for a while: allow models to pass input from one to the next, in a serial or parallel fashion.

So you can have something like a sequencer, and plug it into another model that transposes the notes.

And not limited to a single track either - the first model could output to one track, and the transposed version could be written to a different track.

It opens up the door to a multitude of possibilities, as well as making it much more attractive to write small, focused models that perform a single task.

The processing would be linear, a bit like the left -> right processing in Renoise.

But even with that limitation, extensive routing is possible - if you could then specify for each model, where it should get its input from.

Here’s a flowchart which describes the signal flow:

[attachment: xstream-flowchart.gif]

Edit: maybe it’s not so clear from the illustration, but those lines are arrows, going in one particular direction.

The tricky thing was to come up with a reasonably simple way to express this visually - but I’m pretty happy with the result (see GIF above).

Now, as stacks introduce the idea of connecting multiple models, obviously they need to be able to be imported & exported, just like regular models.

But even better, I wanted this to be something that you didn’t necessarily have to worry about - so the tool will be able to save the entire state in the song, and recall it the next time you load the song.

Now, this feature will take considerable time and effort to finish - it touches upon about a million small things and will need a LOT of testing.

Until then, v1.57 is the premium choice.

Best tool with a bright future! I’m very much looking forward to this kind of modularity.

Here is an example of a use case that I hope will eventually be possible with stacked models:

  1. Select a model - “Chord input” - as model A. Recognition of chord and inversion happens here.

  2. Select a model - “Invert chord” as model B. Basically you can use some effect number in track x, to modify the chord inversion from model A here.

  3. Select a model - “Chord to arp” as model C. This can be some kind of scheme to output the chord according to some kind of indexed notes from track y, for example.

Some issues here:

  • It would require voice handling to determine the “active chord”. Essentially, logic that spans over multiple lines - keeping a table unless pattern data changes it. Or maybe this is meant to be covered by the “event” system?

  • Also, the function described in point 2 would require an ‘effect voice’ handling. The inversion should be active until some other effect number happens.

  • Multiple track inputs are still not addressed? This would be required by model C above. Maybe this is covered in your flowchart explanation - I’m not sure.

  • Some auto-rerender feature would be good: auto-generating data (throughout the song?) whenever some arg or code is changed.

Just throwing these ideas here, in case they have something that requires consideration.

Yes - please bring on the scenarios.

As you mention voice handling specifically, this is something I thought about too.

See, there is a realtime voice manager in xLib and it does a great job at keeping track of voices - which track, instrument, etc.

But xStream is a different kind of beast. Here, the output exists in a kind of “quantum state” because it’s written to the track ahead of time and this content might change at any given moment.

Which means, you can’t just decide that “ah ok, got some voices here”, because with the flick of a switch, those voices would never have existed.

Same goes for output, you can’t just cancel a voice because the output contained a note-off at some point.

I know, this is not relevant for you because you’re not interested in the streaming aspect.

But voice-management with streaming support is indeed something that will need special attention.

And this could all be prototyped as a model, btw. :slight_smile:

Multiple track inputs are still not addressed?

A model will still be passed a single xline - the one of its designated “read track”, or the one passed from another model (by means of the routing as illustrated in the GIF).

But in addition, models will be able to directly access the buffer of any previous model - in the flowchart, model C has access to any track as the xline, and/or the output from models A+B.

And of course, this is just the “managed” part of the tool. You have always been able to read the input from any track - the problem is that you’re then hard-coding track indexes into models, which is bad practice. Better to have some degree of management here.

Another thing you bring up is “indexed notes”. Now, this happens to fit nicely with the idea of an xline - using columns to represent indices.

But there is a bigger topic here: sometimes you need to pass data around, in one form or the other.

So, in which other ways can the xline be used as a “protocol” between models?

Imagine, for example, that the first model is an LFO. It creates high-res data and writes it into the “xline.automation” property.

Now, this automation is not going anywhere before the xline is actually written. So if it got passed on to Model B, then Model B would be able to interpret the automation and use it for “whatever” it wanted.

For example, writing Sxx commands using an LFO is quite fun.

What do you think of treating the “xline as protocol”, rather than “passing arbitrary sets of data” - can you think of some immediate limitations?

Take MIDI as an example: they implemented sysex because they knew that the protocol would otherwise be too strict.
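To make the “xline as protocol” idea concrete, here is a toy sketch of two stacked models, where an LFO model stashes data in the automation slot and a second model interprets it as an Sxx command. The field names are loosely modeled on xStream’s xline, but this is a hypothetical illustration, not the actual API:

```lua
-- Toy sketch of "xline as protocol" between two stacked models.
-- (Field names are assumptions, loosely modeled on xStream's xline.)
local function lfo_model(xline, xinc)
  -- Model A: generate an LFO value and stash it in the automation slot
  xline.automation = { value = 0.5 + 0.5 * math.sin(xinc * math.pi / 8) }
  return xline
end

local function sxx_model(xline)
  -- Model B: interpret the automation value as a 0Sxx effect command
  if xline.automation then
    local amount = math.floor(xline.automation.value * 255)
    xline.effect_columns = { { number_string = "0S", amount_value = amount } }
  end
  return xline
end

-- run stream position (xinc) 4 through the stack: A feeds B
local out = sxx_model(lfo_model({ note_columns = {} }, 4))
```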

Some auto-rerender feature would be good: auto-generating data (throughout the song?) whenever some arg or code is changed.

Yes, or simply yet another output scope: MTX (matrix), complementing TRK/SEL/LINE.

I mean (and I’m guessing here): what you’re really looking for is the ability to have something that evolves over time, in more than a single pattern? Like when you’re streaming.

Auto-render would then be a simple checkbox, somewhere.

What do you think of treating the “xline as protocol”, rather than “passing arbitrary sets of data” - can you think of some immediate limitations?
Take MIDI as an example: they implemented sysex because they knew that the protocol would otherwise be too strict.

In a private tool of mine that has this kind of serialness, this is pretty much what I do. In addition, I have a table inside my “fline” named _meta, where I can store custom stuff like origin, previously analyzed chord quality and such.

Technically, I feel that an xline is a bit too specific for this task, though. Not sure about your data structure, but maybe just dump a table from one model to another (and if the table happens to contain an xline key, so be it).

I’m thinking… In model B, you could access model_a[“xline”], for example. Maybe all models could share the same sandbox? (Or kind of a “parent sandbox/environment” for __newindex. Of course, model A being executed first, then B, C etc.) This would allow something like the following scenario:
Model A and Model B have different inputs. Model C outputs to a track, and alternates between data from model A and B (depending on some criteria).

I mean (and I’m guessing here): what you’re really looking for is the ability to have something that evolves over time, in more than a single pattern? Like when you’re streaming.

Not really, I think. Personally, I’m only interested in abusing xstream for generating pattern data (offline). So the main use for “songwide auto-generate on arg change” is to make it fit into a trial’n’error workflow.

I have a table inside my “fline” named _meta, where I can store custom stuff like origin, previously analyzed chord quality and such.

Great that you bring up a concrete scenario.

In the case of the chord analyser, “origin” is what - the originating track index? That we’ve got covered, as the stack will specify the actual track index for each member - both for reading and writing.

The “chord quality” would obviously be some kind of data associated with the model, and as such, also directly accessible.

And we need a streaming voice-manager as well. I mean, you mentioned this:

It would require voice handling to determine the “active chord”

This is where I’d imagine that a voice-manager would allow you to figure out the active voices. What would make it a “streaming” voice-manager is that the xinc decides what results you will get.

Therefore, it needs to be a process, automatically running in the background. I imagine it would take a few cues from the regular xVoiceManager, perhaps even include some of its methods - release_all() could write note-offs in relevant columns, etc.

And the streaming is also my biggest concern when it comes to arbitrary data. Accessing these data could (should) yield a result, depending on the xinc that you’re currently located at.

This is not too bad with a voice-manager, but it could get messy with arbitrary data. Definitely hard to come up with an efficient, elegant solution here.
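One way such a streaming-aware voice register could be sketched: record note-ons and note-offs against the xinc they occur at, so queries can be answered “as of” a given stream position. This is purely hypothetical - not the xVoiceManager API, just an illustration of the idea:

```lua
-- Hypothetical sketch: a minimal streaming-aware voice register.
-- Events are recorded against their xinc (stream position), so the
-- set of active voices can be queried "as of" any position.
local StreamingVoices = {}
StreamingVoices.__index = StreamingVoices

function StreamingVoices.new()
  return setmetatable({ events = {} }, StreamingVoices)
end

function StreamingVoices:note_on(xinc, column, note)
  table.insert(self.events, { xinc = xinc, column = column, note = note, on = true })
end

function StreamingVoices:note_off(xinc, column)
  table.insert(self.events, { xinc = xinc, column = column, on = false })
end

-- Which voices are sounding at a given stream position?
-- Replaying events up to xinc means later edits to the buffer
-- ("voices that never existed") simply fall out of the result.
function StreamingVoices:active_at(xinc)
  local active = {}
  for _, ev in ipairs(self.events) do
    if ev.xinc <= xinc then
      active[ev.column] = ev.on and ev.note or nil
    end
  end
  return active
end
```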

Other than that, it really sounds like the approach I’ve taken would be working for you as well.

Wonderful! I’ve been away from the forums so I’m a bit late, but thanks so much for the update!

Cheers. :slight_smile:

Wonderful! I’ve been away from the forums so I’m a bit late, but thanks so much for the update!

Thanks a lot!

Yeah, it felt good to release that update.

I’m thinking of expanding it in a number of ways, but the fundamentals are pretty much in place now.

And: if you do encounter something strange or broken, please report :smiley:

Hi…

I have a specific use case I think this tool could be useful for. I’m not sure right now, so I’d better ask here before I waste lots of time figuring out it isn’t possible. It’s a use case that I think many Renoise users would like… I desperately want this, and if xStream wasn’t made for such action, I’d consider trying to make a dedicated tool for it.

The idea is to have every track of interest (tracks where notes are sequenced) duplicated. One will always be muted, the other live; maybe some extra tool could help keep this snappy.

What xStream or a custom tool should do is look for special channels, where only a row of delay values is sequenced. I will name them “groove channels”. Then it should look for the special channel pairs, and shift the notes from the left one by the delay values of the groove channel, pasting the results into the right channel. The format of the groove channel

So it’s a custom groove tool! Yowsa. Maybe there could be multiple channels with delays, and these should be applied to each pair to the right of it, until another groove channel asks for another groove for the channels to the right - or disables it somehow for those channels.

The idea of duplicating the channels comes from the need to still be able to jam the instruments, play back the straight original, or alter the groove while keeping the original intact and not accumulating groove changes. The process of trying out and adjusting grooves - and of being able to work with note delays for tuplets etc. in the master channels - should be totally transparent and automatic. Also, the groove channels should be sequencable in the matrix like normal pattern/track slots, and the tool should react to changes by updating the processed channels, either automatically or, if that is too slow, on demand.

I still see some shortcomings in the concept, or points where it could be driven even further… for example, it would be nice to also be able to “groove up” graphical automation by skewing the graph according to the delay values.

What do you say - could this work with xStream, or should I start trying to make my own tool for this?
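The core of the groove transform described above can be sketched in a few lines. Here, plain tables stand in for Renoise pattern lines (the real tool would read and write via renoise.song(), and the line format here - note plus a 0-255 delay - is a simplification):

```lua
-- Sketch of the core "groove" transform: copy notes from a source
-- track, adding per-line delay values read from a groove track.
-- Each line is represented as { note = <string or nil>, delay = <0-255> }.
local function apply_groove(src_track, groove_track)
  local out = {}
  for i, line in ipairs(src_track) do
    local groove_delay = (groove_track[i] and groove_track[i].delay) or 0
    if line.note then
      -- add the groove delay on top of any existing note delay, capped at FF
      out[i] = { note = line.note, delay = math.min(255, line.delay + groove_delay) }
    else
      out[i] = { delay = 0 }
    end
  end
  return out
end

local src    = { { note = "C-4", delay = 0 }, { delay = 0 }, { note = "E-4", delay = 16 } }
local groove = { { delay = 0 },  { delay = 0 }, { delay = 64 } }
local result = apply_groove(src, groove)
-- result[3] keeps its note, with delay 16 + 64 = 80
```

Since the transform is just a per-line mapping from (source, groove) to output, it fits xStream’s model of reading one track and writing another - the main open question is the pairing/routing of the duplicated channels.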