New Tool (3.0,3.1): xStream

Thanks that’d be most helpful, it’s more of a syntax issue for me - Lua is a relatively new one for me.

D’oh, maybe my example was also not that helpful. Cloning, like I described it, is only available in the development branch.
I need to release that one anyway - I’ve wanted to get rid of this annoying issue for some time now.

But it’s still possible to achieve what you want, it’s just that it’s quite a lot more code (using Renoise API calls).

-- amount of semitones to transpose by 
local transpose = -2

-- define our target track index
local target_track_index = 2

-- figure out the pattern-track in which the line exists...
local patt_idx = rns.sequencer:pattern(xpos.sequence)
local patt = rns.patterns[patt_idx]
local ptrack = patt:track(target_track_index)

-- we have our pattern-track and can figure out the line
local rns_line = ptrack:line(xpos.line)

-- finally, we loop/iterate through all visible note columns 
-- in the source track (track_index is the selected track, as 
-- provided by xStream), transpose the note (clamping it to
-- make sure it's between C-0 and B-9) and write it to 
-- the line that we previously figured out. 

for k = 1,rns.tracks[track_index].visible_note_columns do
  if (xline.note_columns[k].note_value < 120) then
    local new_note = xline.note_columns[k].note_value + transpose
    rns_line.note_columns[k].note_value = cLib.clamp_value(new_note,0,119)
  end
end

Ed: now here’s why cloning would be nicer -
the above example could be reduced to something like this:

-- amount of semitones to transpose by 
local transpose = -2

-- define our target track index
local target_track_index = 2

-- clone the xline 
local new_xline = xLine(xline)

for k = 1,rns.tracks[track_index].visible_note_columns do
  if (xline.note_columns[k].note_value < 120) then
    local new_note = xline.note_columns[k].note_value + transpose
    new_xline.note_columns[k].note_value = cLib.clamp_value(new_note,0,119)
  end
end

-- write the cloned line once, after all columns have been transposed
new_xline:do_write(xpos.sequence,xpos.line,target_track_index)

Awesome, thanks Bjorn! That works great.

Yeah, I can see that the sugared syntax makes it a lot more succinct. Also, having the tool manage the track routing would be great - specifying a source and destination track so that it doesn’t matter which track is selected, the notes always go from src to dest. I found I had to consciously make sure everything was set up correctly.

Another thing, will the next version manage the visible columns shown? At first I was baffled as to where the notes were going in the dest track, only to discover they were hidden!

Another thing, will the next version manage the visible columns shown?

Obviously, in this example, as we are just writing to the “raw” target track, nothing happens (visibility stays as it is).

But yes - if the track was managed by xStream, then note columns should be revealed in that track too, as it’s being written to.

At least conceptually, everything should work the same with multiple/stacked tracks as it does with a single track.

Of course, once we go into detail, the new stacked-models approach can get tricky. Right now, whether note columns are revealed or not is controlled by the ‘expand_columns’ option. So the behavior can be set per-model (override the globally set option). But I have to think about what would happen if multiple models were targeting the same track and specified different settings.

PS. @Danoise,

If xStream needs optimization when going multi-track, I have some ideas that should speed up the reading/writing of xline a lot.

Reading:

Creating an xLine from tostring(renoise.PatternLine) instead of iterating columns.

I’m not sure exactly how you convert it in xLib/xStream now, but it shouldn’t be any problem adding an optional argument to xLine:__init() to populate the data quickly from a PatternLine string.

Writing:

Comparing xLine (string) to the destination PatternLine (string), only writing what is updated.

It might sound like a lot of string operations etc., but it’ll be really quick in comparison to accessing renoise.song(). I can submit an example of the xLine class allowing init from a pattern string, if it sounds interesting.
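
As a rough, whole-line taste of the writing idea (the actual proposal goes further and diffs per column; names here are placeholders, not xStream API):

local function write_if_changed(ptrack,line_index,new_line_string,apply_fn)
  -- read the destination line's string once...
  local rns_line = ptrack:line(line_index)
  if (tostring(rns_line) == new_line_string) then
    return -- ...and skip all column writes when nothing has changed
  end
  apply_fn(rns_line) -- only now touch the individual columns
end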

PS. Another far-fetched/vague idea I have is to make xLine some kind of sandbox with a metatable, only accessing what is really being accessed by the end user (never populating everything). Perhaps interesting, as well…
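
For what it’s worth, a bare-bones sketch of that metatable idea (just the lazy-access mechanism, nothing xLine-specific):

local function make_lazy_line(rns_line)
  return setmetatable({},{
    __index = function(t,key)
      local value = rns_line[key] -- e.g. "note_columns": fetched on demand
      rawset(t,key,value)         -- cache it, so __index only fires once
      return value
    end
  })
end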

It might sound like a lot of string operations etc., but it’ll be really quick in comparison to accessing renoise.song(). I can submit an example of the xLine class allowing init from a pattern string, if it sounds interesting.

Sounds very promising. xLine is at the heart of xStream - it could indeed bring a lot of extra performance :slight_smile:

I imagine that you could then initialize an xLine in three different ways:

  1. Using a table (a “descriptor”)

  2. Using another xLine instance

  3. Using a string (PatternLine)
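
Not the actual implementation - just a rough sketch of how such a dispatch could look (field names are simplified, import_from_pattern_line_string is a hypothetical helper, and it relies on Renoise’s class system returning the class name from type()):

function xLine:__init(arg)
  if (type(arg) == "string") then
    -- (3) a serialized PatternLine, e.g. obtained via tostring(line)
    self:import_from_pattern_line_string(arg) -- hypothetical helper
  elseif (type(arg) == "xLine") then
    -- (2) clone an existing xLine instance
    self.note_columns = table.rcopy(arg.note_columns)
    self.effect_columns = table.rcopy(arg.effect_columns)
  elseif (type(arg) == "table") then
    -- (1) a plain descriptor table
    self.note_columns = table.rcopy(arg.note_columns or {})
    self.effect_columns = table.rcopy(arg.effect_columns or {})
  end
end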

I will make an attempt at clarifying what the xLine does, and needs to do. That will definitely make it easier to nail down some specs :slight_smile:

Is there any way to lock the tracks, i.e. hard-code the source track and destination track?

So I replaced the line:

for k = 1,rns.tracks[track_index].visible_note_columns do

with

local src_trk = args.src_trk

for k = 1,rns.tracks[src_trk].visible_note_columns do

But it seems that you need the source track selected in the Pattern editor. Is that right?

it seems that you need the source track selected in the Pattern editor. Is that right?

Yep. xStream is currently hard-coded to the selected track. Again, this is something that would change in the planned version.

(I would like to add as a sidenote: it’s nice how your questions align so nicely with those plans ^_^)

Sounds very promising. xLine is at the heart of xStream - it could indeed bring a lot of extra performance :slight_smile:

  1. I assume that 90% of overhead is related to song() access and 10% to the classes/flexibility :slight_smile: It seems that this can be optimized simply by modifying xLine.do_read and xLine.do_write, if I want to experiment?

  2. Is your buffering-system a bottleneck in itself (using the idle loop), or will the idle loop make the performance adapt? What I’m asking is whether speed improvements will be noticeable when stress testing, or whether the 0.1s update frequency will work as some sort of constant, with the only variable being how many lines ahead are needed for reliable writing?

EDIT: Well, I can use the TRK button for measurements anyway.

xLine.do_read and xLine.do_write, if I want to experiment?

Yes - but not quite that simple. If the xLine itself managed everything - note columns, effect columns and automation - it would be quite a monster class.

Instead, you want to look closer at xLinePattern (which really should be called xPatternLine, eheh…). This class deals with things that can be expressed through the pattern (basically the xLine without the automation envelope component).

And if you keep following the breadcrumb trail, it will take you to xNoteColumn and xEffectColumn as well :slight_smile:

Together, those classes form a complete, ‘virtual’ representation of pattern data, without any references to renoise.song().

The purpose of the xNote/EffectColumn classes is also to accept both numbers and strings as input (number_string/amount_string); the value is always stored internally in the class as a number (number_value, amount_value).

But with the optimization you have in mind, it might make sense to flip this around and make those classes prefer strings internally. Otherwise, there will be some unnecessary converting back and forth.
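
To make that duality concrete, a hedged illustration (the descriptor-style constructor and exact field names are assumptions based on the description above, not checked against the current xLib sources):

local from_number = xNoteColumn({note_value = 48})     -- C-4, numeric input
local from_string = xNoteColumn({note_string = "C-4"}) -- same note, string input
-- both should end up storing note_value == 48 internally; a string-based
-- comparison (like the one proposed above) then needs to convert back
-- again, which is the back-and-forth being discussed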

Btw: I have just committed a few things on github to properly split the pending changes for the stacked model.

The goal is simply for us to have a clean slate to work on. So master is now the current version of xStream (1.57) + some bugfixes I’ve committed in the meantime.

I’m testing this a bit right now, to make sure I didn’t accidentally break something…

I’m familiarizing myself with the structure… I guess it would be fine to just hijack the xLinePattern.do_read() to start with.

By the way! When TRK-rendering a track with only one note column and 256 lines, the do_read() function is triggered 32896 times (checked with a global variable++). I also get the feeling that the number of times this function is executed is non-linear in the number of lines (136 calls on a 16-line pattern). Incidentally, 32896 = 256*257/2 and 136 = 16*17/2 (triangular numbers), which suggests every line triggers a re-read of all lines up to that point. I’m mentioning it in case something really fishy is going on that takes up a lot of resources.

I’m familiarizing myself with the structure… I guess it would be fine to just hijack the xLinePattern.do_read() to start with

Sure, working on a single part should make it easier for both of us.

By the way! When TRK-rendering a track with only one note column and 256 lines, the do_read() function is triggered 32896 times (checked with a global variable++).

Haha, that’s a lot !!

But yes, there would be an overhead when streaming, because it reads lines multiple times to pick up “just in time” changes.

But it shouldn’t be that much, and it isn’t needed at all when applying to a track (“offline mode”). So, yes, something’s stinky in there.

To be fair, I have deep-dived into the “xStreamBuffer” in the newer sources (the stacked model branch). So that one is already a lot more efficient in the whole input/output department.

Guess I’ll focus on that branch, and figure out how it can be merged nicely with what you’re bringing to the (Lua) table :wink:

I think I managed to commit something on GitHub. It turns out the overhead of other stuff is a lot more than I expected, so the speed-up of do_read() is only 66% or so (depending on how many columns are read).

Merged !!

66% faster is what I call a substantial speed-up. And nice and readable, too :smiley:

If you were expecting more, well, blame it on me. I will port some of my improvements over in the following days -

that should prove especially beneficial for the offline mode :wink:

Yep. xStream is currently hard-coded to the selected track. Again, this is something that would change in the planned version.

(I would like to add as a sidenote: it’s nice how your questions align so nicely with those plans ^_^)

Hehe. Just fresh eyes man, fresh eyes!

Together, those classes form a complete, ‘virtual’ representation of pattern data, without any references to renoise.song().

The purpose of the xNote/EffectColumn classes is also to accept both numbers and strings as input (number_string/amount_string); the value is always stored internally in the class as a number (number_value, amount_value).

But with the optimization you have in mind, it might make sense to flip this around and make those classes prefer strings internally. Otherwise, there will be some unnecessary converting back and forth.

I think it makes sense to use strings only, as Renoise accepts them. Potentially, values could be provided on a need-to-use basis with a minimum amount of conversion needed. I think the conversion functions took about 50% of the total time or so, when parsing a pattern string :frowning:

By the way. I’m hitting the “*** main.lua:155: attempt to index field ‘preferences’ (a nil value)” error on one computer with my update. I remember you said it had to do with how many variables are used. This must be per file, and not just in the global scope? I’m now at a sweet spot where I can turn the error on and off by typing “local test” inside a function.

EDIT: According to the Corona forum, the maximum number of local variables per file in Lua is 200. It seems like a good idea to table-ize variables that aren’t accessed very often.
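
For illustration, the “table-ize” idea boils down to something like this (names and values are made up):

-- before: each rarely-used value occupies one of the limited local slots
-- local color_a,color_b,color_c = 0xFF0000,0x00FF00,0x0000FF

-- after: a single local slot, no matter how many entries are stored
local palette = {
  a = 0xFF0000,
  b = 0x00FF00,
  c = 0x0000FF,
}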

I have created a branch containing the newer buffer implementation, which, as suspected, makes things a lot more efficient:

https://github.com/renoise/xrnx/tree/feature/xstream-new-buffer-implementation

(still needs some testing, but good enough as a proof-of-concept)

You can see for yourself, as I have added a small console print to both the master and this branch that tells you how long it takes to process a track.

The difference is most striking in offline mode, where it can bring a more than 20x speed-up.

Euclidean Rhythms 1.2 (“whispers” preset)

Before: ~8 seconds

After: ~0.3 seconds

That’s an improvement of, well, embarrassing dimensions !!

Had I known it would’ve made such an impact, I would surely have back-ported this feature when I originally wrote it four months ago.

But I also think I know why it escaped me: the tool is tested on my netbook - my current “lower computing threshold”.

And since realtime streaming was always working fine on that machine, I’ve not really experienced just how much of a difference the newer implementation made.

Moral of the story: profiling code is important (even if it’s boring).

By the way. I’m hitting the “*** main.lua:155: attempt to index field ‘preferences’ (a nil value)” error on one computer with my update. I remember you said it had to do with how many variables are used.

You’re saying it happens even with the current/master? In that case, ugh… I have only seen it happen when the tool is initializing with the full set of keyboard/midi mappings.

EDIT: According to the Corona forum, the maximum number of local variables per file in Lua is 200. It seems like a good idea to table-ize variables that aren’t accessed very often.

That local-variable limit can be set as a compile-time property, so it seems Corona is compiling with 200 variables. I think we accept 300.

But unfortunately that isn’t the problem here - there are no variables being declared inside those mapping definitions.

Whether the problem occurs or not seems to be a combination of the code and the computer it’s running on - it really is quite mysterious.

Good stuff! I think it’s working OK. The rendering time also seems linear in the number of lines rendered. (But the optimization I submitted doesn’t seem to make much of a difference, so I’m guessing it’s other stuff that still takes time). There seemed to be a lot of other bugs in that branch, but I’m guessing that you’re sorting it all out for an update (perhaps also a case of me not having exactly the same vLib/cLib/xLib as you).

@joule: you forgot to mention whether you got the “preferences (nil)” issue on this test branch or with some code of your own?

perhaps also a case of me not having exactly the same vLib/cLib/xLib as you

I’m having this slightly ghetto solution where I symlink the libraries into the tool folder.

This ensures that, whatever tool I’m working on, the changes are propagated to other tools that use those libraries too.

So to make things work on your side, you’d have to either copy or symlink those libraries - from the respective folders.

For example:

com.renoise.xStream.xrnx/source/xLib => symlink => com.renoise.xLib.xrnx/source/xLib

The .gitignore should make sure that the symlinked folders are not cleaned when switching branches.

This means that the symlinks themselves manage to ‘survive’ such git actions and still point to the right place.

When it works, it actually works quite well :slight_smile:

The preferences (nil) warning seems to be gone! I was having a look at the do_write function. If I understand correctly, you only write to .song() those columns that are updated (and quickly skip/clear empty lines when the target is already empty?), so it should be pretty optimized already.

PS. I made a line copy function that is even a lot faster than the native :copy_from(), but it’s only usable if you’re reasonably certain that the number of unnecessary column writes outweighs the cost of the one tostring(line) access that is required. I think I’ll post a simple class with these functions in another thread, as food for thought.
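
(Not the actual class - just a minimal taste of the trade-off: a single string comparison can already spare a full copy when the lines are identical.)

local function copy_line_if_changed(src_line,dst_line)
  if (tostring(src_line) ~= tostring(dst_line)) then
    dst_line:copy_from(src_line)
  end
end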