I’ve got this idea that relates to the concept of “clips” in Renoise: from the discussion I’ve had with Pysj, it has become clear that if we ever were to have arranger-like features, everything in a song/pattern would need to be interpreted as a clip, on some level. Which is all well and good. It’s quite easy to imagine a workflow where you start a new song, add some notes, they become a clip, et cetera.
But how about loading an “old” song, from the days before clips were invented? That song would still need to be converted into clips, and I just don’t like the idea of having each track in each pattern become a separate clip: it would result in way too many clips, created without any regard to what was actually going on inside those tracks.
So, what I am talking about is a way to look for patterns within the song data (information patterns, that is, not just pattern editor data), which could then be used as a starting point for further (clip) editing! I already have a lot of ideas for note/value-based interpretations and a pretty good understanding of regular expressions, yet I think this is slightly more complicated than the average text-based search & replace script - and reasonable performance would be important too.
So, what language would be best, how would one go about such a thing in the first place? Pointers needed?
Edit: and no suggestions about using c++ - I’m actually thinking about using actionscript ::
First, if I remember and understood correctly, the clips discussion gave me the impression that clips are in ADDITION to notes, not a replacement for them altogether.
So old songs should load and play as usual, and only if you want to add clips to such a song do you select your bunch of notes and “make a clip”.
If I did not understand correctly, then I am very worried about that clips idea.
Now, if I understood your coding question at the end correctly, you are looking for a small programming language to test your theories. If so, I strongly recommend Autohotkey. It is excellent for:
creating advanced batch files
creating fast utilities
manipulating windows / mouse / keyboard / clipboard and other Windows elements
creating full-blown applications (although without the bells and whistles that Visual Studio gives you)
It is a very easy installation, and there is no studio environment to work in - you just use your favorite text editor.
Yes, everything is made into clips. But you really shouldn’t be worried, since clip functionality is completely transparent until you decide to register a clip. How would you ever be able to see an “old” song, if it wasn’t somehow interpreted as clips? IMO It’s just about making a good starting point.
Yes, that’s very much like what I had in mind.
This picture depicts a song with four tracks, where each track has been divided into clips of varying length.
Let’s say that first track is a beat, second track a bassline, third track a solo/synth thing, and the fourth track contains some percussion:
Now, the beat in the first track repeats over the course of two patterns, so its clip spans both of them. The bassline, on the other hand, fits nicely inside a single pattern, so it is repeated four times. For the synth/solo track, no repeating pattern was ever found, so the entire synth track is made into a single clip. Finally, the percussion part is found to repeat twice per pattern, which makes a total of 8 identical clips over the course of the song.
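The division described above could be sketched roughly like this: for each track, look for the shortest unit that tiles the whole track exactly, and fall back to one big clip when nothing repeats. This is just an illustrative toy (track rows are simplified to plain strings; nothing here is actual Renoise data or API):

```python
# Hypothetical sketch: find the shortest repeating unit in a track,
# so the track can be covered by identical clips. Rows are simplified
# to hashable strings; real pattern data would need preprocessing.

def shortest_repeating_clip(rows):
    """Return the shortest prefix that tiles the whole track exactly,
    or the full track if nothing repeats."""
    n = len(rows)
    for size in range(1, n + 1):
        if n % size == 0 and rows == rows[:size] * (n // size):
            return rows[:size]
    return rows

bassline = ["C-4", "---", "E-4", "---"] * 4   # repeats four times
print(len(shortest_repeating_clip(bassline)))  # 4-row clip found
solo = ["C-4", "D-4", "E-4", "F-4", "G-4"]     # no repetition
print(len(shortest_repeating_clip(solo)))      # whole track: 5 rows
```

A real implementation would also have to handle near-repeats and the min/max clip-size boundaries mentioned later in the thread, but the basic "tile test" is the core idea.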
At least, this is what I had in mind. It’s not restricted to dividing tracks into (x) equal parts; the result really depends on what’s going on inside each track.
My preferred approach to the entire issue of clips would be to focus more on what happens at the actual (pre-)pattern cursor rather than what happens from the perspective of (post-)arranging data. As I see it, although I’m not a coder, clips would be best utilized by Renoise in idle mode, at the most CPU friendly stage: when manually entering data into the pattern.
I’d just add an XML tag XX (or something equally simple) to the xrns structure, and let XX be the external clip referrer – i.e. it would say which script should process the XML. All processing would then be done in post-mode via a button or menu item (e.g. “Render clip tags to static pattern data” or something like that).
I recorded a small .swf video (alt link) to better illustrate what I mean. The idea is simply to have an external script read (recognize?) the note(s) and value(s) 00-99 in the volume column, render the appropriate xml and then insert into the pattern. In this video, the script enters chords based on which rootkey value the user enters in the Renoise pattern row. Which type of chord to be formed is based on the (temporary) value in the volume column (10 is major, 20 is minor, etc).
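The chord-entry script from the video could look something like the sketch below. The interval tables and the 10 = major / 20 = minor codes follow the description above; everything else (function names, note spelling) is an assumption for illustration, not an actual Renoise mechanism:

```python
# Rough sketch of the external-script idea: take a root note plus a
# volume-column code (10 = major, 20 = minor, as in the video) and
# expand it into the notes of a chord. Names are hypothetical.

CHORD_INTERVALS = {10: (0, 4, 7),   # major triad
                   20: (0, 3, 7)}   # minor triad

NOTE_NAMES = ["C-", "C#", "D-", "D#", "E-", "F-",
              "F#", "G-", "G#", "A-", "A#", "B-"]

def note_number(name, octave):
    """Map a note name + octave to an absolute semitone number."""
    return octave * 12 + NOTE_NAMES.index(name)

def expand_chord(root_name, octave, vol_code):
    """Return the chord's note strings for a root note and volume code."""
    root = note_number(root_name, octave)
    notes = []
    for interval in CHORD_INTERVALS[vol_code]:
        n = root + interval
        notes.append(NOTE_NAMES[n % 12] + str(n // 12))
    return notes

print(expand_chord("C-", 4, 10))  # ['C-4', 'E-4', 'G-4']
print(expand_chord("A-", 3, 20))  # ['A-3', 'C-4', 'E-4']
```

The actual script would of course read and write the xrns XML instead of printing strings, but the value-to-chord mapping is the interesting part.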
Yes, that would make a lot of sense. So, which clips are picked out is based on pattern data, with the processing all happening in the background. To gain control of clips (instead of having them in this state of flux), you simply choose to register them.
I’m working on an open source CMS, one of the ideas being a Naive Bayesian classification system to substitute for blog-style “tagging.” Source code for the base/parent Naive Bayesian class can be viewed here. This class works as follows:
I’ve been holding my tongue and reading “identifying clips” for the last few weeks, wondering if Naive Bayes could be used here too? There could be a pre-trained data set, to identify basic patterns, and the user could train their own clips to supplement the categories and vectors.
This is all very theoretical, and would need to be complemented with non-bayesian results, but it would be cool to try?
The big challenge is that Naive Bayes works well with words/languages, but I don’t know where to begin translating it to identifying patterns and storing those as tokens.
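One possible answer to the tokenization question is to treat short note n-grams as the “words” a Bayesian classifier counts, the same way text classifiers count word tokens. A purely illustrative sketch (the joining scheme and function name are made up):

```python
# Turn a note sequence into overlapping n-gram tokens, so that a
# word-based classifier like Naive Bayes has something to count.
# Illustrative only; real pattern data has more columns than notes.

def note_ngrams(notes, n=2):
    """Return overlapping n-gram tokens from a note sequence."""
    return ["|".join(notes[i:i + n]) for i in range(len(notes) - n + 1)]

print(note_ngrams(["C-4", "E-4", "G-4", "C-5"]))
# ['C-4|E-4', 'E-4|G-4', 'G-4|C-5']
```

Each token could then be fed to the classifier exactly like a word, with clip categories playing the role of document classes.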
Thanks for the inspiration. During my tour of the Naive Bayesian world I came across many interesting places, and friendly creatures.
Now, while an actual implementation of such a principle might be overkill (after all, we’re not looking for probable matches, but actual matches), many aspects are still valid - especially the way the class starts by collecting information and reducing its complexity / standardizing it before any actual processing is done.
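The “standardize before processing” step might look something like this: reduce each pattern row to a minimal canonical form so that rows which should count as identical actually compare equal. The field names below are hypothetical, not the xrns schema:

```python
# Sketch of the "collect and standardize first" idea: reduce each
# pattern row to a canonical form before any matching, so fields
# that shouldn't affect clip identity (e.g. panning) are ignored.
# Field names are illustrative assumptions, not the xrns schema.

def canonical_row(row):
    """Keep only note and instrument; drop cosmetic fields."""
    return (row.get("note"), row.get("instrument"))

track = [
    {"note": "C-4", "instrument": 1, "panning": 40},
    {"note": "C-4", "instrument": 1, "panning": 80},
]
a, b = (canonical_row(r) for r in track)
print(a == b)  # True: same note/instrument, panning ignored
```

Which fields belong in the canonical form is exactly the kind of policy decision the match-type list below would have to settle.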
What I’ll do next is to try and define what we want to look for, based on what (I think) clips should be capable of.
1: exact matches based on entered pattern data (varying length with max/min size boundaries).
2: matches that are transposed versions of an original theme (but which one is the original?)
3: combine extracted data with automation data (looking for automation envelopes should be optional)
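Points 1 and 2 can be sketched together: exact matches compare note lists directly, while transposed matches compare interval sequences (pitch deltas from the first note), which are invariant under transposition. The “original” would then simply be whichever occurrence appears first in the song. A minimal sketch, with pitches as plain semitone numbers:

```python
# Exact match vs transposed match. Pitches are absolute semitone
# numbers (48 = C-4 under a C-centered numbering); intervals relative
# to the first note are unchanged when the whole clip is transposed.

def intervals(pitches):
    """Transposition-invariant fingerprint of a note sequence."""
    return tuple(p - pitches[0] for p in pitches)

theme      = [48, 52, 55]   # C-4 E-4 G-4
transposed = [50, 54, 57]   # same shape, two semitones up

print(theme == transposed)                        # False: not exact
print(intervals(theme) == intervals(transposed))  # True: transposed match
```

Point 3 would then extend the fingerprint with (optional) automation envelope data alongside the pitch intervals.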