Hello everyone. First post here. I have a simple request: I was hoping to get a simple script that I could copy and paste into the terminal and hear some audio playback from Renoise. Can I get some advice on how to go about doing this? Do I need to somehow load a sample or instrument first for this to work? I have tried pasting the arpeggiator example from PatternIterator.lua, but I don't hear anything when it's executed — I just see a lot of green text pop up in the terminal.
AFAIK, it is not possible to control the audio engine directly from a script. You can only use OSC to trigger notes (press and release). So you need to write a script that sends OSC messages through the API, and you always need an instrument with a sample or VSTi loaded. In any case, you will be playing notes…
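For example, something like this pasted into the scripting terminal should trigger and release a note through Renoise's built-in OSC server. This is a sketch based on the bundled OSC snippets — it assumes the OSC server is enabled under Preferences > OSC (default UDP port 8000) and that instrument 1 has a sample loaded:

```lua
-- Connect to Renoise's own OSC server (adjust host/port to your settings)
local client, socket_error = renoise.Socket.create_client(
  "localhost", 8000, renoise.Socket.PROTOCOL_UDP)
if socket_error then
  error("Failed to create OSC client: " .. socket_error)
end

-- /renoise/trigger/note_on args: instrument, track, note, velocity
client:send(renoise.Osc.Message("/renoise/trigger/note_on", {
  { tag = "i", value = 1 },   -- instrument (1 = first)
  { tag = "i", value = 1 },   -- track
  { tag = "i", value = 48 },  -- note (48 = C-4)
  { tag = "i", value = 100 }, -- velocity
}))

-- later, release the same note
client:send(renoise.Osc.Message("/renoise/trigger/note_off", {
  { tag = "i", value = 1 },   -- instrument
  { tag = "i", value = 1 },   -- track
  { tag = "i", value = 48 },  -- note
}))
```

If you hear nothing, check that the OSC server is actually running and that the port matches — that trips people up more often than the script itself.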
So by not being able “to control the audio engine from a script”, you mean that the playback can’t be started or stopped by script? That seems okay. I mainly want to be able to generate notes across different channels with different samples/instruments loaded on the different channels.
You should start by studying the available API. You will see that an advanced tool can practically replace 80% of Renoise. That is, from a tool you can drive the vast majority of controls Renoise has: play the song, stop it, write notes and make them sound, and so on. You can play notes without recording. You can even build your own virtual piano…
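As a rough sketch of the kinds of calls involved (untested fragment — the property names are from the scripting API docs, but double-check against the version you have):

```lua
local song = renoise.song()

-- transport control: start playback from the top of the pattern, then stop
song.transport:start(renoise.Transport.PLAYMODE_RESTART_PATTERN)
song.transport:stop()

-- write a C-4 into the first line of the currently selected pattern track
local line = song.selected_pattern_track:line(1)
line.note_columns[1].note_string = "C-4"
-- instrument_value is 0-based, selected_instrument_index is 1-based
line.note_columns[1].instrument_value = song.selected_instrument_index - 1
```

Writing into the pattern like this doesn't make sound by itself; the note plays when the transport runs over that line (or when you trigger it via OSC, as mentioned above).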
Nice, yeah, I've been reading that over too. I guess I'm just trying to conceptualize how to use the scripting capabilities in a compositional workflow. I was thinking of generating sequences in different meters by scripting and then going back and editing events in the tracker. Even though there is some documentation, there don't seem to be many tutorials online on how to start from scratch.
The simplest approach is to take tools already made by other users and start there. But I'd advise you to learn on your own as well, to avoid picking up other programmers' "vices".
Find some simple tool and try to understand how it works. The important thing is that you know how to use the API documentation and are comfortable with trial and error at every step…
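The scripting terminal is great for that trial and error — you can inspect almost any API object interactively. The built-in `oprint` and `rprint` helpers are handy here (these are the standard Renoise scripting globals):

```lua
-- list all properties and methods of the song object
oprint(renoise.song())

-- recursively dump a table, e.g. the samples of the first instrument
rprint(renoise.song().instruments[1].samples)

-- plain print works too, for single values
print(renoise.song().selected_track.name)
```

Paste one line at a time and read the output; the API docs then tell you what each property does.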
xStream is a very nice tool for this. It probably has a bit of a learning curve and scare factor, but I can highly recommend checking it out. It’s very practical for rendering note data algorithmically if you want a tidy framework and don’t want to make everything from scratch by yourself. (and you only need to care about its text editor and very few of the buttons to start with… it might look scary at first).
Hey @joule, good suggestion on xStream. I'm gonna start looking into tutorials on how to use it and reading up on it more in general. Do you know if it makes use of these types of effect commands?:
It just seems to me that these types of hex-based parameters are perfect for live-coding musical performances.
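For what it's worth, the plain API can already write those effect commands into the pattern, which is what xStream builds on. A minimal sketch (the `0U` slide-up command here is just an illustrative choice — check the effect-command list for your Renoise version):

```lua
-- write "0U20" (pitch slide up, amount 0x20) into the first
-- effect column of line 1 of the selected pattern track
local line = renoise.song().selected_pattern_track:line(1)
line.effect_columns[1].number_string = "0U"  -- two-character command
line.effect_columns[1].amount_value = 0x20   -- 0-255 parameter
```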
Are there more xStream examples available elsewhere online besides the bundled ones?
Not that I know. Maybe something can be found in the xstreams forum thread, but I’m not sure.
By the way, xStream is a complex beast. If you haven't already, it's probably best to familiarize yourself with the basics of the tracker effect commands before going into generative stuff.