Will it be possible at some point to script rendering? To do things like this:
Not yet, but there should be a way to do this.
Maybe someone has an idea of how this could look in the API?
Would a script be able to include MIDI-triggered audio from external hardware inputs in a realtime render (whether of a selection, a pattern or the full song)? That's something that's seriously lacking in the regular render, and if anything can encourage me to learn the scripting bidniz, it's that. However, it's been said that DSP/audio stuff mostly can't be done with scripting at the moment... but I don't know whether this necessarily falls under that heading.
Currently no realtime stuff indeed. Rendering external hardware would also require a few extra things internally, and would probably have to be done in several rendering phases, mixing the rendered sample part with the captured external audio in a final pass (to compensate for latency).
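To illustrate that last mixing phase: a minimal Lua sketch, assuming both the rendered output and the captured hardware audio are plain arrays of frames and the round-trip latency has already been measured. `mix_with_latency` and the buffer format are made up for illustration, not anything the engine exposes:

```lua
-- Hypothetical sketch: combine a rendered buffer with audio captured
-- from external hardware, shifting the capture back by the measured
-- round-trip latency so the two line up.
function mix_with_latency(rendered, captured, latency_frames)
  local mixed = {}
  for i = 1, #rendered do
    -- frames pushed past the end by the latency shift read as silence
    local ext = captured[i + latency_frames] or 0.0
    mixed[i] = rendered[i] + ext
  end
  return mixed
end
```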
```
samplebuffer render_to_sample(start_song_pos, end_song_pos, start_line, end_line)
```
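Called from a tool, that could look something like the sketch below. `render_to_sample` is the proposal above, not an existing function; only `renoise.song()` is the real entry point:

```lua
-- Hypothetical usage of the proposed call: render pattern-sequence
-- positions 1..4, lines 1..64, of the current song into a new buffer.
local song = renoise.song()
local buffer = song:render_to_sample(1, 4, 1, 64)
-- the returned buffer could then be copied into an instrument's sample
```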
It would obviously be insanely cool if you could create song objects that can get rendered, i.e. not the current song, but one you can construct and discard in the background, without affecting the current song.
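A rough sketch of how that might read, assuming a hypothetical `renoise.Song()` constructor for an offline song object (nothing like this exists in the current API):

```lua
-- Hypothetical: build a throwaway song in the background, render it,
-- and discard it, without touching the song the user is editing.
local bg = renoise.Song()   -- offline song constructor (made up)
-- ...fill bg with patterns, tracks and instruments here...
local buffer = bg:render_to_sample(1, 4, 1, 64)
bg = nil                    -- throw the background song away when done
```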
It would also be nice to be able to apply the FX chain of any track to any sample, just like what "Render FX of current track" does in the sample editor.
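In the API this could be as small as a single method on the track object; `apply_fx` here is invented purely to show the shape of it:

```lua
-- Hypothetical: run any sample through any track's DSP chain offline,
-- like "Render FX of current track" in the sample editor.
local song = renoise.song()
local buf = song.selected_sample.sample_buffer
local processed = song.tracks[1]:apply_fx(buf)  -- apply_fx is made up
```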