Question about the limits of scripting...

So, I’ve got an idea for something I’d like to do with Renoise scripting, but I’m not sure whether it’s feasible…

Basically, the idea is to turn the keyboard and mouse into a hardware-style controller for live performances.

To be more precise, I’d need to take normal keyboard and mouse function away from Renoise itself and have them control everything: triggering specific samples, moving samples between tracks, activating/deactivating track effects, and using relative mouse position to control effect parameters. For instance, I’d map one effect’s on/off switch to Q… then, to control that effect, you’d hold 1 (right above Q) and move the mouse around to manipulate it.

Is this at all possible, or is scripting strictly for automation and MIDI interface options?

Another thing would be ‘mapping’ effects to samples, which I’d probably attempt by having ‘wet’ and ‘dry’ track columns. So, for instance, if you held ‘1’ for that effect control as before and pushed a sample-trigger ‘pad’ (in this example, Z), then that sample would be moved to the wet track column of the associated effect. The same button combo while it’s in the wet column would send it dry again.

I hope this makes some kind of sense. I’m sure the experts would be able to tell immediately if this is possible.

You can control effects (on/off) and their parameters from within scripting, and you can MIDI-map as well as keyboard-map your (GUI) controls to drive them.
If you want to reuse fixed keys, you need to hijack the key(-combo) handling from within the script. The downside of that part is that you need a GUI dialog that always has focus to capture the keyboard events.
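Something along these lines (an untested sketch; the Q key and the device index are just placeholders, but the dialog/key-handler calls are the standard Renoise Lua API):

```lua
-- A dialog whose key handler intercepts keystrokes while it has focus.
-- Returning nil swallows the key; returning `key` passes it back to Renoise.

local vb = renoise.ViewBuilder()

local function key_handler(dialog, key)
  if key.name == "q" then
    -- Example action: toggle the first effect on the selected track.
    -- devices[1] is the mixer device, so the first real effect is devices[2].
    local device = renoise.song().selected_track.devices[2]
    if device then
      device.is_active = not device.is_active
    end
    return nil  -- swallow the key so Renoise never sees it
  end
  return key  -- forward everything else to Renoise as usual
end

renoise.app():show_custom_dialog(
  "Live Controller",
  vb:column {
    margin = 8,
    vb:text { text = "Keyboard is captured while this dialog has focus" }
  },
  key_handler
)
```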


All of the effect manipulation (on/off, changing parameters, etc.) you mention can be done via the API.
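For instance (a rough sketch; the device and parameter indices are arbitrary examples, not anything specific to your setup):

```lua
local song = renoise.song()
local device = song.selected_track.devices[2]  -- first effect after the mixer device

-- Toggle the effect on/off
device.is_active = not device.is_active

-- Set a parameter from a normalized 0..1 amount, scaled into its real range
local param = device.parameters[1]
local amount = 0.75  -- e.g. derived from mouse position
param.value = param.value_min + amount * (param.value_max - param.value_min)
```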

If you open a GUI from the scripting API, you can get all kinds of keyboard input.

I don’t have comprehensive knowledge of the Renoise APIs, but I’m pretty sure they don’t provide very deep access to mouse input.
For example, detecting mouse movement when you’re not interacting with something like an XY pad control in a GUI isn’t (afaik) possible.
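The closest you can get is putting a control like that in your own dialog. A hedged sketch, assuming the target device exists and has at least two parameters:

```lua
local vb = renoise.ViewBuilder()

-- An xypad only reports the mouse while you drag inside it, but that is
-- enough to map x/y onto two effect parameters.
local pad = vb:xypad {
  width = 200, height = 200,
  value = { x = 0.5, y = 0.5 },
  notifier = function(value)
    local device = renoise.song().selected_track.devices[2]  -- placeholder index
    if device and #device.parameters >= 2 then
      local px, py = device.parameters[1], device.parameters[2]
      px.value = px.value_min + value.x * (px.value_max - px.value_min)
      py.value = py.value_min + value.y * (py.value_max - py.value_min)
    end
  end
}

renoise.app():show_custom_dialog("XY Control", vb:column { margin = 8, pad })
```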

If detailed mouse input isn’t possible, you could always write an external program that uses system APIs to capture and process input, and communicates with a Renoise tool via OSC (or a protocol of your own making) over sockets.
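On the Renoise side, the tool’s socket API plus the built-in Osc module can receive those messages. A rough sketch of just the receiving end (the port number and the /mouse/xy address are made up for illustration; the external capture program is up to you):

```lua
-- Listen for UDP packets from the external controller app and decode them
-- as OSC with Renoise's built-in Osc module.
local server, err = renoise.Socket.create_server(
  "localhost", 8008, renoise.Socket.PROTOCOL_UDP)  -- port is arbitrary

if err then
  renoise.app():show_warning("Failed to open socket: " .. err)
else
  server:run {
    socket_message = function(socket, data)
      local message, osc_err = renoise.Osc.from_binary_data(data)
      if message and type(message) == "Message"
         and message.pattern == "/mouse/xy" then
        -- expect two float arguments: normalized x and y
        local x = message.arguments[1].value
        local y = message.arguments[2].value
        -- ...map x/y onto effect parameters as in the earlier sketches...
      end
    end
  }
end
```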

Thanks for the responses. Much appreciated!

In this case I would almost think it’s easier to have a separate application for this that controls Renoise via OSC. Processing (processing.org), the Java-based media framework, might be a good start; there’s an OSC library for it called oscP5. I once used it to make a proof-of-concept drum controller out of a USB gamepad. Anyway, in Processing, mouse x/y info and all that stuff is built in. You do have to think about what kind of status/visual feedback you need from the application, though.
PS: I think you’ve got some great ideas, and if you want to work together on this we might come up with something cool.