Hey Renoise fam! After years of tracker wizardry, I’m diving into coding with this AI assistant for our beloved Renoise.
The Technical Stuff (Still Learning)
I’m figuring out this Python thing as a bridge between Renoise and AI models. Lua scripting in Renoise is familiar territory, but this Python business is new to me. It seems like the smart approach, though.
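For the curious, here’s roughly what I imagine the bridge looking like. This is only a minimal sketch, assuming Renoise’s built-in OSC server is enabled (Preferences → OSC, UDP on port 8000 by default) and using the third-party python-osc package; `/renoise/evaluate` is part of Renoise’s default OSC implementation and runs a Lua string inside Renoise:

```python
# Minimal sketch of a Python -> Renoise bridge over OSC.
# Assumes Renoise's OSC server is enabled (Preferences -> OSC,
# default: UDP on 127.0.0.1:8000) and `pip install python-osc`.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)

# A built-in OSC address from Renoise's default implementation:
client.send_message("/renoise/transport/start", [])  # start playback

# /renoise/evaluate runs an arbitrary Lua string inside Renoise,
# which is how AI-generated Lua could be executed remotely.
client.send_message("/renoise/evaluate",
                    'renoise.app():show_status("Hello from Python")')
```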
What It Should Do (When I Get It Working)
Turn text descriptions into actual patterns (mind-blowing!)
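To make that concrete, here’s a sketch of how the last step could work once a model has produced notes. The `notes` list is made-up stand-in output (the prompt-to-notes part is the hard bit I haven’t solved); the Lua it generates uses Renoise’s real pattern API, sent through the same OSC evaluate trick as above:

```python
# Sketch: turn (pretend) model output into notes in the selected pattern.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)

notes = ["C-4", "OFF", "E-4", "G-4"]  # hypothetical model output

# Build one Lua statement per line, writing into the first note column
# of the currently selected pattern track.
lua_lines = []
for i, note in enumerate(notes, start=1):
    lua_lines.append(
        "renoise.song().selected_pattern_track"
        f':line({i}).note_columns[1].note_string = "{note}"'
    )
client.send_message("/renoise/evaluate", "\n".join(lua_lines))
```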
If any coding ninjas want to help a Renoise veteran/programming rookie, I’d appreciate feedback! And if you want Renoise workflow tips in return, I’ve got you covered.
Even this post was made with AI, obviously… I’m not familiar with Lua either.
That’s a nice start for a tool, wow. It might be especially interesting for people with disabilities. On the other hand, I think something like this should be approached from more of a bird’s-eye perspective. Don’t recent macOS versions already provide this kind of functionality, too? And what about Windows?
Well, MIDI 2.0 CI and property exchange were made for this. It is already available in Korg’s Keystage and their own recent VSTis. Sadly, in a market without an authority there are no common standards. I actually don’t think AI is the right approach here, since this is all about simplifying and standardizing the APIs. Renoise could also implement very simple workflow improvements, like Bitwig’s recent “touch control with mouse + move knob, done” mode. I don’t think this is about the discovery of possible parameters at all; AI would just overcomplicate things here.
Personally, I would be really interested in AI-driven composing assistance, not only in Renoise. For example, you have a melody and tell it “generate some chords for it”, “generate an alternative version”, or “adapt the melody to the drum rhythm”. That kind of thing, deeply integrated into the DAW, and always only as a suggestion, similar to recording multiple takes.
I have no idea how advanced current AI is regarding composition.
Renoise, as it is, is basically a perfect DAW; any additional features at this point are “nice to haves”, besides some minor tweaks and bug fixes.
That’s really the appeal of Renoise in many ways: it’s well balanced.
If it were something like Ableton, a huge company with massive overhead, they would be forced to bloat it out to keep up with quotas etc.
I’ve been using Renoise since I was 15 (about 15 years) and I still find features that I haven’t used. I write music in Renoise faster than in anything else.
You can already make generative music with a myriad of different programs and then sample the results into Renoise.
I guess the draw of this new wave of AI would be to learn from your patterns etc. and create something based on that, but what’s the point, really? It seems like hopping on a bandwagon that isn’t necessary.
Now, if you could control Renoise via a BCI (brain-computer interface), that’s a different story…
You can already make generative music with just Renoise, too. Yxx can do a lot, especially when nested in phrases that are key-mapped and then given their own Yxx commands…
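For anyone who’d rather script that than type it in line by line, here’s a rough sketch (reusing the Python/OSC bridge idea from earlier in the thread) that stamps Yxx maybe-trigger commands onto every line of the selected pattern track. The probability value is just an example:

```python
# Sketch: write Yxx ("maybe trigger") commands onto every line of the
# selected pattern track, so each line only plays with some probability.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)

lua = """
local ptrack = renoise.song().selected_pattern_track
for i = 1, #ptrack.lines do
  local fx = ptrack:line(i).effect_columns[1]
  fx.number_string = "0Y"  -- Yxx: maybe-trigger this line
  fx.amount_string = "80"  -- example probability value
end
"""
client.send_message("/renoise/evaluate", lua)
```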
I’m new to vibe coding. This is my first GitHub repo, and it’s not the easiest first project. I have been spending all my time on learning and vibe coding, and I will give it my all to make software, whatever that means. This project will defo continue. Renoise tools can be installed even when they’re not packaged as .xrnx: you can drag and drop the Tool folder, including the manifest.xml, and it should work. Again, it’s my first repo ever; I still have to learn Git and all…
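In the meantime, packaging the folder yourself is easy, since an .xrnx is just a zip archive of the tool bundle (manifest.xml, main.lua, and so on) with a renamed extension. A quick sketch, with a made-up folder name:

```python
# Sketch: pack a Renoise tool folder into an .xrnx file.
# An .xrnx is a plain zip archive of the tool bundle with a renamed
# extension. The folder name below is hypothetical.
import os
import zipfile

tool_dir = "com.example.MyAiTool.xrnx"  # hypothetical tool folder

with zipfile.ZipFile("MyAiTool.xrnx", "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _dirs, files in os.walk(tool_dir):
        for name in files:
            path = os.path.join(root, name)
            # Store paths relative to the folder root, so manifest.xml
            # ends up at the top level of the archive as Renoise expects.
            zf.write(path, os.path.relpath(path, tool_dir))
```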
Would implementing an MCP server produce better results than this?
I understand that the Renoise AI Assistant generates and evals Lua code that is then executed by Renoise. I would expect that an MCP server wouldn’t generate uncompilable code, but would instead perform more meaningful actions directly… I don’t know if it would work.
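For what it’s worth, here is a rough sketch of what that could look like with the official MCP Python SDK (`pip install mcp`). The idea is that the model calls typed tools instead of emitting free-form Lua, so only fixed, pre-tested Lua ever reaches Renoise (here again via OSC evaluate). The tool names and OSC plumbing are my own assumptions, not anything that exists yet:

```python
# Sketch: an MCP server exposing typed tools instead of raw Lua eval.
# Uses the MCP Python SDK (`pip install mcp`) plus python-osc.
from mcp.server.fastmcp import FastMCP
from pythonosc.udp_client import SimpleUDPClient

mcp = FastMCP("renoise")
client = SimpleUDPClient("127.0.0.1", 8000)

@mcp.tool()
def set_bpm(bpm: int) -> str:
    """Set the song tempo (Renoise accepts 32-999 BPM)."""
    client.send_message("/renoise/evaluate",
                        f"renoise.song().transport.bpm = {bpm}")
    return f"BPM set to {bpm}"

@mcp.tool()
def start_playback() -> str:
    """Start pattern playback."""
    client.send_message("/renoise/transport/start", [])
    return "playing"

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio by default
```

Because the model can only fill in parameters like `bpm`, it can’t hand Renoise syntactically broken Lua, which is exactly the failure mode being discussed.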
A locally trained LLM, plus MCP for an external LLM (agentic workflow with dockerized n8n).
Something along these lines. I’m learning… WE will get this done.
This is the new norm
Sometimes I use AI to create a specific sound. ElevenLabs has a sound generator tool, and if you ask it to create an 80s cowbell sound with ridiculous reverb, the result is somehow quite nice.
My main issue with AI in software is that it usually relies on cloud services, or takes very, very, very much GPU/CPU power if run locally.
I use Linux because I don’t like clouds and software that phones home every second. If Renoise had AI integrated, I would probably not upgrade to that version. (Oh, and I have been a Renoise user for over 25 years now.)
I started messing with A.I. image generation with ChatGPT, and a while back I also had it make some crude Lua code.
I have an Output account, so I’ve been using the beta version of Co-Producer since it came out, and tbh I’m very interested in sample generation via prompt. There are some free AI music generation libraries out there, and I’ll keep researching whether it’s something the tool creators would be interested in adding to this project.
I’m honestly a big fan of A.I. as a tool, not as a substitute for the creator. I use Grok on X for my basic A.I. needs now (mainly image generation), and it uses the Aurora model. If there is anything I can do to help, let me know. I stay pretty up to date with the trends.
Right now, Suno and Output have the highest-quality music generation imo, and Output has turned Co-Producer into a plug-in. I think it’s pretty neat.
Also, here is a GitHub list of music generation A.I. tools; some are free. I think a tool that dumps generated samples into the sampler would make for hours of fun, and it would be much more intuitive than a plug-in. For example: a keybind brings up a sample generation prompt window, the user inputs a prompt, and the sample(s) are dumped into the sample slots of the instrument.
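Sketching just the last step of that flow: once a generator has written WAV files to disk, loading them into the selected instrument’s sample slots is scriptable. `insert_sample_at` and `sample_buffer:load_from` are real Renoise API calls; the file paths and the OSC-evaluate plumbing are placeholders and assumptions:

```python
# Sketch: dump generated .wav files into the selected instrument's
# sample slots via generated Lua, sent over OSC evaluate.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)

wavs = ["/tmp/gen_cowbell_1.wav", "/tmp/gen_cowbell_2.wav"]  # placeholders

lua = ["local instr = renoise.song().selected_instrument"]
for i, path in enumerate(wavs, start=1):
    lua.append(f"local s{i} = instr:insert_sample_at({i})")
    lua.append(f's{i}.sample_buffer:load_from("{path}")')
client.send_message("/renoise/evaluate", "\n".join(lua))
```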
And just for fun, these beats on my FB page all used ChatGPT for the images. I got pretty good at prompting hahaha. I’m gonna delete this page and start over (again).
Also, Synplant is on my wishlist, although I promised myself not to buy any more plugins. I use CDP and morph samples a lot to create new sounds. But that’s all based on processes that I don’t quite understand myself, and it obviously doesn’t reference the input sample in relation to the output.
I think something like intelligent sample morphing akin to Synplant would be pretty nifty as well.
Long read, I know, but if there is anything I can do, please don’t hesitate. I’m all for keeping Renoise up to date with modern features; it’s ahead of the curve in so many ways already.