I’m developing a tool right now using Grok, X’s AI. ChatGPT and Grok both seem capable of creating full-fledged Renoise Tools from scratch if you tell them what you want them to do and how they should work.
I got a lot of errors at first, but I fed those back into the AI, and it was able to figure out its own mistakes and eventually make working tools for me. I was about to give up on Grok programming my tool when I fed its script to ChatGPT, and ChatGPT immediately identified the problem, which I then sent back to Grok.
By no means a seamless process, but it does work.
There is a lot of flexibility—you can even ask it, “Do you have any ideas to improve my tool?” and it will offer all sorts of suggestions.
Worth checking out. You don’t need to know anything about coding at all.
Yep, been using AI similarly for a while now, and recently I've been feeding stuff into Google's Gemini, which works much faster for me compared to ChatGPT. For the first few prompts, ChatGPT will use a slightly better model and then switch to a lesser one afterwards; sometimes it loses the plot midway.
You also don't have to create full-fledged scripts; you can write handy snippets, paste them into the TestPad.lua file, and run them from there instead of packing everything into an .xrnx file. What works best for me is posting an already-working script and "remixing" it from there with adjustments. The problem with ChatGPT is that the data set it works with is capped at whatever was present (in the API) until 2021; everything newer isn't known, AFAIK. You can feed it links and it will act like it reads the link and learns from it, but IMO best practice is to copy the necessary references (and whatever info you can deduce from error notices) into the chat yourself.
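For example, here's a minimal sketch of the kind of throwaway snippet you can drop into TestPad.lua; the track-name print and status message are just placeholders, and it assumes a song is loaded in Renoise:

```lua
-- Minimal TestPad.lua sketch: runs as soon as you hit "Execute" in
-- Renoise's Scripting Terminal & Editor — no manifest.xml or .xrnx
-- packaging needed.
local song = renoise.song()

-- Print the currently selected track's name to the terminal output.
print(("Selected track: %s"):format(song.selected_track.name))

-- Flash a confirmation in Renoise's status bar.
renoise.app():show_status("TestPad snippet ran!")
```

Once something like this works, you can grow it into a proper tool and only then bother with the .xrnx packaging.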
Funny enough, Grok (which gets real-time data) was actually using an old version of the API until ChatGPT caught it for me.
I just asked ChatGPT about its data set: "My core training data spans up to April 2023, with a knowledge update extending to around June 2024 for high-quality curated data and facts. That means for official releases like software, plugins, or albums, I usually have accurate info up to mid-2024."
Just trying out Perplexity's lab models for co-piloting, more for API checking than coding whole scripts. No sign-in is required, and chats aren't savable. Handy "for on the go".
A slight annoyance is that their DeepSeek 1776 model is the first one in the popup; it's a thinking model that takes some time to think before answering, so you need to remember to switch to Sonar Pro or Sonar for quicker requests. I think those are Llama/DeepSeek derivatives as well.
Yeah, that sounds useful! I know the Google Gemini free tier lets you use pre-prompts, but they're limited in size. It also has search (RAG) with some specific integrations; for YouTube, for example, you can do an @youtube. Not sure if you can point it directly, though; I'm not signed in at the moment.
Perplexity main page said it could read the docs live:
"I have access to the content of the page at Introduction - Renoise Scripting as provided in the search results. This page is the Renoise Lua Scripting Guide, …"
I use VS Code, so I just keep a "references" folder in the workspace whenever I'm working on something, Renoise or not (I don't really make Lua tools). Then you can @workspace in the chat, or drag and drop the references folder whenever the LLM gets something wrong. It's also useful that you can swap between models right below the chat.