A tool for putting together pre-made sample loops by AI?
Because of Genopatch ($170)? What about Icarus ($170) or Replicate ($70)?
very.
If I may add my 50 cents after having been gone a long time:
Having spent my own time working with AI components locally, the biggest turn-off for you all would probably be that you need a fat Nvidia GPU with at least 16GB of VRAM just to run a moderate LLM. That would be fine for getting better help finding your way around Renoise without having to browse the manual all the time. But spending a whopping 600 to 900 dollars on a GPU for a little assistance is likely not the road to go, let alone shelling out 3K for a 32GB VRAM card that could actually poop out a specific sample loop or multilayered instrument for you (I favor the latter feature myself), and that is not even mentioning the 15 to 16GB of model data you need stored locally. So Renoise running a local AI engine, I would not see that coming for a long time. Whether the Renoise team decides to invest in an online LLM server dedicated to making the manual more easily searchable is up to them.
AI still makes mistakes, so I doubt you will see a dedicated AI assistance server on renoise.com either.
I have created a GPT (Renoise Scripting Assistant) to aid with Lua script creation and adaptation. I fed it the Lua 5.1 manual, a list of all API changes from 1.0 to 6.1 with their Renoise version and date of change, and the Lua scripting documents. I even added the Renoise manual PDF, so perhaps setting up a dedicated Renoise manual AI assistant is not even necessary.
It is not a miracle GPT though; you have to work step by step and function by function. I did try to make it aware of modular and complex tool structures by supplying it mine, but it still makes errors. The more tokens you feed it, the higher the chance of errors on its side.
I mainly did it for myself, to at least get something old over to API 6.1…
But perhaps someone else finds some use in it. It is publicly available.
If you have a ChatGPT Plus subscription and can access the 4o model, you will likely get better answers, but check carefully, as it does still return typos.
Under the free account you can still use the 3.5 model, but I am not sure how well that one will work out for you.
I think over time it should just become standard to actively create documentation as Markdown for LLMs. I also built myself a GPT for the Renoise API and the new phrase scripting, just for quickly looking stuff up or finding a function, and it totally does the job. It is also handy for grabbing code snippets for inspiration. Code editors with LLMs, like Cursor, Windsurf, etc., have that kind of thing built in, where you can actively parse documentation websites with a crawler: they pull the important content out of the pages, strip all the fluff, and feed it directly to the LLM for questions. The Tailwind CSS creator mentioned in a podcast that if they released their docs as Markdown, they'd basically give away their business model, since their website docs are a big way they promote their paid products. So yeah, it has its pros and cons. Personally, I can't live without GPT anymore; I've gotten lazier but also way faster with it, and it saves me a ton of time.
Can you guys share these GPTs, or is that not possible?
It does not have API 6.2 yet, but you can try it:
I also plan to pull the whole API object structure back out from within Renoise (there used to be a quick command to have it list everything), which will make it more precise.
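From the scripting terminal, something like this should already dump most of it; helper names are from memory, so double-check against the terminal's help:

```lua
-- Sketch for the Renoise scripting terminal (TestPad).
-- oprint lists an API object's properties and methods,
-- rprint recursively dumps plain Lua tables.
-- (From memory; check the terminal's help if the names differ.)

oprint(renoise.song())                     -- top-level song object
oprint(renoise.song().selected_track)     -- one track
oprint(renoise.song().selected_instrument)

-- The class name helps when cross-referencing the API docs:
print(type(renoise.song().selected_track))
```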
I think we can go pretty far by making the scripting console accessible via MCP, plus Context7 or DeepWiki for accessing the Lua docs.
Same for the new phrase scripting, but we would need Markdown docs for that.
Maybe it should be (or already is?) possible to create a phrase script inside the scripting console and send it to a specific instrument/phrase; that would probably simplify things.
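Plain phrases, at least, are already reachable from the console. A minimal, untested sketch, assuming the API names I remember are still current:

```lua
-- Minimal sketch, typed into the scripting terminal
-- (API names from memory, so double-check against the docs).
local song = renoise.song()
local instr = song.selected_instrument

-- Add an empty phrase at slot 1 and give it 16 lines.
local phrase = instr:insert_phrase_at(1)
phrase.number_of_lines = 16

-- Write a C-4 every 4 lines as a smoke test.
for i = 1, phrase.number_of_lines, 4 do
  phrase.lines[i].note_columns[1].note_string = "C-4"
end
```

The new phrase-script stuff may need a different entry point, no idea yet.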
A kind of automatic zoom in the spectrum view, depending on the sound's frequency, so the peak is isolated and clearly visible right in the center. As a plugin on the master bus that I would mostly bypass. Probably no AI needed for it, but I would like something like that.
I've been, ironically, asking my AI about Renoise and Stable Audio Open, a local open-source audio generator (Python).
Specifically, whether a Lua script (tool) could provide an interface that would interact with the AI model running externally. Knowing no Lua, I figured it could in theory, and it said yes. I saved all the answers because they were very detailed.
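To give an idea of the shape of it, here is a rough sketch. The generate.py wrapper, its flags, and the output path are all made up; only the Renoise calls are real API, and it assumes a sample slot is selected:

```lua
-- Hypothetical sketch of the idea (paths and CLI flags invented).
-- The tool shells out to an external Python generator, waits for the
-- rendered file, then loads it into the selected sample slot.

local function generate_and_load(prompt)
  local out_path = "/tmp/sao_output.wav"  -- hypothetical output path

  -- Hypothetical CLI wrapper around Stable Audio Open; blocking call.
  os.execute(string.format(
    'python3 generate.py --prompt %q --out %q', prompt, out_path))

  -- Load the result into the currently selected sample.
  local buffer = renoise.song().selected_sample.sample_buffer
  if not buffer:load_from(out_path) then
    renoise.app():show_error("Could not load generated audio.")
  end
end

generate_and_load("dusty breakbeat, 90 bpm")
```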
So I researched the model a bit more, and like @vvoois mentioned, I think the main drawback for users would be the heavy install and the requirements. From videos and examples I've seen with this model, sample generation sat around 16GB of VRAM, according to the overview AND this guy:
This particular model was trained on sounds from freesound.org, haha. There is a tool on the Renoise tools page that assists with downloading samples from that site and throwing them into the sampler. Pretty cool.
Let's say something like this was available as a free tool; would anyone be interested in using it? Of course, like the CDP tool and others, you'd have to download the external files, but perhaps that could be facilitated too?
I'm also looking at external open-source, local models that could potentially just "assist".
Imagine telling the bot to program a phrase script that does "x" or "y". Or to program the Formula device to do "blank".
I can see it adding value in the sample editor, suggesting precise chop locations
The autoslicing in Renoise works well enough, and adjusting/adding slices is not complex at all.
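For example, evenly slicing the selected sample is a couple of lines in the scripting terminal (API names from memory; slices have to go on the first sample of the instrument):

```lua
-- Minimal sketch: evenly slice the selected sample into 8 parts
-- via the existing API, no AI involved.
local sample = renoise.song().selected_sample
local frames = sample.sample_buffer.number_of_frames

for i = 1, 7 do
  sample:insert_slice_marker(math.floor(frames * i / 8) + 1)
end
```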
suggesting waveform tweaks
??? What
auto-normalizing
It is 1 button.
tuning samples
Grab a tuner VST (something like GTune) and just… tune it. Or use a C sine wave and tune to that. You don't need AI for it.
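The math behind tuning is trivial anyway. A rough sketch, assuming A-4 = 440 Hz as reference:

```lua
-- Sketch: given a detected fundamental (from any tuner), work out how
-- far a sample is from the nearest equal-tempered note. Pure math,
-- no AI involved.
local function tuning_offset(freq_hz)
  local semis = 12 * math.log(freq_hz / 440) / math.log(2)
  local nearest = math.floor(semis + 0.5)   -- nearest note, relative to A-4
  local cents = (semis - nearest) * 100     -- residual detune in cents
  return nearest, cents
end

local n, c = tuning_offset(262.3)  -- e.g. a slightly sharp C-4
print(("%+d semitones from A-4, %+.1f cents off"):format(n, c))
```

From there you can set the sampler's transpose by hand (check the docs for how the fine-tune units map to cents).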
I can also see it generating patterns or phrases with interesting variations on rhythm, or suggesting chord progressions.
Stochastic processes have been around long enough and have been used in more experimental music for ages now. Again, you wouldn't need AI for this. Also, if we let it do that, it would probably suggest the most basic/common chords, given the data it's going to be trained on.
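Case in point: a dumb weighted-random note generator for the current pattern track is about ten lines of Lua. Untested sketch, API names from memory:

```lua
-- Weighted-random notes from a scale, written into the selected
-- pattern track. Plain stochastics, no AI.
math.randomseed(os.time())

local scale = { "C-4", "D-4", "E-4", "G-4", "A-4" }  -- C major pentatonic
local song = renoise.song()
local ptrack = song.selected_pattern_track
local lines = song.selected_pattern.number_of_lines

for i = 1, lines do
  if math.random() < 0.4 then  -- 40% chance a line gets a note
    local col = ptrack:line(i).note_columns[1]
    col.note_string = scale[math.random(#scale)]
    col.instrument_value = song.selected_instrument_index - 1  -- 0-based
  end
end
```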
I always struggle with the actual recording process itself: getting a good take at the right volume without clipping. I wonder if an AI could just be a general "soundgoodizer" to ensure you get optimal levels and sound quality at the recording stage?
I'll sound like an asshole here, but just learn how to use a compressor and limiter. Soundgoodizer isn't just a "make levels optimal" VST; it is a compressor that can be very overwhelming.
I have also been using AI to teach me Lua while analyzing Renoise tools: having it scan through existing tool files, tell me what everything does, and explain in plain English how I might extend the tool to make it do what I want.
Kids used to mod Quake to learn C.
And besides, think of the people who use Renoise/trackers because their PCs aren't that good. What would happen then? They'd just have to stop using Renoise? Realistically, most of these things can be scripted. Renoise doesn't need AI.
Ai ai caramba!
Have you tried SynthGPT from fadr (they have a VST-3 version as well!)?
Now that would be a nice addition: a tool that can generate a multilayered instrument for you.
Dynamically adjust FX parameters on analog hardware to align perfectly with specific reference attributes, whether you're tackling compression or spectral analysis. The essence lies in the ability to alter parameters in real time, transforming your experience into that of a true Mixing Assistant. This allows you to jam live alongside musicians, with AI stepping in to perform tasks as seamlessly as a human would. Given the diverse roles within the music industry, AI stands as an invaluable ally.
Imagine a world filled with Mastering Agents, Mixing Agents, Sample Selectors and Organizers, Arrangers, Editors, Image and Video Generators, Lyrics Creators, FX Generators, Orchestrators, Soloists, and whatever else you desire. Looking for a commentator? Simply assign that role! Envision prompting your AI to fine-tune sidechain settings, intuitively responding to the song's dynamics. Or invite it to explore various reverb settings through A/B comparisons, tapping into its logical capabilities.
For every task that a skilled human can undertake, an Agent can be crafted to master it. This is the exhilarating future that awaits us. As for coding, the tools are already at our fingertips. With innovations like Claude Code, RAG, the Renoise Lua API, Agentic IDEs, and Context Engineering, the foundations are in place. They just need that final touch of maturity to unlock their full potential.
Definitely gonna check that out now, bro
"imagine a world where art meant nothing"
What does art mean? It's a matter of opinion, as LTJ Bukem said. Your role with AI is to provide this opinion through context engineering. While it's challenging, once you have an artistic idea, you can clearly articulate it to a language model, allowing for detailed analysis of its output.
A creative model can generate inspiring ideas you might not have considered, especially if you have a strong understanding of your own artistic workflow and preferences. The real advantage comes when AI assists with tasks you excel in. You need to deeply understand even the nuances of your process to truly leverage AI's capabilities.
This technology can unify creative efforts, reminding us that while individual genius exists, collaboration with AI can enhance creativity. Geniuses often follow identifiable patterns and hold secrets that elevate their work. Without these secrets, AI is limited to general knowledge.
To help it understand specific concepts, such as automation points or sound rendering, you must be knowledgeable. Think of how you visualize experiences while listening to music, often imagining scenarios within a DAW or powerful moments before a drop. Artists will need to communicate these ideas, perceptions, and vibes to AI, including effects like foley and digital signal processing.
AI is fun for AI enthusiasts; to others, it takes all the fun away.
I'm perfectly happy making my own cups of tea, thanks.
You AI-generated this entire thing because you couldn't form a coherent argument, didn't you?