Modern vision of the development of a new DAW (such as Renoise 2.0)

Currently, AI in programming is nothing more than a support tool, though one that also lets you learn a great deal. It’s fair to say that many of the returned results are combinations of things that already exist, previously created by humans. You simply have a much more powerful tool than you had before. This applies to studying anything, not just programming.

However, in my particular case, I mentioned AI as a tool to be used by the composer himself (leaving this out of creation software means not thinking about the near future). Although it might surprise you, with current AI you can’t program whatever you want (and integrated into a DAW, you couldn’t do whatever you want either). You need programmers who know what they’re doing, with extensive knowledge and planning skills. AI is a tool that lets you do things faster, and sometimes better (sometimes much worse).

I’m the first to defend human creativity. In fact, for me it’s a challenge, because I strive for perfection in some way, or at least something close to it. Furthermore, any programmer will tell you that using AI for programming is counterproductive for them (they won’t even understand the code generated, since they didn’t write it themselves). This leads to a loss of control over the programming process itself. It doesn’t all sound so rosy. But we must be aware that the tools are there. You can use them or not.

In any case, AI is just one aspect to consider here (like a sampler or a specific editor); it has practically nothing to do with the planning and efficiency of programming complex software, where the human factor is present in every process. And it is inevitable that it is there, since reasoning is needed: AI does not reason, it cross-references data and returns results statistically, and that data may be incorrect from the start.

Unfortunately, it seems like everyone operates this way, selling their time for money. It is modern slavery, what we commonly know as work. But I agree that time is the most valuable thing we have, and it’s finite. What’s more, I think most people aren’t aware of how little time we actually have in life. It saddens me greatly that many people are now losing it on social media, with hardly any deep conversations; basically, it’s all about numerical interactions. We are experiencing a decline in social relationships. Society never stops changing…

That’s why it’s interesting to develop projects and share ideas; that is what generates deep interactions.


After reading point #12, ‘Old Architectural Assumptions’, again…

Let me show you this… AXS 3.0b4 freeware from 2001 which was previously a DOS commercial product-

This is one of many skins I have made for it-

This is only 460 KB in full directory size, yet it has a 7-instance multitimbral subtractive synth that still sounds good, plus a sampler instance as well. It has a ‘smart pattern’ system, which Arguru adopted into NoiseTrekker2 and which is still in the current Protrekkr… The pattern viewport is small, but its navigation is ingenious… I have saved song files over 200 MB in size, and that’s certainly not the limit… That’s over 400 times the root directory size; try that in any current anything… Sample lengths can also be anything, 16- or 24-bit, mono or stereo… If you try to load a 32-bit file it just won’t accept it, but it will never CRASH!… Yup, it’s crash-free; I’ve been using it for years and never had a crash or instability of any kind, plus it runs on any Windows… I run it on Win10 64-bit as well as a host of WinXP 32-bit machines, and it will run on anything from Win95 up… It has a well-done pattern FX command system, MIDI input, and a choice of audio output… It has note-velocity filter modulation, which is nice for getting varied sound from the same notes, and the smoothest distortion of anything… It needs no install and starts up in a microsecond…

So the deal is that nobody codes like this anymore; they don’t know how. They are ‘vibe’ coding, using Lua, or going to some other AI, which is being dumbed down now, so good luck… Peeps HAD to code tight back then to work on grim machines; nowadays peeps don’t hafta, as there’s so much RAM and speed that there can be tons of holes in the code… This is why old music software was expensive: with CD distribution it had to be right before it went out the door or it was doomed. Nowadays they can get an alpha out the door to ‘sell’ as “We’ll update it through the Net”…

So if someone, anyone, can code something like AXS that will work on ANY Windows, has all those features (and more) in less than a megabyte, is absolutely crash-free, can save song files 400-500 times its own size, needs no install, etc., THEN I will be impressed…

BTW AXS was coded by 2 Danish University Students… Hope they are doing well, they deserve it…

I think others might be catching on: there are over 1,030,000 views on my AXS thread at warmplace.ru, which I started a bit over 3 years ago…


Renoise 2.0 came out like 17 years ago though

Apologies if this seems confusing. We’re obviously talking about a current version 2.0, as if it were a brand new project. Using the name Renoise is just an example; don’t take it literally.

Perspective: Imagine you’ve been developing a program for 20 years. Everything is fine. But you wonder if, with current resources, there are ways to improve it, and if so, what those ways are. In theory, rewriting a program from scratch could be feasible (don’t focus on Renoise, but on any software development project in a situation of this magnitude).

I don’t know if you’re familiar with the history of Virtual DJ (I’m not trying to advertise here). I believe this software had to be rewritten when it jumped from version 7.0 to version 8.0 because its foundation was no longer modern enough; it was a drastic architectural change. It took several months of programming to rewrite and adapt everything to drastically improve its graphical interface and engine, which allowed it to evolve to where it is today. It had already been in development for about 15 years, adding layer upon layer of features. This led to a build-up of development problems, as it became difficult to maintain. Things they did:

  1. Rewrite for future scalability.
  2. New audio engine and internal architecture (better effects and sampler system, and better support for current hardware).
  3. More modulation and customization, with a more configurable and modern interface.
  4. Gains in stability, audio quality, long-term improvements, and easier maintenance.
  5. A changed business model, representing a significant leap forward.

VirtualDJ can be a good example of why sometimes software needs a 2.0 vision, from a project perspective, not just a program version. It’s like having a 2008 Opel Corsa and deciding to completely upgrade to a 2020 Opel Corsa. They are two cars of the same model, with similar features, but drastically different internally.

While they’re not exactly the same, Virtual DJ and Renoise have this in common: they’ve been around for roughly the same amount of time. But their histories are quite different. Renoise still uses its original codebase, while Virtual DJ doesn’t.

Something similar happened with Fruity Loops and FL Studio. Apparently, FL Studio has been rewritten several times in very important internal parts. If you do some research, you’ll find that many DAWs have had to be rewritten internally (not just updated, but completely rewritten and redesigned) to improve and modernize. It’s not just about improvement; it’s about maintaining a business model.

In fact, that 2.0 could even mean a change in the program’s name.

This is much more common than it seems. It happens all the time in many well-known software programs. It’s perfectly normal.

That’s what the value 2.0 means in this context (regarding the internal architecture, not the program version).


I don’t want this to sound provocative. Renoise may reach version 4.0 or 5.0 as a program if things continue as they are, but there will come a time when a version 2.0 will be needed as an architecture, because the environment will somehow require it. The problem is whether it will be viable economically and in terms of time and effort. In fact, I honestly believe that if any program in history deserves to be rewritten to modernize it, it’s Renoise. But it’s important to understand that this is a monumental task, just like the changes made to VirtualDJ and other well-known programs.

Renoise 3.5 is fine imho. :relieved_face:

Feel free to vibe code something else. :person_shrugging:


Idk, Renoise just works - please don’t touch it, don’t improve anything, don’t align it with ‘modern workflow paradigms to leverage competitive market synergies’ or whatever - just let it be. :wink:


Some of this may belong in an OS.

But I do sometimes think DAWs have become bloated, and a tracker perspective with the typical DSP included (not plugins) could be good.


Interesting point, but it’s a bit off-topic from the theoretical focus here. Internally, architecturally, any “old-school” software can carry outdated limitations.

Consider it this way: It’s possible to have software that does practically the same thing, but better, smoother, more capable, and, importantly, easier to maintain.

It would be something like lightening the code and reorganizing it with new code in a way that makes it easier to maintain and more suitable for current hardware technologies. No one here is suggesting changing anything about the essence of Renoise as a complete DAW whose core editing system is a tracker.

Besides, you already have the current version, R3.5.4. So, from now on, if Renoise receives more maintenance updates or a major update with changes, wouldn’t it make sense for you to update it?

These are simply ideas on how to improve things. Improvement doesn’t mean changing how you use it, but rather making that same use more efficient. Furthermore, let’s be frank, Renoise could tweak a few things and still work better, and composers like yourself would probably applaud that. No one is talking about copying things from other DAWs. And Renoise is just one example here; we could discuss other software in this thread.


This could be an early prototype of a DAW, but it’s designed to accommodate more advanced features later. It seems reasonable to think that’s how things are done. You create a project, always thinking about and anticipating what you’ll add later, and you prepare it for that, even though those features aren’t implemented yet. The code is designed to support those features later. This allows you to scale, and in this way, you can test each phase of the project.

But it’s curious, some of you (I’m not referring to you in particular) compare Renoise to other DAWs as if there were a fear that the essence of this tracker would disappear, and right here, in this thread, it’s quite the opposite.

For a tracker to survive in the future, it will have to modernize its codebase. That moment will come (or perhaps not, although it deserves it). It’s about perfecting and updating the architecture, taking advantage of all the hardware and software (compatibilities) so that it works much better, provides a better user experience, and a better experience for the developers who maintain it (less work time when updating something that’s broken or adding something new).

It’s like investing time upfront (creating that new thing) to save time later (because it’s optimized and designed for it), and with that, you get better software, without losing its essence, obviously. We all love Renoise here, there’s no doubt about that. It’s precisely because of this that threads like this one appear in these forums.


There’s a question I think is very good and helps to understand all of this. If Renoise didn’t exist and we had to create it from scratch today, would it have the same architecture, everything the same, or would there be substantial improvements in how CPU, GPU, and memory management are used? Would it still use BMP images, or would it use vector images to allow for scaling? Would the threads be separated somehow so that no window freezes when dragged (Renoise completely freezes if a plugin window is dragged)? How can we make the most of the hardware’s capabilities to work with small audio buffers (which are CPU-efficient) and larger blocks (which could be handled by the GPU)?
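To make the thread-separation question concrete, here is a minimal Python sketch (a toy, not Renoise’s actual design): a simulated audio thread keeps producing blocks even while the “UI” thread is blocked for 100 ms, the way a dragged window can block an event loop.

```python
import threading
import time

def run_audio_thread(state, block_seconds=0.001):
    """Simulated real-time audio thread: it keeps producing blocks
    no matter what the UI thread is doing."""
    while not state["stop"]:
        state["blocks_rendered"] += 1  # stand-in for one audio callback
        time.sleep(block_seconds)

state = {"stop": False, "blocks_rendered": 0}
audio = threading.Thread(target=run_audio_thread, args=(state,))
audio.start()

# Simulate the UI thread blocking for 100 ms (e.g. a window being dragged).
time.sleep(0.1)

state["stop"] = True
audio.join()

# Audio kept flowing while the "UI" was blocked.
assert state["blocks_rendered"] > 0
```

In a real DAW the audio thread is driven by the sound card’s callback rather than `time.sleep`, but the isolation principle is the same: nothing the UI does should ever be able to block the audio path.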

I don’t know, questions like that. How can we perfect all of this? I’d like to focus on answering these questions somehow.

Can we talk about these things? I want to know what this is all about. I want to understand it.

I think a lot of how software looks, feels and works is zeitgeist dependent.

Trackers emerged with the surging popularity of electronic music, the rising affordability of home computers, and the increasing technical engineering skills of users.

Imho that’s kinda how it starts, they have to start with something, build something, see if it sells, await feedback, implement new requirements - cycle continues, at the same time keeping up with new hw and os.

I wonder if that question even translates to nowadays: who would make a new tracker or DAW? There are already quite enough of them.

But if they did, I suppose, I hope they would use best design paradigms available at the time.

That’s my theoretical take on it…

How would you reconcile low-latency audio (around 1 ms) with a GPU that’s inherently supposed to do faster maths but works with 10-20 ms blocks? (Asking because gpu.audio tried, and failed miserably.)
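To put numbers on that tension, here is the basic block-size arithmetic, assuming a 48 kHz sample rate (my assumption; the thread doesn’t fix a rate):

```python
RATE = 48_000  # samples per second (assumed; the thread doesn't fix a rate)

def block_latency_ms(block_size_samples):
    """Latency contributed by buffering one block, in milliseconds."""
    return block_size_samples / RATE * 1000

# A ~1 ms budget at 48 kHz means 48-sample blocks:
assert round(block_latency_ms(48), 6) == 1.0
# A GPU-friendly 10-20 ms block is 480-960 samples, 10-20x over budget:
assert round(block_latency_ms(480), 6) == 10.0
assert round(block_latency_ms(960), 6) == 20.0
```

So a GPU that only becomes efficient at 480+ sample batches has already spent the entire 1 ms budget ten times over before any transfer overhead is counted.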

I get and sympathize with your frustration of updates coming out slowly, and sticking to some major version for over a decade. But frankly it’s fine.

UI is already GPU-accelerated, and looks good and smooth on both my excellent desktop and my miserable 15 years ago laptop, current protocols are well-supported (vst3, au, …)

Take an honest look at the competition: the hard passes (some “impossible” routing, for example) are few and mostly boil down to some forced optimizations, which all have pretty easy work-arounds.

If you need some LLM to conceptualize and express what “seems to be missing” I’m afraid this is a pretty bad approach.

The GPU wouldn’t just speed up the graphical interface if used this way; it could also be used to work on larger blocks of data. A DAW doesn’t only handle audio for real-time tasks; it can handle much heavier, non-real-time operations, thus offloading work from the CPU. This would be a good approach.
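As a rough illustration of the offloading idea, here is a Python sketch using a CPU thread pool as a stand-in for any background compute device (real GPU dispatch is vendor-specific): a heavy non-real-time job is handed off while the “real-time” loop keeps servicing small blocks.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def offline_render(n_samples):
    """Stand-in for a heavy non-real-time job (e.g. rendering a
    selection to a sample) that shouldn't run on the audio path."""
    return [0.0] * n_samples  # pretend-rendered silence

pool = ThreadPoolExecutor(max_workers=2)
future = pool.submit(offline_render, 1024)  # hand the heavy job off

# Meanwhile the "real-time" loop keeps servicing small blocks.
blocks_served = 0
while not future.done() or blocks_served < 3:
    blocks_served += 1  # one small real-time block
    time.sleep(0.001)

rendered = future.result()
pool.shutdown()
assert len(rendered) == 1024
assert blocks_served >= 3
```

The design point is simply that non-real-time work is queued to whatever executor is available, while the latency-critical loop never waits on it.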

No one in this thread has complained about the current pace of Renoise updates. We’re talking about something very different. It’s even explained in the title of this thread.

I believe this is incorrect (please correct me if I’m wrong). In Renoise, graphics rendering is handled mostly by the CPU; the GPU only performs a final step in compositing the graphics, while the primary processing remains with the CPU. Therefore, there’s significant room for improvement. Surprisingly, Renoise could still perform better graphically and even support more graphics-related tasks simultaneously.

I don’t know what you mean, but there are products on the market that are true works of art in terms of programming (development consistent with what currently exists). We can choose the path of absolute conformity, or we can strive to improve things. It’s a way of life in itself, too.

This entire statement is somewhat contradictory. If you already know how to do everything, why ask, or what’s missing? What’s the point of asking? Asking questions, challenging things, and rethinking situations, or wanting to discover them because you want to learn and understand, is not bad at all; it’s precisely the path to development. I think we all know what this thread is about, or almost all…

Are you sure though?

I don’t seem to see trackers that focus on good UX. Many trackers that support VST seem to have some weird, counter-intuitive non-tracker UX issue. Renoise is already the least counter-intuitive one, yet it still suffers from the same problems.

If you think so, that means there are good options out there. Please recommend me a tracker that supports VST3 and runs on a 64-bit system. I have tried Radium and OpenMPT, and have seen the UI of the ultra daw, but they all seem to suffer from similar UX issues. Renoise is really the only option I’m somewhat comfortable using when I need sample libraries.

I hope people will break free from this stereotype: when people talk about trackers, they often seem to associate them with electronic and chiptune music and think they are too limited for anything else; however, it is just as fun to write string ostinatos and epic horn melodies.

PS: Your statement is correct, though. The point here is that I wish people saw trackers as a bit more ambitious, since this presentation really has a lot of merits that many seem to miss, especially considering opinions from outside the tracker circles.


Bingo, and the GPU also adds a bit of latency, because the CPU needs to communicate with it, telling it what to render and receiving the results. It is also hard to program for the GPU, since the tooling is hardware-specific: CUDA only works on Nvidia cards, and we need something else for AMD or Intel Arc. There are reasons why not many companies use that approach.

I can confirm this, since I have been experimenting with some GUI libraries, and this is exactly how they work. All the events and the layout of the UI are handled on the CPU, which transforms them into a bunch of textures and triangles for the drawing pass. The GPU’s only task here is to draw the triangles on screen.
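A toy sketch of that division of labour (hypothetical names, not any particular GUI library): the CPU does the layout and tessellates each widget rectangle into triangles, and the flat triangle list is all that would be handed to the GPU.

```python
def rect_to_triangles(x, y, w, h):
    """CPU-side tessellation: one UI rectangle becomes two triangles,
    each a tuple of three (x, y) vertices, ready to hand to the GPU."""
    tl, tr = (x, y), (x + w, y)
    bl, br = (x, y + h), (x + w, y + h)
    return [(tl, tr, bl), (tr, br, bl)]

# "Layout" three hypothetical widgets on the CPU...
widgets = [(0, 0, 100, 20), (0, 30, 100, 20), (0, 60, 100, 200)]
triangles = [t for rect in widgets for t in rect_to_triangles(*rect)]

# ...and a flat triangle list is all the GPU ever sees.
assert len(triangles) == 6
assert triangles[0] == ((0, 0), (100, 0), (0, 20))
```

Everything interesting (event handling, layout, which widget is where) happened before the GPU was involved, which is the point being made above.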

I hate using LLMs to write code and produce content for me, but brainstorming is really the only exception, simply because their inaccuracies lead to some interesting inspirations we might not have thought of before.

Bitwig Studio could be a related example. This software efficiently and effectively separates audio processing from graphics processing. This ensures:

  1. A very smooth interface
  2. Real-time animations
  3. Modulation visualization
  4. Stable performance even with very large song projects

In Renoise, because audio and graphics are processed by the CPU, graphical lag is visible when the audio is overloaded.
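One common way to get that separation (a generic pattern, not a claim about Bitwig’s or Renoise’s actual internals) is a single-slot “latest value wins” mailbox between the audio and UI threads, so a slow UI drops meter frames instead of stalling the audio:

```python
import threading

class MeterSlot:
    """Single-slot mailbox: the audio thread overwrites the latest peak
    level, the UI thread reads whatever is newest. Neither side waits,
    so a slow UI drops meter frames instead of stalling audio."""
    def __init__(self):
        self._peak = 0.0
    def publish(self, peak):  # called from the audio thread
        self._peak = peak     # a single reference assignment
    def read(self):           # called from the UI thread
        return self._peak

slot = MeterSlot()

def audio_thread():
    for block in range(100):
        slot.publish(block / 100)  # newest peak wins, old ones dropped

t = threading.Thread(target=audio_thread)
t.start()
t.join()

# The UI only ever needs the latest value, not every block:
assert abs(slot.read() - 0.99) < 1e-9
```

In C++ this slot would typically be an atomic or a lock-free ring buffer; the key property is that the audio side never takes a lock the UI might be holding.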

Another issue is how audio works in multi-threaded processing. Apparently, not all threads are being used equally (or in an equally beneficial way).

Another issue would be dedicating a specific thread for Lua tools. It seems everything is currently handled on the main thread.

How to synchronize all of that and work on parallel processes as much as possible…

The goal is to minimize lag as much as possible and ensure smoother performance, allowing for larger projects and enabling the GPU to handle both heavy data loading and non-real-time audio processing, thus freeing up CPU resources for greater fluidity.
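For the “dedicated thread for Lua tools” idea, here is a minimal sketch of the usual pattern (a hypothetical design, not how Renoise currently works): tool commands go into a queue serviced by their own thread, so a slow tool only blocks itself, never the main thread.

```python
import queue
import threading

# Tool commands go into a queue serviced by a dedicated thread, so a
# slow tool blocks only itself, not the main/UI thread (hypothetical
# design, not how Renoise currently works).
commands = queue.Queue()
log = []

def tool_thread():
    while True:
        cmd = commands.get()
        if cmd is None:  # shutdown sentinel
            return
        log.append(cmd())  # run the tool command off the main thread

t = threading.Thread(target=tool_thread)
t.start()

commands.put(lambda: "renamed track")   # a cheap tool action
commands.put(lambda: "rendered chart")  # a slow one: only this thread waits
commands.put(None)
t.join()

assert log == ["renamed track", "rendered chart"]
```

The synchronization question then reduces to defining which DAW state the tool thread may touch, and marshalling those touches back through the queue.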


An important point too → virtual audio management with real-time streaming handling to avoid that…

Which?

It’s asking for a complete Renoise rewrite… you’re playing with words here :slight_smile:

I assume you’re talking about gpu.audio, and they never actually delivered. I was part of the early testers; now they’re mostly buzzwords.

Then read and research, and come up with your own words; the generated OP sounds like a really bad project-manager PPT that came back from some convention. It’s frustrating to read, and you’re breaking the balance of an actual debate/conversation. You’re better than a bot, despite what the sister-f***er Sam A. wants you to think so he doesn’t end up in jail.

Items 1-3 are trivial; almost everything after is nonsense.

Starting with item 4: the results of the non-critical nodes’ blocks are actually needed, but this contradicts the real-time requirements; the larger block just becomes your new minimum block. You’ve reinvented PDC, which already works pretty well.

Sure it does line and paragraphs.

tl;dr: you won’t have an almost-realtime system with bigger blocks in the game. You’re searching for dry water here.

It’s indeed very appealing to assume you could do realtime DSP like you render some html frame in a browser, but again this is dry water.

But feel free to explore; I’m not your manager.


Some of these are listed in point 4 of the first comment in this thread. That one is particularly interesting, so we could investigate and delve deeper into it…

A DAW or similar program doesn’t just work with real-time audio. Sometimes, it processes things that could be handled in parallel while processing real-time audio. All those parallel processes that can be handled in larger packets could be shifted to the GPU. Each piece of hardware should excel at what it does best.

If you can get the GPU to perform some of the tasks currently handled by the CPU, and also have the GPU handle all the graphics instead of the CPU, you would free up more CPU capacity for audio processing, which could be especially useful in critical situations. Apparently, this is neither new nor far-fetched; there is already software on the market that is using it in this way.

All of this, combined with efficient multithreading, where small packet processes are distributed more efficiently among threads, would make a noticeable difference on a CPU with many threads, not just in raw power per thread. The idea is that all threads should saturate simultaneously: it should be impossible for one thread to become saturated while others sit free.
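The “no thread saturated while others are free” goal is essentially shared-queue scheduling rather than statically assigning tracks to threads. A minimal Python sketch (illustrative only, not any DAW’s actual scheduler):

```python
import queue
import threading

def process_track(track_id):
    """Stand-in for one track's DSP for the current block."""
    return track_id * 2

def worker(jobs, results):
    while True:
        track = jobs.get()
        if track is None:  # shutdown sentinel for this worker
            return
        results.put((track, process_track(track)))

jobs, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(jobs, results))
           for _ in range(4)]
for w in workers:
    w.start()

# One audio block: every track goes into the shared queue, so any idle
# thread picks up the next track instead of sitting free while another
# thread saturates.
for track in range(16):
    jobs.put(track)
for _ in workers:
    jobs.put(None)
for w in workers:
    w.join()

processed = dict(results.get() for _ in range(16))
assert processed == {t: t * 2 for t in range(16)}
```

With a shared queue, load balancing is automatic: a thread that finishes a cheap track immediately pulls the next one, so no thread idles while work remains.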
