If processing power wasn't an issue, what would you like to see happen in the audio/music industry?

I’ve been thinking about how processing power has increased in the last 10-15 years and how developments in VSTs, plugins and networked collaboration haven’t really taken advantage of it. Imagine if the software pushed the hardware harder.

With that in mind, what innovations, developments and possibilities can you visualise happening?

Super powered synths, samplers, amazing DSP, live collaboration, hardware integration etc

I’d love to know your thoughts!

I’m lazy and probably have brain damage caused by my environment and processed food, so it’d be nice if there was random computer-generated music with its own radio station that used how many people were listening, and how they reacted, to create the best music that was people-free.


Learning the basics of music production is still too hard. Newbies are forced to master the arcane details of the analog tools digital DAWs are based on before being able to make a good song. I’d like an AI “soundgoodizer” to help overcome rookie mistakes like peaking audio, thin synth sounds, too much reverb, not enough reverb, the wrong reverb, etc.
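As a toy illustration of the kind of check such an assistant might start with, here is a hypothetical peak checker in Python. The 0.99 ceiling and the wording of the messages are assumptions for illustration only, not any real plugin’s logic:

```python
# Hypothetical "rookie-mistake checker": flags clipped audio and suggests a
# peak-normalizing gain. Samples are floats in the nominal range [-1.0, 1.0].
def check_peaks(samples, ceiling=0.99):
    peak = max(abs(s) for s in samples)
    if peak > 1.0:
        return f"clipping: peak {peak:.2f} exceeds full scale -- reduce gain"
    gain = ceiling / peak  # gain that brings the peak just under 0 dBFS
    return f"peak {peak:.2f} is fine; gain {gain:.2f} would normalize it"

print(check_peaks([0.2, -0.5, 0.4]))   # quiet mix: suggests a boost
print(check_peaks([0.8, -1.3, 0.4]))   # clipped mix: warns about the peak
```

A real assistant would of course need perceptual models for the reverb and synth-thinness cases, which are far harder to quantify than a peak level.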


I don’t think that CPU power has increased that much in the last ten years at all. CISC CPU development reached its peak around 2012, and after that, single-core performance only increased very slightly per year.

Lately though there were bigger jumps again, due to ARM innovations, which also motivated quasi-monopolists like Intel to do a better job now.

Then the OS vendors (mainly Microsoft and Apple) constantly add a lot of high-level complexity to their OSes and APIs, along with very questionable pseudo-security approaches, which makes the job a lot more difficult for audio developers. Only if you always keep your plugins in sync with the latest APIs and conventions can you gain a little performance improvement. This is way too much effort for most plugin devs.

Then most GPU development barely takes audio processing into account. Maybe it is already possible with compute shaders, Metal or whatever, but I assume no developer would want to do the hardware / OS vendors’ job by trying to use these APIs for audio, since it might lead to a total loss of investment, and I think the support from the OS vendors simply sucks.

A new area was those recent plugins using neural networks, e.g. that tape AI plugin (TAIP? forgot the name), but honestly, I can’t hear a benefit so far. In the end, audio development is also a lot about fine-tuning the parameters, translation curves etc. within a tiny sweet spot which is defined by historical experience in audio production.

Also there is only little innovation in workflow, especially in plugin development. This of course is only my personal opinion. I couldn’t find a vendor which unites excellent programming with current UX / usability standards. What we see are mostly analog (and now digital) emulations with little vision. Even the UX limitations of the ancient originals are often emulated. This might be fun, or feel like a game.

Personally I am still looking for the one-for-everything synth. U-He for me is the vendor with the most sophisticated synth engine and stability. Yet their UX is often a bit clunky when it comes to their workstation synths, like Zebra 2 (which is very old to be fair, from 2002). I really like VPS Avenger for the super easy modulation workflow, the huge free drawable envelopes, very good and modern filters and also the stunning FX. Yet it suffers from poor performance / coding and ignores common programming disciplines, so it easily kills any CPU due to spikes.

It seems to me that the effort to fix each vendor’s weak points is so big that it will barely ever happen. Once a plugin reaches production state, a vendor will usually barely improve it anymore, because it already sells. Maybe this would change if a roadmap with payable regular updates were more common. The lack of functional updates might also be caused by the lack of audio DSP frameworks in the past, i.e. by inflexible code structure and old-school frameworks. Sometimes though smart vendors join into a team, as is happening between U-He and Bitwig. Maybe the result will be something great.

The synths fulfilling my dreams here so far are (highly subjective): UVI Falcon, U-He Zebra 2, Melda MSoundFactory, the Bitwig modulation system, (VPS Avenger). But each of those lacks in a specific area. It is a pity that these smart people do not join into a collaboration.

In short, I think audio development does not lack CPU power or better-quality algorithms, but rather real innovative UX and flexibility. And by that I do not mean the most original skeuomorphism. I mean really well-thought-out speedup and simplification in usability and visual feedback, for example; also, there is no reason to limit the number of filters, OSCs etc. in a digital environment. This though requires a bird’s-eye understanding across a lot of disciplines, and in the end, the capitalistic system / Western education is weak exactly at this point.

On the other hand, recent synths which easily kill your CPU often sound clearer/smoother than synths from 2002, which is fascinating, though not necessarily better. Regarding CPU usage, the opposite can be true, too: it happens that MSoundFactory produces alias-free output and barely stresses the CPU at the same time. This is proof that it is actually possible to deliver current algorithmic quality without stressing your old CPU.

I hope I could contribute something to your investigation.


Artists getting paid.


To have these AI-powered filters that take apart a recording into different parts reach god-tier sound quality instead of the watery, FFT-ish band-filtered sounds they often produce now. Probably processing power isn’t the issue; maybe we need new DSP paradigms?
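For context on why bin-masking sounds “watery”: zeroing FFT bins separates sources perfectly only when they occupy disjoint bins, which real instruments never do. A toy pure-Python sketch (an 8-point DFT with two sinusoids, written for illustration, not real separation code):

```python
import cmath, math

def dft(x):
    # Naive discrete Fourier transform, O(N^2) -- fine for a toy example.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N = 8
tone_a = [math.cos(2 * math.pi * 1 * n / N) for n in range(N)]  # energy in bins 1 and 7
tone_b = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]  # energy in bins 3 and 5
mix = [a + b for a, b in zip(tone_a, tone_b)]

# Binary mask: keep only tone_a's bins. This is exact here because the two
# tones sit in different bins; real instruments share bins, so any mask
# smears both sources -- hence the watery artifacts.
X = dft(mix)
masked = [X[k] if k in (1, N - 1) else 0 for k in range(N)]
recovered = idft(masked)
```

In this idealized case `recovered` matches `tone_a` to rounding error; the interesting research problem is exactly what to do when the bins overlap.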


Machine learning AI plugins that can transform sounds into anything you want. :smiley:


AI will definitely both help us with tools and also create its own good music


Yeah this is a cool idea! I think this is feasible in the near future - I’d like to hear it invent genres… maybe map music made throughout history and predict the music of the future!

That would be completely nuts

A lot of people get into music production, but educating them better would open things up.

I like the implementation of the ableton learning tool - https://learningmusic.ableton.com/

There are plugins which are moving to do this… not fully automated but a big help from what existed before.

I see your point - it would have to define what a ‘wrong reverb’ was, which might be tricky, working only on a case-by-case basis


Yeah! I love the idea of AI inventing new instruments - or allowing us to ‘play’ voices


I’d like to see software not emulating hardware but using its potential to bring out new solutions. Less gamification. More interoperability. OSC had that approach. There’s more to connecting things together than MIDI. How about syncing all your devices just by connecting them, independent of any one company? How about a virtual soundcard which acts as a little mixer (with FX, sends, aux and additional routing), which can combine all your soft- and hardware, just in time and simple, without an extra display that has to emulate the hardware visually?
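Since OSC came up: part of what makes it company-independent is that the wire format is a short open spec anyone can implement. A minimal sketch of encoding one OSC 1.0 message with a single float argument (the address `/mixer/ch1/fader` is made up for the example; a real app would likely use a library such as python-osc):

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC 1.0 message with one float32 argument.

    Per the spec, the address and the type-tag string are NUL-terminated
    and padded to a multiple of 4 bytes; arguments follow big-endian.
    """
    def pad(s: str) -> bytes:
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    return pad(address) + pad(",f") + struct.pack(">f", value)

packet = osc_message("/mixer/ch1/fader", 0.75)  # 28 bytes, ready to send over UDP
```

Because the format is this simple, any device or DAW can speak it without licensing anything from anyone.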

I’m fucking tired of this cell-phone style of development: one window docked hard, driven by the graphics card, where the re-rendering of the undocked windows will retire your machine.

Hey! It’s 2022 and you still fancy fiddling around on mini displays to get things done, like in 1982! What’s the goal behind that?

The funniest thing is, we already had these approaches in the mid 90s. They were all killed by the CEOs and their locusts (private equity and hedge funds). In fact, they are killing all the creative resources on the planet.

Even Behringer has been more innovative lately. The XR series of their mixers was cross-platform and a good, affordable solution for small projects.

It’s time to overcome this fucking garbage system. The CEOs and their friends in politics are telling you that they can’t feed even the poorest citizens, but can spend billions of dollars on every lord of war.


If we’re dreaming here, I’ve always thought it would be wild to develop hybrid neurofeedback/AI/deep-learning music systems, where realtime human brain imaging offered a heuristic input into a musical deep-learning AI, with activity in certain brain regions (presumably associated with musical pleasure) providing the feedback target. Kind of a biofeedback-guided AI music generator that could produce maximally enjoyable music perfectly tailored to the individual.

You might need to capture and then model the brain state you’re using as your target, just so you don’t inadvertently torture people with endless generations of attempts at creating the “maximally enjoyable music”… Although I suppose if the technology got good enough, the AI wouldn’t necessarily need to start at square one for each new input state.


Well, processing power SHOULD NOT be an issue nowadays… If it is, then somebody is doing something wrong, like “OK, instead of disc streaming let’s hold all that shit in RAM!”…

I use older machines & older OSes & I don’t have a problem… Of course I use older, more efficient builds… I have a couple Windows 10 machines given to me but can’t stand that OS, so I use them offline for watching tutorials or burning data/movies…

If your goal is to have DAWs that do everything then what is the point?..Generative music like Brian Eno with Koan Pro but for the modern day? Just go play a video game & enable cheats which is about the same thing…

Already too many assembling ‘construction kits’, which are everywhere & also terrible. Damn ads on YT for ‘MIDI Chord Packs’ you simply ‘drag onto your DAW’… Then what? You’ll have the same progression as the 26,000 other customers out there uploading their shit…

In the end massive processing power will do one thing… push developers into more & more BLOAT, unnecessary features in order to have ‘new features’ whilst dumping aspects that actually made their soft efficient in the first place, eventually culminating in unusable instability & crashing. Also, since processing power is not an issue, there will be all different kinds of spyware, server-side confirmations & your HDD an open book, which it is already if you use Win 10 (your machine accessed starting up, shutting down or completely off)…

In essence this will never be as they want everyone ‘on the cloud’…

Since I hear music that is quite excellent made 20 years ago in IT or FT2 or Project5 or Reason 2.5 or 3 then I personally am quite happy with what I got…

In the end it is your OWN personal satisfaction that is to be measured, not anybody else’s. I like to use old betas, weird trackers (and great ones) just to see if I can ‘do it’… Thus much more personal satisfaction in this ‘constriction’. Plus you find facets not seen in others, yes, even Renoise for shit’s sake…

Recently did a tune in the rare H8 tracker just to see if I could do it, as it’s a 21-year-old beta, but with cool automation curves right up front! I think I hit the ‘max’ on DirectSound coupling…

But after that everything…EVERYTHING ELSE, seems easy!..Eat bitter to taste sweet…

For the most part I feel too many desktop musicians don’t know music enough, maybe their software yes, maybe compression yes, reverb yes…but composing NO! And using Ableton to beat slice, dice, combine, stretch, beat match, destroy, debauch or otherwise sodomize loops hoping for a ‘Happy Accident’ is not composing…

You think Alan Silvestri, Mark Mothersbaugh, Hans Zimmer or other when contracted to do a soundtrack say to client “OK, Thanks for the contract…I’ll go back to my studio & start going through slicing & re-arranging loops I’ve made & collected & see if I can come up with something suitable!”…They’d be instantly fired…

Here, Let me give another example from another angle that focuses on someone else…Not me…

Here is a guy who did a lot of stuff under the ‘Helios’ moniker, in which ALL tunes used Synoptic Probe as the sequencer, not Cubase, not Logic or any other-

And he did THIS TUNE for a contest at Planet Mu in 2001 where they were given one 4-bar beat & asked to make a track out of it-

Here he is doing an ambient example in Probe-

Tons more & many in the ‘Helios’ moniker-

All this early stuff in PROBE!! Yet he doesn’t complain about it, plenty of people listen & comment, liked immensely…

Think about it…THINK ABOUT IT!!!

I just want to hear pleasant music at the shopping centers that makes me want to buy things that doesn’t have words that are social engineering me lol :man_shrugging:t4:


What’s the problem? A good musician will be a good musician until he passes away.
What’s wrong with different approaches when they lead to music?
Even two people jamming with their guitars on a street corner are making music which can lead to pleasure, or something else.

Even if the music is just sampled and brought into a new context, it’s still a homage to music.

When I’m at a rave party, I don’t want to hear the anthems of Hans Zimmer. I want to listen to them afterwards.

You can’t even define good music without the existence of the opposite.

Why would somebody compare the Sex Pistols to Hans Zimmer? Really?

This thread is about the imagination of further possibilities.

If somebody says: “No, I don’t wanna use that under any circumstances! I’ll stick with my H8 tracker!”, that’s just fine. But development is unstoppable.
Imagine someone coming to a gunfight with a knife. :wink:

Since there are no unlimited resources, and we’re heading more and more towards the point where resources will become very limited again, it’s a cool thing to dream, brainstorm and imagine.

Most new styles in music were quite affected by the DAWs and apps that were around at the time. You may not like them, but all those influences result in the music of today.

Personal habits and personal taste are not the point here.
It’s about what could be a joy in the future.
If you already know a lot of other easy ways to do so, that’s fine, too.

But why should you use your fist to get the nail into the wall? It’s about the tools.
No tool will replace a skilled musician, a supreme composer, or even a mixing engineer. If one did, the people behind the loudspeakers were never really into music. There’s also a market for audio dramas. :wink:

And what about the kids? Why shouldn’t there be easier ways to take your first steps into music?

I’m still addicted to trackers because of my habits. I don’t blame anyone for being different.

If it’s still a dream to have an orchestra at hand, one can engage one. It’s probably more affordable than trying to emulate one by machine.

At the current state, AI is more ML (machine learning). But I hope that AI will be more of an accompanying automatic than a producer. :wink:
It’s probably cool to see AI producing, but imagine every idea of your own being translated into something you don’t like. OK, probably better than just being sampled.

Unfortunately, this is caused by buffer capacity issues between different audio interfaces. Nobody can create a universal protocol that keeps those latencies as low as we’d want.
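The arithmetic behind that latency is simple: each buffer of N frames at sample rate R adds N/R seconds, and a full input-plus-output round trip at least doubles it. A quick sketch (the 256-frame / 48 kHz numbers are just an example, and real chains add further driver and conversion buffers on top):

```python
def buffer_latency_ms(frames: int, sample_rate: int, stages: int = 2) -> float:
    """Buffering latency in milliseconds. `stages` counts how many buffers
    the signal passes through (2 = one input buffer + one output buffer)."""
    return stages * frames / sample_rate * 1000.0

# A 256-frame buffer at 48 kHz costs ~10.7 ms round trip before any
# extra buffering that drivers or sample-rate conversion add.
print(buffer_latency_ms(256, 48000))
```

This is why every extra device in the chain, each with its own buffer, pushes total latency up even when each hop looks small on its own.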

For me, as a classical/baroque music composer “originally”, I would like to see a VST that can emulate 100% the sound of orchestral instruments “especially strings - violins, violas and cellos” & can simulate the behavior of human playing styles “marcato, staccato, crescendo… etc.”, based on algorithmic programming foundations, with no rompler/sampler integrations. Example: a violin VST that can do anything a human violin player can. Also, a VST that can simulate an ultra-realistic human voice would be a game changer. Just dictate some lyrics to it and “name your singer”.