GPU Audio [new technology, or nothing new?]

Hello!

Some news about fresh ideas

What do you think about this?


OpenCL-based VSTs have been around for some years now, mostly as GitHub projects. Since Apple has now killed OpenCL, I would assume the current problem is the lack of an established cross-platform API. I have no idea whether this would work on top of Vulkan/MoltenVK, but I guess so.

The downside might be that the audio latency doubles with every additional serial instance of a GPU-accelerated plugin, just like with UAD. I don’t remember whether this is still true, or what the reason for it was.
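To make the worry concrete, here is a minimal back-of-the-envelope sketch. It assumes (and this is purely an assumption, not something the video states) that every serial GPU plugin costs one extra audio buffer of round-trip latency:

```cpp
// Rough sketch only: assumes each serial GPU plugin adds one extra buffer
// of round-trip latency on top of the normal in/out buffering.
#include <cstdio>

int main() {
    const double sample_rate   = 48000.0; // Hz
    const int    buffer_frames = 128;     // host audio buffer size
    const double buffer_ms     = 1000.0 * buffer_frames / sample_rate;

    for (int plugins = 0; plugins <= 4; ++plugins) {
        // base I/O latency (in + out) plus one assumed extra buffer per GPU plugin
        double total_ms = 2 * buffer_ms + plugins * buffer_ms;
        std::printf("%d serial GPU plugin(s): ~%.2f ms round trip\n",
                    plugins, total_ms);
    }
}
```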

That’s why the video is not showing a new concept at all. It would be interesting to learn the details, but the video leaves them all out. Also, they only use a single instance, I guess for a reason. It could be that they are using CUDA, i.e. Nvidia/Windows only.

I also think it’s going to take some time and technical development before this becomes standard in the audio world.

With 3D graphics and number crunching it became popular to use the GPU because it can crunch more data in less time in such parallel contexts. The GPU is simply faster than the CPU for those tasks, because the work can be split into very many very small pieces.

With audio, the problem is that audio is a serial stream, so it cannot be split into hundreds of independent tasks that easily. Audio is also often real-time and latency-critical, and GPU calculations often add extra latency.
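A small sketch of what that serial dependency looks like in practice (the function names are just illustrative): a plain gain is trivially parallel, while a feedback filter ties every sample to the previous output:

```cpp
#include <vector>

// Per-sample gain: each output depends only on its own input sample, so the
// work splits into thousands of independent pieces (the GPU-friendly case).
void apply_gain(std::vector<float>& buf, float gain) {
    for (float& s : buf)
        s *= gain;
}

// One-pole low-pass (IIR): y[n] depends on y[n-1], so the samples of one
// channel form a serial chain that cannot simply be spread across thousands
// of GPU threads.
void one_pole_lowpass(std::vector<float>& buf, float a) {
    float y = 0.0f;
    for (float& s : buf) {
        y = a * s + (1.0f - a) * y; // feedback on the previous output
        s = y;
    }
}

int main() {
    std::vector<float> block(512, 0.5f);
    apply_gain(block, 0.8f);
    one_pole_lowpass(block, 0.2f);
}
```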

And it is extra work to port, transform and optimize existing CPU code for GPUs. You cannot just run any program on the GPU; it has to be designed to work well with it.

Audio indirectly benefits from the GPU though: when the GPU handles the graphics, the CPU has less work to do and more resources are left for audio.


I believe SIMD instructions are fairly commonly used for DSP, so parallelization has been happening on a smaller scale for some time.
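For illustration, a minimal SSE sketch of that small-scale parallelism (gain applied to four samples per instruction; real DSP code would be more careful about alignment, denormals and so on):

```cpp
#include <xmmintrin.h> // SSE intrinsics
#include <cstddef>

// Minimal sketch: apply a gain to an audio buffer 4 samples at a time.
void apply_gain_sse(float* buf, std::size_t n, float gain) {
    const __m128 g = _mm_set1_ps(gain);
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 x = _mm_loadu_ps(buf + i);         // load 4 samples
        _mm_storeu_ps(buf + i, _mm_mul_ps(x, g)); // multiply and store 4 at once
    }
    for (; i < n; ++i) // scalar tail for the remaining samples
        buf[i] *= gain;
}

int main() {
    float buf[6] = {0.1f, 0.2f, 0.3f, 0.4f, 0.5f, 0.6f};
    apply_gain_sse(buf, 6, 0.5f);
}
```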

Sound on Sound suggests convolution as an ideal use case, but points out that latency can be a problem.

Their Reverberate LE is donationware and is available in both native and GPU editions, although its developers do warn that, depending on which NVidia graphics card and CPU you’re using, you may find the native version more efficient, especially when using smaller audio buffer sizes for lower latency. This is due to the extra processing overhead of ferrying blocks of data to and from the GPU.

via https://www.soundonsound.com/sound-advice/using-your-graphics-card-process-plug-ins#top
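To illustrate why smaller buffers make the GPU route less attractive, here is a tiny sketch. The 0.3 ms per-block transfer/launch overhead is a made-up placeholder, not a measured number; the point is only that this roughly fixed cost eats a growing share of the time budget as the buffer shrinks:

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    const double sample_rate       = 48000.0;
    const double fixed_overhead_ms = 0.3; // assumed per-block transfer/launch cost

    for (int frames : {1024, 256, 64}) {
        // time budget until the next block is due
        double budget_ms = 1000.0 * frames / sample_rate;
        double share     = 100.0 * fixed_overhead_ms / budget_ms;
        std::printf("%4d-frame buffer: %.2f ms budget, overhead eats ~%.0f%% of it\n",
                    frames, budget_ms, share);
    }
}
```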

I think if DAWs really offloaded the GUI work from the CPU to the GPU, using recent graphics APIs like Vulkan/MoltenVK/Metal/DirectX, it would already help a lot by leaving more resources available on the CPU, just like OopslFly said.

But in reality, even today a lot of DAW GUIs are still mainly rendered on the CPU: Reaper, Bitwig (the 3.1 beta is partly using Metal 2, though!), Renoise, Mulab, Waveform, and almost all VST plugin GUIs. Obviously because it is a tough job, and the graphics APIs seem to change every 5 years.


Even with something like OpenGL 3.2 you can do a lot. But just writing wrappers around the drawing functions probably wouldn’t suffice, since GPU rendering thrives on batching work and reducing state changes to a minimum, so it likely amounts to writing a second GUI from scratch. And doing that only gets harder the longer it’s put off…
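Just to illustrate the batching point, a rough sketch (submit_batch is a hypothetical stand-in for the actual glBufferData/glDrawArrays calls): instead of one draw call plus state changes per widget, the frame’s widget quads get collected into one vertex array and submitted together:

```cpp
#include <array>
#include <cstdio>
#include <vector>

struct Vertex { float x, y, u, v; };

// Stand-in for the real GPU submission, e.g. glBufferData(...) + glDrawArrays(...).
void submit_batch(const std::vector<Vertex>& vertices) {
    std::printf("1 draw call, %zu vertices\n", vertices.size());
}

// GPU-friendly approach: gather every widget quad of the frame into one big
// vertex array and hand it to the GPU in a single draw call, instead of one
// draw call (plus state changes) per widget.
void draw_frame(const std::vector<std::array<Vertex, 4>>& widget_quads) {
    std::vector<Vertex> batch;
    batch.reserve(widget_quads.size() * 6);
    for (const auto& q : widget_quads) {
        // two triangles per quad
        batch.insert(batch.end(), {q[0], q[1], q[2], q[0], q[2], q[3]});
    }
    submit_batch(batch);
}

int main() {
    std::vector<std::array<Vertex, 4>> quads(200); // 200 widgets in this frame
    draw_frame(quads);
}
```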

If I were making a DAW, I think I’d want to build it on top of something like Cairo, Skia, or ANGLE. The popular web browsers take this approach.


I would love to see this, especially considering that the current CPU-rendered GUI has major performance problems on HiDPI displays.


Great idea, but nothing new: GPU Impulse Reverb VST | NVidia/ATI GPUs used as DSP for convolution reverb calculation

When I had my Creamware cards 22 years ago, the DSP development was great. You could even blow the other acts with their analogue gear away, just because your project ran at 96 kHz/24 bit in real time.

Times have changed. If you’re relying on hardware that is not developed by you (e.g. Nvidia or ATI), you’re tied to their drivers. If some CTO doesn’t want to include APU support or the like, you have to build a workaround for that case. Tricky.
For current Nvidia cards there is a major driver update every two weeks, which means that if you’re unlucky you have to redevelop everything from scratch every two weeks, or you have to stick with old drivers, which is a no-go, because people want to benefit from the new support for games and 3D tools.

Believe it or not, the second problem is latency. That is because we are still stuck with the old PC architecture. Imagine what could have been possible if the old PowerPC architecture had made it through the market.

Latency is not an issue while the audio is being processed on the GPU, but the audio has to get in and out of there, which is still a severe problem, at least on Windows machines.

The third major problem: when your system doesn’t support the GPU-to-APU switch, your DAW has to support it. If too many cooks are involved, the meal becomes shitty. There’s no swarm intelligence beyond humanity, only swarm degeneration; you can look at Wikipedia for an example.
When globalism meets capitalism, the lowest standards always make their way into everyday life.