GPU Audio [new technology or not new]

Hello!

Some news about a fresh idea.

What do you think about this?


OpenCL-based VSTs have been around for some years now, mostly as GitHub projects. Since Apple has now deprecated OpenCL, I would assume the current problem is the lack of a cross-platform API. I have no idea whether this would work with the Vulkan/MoltenVK API, but I would guess so.

The downside might be that audio latency doubles with each additional GPU-accelerated plugin in a serial chain, just like with UAD. I don't remember whether this is still true, or what the reason for it was.

So the video is not showing a new concept at all. It would be interesting to learn the details, but the video leaves them all out. They also show only one instance, I guess for a reason. It could be that they are using CUDA, which would make it NVIDIA-only.

I also think it's going to take some time and technical development before it becomes standard in the audio world.

With 3D graphics and number crunching it became popular to use the GPU, because the GPU can crunch more data in less time in such parallel contexts. The GPU is simply faster than the CPU for those tasks, because the work can be split into very many very small independent pieces.

With audio the problem is that a signal is a serial stream: many DSP algorithms make each output sample depend on previous ones, so the work cannot be split into hundreds of independent tasks that easily. Audio is also usually real-time and latency-sensitive, while GPU calculations often add extra latency.
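To illustrate the serial-stream point, here is a minimal C sketch (function names are my own, for illustration) contrasting a per-sample gain, which parallelizes trivially, with a one-pole IIR lowpass, whose recursive dependency forces sample-by-sample computation:

```c
#include <stddef.h>

/* Per-sample gain: each output depends only on its own input,
   so the iterations could be split across thousands of GPU threads. */
void apply_gain(const float *in, float *out, size_t n, float gain) {
    for (size_t i = 0; i < n; ++i)
        out[i] = in[i] * gain;
}

/* One-pole lowpass (IIR): each output depends on the PREVIOUS
   output, so the samples cannot be computed independently --
   this is the serial data dependency described above. */
void one_pole_lowpass(const float *in, float *out, size_t n, float a) {
    float y = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        y = a * in[i] + (1.0f - a) * y;  /* reads the previous y */
        out[i] = y;
    }
}
```

The first loop is "embarrassingly parallel"; the second is exactly the kind of feedback structure (filters, compressors, delays with feedback) that resists naive GPU parallelization.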

And it is extra work to port, transform, and optimize existing CPU code for GPUs. You cannot just run any program on the GPU; it must be designed to work well with it.

Audio benefits indirectly from the GPU, though: when the GPU handles the graphics, the CPU has less work to do, leaving more resources for audio.


I believe SIMD instructions are fairly commonly used for DSP, so parallelization has been happening on a smaller scale for some time.
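As a rough sketch of what that smaller-scale parallelism looks like, here is the gain loop written with x86 SSE intrinsics, processing four samples per instruction (an assumption of x86 hardware; buffer length is assumed to be a multiple of 4 for brevity):

```c
#include <xmmintrin.h>  /* SSE intrinsics, x86 only */
#include <stddef.h>

/* Multiply four float samples at a time with one SSE multiply.
   Assumes n is a multiple of 4; real DSP code would also handle
   the remaining tail samples with a scalar loop. */
void gain_simd(const float *in, float *out, size_t n, float gain) {
    __m128 g = _mm_set1_ps(gain);           /* broadcast gain to 4 lanes */
    for (size_t i = 0; i < n; i += 4) {
        __m128 x = _mm_loadu_ps(in + i);    /* load 4 samples */
        _mm_storeu_ps(out + i, _mm_mul_ps(x, g));
    }
}
```

This is data parallelism within one CPU core, the same idea a GPU takes to an extreme with thousands of lanes instead of four.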

Sound on Sound suggests convolution as an ideal use case, but points out that latency can be a problem.

Their Reverberate LE is donationware and is available in both native and GPU editions, although its developers do warn that, depending on which NVidia graphics card and CPU you’re using, you may find the native version more efficient, especially when using smaller audio buffer sizes for lower latency. This is due to the extra processing overhead of ferrying blocks of data to and from the GPU.

via https://www.soundonsound.com/sound-advice/using-your-graphics-card-process-plug-ins#top
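Convolution is a good fit because, in direct form, every output sample is an independent dot product over the input and the impulse response; no output reads another output, so each sample could in principle be computed by its own GPU thread. A minimal CPU sketch of that structure (function name is my own):

```c
#include <stddef.h>

/* Direct-form convolution of an input of n samples with an
   impulse response of m taps. out[i] only reads in[] and ir[],
   never another out[] value, so all n outputs are independent. */
void convolve(const float *in, size_t n,
              const float *ir, size_t m, float *out) {
    for (size_t i = 0; i < n; ++i) {        /* independent per i */
        float acc = 0.0f;
        for (size_t k = 0; k < m && k <= i; ++k)
            acc += ir[k] * in[i - k];
        out[i] = acc;
    }
}
```

The catch the article mentions is not the math but the transfer: blocks of samples must be copied to GPU memory and back, and with small buffer sizes that round trip can cost more than the convolution itself.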

I think if DAWs really offloaded GUI rendering to the GPU, using recent graphics APIs like Vulkan/MoltenVK/Metal/DirectX, it would already help a lot by leaving more resources available to the CPU, just like OopslFly said.

But in reality, even today many DAW GUIs are still rendered mainly on the CPU: Reaper, Bitwig (the 3.1 beta is partly using Metal 2, though!), Renoise, Mulab, Waveform, and almost all VST plugin GUIs. Obviously this is because it is a tough job, and the graphics APIs seem to change every five years.

Even with something like OpenGL 3.2 you can do a lot. But just writing wrappers around the existing drawing functions probably wouldn't suffice: GPU rendering thrives on batching work and keeping state changes to a minimum, so it likely amounts to writing a second GUI from scratch. And doing that only gets harder the longer it's put off…

If I were making a DAW, I think I’d want to build it on top of something like Cairo, Skia, or ANGLE. The popular web browsers take this approach.
