Request for clarification on VST3 parameter automation update frequency

Hi!

I’ve noticed parameter automation in Renoise is not nearly as smooth as in other DAWs (tested Reaper, Bitwig and FL Studio).

The scenario is the following:

Load Surge XT into Renoise, create a single pattern of length 32 at BPM 120 and LPB 8, and play a single C4 note from the start (Surge defaults to a saw oscillator). Add an instrument automation control that varies Surge’s filter cutoff from 0 to max across those 32 lines, so +8 per line. The resulting changes in filter cutoff are audibly stepped. I am running at 48 kHz with the default 12 ticks per line, but varying either of those two doesn’t seem to help much.
Is this expected behaviour? And if so, why? Reaper and Bitwig, on a similar simple demo, perform filter cutoff modulation absolutely smoothly. Sorry if I’m missing something.

So really, how often are automated VST3 parameter updates supposed to be sent to the plugin processor?

For some in-depth details: I’m developing my own VST3 plugin (I just used Surge as an example because anyone can download it), and these are the results I’m getting when debugging automation events in my own synth:

  • Reaper with a 160 block size at 48 kHz gives me 1 update per block, so 300 per second
  • Renoise with a 480 block size at 48 kHz gives me 1 update every 8 (!) blocks, so only 12.5 per second
  • Renoise with a 128 block size at 48 kHz gives me 1 update every 20 (!) blocks, so only 18.75 per second (the arithmetic is sketched below)
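
For reference, the per-second figures above follow directly from the sample rate, block size and blocks per update; a tiny C++ sketch of that arithmetic (these are just the numbers reported above, not new measurements):

```cpp
#include <cstdio>

// Updates per second = (blocks per second) / (blocks per update)
//                    = (sampleRate / blockSize) / blocksPerUpdate
static double updatesPerSecond(double sampleRate, double blockSize, double blocksPerUpdate)
{
    return (sampleRate / blockSize) / blocksPerUpdate;
}

int main()
{
    std::printf("Reaper,  block 160, 1 update per block:     %6.2f/s\n", updatesPerSecond(48000, 160, 1));  // 300.00
    std::printf("Renoise, block 480, 1 update per 8 blocks:  %6.2f/s\n", updatesPerSecond(48000, 480, 8));  //  12.50
    std::printf("Renoise, block 128, 1 update per 20 blocks: %6.2f/s\n", updatesPerSecond(48000, 128, 20)); //  18.75
}
```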

TBH I have not debugged Bitwig or FL Studio, but they do sound smooth, so I’m assuming loads of parameter updates in those, too.

Now I know all of this can in theory be solved with internal parameter update filtering in the plugin (I actually tried that; it sounds smooth with a 150 ms or so low-pass), but that’s an additional 150 ms of latency. Also, surely it cannot be the case that a somewhat mainstream synth like Surge behaves so radically differently in Renoise vs. any other DAW with default settings?

Any insights and/or clarification greatly appreciated. Or even better, please just tell me that I’m wrong and messed up somewhere :)

Still loving Renoise so thanks once more for a wonderful product.

I don’t know the details, but I assume that the parameters are updated at tick rate?

Bitwig is really neat in this regard, everything is most precise, nailed at sample rate. No random results here :laughing:

I actually recommend to not use Renoise automation or meta devices for volume modulation. There are better alternatives like Devious Machines Duck, triggered through a VST3 sidechain signal. There you have sample accuracy again.

I would be interested in some definite answers of the developer about this topic, too.

It probably sounds stepped because you either recorded the automation changes across pattern lines in the pattern editor, or set the automation to Points mode in the automation editor?

I think Renoise defaults to points when realtime recording inside the automation editor. After you have recorded the automation, change from points to either lines or curves and the automation will sound smoother.

@Jonas

Thanks a bunch for that!

I’ve been a Renoise user for over a decade and never took the time (my bad) to look into the automation curves. Honestly, I didn’t even know they existed. That said, I never record automation, I just punch it in. That’s why I use a tracker in the first place! Just go 0x00 - 0xFF, interpolate the column.

Anyway, this is NOT smooth:

And this IS smooth:

Thanks for this. It’s not my preferred way of working, but definitely good to know the option exists.

Yes, this is to be expected when you set the automation in the pattern editor, since every line represents a value change. Depending on the BPM, LPB and pattern length resolution you could make it sound less steppy (use tools like GitHub - mogue/renoise-pattern-zoom: Renoise tool to expand the lines per beat and repositioning the relevant data to easily expand patterns across a song, so you spread the same music across more pattern lines), but I guess this still won’t sound as smooth as simply drawing a line in the automation editor.

Also, IMO it is worth checking out some of the automation tools for playing around with shapes in the automation editor, a fun way of quickly generating stuff that would otherwise require a ton of manual editing:

@Jonas

<< this is to be expected when you set the automation in the pattern editor since every line represents a value change

This is exactly what I’m trying to clarify!
Let’s say I have 4 lines with automation values 0x00, 0x40, 0x80, 0xC0.
Also, for simplicity, let’s make it 4 ticks per line instead of the default 12.

I see 3 options:

  1. Renoise sends updates per line: I get 0, 0.25, 0.5, 0.75 => 4 updates
  2. Renoise sends updates per tick: I get 0, 0.0625, 0.125, 0.1875, 0.25, 0.3125, etc. => 16 updates
  3. Either (1) or (2), but with interpolation

Unfortunately neither (2) nor (3) seems true, and we’re really stuck at (1).
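
To make the difference concrete, here is a minimal C++ sketch of the toy example above; it assumes the 0x00..0xFF values normalize to 0..1, and simply prints the update stream a plugin would see under option (1) versus option (2):

```cpp
#include <cstdio>

// Toy illustration of the 4-line / 4-ticks-per-line example above,
// assuming 0x00..0xFF maps to 0..1 (so 0x00, 0x40, 0x80, 0xC0 -> 0, 0.25, 0.5, 0.75).
int main()
{
    const double lineValues[] = { 0.0, 0.25, 0.5, 0.75 };
    const int numLines = 4, ticksPerLine = 4;
    const double step = 0.25; // value increase per line, continued past the last line

    // Option (1): one update per line -> 4 updates total.
    for (int line = 0; line < numLines; ++line)
        std::printf("per-line update: %.4f\n", lineValues[line]);

    // Option (2): one update per tick, linearly interpolated
    // towards the next line -> 16 updates total.
    for (int line = 0; line < numLines; ++line)
        for (int tick = 0; tick < ticksPerLine; ++tick)
        {
            const double t = double(tick) / ticksPerLine;
            std::printf("per-tick update: %.4f\n", lineValues[line] + step * t);
        }
}
```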

On a 32 line, 16 ticks-per-line pattern, I would indeed expect 16*32 = 512 parameter updates. That’s more than enough for a plugin to reconstruct a smooth signal internally without significant lag.

That’s not what happens in Renoise though!

Given the above-mentioned example of 32 lines, 120 BPM = 2 beats per second, 8 LPB, 16 ticks per line, this works out to 2 seconds of audio.

If Renoise sends updates per line, I get 32 in 2 seconds (not so good, and I suspect this is what’s happening). If Renoise sends updates per tick, I get 512 in 2 seconds, i.e. 256 per second (easily good enough).

BTW, comparing those numbers with my first post, it really seems Renoise sends automation data per line, not per tick, and also does NOT interpolate lines (let alone ticks). An obvious workaround, then, is to increase the pattern size and just have more lines. Not the end of the world, but annoying. Having grown up in the 80s/90s, I still love me some manual punch-it-in automation haha. Would be a shame if that’s ultimately inferior to mouse-drawn curves.

@taktik sorry for tagging you again, but I don’t know of anyone else who knows the details of this. What exactly is happening under the hood?

  • Does it update per line?
  • Does it update per tick?
  • Does it interpolate on the host side?

@ffx

<< but I assume that the parameters are updated at tick rate

That’s what I first thought, but see above.

<< Bitwig is really neat in this regard, everything is most precise, nailed at sample rate

Not really at sample rate. I observed Bitwig sending updates for CLAP modulators every 64 samples. I would be surprised if that’s any different for either CLAP automation or VST3 automation. Still, at 48 kHz that works out to 750 (!) updates a second, which is easily more than enough to reconstruct a smooth signal inside the plugin.

<< I actually recommend to not use Renoise automation or meta devices for volume modulation

So it appears Renoise does smooth curves already, just not from within the pattern editor!

<< I would be interested in some definite answers of the developer about this topic, too

As would I. See above. Fingers crossed :slight_smile:

Ah yes, I remember that Urs mentioned something similar once on the KVR forum. Or maybe it was in the Discord channel of the Surge devs, namely BaconPaul. I also remember that the goal was sample accuracy though. And wasn’t the recent Cubase version claiming to provide sample-accurate automation, too? I doubt that VST3 is more capable than CLAP… Lot of confusion for me here…

  • Pattern automation and Points mode in graphical automation are not interpolated. Parameter changes are simply sent out at the point specified in the pattern, sample accurate.

  • In graphical Line or Curve automation, points in the automation are sent sample-accurately, as in Points mode, and values in between are interpolated and sent on each “tick”.

A “tick”'s duration in samples is defined via the sample rate, BPM, LPB and TPL settings in Renoise.

tick duration in samples = (SR * 60.0 / BPM / LPB) / TPL

So e.g. at TPL 12, LPB 4, BPM 120, and a sample rate of 44100 Hz a tick and thus the interpolation step is 459 samples.
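
As a quick sanity check of that formula, a small C++ sketch (an illustration only, not Renoise code) that computes the tick duration and the resulting number of interpolation steps per second:

```cpp
#include <cstdio>

// Tick duration as in the formula above:
// samples per beat, divided by lines per beat, divided by ticks per line.
static double tickDurationInSamples(double sampleRate, double bpm, double lpb, double tpl)
{
    return (sampleRate * 60.0 / bpm / lpb) / tpl;
}

int main()
{
    // taktik's example: TPL 12, LPB 4, BPM 120, 44100 Hz -> ~459 samples per tick,
    // i.e. 96 interpolation steps per second.
    const double tick = tickDurationInSamples(44100.0, 120.0, 4.0, 12.0);
    std::printf("tick = %.3f samples, %.1f ticks per second\n", tick, 44100.0 / tick);
}
```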


To schedule parameter automation for plugins, the plugin’s processing block size must be adjusted so that parameter changes can be applied at the desired time, right before the start of a processing block. This causes quite a bit of overhead, as smaller processing blocks cause more overhead in general. For this reason every host out there will do some quantization here. In our case it’s ticks; other hosts will use different values.

The VST3 protocol supports sample-accurate scheduling of parameter changes in plugins. But we do not currently use this because I think most plug-ins out there discard the sample time anyway. Especially if they are ported from a VST2 plugin.
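
For readers following along, this is roughly what sample-accurate parameter changes look like on the receiving (plugin) side of the VST3 SDK; a minimal sketch using the standard IParameterChanges/IParamValueQueue interfaces, not Renoise’s or any particular plugin’s actual code:

```cpp
#include <pluginterfaces/vst/ivstparameterchanges.h>
#include <pluginterfaces/vst/ivstaudioprocessor.h>

using namespace Steinberg;
using namespace Steinberg::Vst;

// Sketch: walk the per-block parameter queues and read each change
// together with its sample offset inside the block.
void readSampleAccurateChanges(ProcessData& data)
{
    if (!data.inputParameterChanges)
        return;

    const int32 numQueues = data.inputParameterChanges->getParameterCount();
    for (int32 q = 0; q < numQueues; ++q)
    {
        IParamValueQueue* queue = data.inputParameterChanges->getParameterData(q);
        if (!queue)
            continue;

        const ParamID id = queue->getParameterId();
        for (int32 p = 0; p < queue->getPointCount(); ++p)
        {
            int32 sampleOffset = 0;
            ParamValue value = 0.0;
            if (queue->getPoint(p, sampleOffset, value) == kResultOk)
            {
                // A plugin that honours sampleOffset can apply 'value' exactly
                // at that position in the block; one that ignores it effectively
                // quantizes every change to the block boundary.
                (void)id; (void)sampleOffset; (void)value;
            }
        }
    }
}
```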

@ffx

<< I doubt that VST3 is more capable than CLAP… Lot of confusion for me here…

I’ve been working with both of them for over 4 years, so I do have some opinions on the matter. I do think CLAP has some killer features that really make it stand out from VST3, but sample accuracy is not one of them. Both do sample-accurate automation (i.e. scheduling parameter changes midway through a processing block) just fine.

For me, what really sets CLAP apart are:

  • Full access to raw MIDI data inside the plugin
  • Ability to join the host thread pool (i.e. all plugins inside the host, as well as the host itself, share a single multi-core worker task pool instead of “to each his own”; this should give the DAW way more freedom in deciding how to allocate CPU cores, and frees the plugin dev from having to do so)
  • Non-destructive automation (CLAP calls it modulation). Allows you to run e.g. a host-provided LFO against any plugin parameter of your choice without messing up the patch state. Bitwig goes really crazy with this.
  • Per-voice modulation: allows any host-provided modulator to target individual voices in a polyphonic synth. E.g. with partially overlapping voices in a chord progression or whatever, each synth voice gets its own virtual copy of said host-provided modulators. (See the sketch after this list for what these modulation events look like in the plugin’s process callback.)
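
A minimal sketch of those last two points, assuming the public CLAP headers (clap_event_param_mod, CLAP_EVENT_PARAM_MOD); an illustration only, not code from any specific host or plugin:

```cpp
#include <clap/clap.h>

// Sketch: walk the input event list for one processing block.
// Each event carries a sample-accurate 'time' offset into the block,
// and CLAP_EVENT_PARAM_MOD events carry a note_id so a modulation
// amount can target one voice instead of the whole patch.
static void readModEvents(const clap_process_t* process)
{
    const clap_input_events_t* events = process->in_events;
    const uint32_t count = events->size(events);

    for (uint32_t i = 0; i < count; ++i)
    {
        const clap_event_header_t* hdr = events->get(events, i);
        if (hdr->space_id != CLAP_CORE_EVENT_SPACE_ID)
            continue;

        if (hdr->type == CLAP_EVENT_PARAM_MOD)
        {
            const clap_event_param_mod_t* mod = (const clap_event_param_mod_t*)hdr;
            // mod->amount is an offset on top of the stored parameter value,
            // so the saved patch state is never touched (non-destructive).
            // mod->note_id == -1 means "all voices"; otherwise it addresses
            // the single voice created with that note id (per-voice modulation).
            // hdr->time is the sample offset of this change within the block.
            (void)mod;
        }
    }
}
```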

There’s probably more stuff I haven’t even looked into yet, but for me, those are the big ones.

Anyway, if you want to know more about this stuff I’d be happy to clarify, just PM me. But note I’m by no means an expert on the matter, just a user of both APIs like so many other plugin devs.

BTW also nice to see BaconPaul’s name popping up over here. He’s helped me out a lot over the years.

@taktik

Thanks! That’s exactly the info I was looking for. That also makes @Jonas’s answer the accepted answer/solution. But before I mark it as such:

<< The VST3 protocol supports sample-accurate scheduling of parameter changes in plugins. But we do not currently use this because I think most plug-ins out there discard the sample time anyway. Especially if they are ported from a VST2 plugin.

I really want to ask you to reconsider!

I don’t discard it, I think Surge doesn’t discard it, a quick Google search says FabFilter doesn’t discard it, and there are probably more.

But what’s more important: many plugs today are indeed rooted in VST2, with a code base shared with their VST3 version. But since Steinberg ditched VST2, there are bound to be more and more new plugs without the VST2 heritage. I can confirm that I was not even allowed, from the start, to publish a VST2 version of my own plug.

The other big concern in this regard is JUCE. I’d wager over half of all plugs today are JUCE plugs, and JUCE doesn’t do sample-accurate automation ATM, but it’s been a topic of discussion for years. Once they decide to support it, that opens the door for thousands of plugin devs to support it, too. Of course that still requires developer effort on the plugin side, but I’d be very surprised if none of them did it. And JUCE is huge in terms of user base.

<< This causes quite a bit of overhead, as smaller processing blocks cause more overhead in general. For this reason every host out there will do some quantization here. In our case it’s ticks; other hosts will use different values.

Agreed. FL Studio is notorious for doing this, down to a handful of samples per processing block. It’s a real performance killer. However, sample-accurate scheduling just might be on your side here! The question is: is it more performant to schedule 100 blocks of size 10 without sample accuracy, or 1 block of size 1000 with 100 in-block parameter updates? Gut feeling says the latter will be faster, but the plug of course has to be able to cope.

I can easily force Reaper to give me multiple updates per block with not even that large a block size (a couple of milliseconds or so). Bitwig is even easier, since I get updates every 64 samples.

Anyway, the point I’m trying to make is this: there are more and more hosts supporting sample-accurate automation, and as a result I expect more and more plugs to do so, too. Especially once JUCE gets it.

And this might just be me nitpicking, but please don’t assume every plug plays by the JUCE rules! VST3 (let alone CLAP) allows for loads of stuff that JUCE cannot do. This Possible (probable?) bug w.r.t. vst3 parameter flushing was one really nasty example of that, which also affected at least one other plugin (I think it was Supermassive, but not sure). This Saved automation data does not respect VST3's parameter id works in Renoise for probably 90% of plugs, because most plugs are either VST2 or JUCE-VST3, and in both of those cases Renoise is fine. But it still possibly breaks many native-VST3 plugs.

Well, end of long rant. I really hope I can change your mind on all of this a bit, but of course, TBH, I do have a personal stake in this: being able to write music using my own plugin on the only decent DAW that’s a tracker. Still hating piano rolls. And still loving Renoise.

-cheers, Sjoerd

@taktik

One more thought.

I used to think that the block size specified in Edit -> Preferences -> Audio -> Buffer Size controls both the host-to-audio-device block size (so Renoise <> WASAPI or Renoise <> ASIO or whatever) and the Renoise <> VST3 plugin block size. I now understand that this is not the case, and the DAW is at liberty to schedule smaller or even varying-size blocks to the plugin, compared to the DAW <> audio device buffering.

But would it not, in theory, be optimal to have them all at equal size? Say, ASIO requests a 128-sample block from Renoise, then Renoise goes on its way to request a 128-sample block from all of its plugins, and in the process schedules sample-accurate changes for plugin A at 8, 16, 24, etc. within that 128-sample block, and possibly something completely different for plugin B?

I fully understand a scheme like that is just plain not feasible at present given that most plugs just cannot handle it, but it seems theoretically optimal to me. Thoughts?

Ideal for the plugin, yes, but then the plugin must support parameter automation scheduling. There are also some situations where the host still needs to split buffers. For example, when musical timing (BPM, signature) is automated.

Also, not all audio drivers can guarantee fixed, static buffer sizes on all platforms.

So you need to support variable block sizes in your plugin anyway.
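
To illustrate why variable block sizes are unavoidable, here is a rough, hypothetical host-side sketch (the helper name is made up, and this is not Renoise’s implementation): the device buffer is cut at every scheduled event, e.g. a tempo change or a quantized automation point, and each sub-range becomes its own, variable-sized processing call to the plugin:

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical host-side helper: split one device buffer of 'blockSize'
// frames at the given event offsets, so each event lands exactly on a
// sub-block boundary. The plugin therefore sees variable block sizes.
std::vector<std::pair<int32_t, int32_t>> // (startFrame, frameCount)
splitBlockAtEvents(int32_t blockSize, std::vector<int32_t> eventOffsets)
{
    std::sort(eventOffsets.begin(), eventOffsets.end());
    std::vector<std::pair<int32_t, int32_t>> subBlocks;

    int32_t start = 0;
    for (int32_t offset : eventOffsets)
    {
        if (offset > start && offset < blockSize)
        {
            subBlocks.push_back({ start, offset - start }); // run up to the event
            start = offset;
        }
    }
    if (start < blockSize)
        subBlocks.push_back({ start, blockSize - start }); // remainder

    return subBlocks;
}

// E.g. a 512-frame device buffer with a tempo change at frame 200 and an
// automation point at frame 459 becomes three plugin calls:
// (0, 200), (200, 259), (459, 53).
```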

@taktik

Point taken! I had not thought of BPM/time-signature changes or any other stuff that’s only transmitted at block boundaries. You are right, in that case the host has to split blocks anyway.

Thanks for the clarification and all the background info. I still stand by my remarks regarding VST2 and JUCE, so I do hope you’ll give it some consideration.

Closing this since discussion isn’t really about the original topic anymore.

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.