Hi, there is something I don’t understand about Renoise’s 65,536 PPQ resolution.
It says “With the new timing engine added in Renoise 2.0 you can place events with an insane maximum resolution of up to 65,536 PPQ. This high precision allows live recordings to retain their natural, human feel.”
Why is it so high, and how can it be so high?
If I am correct, at 120 BPM that corresponds to 60 / 120 / 65536 ≈ 7.63 µs per tick… even at a 96 kHz sampling rate that’s not even one sample!?
You can do the math: even at 30 BPM, an ultra-slow tempo, it is still only about 30.5 µs.
So I don’t know how Renoise can really efficiently place an event at this resolution, but more importantly, I don’t understand the usage of it, since even 4096 PPQ at 120 BPM would be about 0.12 ms, which is just crazily good for a human performance…
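For concreteness, here is that arithmetic as a small Python sketch (nothing Renoise-specific, just restating the numbers above):

```python
# Tick duration at a given BPM and PPQ, compared to the audio sample period.
def tick_duration_us(bpm, ppq):
    """Length of one tick in microseconds: one quarter note lasts 60/bpm seconds."""
    return 60.0 / bpm / ppq * 1e6

sample_period_us = 1e6 / 96000            # one sample at 96 kHz ≈ 10.42 µs

print(tick_duration_us(120, 65536))       # ≈ 7.63 µs, shorter than one sample
print(tick_duration_us(30, 65536))        # ≈ 30.5 µs
print(tick_duration_us(120, 4096))        # ≈ 122 µs ≈ 0.12 ms
print(sample_period_us)
```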
Each line is divided into 256 slices; you can assign an event to any of these 256 slices using the delay column.
So each line has a resolution of 256.
If you set the LPB to 256, using the F1FF command, you will be able to reach this enormous resolution, provided that your CPU and your brain can afford it.
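The multiplication behind the advertised figure, spelled out:

```python
# The two pattern-editor limits that multiply out to the advertised resolution:
delay_slices_per_line = 256   # note delay column, hex 00..FF
max_lines_per_beat = 256      # maximum LPB setting
print(delay_slices_per_line * max_lines_per_beat)   # 65536
```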
Often software allows for detail greater than what can be displayed/heard/whatnot. Often, this allows for greater possibilities when processing source material… for example, many analog-modeled plugins use a crazy 64-bit internal mixing depth to avoid certain processing errors typical of digital sound processing. Renoise’s seemingly crazy 65,536 PPQ could actually be useful, for instance, if you’re doing something like placing a very sped-up .wav sample (say +4 octaves) and you want to vary the sound ever so slightly on every placement. The high PPQ would allow you to get a textured, wavetable-scrubbing-like effect out of one individual .wav sample without sacrificing sample timing. Even though only every Xth data sample of the original waveform is being processed, it’s choosing a different data sample every time based on your micro-placement of the waveform… and voilà! More texture!
This is just one possible use of such capabilities. I’m sure there are many others.
I understand what you say, and also that if your LPB is 256 and you have 256 events per line, you get a PPQ of 65,536.
But now we are talking about two different things, and overall it does not seem that Renoise, as a whole, can resolve to a resolution of 65,536 PPQ.
Well, the sequencer does, but the Renoise audio synthesis engine won’t follow, so overall it is a bit misleading for the authors to claim you can achieve that in Renoise.
I might be wrong in my conception of what PPQ is, and please correct me if I am wrong, but I thought it represented an actual timing resolution, not a virtual or theoretical software notion that ultimately quantizes your events in an unknown fashion.
For example, if you take some old Commodore 64 tracker and a tempo of 125, events typically occur at 50 Hz (the PAL VBL rate). That’s 60 × 50 / 125 = 24 PPQ.
Some of those trackers can also speed up their event-processing loop by a certain factor; I have seen 12x (600 Hz) a bunch of times, which would be 60 × 600 / 125 = 288 PPQ.
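The conversion in both examples is just the event rate divided by quarter notes per second:

```python
def ppq_from_event_rate(event_rate_hz, bpm):
    """Ticks per quarter note = events per second / quarter notes per second."""
    quarter_notes_per_second = bpm / 60.0
    return event_rate_hz / quarter_notes_per_second

print(ppq_from_event_rate(50, 125))    # 24.0  (PAL VBL at tempo 125)
print(ppq_from_event_rate(600, 125))   # 288.0 (12x multi-speed)
```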
Like you say, this finer resolution allows interesting sound manipulation, and in the case of the C64 it allows something similar to granular synthesis.
However, on the C64 we really get those 288 PPQ, meaning that each event will be immediately translated into sound, since the SID chip synthesizer runs asynchronously at nearly 1 MHz.
Also, if a key is pressed by the user, that event is quantized to 288 PPQ.
That’s how I understand this notion of PPQ: it is an actual timing resolution that quantizes all IN + OUT events to it.
In software sound synthesizers (like Renoise), samples are usually processed in blocks of, let’s say, 8–16 samples. I would be surprised if Renoise were an exception and did crazy per-sample processing where events can change DSP parameters in between individual samples.
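To illustrate the concern: a naive block-based engine only applies parameter changes at buffer boundaries, which quantizes event timing to the block size. A minimal toy sketch (hypothetical names, not Renoise’s actual engine):

```python
BLOCK_SIZE = 16  # samples per processing block (hypothetical figure)

def render(num_samples, events):
    """events: list of (sample_position, new_gain). In a naive block-based
    engine, each change only takes effect at the start of its block, so
    timing is quantized to BLOCK_SIZE samples.
    (Assumes num_samples is a multiple of BLOCK_SIZE.)"""
    gain = 1.0
    out = []
    for block_start in range(0, num_samples, BLOCK_SIZE):
        # apply all events scheduled anywhere inside this block, up front
        for pos, g in events:
            if block_start <= pos < block_start + BLOCK_SIZE:
                gain = g
        out.extend([gain] * BLOCK_SIZE)
    return out

# An event placed at sample 5 audibly lands at sample 0 of its block:
print(render(32, [(5, 0.0)])[:8])   # [0.0]*8 -- the change was pulled earlier
```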
I guess this would be a highly technical question for the renoise developers ?
Well, it seems to me that if I put LofiMat with a sample rate of 4 kHz on a track and mess around with fine-tuning my sample timing, I’ll get different sounds out of the track, even though LofiMat restricts output to 4 kHz. This is because the background processing runs at a much higher rate than 4 kHz… if it were restricted to the same rate as the sound output, I would not be able to achieve a difference in sound by messing with fine sample timing. PPQ, as I understand it, is typically a term used for MIDI timing and such… internal messaging. If the PPQ matched the sample rate exactly, a message to cut the volume wouldn’t reach the sequencer until the following data sample was triggered, thus creating a one-sample delay in the effect timing. Thus, it’s entirely possible, and preferable, to have a PPQ higher than the sample rate, if only to avoid such latency.
The sample decimation you are talking about with LofiMat is audio DSP processing that is independent of the events (which involve PPQ) when it does the actual processing.
It can even achieve beyond one-sample resolution with interpolation… it can do whatever it wants, in fact.
PPQ only comes into the game when the sequencer changes parameters related to the algorithm.
It basically says how accurately in time we can send (or receive) that parameter change to the algorithm so that it changes its behavior accordingly.
During the sending or capture of events, I still cannot understand how having even one sample of accuracy could be useful (especially for the capture of human performances, and that’s what Renoise is advertising).
On top of that, like I said before, 65,536 PPQ goes even higher than the sample rate, and I don’t believe the Renoise sound engine can do anything with this accurate timing information that the user is able to enter in the sequencer.
That’s precisely my technical question to the developers: what is the real accuracy of the sound processing versus the one presented in the sequencer?
Samples. All events are scheduled sample precise down in the bowels of the engine.
The term PPQ indeed is very confusing and comes from a time when sequencers only spat out MIDI events and did not support audio files. The base timing in nearly all of today’s sequencers will be the sample rate.
I don’t know why the 65,536 made it into the announcements for Renoise, but it’s just the maximum note resolution that you can enter in the pattern editor → 256 LPB × 256 note delay steps. If you exceed the sample rate with this, you will end up with double-processed samples. We would have to oversample internally to go beyond that rate.
So Renoise internally uses samples as the timing base, but our current sequencers (pattern editor, automation) only allow you to enter stuff into the line/sub-line grid, whose maximum is the max LPB and note delay resolution. When, for example, PDC is running, it will shift events sample-precisely back and forth. It’s just about how we can “enter” stuff now…
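That collapse onto the sample grid can be illustrated like this (my own sketch, not Renoise internals): converting tick positions to sample indices at 120 BPM / 44.1 kHz maps several adjacent ticks onto the same sample once the tick rate exceeds the sample rate.

```python
def tick_to_sample(tick, ppq, bpm, sample_rate):
    """Sample index of a tick: ticks -> seconds -> samples, rounded to the grid."""
    seconds = tick * 60.0 / (bpm * ppq)
    return round(seconds * sample_rate)

# At 120 BPM / 44.1 kHz, one quarter note is 22050 samples but 65536 ticks,
# so roughly three neighbouring ticks share each sample:
samples = [tick_to_sample(t, 65536, 120, 44100) for t in range(6)]
print(samples)   # [0, 0, 1, 1, 1, 2] -- neighbouring ticks collide
```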
To me, the PPQ says at what resolution, with respect to the BPM, all the events will be quantized.
So that’s precious information. On the C64 and Amiga (which I guess hosted the fathers of Renoise), and even the Game Boy (hehe!), PPQ has a direct equivalent: if I can place events in their trackers, I am sure those events will be processed in a timely fashion.
If Renoise can indeed process new events every single sample, then yes, I agree the notion of PPQ may not be useful anymore.
But I am actually surprised that Renoise does that. Does it mean that it processes one sample, then looks if there are new events to be processed for the next sample, then processes that next sample, then looks again at the list of events, etc.?
Sorry to always come back to this subject, I am just being curious!
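For what it’s worth, sample-accurate event handling doesn’t have to mean a queue check before every single sample; a common approach is to split each buffer into runs between event positions. A toy sketch of that idea (assuming a sorted event list, not actual Renoise code):

```python
def render_buffer(num_samples, events):
    """events: sorted list of (sample_index, new_gain). Instead of checking
    the queue before every sample, split the buffer into runs between events:
    sample-accurate timing without per-sample branching."""
    out = []
    gain = 1.0
    cursor = 0
    for pos, new_gain in events + [(num_samples, None)]:
        out.extend([gain] * (pos - cursor))   # render the run up to the event
        cursor = pos
        if new_gain is not None:
            gain = new_gain                   # apply the change exactly at pos
    return out

print(render_buffer(8, [(3, 0.5)]))  # [1.0, 1.0, 1.0, 0.5, 0.5, 0.5, 0.5, 0.5]
```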