Well, you probably won’t believe me, but here goes.
I received my Cirklon yesterday (I had sent it back because of some cosmetic issues) and now it has a faulty encoder, so I need to send it back (again) to Berlin.
So for the moment, no hardware sequencing from the Cirklon. It will probably take two weeks to get a new one.
Hi Taktik, how is it going?
Any news about the MIDI timing improvements when Renoise is a VST host?
Perhaps a beta upload?
P.S. Why did you move the thread?
It was about MIDI in, Cirklon (or any other hardware sequencer) into Renoise,
not about internal MIDI routing.
Could you please do the same test without Loomer - just Renoise - with varying latencies, now that your device is back?
Sorry. Moved it back now…
OK, did some tests; it seems the smaller the buffer size, the more accurate the timing.
A low 64 samples is the best, while things get progressively worse from 512 samples upward at high step speeds (1/32nd-1/64th notes), while other hosts like Loomer and Reaper stay accurate at a 512-sample buffer size.
Test signal: Microtonic.
Example: a pattern with 1/64 steps @ a 2048 buffer sounds stable, but the steps are not really 1/64… they are somewhere in between 1/64 and 1/32 (see screenshot 1, where the files are aligned).
Screenshot 2 shows 1/64 and 1/32 both @ 64 samples… and accurate timing.
Here’s the file with descriptions of the buffer sizes used and the sequencer step length (Cirklon).
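A bit of arithmetic makes the 1/64 @ 2048 observation plausible. This is a hedged sketch, not anything from Renoise itself, and it assumes a 44.1 kHz sample rate (the thread never states one):

```python
# Sketch: compare the length of one sequencer step with the length of one
# audio buffer, assuming 120 BPM and a 44.1 kHz sample rate.

SAMPLE_RATE = 44100  # Hz (assumed)
BPM = 120

def step_samples(division):
    """Length of one step in samples; division=64 means 1/64 notes."""
    quarter_sec = 60.0 / BPM              # 0.5 s per quarter note at 120 BPM
    step_sec = quarter_sec * 4 / division
    return step_sec * SAMPLE_RATE

def buffer_ms(buffer_size):
    """Duration of one audio buffer in milliseconds."""
    return buffer_size / SAMPLE_RATE * 1000.0

print(step_samples(64))   # ≈ 1378 samples per 1/64 step
print(buffer_ms(2048))    # ≈ 46.4 ms per buffer
print(buffer_ms(64))      # ≈ 1.45 ms per buffer
```

A 2048-sample buffer (≈ 46 ms) is longer than an entire 1/64 step (≈ 31 ms), so if events can only be rendered on buffer boundaries, consecutive 1/64 triggers land anywhere between one and two steps apart, which matches the "somewhere in between 1/64 and 1/32" screenshot.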
Could you please let me know how exactly you are testing this?
Cirklon is sending out a periodic MIDI event pulse train.
Renoise is receiving it. How are the events played back in Renoise - is a sample triggered?
How do you record the output of Renoise and Cirklon to compare them?
As I tried to explain above, real-time MIDI input is always received and processed by DAWs once at the start of an audio buffer, which usually is as big as the audio card latency. So the buffer size effectively quantizes all incoming events.
When recording such events, and the events are timestamped, they can be backdated into the DAW’s timeline. When such events are played back live, they cannot, as they happened in the past (well, unless the entire MIDI stream is delayed by a whole buffer, but this usually is worse).
So smaller buffer sizes do improve the timing of real-time played events in general, and this should be consistent across all DAWs. From my tests, Reaper behaves the same here as Renoise. Larger audio buffers = sloppier real-time MIDI playback timing. Same with Ableton Live. I haven’t tested Loomer, though.
When comparing different DAWs, please use exactly the same audio settings, and preferably ASIO on Windows. With DirectSound things may work a bit differently.
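The quantization effect described above can be simulated in a few lines. This is an illustration of the general mechanism, not Renoise's actual code: a live event that arrives mid-buffer can only be rendered at the start of the next buffer.

```python
# Sketch: a perfectly regular incoming pulse train gets its timing quantized
# to audio buffer boundaries when events are only noticed once per buffer.
import math

def rendered_times(events, buffer_size):
    """Arrival samples -> the buffer-start samples where the host can first
    render them (the event is noticed when the next buffer is prepared)."""
    return [math.ceil(e / buffer_size) * buffer_size for e in events]

# One event every 1378 samples (~1/64 notes @ 120 BPM, 44.1 kHz assumed)
events = [i * 1378 for i in range(8)]

for buf in (64, 512, 2048):
    played = rendered_times(events, buf)
    gaps = [b - a for a, b in zip(played, played[1:])]
    print(buf, gaps)
```

At 64 samples the gaps deviate from 1378 by at most one buffer, which is inaudible; at 2048 samples two pulses can even collapse into the same buffer (a gap of 0), matching the "sloppy" recordings.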
I am amazed you’re asking this.
I have explained each and every step of the process since the beginning.
– Use of the Cirklon and the Roland Integra
- The Cirklon is sending MIDI; it’s playing a pattern at 120 BPM with various pattern step lengths (see provided files): 16th steps, 32nd steps, 64th, etc. This is the same as changing the LPB in Renoise.
- I don’t record the MIDI stream in Renoise; I record the audio output of the instrument triggered by MIDI. To record MIDI in Renoise, Renoise’s sequencer would have to be running, and this is NOT the case in any of the tests I have provided.
- The Roland Integra is used as a MIDI interface: the serial MIDI OUT 1 of the Cirklon goes into the serial MIDI IN of the Roland Integra, then the USB out of the Roland Integra is used to send MIDI to Renoise; it also functions as an AUDIO interface.
- I can also bypass the Integra as a MIDI interface and use the Cirklon’s USB MIDI out to connect to the PC, which I will do in the future (at the moment I just don’t have a spare USB cable).
- The Integra is thus also used as an audio interface.
— The use of Renoise
Under no circumstances in the test examples is ANYTHING sequenced in Renoise; it purely functions as a VST host.
I just load up the instrument ‘Microtonic’, set to pitch mode and NOT using its sequencer at all, so it functions as a synth.
I then choose the appropriate MIDI input channel to which the VST instrument responds and press play on the Cirklon. Renoise is NOT playing… the Cirklon is.
I can then easily record the audio output of Renoise’s master bus by pressing record in the sample editor.
The composite files are manually copy-pasted files, sample-aligned to show the differences between different step lengths (the screenshots in the previous example, comparing 32nd against 64th, etc.).
The other files are pure audio recordings and are not altered in any way, just plain screenshots of the sample editor.
I have not used DirectX, only ASIO4ALL.
And sorry Taktik, but Renoise does not behave as accurately as Reaper; it’s sloppy with a buffer size of 512 samples and 32nd-64th step lengths, while Reaper and Loomer Architect pass with flying colours at the exact same buffer size.
See the very first example of this thread.
I really have the feeling we’re walking in circles here; if you want to get to the bottom of this, there is no way around grabbing a hardware sequencer and doing the tests.
Using a Renoise instrument instead of a VST instrument yields the same result.
Here’s a Renoise instrument, sequenced at 120 BPM, 16th step size (Renoise is NOT playing, the Cirklon is).
Test example: sine-wave-like sound with ADSR envelope, instant attack.
ASIO4ALL @ 512-sample buffer size.
It’s sloppy as hell.
Slopp.xrns (878.8 KB)
I can load up a sampler in Reaper or Loomer, and it will be much tighter.
If Renoise behaves sloppily at a 512-sample buffer size while other hosts do not, we can say that the problem lies within Renoise, no?
Used the Cirklon’s USB MIDI out straight to the PC.
Same (bad) result at a 512-sample buffer size in Renoise.
In my tests Ableton Live and Reaper behave the same way: The higher the audio latency, the sloppier the passed through real-time MIDI playback gets.
I’ve tried to explain why this is happening here:
But Bitwig is not affected in MIDI input precision at all. I am pretty sure that Ableton and S1 will show the same result here. So obviously the MIDI is buffered in another thread/process, whatever, which is not dependent on the audio buffer at all. For example, macOS has the “midiserver” process running all the time. I would guess that the CoreMIDI API then provides a method to get the past MIDI buffer over a time frame, so you could fill in all the notes accurately for the last audio buffer’s time frame. Same for the output: once the audio buffer filling happens, you would need to fill the CoreMIDI out buffer, or something like that. And for Windows, there would be a similar mechanism.
I never programmed such a thing, and this is not intended to criticize your work, but I think your above statement can’t be true.
Also, maybe I am mixing things up here, but since you mentioned MIDI INPUT above, I am responding here. Since Renoise can already accurately record MIDI, only with a time offset, why does the compensation then mess those timings up?
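The scheme described above can be sketched roughly as follows. All names here are hypothetical, and this is an illustration of the idea, not Bitwig's or CoreMIDI's actual implementation: a dedicated MIDI thread timestamps events the moment the OS delivers them, and the audio callback later converts those timestamps into sample offsets inside the buffer it is about to render.

```python
# Sketch: decouple MIDI reception from the audio callback via a timestamped
# queue. Hypothetical API; assumes a 44.1 kHz sample rate.
import queue

SAMPLE_RATE = 44100
midi_queue = queue.Queue()

def on_midi_in(note, timestamp):
    # Called by a dedicated MIDI thread the moment the OS delivers the event;
    # the important part is that the timestamp is taken here, not later.
    midi_queue.put((note, timestamp))

def drain_into_buffer(buffer_start_time, buffer_size):
    """Audio-callback side: map queued events to sample offsets within the
    buffer that is about to be rendered."""
    rendered = []
    while not midi_queue.empty():
        note, t = midi_queue.get()
        offset = int((t - buffer_start_time) * SAMPLE_RATE)
        # Events from before this buffer would land at negative offsets;
        # clamp them to sample 0 instead of dropping them.
        rendered.append((note, max(0, min(offset, buffer_size - 1))))
    return rendered
```

Note the clamp: taktik's objection still applies to live playback, since an event that arrived before the current buffer started cannot be rendered at its true time without adding extra output latency.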
Different problem. Different thread. See Midi recording precision dependent on latency - #15 by ffx please
There’s technically no way around this, unless you add latency just to fix the jitter, or unless the audio latency isn’t the internally used buffer size, which often is the case with DirectSound.
Ah ok! That’s clever.
It’s not that I don’t believe you.
But when using the exact same BUFFER setting of 512 in Loomer Architect and/or Reaper, the result is much tighter.
They can easily process crazy amounts of superfast incoming MIDI data without a hiccup (512-sample buffer).
Renoise, OTOH, needs a buffer size as low as 64 samples to get an equally tight result.
So there has to be something that is causing this difference.
BTW, the Cirklon is a marvelous piece of gear, highly recommended.
Hi Taktik, I asked Colin (the developer of Architect) how Architect managed to get such accurate timing at a 512-sample buffer.
This was his reply. You probably know all this, but maybe there is an aha moment:
It sounds, from that description, that the MIDI message’s sample offset is not being used. MIDI is a stream of events. As these events can occur at any time, they are tagged with an offset: the event’s relative position in the buffer. Architect uses this sample offset to keep MIDI processing tight. (Caveat: eventually everything MIDI in Architect is squished to a PPQ of 960, which I think is accurate to about 8 samples at a tempo of 120 bpm.)
If this offset isn’t used, another approach is just to treat each MIDI message as if it occurred on the first sample of the buffer. For small buffer sizes, say 64 samples, this means that a MIDI message can at most be out by 63 samples. But as you have observed, once the buffer gets larger, so does the potential error.
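Colin's two placement policies can be contrasted in a short sketch. This is an illustration of the described idea, not Architect's code:

```python
# Sketch: place a MIDI event inside an audio buffer either at its exact
# sample offset (Colin's approach) or snapped to the buffer's first sample.

def place_with_offset(arrival_sample, buffer_size):
    # Keep the event's relative position within the buffer: sample-accurate.
    buffer_start = (arrival_sample // buffer_size) * buffer_size
    return buffer_start + (arrival_sample % buffer_size)

def place_at_buffer_start(arrival_sample, buffer_size):
    # Ignore the offset: the event snaps to the first sample of its buffer.
    return (arrival_sample // buffer_size) * buffer_size

arrival = 1000
for buf in (64, 512, 2048):
    error = arrival - place_at_buffer_start(arrival, buf)
    print(buf, error)  # worst-case error grows with the buffer size
```

The snap-to-start policy's error is bounded by `buffer_size - 1` samples, which is exactly why 64-sample buffers sound tight and 2048-sample buffers don't.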
Yes, that offset can be used for recording, in order to backdate the events in the DAW’s timeline. That’s what we already do. But it can’t be used when playing back events in realtime.
When playing back MIDI, the application always receives events which occurred in the past: there always is a little delay between the time you hit a note and the time the note event actually is received by the application.
The only way around this would be by adding some extra latency to the playback:
I had thought about this problem in general too, and got some Windows-specific fix ready, which may under some circumstances improve the timing a bit. Maybe it works for you, maybe not.
The trick is to boost the priority of the thread which sends the MIDI events, to ensure that the event is delivered as fast as possible by the system and that it also interrupts the audio thread. Usually the driver will and should take care of this, as in most cases the driver manages this thread, but in case it doesn’t, this may help to improve the timing of the events.
But Architect does it when playing back events (sequenced from the Cirklon), which is the reason why it’s so accurate.
Or am I missing something ?