Offline Rendering Ultra Slow

Not a single thing I said in that thread was incorrect; I just couldn't be bothered with your attitude, so I decided to walk away. I suggest you take a long, hard look in the mirror! (EDIT: that's only half true. I have a lot of respect for what you have brought to this forum, and my initial post was unnecessarily bluntly written. Even if I disagree with you and think you might have a personality disorder, there was no need to continue such a discussion, really. I've come very close to sending you an apology message multiple times since, but really wasn't sure if that would make things better or worse…)

Or maybe you want to go and show me anywhere within the program Renoise where the bandwidth of the EQ is labelled Q? It's only in the manual that this term is incorrectly used. And your description of Q was 100% incorrect!

ROFL! :lol: After finally trolling and laughing you walked away. Yeah, you're really an innocent and deeply good guy. :) What you laughed about may forever stay your very own secret. Because you actually made yourself look like a fool, not knowing what the 'Q' settings in a Renoise EQ really do. Not a single thing was wrong? Related to the Renoise EQ, everything you said was wrong. You picked up some knowledge somewhere on the net and thought it would make you look smart. As I already said in my first reply in that thread, I am absolutely aware of the usual handling and meaning of Q. The difference between you and me is just that I am also aware of how it's handled in Renoise.

Nothing wrong there. :guitar:

Well, if the official manual is not reference enough, then I really can't help you. My description of Q, related to the practical handling in Renoise and referring to the manual, was 100% correct. No matter how you turn it, and no matter how many Waldorfs and Statlers from the Muppet Show add reputation to your posts.

There is not much we can do to help you with this example, because 2.7 didn't have track groups and the multitap delay wasn't in 2.7 either, so there is no way we can convert this 2.8 example to make it work in 2.7 and supply you an honest conclusion about the differences here. I hope you can understand that.

My test facts for at least 2.8 (clocked with a stopwatch) on a quad core 2.4Ghz:
44,16 cubic, low speed:3.06
44,16 cubic, high speed:1.90

44,16 Arguru, low speed:48.75
44,16 Arguru, high speed:43.50

44,16 Arguru, low speed:47.27 (all fx removed)
44,16 Arguru, high speed:42.27 (all fx removed)

And as Kazakore already explained, the difference in rendering times between cubic and Arguru has always been huge because of the huge quality difference, so nothing new to announce here.

I picked one excerpt pattern from my Billie Jean demonstration, which uses only samples, a few EQ10s, a gate and a reverb, and rendered the same snippet in 2.7.2 and 2.8:

[Billie Jean excerpt, sequence 4, pattern 2]
Renoise 2.7.2
44,16 cubic, low speed:7.68
44,16 cubic, high speed:0.96

44,16 Arguru, low speed:31.06
44,16 Arguru, high speed:24.36

Renoise 2.8 (64-bit!)
44,16 cubic, low speed:1.68
44,16 cubic, high speed:1.25

44,16 Arguru, low speed:24.41
44,16 Arguru, high speed:21.78

Conclusion:
Renoise 2.8 renders faster. Only with the cubic high-speed rendering did it go over a second, but I also got a hiccup before it said "done", so if I hadn't been distracted by that, I suspect it was actually faster.

It may be a different story when using track groups and effects that have been changed or added, but if you create a song with the same structure as you would in 2.7, you will notice improvements. Using new features will always have an impact on performance, but imho it is too easy to jump to conclusions without doing the full math.

Is this really true? I stopped rendering in Arguru mode a long time ago, because I couldn't notice any quality differences… I think I also read something on the forums about the cubic setting being good for synthetic sounds and Arguru for acoustic sounds, but I can't remember the details.

Q is a very clearly defined term! Just because Renoise calls the bandwidth in octaves "Q" in the manual pages (although it correctly terms it Bandwidth everywhere in the program itself) does not make an incorrect description of Q correct.
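For what it's worth, the two parameterisations being argued about here are related by a standard formula, so neither label adds information the other doesn't. A minimal Python sketch (the function name is mine, not anything from Renoise) converting a bandwidth in octaves to the equivalent Q:

```python
import math

def octaves_to_q(n_octaves: float) -> float:
    """Convert filter bandwidth in octaves to the equivalent Q factor.

    Standard relation for a peaking/band filter:
        Q = 2**(N/2) / (2**N - 1)
    where N is the bandwidth in octaves.
    """
    return 2 ** (n_octaves / 2) / (2 ** n_octaves - 1)

print(round(octaves_to_q(1.0), 3))    # 1 octave bandwidth  -> Q ~ 1.414
print(round(octaves_to_q(1 / 3), 2))  # 1/3 octave bandwidth -> Q ~ 4.32
```

The sanity checks match the commonly quoted values (one octave corresponds to a Q of about 1.414, a third of an octave to about 4.32), which is why describing the control either way can be made consistent.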

If I didn’t understand these principles well do you think I would be working as a Broadcast Engineer for the world’s largest broadcaster? Although admittedly as much of my work, if not more, is with moving images rather than audio many of the same principles as valid to each.

Yes, the difference is huge. The more processed the sound is (especially pitched), the bigger the difference you will notice. The sinc is very, very slow compared to cubic.
You can listen to a simple test here.

Thanks for the link! I will render in Arguru from now on for that extra goodness ;).

Took my old Dell vostro Core2 duo (2GHz) laptop 60 seconds to render in 44.1kHz 16bit, Priority high, Arguru’s Sinc.

I don't recommend using the Arguru render anyway; it distorts the signal in a few cases where a looped sample doesn't behave like it wants to. I can't remember the exact reason, but listen to this example I rendered a long time ago (cubic render first):

http://soundcloud.com/music-in-progress/test-cubic-sinc

I have also had one or two files where I got unwanted artefacts when using sinc interpolation, and have mostly used cubic since. Often the differences are very small, and generally the sinc should be the more accurate, but in some circumstances weird things can happen…

It goes both ways as I mentioned in the post I linked to.
So you have to try different interpolations. For each instrument.
To make this an easier process I would love to see sinc as a realtime option as well, so you can design sounds directly with sinc. Also, to be able to render from the render dialog directly to a sample slot.
And finally, a new interpolation in between cubic and the current 512-point sinc.
It’s been suggested a few times before.

As everything in the audio world there is no “best” way, just different ways.
If the sinc did not change much of the sound, then what would be the point of having it? :)

Anyway, on topic, I have not noticed any significant differences between 2.7 and 2.8.
But we must rely on real A/B tests here and try to find the evidence. Could there be any special device combination causing it to slow down?

It's also very clearly defined in the Renoise manual. If you're not flexible enough to understand or accept things that are different from your view, or from what you found on the internet, that might be a personal limit. As long as it's described as Q in the manual, I will refer to it as Q. Because explaining things to someone new to Renoise, only to afterwards tell him "but actually it's all upside down here and totally different", doesn't make any sense. Maybe it does in your head. But not out here, where I am.

Oh, really in the program? Where? Or did you mean just somewhere in the code?

Should I be impressed now? Sorry, I'm not. I don't care what you do. Working at an automobile manufacturer doesn't make every employee a skilled race driver.

I'm fed up with discussing things like that, or nuts stuff like "software branding is no copy protection". You've tried a lot of things, from calling me a "megalomaniac" and a "retard" to several other things, which actually tells all about your intentions. I'm still not gonna sink to that level, dude. Keep up your subtle trolling and your hyena-like lurking for mistakes, while chickening out of a straight compo duel. I don't care. You're on 100% ignore from now on.

@psyj If "Arguru's sinc" is based on a sinc impulse response, I don't think it's possible to use it in realtime, because the signal at any point in time depends on information from the future.
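That lookahead requirement is easy to see in code. Below is a toy windowed-sinc interpolator (a generic sketch of the technique only, not Renoise's actual implementation; the function names and the Hann window choice are mine):

```python
import math

def sinc(x: float) -> float:
    """Normalised sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def sinc_interpolate(samples, position: float, taps: int = 16) -> float:
    """Windowed-sinc interpolation at a fractional sample position.

    The kernel is symmetric around `position`, so it reads samples
    *after* the point as well as before it. In a realtime stream those
    future samples don't exist yet, so a realtime version would have
    to buffer ahead and accept the corresponding latency.
    """
    center = int(math.floor(position))
    frac = position - center
    total = 0.0
    for i in range(-taps + 1, taps + 1):  # taps on BOTH sides of the point
        idx = center + i
        if 0 <= idx < len(samples):
            x = i - frac  # distance of this stored sample from the target point
            if abs(x) < taps:
                # Hann window tapers the infinite sinc down to a finite kernel
                window = 0.5 * (1.0 + math.cos(math.pi * x / taps))
                total += samples[idx] * sinc(x) * window
    return total

# Landing exactly on a stored sample returns (essentially) that sample
print(sinc_interpolate([0.0, 1.0, 2.0, 3.0, 4.0], 2.0))  # ~2.0
```

An offline render can simply index ahead into the sample data, which is why this quality/latency trade-off only bites in realtime playback.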

@all:

Thanks for testing! I'm gonna try to set up something similar (like the file I posted) for 2.7.2 or maybe 2.6. Meanwhile I also thought it might be things like grouping, or the massive use of detuned samples that Psyj mentioned, that could cause this.

I've often wondered how many points Renoise's sinc function worked with, and now I know :)

512 seems pretty high and may help explain why I had the weird artefacts, which I'm pretty sure happened during parts where I already had audible aliasing in normal playback (i.e. cubic mode), as working with a large number of points at such high frequencies is likely to expose this.

A lower-order sin(x)/x function may well be worth experimenting with. There are also other interpolation techniques that may be interesting, but beyond the sinc function I have to admit my knowledge becomes sketchy, and most of the ones I have looked into are for multi-dimensional signals (two dimensions and up, for pictures) rather than the single dimension of an audio signal.
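One well-known lower-order windowed sinc is the Lanczos kernel, which keeps only `a` lobes of sin(x)/x per side. A sketch (again a generic illustration; nothing here is taken from Renoise):

```python
import math

def lanczos_kernel(x: float, a: int = 3) -> float:
    """Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, zero outside.

    With a = 2 or a = 3 this needs only a handful of taps per output
    sample, versus hundreds for a long sinc, trading some stopband
    rejection (i.e. aliasing suppression) for speed.
    """
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    # sinc(x) * sinc(x/a) expanded: a * sin(pi*x) * sin(pi*x/a) / (pi*x)^2
    return a * math.sin(px) * math.sin(px / a) / (px * px)

print(lanczos_kernel(0.0))  # 1.0 at the centre tap
```

Lanczos is mostly known from image scaling, but the kernel itself is one-dimensional, so it sits naturally between cubic and a long sinc for audio resampling too.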

I could call an apple an orange on a piece of paper or a website but it wouldn’t make it correct!

Never heard of a little thing called Automation?

Also when querying the parameter with the API.

I haven't done the test with 2.8 32-bit though (I haven't downloaded the 32-bit version since the beta period), so that is one fair test analysis that is lacking here. Speed perhaps won't decrease much between 2.7 and 2.8, but I suspect it won't rise either. Plus, a fair amount of processing effects heavily armed with deep parameters might indeed raise the CPU consumption and thus increase the render time.

I just tested this again. I tried rendering a song - just three tracks playing, no groups - and it is painfully slow rendering with sinc. Then I tried removing some mastering compressors and EQs (3rd party stuff: Ferric TDS and SonEQ) from the master channel. Rendering speed did not change. Then I tried removing all "big" effects from the song, especially the TAL reverb. Still no change in the rendering speed.

Comparison is a problem though, because I can only compare the current rendering speed with how I remember it being, i.e. much faster. I have never used cubic, so that is not it. I use Renoise 2.8 32 bit, so this has nothing to do with bridging over plugins. Big projects with multiple tracks, groups, sends and various effects can take 10-20 minutes to render and I am pretty sure it never took more than 5-7 minutes on 2.7.

I can live with this, but it is annoying, and I am surprised that more people don't seem to have the problem.

It sounds exactly the same over here.

It's important to understand that interpolation only applies to samples which are being played at different pitches, i.e. not their base note, or at a different rate than they were recorded at. For example, if a sample plays back at its original 44.1kHz rate on C-4, and you play a D#4 note in your song, the sample must be resampled to play at that new pitch. This is also true if you have a 44.1kHz sample and you render the song at 48kHz - the sample must be resampled from 44.1kHz to 48kHz.
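Both cases boil down to the rate at which the source sample is read. A rough sketch of how that ratio falls out of the note offset and the sample rates (the function name is mine, purely for illustration):

```python
def resample_ratio(semitone_offset: float,
                   source_rate: float = 44100.0,
                   target_rate: float = 44100.0) -> float:
    """How many source frames are consumed per output frame.

    A pitch shift of n semitones scales the read rate by 2**(n/12);
    rendering at a different sample rate scales it by source/target.
    Any ratio != 1.0 is exactly the case where interpolation kicks in.
    """
    return 2.0 ** (semitone_offset / 12.0) * source_rate / target_rate

print(round(resample_ratio(3), 3))                # D#4 from a C-4 base: ~1.189
print(round(resample_ratio(0, 44100, 48000), 3))  # 44.1kHz sample rendered at 48kHz: ~0.919
```

The fractional read positions this produces are why an interpolation kernel (cubic, sinc, …) is needed at all: most output frames fall between two stored samples.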

The output from DSP devices and VST/AU plugins is never resampled, so the interpolation mode has absolutely no effect over them.

If you want to benchmark the different interpolation modes, then your test song should consist of sample-based instruments being played at a variety of different pitches. This is where the resampling will come into play, and where you will see a clear difference in rendering speed between cubic and sinc. I would also advise you to create the test song in 2.7, so that it can be rendered in both 2.7 and 2.8 for a proper comparison.

If you’re trying to do a more general benchmark of 2.7 vs 2.8 rendering, it’s probably still a good idea to use a song that was designed for 2.7, ie. no groups or other new 2.8 features. Try to establish a good base measurement first, just to see if they’re even remotely similar.

Pfff… little, as usual… the kind of aggressive/arrogant answers I already know from you, as opposed to your shares… as usual, with _ or - or a comma or whatever between bit and arts…

edit: have a nice day…

@Subskan and all others, a kind request to stay on topic, or I'm going to remove off-topic replies from this thread to stick to the reported problem.