Audio Engine


Physically mixing signals together is literally just adding them together, like 1 + 1. There is really nothing special going on there, and this is a completely trivial thing to do when processing audio as floating point values as Renoise does (along with almost every other audio app).
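To show just how unremarkable it is, here’s a minimal sketch (plain Python, but the principle is identical in any language):

```python
# Mixing two floating point signals is nothing more than sample-by-sample
# addition. There is no hidden processing involved.
a = [0.25, -0.50, 0.90]  # signal A
b = [0.10, 0.30, -0.20]  # signal B

mixed = [sa + sb for sa, sb in zip(a, b)]
print(mixed)  # [0.35, -0.2, 0.7] (within floating point precision)
```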

The process of converting a 32-bit signal to 16-bit - changing a high resolution signal into a lower resolution signal - involves quantization. This would typically be applied as the final stage after rendering, and would not be involved in any way during the mixing process. Dithering can optionally be applied in order to introduce a tiny amount of noise to the signal, which serves to mask some of the artifacts that can be introduced by the quantization process. In reality, dithering reduces the quality or ‘purity’ of the signal, but manages to trick us into perceiving something more natural.
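As a rough sketch of that final quantization stage, with optional TPDF dither (a common dither flavour; I’m not claiming this is exactly what Renoise does internally):

```python
import random

def float_to_16bit(sample, dither=False):
    """Quantize a float sample in -1.0..1.0 down to a 16-bit integer."""
    scaled = sample * 32767.0
    if dither:
        # TPDF dither: two uniform random values summed give triangular
        # noise of +/-1 LSB, added *before* rounding, to mask the
        # distortion artifacts that quantization introduces.
        scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    # Round, then clamp to the legal 16-bit range.
    return max(-32768, min(32767, int(round(scaled))))

print(float_to_16bit(0.5))               # 16384 every time
print(float_to_16bit(0.5, dither=True))  # slightly different value per call
```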

The interpolation setting in the render dialog has nothing to do with any of this, and only sets the quality of the resampling that takes place during rendering. Resampling is necessary for sample-based instruments in order to change their pitch/frequency, thereby allowing us to actually play a sample over different notes.
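Purely as an illustration of what resampling means, here is the idea using linear interpolation, only because it’s the shortest to write down (Renoise’s Cubic and Sinc options are higher quality versions of the same idea):

```python
def resample_linear(samples, ratio):
    """Read through `samples` at `ratio` times the original speed,
    estimating in-between values by linear interpolation.
    ratio = 2.0 plays the sample one octave up, 0.5 one octave down."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

# A ramp resampled at half speed: twice as many points, an octave down.
print(resample_linear([0.0, 1.0, 0.0, -1.0], 0.5))
```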

Resampling doesn’t need to be applied to the output of VST synths or VST effects, because these plug-ins are constantly outputting a signal at the correct sampling rate to match the host.

The manual confirms that Cubic is used during live playback, and also contains some extra information on the Sinc option that is worth reading:
http://tutorials.renoise.com/wiki/Render_Song_to_Audio_File

Sorry to keep banging on about it, but I’m genuinely interested to hear this ‘massive difference’ for myself. The sooner you upload some example sounds that demonstrate what you’re talking about, the sooner we can hopefully help you fix it.

Are you in the process of preparing some examples now, or?

I want to help you figure this out.

I have an issue with providing results right now, as the track I was mixing was on a RW CD which has since been erased, and the mix has now been remapped through Cubase. When I have time I will dig out an old track, remix it in Cubase, and give both examples. It’s nothing that really needs fixing; I am still happy with Renoise, I will just be using a separate audio engine for mixing. I was just wondering if anyone had come across this, or if anyone records a lot of external sources, as I know many Renoise users are big fans of VST instruments, and I have none lol. I have a few mixes to finish, then I will get on it .x

Many samplers actually add some form of brightening to counteract the dulling caused by interpolation. Possibly Battery does too. I doubt Cubase does, though.

dBlue:

Yeah, I know about dithering. If you have Renoise set so it’s operating at 96kHz internally and then render to 44.1kHz (for easy conversion to mp3 that will play on all systems), it will have to interpolate all the values, whether they come from samples or VSTi! There is no other way to calculate what they will be; that’s what interpolation is. Even nearest neighbour (in time, in this case) is a form of interpolation (although it’s the simplest and shouldn’t really deserve the name). I know the same is also true spatially, say when up-scaling video, and I would be surprised if audio summing and downward quantisation used nothing but nearest neighbour and dither. I know most video systems use sin(x)/x, which I believe is what Renoise calls Arguru’s Sinc, isn’t it? At least I believe that’s what is usually referred to as sinc interpolation, though I have no idea what it has to do with Arguru!
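For reference, the sin(x)/x kernel I mean is the sinc function. In its normalised form it looks like this (real-world “sinc” resamplers use a windowed, truncated approximation of it, not the infinite ideal):

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1.
    This is the ideal band-limited interpolation kernel."""
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

print(sinc(0.0), sinc(0.5), sinc(1.0))  # 1.0, ~0.637, ~0.0
```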

I will try and dig out the file where the sound was clearly different between Cubic and Arguru’s, but it was written on my pretty much knackered old machine and I don’t have the VST on my laptop. Trying to tell me it can only affect samples played at different rates, when I know for 100% that I have experienced otherwise, is not going to get us anywhere.

If you’re running at 96kHz during live playback and then you render to .WAV at 44.1kHz, then Renoise’s entire audio engine will be re-initialised and changed to 44.1kHz, including all of the VST plug-ins that you’re using which will then be outputting directly at 44.1kHz. When the render is complete, Renoise switches the audio engine and VST plug-ins back to 96kHz ready for live playback again. In the process of developing my own VST plug-ins, I have monitored these samplerate changes taking place.
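Very roughly, and purely as an illustration (these names are made up for the sketch, not the actual Renoise internals or the VST API), the host-side logic is something like:

```python
# Hypothetical host sketch: everything is switched to the render rate
# before a single sample is rendered, so plug-ins output 44.1kHz
# directly and nothing needs to be resampled afterwards.
def render_song(song, plugins, render_rate=44100, live_rate=96000):
    for p in plugins:
        p.set_sample_rate(render_rate)  # plug-in now generates 44.1kHz
    audio = song.process_everything(sample_rate=render_rate)
    for p in plugins:
        p.set_sample_rate(live_rate)    # back to 96kHz for live playback
    return audio
```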

So in your example you’re not getting 96kHz from the VST which is then being interpolated/resampled down to 44.1kHz by Renoise, you’re simply getting 44.1kHz directly from the VST itself. Trust me on this.

this is an interesting topic… myself, i always thought rendering in real-time mode was the most accurate, but i could be wrong. perhaps taktik could clear all this up? :]

the only difference between cubic and realtime methods is the way the process is run, not the quality of the results: if you use many HD-intensive plugins (such as VST samplers) it is recommended to use realtime mode

Exactly.

Rendering with cubic at the same sample rate you’ve been playing back at does exactly the same thing as playback. No magic here.

Try a soft crunching lofimat on the master in Renoise then ;)


I can only add the following to this discussion:

Like others already said here: the -6dB that Renoise applies on all tracks is the only thing that results in an obviously audible difference: it’s half as loud. For everything else, trust your ears and feeling. If you don’t like what you hear, change it with whatever FX - that’s what they are for. Maybe there are people who can hear the effect of dithering. But if they can hear it, they can also change it to whatever they want it to be?
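To put a number on it (simple arithmetic, nothing Renoise-specific):

```python
# -6dB expressed as a linear gain factor:
gain = 10 ** (-6 / 20)
print(round(gain, 3))  # 0.501 -- each track is scaled to about half amplitude
```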

There’s not much magic going on when simply playing back a sample, unpitched. In Renoise, Cubase or whatever other sampler/sequencer.
The only thing we can do, and have done very carefully in the past, is to be careful not to add noise, bogus quantization and the like in the engine - keeping the unprocessed signal path clean. Everything else is just a matter of taste and mental effect.

If there really is some “error” introduced somewhere in Renoise’s signal paths that we can measure, then let’s treat it like a bug and fix it. But arguing with “I think that” or “feels better in sampler XYZ” unfortunately does not help at all. On the contrary: you are just opening Pandora’s box here:

For example, mental effects are IMHO extremely important in this case. How good something sounds to you cannot really be measured, but is mainly based on feelings. Of course there are rules about what in general sounds better to most, but that’s not everything.
So if Sequencer XYZ or Sampler XYZ sounds better because of something you can’t describe, then well, you probably simply should use Sequencer XYZ to get that good feeling. The UI of the program, or the fact that your sister bought it for you and you like your sister and it reminds you of her, or [add some personal stuff here], are a very important part of the composing process. Of course you can then also hear them, just like some track sounds better when you are depressed and another one when you are feeling good.

Hey, thanks for the response. Don’t feel like I am nit-picking at the software - I think it is great, and will continue to use it for a long time. I was merely asking if anyone had noticed a tonal difference when rendering. My issue with this situation is that, if you are mixing within Renoise and it renders each track at -6dB (and this could be my issue?), when is the -6dB applied? Pre-fade on each channel? Post-fade on each channel? Will this take into account sends? Groups? If I am mixing and adding compression and EQ, and a proportion of the signal is turned down in volume, this will affect how each process reacts with the next, e.g. when using mix bus compression? I believe my issue is purely a tonal issue with Renoise’s sound engine, as I have had issues with the clarity of Logic, and I have had issues with music equipment such as microphone re-issues which fail to deliver, etc. In my professional career I just want to put sound into whatever I choose to use and have it spat out the other end relatively unaffected, which was my issue; mixing through Cubase has resolved this for me. I hope I haven’t upset anyone, but it has raised some pretty interesting arguments, and it also seems many people don’t understand the true workings of this mighty software. With Cubic, Sinc, soft clipping, dithering etc. there are a lot of options that could alter a mix quite drastically .x

Maybe so but there are still differences which are due to VSTs and here is the proof!!!

(My FTP/website seems to be throwing a wobbly, so I’m trying to upload to Rapidshare at the moment… In fact, things in general just don’t seem to be going as they should today!!!)
http://rapidshare.com/files/427458113/Kazakore_-_Renoise_Interpolation_Differneces.zip
EDIT: Server now running so http://www.deaddogdisko.co.uk/Stuff/Kazakore%20-%20Renoise%20Interpolation%20Differneces.zip for those that don’t like Rapidshare.

Both are using Real Time mode. AS = Arguru’s Sinc, C = Cubic. The other file has been created by inverting one and mixing the signals together in SoundForge (MASSIVE output from this, quite surprisingly).
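For anyone who hasn’t tried it, the invert-and-mix (null) test amounts to this little sketch: identical renders cancel to silence, and anything left over is the difference between them:

```python
def null_test(render_a, render_b):
    # Invert one render and sum: identical samples cancel to exactly 0.0.
    return [sa - sb for sa, sb in zip(render_a, render_b)]

residual = null_test([0.5, -0.25, 0.1], [0.5, -0.25, 0.0])
print(residual)  # [0.0, 0.0, 0.1] -- non-zero only where the renders differ
```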

The main differences can very clearly be seen where DFX Buffer Override and the Waldorf D-Pole filter are used.

I can provide the .rns (Renoise 2.0), but sorry to say it was one of the first two tunes I ever made when getting back into music and finding Renoise, before I went freeware, so there are some warez involved :(

Try and explain that to me then!!

Also, not all VSTs use the same sample rates, so if you have one that can only work at 44.1kHz and Renoise is running at 96kHz internally, how does it deal with that without interpolation, whether the sound has come from a sample or a virtual synth? Although, looking at the difference between the waves and re-listening again, I would say that is far too much difference to have been caused by interpolation of any kind!

Downloading now. It’s gonna take me a while to download since my rapidshare pro account expired, but I’ll check out the sounds as soon as possible.

To the best of my knowledge, Renoise will process the VST plug-in at 96000 samples per second, regardless of whether the plug-in supports that sample rate or not. If you do happen to run into a plug-in that does not support higher sample rates, then it’s quite easy to spot.

If it’s an instrument, then it’s going to be generating audio at the wrong rate, which will be very obviously out of tune. I’ve run into this a few times myself with certain buggy instruments that don’t actually respond correctly to sample rate changes, where they will occasionally get stuck in the wrong mode and be completely out of tune.

If it’s some kind of time-based effect, then all of the timings will be off. If you were running Renoise at 96kHz and the plug-in only supported 48kHz, then the plug-in’s internal timings would be off by a factor of two, for example.
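As a worked example (illustrative numbers only, not measured from any particular plug-in): a synth stuck believing the wrong internal rate plays out of tune by the ratio of the two rates.

```python
internal_rate = 48000  # rate the buggy plug-in thinks it is running at
host_rate = 96000      # rate its output actually gets played back at
note_hz = 440.0        # the pitch the plug-in intends to generate

heard_hz = note_hz * host_rate / internal_rate
print(heard_hz)  # 880.0 -- a full octave out of tune
```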

Anyway, I will check out your example sounds in a little while and post any thoughts.

Cheers dblue.

Sorry if I’ve sounded a little arsey in any of this; you obviously have a lot more experience in this particular field. But I was reminded of the example posted (the only time I can remember such extreme results that everybody should be able to hear very easily), and as I know for a fact it’s not due to samples played at different pitches, and the only difference is the interpolation method of the render, it makes me think this must apply to more than just resampling for pitched sample sounds.

I just don’t render. I record the sum using Voxengo’s recorder plugin. The result is excellent.

Alternate link up:

http://www.deaddogdisko.co.uk/Stuff/Kazakore%20-%20Renoise%20Interpolation%20Differneces.zip

I’ve nearly never used “render”, man. I’ve always used software that simply and directly records the DirectSound output on Windows. During my first extensive tests with Renoise 1.9 (I became a registered user with version 1.9, but I started working with version 1.5), I even noticed that rendering songs containing some VSTis could produce slightly inaccurate results (for example VSTi synths like UGO Motion 2.8, with its integrated step sequencer), whatever the export mode chosen (Cubic or Arguru’s Sinc), so I definitively dropped the idea of “rendering” songs.

There are indeed some very obvious differences in your example files. Just a quick glance at the overall waveforms reveals some clear physical differences, but they’re so extreme that it makes me believe something else may be to blame. It’s difficult to say what exactly, because your example files are really rather complex and detailed, and I have no idea what is going on behind the scenes.

One unfortunate problem with Buffer Override that I just want to get out of the way first, is that it gives you very different results if you render at different sample rates. The exact same source material will sound different when processed at 44.1kHz, 48kHz or 96kHz, for example. I suspect this is due to the plug-in not being programmed in a way that allows it to be totally independent of the sample rate, resulting in some internal counters performing slightly differently, or looping/wrapping at slightly different points in time, etc. I assume that both of your examples were rendered at the same sample rate, so we can hopefully rule this one out.
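Just to illustrate the kind of programming issue I mean (this is not Buffer Override’s actual code, only a toy sketch of a rate-dependent counter):

```python
# A buffer length hard-coded in samples, rather than derived from seconds,
# loops at a different point in *time* at every sample rate.
BUFFER_SAMPLES = 4410  # fixed counter, not sample-rate aware

for rate in (44100, 48000, 96000):
    ms = 1000.0 * BUFFER_SAMPLES / rate
    print(f"{rate} Hz -> buffer wraps every {ms:.2f} ms")
# 44100 -> 100.00 ms, 48000 -> 91.88 ms, 96000 -> 45.94 ms
```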

Something else that can easily result in these kinds of differences is automation by LFO. If you do not explicitly reset your LFOs to a default value at the start of your song, then each time you render you’re going to get slightly different results, due to differences in the phase of the LFO cycle. If you are controlling any of Buffer Override’s parameters with an LFO without resetting the LFO to a default position first, then Buffer Override is definitely going to behave differently each time you render the song.
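A tiny sketch of why (nothing plug-in specific, just the phase issue):

```python
import math

# If an LFO free-runs between renders, each render starts from whatever
# phase the LFO happens to be in, so the controlled parameter (and hence
# the rendered audio) begins somewhere new every time.
def lfo_value(phase):
    return math.sin(2.0 * math.pi * phase)

for start_phase in (0.0, 0.37, 0.82):  # three hypothetical render starts
    print(round(lfo_value(start_phase), 3))
# 0.0, 0.729, -0.905
```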

Another thing to keep in mind, which I hadn’t fully considered at first, is how different interpolation methods could potentially alter the source sound in some strange way, and then cause the VST to generate unexpected output because of this. It’s already been established that Cubic and Sinc can each result in subtle (and sometimes not-so-subtle) differences in how sample-based instruments will sound. If you are sending those sample-based instruments through a DSP chain, then any changes in that source sound could have a cumulative negative effect.

In other words, if your sample sounds noticeably different when in Sinc mode, this could very easily be amplified or worsened by the DSP chain, or it could result in slightly different pieces of sound being buffered and processed by the effects. Something which may not be audible in Cubic mode could suddenly appear in Sinc mode, for example (or vice versa, I suppose). Buffer Override could easily pick up a tiny portion of the audio which had a slightly different tone or different amplitude level, and then exaggerate it even further than expected. I can easily imagine this happening with filters, too, which can often be very sensitive to changes in sound.

Overall, I think a combination of these things could possibly be responsible for the differences in your examples.

Edit:
The output of the VST is not being resampled, but the effects of resampling the input going into the VST can certainly cause the plug-in to produce different results in certain situations.

Anyway…

Some quick examples of my own:
dblue-bufferoverride-test.zip

First off, you can hear how different Buffer Override sounds at each sample rate:

  • dblue-bufferoverride-test-44-cubic.wav
  • dblue-bufferoverride-test-48-cubic.wav
  • dblue-bufferoverride-test-96-cubic.wav

Then you can compare Cubic vs Sinc (and also the difference). I’ve made a rather silly example song with a very nasty ‘dirty’ sinewave, designed to really show the differences between Cubic and Sinc, and how this would result in a very obvious difference when comparing them with the phase inversion trick:

  • dblue-bufferoverride-test-48-cubic.wav
  • dblue-bufferoverride-test-48-sinc.wav
  • dblue-bufferoverride-test-48-diff.wav

Finally, a very quick example of an LFO rendering slightly differently each time:

  • dblue-lfo-test-1.wav
  • dblue-lfo-test-2.wav
  • dblue-lfo-test-3.wav

I’ve included the .xrns files if you wish to recreate my results.

PS. Don’t worry about being arsey. I didn’t get that impression from you, and there’s no hard feelings here from me.

As I said they are very, Very different!

Both are the same song, rendered straight after each other with nothing changed but the settings. I’m fairly sure that at the time I was using Renoise at 44.1kHz and all the render settings were the same as in production (although I did notice some weird bits then as well, iirc, but that is a fair few years ago now).

No LFO was used in this song at all, so that’s definitely not the culprit here. I also did quite a few renders of different versions of the track (the kick at the end was a very late addition) and always suffered the same or similar differences.

EDIT: Most worrying and notable was always where it kicks back in around the 2min mark. It seriously drops out on the Cubic version.

OK, they are sampled sounds (hi-hats and snares), but all played at C4 with no pitching applied to them, so by your earlier argument no interpolation should have happened, yet there is still a difference in this instance.

I will digest this better over time. I found out about an hour ago that I have to be in work for 0600 tomorrow (usually 0730) and have had a pretty stressful day of nothing quite going how it should (and me near enough putting a hole through my bathroom door to cap it off).

Cheers.

PS Just doing a couple of renders from fewer tracks (minus D-Pole, GEQ7 and Waves C1 in places) to see how notable it is then…

Is it possible to save/move parts of a song easily, including samples and DSP chains? I guess the simplest way would be to just delete all the Tracks and DSPs you don’t want to keep…

EDIT2: Done and had a look. The differences in the waveforms are a lot more subtle but still exist, although they are audibly much less obvious (although I am very tired, haven’t got a great set-up at the moment, and really need to install the latest 2.6 so I can use AutoSeek to easily switch between the two playing at the same point in time. The Mixer Snapshots script is actually a wonder for doing this kind of thing within Renoise!)

I’m working on a really crappy system here (a pair of cheap speakers and only software, mainly Renoise and Reaper). Mostly I use a 44100 sampling rate, as most of my samples are not very high quality (though I take advantage of internal oscillators and manipulating them with DSP FX). I’d be very interested in what kind of rendering setup you guys are coming up with. This is indeed a very interesting topic for me, though I cannot contribute much.

(sry I’m drunk and English is not my native language :) )

Cheers!