I have a question. I don't really know anything about audio programming, but I'm having a discussion on another board about how channels get mixed together in an audio application.
My point was:
If you set your output (in one application) to 48 kHz, everything in this app, channel-wise, is processed at this sample rate.
So the audio data gets processed 48,000 times per second.
I don't know exactly, but I think that most (if not all) common software handles audio streams internally as 32-bit floating-point values, so that's the bit depth at which everything gets mixed.
Every single track output begins with the VSTi or the sample (in Renoise's case, the sample would be a kind of internal VSTi) or whatever.
This VSTi receives MIDI data and calculates the output for this single instrument.
Once everything is calculated, the VSTi sends this output on for further processing, with the stream still at 48 kHz / 32-bit float.
Now this signal goes to the effect plugins for further processing.
They read the data (48 kHz / 32-bit float), process it, and send it on to the next plugin (still 48 kHz…).
Inside each effect plugin, or the VSTi itself, the signal can be calculated at whatever internal precision the plugin is programmed for, but the result is sent to the next part of the chain as 48 kHz / 32-bit float (in this case).
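To make my picture concrete, here's a rough Python sketch of that chain: each stage takes a block of 32-bit float samples and returns one in the same format. The function names (`gain_plugin`, `run_chain`) are made up for illustration, not any host's real API:

```python
# Sketch of the chain: instrument output -> effect 1 -> effect 2 -> ...
# Each stage receives a block of float samples and returns a block
# in the same format, so stages can be chained freely.

def gain_plugin(block, gain=0.5):
    # A trivial stand-in for an "effect plugin": scale every sample.
    return [s * gain for s in block]

def run_chain(block, plugins):
    # Pass the instrument's output through each effect in order.
    for plugin in plugins:
        block = plugin(block)
    return block

instrument_output = [0.8, -0.4, 0.2]   # what the VSTi rendered
result = run_chain(instrument_output, [gain_plugin])
# result == [0.4, -0.2, 0.1]
```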
At the END of the whole process, ALL tracks simply get added together (a plain addition of the sample values, in this case 48,000 times per second) and the result is output to the soundcard.
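So the final mixdown, as I understand it, would literally be just this (a sketch, not any host's real code; `mix_tracks` is a name I made up):

```python
# Mixing really is just adding the sample values of all tracks
# together, index by index.

def mix_tracks(tracks):
    """Sum all tracks sample by sample (tracks must be equal length)."""
    return [sum(samples) for samples in zip(*tracks)]

track_a = [0.25, -0.5, 0.125]
track_b = [0.25, 0.5, -0.375]
mixed = mix_tracks([track_a, track_b])
# mixed == [0.5, 0.0, -0.25]
```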
Of course we can set a buffer so all this calculation has time to complete (latency).
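If I have the math right, the latency you get from that buffer is just the buffer size divided by the sample rate (a back-of-the-envelope sketch; the 512-frame figure is just an example value):

```python
# The card waits until buffer_frames samples have been rendered
# before it can play them, so that wait is the added latency.
def buffer_latency_ms(buffer_frames, sample_rate):
    return buffer_frames / sample_rate * 1000.0

buffer_latency_ms(512, 48000)   # roughly 10.67 ms at 48 kHz
```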
My original point was: how much do interpolation methods really affect the output sound coming from the program?
Of course soundcards cannot output 32-bit floating-point values, so almost every piece of software somehow has to dither its internal mix down to the bit depth your soundcard accepts.
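Something like this is what I imagine the float-to-16-bit conversion with dither looks like. This is only my guess at the general technique (TPDF dither before rounding), definitely not Renoise's actual code:

```python
import random

# Reduce one 32-bit float sample (range -1.0..1.0) to a 16-bit
# integer, with optional TPDF dither before rounding.

def to_16bit(sample, dither=True):
    scaled = sample * 32767.0
    if dither:
        # TPDF noise: the sum of two uniform randoms in [-0.5, 0.5),
        # i.e. about +/- 1 LSB of triangularly distributed noise.
        scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    # Round to the nearest integer and clamp to the 16-bit range.
    return max(-32768, min(32767, round(scaled)))
```

With dither, the quantized value can land one step above or below what plain rounding would give; without it, the same rounding error repeats every time, which correlates the error with the signal.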
Is Cubase, for example, really so superior to Renoise when it comes to mixing algorithms, even though mixing is nothing more than adding sample values together?
I don't believe that. All dithering does is calculate minimal differences: whether a sample lands one step above or below what the "linear" calculation method would give.
Maybe I'm completely wrong in my assumption of how channels are processed, though. So… I hope somebody can clarify.
Does nobody have an idea how it works?
Devs? Taktik?…
Lately the Renoise community is kinda asleep. Nothing happens, and very few people are posting here on the forum…
It’s no good
I can’t answer the question. But I can give the following advice:
A higher definition render/mix will make poor engineering sound better.
A higher definition render/mix will make good engineering seem about the same.
Moral of the story? Take your time with your mix. Even if you are working with lower bit rates / strange interpolation / poor sampling, you've still got everything you need to make everything sound amazingly perfect: EQ, compression, pan and volume. That's all you need. Developing those mixing skills takes years, and if you were dead serious about commercially releasing stuff, you'd send it to a mastering lab anyway.
For my money Renoise has a soft dip from about 10k up (see the old threads by Internal Engine to read about this). It's not bad enough to be a problem: if you've got the skills to mix around it you've got no issue, and if you haven't got those skills you're not hearing it anyway. Besides, it's a unique 'renoise' sound, which is just toasty in my book.
No matter what the bit depth or sample rate, digital is a pale imitation of the real instrument giving the real sound, listened to by your real ears. I think the way digital sampling is done needs to be totally reconceived. Anyway, most punters won't pick the difference… Don't worry about it. Just mix better and write some decent original tunes…
I think the devs are way too busy…
Dithering happens only once, at the end, when converting the internal data stream into the soundcard's stream. Mixing is usually done in 32-bit floats, by simply adding the samples (there is no magic). The internal and soundcard sample rates are always the same, so no interpolation happens.
Btw, you can enable/disable dithering in Renoise. Some people hate it, others love it, most don't care. I personally don't hear the difference. Imho that's just theory.
Thx taktik, exactly what I wanted to know (and already thought).
So the whole "sound engine debate" is nothing more than hot air, I think.