Samples Sound Darker/Less Vibrant In Renoise?

http://www.youtube.com/watch?v=KHy7DGLTt8g

Don't listen to them! If you want to see whether two audio samples are identical, phase-invert one and mix-paste it with the other. If they are identical, they cancel out, leaving only silence, and that is exactly what happens. Try it yourself (with the new polarity invert button in the sample editor).
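For anyone who wants to try the same null test outside of a sample editor, here's a minimal sketch in Python/NumPy (the arrays are made-up stand-ins for the two samples being compared):

```python
import numpy as np

# Null test: polarity-invert one copy and mix it with the other.
# If the two signals are bit-for-bit identical, the sum is pure silence.
original = np.array([0.0, 0.5, -0.25, 0.9, -1.0])  # "original" sample
rendered = original.copy()                          # render to compare against

mixed = rendered + (-original)  # invert one side, then sum (mix-paste)

# A perfect null: the residual peak is exactly zero, i.e. silence.
print(np.max(np.abs(mixed)))
```

If the residual is anything other than silence, the two files genuinely differ; if it nulls to zero, any "difference" you still hear is not in the audio.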

THE SAMPLES (original and rendered/captured from Renoise) ARE BIT-FOR-BIT IDENTICAL.

I guess the guy is talking about how the whole thing sounds in the mix.

I've tried a similar identity test with a complex sound (drum machine + synths) and it was identical both visually and by ear.

But the pain comes when you start to mess with samples from different sources and recorded live instruments. Sometimes something weird happens with Renoise's summing process. I can't explain it.

For example, yesterday I wrote a techno track, and the overall loudness was low. It was peaking on the master, but the channel levels were more than fine (on both pre and post indicators). When I started to compress and limit, the whole sound got screwed up to fuck. I precisely cut all the "bad" frequencies in the low and low-mid range, but the overall sound still wasn't LOUD, just fucking distorted.

Can anyone here explain to me how the fuck the summing and the whole process works inside Renoise?

P.S. I remember my complex projects in SKALE. I rarely (better to say never) used the Butterworth lo-cut and hi-cut filters there, and all that shit still glued together well and sounded punchy. I just want to know what happens in Renoise's summing process, because those sine-wave test-tone exercises are one thing, but you guys have ears: music is for ears, not eyes.

round 2, cmon!

I remember one more thing. When I wrote stuff in SKALE years ago, I maximised all the samples to ZERO dB, and everything still sounded okay: fucking damn loud and not distorted. I could crank the level over the top and it still wasn't distorted much, just really damn loud. I tried the same thing yesterday with that techno track: I rendered all the samples and tracks as 24-bit WAVs, so no VSTs or FX were applied. Then I pressed play, tried to raise the master level, and it all got fucked up and distorted. I tried turning off interpolation, different sample rates, volume compensation at +3 and 0, and nothing helped. Everything just sounds muddy and distorted.

I've tried everything.

Why am I always talking about SKALE here? Because I want to know why it doesn't fuck up there. I'm sure some devs here used that app before, didn't they?

I'm not switching from Renoise to anything else, because the workflow is brilliant, but THE MIXING in Renoise is a real PAIN IN THE ASS. That summing algorithm is just wrong, I guess.

Maybe Skale has a limiter on the master?

Or maybe you could share the song, so we could see what's wrong in there.

Maybe there is a hidden one, but believe me, it's not related to the limiter. Yesterday I tried maybe every available high-end limiter, and the one that came very, very close to a good result was by Slate Digital. Anyway, that's not a good way to get the sound you want.

I used to worry about Renoise's sound engine…

…then I took an arrow to the knee.

You're preaching to the choir here, mate. I've mentioned this technique countless times in other threads. Amazingly, some people choose to reject the entire concept and insist that they can still hear differences, even when the sounds null to absolute silence. I find it completely baffling and frustrating when people cannot accept the simple logic and mathematical proof behind the null test itself. Oh well.

Anyway, my point with those test files was not to prove that they're identical on a binary level (they're not all identical in that particular test), but that they sound identical to the human ear. Dysamoria said that Renoise made his samples sound dull and less vibrant, so I wanted to learn if he could honestly hear the difference between the original untouched sample and the same sound as rendered by Renoise. He could not.

It has been proven time and time again that a simple difference in volume can often have a large effect on the perceived sound quality. The human ear does not detect frequencies in a linear fashion, so different listening levels will exaggerate or attenuate certain frequencies in slightly different ways. By default, Renoise tracks have -6dB of headroom, so if a person was not aware of this headroom while doing an A/B test against some other software, it's conceivable that they might misinterpret this as a difference in overall sound quality. In many cases, simply adjusting Renoise's output level to match the other software immediately fixes the 'problem'.
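For reference, here's the dB-to-linear conversion behind that headroom figure, as a quick sketch (the helper name is mine):

```python
import math

def db_to_gain(db):
    """Convert a level in dB to a linear amplitude factor: gain = 10^(dB/20)."""
    return 10 ** (db / 20.0)

# Renoise's default -6 dB track headroom scales amplitude to roughly 0.501,
# i.e. about half amplitude, which is clearly audible in an unmatched A/B test.
print(round(db_to_gain(-6.0), 3))  # 0.501

# 0 dB means no change at all.
print(db_to_gain(0.0))  # 1.0
```

So roughly half the amplitude by default, which is exactly the kind of level difference that gets misheard as "duller" or "less vibrant".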

This is really frustrating. Why do people assume that we're doing some kind of strange processing here? There is no special "summing algorithm" to go wrong in the first place. Summing is literally just adding numbers together, nothing more! If the sum total of the numbers (ie. audio signals) you're adding together exceeds the maximum possible value (ie. 0dB), then you get clipping. When you have clipping, you have distortion. This basic principle is true in any audio software, not just Renoise!

If the sound from Renoise is distorting more than the sound from Skale, then it's because there's a difference in track levels at some point in the mixing process, not because Skale is doing something special that Renoise does not do.

Anyway, I wanted to take a closer look at what might be happening here, so I downloaded Skale (v0.81 beta) and tested it side by side with Renoise (v2.8 beta 6).

In both trackers, I started with a completely new song, using the default factory settings (except for adjusting pattern length and bpm to match). I took a sample that is fully normalised to 0dB (a nice deep kick drum, if you must know) and loaded it into both trackers, then placed a single C-4 note into the pattern. I rendered each song to .WAV at 44.1kHz 16-bit, then compared the results in Wavosaur.

Using Wavosaur's statistics function (as well as my eyes and ears), I immediately saw that the output levels of Skale and Renoise did not match:

  • Skale render peaks at -8.31dB
  • Renoise render peaks at -6.0dB

Before going any further, I needed to ensure that both trackers were rendering this basic test sound at the exact same level, otherwise it wouldn't be a fair test. Since Skale's mixer doesn't show any useful information like track levels in dB, I adjusted Renoise instead and set the track headroom to -8.31dB (Song Settings tab), and then I re-rendered the test songs.

Wavosaur now shows that the rendered levels match:

  • Skale: -8.31dB
  • Renoise: -8.31dB

The next thing I wanted to test is how the levels of Skale and Renoise compare when mixing multiple tracks together. I modified the test songs so that two C-4 notes were playing simultaneously on two different tracks, and then I re-rendered.

Wavosaur shows that the levels still match:

  • Skale: -2.29dB
  • Renoise: -2.29dB

At this point, I'm getting identical results from both Skale and Renoise, and I have no reason to believe that anything strange is happening. I decided to go a bit further anyway, and tested it with 4 simultaneous tracks, and then 8 simultaneous tracks. With this many tracks playing together, the sound is definitely pushed beyond 0dB, so it's clipped and distorted, but the important thing to note is that Skale and Renoise are both clipping in the same way. The resulting waveforms look and sound the same.
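The measured figures line up with plain dB arithmetic: summing n identical, phase-coherent copies of a signal raises the peak by 20·log10(n) dB. A quick sketch using the measured -8.31dB single-track level (the function name is mine):

```python
import math

def summed_peak_db(n_tracks, track_peak_db):
    """Peak level when n identical, phase-coherent tracks are summed:
    doubling adds 20*log10(2) ~= 6.02 dB, quadrupling adds ~12.04 dB, etc."""
    return track_peak_db + 20 * math.log10(n_tracks)

base = -8.31  # single-track render level measured in Wavosaur

print(round(summed_peak_db(2, base), 2))  # -2.29 -> matches the 2-track render
print(round(summed_peak_db(4, base), 2))  #  3.73 -> already past 0 dB: clipping
print(round(summed_peak_db(8, base), 2))  #  9.75 -> clipped even harder
```

The 2-track prediction lands exactly on the -2.29dB that both trackers rendered, which is more evidence that nothing beyond plain addition is going on.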

Here are my test files so you can see/hear the results for yourself:
test_skale_renoise.zip

So, what's the moral of the story here? Well, Renoise simply has a different default track headroom than Skale. If you do not bother to match these levels, so that both trackers are behaving in the same way, then you simply cannot expect to get identical results.

@atarix: Regarding the techno song that you mentioned: Assuming that you haven't made any big adjustments to the mixer in Skale and the tracks are at the default levels, you will need to set Renoise's track headroom to -8.31dB in order to get the same levels from Renoise. Can you please give this a try and see if you get better results?

Cheers.

Summing is one thing, but what about the SUMMONING ENGINE?! You know, the audio magic found in more expensive pro gear?

Maybe this whole discussion could have been avoided if people would stop making stupid typos.

Already doing some tests…

I have some info about the Skale sound engine from the dev himself; I hope it'll make sense.

[i]Here we go:

  • The floating-point mixer uses the range [-32768.0, 32767.0] instead of the standard [-1.0, 1.0].
  • Samples are also stored in that range.
  • Samples are not pre-processed (no dithering or similar algorithms applied).
  • Blocks of signal with amplitude < 1.0 are detected and ignored (not mixed); filters/effects are also not applied in this case.
  • Clipping is done only for the final output (sound card or WAV file); intermediate buffers are not clipped.
  • Bilinear resampling uses no precalculated table (real floating-point interpolation), and the cubic resampling was done with its own algorithm (a 64k float table was used, if I remember well).
  • Tweaked bit displacement for the sample play position's fixed-point representation (192kHz/low-frequency compatible).

There's no special filter/chain/etc. in the mixer pipeline, just the previous details. All the channels are added in a first step, then all the filters/effects are processed into extra buffers, and as a last step all the buffers are mixed into a final one. Nothing special here.[/i]
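Worth noting: the first two bullets are just a units choice. Mixing in a 16-bit-scaled float range is mathematically equivalent to mixing in [-1.0, 1.0], because the constant scale factor distributes over the sum. A minimal sketch (the arrays are made-up example signals):

```python
import numpy as np

SCALE = 32768.0  # Skale-style 16-bit float range vs. the standard [-1, 1]

# Two short "tracks" in the normalized [-1.0, 1.0] range.
a = np.array([0.25, -0.5, 0.75])
b = np.array([0.1, 0.2, -0.3])

# Mix in normalized range...
mixed_norm = a + b

# ...and mix in the 16-bit-scaled range, then scale back down.
mixed_scaled = (a * SCALE + b * SCALE) / SCALE

# k*a + k*b == k*(a + b): the results are identical.
print(np.allclose(mixed_norm, mixed_scaled))  # True
```

So the choice of mixing range by itself cannot account for any audible difference between the two engines.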

Because it can't be linked to often enough:

And almost forgot another one: http://www.theaudiocritic.com/downloads/article_1.pdf


btw, related to that techno track: can someone here take a listen and tell me whether it sounds well balanced or not?

http://soundcloud.com/atarix/serjio-hakkinen-pisstank

It doesn't sound horribly distorted like you were describing earlier, but I suppose this is all subjective anyway. Our opinion of your music is not really the important issue here, is it? I think the most important thing is whether it sounds correct to you? Did changing the track headroom in Renoise give you a better result, ie. does it sound more like it did in Skale?

still doing the tests…

Muchas gracias to dBlue: -8.xxx dB of headroom and a nice combo of comp + limiter on the master SAVED MY ASS. Delete this topic, and fuck that guy who started it all.

Ok, I'm necrobumping this, but I have to say I MUCH prefer the 'sound' of Renoise to any other DAW / audio software I've used. I find its sampler simply awesome-sounding, and for once I actually didn't need to read the manual to use it to create my first few instruments.


100% agree here. Also, since this discussion, the interpolation has even been improved. You can switch to band-limited mode and sinc interpolation, which gives quite nice quality. And if someone is comparing with other DAWs: almost none of them do pitching in real time; they precalculate instead.
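For context on why real-time pitching matters: any resampler has to read between sample frames at a fractional rate. Here's a minimal linear-interpolation sketch (the function name and test waveform are mine; real interpolators like sinc are of course far more sophisticated):

```python
import numpy as np

def resample_linear(samples, ratio):
    """Minimal linear-interpolation resampler: read the waveform at a
    fractional step `ratio` (>1.0 pitches up, <1.0 pitches down)."""
    positions = np.arange(0, len(samples) - 1, ratio)
    idx = positions.astype(int)    # integer part: the neighbouring frames
    frac = positions - idx         # fractional part: the blend weight
    # Blend each pair of neighbouring frames by the fractional position.
    return samples[idx] * (1 - frac) + samples[idx + 1] * frac

wave = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
print(resample_linear(wave, 0.5))  # half-rate playback: one octave down
```

Linear interpolation is cheap but rolls off and aliases; band-limited sinc interpolation fixes that at a much higher CPU cost, which is why many hosts precalculate instead of pitching live.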

It's interesting: I'm skeptical that there's an actual (or at least very significant) difference in sound between DAWs, but I use Reaper alongside Renoise a lot, and I would swear Renoise sounds better. Possibly psychosomatic, due to liking Renoise a lot and being biased in its favour, but… ya know, I'd swear it really does sound better, somehow. Also, as mentioned, instrument building in Renoise is actually kind of fun, despite the inevitable tedium that can happen when building big instruments. I have Kontakt, and I will choose making a Renoise instrument over Kontakt every time now, even though Kontakt is technically the more sophisticated sampler.