Samples Sound Darker/Less Vibrant In Renoise?

what kind of a name is poppy crum anyway???

sounds like something that fell off a muffin…

i wonder if you are talking about the way renoise sounds during playback and not after rendering. i would guess this is where some of the disagreement comes from? renoise does sound different after rendering. you can mix/master in renoise if you are not a total control freak, but otherwise you may have to compose in renoise and mix in something else.

this whole thing is hardly a new issue for musicians, producers, recording folks etc. guitars, drums, everything sounds different on tape than it does live. that was annoying sometimes but could also be used to good effect. how many great records were made with tape? at this point in time, still prolly most of them…

so you should regard your ‘condition’ as an asset and not an impediment when you hear these differences. maybe for you it is an aptitude and not a skill you had to work hard to develop. either way, frankly, you will need it if you ever want to be a cut above. hard luck, good fortune is…

I made a triangle wave sweep test the other day; i was having trouble with my ftp so i hadn’t uploaded and linked it. This time the names are hidden so you don’t know which is which.

Process:
Files kept as 24bit 96kHz all the way through.
Soundforge Audio Studio 9.0 - Synthesis - Simple - 30 seconds Triangle Wave Sweep from 40Hz - 18kHz.
Play in Renoise at +4 semitones.
Render using Cubic and Sync modes.
Play new samples at -4 semitones.
Render respectively at Cubic and Sync modes.

Plus two files where I have inverted the final output render and mixed it with the original wave. I should have called these Inv X and Inv Y, not used A and B again. They do not relate to A and B of the original samples.

The most surprising thing to note is the huge amount of aliasing that already exists in the Soundforge-generated sample. Maybe I should have used a sine…

http://www.deaddogdisko.co.uk/Stuff/Triangle%20Sweep%20Test.zip

I hope somebody can tell me which is which…

Agree with Dysamoria! The way to bring it back to normal is the maximizer. It’s just a fact that it sounds dull in every tracker; there must be some secret comb-filter trick or whatever to make it sound good/normal. EXS24 definitely has its own sound compared to Kontakt.

Here it is again:

More than a year later, i see this post (and a kind note from another user in PM that he notices the same thing as i). i’m glad it’s not just me though i’m still at a loss to demonstrate it. i’ve finally gotten back to the music thing a bit now that i’m done with the psych drugs (though their damage to my body is likely going to linger the rest of my life), and then there was the bankruptcy… etc. Whatever. Life goes on.

i have a tendency to avoid using trackers (including Renoise) because of this sound quality perception and i’m wondering if any progress has been made on understanding what’s happening (for those of us that perceive it). It’s sad because i basically “grew up” my computer music experience on trackers and i enjoy the methodology as at least an element in my production.

i’m traveling at the moment, but when i get back to my home base, i’m thinking i’ll give this another shot (recording samples that demonstrate the issue i’m perceiving). In the meantime, if anyone has had any insights or whatever… :wink:

http://www.youtube.com/watch?v=KHy7DGLTt8g

Don’t listen to them! If you want to see if two audio samples are identical, phase invert one and mix-paste it with the other. If they are identical, they cancel, leaving only silence, and that is exactly what happens. Try it yourself (with the new polarity invert button in the sample editor).

THE SAMPLES (original and rendered/captured from renoise) ARE BIT-FOR-BIT IDENTICAL
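The null test described above is easy to sketch in code. A minimal illustration with NumPy (the array values here are made-up stand-ins for real sample data):

```python
import numpy as np

# Two "recordings" whose sample data is bit-for-bit identical.
original = np.array([0.25, -0.5, 0.75, -0.125], dtype=np.float32)
rendered = original.copy()

# Null test: invert the polarity of one signal and mix (add) it with
# the other. Identical signals cancel sample-by-sample to exact silence.
difference = original + (-rendered)

print(np.max(np.abs(difference)))  # 0.0 -> a perfect null, the files match
```

Any residual above zero would mean the two files differ somewhere; a full null (all zeros) is the mathematical proof being referred to.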

I guess the guy is talking about how the whole thing sounds in the mix.

I’ve tried a similar identity test with a complex sound, drum machine + synths, and it sounded identical both visually and by ear.

But when you start to mess with samples from different sources and recorded live instruments, that’s where the pain comes in. Something weird sometimes happens with the Renoise summing process. I can’t explain it.

For example, yesterday i wrote a techno track, and the overall loudness was very low; it was peaking on the master, but on the channels it was far more than ok (on the pre and post indicators). But when i started to compress and limit, the whole sound got screwed up. I’d precisely cut all the “bad” frequencies in the low and low-mid range, but the overall sound still wasn’t LOUD, just fucking distorted.

Can anyone here explain to me how the summing and the whole process works inside Renoise?

p.s. I remember my complex projects in SKALE; i was rarely (better to say never) using lo- and hi-cut Butterworth filters there, and everything glued together well, sounding punchy and all that. I just wanna know what happens in the Renoise summing process, cuz all those sine-wave/sawtooth bollocks tests are one thing, but you guys have ears; music is ears, not eyes.

round 2, cmon!

I remember one more thing. When i wrote stuff in SKALE years ago, i maximised all the samples to ZERO DB, and still everything sounded ok: fucking damn loud and not distorted. i was able to crank the level over the top and it still wasn’t distorted much, just really damn loud. I tried the same thing yesterday with that techno track: i rendered all the samples and tracks as 24-bit wavs, so no VSTs or FX were applied. Then i press play and try to raise the master level, and everything is fucked up and distorted. I’ve tried turning off interpolation, different sample rates, volume compensation at +3 and 0, and nothing helped. Everything just sounds muddy and distorted.

I’ve tried everything.

The reason i keep talking about SKALE here is that i wanna know why it’s not fucking up there. I’m sure some devs here have used this app before, haven’t they?

I’m not switching from Renoise to anything else, cuz the workflow is brilliant, but THE MIXING in Renoise is a real PAIN IN THE ASS. That summing algorithm is just wrong, i guess.

Maybe Skale has a limiter on the master?

Or maybe you could share the song, so we could see what’s wrong in there.

Maybe there is a hidden one, but believe me, it’s not related to a limiter. Yesterday i tried maybe every available high-end limiter, and the one that came very, very close to a good result is by Slate Digital. Anyway, that’s not a good way to get the sound you want.

I used to worry about Renoise’s sound engine…

…then I took an arrow to the knee.

You’re preaching to the choir here, mate. I’ve mentioned this technique countless times in other threads. Amazingly, some people choose to reject the entire concept and insist that they can still hear differences, even when the sounds null to absolute silence. I find it completely baffling and frustrating when people cannot accept the simple logic and mathematical proof behind the null test itself. Oh well.

Anyway, my point with those test files was not to prove that they’re identical on a binary level (they’re not all identical in that particular test), but that they sound identical to the human ear. Dysamoria said that Renoise made his samples sound dull and less vibrant, so I wanted to learn if he could honestly hear the difference between the original untouched sample and the same sound as rendered by Renoise. He could not.

It has been proven time and time again that a simple difference in volume can often have a large effect on the perceived sound quality. The human ear does not detect frequencies in a linear fashion, so different listening levels will exaggerate or attenuate certain frequencies in slightly different ways. By default, Renoise tracks have -6dB of headroom, so if a person was not aware of this headroom while doing an A/B test against some other software, it’s conceivable that they might misinterpret this as a difference in overall sound quality. In many cases, simply adjusting Renoise’s output level to match the other software immediately fixes the ‘problem’.
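The headroom figure above is just a fixed gain factor. A quick sketch of the dB-to-amplitude conversion (standard formula, not anything Renoise-specific):

```python
def db_to_gain(db):
    """Convert a level in decibels to a linear amplitude factor (gain = 10^(dB/20))."""
    return 10 ** (db / 20.0)

# Renoise's default -6 dB track headroom roughly halves the amplitude,
# which is easily audible if you A/B against software playing at 0 dB.
print(round(db_to_gain(-6.0), 3))  # 0.501
print(round(db_to_gain(0.0), 3))   # 1.0
```

So an unmatched A/B test compares a signal at full amplitude against one at about half amplitude, which is exactly the kind of loudness difference that gets misread as a "quality" difference.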

This is really frustrating. Why do people assume that we’re doing some kind of strange processing here? There is no special “summing algorithm” to go wrong in the first place. Summing is literally just adding numbers together, nothing more! If the sum total of the numbers (ie. audio signals) you’re adding together exceeds the maximum possible value (ie. 0dB), then you get clipping. When you have clipping, you have distortion. This basic principle is true in any audio software, not just Renoise!
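To make the point concrete, here is summing and clipping in their entirety, sketched with a few made-up sample values (amplitudes normalised to ±1.0 = 0 dB):

```python
# "Summing" two tracks is literally sample-by-sample addition.
# If the sum exceeds full scale (+/-1.0, i.e. 0 dB), the output clips.
track_a = [0.75, -0.5, 0.625]
track_b = [0.5, -0.75, 0.5]

mix = [a + b for a, b in zip(track_a, track_b)]   # the raw sum
clipped = [max(-1.0, min(1.0, s)) for s in mix]   # hard clip at full scale

print(mix)      # [1.25, -1.25, 1.125] -> exceeds full scale
print(clipped)  # [1.0, -1.0, 1.0]     -> clipped, i.e. distortion
```

There is nothing else to the process: if the numbers you add together stay under full scale, the mix is clean; if they do not, it clips, in Renoise or anywhere else.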

If the sound from Renoise is distorting more than the sound from Skale, then it’s because there’s a difference in track levels at some point in the mixing process, not because Skale is doing something special that Renoise does not do.

Anyway, I wanted to take a closer look at what might be happening here, so I downloaded Skale (v0.81 beta) and tested it side by side with Renoise (v2.8 beta 6).

In both trackers, I started with a completely new song, using the default factory settings (except for adjusting pattern length and bpm to match). I took a sample that is fully normalised to 0dB (a nice deep kick drum, if you must know) and loaded it into both trackers, then placed a single C-4 note into the pattern. I rendered each song to .WAV at 44.1kHz 16-bit, then compared the results in Wavosaur.

Using Wavosaur’s statistics function (as well as my eyes and ears), I immediately saw that the output levels of Skale and Renoise did not match:

  • Skale render peaks at -8.31dB
  • Renoise render peaks at -6.0dB

Before going any further, I needed to ensure that both trackers were rendering this basic test sound at the exact same level, otherwise it wouldn’t be a fair test. Since Skale’s mixer doesn’t show any useful information like track levels in dB, I adjusted Renoise instead and set the track headroom to -8.31dB (Song Settings tab), and then I re-rendered the test songs.

Wavosaur now shows that the rendered levels match:

  • Skale: -8.31dB
  • Renoise: -8.31dB
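The peak figures Wavosaur reports are straightforward to compute yourself. A minimal sketch (sample values here are made up; real data would come from the rendered .WAV):

```python
import math

def peak_dbfs(samples):
    """Peak level in dB relative to full scale: 20*log10(max |sample|)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

# A render peaking at half of full scale sits at about -6.02 dBFS.
print(round(peak_dbfs([0.1, -0.5, 0.25]), 2))  # -6.02
```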

The next thing I wanted to test is how the levels of Skale and Renoise compare when mixing multiple tracks together. I modified the test songs so that two C-4 notes were playing simultaneously on two different tracks, and then I re-rendered.

Wavosaur shows that the levels still match:

  • Skale: -2.29dB
  • Renoise: -2.29dB

At this point, I’m getting identical results from both Skale and Renoise, and I have no reason to believe that anything strange is happening. I decided to go a bit further anyway, and tested it with 4 simultaneous tracks, and then 8 simultaneous tracks. With this many tracks playing together, the sound is definitely pushed beyond 0dB, so it’s clipped and distorted, but the important thing to note is that Skale and Renoise are both clipping in the same way. The resulting waveforms look and sound the same.
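The measured levels also match the arithmetic you would expect: two identical, in-phase notes sum to double amplitude, which is a gain of 20·log10(2) ≈ +6.02 dB on top of the single-track peak.

```python
import math

peak_db = -8.31   # one track's render peak, as measured above
n_tracks = 2      # two identical C-4 notes playing in sync

# Coherent (identical, in-phase) signals add linearly in amplitude,
# so n identical tracks gain 20*log10(n) dB over a single track.
summed_db = peak_db + 20 * math.log10(n_tracks)
print(round(summed_db, 2))  # -2.29
```

That -2.29 dB is exactly the figure Wavosaur reported for both trackers, which is more evidence that plain addition is all that is going on.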

Here are my test files so you can see/hear the results for yourself:
test_skale_renoise.zip

So, what’s the moral of the story here? Well, Renoise simply has a different default track headroom than Skale. If you do not bother to match these levels, so that both trackers are behaving in the same way, then you simply cannot expect to get identical results.

@atarix: Regarding the techno song that you mentioned: assuming that you haven’t made any big adjustments to the mixer in Skale and the tracks are at the default levels, you will need to set Renoise’s track headroom to -8.31dB in order to get the same levels from Renoise. Can you please give this a try and see if you get better results?

Cheers.

Summing is one thing, but what about the SUMMONING ENGINE?! You know, the audio magic found in more expensive pro gear?

Maybe this whole discussion could have been avoided if people would stop making stupid typos.

Already doing some tests…

I have some info about the Skale sound engine from the dev himself; i hope it’ll make sense.

[i]Here we go:

  • The floating point mixer uses the range [-32768.0,32767.0] instead of the standard [-1.0,1.0].
  • Samples are also stored in the same range
  • Samples are not pre-processed (no dithering or similar algorithms applied)
  • Blocks of signal with amplitude < 1.0 are detected and ignored (not mixed); filters/effects are also not applied in this case.
  • Clipping is done only for the final output (sound card or wav file) intermediate buffers are not clipped
  • Bilinear resampling uses no precalculated table (real floating point interpolation), and the cubic resampling was done with its own algorithm (a 64k float table was used, if I remember well)
  • Tweaked bits displacement for the sample play position fixed point representation (192khz/low freq compatible)

There’s no special filter/chain/etc in the mixer pipeline, just the previous details. All the channels are added in a first step, then all the filters/effects are processed into extra buffers, and as a last step all the buffers are mixed into a final one. Nothing special here.[/i]
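Worth noting: the non-standard [-32768.0, 32767.0] mixing range the dev mentions is just a constant scale factor away from the usual [-1.0, 1.0], so it has no effect on the sound. A quick sketch of the equivalence:

```python
# Skale's float mixer reportedly works in [-32768.0, 32767.0] rather than
# the usual [-1.0, 1.0]. The two ranges differ only by a constant scale,
# so summing in either one gives the same result after normalisation.
SCALE = 32768.0

def to_unit(skale_sample):
    """Map a Skale-range float sample to the standard [-1.0, 1.0) range."""
    return skale_sample / SCALE

print(to_unit(-32768.0))  # -1.0
print(to_unit(16384.0))   # 0.5
```

In other words, nothing in that list amounts to a secret summing trick; it is the same sample-by-sample addition, just at a different numeric scale.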

Because it can’t be linked to often enough:

And almost forgot another one: http://www.theaudiocritic.com/downloads/article_1.pdf
