Samples Sound Darker/Less Vibrant In Renoise?

first: this is not flame bait. this is not an attack on Renoise. i love renoise. i am looking for solid, calm and rational responses, not arguments, defensiveness, etc.

i notice a qualitative difference between a sample outside of Renoise (in Sound Forge, Media Player, Akai S6000 sampler, Kontakt3, etc) and the same exact sample loaded into Renoise (played in the Renoise sample editor or played in tracks). it sounds less vibrant and quieter (i’ve cranked up the output and it’s still like this). i’ve noticed this with ALL trackers for many years now. i assume it has something to do with the mixing/summing of channels. the thing is, i don’t believe it needs to be this way because the same samples sound fine in Kontakt as a plugin loaded into Renoise or on an Akai S6000 sampler. surely it’s just a matter of doing the maths differently(??).

one example: i loaded Kontakt3 into my Renoise project because the pizzicato string i was using wasn’t working for me, sonically. Kontakt’s pizzicato sounded wonderful. in order to reduce CPU load, i rendered the instrument to samples using Renoise’s convert Plugin to Samples feature. the result was that the playback was completely different. the samples were dim/dark/muffled.

another example: i converted an XM to MIDI and SoundFont using LifeAmp some time ago. i rebuilt the song in Cakewalk Sonar and an Akai S6000. everything sounded clearer and brighter and louder on the Akai, compared to MOD Plug Tracker.

test it out: get a nice clean sample from somewhere you know is good. CD quality attributes. heck, rip it from a well produced audio CD or something. listen to it in an audio editor OUTSIDE of a tracker (like Sound Forge, or something). then bring it into an instrument slot in Renoise and listen to it. it’s quieter, darker. it’s as if it’s been resampled down but there’s no obvious distortion. it’s not dramatic. it’s subtle. it’s more noticeable on some sounds than others (harmonic-rich stuff). it’s like what happens to sound when it’s compressed by minidisc compression or MP3 (but without artifacts).

please don’t tell me that all these other apps and products where i think the sample sounds better are using some kind of filter or EQ to make them sound better. i know that’s not true, especially in the case of an audio editor like Sound Forge (this is the response i got from MOD Plug Tracker forum people years ago when i first noticed this trend, and it’s just not true).

i’m especially interested in hearing serious discussion from Renoise developers about this because i really want to work with the cleanest, least modified sound and right now i’m feeling Renoise isn’t going to do that for me. i’m looking to understand why this happens. is there anything i can do as a user? is there anything that the Renoise developers can do to change the audio engine so samples aren’t compromised in any way?

serious discussion only, please. Thanks for your patience.

I must admit I’ve noticed this too. I started both the finished tracks I have produced to date in Logic using Battery 3 as a sampler and noticed the sound quality was notably more crisp / polished, however the workflow for doing beat-edits, sample chopping, etc was a frigging nightmare so Renoise won that battle!

I actually kinda like the slightly darker sound in Renoise, especially for drums and acoustic instruments, and think it gives things a more old school / hip-hop quality - reminds me of the sound from my old MPC2000. However, I have started to import all the live recordings I make in Logic into Renoise, because I find the sound engines are too different when rewired, and regardless of tinkering with eq settings, etc, the best way to maintain consistency for the tone and mix of the track is to use either one or the other. Again, most of the time this means I use Renoise, because it lets me do pretty much everything I want to do in Logic quicker and easier, and a lot of things that take hours to do in Logic can be done in minutes in Renoise.

The only thing that does get on my tits is the quality of the resampling, as most hardware or software samplers I have used will let you get away with the best part of an octave or at least a fifth before the chipmunk effect starts to kick in, but Renoise starts to sound unnatural after about 3 or 4 semitones. This is not particularly evident on simple / analogue waveforms but for more layered pads and acoustic samples like strings, piano, woodwinds, etc it becomes limiting as you only have a melodic range of 6 or 7 semitones before things start to sound a bit dodgy.
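For reference, the arithmetic behind those semitone figures: transposing a sample by n semitones means resampling it by a factor of 2^(n/12), so "3 or 4 semitones" already means playing the sample back roughly 19-26% faster or slower. A quick Python sketch of just that math (the function name is purely illustrative):

```python
# Playback-rate ratio for a pitch shift of n semitones: ratio = 2 ** (n / 12).
# Illustrative sketch only - not Renoise's actual code.

def semitone_ratio(n: float) -> float:
    """Resampling ratio needed to shift a sample by n semitones."""
    return 2.0 ** (n / 12.0)

for n in (3, 4, 7, 12):
    print(f"{n:+d} semitones -> playback rate x{semitone_ratio(n):.4f}")
```

An octave (12 semitones) is exactly a doubling of playback rate, which is why resampling artifacts get so much more audible as the transposition grows.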

Check this out:

This ZIP contains:

  • sawtooth_110hz_original.wav - A naive sawtooth (one of the most harmonically rich sounds possible) generated in Sound Forge.
  • sawtooth_110hz_rendered_from_renoise.wav - The same sample loaded into Renoise and then rendered (* see below)
  • dblue-renoise-sawtooth-test.xrns - The Renoise song to prove it.

(*) I have applied a 6.012dB boost on the master track to compensate for Renoise’s mixing headroom which is approximately -6.012dB. This brings the level of the rendered sample in line with the original sound generated in Sound Forge. No other editing has been applied apart from this simple gain boost, and this gain boost does not alter any other properties of the original sound.
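For anyone wondering where that 6.012 dB figure sits mathematically: decibels convert to a linear amplitude factor via 10^(dB/20), so Renoise's quoted headroom is almost exactly a halving of amplitude (a true halving is 20·log10(2) ≈ 6.0206 dB). A quick sketch of the conversion (function names are just for illustration):

```python
# dB <-> linear amplitude conversion - illustrative sketch only.
import math

def db_to_gain(db: float) -> float:
    """Convert a decibel value to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

def gain_to_db(gain: float) -> float:
    """Convert a linear amplitude factor to decibels."""
    return 20.0 * math.log10(gain)

print(db_to_gain(-6.012))   # ~0.5: the headroom halves the amplitude
print(gain_to_db(0.5))      # ~-6.02 dB: an exact halving in dB terms
```

So the compensating boost simply restores the sample to (almost exactly) its original amplitude; it changes nothing else about the signal.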

Play both samples through Sound Forge and see what you think. On my system they are identical in terms of their physical characteristics and the way they sound. Please tell me if you observe any differences.

There are tiny, tiny, tiny, tiiiiiiiiiiiny differences in the samples which can be attributed to the fact that the sound has been resampled through another audio engine (and probably because 6.012 dB is not quite the precise amount to boost by), but this is simply a by-product of various insanely small floating point differences. This is to be expected and has no noticeable effect (within the limits of human hearing) on sound quality or frequency response or anything of that nature.

@dysamoria: If you are convinced that something else is occurring other than the simple reduction in gain/volume (which can easily be perceived as being more drastic than it actually is), then please provide uncompressed .WAV samples to demonstrate a before and after result, and then we can take a closer look at what might be happening. You should also provide an .XRNS song to demonstrate how things are set up there - preferably something simple that doesn’t require additional plugins.

Some interesting reading on the subject of resampling/aliasing:

It’s worth noting that Renoise scores very well in both reports.

Why does Renoise look like it only has half the amount of original notes?

Many of the tests were done by other people who submitted their results to discoDSP.

You can see the following message: “Renoise Tracker - Thanks to k.m.krebs \ 833-45 for the files”

In this case I think the person doing the test simply used fewer notes for some reason. Seems they only did 7 notes (C, D, E, F, G, A, B) up and down, rather than the full octave including sharps/flats. It doesn’t really affect the test results, though.

Many samplers have a lower quality resampling method when operating in realtime, so it’s important to also take note of the results when rendering with more accurate methods (such as sinc):

Thanks for the info and the response, dblue :) i appreciate it! i’m in the process of working on a project at the moment (moved the project in question over to Record/Reason, after fiddling around with LifeAmp conversion), and i’m starting to physically crash (my eyes are gonna fall out, you’d think i’d be passing out… time to give the ears a rest, too), so it might be a while till i get around to checking out your data here. i do appreciate the effort to give me the kind of info i was asking for. thanks for referencing Sound Forge, too, since i can do a direct comparison with that (having the same tools is handy for exchanging this kind of info). also, see below:

Thanks, too, Rex, for your response. i’m glad the first response was an “i’ve noticed, too” kinda response. it helped me feel a little less tweaked ;) (i hope you and i aren’t the only ones, though! hah!)

re: the difference in sound: i’m not really going for the “retro” sound, so the brightness and clarity is important to my work. i do a lot of subtle (very) manipulation to sounds, lots of subtle high-frequency texturing, etc, over the course of a project and i also have very sensitive hearing (must be something about being autistic??), so i really notice this stuff more than i would like. i have crazy hearing, actually. my brain really attaches to the qualities of sounds. i’m noted for picking out sound effects/stock samples/foley on tv/film and telling people where they came from and where else they were used. some stuff really stands out at me from overuse, but people rarely notice.

overall, i think the difference is spectral as well as gain, but you folks are already looking at spectral analysis graphs and i’m too tired to go there right now. thing is, i don’t know how to demonstrate it. Renoise doesn’t modify the data of the samples. if i bring one in and then save it out, it’s the same thing (this is the first way i noticed: saving samples out of trackers and listening to them in an external audio editor was the first place i started noticing the difference). how would i get a sample of the output?

is there any possibility that Renoise might get any changes in its mixing engine to eliminate this (admittedly subtle) change to the samples?

either way, i still love Renoise, but i think i have to accept certain facts about what methodologies are more suited to my own personal ways of working. i think Renoise is a fantastic place to do nitpicky construction, but once i get to the point where i want to do sound design, layering, etc… i’m better off in a more traditional timeline & piano roll enviro like Logic/Sonar/Record, etc. 90% of the music i’ve made in my life was either done entirely in a tracker, or started life there. i don’t think i’ll ever stop using trackers :D

You can export your song to .WAV using the Render function: (only available to registered users)

First we need to figure out exactly what you are perceiving as a ‘change to the sample’.

I think he’s talking specifically about realtime playback and how it differs from rendered output. At least that’s what this comment implies to me.

There are a few VST recorders and the like you could put in the Master track though.

I agree with Dysamoria, samples sound different when put into Renoise. So there is some complex shit in the engine and in the mixer mechanism.

To work around the problem i usually turn off the sample interpolation, but this does not help much.

Fair comment.



  • sawtooth_110hz_original.wav - Original sound generated with Sound Forge.
  • sawtooth_110hz_voxengo.wav - Captured live from Renoise using Voxengo Recorder.
  • sawtooth_110hz_renoise_cubic.wav - Rendered from Renoise using cubic interpolation.
  • sawtooth_110hz_renoise_sinc.wav - Rendered from Renoise using sinc interpolation.

I consider each one of these files to be identical. They are obviously not 100% identical on a binary level, because there are always going to be incredibly tiny variances whenever you process and mix sound through different methods like this, but this does not affect the sound in any meaningful way, and for the purposes of this test they can be considered identical. They look the same, they sound the same, the frequency response is the same, the phase response is the same. They are the same. No big or obvious change has taken place here at any point.
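The "effectively identical" claim is easy for anyone to verify themselves with a simple null test: subtract the two files sample-by-sample and look at the peak of what's left. A stdlib-only Python sketch (the file names at the bottom are placeholders; it assumes both files are 16-bit PCM with matching length and channel count):

```python
# Minimal "null test" sketch: subtract two WAV files sample-by-sample and
# report the peak residual in dBFS. Illustrative only - file names below
# are placeholders, and both files must be 16-bit PCM of equal layout.
import math
import struct
import wave

def read_pcm16(path: str) -> list[int]:
    """Read a 16-bit PCM WAV file into a flat list of integer samples."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "expects 16-bit PCM"
        frames = w.readframes(w.getnframes())
    return list(struct.unpack(f"<{len(frames) // 2}h", frames))

def peak_residual_dbfs(a: list[int], b: list[int]) -> float:
    """Peak of the per-sample difference, in dB relative to full scale."""
    peak = max(abs(x - y) for x, y in zip(a, b))
    return -math.inf if peak == 0 else 20.0 * math.log10(peak / 32768.0)

# Usage (placeholder file names):
# residual = peak_residual_dbfs(read_pcm16("original.wav"),
#                               read_pcm16("rendered.wav"))
# print(f"peak residual: {residual:.1f} dBFS")
```

Anything down around -100 dBFS or lower is far outside audibility on top of program material; a residual of -inf means the files are bit-identical.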

Bottom line: what you put into Renoise is what you get out of Renoise.

The question of resampling is another matter entirely. If you change the pitch of your original sample, then it obviously must be resampled somehow to play at that new pitch. The output from different resampling methods can vary greatly, which has been clearly demonstrated by the tests I linked to earlier: here and here.

Ironically, lesser quality resampling methods can result in more aliasing and distortion, which often manifests as extra high frequencies being heard, and this is often responsible for that ‘crisp’ sound that a lot of people attribute to older hardware (or even other software). Better and more advanced resampling methods, on the other hand, give a much smoother/cleaner sound, and sometimes people don’t like it because it’s too clean, or they think it’s ‘dull’ because of this.
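The quality gap between interpolation methods is easy to demonstrate numerically. This rough sketch (not Renoise's actual resampler, just the textbook technique) resamples a sine by a non-integer ratio with zero-order hold ("no interpolation") versus linear interpolation, and measures the error of each against the mathematically ideal result:

```python
# Rough sketch of why interpolation quality matters. Zero-order hold
# ("interpolation off") vs. linear interpolation, compared against the
# exact analytic resampling of a sine. Illustrative only.
import math

def resample(src, ratio, interp):
    """Resample src by the given rate ratio with a chosen interpolation."""
    out = []
    pos = 0.0
    while pos < len(src) - 1:
        i = int(pos)
        frac = pos - i
        if interp == "none":   # zero-order hold: just repeat the sample
            out.append(src[i])
        else:                  # linear: blend the two neighbours
            out.append(src[i] * (1 - frac) + src[i + 1] * frac)
        pos += ratio
    return out

N, ratio = 2000, 1.18920712          # ratio for ~3 semitones up
src = [math.sin(2 * math.pi * 10 * n / N) for n in range(N)]

def rms_error(interp):
    out = resample(src, ratio, interp)
    ideal = [math.sin(2 * math.pi * 10 * (n * ratio) / N)
             for n in range(len(out))]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(out, ideal)) / len(out))

print("no interpolation:", rms_error("none"))
print("linear:          ", rms_error("linear"))
# Linear interpolation's error is orders of magnitude smaller; cubic and
# sinc (not sketched here) reduce the residual further still.
```

The error energy from the crude method is exactly the aliasing/distortion being discussed - extra content that was never in the source, which can read as "crispness" to the ear.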

Can’t please everyone :)

There’s also the other really tricky subject of simple human perception, and how we trick ourselves into thinking something is happening which really isn’t, or that something sounds better/worse but is really the same, or that ridiculous audiophile snake oil products (like Monster Cable) are somehow superior, etc. The list goes on and on.

Different how?
Different compared to what?
Can you actually give proof for any of this? Have you done extensive testing?

I’m afraid it’s not much help to say ‘this sounds different’ without actually being able to demonstrate exactly what is different.

(Not trying to sound confrontational… it’s just important to clear up what’s going on)

If you disable interpolation then you will get a worse sound with more aliasing and distortion. As I explained in my other reply a few moments ago, this might sound ‘nicer’ to you if you are used to the ‘crisp’ sound of other samplers (especially old Akai gear, for example), but the fact is that this is actually a worse sound from a technical point of view.

So much of this stuff appears to come down to personal preference.

it’s definitely true that a wildpitched sound is much rougher when you disable interpolation. It’s like a more low-level LoFiMat, except that each note has a different “buzz”.

Great one, that Batch Change Sample Properties ++

I think this test will prove that:

Sound is identical to the original when played at its original pitch
Sound is subtly different from the original when played at a different pitch

Because it takes some amount of de-tuning for the sample interpolation to kick in (AFAIK). And different interpolation methods each have subtly different characteristics, so it’s kind of a personal preference which one is the right one. Renoise has three interpolation algorithms: linear, cubic and sinc (sinc is only available when rendering, and is the one that scored a “perfect” ranking in the sampler aliasing test). But this test of course tells us nothing about the default setting: cubic interpolation. I say: bring on the empirical evidence!

Edit: more or less what dblue said. Yeah, definitely personal preference. Will go check out the samples now!

ok, this is where the things are gettin interesting…wait a min or so

I composed a little loop; all sounds are sampled from VSTs as 44 kHz / 24-bit WAV (not zip or flac). No fx (like reverbs, compressors) or whatever used, just samples.

I will make couple of renders like this now:

  • Internal Renoise sampler, interpolation on (cubic), rendered with Arguru’s sinc at 44/24 WAV

  • Internal Renoise sampler, interpolation off, rendered with Arguru’s sinc at 44/24 WAV

  • Native Instruments Battery 3 sampler, interpolation HQI: Perfect, rendered with Arguru’s sinc at 44/24 WAV

  • Native Instruments Battery 3 sampler, interpolation HQI: Standard, rendered with Arguru’s sinc at 44/24 WAV

Renders’ll be here in 5-10 mins…

you guys need to spend more time making music instead of debating placebo effects ;)

Sine waves and spectral analysis graphs are all well and good, but out of pure curiosity (and cause I’ve never actually done a proper ‘taste-test’) I just tried looping four bars of an identical drum loop in both Logic and Renoise, then rendered to a 24-bit 44.1 kHz wav in both applications. This is with no eq or other dsp, although I did apply a 6.012dB boost to the Renoise render in Bias Peak audio editor.

Using QuickTime Player as a neutral listening device, I am pleasantly surprised that the Renoise render in fact sounds noticeably more crisp and harmonically rich than the Logic render, to my ears anyway. See what you think:

Logic: mono render - please ignore epic fail! :rolleyes:


Edit: I can’t hear much difference now!

I dunno how you have this setup, but the Logic beat is mono (identical audio data in both left and right channels) while the Renoise beat is clearly in stereo. So I think you might be using incorrect settings in Logic’s sampler?

Either way, if I convert the Renoise sound to mono and then compare the two, they are effectively identical except for some incredibly tiny details that are well outside of hearing range.

Close up: totally inaudible differences above 18,000Hz that are quieter than -120dB. Good luck actually hearing that on top of some noisy drums :)

Absolutely. But i did a lot today, so i’m daaaaamn happy to fall into all this geeky talk stuff right now.