Quality of rendered songs

I've got the problem that my tracks sound great in Renoise, but when I render to disk the audio quality is just horrible… I tried different songs, different plugins, different settings, 24-bit, 96 kHz, and… what to do now???

nah, everything is well mastered, I just wanted to say it's the poor quality of Renoise. plz do something here, because it makes Renoise totally useless!

Some shots in the dark here, but: have you tried different interpolation settings? Offline versus realtime rendering? Is it the audio player that you’re using to play back the rendered audio? What happens if you reimport the rendered audio into Renoise as a sample?

The “quality” of rendered songs in Renoise should be the “same” as in any other DAW. That's not the problem. What do you mean by “horrible”?

It would maybe be good to include a sample .xrns (something simple) and a .wav rendered from it, so that the “horribleness” could be inspected. Alternatively, if you can capture the audio while playing it in Renoise somehow (a VST recorder of some sort?), that could be useful too.

I know some debates about (rendering) sound quality have already taken place before…

One thought - what’s your track headroom setting? (In lower panel, song settings)

In the render-preferences window set interpolation to Cubic. I don’t recommend using Arguru’s Sinc interpolation as it sometimes creates weird artifacts.

Renoise plays your songs with cubic interpolation. If you use higher-quality rendering options, expect a different sound. If you use Arguru's sinc at 96 kHz, expect a very different sound if plugins or samples simply don't support that frequency. The lower the quality at which your samples were recorded, the more different they will sound when interpolated at higher frequency settings. This is not a problem that would be solved in any other DAW: they simply render as-is. Arguru's sinc is a special type of interpolation that is not available in “any other” DAW, and it works best if all the samples in your instruments and plugins are recorded at the sample rate you use in the export settings to render them.
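For anyone curious what “cubic interpolation” actually means between two samples: a common 4-point Hermite cubic, as used in many trackers, looks like the sketch below. This is only one plausible variant; Renoise's exact coefficients aren't documented in this thread.

```python
def hermite4(y0, y1, y2, y3, x):
    """4-point, 3rd-order Hermite interpolation.

    y0..y3 are four consecutive sample values; x in [0, 1) is the
    fractional position between y1 and y2. Returns the interpolated value.
    """
    c0 = y1
    c1 = 0.5 * (y2 - y0)
    c2 = y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3
    c3 = 0.5 * (y3 - y0) + 1.5 * (y1 - y2)
    return ((c3 * x + c2) * x + c1) * x + c0

# Halfway between samples on a straight ramp lands exactly on the ramp:
print(hermite4(0.0, 1.0, 2.0, 3.0, 0.5))  # → 1.5
```

The point being: different interpolators (cubic vs. sinc) fill in those in-between values differently, which is exactly why the render can sound different from realtime playback when the settings don't match.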

First of all, the word “quality” is a highly subjective term that is unfortunately rather useless to us when it comes to debugging any potential problems. We cannot make anything better until we understand exactly why it’s bad in the first place. So it would help us if you could please define exactly what you mean by “poor quality”.

What exactly is bad about the quality? Is the volume too loud? Is there distortion or clipping? Is it too quiet? Are there strange artifacts? Do you hear clicking or popping? Do things sound muffled, or a bit “blurry”, or otherwise lo-fi somehow? Do things sound too harsh? Do you hear strange high frequency noise? And so on…

Try to be as descriptive as possible.

Secondly, as vV and others have pointed out, Renoise uses Cubic interpolation during real-time playback. If you want to guarantee that your render is exactly the same as what you hear during real-time playback, then your render settings must match your real-time audio settings. Sample rate and interpolation are the most important factors here, so they must be identical. You can also check what bit-depth your audio interface uses natively, and then use the same bit-depth when rendering.

If your render and playback settings are the same, but you still believe that your render is somehow lower quality, then we’ll need to dig a little deeper. If it comes to that, then you’d better share your .xrns with us so that we can do some proper testing on it.

Finally, one very important thing to keep in mind which has tripped up a lot of people, and has been the #1 source of “render quality” discussions in the past: Renoise has a default headroom of -6dB. This default headroom may result in renders that are slightly lower volume than what you expected. This lower volume can often trick the mind into thinking that the audio itself is somehow lower quality, but that simply isn’t true.

Nonsense.

The quality of the rendered output from Renoise is exactly that of the normal playback, it nulls with it.
Something else must be going on there.
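For anyone who wants to verify that null claim themselves: subtract a captured realtime playback from the render, sample by sample, and look at the peak of the residual. A minimal sketch, assuming you've already decoded both files into equal-length lists of float samples (the variable names here are just illustrative):

```python
import math

def null_peak_db(render, playback):
    """Peak level of the difference signal, in dBFS.

    Returns -inf when the two signals cancel perfectly (a true null).
    """
    peak = max(abs(a - b) for a, b in zip(render, playback))
    return -math.inf if peak == 0.0 else 20.0 * math.log10(peak)

identical = [0.25, -0.5, 0.75]
print(null_peak_db(identical, identical))      # → -inf (perfect null)
print(null_peak_db([0.5, -0.5], [0.5, -0.4]))  # ≈ -20 dBFS residual
```

If the residual sits down near the noise floor, the render really is the same audio; anything clearly audible in the difference points at mismatched settings or a misbehaving plugin.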

I'd like to add another thing to consider: where do you listen to / play back the rendered audio?

I mention this because some people will play the rendered .wav files through e.g. Windows Media player and discover that it doesn’t sound the same as in the original host software.

Maybe he used “offline” rendering and this caused issues with some plugins?

Hm, maybe we should pass the microphone to kolacell?
Everybody’s second-guessing here, and there is more than enough information for him to chew on.

…wasn’t there an issue with using / rendering the cabinet simulator on higher bitrates as the convolution is based on 44.1 kHz impulses? Could be the culprit?

XRNS/MP3s or it didn't happen

Well, in fact that headroom exists only on the signal's way to the master channel. A properly leveled and limited master track output doesn't have that headroom anymore, even if it was set in the song settings. While I'm sure dblue and lots of others know about that, I thought it'd make sense to avoid misunderstandings here and make that clear to everyone who doesn't know yet. So please, don't drop your headroom now just because you think the end result might be 6 dB louder. That's of course NOT the case, if your master track was/is handled right already!

Edit:
I feel like this needs some more explanation. What the “headroom” feature in the song settings does is reduce your input signals at the pre-mixer stage by the amount you set in the song settings. All your level and peak meters still show the correct values afterwards. So limiting to 0 dB really means limiting to 0 dB, no matter whether you set headroom before or not. Hope this makes it easier to understand. :)
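In other words, the headroom attenuation happens before the master chain, so a limiter with a 0 dB ceiling still defines the final output level. A toy signal chain illustrating that (a plain hard clip stands in for a real limiter here; the gain values are just illustrative):

```python
def apply_headroom(samples, gain=0.501):
    """Pre-mixer attenuation of ~ -6 dB, as the song-settings headroom does."""
    return [s * gain for s in samples]

def limit(samples, ceiling=1.0):
    """Stand-in for a 0 dBFS master limiter (here: a plain hard clip)."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

loud = [1.8, -1.6, 0.9]  # tracks summing above 0 dBFS before headroom
# Headroom first, then makeup gain on the master, then the 0 dB limiter:
out = limit([s * 2.0 for s in apply_headroom(loud)])
print(max(abs(s) for s in out))  # → 1.0: the ceiling holds regardless
```

Whatever headroom was applied upstream, the mastered output peaks at the limiter ceiling, which is why the headroom setting doesn't make a finished master 6 dB quieter.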

If you're “testing multiple songs” like the OP said, though, by just dropping a track into Renoise and rendering, you will have the -6 dB drop.

I can see neither dblue nor kolacell saying something like that. That would also assume kolacell was using Renoise as an MP3 player, which I seriously doubt, because it might be slight overkill. Besides that, your scenario only confirms what I said.

Hey, seems the kid is gone.

If you come back, please, really, bring evidence

I can't imagine you'd like to be called a kid. Why do you think someone else would? I also can't imagine you'd want people telling you you're gone just because you're not here 24/7. So, what is the useful essence of this posting now? No need to answer me, just answer to yourself.

ahh…hehe well…it's my turn…

ok, what I did is try every setting which is possible in Renoise… I noticed that I have to go to at least 96 kHz (I wonder about this, as I thought humans cannot hear such frequencies) and 24-bit. The best results I get with “cubic”. I also noticed that some plugins had the wrong settings for offline rendering, and that's just important! dblue said something which seems to be true as well:

“Finally, one very important thing to keep in mind which has tripped up a lot of people, and has been the #1 source of “render quality” discussions in the past: Renoise has a default headroom of -6dB. This default headroom may result in renders that are slightly lower volume than what you expected. This lower volume can often trick the mind into thinking that the audio itself is somehow lower quality, but that simply isn’t true.”

I listened to the track at the same loudness in foobar, and that led me to the conclusion that the differences are minimal. I still wouldn't say it's the same experience as listening in realtime in Renoise, but it's very close now.

thx for the feedback!