Sound level of rendered xrns

Perhaps a noob question, but yesterday I was wondering about this:
Even though I try to get the most volume out of my tracks, they sound less than half as loud as professional tunes.
I tuned everything so that 0 dB in Renoise isn’t exceeded and put the ‘Maximizer’, and alternatively some other brickwall limiters, on the master channel.
With one (ToneBoosters TB_Barricade) I even had a +20 dB setting :panic: and still my tune is just not as loud as others.
When I render songs without a limiter, with every track adjusted so that 0 dB isn’t exceeded, and play them in foobar, the loudest peaks only reach -20 dB in the spectrum view.

Why? Could someone explain this, please?

Of course I searched the web for sites about mastering and so on, but couldn’t find anything explaining the matter in a way I could understand.
If I listen to tracks by ‘Kryptic Minds’ or ‘Versa’, for example, they aren’t only louder - they sound as if they’ve got more ‘room’, so the loudness isn’t just about pressure; it changes the quality of the sound for the better, imho.

Ok, just found this and reading now…

For those who’d like to listen (just a 4-pattern sketch):
Mastered (using TB_Barricade)

Go to the Song Settings and change the -6 dB track headroom if you consider that you know what you are doing.
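To see why that -6 dB default matters, decibels map to linear amplitude as gain = 10^(dB/20). A minimal sketch in Python (the function names are mine, not Renoise’s internals; Renoise applies this kind of attenuation per track before summing):

```python
import math

def db_to_gain(db):
    """Convert a decibel value to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

def gain_to_db(gain):
    """Convert a linear amplitude factor back to decibels."""
    return 20.0 * math.log10(gain)

# The default -6 dB track headroom roughly halves each track's amplitude:
print(round(db_to_gain(-6.0), 3))   # 0.501
# A -1 dB headroom barely attenuates at all:
print(round(db_to_gain(-1.0), 3))   # 0.891
```

So with several tracks each pulled down before the mix bus, a render can easily end up peaking far below full scale even though no single track ever “exceeded 0 dB”.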

Examples needed.

I’ve been through the whole mastering song and dance. What it boils down to, in the words of producer Ill Gates: “I know a million and
one button monkeys that can mix (master) a tune for you but only a handful of people who can write a great tune. Many of the best producers send out their songs to be mixed (mastered) by someone else.”

It isn’t the db of a song that makes it loud, it’s the way it’s compressed and mixed. Mastering it just makes it sound a bit more polished and radio friendly. And these standards are always changing, which is why they continue to remaster tracks from the 60-80’s to bring them up to “today’s high standards”.

Don’t worry about loudness in terms of db. Worry about loudness in terms of how your music, arrangement, and orchestration sounds. How good is the track in general? To get more “room” make sure you’re using most or all of the stereo space provided to you. Pan things, stereo spread them (not too much!), and work on you mix in general to get a better sound. Check out She’s Electric Girl Preview. You want your stereo imaging to be similar to what’s happening in his phase scope. I had a thread like this earlier, and I discovered that most of my problems arose from boring mixing and sound design problems.

Long story short: make tunes, and make a lot of them. When you get them sounding good, take a couple of the best ones and send them out to be mastered by a professional. Mastering is something that can be learned over the years. Good songwriting takes skill that needs to be cultivated.

Before I’ll answer in deep: Edited my op and added links to examples…

ToneBoosters links to a specific document on how to optimize your audio for broadcast and playback devices…

Hm, the question is:
Do I really know what I’m doing? ;)
Any suggestion?
What if I change it to -1dB?

Thanks for this!
Guess I’ve got a lot to read now - and then to check out…

Mixing is definitely an art in and of itself, and while many producers send their tunes out for professional mastering, good and practiced producers mix their own tracks often enough for the technique to be considered a “must learn” if you want to be a well-rounded producer.

There is a difference between mixing and mastering. Mastering is a post production process that balances the rendered wav of your mix. So… You mix your track, you render it, and upon that wav file you apply the process of mastering.

I like to draw a line between what I call “homebrew, or homegrown, mastering” and professional mastering. The former happens here, in the home/project studio, and is more of a “loudness maxing” process; the latter happens in a mastering-quality studio and is performed by a mastering engineer.

Mastering Engineer’s have a reputation for having, “the best ears in the business,” and mastering studios are dead silent.

Here in my home studio… I can hear my neighbor’s water running. :frowning:

I will never get a perfect master here. I’ve got way too much ambient noise from all sorts of sources… water, air conditioners, people walking around in the apartment above me. Frankly, I’m surprised I even try, but whatever. :lol:

Mixing differs from mastering in many ways. Here are two biggies, right off the top of my head. 1. It happens during production. 2. You are blending and balancing all the instruments, as well as any vocals, and shaping the sound and dimension of your record.

You cannot have a great master without a great mix. It’s one of those “rules”. There are plenty of laws of physics and rules of math. So let’s just accept that “all great masters come from great mixes.”

Where the loudness really starts is in your mix: by compressing properly, balancing properly, EQ’ing properly, not recording too hot, and giving yourself plenty, plenty of headroom. That is how you create a mix render that will get banging loud in mastering… and you only need 6-9 dB of makeup gain.
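One rough way to check how much headroom a render actually leaves is to look at its peak level in dBFS. A small sketch, assuming floating-point samples with full scale at 1.0 (the function name is mine):

```python
import math

def peak_dbfs(samples):
    """Peak level of float samples (full scale = 1.0) in dBFS.
    Returns -inf for pure silence."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak) if peak > 0 else float("-inf")

# A mix peaking at 0.35 leaves roughly 9 dB of headroom --
# right in the "6-9 dB of makeup gain" ballpark:
mix = [0.1, -0.35, 0.2, 0.05]
print(round(-peak_dbfs(mix), 1))  # 9.1
```

In practice you would read the samples from the rendered wav instead of a hand-written list; the idea is the same.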


I am not targeting radio, but there are a couple of reasons why I want to make my songs louder:

  1. My reference tracks are loud. Making my own compositions louder makes comparing easier (Mike Senior: Mixing Secrets for the Small Studio).
  2. I think loudness helps if I want people to listen to my song on SoundCloud (it will sound more “familiar” and perhaps someone will give me proper feedback).

Couple of tricks I’ve found very helpful in this…

  1. Check that the bass is not eating your headroom. Use a graphic EQ (FabFilter, for example) and check how the spectrum looks compared to a commercial reference track. If the bass or some other frequencies are significantly louder in your track, you will not end up with a loud overall mix.
  2. Render your mix with headroom. Load the rendered wav into a new Renoise song with the same length as the original.
  3. Set the new song’s track headroom to 0 dB.
  4. Set the master volume to 0 dB and turn autogain off.
  5. Insert a limiter VST. Set the output level (ceiling) to 0 dB (or slightly less if you see red blinking in the master track). I use FabFilter L, but Maximizer probably works well too.
  6. Push the limiter gain up until it sounds like mush. Then pull it down a little.
  7. Render it out. Sometimes there’s still some headroom when I load the wav into Audacity, where I can use Normalize.
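Steps 5-7 above can be sketched in Python as a crude illustration (the names `brickwall` and `normalize` are mine; a real limiter like FabFilter or the Maximizer uses lookahead and smooth gain reduction, not the hard clipping shown here):

```python
import math

def db_to_gain(db):
    return 10.0 ** (db / 20.0)

def brickwall(samples, gain_db, ceiling_db=0.0):
    """Step 5-6 sketch: apply makeup gain, then hard-clip at the ceiling.
    Real limiters reduce gain smoothly instead of clipping."""
    g = db_to_gain(gain_db)
    ceiling = db_to_gain(ceiling_db)
    return [max(-ceiling, min(ceiling, s * g)) for s in samples]

def normalize(samples, target_db=0.0):
    """Step 7 sketch (Audacity-style): scale so the peak hits the target."""
    peak = max(abs(s) for s in samples)
    return [s * db_to_gain(target_db) / peak for s in samples]

quiet_mix = [0.05, -0.2, 0.1, -0.05]
loud = brickwall(quiet_mix, gain_db=12.0)  # push gain toward the ceiling
print(round(max(abs(s) for s in loud), 3))  # 0.796 -- still under 0 dBFS
```

Pushing `gain_db` far enough that many samples slam into the ceiling is exactly the “sounds like mush” point in step 6, which is why you back off a little afterwards.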

This way I can basically make my tracks as loud as I want (competing with Rammstein!). Although, my mixing skills aren’t that good yet, so don’t believe my every word :)

And of course during “mastering” you could still tweak with EQ and bus compressor to fine tune.

When I read those, I seriously have to think about this post :P

correct if i’m wrong, but i always thought that wouldn’t effect the sound level of rendered song but when you’re playing it back on renoise…

Thank you for this!
I’ve to figure out how to do a good mixing and have to learn a lot, but the reasons for making your tracks ‘loud’ to compare with what you like to hear are just the same for me.

First I have to say that it’s nearly impossible for me to express myself the way I could in German, so I want to apologise if I previously struck a wrong note that may have offended someone.
Sometimes, having read this forum for a while, I wonder why there’s such an arrogant tone in some postings, mostly from members who have long been used to expressing themselves with trackers, or with audio production in general. Why don’t you just share your wisdom?
You know, I’ve been a musician for 35 years already and could probably tell of experiences with music some of you never dreamed of, but this does not make me a better human being or something I have to be proud of - it’s just a gift.
Now I try to translate my music into Renoise, because I like it. And I need technical help to make my inspiration sound the way it’s already heard in my heart, and the way I want to convey it to listeners.
When I worked as a music teacher, I never let my pupils feel like fools - I wanted to connect them to their individual potential, not make them feel stupid, as if they had a long, long way to go to even approach what I’m capable of.
I don’t mean you personally, but I remember threads against bit_Arts, for example, who liked to share his incomparable experience and somehow got dissed in a way that made me ask myself: why?
Music is a universal language of the heart, and no matter how it’s produced, everybody who feels like speaking it deserves help in every possible way - technically, emotionally and spiritually. Music is not a battle or a contest to see who’s the best. It is the only way to make the world a better one.
So, if I or others ask questions or say something ‘wrong’ that seems noob to some of you, just try to sympathise, remember your own first steps, and share your knowledge in a constructive way, please.

This is a good one. Microphones will generally do a lot of this for you because of their frequency response. But if you are not thinking about it, VSTs and DI into a soundcard can load you up with a lot of subs you may not be used to dealing with.
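The usual fix for runaway subs is a high-pass filter on the offending tracks. As a rough illustration only (a first-order RC filter in Python; a mix EQ would use much steeper slopes, and the function name is mine):

```python
import math

def highpass(samples, cutoff_hz, sample_rate=44100.0):
    """First-order RC high-pass: attenuates content below cutoff_hz.
    Crude sketch of cutting subs; real EQs use steeper, resonant filters."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# Sustained sub/DC energy (modelled here as a constant offset) decays away:
dc = [1.0] * 2000
filtered = highpass(dc, cutoff_hz=30.0)
print(abs(filtered[-1]) < 0.1)  # prints True
```

Even a gentle 30 Hz cut like this stops inaudible rumble from eating headroom before the limiter ever sees it.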

Would it be a pleasing idea to you, after hours of work getting everything in the right proportions, to find out you have to re-tweak it all in the other settings mode because the outcome sounded completely different from what you experienced in the editor?

The only option in Renoise that is not WYHIWYG (“what you hear is what you get”) is Arguru’s sinc interpolation, because that thing is too CPU-consuming.