Two Questions Regarding Mastering

I’m trying to improve my mixes, and reading http://www.renoise.com/indepth/tutorials/avoid-clipping-in-your-final-mix/ I have a question: mr_mark_dollin recommends that both the pre and post faders on the master channel be set to 0dB. But if my 30+ track song clips, should I really lower every track, then? Why not insert a gainer (or simply turn down the post fader) on the master channel? I could probably count quite a few gainers and faders set at something other than 0dB…

In http://www.renoise.com/indepth/tutorials/effects/using-filters-part-2-shaping-sonics/ I see a reference to the Moog filter2. Wouldn’t it make more sense to use a butterworth8 these days (I assume the article was written before butterworth8/filter3 was introduced)?

Well, I am on thin ice right here because I don’t know much about mastering, but I think (could be wrong) that lowering the volume on all the individual tracks has the same effect as lowering the master volume (which is theoretically a ganged slider for all tracks)… dunno… Mr. M. Dollin, enlighten us.
Noticed that you’re from Copenhagen.

P.S. Copenhagen, loved it… We camped for 4 months in Christiania in the bushes… cafe Woodstock. Yeah, AlcOhOlica SUPREME!!! Squatted something on Amagerfaelledvej. Good times, love c♥penhagen ♪♪♪
:clubs: rolkrullmefleu =) :clubs:

Considering Renoise doesn’t clip individual tracks, and only clips at the master output, it’s perfectly logical to drop the gain at the master track pre-gain, or the beginning of the master chain.

My advice was based on this idea: sure, you could lower the pre-fader on the master by, say, -6dB, but that doesn’t completely solve your problem.

Why? As soon as you bring everything down out of clipping territory you’ll be able to hear each sound behaving ‘naturally with headroom’, and those individual sounds may have problems like excessive bass or wild transients. It’s better to give close focus to each sound on its own, and then in conjunction with the rest of the mix elements. Dropping the master fader as the only solution may skip past some very serious channel-level problems. Another reason why this isn’t the best idea is that you’re introducing bit truncation over your whole mix unnecessarily. I know this isn’t as big a deal as clipping, but it is just cleaner practice to get everything right on a channel level and leave the mastering stage as transparent as practical.

It is a good habit to get into when building songs from scratch, and it encourages you to make cleaner mixes. Read over the article again; all my reasoning is contained in it.

Having said all this I think it might be a good idea to do an article on mixing methods soon.

EDIT: regarding the old Filter2 article, yes I’ll do an update on that area too. Different curves are suited to different tasks.

I get the idea.

But the main reason (besides the “good habit” thing) is sound quality, right? And if there’s degradation associated with lowering the faders on the master channel, the same would apply to individual channels, gainers, even in-sample (F10) gain adjustment. So unless you happen to have samples that play back at exactly the right volume, you have to alter their volume somewhere, and I don’t really see why sticking to 0dB on the master channel should give any better sound quality than not. I’m sure your ears are better than mine, but do you really believe that you can hear this degradation?

NB: I’m not trying to be smarter than you, just trying to wrap my head around it…

In a mastering context I can pick the ‘feel’ difference between -14dB off the master fader as opposed to 0dB, but I couldn’t put it in scientific terms. Dynamics don’t feel as real, bass feels a bit unfocused, mids can feel unbalanced. Fuzzy feelings, I know.

I think my ‘good habits’ point is the one I want to stress the most. Sure, there are faders everywhere in DSP mixing, but it’s a matter of trying to get the cleanest approach using the best-quality input sounds you can get. If you have 26 VST plugins on just one sound just to get it to ‘sit right’ (artistic noodling aside), I think you’re probably murdering the sound and either need to re-think the approach or get a different input sound.

Hmmm. -14dB is something else.

But I feel this 0dB-good-habit cleanness might interfere with the workflow when I’m composing, and when I’m mixing I’d rather not have to adjust 30+ faders just to lower the master volume by 1dB :slight_smile:

And, “no”, I’m not doing all the composing first and then all the mixing; sometimes I do, sometimes it’s back and forth.

Amen to that!

Sounds like fun :P

It’s not just good habit.

There is a very, very complex and complicated reason for this (the GS thread is well over 300 pages long), and I can’t even pretend to understand even 10% of it, but it goes something like this:

In a nutshell, the numbers on an old-school console and the numbers in DAWs apparently don’t equal each other. Apparently 0dB in DAW land isn’t 0dB in console land, but in fact a lot more. And then there was something about converters at 0 and then gain staging all thrown into the mix… you need a damn PhD to understand what’s going on there.

Even the folks over at GS can’t seem to agree on how to calculate what equals what, but there have been numbers bounced around such as -18, -14, -9, -6.

But the one thing that a lot of people agree on is that if you pull your faders down to -18 or -14 or -12 or whatever… and turn your monitors up… with proper gain staging of your elements you’ll get a fatter, punchier mix than if you just left everything at 0 and pulled down the master.

I don’t know why this is, because it really shouldn’t be, in a perfect world I guess… and I didn’t believe it, but it does work on most mixes for the better. There is also something about volume affecting how we hear… our ears respond differently at lower volumes (and speakers do too), which explains why many pros will push the faders really low when fixing basic levels, especially vocals vs. instrumental. Get it sounding balanced at a stupidly low level and it will sound pretty balanced when you turn it up.

On your next mix, or your current mix, try it… pull your faders down, leave the master at 0 and turn your monitors up… when you render and bring the volume up later it’ll be fatter than a fat camp sponsored by KFC.

And yeah, when the hell are we gonna be able to select and move a bunch of faders at a time?

It seems like such a basic requirement that I’m surprised it hasn’t already been addressed.

Slightly off topic, but in the digital realm we are always talking 0dBFS (decibels full scale), so 0dB is the maximum peak. On analogue gear 0dB was a nominal level and equipment is designed with headroom above this. For example, most Allen & Heath desks have around 24dB of headroom above 0dB, and most more professional desks (and DJ mixers too) will have somewhere from 16-30dB of headroom (on the Pioneer digital DJM mixers 0dBFS is +18dB on their meters, so that’s 18dB of headroom).

There is a kind of industry standard, but it has only ever been even vaguely enforced in broadcasting and related equipment, not so much studio/production stuff. For 16-bit, 0dBm = -18dBFS (or -16dBFS depending on country, which gets quite confusing). 24-bit systems seem to have standardised at -24dBFS though.

This is at least true within TV broadcasting anyway. But to throw further confusion into the mix, line-up is done at PPM4 for 0dB and programme material is allowed to peak at PPM6 (each PPM division is 4dB, so that’s +8dB). Also, most equipment comes with a +4dB output switch or option.
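For anyone who wants to sanity-check those figures, here is a rough sketch of the dB arithmetic in Python (the numbers are just the ones quoted above, and the helper functions are made up for illustration, not any official tool):

```python
import math

def db_to_gain(db: float) -> float:
    # Convert a dB value to a linear amplitude factor.
    return 10 ** (db / 20)

def gain_to_db(gain: float) -> float:
    # Convert a linear amplitude factor back to dB.
    return 20 * math.log10(gain)

# 18dB of headroom above nominal means clipping sits at roughly 8x the nominal amplitude.
print(db_to_gain(18))    # ~7.94

# If line-up level sits at -18dBFS, the nominal signal is at ~12.6% of full scale.
print(db_to_gain(-18))   # ~0.126

# Each PPM division is 4dB, so PPM4 -> PPM6 is +8dB of allowed programme peak.
print(gain_to_db(db_to_gain(4) * db_to_gain(4)))   # 8.0
```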

As to the original point, it’s clearly a good idea to try and keep the signal as clean as you can. This includes having as few gain stages as possible, otherwise each stage introduces further truncation and rounding. With 32-bit float being used, this should hopefully be barely noticeable on the final 24-bit render though.

And can anybody explain why BYTE-Smasher seems to think individual tracks can’t be clipped? Have they got some magic, infinite-headroom mojo going on? Sure, floating point calculations give headroom above and beyond the usual maximum, but this is being carried out throughout the entire chain, including the master channel, is it not? So there is nothing special about the tracks as compared with the master in that respect.

 

Because that’s the way it is. :)

Look at this xrns: http://www.sQeetz.co…atingPoint.xrns (right click-> Save content as…)

I have routed the output of Track 1 to Send 1, Send 1 to Send 2, Send 2 to Send 3… always amplifying by 12dB and muting the source. What you get at the end of the chain is a stupendously amplified sound.
In the master I’ve put two instances of the Gainer in order to bring the amplified sound back down to the normal level. It is not distorted or changed in the slightest.

You can hear how loud & distorted the loop actually is after being amplified 16 times by disabling both Gainers in the Master channel.
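If you don’t feel like downloading the XRNS, here is a minimal sketch of the same idea in Python/numpy: a float32 signal boosted by +12dB sixteen times (+192dB in total) and then brought back down (in one step here rather than two Gainers), with a synthetic sine standing in for the sample loop:

```python
import numpy as np

sr = 44100
t = np.arange(sr, dtype=np.float32) / sr
signal = (0.5 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)  # stand-in for the loop

gain_up = np.float32(10 ** (12 / 20))     # +12dB as a linear factor (~3.98)

boosted = signal.copy()
for _ in range(16):                        # 16 send stages, +12dB each = +192dB total
    boosted = (boosted * gain_up).astype(np.float32)

# Bring it back down by -192dB, like the Gainers on the Master do in the song.
restored = (boosted * np.float32(10 ** (-192 / 20))).astype(np.float32)

print("peak after boosting:", boosted.max())                            # billions, yet nothing has clipped
print("max error after restoring:", np.abs(restored - signal).max())    # down at float rounding level
```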

Yes, you can do this now in the latest version of Renoise. Go and learn how to use the Hydra device :D Assign control to as many faders as you like :D

Yeah, the broadcasters and audio-for-video guys all seem to have a standard that they all work to, and for some reason music doesn’t… I mean, you don’t see loudness wars between CNN and BBC…

cheers Foo!

But it really should be as simple as click track 1 + shift + click track 12, instead of loading a Hydra device…

i’m just saying,… :ph34r:

Uhh uhh, I just got a brainwave :D How about being able to make a Hydra device that is hooked up to the parameters that were just selected, like trunk said?
What I mean is: hold shift, click on the wanted parameters, press the assigned shortcut, and Renoise inserts a Hydra mapped to those parameters into the chain?

Sorry for off-topic, it just hit me :w00t:

I propose an objective test… using a 100% sample-based song with no effects (so there’s no random or chaotic variation in playback):

  • Stick a -12dB gainer on every track in the mix, then render to .wav
  • Delete all the gainers, then stick a -12dB gainer at the beginning of the master chain, then render to .wav
  • Do a phase inversion merge on both files, and see if the result is a track containing nothing but silence… if this is the case, the waveforms are indeed identical (a rough sketch of this step is below)
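For what it’s worth, the null-test step could be scripted something like this (a rough sketch assuming numpy and soundfile are installed; the two filenames are just placeholders for the renders from steps 1 and 2):

```python
import numpy as np
import soundfile as sf

# Load the two renders (placeholder filenames).
a, sr_a = sf.read("per_track_gainers.wav")
b, sr_b = sf.read("master_gainer.wav")
assert sr_a == sr_b and a.shape == b.shape, "renders must match in rate and length"

# Phase-inverted merge: flip the polarity of one render and sum.
residual = a - b
peak = float(np.abs(residual).max())

if peak == 0.0:
    print("Silence: the two gain placements produce bit-identical waveforms.")
else:
    print("Difference present, peaking at %.1f dBFS." % (20 * np.log10(peak)))
```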

Renoise mixes internally with 32-bit (single) float datatypes… floats use scientific notation to achieve a massive range of possible values.

… the thing about floats is, you only have so many digits of precision for any given value. Two decimal digits of precision and one decimal digit of exponent, for instance, would give you values spanning many orders of magnitude, from something like 990 down to 0.001… In a 32-bit single float, this equates to 24 (binary) bits of precision and 8 (binary) bits of exponent.

In a 32-bit (single) float:
The minimum positive (subnormal) value is 2^−149 ≈ 1.4 × 10^−45. The minimum positive normal value is 2^−126 ≈ 1.18 × 10^−38. The maximum representable value is (2 − 2^−23) × 2^127 ≈ 3.4 × 10^38.

With such a huge range of possible internal mixing values, it would be almost impossible to clip a single track.
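If you want to verify the quoted figures yourself, here is a quick (unofficial) check with numpy:

```python
import numpy as np

info = np.finfo(np.float32)
print("smallest positive subnormal:", np.nextafter(np.float32(0), np.float32(1)))  # ~1.4e-45
print("smallest positive normal:   ", info.tiny)                                   # ~1.18e-38
print("largest finite value:       ", info.max)                                    # ~3.4e38

# In audio terms, that largest value sits roughly 770dB above full scale (1.0).
print("headroom above 0dBFS: %.0f dB" % (20 * np.log10(float(info.max))))
```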

Very interested to see (or rather hear) the results of this test!!! :walkman:

I think you will find you have just proved MY point! The master has the signal going into it at +192dB, so the Master ALSO has this headroom; there is nothing special about the Normal Tracks as compared to the Master.

EDIT: Just reread my post and it wasn’t as clear as it could have been. What I meant was: why does he state that Tracks have this headroom but the Master doesn’t, when what gives them this headroom is the floating point processing, which therefore applies right up until the moment the signal is converted to the bit depth you have set for your soundcard, i.e. right at the end of the Master chain?

You should stop thinking of mixing in 32-bit floating point as being like mixing on an analogue console.