Mastering

Hey peeps,

I was wondering what you guys do to your songs to master them.

After I’ve finished mixing the tune and got it sounding as good as I can, I do one of the following:

I either:

a) Run the whole tune through T-RackS and use one of the presets.

b) Add a little “Peak Master” or the ME Loudness Maximiser in Wavelab

Both these methods do improve the song to a certain degree, but I can never quite get the overall loudness near 0 dB like commercial CDs. The song will peak at 0 dB, but the majority of the song is levelling way below that. I know we’re moving away from the “hot mix” a little these days, but most people who hear my music tend to judge it on this principle. They put on my CD and notice that it doesn’t sound as loud or as punchy as the commercial CD before it. Is there anything I can do at home, or is it off to Xarc Mastering?

Oh one last thing dudes if any of you got a minute:-

Can Renoise save the instrument settings of VSTi’s with a song? When I use the Edirol Hypercanvas drums in my songs I have to manually change the preset from piano to drums each time I load my songs.

Thanks peeps

Some previous info about Edirol can be found here (“bug” in Edirol).

I agree with Bantai: the better you mix down your separate tracks, the easier the actual mastering of the song will be.

As for mixing separate channels, try to shape the sound (using whatever FX) to get rid of all the frequencies you don’t want in there. Multi-band equalisers will be beneficial, and soft compression and/or filters as well.
The trick is to use your effects and stuff as tools to make your ideas come to life, rather than f****ing the sound up with heavy armory just because it sounds so weird.

You can also experiment with panning and stereo field adjustment; a bit of this helps it sound more natural, though I wouldn’t overdo it.

One thing I can really recommend is the Cyanide 2 VST effect by the Smart Electronix crew. I’m using it on almost every percussive element to work it out; it’s so easy to use and so effective - basically what every effect should be: a tool.

Also worth considering is transient design, which is a (quite) new approach to mixing that is far easier to get into than all that compressor/EQ stuff. You just have two sliders (attack/sustain) to handle, and it works well to get, say, a little more punch out of your percussion etc. It’s really worth a check on the mixing front. It doesn’t work for mastering though.

Hope this helps a bit. There’s more to say on this theme, but I’m hungry now. I’m off munching garbage.

What exactly is “mixing down”?
Is it taking all the separate channels into something else?

Ah Mastering… a dark art at best…

I’ve mastered 4 albums so far, each time I get better. You certainly train your ears a great deal in the hunt for a ‘better mix’.

Some general principles to remember:

  1. Like the other dudes say, get your pre-mix, including the individual channels, sounding good first. This will make the mastering seamless. Even if you’re making nasty hard music, try to make each sound ‘clean’.

  2. Bass is your enemy. Most home composers mix in too much bass, especially around 100-120 Hz… You can still get a punchier bass or kick sound by cutting 330 Hz by -6 dB (give or take) and not touching anything below it. Have a listen to the bass mix of your favourite pro tracks and you’ll realise how little bass there is in the mix. Taking that out gives you more headroom for a louder master.

  3. Make sure that your pre-mix has no clipping. If you’re clipping, then your mixing isn’t good enough. Once you’ve got your ‘mix down’ (e.g. in Renoise it’s the RENDER function), run a normalise over the whole thing, bringing the waveform up to 100%.

  4. Run a 30-band EQ on your entire track. 30 bands is barely enough, but Audition has a useful 30-band EQ where you can toggle the ‘bypass’ switch on and off. This is the hardest part of the whole art of mastering. Your aim is to notch-cut (by subtle amounts) the ‘nasty’ frequencies in your song. Each song presents different nasties. Generally I find them around 330 Hz, 600 Hz, and 5 kHz - but they could be anywhere else. Also, all frequencies above 8 kHz should be boosted, giving a sharper pro sound. Experiment with subtle changes and cross-reference with your favourite tracks.

  5. Use a hard limiter (compressor) to boost your sound. You should be able to get at least a +3 dB boost with a reasonable sound if you’ve done all the above right. If it sounds like sludge, then your mix needs to be better or your EQing needs to cut more bass. Pop music tracks can be amped by +6 to +9 dB, giving them that loud loud sound. Arty stuff tends to be a little more gentle with a +3 to +6 dB boost. Again, cross-reference the pro stuff to get a sense of where you are aiming. (There’s a rough numerical sketch of steps 2, 3 and 5 after this list.)
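
For anyone who wants to see the arithmetic behind steps 2, 3 and 5, here is a rough Python/numpy sketch - not anyone’s actual mastering chain, just an illustration. The -6 dB cut at 330 Hz and the +3 dB boost are the numbers from the list above; the function names and the placeholder signal are assumptions, and the hard clip is only a crude stand-in for a real limiter plug-in.

```python
# A minimal sketch, assuming a mono float signal; names and signal are placeholders.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0=330.0, gain_db=-6.0, q=1.0):
    """RBJ 'Audio EQ Cookbook' peaking filter: cut/boost gain_db at f0."""
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], x)

def normalise(x):
    """Step 3: peak-normalise so the loudest sample sits at 0 dBFS."""
    return x / np.max(np.abs(x))

def crude_boost(x, boost_db=3.0):
    """Step 5, crudely: make-up gain plus a hard clip at 0 dBFS."""
    return np.clip(x * 10 ** (boost_db / 20.0), -1.0, 1.0)

fs = 44100
mix = np.random.randn(2 * fs) * 0.1                   # placeholder "mix down"
master = crude_boost(normalise(peaking_eq(mix, fs)))  # 330 Hz cut, normalise, +3 dB
```

The useful bit is the gain maths: -6 dB is roughly a factor of 0.5 on the filter gain at 330 Hz, and +3 dB multiplies the sample values by about 1.41.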

Expect to toil with this dark art for about a year before you get it right. Have fun!

My approach is this:

I render all instruments to WAV from the host application in the highest quality possible (32 bit float for the bit depth if possible, and the sample rate usually just 44.1 kHz).
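
If you ever need to write stems like that from code rather than from the host, a 32-bit float WAV can be written with the python-soundfile library; this is just a sketch with a placeholder tone and a made-up file name.

```python
# Minimal sketch: write a stem as 32-bit float WAV at 44.1 kHz (python-soundfile).
import numpy as np
import soundfile as sf

fs = 44100
stem = 0.25 * np.sin(2 * np.pi * 220 * np.arange(fs) / fs)   # placeholder audio
sf.write("stem_bass.wav", stem.astype(np.float32), fs, subtype="FLOAT")
```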

I listen to them and check the waveforms in Sound Forge for any obvious problems that need to be fixed, such as rumble or pops that need removing, then start loading the WAV files into Sony Vegas (a multi-track editor) and apply the necessary processors to counteract these problems (I only use Waves plug-ins).

I then multi-band compress the sounds that need compression, and EQ out rumble and unwanted sub-bass harmonics from those that need it (usually all of them, and especially pads, leads and hi-hats).
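
For anyone without the Waves plug-ins, the rumble / sub-bass clean-up can be approximated with a plain high-pass filter; here’s a sketch using scipy, where the 40 Hz cut-off, the file names and the zero-phase filtering are assumptions rather than anything from this workflow.

```python
# A minimal high-pass "rumble removal" sketch (scipy Butterworth, zero-phase).
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

def remove_rumble(x, fs, cutoff_hz=40.0, order=4):
    """Strip rumble / sub-bass below cutoff_hz."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=0)

pad, fs = sf.read("pad_stem.wav")                # hypothetical stem
sf.write("pad_stem_hp.wav", remove_rumble(pad, fs), fs, subtype="FLOAT")
```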

At this stage I have a rough mix set up with all the multi-band compressors and EQs intact, and the track volumes roughly set.

Then it’s on to the convolution reverbs, which I add in varying amounts to every track. Since this doesn’t change the volume much, I volume-balance everything before this step, as my processor can only handle one reverb effect playing at a time.

I then export the multi-track project to a single WAV file and apply further multi-band dynamic compression with the Waves LinMB, followed by the Waves L3 MultiMaximizer for multi-band limiting. I save these settings, load them onto the master channel of the multi-track project in Vegas, and export the final mix one more time with these last two effects now added for optimum results.

And that’s it.
:eek:

At least that’s the general formula I follow anyway, but it does vary slightly depending on the song of course.

Foo?, interesting to read a post from someone in the scene who actually has some experience with all this mastering stuff. I was beginning to think I was the only one. :)

I find it both odd and interesting that you don’t mix down and ‘master’ each track individually before treating the track as a whole, as the quality of the final master certainly improves tenfold if the basic elements have good quality to begin with. What do you do if you have to roll off the bass frequencies only on the cymbals of a track? If you try to do that on the final mix, you’re obviously going to affect all the other instruments as well. And why a graphic EQ? You can’t set the bandwidth of your cuts or boosts, which is why I stopped using them years ago. The only place I’d really see them being useful is for live performances, or I guess for mastering the mixed-down song if you need to perform more than 6 cuts or boosts or so, yeah.

That’s where I find XARC Mastering really stupid, actually. I went to their site once and read that they only master the final mix down - such a rip-off if you ask me. So Zooby, if you’re interested, contact me and I’ll do it for free if I think your song is great. :)

But Foo?, as for your approach, it really sounds more like an EQ job to me. You’re not even doing any multi-band compression on the final mix, which is certainly one of the most important things to do. And please, limiting by 6 to 9 dB?! My absolute maximum amount of attenuation is 2.5 dB when limiting, though it helps if you multi-band the mix first, of course. But even 6 dB is way too much.

And just one more thing: why normalise in step 3? There’s really no need if you ask me - just leave the normalising until the end, when the limiting is done.

Anyhow, interesting post really. I can relate to a lot of the other things, especially the bass problem. Know exactly what you mean there. :rolleyes:

This is generally how I do it now in Renoise (and how I did it with Xkraft when we were using ModPlug)

  1. I get the mix in Renoise sounding “clean”, as I believe Foo said. This pretty much means the Renoise 10-band EQ on every track, although I sometimes use a different parametric EQ if I need to be more precise. Also, I’ll usually use a compressor on the beat and bass. Of course, this step also includes tweaking all the track effects I’ve used in the song, and setting pan values. I actually do all of this in “real time”, i.e. while I’m tracking: if I introduce a new track, sample, or instrument, I automatically stick an EQ on it and tweak it. When the track is ready for mix down, I usually go back and tweak a little if I hear any glaring mistakes.

  2. Then I render to WAV. In the past I’ve always rendered to 44.1 kHz and 16-bit. On future tracks, however, I will start experimenting with higher bit depths and sample rates.

  3. Usually, I load the mix down into Wavelab and normalize the wave. I mainly use the Steinberg mastering suite of plug-ins. I stick the “Loudness Maximizer” plug-in in the last “rack” position. This plug-in is a compressor/limiter that works EXTREMELY well with practice. I usually don’t do any major EQing here because, by now, any EQing is done already, assuming I followed through with step 1. I might add a touch of some sort of “aural exciter”, like the Sonic Maximizer, or Steinberg’s version of it (I forget what it’s called).

That’s pretty much it. To see the results, you can check out my track “Mindblowing” at www.ctgmusic.com/sonus

I pretty much followed the above steps with that track.

Peace.

I almost forgot:

It’s really important to try to make “space” in your mix. What this means is that it’s important to give each dominant instrument a separate and dedicated frequency range, otherwise the mix will sound like muddy shite.

For beats (since I’m now producing hip hop, trip hop, and urban styles), I generally cut, if not kill, bass frequencies below 100 Hz on everything except the kick. Snares will usually get cut in the highs too, or maybe a little mid boost instead. Hats and such will get the bass killed, and usually that’s all, unless I want to drastically alter the sound of my sample.

The same separation techniques apply to everything else really. If you have a piano that’s playing on top of everything, cut the bass and boost either the mids or highs, depending on how you want it to sound.
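
A quick way to check whether two parts really are fighting over the same range is to compare their average spectra. This is only a sketch (numpy plus the python-soundfile library); the stem file names and the 20 dB “loud” threshold are arbitrary.

```python
# Sketch: where do two stems overlap in the spectrum?
import numpy as np
import soundfile as sf

def avg_spectrum_db(path, n_fft=8192):
    """Average magnitude spectrum of a file in dB, folded to mono."""
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)
    frames = x[: (len(x) // n_fft) * n_fft].reshape(-1, n_fft)
    mag = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)).mean(axis=0)
    return np.fft.rfftfreq(n_fft, 1.0 / fs), 20 * np.log10(mag + 1e-12)

freqs, kick_db = avg_spectrum_db("kick_stem.wav")    # hypothetical stems
_, piano_db = avg_spectrum_db("piano_stem.wav")

# Frequencies where both stems sit within 20 dB of their own peak level.
clash = freqs[(kick_db > kick_db.max() - 20) & (piano_db > piano_db.max() - 20)]
if clash.size:
    print(f"Both stems are busy between roughly {clash.min():.0f} and {clash.max():.0f} Hz")
```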

There’s a plug-in called “SpaceBoy” that I hear is excellent for this application. The way I understand it, two instances of SpaceBoy communicate with each other, making sure that the audio passing through one doesn’t interfere with the other. They give an example of using it on a kick and bass on the website (just google it). I don’t have it yet, but I’m considering buying it.

Also, some multiband compression could be used to accomplish this separation, but I’m not going to explain that whole process here, as I’m no master at it.

Peace

Atlantis:

By bouncing each individual track/instrument down to WAV, EQing, and multi-track mixing outside of Renoise, aren’t you multiplying the noise floor?

I think you had a discussion about this in the ctgmusic.com forums…

It seems kind of strange that you would do this. Why don’t you just do that step within Renoise? I’ve used the Waves plug-ins in Renoise for that purpose before. I just find the built-in DSP effects to be less processor intensive.

That’s exactly right, yep - but not if the host can render with floating-point precision, so that the extremely low noise-floor values will be significantly smaller than if they were stored as integers.

But, as with anything, there are always going to be disadvantages; the advantages and possibilities offered by mastering each track individually clearly outweigh them, though.
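
A toy numbers check of that point, just as a sketch with an arbitrary signal: quantise a very quiet stem to 16-bit integers versus keeping it in 32-bit float, and compare the size of the error each one adds.

```python
# Sketch: error floor of 16-bit integer storage vs. 32-bit float for a quiet stem.
import numpy as np

fs = 44100
t = np.arange(fs) / fs
quiet = 1e-4 * np.sin(2 * np.pi * 440 * t)          # a stem around -80 dBFS

as_int16 = np.round(quiet * 32767).astype(np.int16) / 32767.0
as_float32 = quiet.astype(np.float32).astype(np.float64)

def err_db(original, stored):
    """RMS error relative to full scale, in dB."""
    return 20 * np.log10(np.sqrt(np.mean((original - stored) ** 2)) + 1e-30)

print("16-bit quantisation error:", round(err_db(quiet, as_int16), 1), "dBFS")
print("32-bit float error:       ", round(err_db(quiet, as_float32), 1), "dBFS")
```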

I could do that step within Renoise, but there are two reasons I don’t:

  1. I don’t really produce myself but rather master other people’s music, and I get projects coming from Reason, Sk@le Tracker, or whatever. Obviously I’m not going to waste my time setting up all the songs I get this way.

  2. The disadvantage of doing it in the host is that you can’t use high-quality tools. The Waves linear-phase plug-ins are all I ever use, and so I can’t have more than 2 audio tracks enabled with compressors and equalisers, and only 1 if I strap on a reverb plug-in.

So yeah, the Renoise built-in DSP plug-ins are a lot less processor intensive, but does the quality match up to the Waves plug-ins? :rolleyes:

And a comment on your mastering process: “Usually, I load the mix down into Wavelab and normalize the wave.” Again, why would you normalise the wave here? The Loudness Maximizer will do this for you as well, so there’s really no need to do it twice (you should always keep the number of processing steps to a minimum, as you already know after having brought up the issue of the noise floor increasing when mastering my way).

Will check out your track though. :)

So yeah, that’s just what I’ve progressively learned over the last couple of years, and it’s why I really can’t master in the host application or on the final mix down instead.

Sonus:

Quite nice (Mindblowing), but I suggest more final-mix compression and less limiting. I never understand why urban music always has to be limited so harshly that every kick and snare drum distorts. :(

Atlantis:

About normalizing before applying compression/limiting…
Why not? If I normalize beforehand, the output gain on my compression plug-in will be less extreme. I don’t see the difference, really. Either way, you’re increasing the amplitude of the wave. In your method, your output gain will just be higher, but you accomplish the same thing. I really don’t see how that affects the noise floor. You may be right though.
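
For what it’s worth, the “it’s just gain either way” part is easy to check numerically; a small sketch with a random placeholder mix shows that, in floating point, normalising first and boosting later gives the same samples as applying the combined gain in one go. Where the order starts to matter is once a non-linear stage like the limiter sits in between.

```python
# Sketch: a normalise step is just linear gain, so it commutes with other gain.
import numpy as np

rng = np.random.default_rng(0)
mix = rng.standard_normal(44100) * 0.3            # placeholder mix, peaks < 1.0

norm_gain = 1.0 / np.max(np.abs(mix))             # normalise to 0 dBFS
boost = 10 ** (3.0 / 20.0)                        # +3 dB of make-up gain

two_steps = (mix * norm_gain) * boost
one_step = mix * (norm_gain * boost)
print(np.max(np.abs(two_steps - one_step)))       # ~1e-16, i.e. negligible
```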

Anyway, you think the kick/snare in Mindblowing distorts? The samples themselves aren’t clean - they’re not supposed to be :-). They were lifted off of vinyl.

-Sonus

ps - I can’t believe you abandoned composing for mastering, although I think I partially understand. There’s nothing like making your mix sound hot and banging :)

peace.

Nice thread, very helpful. I’m going to try SpaceBoy as soon as I’m home.

What I miss in this thread is the use of panning. I tend to use panning to give each track/sound its own space in the stereo field. Mostly this is only necessary for bass and beats, so I’d copy the (beat) track and pan the first copy left and the other right. For the bass I do the same, but with a different value.

This seems to give more space, but maybe it sounds like crap if you hear it on a professional sound system.

Any comments?

Yeah, no need to pan the same beat track left as well as right, as it only doubles up the volume. And as soon as you have a slight delay on one of them, you get the kind of effect that you certainly don’t want to hear on a kick or bass. Just pan them in the centre and pan the rest around them.
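
Both claims are easy to verify with a quick sketch (numpy, placeholder signal): two identical hard-panned copies collapse back to the same mono signal at double the amplitude, while delaying one copy by about half a millisecond puts comb-filter notches right in the midrange.

```python
# Sketch: duplicated L/R copies vs. one copy slightly delayed.
import numpy as np

fs = 44100
x = np.random.randn(fs) * 0.1                     # placeholder beat

left, right = x.copy(), x.copy()                  # duplicate, pan hard L / R
print(np.allclose(left + right, 2 * x))           # True: same signal, just +6 dB

delay = 22                                        # ~0.5 ms delay on one copy
comb = left + np.concatenate([np.zeros(delay), right[:-delay]])
nulls = (2 * np.arange(3) + 1) * fs / (2 * delay)
print("the spectrum of `comb` has notches near", nulls.round(), "Hz")  # ~1000, 3000, 5000
```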

Very interesting thread for noobs like me, thx!
It’s also interesting to see that everyone has a different recipe for making good sounds, I’ll try to find my own one.

Everything below ~150 Hz should be completely mono - one of the golden rules of mastering.
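
One common way to enforce that rule, sketched here with scipy and the python-soundfile library, is a mid/side split with a high-pass on the side channel, so that nothing below the cut-off differs between left and right. The file names, filter order and zero-phase filtering are assumptions.

```python
# Sketch: make everything below ~150 Hz mono via a mid/side split.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

def mono_the_bass(stereo, fs, cutoff_hz=150.0, order=4):
    """High-pass the side channel so the low end is identical left and right."""
    left, right = stereo[:, 0], stereo[:, 1]
    mid, side = (left + right) / 2.0, (left - right) / 2.0
    sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    side = sosfiltfilt(sos, side)                 # strip low-end stereo content
    return np.column_stack([mid + side, mid - side])

mix, fs = sf.read("final_mix.wav")                # hypothetical stereo mix
sf.write("final_mix_monobass.wav", mono_the_bass(mix, fs), fs, subtype="FLOAT")
```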

Can someone recommend a good multiband compressor? I don’t have a lot of money, so something in the $50-80 price range.

Look up the engineers at some of your favorite artists and see if they did any interviews. Check out http://mixonline.com/