I don't get it. I turn things down all the time, but by the end of a song every track has a compressor on it, some with two. Any tips for maximizing levels when using a lot of effects or plugins and multiple instances of the same sample?
Perhaps one of the essential steps to avoid leaning on compressors (with extreme settings just to make things stand out) is to keep each sound's frequencies in their place, using filters properly (Analog Filter, Digital Filter, or any kind of equalizer). That way you can remove the content that "shouldn't be there" and leave room for the specific sounds you need to highlight, giving them their own space.
Working with all of this, plus the volume controls, you should be able to get a correct-sounding mix with hardly any compressors at all.
The compressor is most useful for enhancing percussion sounds. But you can also check the samples, or the VSTi settings, and adjust other things to get a better blend without abusing the compressor.
As a general rule, the fewer effects on a sound the better: use only what is strictly necessary. Some samples may simply not be appropriate as a sound source. Sometimes just swapping the sample (the audio waveform) or correcting it (normalizing it or raising its volume) solves the problem, or at least means you don't have to push the compressor so hard.
If a sound really needs very heavy compression, the problem may not lie in how you use the compressor; the sample itself may be inadequate (the waveform is weak, or its level is too low). Combined with poor frequency cleanup on each sound, that will take its toll.
In the end it's just about managing frequencies and volume. Even a sample's loop settings can act as a corrector, because they affect the sound that's produced (its mix of frequencies). By adjusting the start and end of a loop you can change its "texture", keeping or removing certain frequencies and leaving room for others.
You can also adjust the width of the stereo field. That is, position each sound correctly in stereo space instead of letting everything play from the same point. A very direct example is symphony-orchestra music composed on a computer: if every instrument in a piece seems to come from the same point in space, the stereo field hasn't been worked on, and some sounds will struggle to stand out over others. All of this belongs in the mix.
Volume is one way; EQ carving (filter carving, in this case) really helps.
I tend to think of audio as a sculpture. One thing that really helped me actually came from this forum: EQ out all of the unnecessary frequencies on each individual track. This isn't the only way, but it can produce some immediate results.
Take a sine-wave sub-bass, for example:
Everything below 50 Hz and above 1 kHz is unnecessary (play around with that 1 kHz setting). With two Digital Filters, with the Chebyshev 8n mode selected at -2 dB, you can pretty much make a set of brick-wall filters (not limiters) to scoop out space for that track. For the next track, whatever that might be, perform a similar set of actions: find the frequencies that are unnecessary and trim those out. Repeat for each track. When you're done, your mix will have more clarity.
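Renoise's filter modes aside, the underlying move is just a high-pass and a low-pass in series. Here's a rough Python sketch of that band-limiting, using a generic 2nd-order RBJ-cookbook biquad rather than Renoise's Chebyshev 8n mode; the 50 Hz / 1 kHz cutoffs match the example above, but the Q value and sample rate are illustrative assumptions:

```python
import math

def biquad_coeffs(kind, fc, fs, q=0.7071):
    # RBJ "Audio EQ Cookbook" coefficients for a 2nd-order filter
    w0 = 2 * math.pi * fc / fs
    cw, alpha = math.cos(w0), math.sin(w0) / (2 * q)
    if kind == "lowpass":
        b0 = b2 = (1 - cw) / 2
        b1 = 1 - cw
    else:  # highpass
        b0 = b2 = (1 + cw) / 2
        b1 = -(1 + cw)
    a0 = 1 + alpha
    return (b0 / a0, b1 / a0, b2 / a0, (-2 * cw) / a0, (1 - alpha) / a0)

def filt(c, x):
    # Direct Form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    b0, b1, b2, a1, a2 = c
    x1 = x2 = y1 = y2 = 0.0
    y = []
    for s in x:
        out = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1, y2, y1 = x1, s, y1, out
        y.append(out)
    return y

FS = 48000
HP = biquad_coeffs("highpass", 50, FS)   # cut everything below ~50 Hz
LP = biquad_coeffs("lowpass", 1000, FS)  # cut everything above ~1 kHz

def band_limit(x):
    # Carve the sub-bass lane: 50 Hz - 1 kHz survives, the rest is trimmed
    return filt(LP, filt(HP, x))
```

A real brick-wall would stack more poles (the "8n" in that mode name means 8th order), but even this gentle version shows the idea: content outside the lane drops away, content inside passes nearly untouched.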
Visually, I would describe this as a set of notches cut out of a piece of wood or stone or metal. Notches will guide water to where it needs to go. Or think of it as plumbing. Some people do better with a combination of audio/visual description.
Compression is handy after this, but might not be as necessary, because the space that each sound needs has been created. You may only want to use compression at the end of the mix for an overall sound/volume boost. For example, this is a track I made with no compression until the very end, then used Reaper to "master" the audio. Full explanation in the post:
Compression can be a crutch if used to correct individual sounds. Personally, I'd rather make the sounds fit together with EQ and volume, then add compression as a "spice" rather than the main ingredient. Everybody works differently.
my opinion on your post:
i tend to use a clipper after eq/filters (or the one built into the Analog Filter via the "drive" slider) at the end of the chain on individual drums (Renoise's native Distortion works, but mostly airwindows stuff), so i can max out volume and sweeten up the transient without compressing/ruining it.
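The clipper-vs-compressor point above can be sketched in a couple of lines. This is a plain hard clipper (the airwindows plugins mentioned add saturation curves and oversampling on top); the gain and ceiling values are arbitrary placeholders:

```python
def clip(samples, gain=2.0, ceiling=1.0):
    # Boost first, then hard-clip anything past the ceiling.
    # There is no attack/release envelope, so only the very tips
    # of the transients get shaved -- unlike a compressor, the
    # body of the hit is left completely untouched.
    out = []
    for s in samples:
        s *= gain
        out.append(max(-ceiling, min(ceiling, s)))
    return out
```

On a drum hit lasting a few milliseconds, shaving 1-2 dB off the peak this way is usually inaudible, which is why it "sweetens" the transient where a compressor would dull it.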
try to use compression only as parallel FX processing, so you don't ruin the overall transients/dynamics on separate/group tracks
maximize volume by ‘mastering’ process:
when you finish your arrangement and creative FX processing in renoise > export the song, then (in reaper/renoise/audacity) maximize volume by normalizing to either a peak value or an rms value (-12/-14/-18 dB etc.)
do not use a compressor to maximize volume; use a gainer instead, followed by some clipping device or a maximizer (built-in), or do the maximizing by normalizing the waveform to target loudness levels…
try to separate the "loudness target" part from creating/arranging the tune
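The two normalization flavors mentioned above (peak vs. rms target) look like this in a minimal Python sketch; the target levels are the usual example values, and the rms version is only a rough stand-in for real loudness normalization:

```python
import math

def peak_normalize(x, target_db=0.0):
    # Scale so the loudest sample sits exactly at target_db dBFS.
    gain = 10 ** (target_db / 20) / max(abs(s) for s in x)
    return [s * gain for s in x]

def rms_normalize(x, target_db=-14.0):
    # Scale so the average (RMS) level sits at target_db dBFS.
    # Crude stand-in for loudness targets: true LUFS measurement
    # adds K-weighting and gating on top of a plain RMS like this.
    rms = math.sqrt(sum(s * s for s in x) / len(x))
    gain = 10 ** (target_db / 20) / rms
    return [s * gain for s in x]
```

Note that rms-normalizing to -12 dB can push peaks past 0 dBFS on dynamic material, which is exactly why the advice above pairs the gainer with a clipper or maximizer afterwards.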
i mostly do not bother myself with loudness, since i almost never finish any project
i separate my "in-the-making" process like this:
±create melodies, highpass them; create bass, lowpass it; chop some breaks, highpass those; add a punchy kick to the break (layered on a separate channel), lowpass it as well; group the drums, put an eq into a distortion so you can get some color on your drums as a concept, and so on…
creative FX (example: a huge snare drenched in reverb as a sound effect) >
±send kick and snare to a send track; route everything that's not kick or snare to a compressor sidechained from that send,
±send all drums to a send (create a parallel drums track), apply some highpass to relieve stress on the compressor, and squash the hell out of it; add some distortion in between, experiment with the order, etc.
±mask/de-mask frequencies depending on the song's genre/vibe/feeling/temper (create room for the snare: if the frequencies where the snare should sit are too bloated, either apply static eq/dynamic eq, or simply sidechain the hell out of them)
±notch resonant/trouble frequencies out
±example: add air: glue a group of tracks, say a string ensemble; process it as a group, make a parallel with some silky top-end distortion, and blend it so you only add "air" to the original group track, without morphing/phasing the sound far away from the original intention. (that isn't wrong either, but in my separation-phase context it is)
±beef up drums with some more parallel punch, as groups or separately (depends on the material, really)
± some minor tweaks in the dynamic range, maybe some really subtle eq boosts/cuts of 1-2 dB (again, really depending on the material)
± loudness correction (mostly maximizing volume, almost never the opposite)
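The parallel ("squash the hell out of it, then blend") step in the list above can be sketched like this. It's a crude static squash with no attack/release smoothing, and the threshold/ratio/mix values are arbitrary assumptions, but the shape of the trick is the same:

```python
import math

def squash(x, threshold=0.2, ratio=10.0):
    # Crude static compressor: anything over the threshold is
    # pushed down by the ratio (no attack/release envelope).
    out = []
    for s in x:
        a = abs(s)
        if a > threshold:
            a = threshold + (a - threshold) / ratio
        out.append(math.copysign(a, s))
    return out

def parallel_comp(x, mix=0.5):
    # New-York-style parallel compression: blend the untouched dry
    # signal with a heavily squashed copy. The dry path keeps the
    # transients' snap; the wet path lifts the quiet detail.
    wet = squash(x)
    return [d * (1 - mix) + w * mix for d, w in zip(x, wet)]
```

Because the dry signal passes through unprocessed, the loud-to-quiet ratio shrinks without the transients ever being touched, which is the whole point of keeping compression on a parallel track.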
these are not rules, just my methods; there is no rule anyone must follow to realize their own sound vision
one could combine phases 2 and 3 into one; it really depends on the person and their vision of how to create a tune… if you see this whole process as one step, then one step it is. i did it as "one" step for quite a long time, but it really gets messy when you try to edit things while the track is "fully produced": you cannot revert some changes.
Could you share a project, so we could take a look? It sounds like you are using too many sounds that share the same frequency range.
As said above, proper EQ application can really help reduce the need to automatically gain level with a compressor. A lot of issues are caused by frequency duplication and phase interference.
I'd say there is nothing wrong with a compressor, or even multiple ones, for the mix - if each step is justified by its influence on the sound. I often have 1 or 2 compressors going in the mixing step: one to shape the transients, another to level out the ratio of peaks/troughs in the volume envelope, or just to make things pump a little… The order of the compressors also matters, and EQ pre/post compressor will each influence what the compressor does in different ways.
Not sure if I understand the loudness thing with the compressor. Of course a compressor can boost gain, or raise the loudness of a sound… but isn't it, at the end of the day, about the mixer faders in the final phase of mixing, leveling sounds against each other so they blend well? So I can't see how a compressor would be used for this. If you have a quiet sound, fine, you can boost it with a gainer; but a compressor will always also alter the dynamics, and is normally used only for that purpose.
If you use compressors to raise the loudness of a track against the others, then you are doing something strange imho. And you risk destroying the sound with compression if you fall into the pit of boosting all the instruments again and again, trying to balance them the wrong way. You can always lower the volume of the louder sounds to level things out! And as said, EQing can greatly help with the clarity of sounds mixed against each other. Don't level to make sounds cut through against each other - level to balance them against each other, so each has the weight you intended!
Yeah, I have virtually no mastering process, and I've recently been hearing about/focusing on the idea of setting up your basic mastering chain as you mix/write, which I have yet to try but have slated for future tracks as a necessary tool. Definitely gonna try not to go too ham on parallel processing and all that, as my "one step" process leaves me with the same problems yours did!
I think one problem is that I sometimes use them to squash sounds, like separate layers of effects, so they maintain a consistent level, but other times basically as a gain, as long as the result is still under 0.0 dB after the compressors, and that probably leads to habitual use. Typically when I lower other tracks, though, it creates a drop relative to earlier parts that were VSTs or naturally loud/distorted.
yeah just let me make my vst a sample real quick
Stereo field editing is my go-to right now, but I recently got FabFilter Pro-Q 3, which I've been using since I find the stock filters/EQs difficult to get right, and that has led me to start separating frequencies. But I've always just thought of that in terms of affecting "murkiness"; I should think about its effect on volume through clarity. Thanks!
Maybe your problem is that you don't like working on "dry" signals so much, i.e. you want a full, punching sound already while tracking and mixing!
I also like to work with sound that has impact. I like to have my "roughmix" pseudo-mastering chain on all the time while working on my music. It will compress the whole mix, add some subtle distortion and room reverb, and push everything through a maximizer (limiter). Maybe just add a maximizer on your master, so you get an idea of what I mean… What I work with might not be optimal, but it is loud and has no spikes. It really makes a big difference imho to work through such a chain. In the final step, this chain gets disabled, and the output of the dry mix is then subject to final mastering.
What @OopsIFly says sounds good. Just looked into your file, and you sometimes use two compressors on a track. That's not necessary imo. Besides that, it sounds really good here (except that it's a bit overcompressed, but that's also a matter of taste). Maybe also add the Exciter to the master; it also makes the sound punchier and clearer.
What others already mentioned.
Consider what the "leading" part of your song is - bass, melody, beats, etc. Leave that as the dominant part, and then make the other parts "support" the main part.
Maybe it is the interaction between bass and drums, or effects and texture?
I use "negative mixing" - simply turning the gain down for supporting channels that go into the red, or ones that push the whole mix into the red.
Try "gaining-in" a track - for example hi-hats:
have a hi-hat track ready, then turn its gain down to "-INF". Now listen to your song, and sloowly add gain to the hi-hats until you reach that "ok, it's good enough" point.
As mentioned before by others, EQing helps - hi-hats might not need the lower frequencies, bass might not need the high-frequency parts, etc. You can see this in the Spectrum view.
There is also the old trick of using a signal follower to build a sidechain compressor - drums compressing bass, or vocals compressing beats, etc.
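That signal-follower trick boils down to two pieces: an envelope follower on one track, and a gain stage on the other that reacts to it. A rough Python sketch (in Renoise you'd wire the Signal Follower device to a Gainer instead; the attack/release/depth values here are illustrative assumptions):

```python
import math

FS = 48000  # assumed sample rate

def follow(x, attack=0.01, release=0.2):
    # One-pole envelope follower (the "signal follower"):
    # tracks the sidechain's level, rising fast, falling slowly.
    ga = math.exp(-1.0 / (attack * FS))
    gr = math.exp(-1.0 / (release * FS))
    env, out = 0.0, []
    for s in x:
        a = abs(s)
        g = ga if a > env else gr
        env = g * env + (1.0 - g) * a
        out.append(env)
    return out

def duck(carrier, sidechain, depth=0.8):
    # Turn the carrier (e.g. bass) down whenever the sidechain
    # (e.g. kick drum) is loud -- the classic pumping effect.
    return [c * (1.0 - depth * e)
            for c, e in zip(carrier, follow(sidechain))]
```

The release time is what gives the "pump" its character: short values make the ducked track snap back, long ones make the whole mix breathe with the kick.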