Reach me the Butter

Hello there, everyone!

My name’s Maximilian Pfeiffer, known as Skelpolu when it comes down to music and gaming.
On the 23rd of Feb. 2012 I got myself Renoise as a birthday present - I will never regret wishing for it!
Up until this point I had made about 7 tracks I was happy with.
Looking back at them now, I feel embarrassed about how they were mixed and mastered - and I probably insult a lot of people by even calling it “mixing and mastering”.
It took me MONTHS to realize that there is something called “mastering” (in fact, I realized that just 2 months ago or so),
as well as the necessity of doing a good mixing job before even thinking about mastering.
I’ve been working with Renoise for over a year now and I am still not quite happy with my results from time to time.
I just can’t figure out how to solve a problem in “sound” sometimes. (For instance: “How can I make the bass more noticeable?”)

The reason I am here is that I wanted to get some feedback from users with a respectable amount of experience, not only with Renoise itself, but also with mixing and mastering.
I have a little goal: creating a track in, if at all possible, each and every genre out there.
At the moment I am at the genres Dance, Trance, Club and House, and have even created a track already.

Honestly though, it doesn’t feel right: the instruments, for example, aren’t coming through.
(My first guess would be that the sidechain is too heavy, though it might just be a completely wrong setup of the track volumes, so a mixing problem.)

Someone told me, on an older “master” of the track, that “the bass might not be enough as is, but overall it’s great to listen to” - alright, boosted the bass.
I really don’t like how the frequency scale in Renoise itself looks at this point - the bass LOOKS way too big by now, at least in my opinion (duh…?),
and I presume that doesn’t help the track.

At the moment I am working on a different track, same Genre, but it already feels different and … well, better. I just don’t understand why and that bugs me a lot.

I’d REALLY appreciate feedback and any help I could get on that topic - it’s not THAT long ago that I realized that I can’t go on like this forever without any help from other users.
Thanks, greetings from Germany, and see you later!

cool story bra

well i’m not an expert so don’t take what i say as the absolute truth. i have LOTS of songs with terrible mixing.

first of all, mixing and mastering is not as important as you might think it is. of course everything in the song should sound like you want it to sound on good speakers or monitors. there also shouldn’t be any parts that are physically painful to listen to, but you don’t have to strive for the professionally mixed sound. this could actually hinder your creative process if you obsess over it too much too soon. at the end of the day the song has to be good.

making bass more noticeable can go in a few different directions depending on what you want to achieve. you can add bass-heavy instruments to create more bass, you can boost the bass you already have, or you can lower the high frequencies by changing the volume of certain instruments. some instruments have high frequencies that you don’t consider to be an essential part of the sound, but they are still there. you can use a filter to restrict these instruments to low/mid frequencies, which reduces the loudness of the high frequencies altogether. some people just put an equalizer on the master track and yank the low frequencies up. it really depends.
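to make the filtering idea concrete, here’s a minimal sketch in plain Python - not Renoise’s actual filter, just an illustrative one-pole low-pass with made-up cutoff and signal values:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Simple one-pole low-pass: attenuates content above cutoff_hz."""
    # Smoothing coefficient derived from the cutoff frequency.
    dt = 1.0 / sample_rate
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out, prev = [], 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out

# A 100 Hz tone (bass) mixed with an 8 kHz tone (non-essential highs).
sr = 44100
sig = [math.sin(2 * math.pi * 100 * n / sr) + math.sin(2 * math.pi * 8000 * n / sr)
       for n in range(sr // 10)]

# Restricting the instrument to the low/mid range: the 8 kHz content
# is heavily attenuated, the 100 Hz bass passes almost unchanged.
filtered = one_pole_lowpass(sig, cutoff_hz=500, sample_rate=sr)
```

after filtering, the peak level drops from roughly 2 (both tones adding up) to roughly 1 (just the bass), which is exactly the “free up loudness by cutting non-essential highs” effect described above.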

you can change the scale to be linear but then you won’t see enough of the low-mid frequencies and the high frequencies are over-represented. the reason it’s usually a logarithmic scale is the way that sound works. the rule of thumb is: double the frequency is an octave i.e. 220 Hz is A-3, 440 Hz is A-4, etc.
also keep in mind that the goal is not to make all instruments have the same dB on the spectrum analyzer. that’s not how we perceive loudness. instead the spectrum analyzer is useful for seeing sound that you don’t hear right away (or at all), or that is louder than you think it is, so you don’t always have to rely only on your ears to pick up these problematic sounds or frequencies (unless you want them in your song).
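the octave rule is easy to check in a few lines of Python (a sketch; `a_freq` is a made-up helper, note names as in Renoise where A-4 = 440 Hz):

```python
import math

def a_freq(octave):
    """A doubles in frequency each octave; A-4 is the 440 Hz reference."""
    return 440.0 * 2 ** (octave - 4)

print(a_freq(3))  # 220.0 (A-3)
print(a_freq(5))  # 880.0 (A-5)

# On a logarithmic axis every octave gets equal width; the audible
# range of roughly 20 Hz to 20 kHz spans about 10 octaves:
print(round(math.log2(20000 / 20), 2))  # 9.97
```

that’s why the linear scale feels wrong: it gives the top octave (10-20 kHz) half the screen while squeezing almost all the musically relevant octaves into the left edge.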

songs “feeling” different has a lot to do with what samples you use, whether the samples are clean, what kind of song it is, what rhythm it has, and how the instruments work together. i know this is very general for an explanation but what it comes down to are details. if you make a techno song with short samples it will sound different from a techno song with samples that have a long tail. if you mix a song for a certain rhythm, the same samples and the same mix may not work for a different one. sometimes there are nasty dependencies when you use compressors, distortions or other effects over multiple tracks, etc. reverbs and delays can potentially make your song sound muddy if you don’t use them correctly. you should definitely know why you are using certain effects and why you’re using them in this or that order in the effect chain, so you don’t have too much unpredictable sound/noise in your song.

in the end mixing is about reducing unwanted noise that comes in from your samples, then adding very specific features to your samples that they previously didn’t have and adjusting all instruments to each other so it sounds “balanced”. after that you can think about accentuating certain frequencies of your song.

sadly the song you’ve posted is not available anymore but judging from your other tracks you’re doing pretty good. just keep on making music.

Thanks for your reply Mandulin!
Yeah, I deleted the track after a while because I’m not really happy with the result - I have literally no experience with mixing and mastering so far. Heck, I’m 17 years old and started with Renoise, and thus mixing and mastering, on my 16th birthday, so I know I can’t expect too much. One thing I’d like to add though is that I’ve improved at compression since this track - it was necessary to get used to, since I started doing Drum & Bass, and most D&B tracks are boosted quite a bit.

So anyway, thanks for letting me know about a few techniques for increasing the bass level - they all seem very plausible and logical, and I think I have seen the “higher-freq-reduction” method on albums like “In Silico” by Pendulum - they sound pretty good, but if you look at the tracks in a spectrum analyzer, you can clearly see the lack of anything above ~18k. I noticed it before, but now it makes sense as to WHY they did it, pretty nice.
So far I had to rely on the EQ boost on the master to get the bass going, as well as some frequency sculpting, so basically the same things you mentioned.
I am aware that not every instrument has to sit at the same dB - I wasn’t sure a few months ago, but luckily I’ve realized it by now.

Now, since “Reach me the Butter” I’ve tried to get behind compression a bit more, especially multiband compression using a multiband send + my favorite free compressor, TDR Feedback Compressor 2,
and I really don’t have the slightest clue as to the proper use of a compressor. I am not certain whether or not the goal is to smooth out peaks by having a fast attack and release rate and thus letting the compressor’s needle
dance at the loudest part of the song. I guess I misunderstood the use of a compressor, but somehow I managed to get a rather “good” result (for someone like me with no experience) for one of my latest tracks.

I always have the feeling that the highs should have a quick attack and release, to keep the really pulsating highs lower, and then go longer with both attack and release as I go down to the mids and bass, but that’s just how I feel about compression so far. It’s probably a bad idea to generalize it, but so far I have no other clue about it.

It would be really cool if someone could tell me if it sounds alright or not and what to improve and focus on next time. This time it won’t be deleted though, I’m sorry for the inconvenience last time.

https://soundcloud.com/skelpolu/resurrection

The compression settings for “Resurrection” are as follows - from the top (treble) over the middle (mid) to the bottom (bass):

I’m not sure I can help you improve your compression settings from that screenshot, but I have found that a lot of the production process becomes easier once you understand how each tool works.

In the case of a compressor, any incoming signal with an amplitude that exceeds a threshold value is scaled down towards that threshold; if the incoming signal does not exceed the threshold, then the amplitude remains the same.

Imagine a track where the amplitude can vary dramatically - maybe a vocal track. If the peak amplitude is too high, we have two choices to limit that peak: the first is to reduce the amplitude of the whole track by reducing the gain, though that would mean the quieter sections become even quieter; the second is to limit the track using compression. If we set the threshold above the amplitude of the quieter sections but below the peak amplitude of the louder sections, we can limit the track but preserve its ‘energy’: the amplitude of the signal that exceeds the threshold will be limited, but the amplitude of the signal under the threshold will remain the same.

In the example above, we can increase the gain of the track after compression, safe in the knowledge that the amplitude has been ‘averaged’ across the track and there are no ‘spikes’ in the amplitude that will cause ‘clipping’ in the mix (where the maximum amplitude for the output signal is exceeded, causing distortion).

The downside of compression is that you lose ‘dynamics’, because the subtle variations in amplitude that make something like a voice sound more human are lost.
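The threshold behaviour described above can be sketched as a toy static compressor in Python (names and numbers are illustrative; a real compressor also smooths the gain with attack and release times, which this ignores):

```python
def compress(samples, threshold=0.5, ratio=4.0, makeup_gain=1.0):
    """Static compressor sketch: amplitude above the threshold is scaled
    down by the ratio; amplitude below it passes through unchanged."""
    out = []
    for x in samples:
        level = abs(x)
        if level > threshold:
            # Only the portion above the threshold is reduced.
            level = threshold + (level - threshold) / ratio
        out.append(makeup_gain * level * (1 if x >= 0 else -1))
    return out

quiet_and_loud = [0.1, 0.3, -0.9, 1.0, -0.2]
print(compress(quiet_and_loud, threshold=0.5, ratio=4.0))
# the quiet samples pass unchanged, the ±0.9/1.0 peaks are pulled toward 0.5
```

After the peaks are tamed, the `makeup_gain` parameter is where the ‘increase the gain afterwards’ step comes in: the whole signal can be raised without the old peaks clipping.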

I hope I haven’t told you a bunch of stuff you already know, and that this helps you a little toward your goal :)

Note: the DSP heads in the forum will notice that I have omitted quite a lot of ancillary information, but I think the principle should be sound without it.

Aye!
Thanks mate, you actually did tell me one thing I knew before: that compressors limit.
But what I didn’t know was the exact usage: to limit ONLY the loudest parts of the track.
Now I also understand why the make-up gain is so important - to make the quieter areas louder without “maximizing” the loud stuff too much, if I’m not completely wrong.
So in a way, you changed my whole perception of Compressing - Thanks!

I will try to get a better feeling for Compression and use it on the next song I am already working on, Drum & Bass again, so pretty good for some Compression-Practice. ;)
And honestly, the ancillary information is not as important, considering I know the uses of the different parts of the compressor rather well by now -
it was just the concept that was stuck incorrectly in my mind.

So once again, thank you for helping me out by telling me what a compressor’s uses are, or at least when it comes to boosting the track’s volume after compression!
One question I’d like to get an answer for though: Is it worth making a new topic for the (completely unrelated) Track I mentioned, or should I post it here?

Hi

yes, what you guys have mentioned already.

another thing though, is the ability of an instrument to “cut through the mix” at all.
this again has to do with the “timbre” and the actual content of parts of the frequency spectrum.

like different human voices for example:
imagine a crowd, people talking all at once, one big mess, but you can very well hear that woman with that eeeeeky voice ;)
even though she is not really “screaming” louder …

even a bass instrument (e.g. tuba, bass guitar, low piano …) has frequency content up the range, giving it its “timbre”, making it “audible” and giving it contour, while the low frequencies mainly provide a fundamental “filling” element.
so if you manage to accentuate these a little bit more (by EQ), rather than boosting and pushing the low bass … that voice will APPEAR louder in the mix, because it becomes more noticeable, easier to locate.
it’s actually the same phenomenon as really low subwoofers: they can’t be located in the room, but rather “fill it” with low frequencies … the same goes for “your” bass … if after boosting the low end what remains is mainly a “sub”, it will never “cut through” and will never be “loud enough” anyway
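to make the point about upper harmonics concrete, here’s a little Python sketch (all names and numbers made up): a 60 Hz “sub” with and without harmonics, plus a crude single-bin DFT showing where the extra, locatable energy sits:

```python
import math

SR = 44100  # sample rate

def bass(n, harmonics):
    """60 Hz fundamental plus optional upper harmonics (k, amplitude)."""
    f = 60.0
    s = math.sin(2 * math.pi * f * n / SR)
    for k, amp in harmonics:
        s += amp * math.sin(2 * math.pi * f * k * n / SR)
    return s

N = 7350  # exactly 10 cycles of 60 Hz at 44100 Hz
pure_sub = [bass(n, []) for n in range(N)]
contoured = [bass(n, [(2, 0.4), (3, 0.25), (4, 0.15)]) for n in range(N)]

def bin_mag(samples, freq):
    """Crude single-frequency DFT magnitude (illustrative helper)."""
    re = sum(s * math.cos(2 * math.pi * freq * i / SR) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / SR) for i, s in enumerate(samples))
    return math.hypot(re, im) / len(samples)

# The pure sub has essentially nothing at 120 Hz; the contoured bass does,
# and that upper content is what lets the ear place it in the mix.
print(bin_mag(pure_sub, 120.0))   # ~0
print(bin_mag(contoured, 120.0))  # ~0.2 (half the 0.4 harmonic amplitude)
```

an EQ boost around those harmonics (120-240 Hz here) makes the bass more apparent without adding any more sub energy.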

then again, it’s physically “normal” that the bass takes up a considerable amount of the dynamic range; just look at a three-way speaker system … the bass speaker is the biggest and takes quite some energy to be driven and to move enough air in order to keep up with the overall loudness of reproduction.
it’s true that there is only so much dynamic range available, and compressors are the tool of choice to squeeze everything into that limit …
hmm … while some genres even kind of live off the trademark abuse of certain effects (like compression), one old-school “secret” that often gets lost on the way is that music, groove, doesn’t just live off an overwhelming amount of tones, hits and noises, but maybe even more off the “air” in between those … so try to let it “breathe” somehow despite all the compression ;)

and to your plan with the genres and stuff: yes, nice. but don’t try to learn Karate, Jiu-Jitsu, Taekwondo and Muay-Thai all at once in two months … or, even more, don’t think you have mastered any of it just because you have done some ^_^

i wish you good continuation and success (and a lot of fun along the way !!)

I certainly appreciate your words, and I see things the same way you do when talking about the plans - I intended to get a feeling for music in general, and going through the huge
amount of different genres out there is, at least in my eyes, a rather neutral way to get a basic idea of music, considering I have no theoretical knowledge about music whatsoever.
Though it seems I am currently settling down with Drum & Bass - but I said that with Dance / House, too. xd
So thanks 'nix!