The MIXING thread

Spectacular advice. I’m curious @TNT or @slujr if you could give any insight on taking the step from having mixed in mono to adding stereo interest.

Panning seems fairly intuitive (to the same degree as getting your anchors leveled correctly, say), but adding width is where I struggle. I generally only add width to a few elements, and I don’t think I ever set the surround on the stereo expander to more than 50% (unless I’m doing specific mid-side processing).

I also a/b compare to mono to make sure the sound I’m widening can “take the hit” in mono mode.

are there any more advanced / purposeful methods that people take with “adding stereo” to a working mono mix?

Absolutely! But not only the selection of sounds matters. The sound quality is pretty important, too.

Yup. All you need is a stereo expander adjusted to mono in your master, then you just have to adjust the volumes of your instruments while checking through speakers at low volume. At least that’s how I start to mix.

Yes, this is the best solution in case you want to have a same-sounding reverb on several instruments. According to professionals this reverb should always be 100% wet. Personally I prefer an amount I like, it doesn’t need to be 100% all the time, even though it mostly is. But I have to say that I usually use this method to gain some “air” in the background. I prefer to have different reverbs on several instruments.

Yup, this is recommended if there are some disturbing frequencies in the sound. But I would only do it if the sound really doesn’t sound as expected and intended. But the native Renoise EQ isn’t the chosen one for such adjustments. I would recommend the Melda EQ, which is free.

Hm, not sure what you’re after, there is not much I can tell. All I do is put a Stereo Expander in mono in my master track and activate it. Then I do all the mixing stuff as described. When everything sounds good in mono I simply deactivate the Stereo Expander and everything is stereo again. Last thing to do is to fine-tune the mix in stereo. That’s all. Just like you I only “add stereo” (expand up to 50%, surround max 15%) to specific instruments, primarily pads (gaining a “sound carpet”). When it comes to short synths or percussion I tend to use stereo delay effects and not a reverb. Is this what you wanted to know?

2 Likes

A lot of the stereo interest is on a per instrument basis in my music. I do a lot of randomized panning on rhythmic sounds so that they move across the field, which doesn’t negatively impact the mono signal. One thing I’ve experimented with is doing a mid/side process of a finished mix and using a stereo expander on the side signal’s mid and high frequencies. I’ve liked the result, but ymmv :slight_smile:

stereo fx like flanger, phaser, delay (as long as the left and right delay times aren’t identical), and chorus could also be interesting to experiment with adding to side mids/highs, but I haven’t tried it. might be trash :upside_down_face:

bumping eq here also makes the side image pop, but you probably already know that

2 Likes

Why exactly do you think the Renoise EQ is not right to do such things?

I’m not sure what you mean, my question wasn’t regarding EQing, and not as far as I know :^)

Oops my bad, seems that I answered to the wrong post it was aimed at @TNT

1 Like

Exactly, same here. Panning on percussions, short synths, fx like delay output or reverb, no problem at all. The volume remains the same. The only problem I’ve noticed is when you want to expand the stereo signal. The more you expand the louder it gets in stereo, but it doesn’t affect the mono signal. I always bear that in mind while mixing.

I didn’t say “not the right one”, I said “not the chosen one”. You can do that with the Renoise EQ, too. But the Renoise EQ only has got a small window and is not as accurate as the Melda EQ, that’s why I prefer the Melda EQ for surgical interventions.

3 Likes

It’s just easier when the spectrum is superimposed on the eq curve, too.

I like to find the problem frequency on the spectrogram, then just type in the exact frequency using the full display on eq 10. Works fine, afaict

2 Likes

Actually, I’m not familiar with this – do you mean applying a boost (high mids, low highs, maybe?) to the isolated side signal?

Also, since you brought up chorus, flange, phase – I love the way these effects sound, but I generally have a hard time managing the extra mid-range activity they add. Is there a simple way to keep mid-range under control, or to deal with it via more processing?

Thanks!

1 Like

oh then I understand, actually I’m gonna try it I guess, I was used to the Logic EQ and tried TDR Nova when moving to renoise but didn’t like it that much

Yes, this exactly. To taste :slight_smile:

Multiband compression works, but I usually just compress the signal and cut mids on eq if it needs it. I’m usually using these fx on mid range leads, so sometimes the boost is what I’m looking for

Here is a generative loop for you guys to critique.

what do you think about the balance?

lots of borrowed ideas in here, as well as all synth sounds from Legowelt’s sample pack page on the shadow wolf website. I tried to credit the right people in the song comments.

Am I doing mid/side right?

the Valhalla freqecho (free vst) is used on the “stepper” track.

(I don’t know if google drive links work on this forum, here goes nothing…) https://drive.google.com/file/d/1QN_jNimIlfPuWSk7TmB8btJvLLIoieqa/view?usp=sharing

It’s ok, but the hihat is too loud. And I would strongly recommend not to put three different instruments in the same track. Kick, snare and hat do need different treatment.

No, actually you’re not doing anything in terms of mid/side. All you’re doing is sending the signal to two different FX tracks, but without splitting the signal into mid and side. So basically there’s no difference between your tracks “m” and “s” except the devices you’ve put in these tracks, but you can’t do mid/side processing this way. You need to split the signal first. There are several options to do so. I would upload your pattern including mid/side, but it’s too big and cannot be uploaded, so check the template thread instead. :wink:
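For anyone wondering what “splitting the signal first” actually means, here is a minimal numpy sketch of the standard sum/difference math behind mid/side (just the underlying idea, not a Renoise device chain):

```python
import numpy as np

def ms_encode(left, right):
    """Split a stereo pair into mid (sum) and side (difference) signals."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    return mid, side

def ms_decode(mid, side):
    """Recombine mid and side back into left/right stereo."""
    return mid + side, mid - side

# A mono signal (identical L and R) has an empty side channel:
l = np.array([0.5, -0.25, 1.0])
r = np.array([0.5, -0.25, 1.0])
mid, side = ms_encode(l, r)
print(side)  # all zeros: mono content lives entirely in the mid
```

This is also why simply routing to two send tracks named “m” and “s” does nothing mid/side-specific: without the sum/difference step, both tracks still carry the full stereo signal.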

Are you using an outdated version of FreqEcho? Renoise doesn’t recognise it and the song cannot be opened correctly. I’ve got FreqEcho installed myself (64bit Win) and it works fine, but it seems to be a different one than the one you’re using.

Thanks for the reply!

I downloaded your template a week or two ago. I’ll study it. I seem to remember you updated your M/S technique and replaced it.

I agree and actually already eq’d the hi hat to tame it somewhat.

the rhythm section elements (snare, kick and hat) are processed individually in their respective instrument fx sections. I feel like the flexibility afforded me there is ample for creating differentiated sonics.

Thanks for the clarification regarding Mid/Side. I thought I was doing it wrong.
Turning the mid track to mono and then using zenspheres “util - side isolator” device on the side track would do the trick, I thought. I’ll figure it out.

I’m not sure about the FreqEcho, except that I’m on a macbook so maybe that is the issue. I actually tried to replicate the effect using all native devices but couldn’t approximate the Bode pitch shifting.
LPC delay SYNC w SHIFT.xrdp (29.7 KB)

1 Like

Here is a simple Haas effect setup. You just adjust one delay either right or left and keep the other one at 1 and you get super wide mixes. Be careful tho it can easily cause phase issues so always check by setting the mix to mono. Because of the phase issues I usually only use Haas on high frequencies.
edit: You could set the minimum delay to 0 ms by enabling line sync, I think, but then you no longer see exactly how long the delay is.
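The Haas setup described above boils down to delaying one channel by a few milliseconds. A rough numpy sketch of the idea (the delay and sample-rate values are just illustrative):

```python
import numpy as np

def haas_widen(mono, delay_ms, sample_rate=44100):
    """Widen a mono signal by delaying one channel by a few milliseconds.

    Delays of roughly 1-30 ms are perceived as width rather than a distinct
    echo; summing the result back to mono causes comb filtering, which is
    exactly the phase issue to check for.
    """
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    left = mono
    # pad the right channel with silence, then trim to the original length
    right = np.concatenate([np.zeros(delay_samples), mono])[:len(mono)]
    return left, right
```

Summing `left` and `right` back together is the quick mono check: any comb-filter notches you hear there are the phase cost of the widening.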

1 Like

You’re absolutely right! This works exactly as described. You just have to put the device by @slujr on the side track. Or gainer (inverse R), stereo X (mono), gainer (inverse R). All I can see on the side track is an EQ5. Or you can do it just like @Jek and put the chain send (side) - stereo x (mono) - send (mid) - gainer (inverse left and right) - send (side) on your drum bus or whatever.
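For anyone wondering why the gainer (inverse R) → stereo X (mono) → gainer (inverse R) chain isolates the side signal, here is a small numpy sketch of the same arithmetic (device names as in the post above, math per the usual mid/side definition):

```python
import numpy as np

def isolate_side(left, right):
    """Reproduce the gainer -> mono -> gainer chain in plain arithmetic."""
    # gainer (inverse R): flip the polarity of the right channel
    l1, r1 = left, -right
    # stereo expander in mono mode: both channels become the average,
    # which is now (L - R) / 2, i.e. exactly the side signal
    mono = (l1 + r1) / 2.0
    # gainer (inverse R): flip the right channel back
    return mono, -mono
```

Feeding it a pure mono signal (identical L and R) returns silence, which is the quick sanity check that the chain really removes the mid.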

1 Like

A lot of interesting discussion here. :nerd_face: :blush:

I’ve been thinking about the mono compatibility issue for a few years and reached my own conclusions:
While mono checking can be really useful for checking your overall balances and picking up any potential phase issues, I couldn’t give a damn about mono compatibility, at all. :grin:

If someone actually listens to music in mono, that’s their loss, but I’m not going to start making my music mono compatible just because mono bluetooth speakers are a thing and some people listen to music through their phone’s speaker. If I thought this way, I would seriously start to limit my creative toolset for a rather irrelevant reason. Even the good old-fashioned audio demonstration of feeding a mono signal into a stereo setup and flipping the polarity of one speaker - that’s an effect you could actually use right there. A creative possibility I would never ever bump up against if I was worrying about how the mix sounds in mono, and a pretty damn unique one at that. Yeah, the sound won’t even be there in mono, just like your side signal, but in stereo it can sound so god damn cool. It can even sound like the sound is coming from behind you, depending on the characteristics of the particular sound and your room. :exploding_head:
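That polarity-flip demonstration is easy to verify numerically: the signal is fully present in stereo but cancels to silence when summed to mono. A tiny sketch:

```python
import numpy as np

def out_of_phase_stereo(mono):
    """Place a mono signal out of phase across the channels: identical on
    the left, polarity-flipped on the right. In stereo it sounds diffuse,
    'outside the speakers'; summed to mono it cancels completely."""
    return mono, -mono

sig = np.array([0.3, -0.7, 0.5])
l, r = out_of_phase_stereo(sig)
mono_sum = (l + r) / 2.0  # all zeros: the sound vanishes in mono
```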

I think about mixing in terms of three dimensions: amplitudes (volumes), frequencies and spatial information. All of those need contrast and movement to make an interesting mix (well, IMO at least), and a lot of mixing problems can also be solved or helped in the domain of spatial information: say you have two instruments that are masking each other, but also have sounds that you really like and don’t want to mess up. Well, you could “mirror EQ” them, but then you would risk ruining the sounds (frequency domain). You could sidechain one to the other, but this might cause pumping artifacts (volume domain). This problem can also be solved by just hard panning the elements on opposite sides of the stereo image (spatial information domain). In stereo listening situations that might be all you need, though I usually do use some subtle frequency carving and pushing in those situations too, just to make the sounds even more distinctly separate. And yes, this is a generalisation and a simplification, these are obviously not the only choices. But my point is that, if I would be too focused on how the mix sounds in mono, I would never even come up against this as a potential solution. And hence I would be actually sacrificing my stereo mix, only for the sake of having a good mix in mono. And as far as I’m concerned, that’s a really bad deal.

But yeah, since spatial stuff is a big part of my music making and mixing (I’m into binaural stuff big time), I just simply don’t care about how my mixes sound in mono, at all. That being said, I do have a dedicated, single driver speaker setup for mono checking (contradictory and paradoxical, I know :grin:). For me, in a music making/production context mono means summing everything into a single driver, not playing my 2.1 in mono. That’s because in the real world, if someone is listening to music in mono, it’s most likely through a single driver. If I play my 2.1 in mono, I’m still getting the speakers’ crossover filters in the signal path, and have multiple drivers pumping out different information. I’m probably thinking about this way too deeply, but as I said, I started thinking about this a few years ago. :grin:
Mixcube type speakers (single driver, full range speakers with closed enclosure) have many benefits, some people swear by them. I just use one as my second monitoring system.

Well, there’s many approaches. One thing I’ve learned is that thinking in terms of contrast is a really good approach for making your track feel wide/spatially open. I always have a stereo expander on my mixbus, and before drops or anything that requires an addition of energy, I make the stereo image narrower, even totally mono if that serves the purpose. Then when the track needs the energy back, I just open the stereo field back to normal. You could also go in the opposite direction. Automation is the word of the day here.

You can also make a track have contrast in the spatial dimension by having a single element that is extremely different spatially. It could be your extremely wide reverb against a mono vocal, or it could be some effect in your side information. Or it could be a single mono element in otherwise a very wide mix. It all depends on the situation.

Then there’s the whole world of binaural stuff. That’s where you can go beyond the normal stereo image in headphone listening. Some binaural stuff is just straight up magical, it can feel like you’re really there. I have a couple of plugins and impulse responses for binaural stuff, and a pair of in-ear microphones for recording my own binaural foley/ambiance etc. The whole immersive audio in general is really worth looking into, if you haven’t already.
I think that with binaural, the same idea of contrast works really well: if you make everything binaural, it can get really messy, and my brain actually loses the pristine 3d image of the sound. Sort of similar to how making everything really wide can make your mix feel unfocused and lacking in the middle of the stereo field. You could either create the contrast with different elements playing simultaneously (e.g. one binaural element against many non-binaural elements), or with arrangement and structure (having one section that’s a binaural recording in the middle of a more “normal” mix can be super duper powerful and really pull the listener in).
You can also think of the binaural stuff as an alternative to panning. Now you don’t just have the axis of L to R, but a whole 360 degree space to place your sounds into. If memory serves me right, Rob Clouth has panned some elements in his tracks by putting a pair of in-ear microphones in his ears and moving around his studio while playing those elements through the speakers. (I think it was the pad sound in “List of lists” by his alias “Vaetxh”).

In my experience, the binaural stuff works pretty damn well with stereo speakers too, it usually just feels like a weird spatial effect. But it can potentially cause phase issues if you’re worried about those.

Then there’s the whole rabbit hole of the much discussed mid-side trickery. A LOT of things can be done with that too. I usually have a mid-side “skew” utility on my mixbus too, so that I can blend between the mid and the side signals in a mix knob kinda fashion (0%=only mids, 50%=normal, 100%=only sides). Can be useful as a creative effect, or for checking your M/S separately.
You could also, for example, take the old concept of ping pong delay and make it “ping pong” between the mid and side signals (vs just L and R). It can sound really cool, kinda like the delay is bouncing back and forth depthwise. Or you could try the same with the Haas effect: delaying the mid signal in relation to the side signal, or vice versa. The mid side stuff in general is exceptional for creating a sense of depth. Boosting the side signal in reverbs is one way of potentially getting them wider. And mid-side EQ can do a huge amount of things for your stereo image. The possibilities with mid-side processing are endless. It’s one hellishly deep rabbit hole, but it certainly is one worth looking into.
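The exact gain law of a mid/side “skew” utility like the one described isn’t specified here, so this sketch just assumes a simple linear crossfade between the mid and side signals; the function name and curve are illustrative, not any particular plugin’s behavior:

```python
import numpy as np

def ms_skew(left, right, skew):
    """Blend between mid and side content.

    skew=0.0 -> mid only, skew=0.5 -> signal unchanged, skew=1.0 -> side only.
    Assumes a linear crossfade where both gains reach 1.0 at the midpoint.
    """
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    mid_gain = min(1.0, 2.0 * (1.0 - skew))
    side_gain = min(1.0, 2.0 * skew)
    m, s = mid * mid_gain, side * side_gain
    # decode back to left/right
    return m + s, m - s
```

At skew=0.5 both gains are 1.0, so the stereo signal passes through untouched; sweeping the knob to either extreme gives the mono-check or side-check described above.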

3 Likes

Right! In this order! :+1:
As far as I know and as far as I’ve experienced it, spatial information primarily affects the sound if you’re listening through headphones, but not as much in comparison if you’re listening through speakers. So I would consider spatial information to be less important than volumes and frequencies, which doesn’t mean that it’s not important. I love stereo “effects”, they keep the mix interesting and varied. But my philosophy is to get a mix which sounds good in stereo AND in mono. If that means that I have to take the golden mean in terms of sound regarding mono and stereo, then I’ll do it (that doesn’t mean that I’m sacrificing the stereo mix). It’s impossible to have a perfect mono mix and stereo mix at the same time. Of course the stereo mix is always prioritized, but I’ll never forget about the mono mix either. If your mix sounds good in mono, it will sound good on every system. At least that’s my experience and that’s what most professionals I’ve ever listened to say all the time. Therefore I check in mono first and after that I adjust the sound in stereo. It works well, at least I’m pleased with the results. :slightly_smiling_face:

The binaural stuff is pretty interesting. I would like to generate a sound which seems to come from behind, just like in a cinema. It’s cool for sure, but it’s not crucial. :wink:

Edit:
I wonder what kind of plugins you’ve got in terms of binaural stuff.

1 Like

2nd this. I’ve played around with a few but it never quite felt … I don’t know, actually like they were doing what they promised? Since ur stuff @SimulatedZen is so well conjured spatially, very curious to know what you use (in addition to the other techniques described, which, again, very helpful)…

This is a great discussion btw - picking up lots extremely useful insights here.

2 Likes

@SimulatedZen So I think that the mono mixing thing doesn’t have so much to do with playback on mono devices (or if you lost hearing in one ear…). Rather it makes sure that no relevant info can be lost due to spatial differences in playback, e.g. the quality of headphones or speakers, or your position relative to the speakers. If the mix is clear when listened to in mono, it can sound good regardless of these factors, while if you mix and hear something clearly thanks to spatial resolution, then once factors come into play that reduce the clarity of the spatial info, things might suddenly disappear into the background that were clear to hear with better spatial resolution before. I don’t really mix in mono, but I always have a stereo expander set to mono mode sitting on my master to toggle on/off and make sure things are also tuned to sound well and clear in mono. Other than that, I also prefer mixing in the spatial domain right away, working with the depths/widths of the elements, symmetries etc. For a good mix it is simply vital that it sounds clear in mono, even if mono does not preserve all the subtle details that can be audible in stereo mode. This is especially important for reverb action: it is no biggie if some reverb detail is lost in mono, i.e. if it had full width it’d just disappear… but if you have a reverb that is distinguished in stereo, yet makes everything drown in mud and sound featureless when summed to mono, then imho the reverb is not tuned right and needs to be tamed somehow.

Much more important for clarity in the mix is to consider the physical properties of human hearing, and to apply those findings to the mix. Human hearing is subdivided into frequency bands (around 24 relevant ones, called “critical bands”), and in each band the ear can only process so much frequency/amplitude/stereo information from sustained or transient tones. The position of an element’s frequencies within the bands (middle of a band, its upper or lower range, or between two bands, combining them…) also determines the acoustic qualities of the sounds, and that can be used to distinguish them more clearly from each other. Simply said, for each band on the bark scale you can have the info dominated by one element at a time, but of course the elements can occupy the band alternatingly in time, use multiple bands at once, or interleave bands. Some bands have special qualities, e.g. the sub, which has weird traits, or the range around 1 kHz, which behaves quite differently, or the 3.4 kHz zone where hearing is most acute and where the relevant info of multiple different instruments can (and should) be combined, since that’s where sounds must carry a lot of detail.

So if you mix your elements so that each important element is undisturbed (i.e. EQ down other elements slightly in the relevant frequency ranges) in its own time slots in its own bark scale bands, you will be able to distinguish it very clearly against all other sounds that have these properties, while the elements that are present but not dominant in those ranges will fall into the background once those sounds play: they are “masked” and no longer clearly audible. A human can effortlessly stay aware of 2-3 elements at once, and you can help with it by making your main elements dominant in the frequency bands that define their character in the most critical ways. But if you have a sound dominate well in the mids while other instruments mask its high frequency action, then when the masking happens the sound will suddenly seem dull and lose its detail. A sound that has its defining qualities unmasked in the highs, but is drowning in the mud in the mids, might suddenly seem thin in the mix. When two sounds with similar defining frequency ranges clash in the same bands, they can mask each other and become hard to distinguish.

I think the actual band frequencies can slightly vary from person to person, but there are good averaged tables to start looking for in the mix. Try it, and make bandpass filters to listen to the bands or to ranges of them in isolation, and you might notice interesting things about it.

Here is the wikipedia entry for the “Bark scale” that can be used to find such mixing bands. It might not be the most accurate or modern definition, but it works well for this task for me: Bark scale - Wikipedia
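For reference, the commonly cited Zwicker & Terhardt approximation of the Bark scale can be computed directly, which is a quick way to see which critical band a problem frequency falls into (the formula is the standard textbook approximation, not taken from the Wikipedia table above):

```python
import math

def hz_to_bark(freq_hz):
    """Zwicker & Terhardt approximation of the critical-band rate (Bark)."""
    return (13.0 * math.atan(0.00076 * freq_hz)
            + 3.5 * math.atan((freq_hz / 7500.0) ** 2))

# 1 kHz lands around Bark 8.5; the acute ~3.4 kHz zone sits around Bark 16,
# and the audible range spans roughly 24 bands in total.
for f in (100, 1000, 3400, 10000):
    print(f, round(hz_to_bark(f), 1))
```

Two frequencies landing within about one Bark of each other sit in the same critical band, which is where the masking described above kicks in.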

4 Likes