What are the benefits of using mono over stereo (or vice versa)? I’ve recently started mixing my samples in mono, and I’ve found that it makes balancing volumes and EQing things a lot easier, aside from losing the panning effects that were natural to the sample. Also, is there a way to set Renoise to automatically convert your samples to mono via a script or something like that?
Mono files are also smaller than stereo - half the size, for uncompressed audio.
You can use the Stereo Expander device to make a signal mono in the device chain (I often put this on the master channel and listen to mixes in mono).
If your track has multiple samples, or things you don’t want mono, then I don’t know.
Also, is there a way to set Renoise to automatically convert your samples to mono via a script or something like that?
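I don’t know of a built-in preference for this, but Renoise tools are written in Lua, so it should be scriptable. As a rough illustration of the conversion itself (outside Renoise), here’s a minimal Python sketch using only the standard library - it assumes 16-bit PCM WAV files, and the file paths are hypothetical:

```python
import struct
import wave

def mono_frames(frames: bytes) -> bytes:
    """Average interleaved 16-bit little-endian stereo frames down to mono."""
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    mono = [(samples[i] + samples[i + 1]) // 2 for i in range(0, len(samples), 2)]
    return struct.pack("<%dh" % len(mono), *mono)

def wav_to_mono(src_path: str, dst_path: str) -> None:
    """Read a 16-bit stereo WAV and write a mono copy (paths are hypothetical)."""
    with wave.open(src_path, "rb") as src:
        assert src.getnchannels() == 2 and src.getsampwidth() == 2
        rate = src.getframerate()
        frames = src.readframes(src.getnframes())
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(rate)
        dst.writeframes(mono_frames(frames))
```

One caveat: averaging L and R is itself a mono sum, so any out-of-phase content in the original sample will cancel in the result.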
I’ve found it pretty useful for processing samples, especially things with bass to low-mid frequencies. It makes them easier to hear in the mix and punchier (IMO).
I produce, record, mix and print everything in mono down to even completed tracks because I love the way it sounds. Two symmetric metering bars all over the place is what I see all the time. All my tracks have the Stereo Expander set to mono, including Sends and Master. I think of my music as two-dimensional: this way I only worry about frequency (from high to low) and volume (from front to back). My Spectrum Analyzer shows absolutely nothing on the sides. I enjoy the feeling that my songs sound the same on either 1 or 2 speakers (just louder on the latter). Everything I produce is dead center; I never touch the pan knobs. I honestly don’t know what I’m giving up by not taking advantage of the stereo image during mixing.
Most of the time I use (or convert from stereo) mono samples/sounds for drums and the lower-ranged instruments like bass. Once my arrangement is done, I set my audio interface to mono and mix everything on that image. When I switch the interface back to stereo, everything falls even more into place. This way my tracks can be listened to on both mono and stereo systems without any loss of sound.
I produce, record, mix and print everything in mono down to even completed tracks because I love the way it sounds.
That’s really interesting… I produce and record everything in mono, too. I just like how solid it sounds - and I feel the song more than I would through fancy FX tricks.
It also means that when I mix, I have solid parts to work with… and I can add a bit of stereo spice here and there.
But I always forget to do anything stereo! I don’t even pan… just levels and eq, and print the mix. Afterwards I think, “Dang I forgot to make anything stereo! Oh well, it’s done now…”
I suspect that as I do more mixes, I’ll do more stereo stuff. But I definitely like working with mono for most of the time, because it helps me get a solid sound, focused on the song.
I use both. Mono samples are so much easier to loop and, for some reason, sound much better with the sampler’s filters and DSP like chorus, but stereo samples definitely add movement and excitement. The problem is that just playing a stereo sample across the keyboard can sound static and lacking in punch, since synths with features like unison and voice panning get a very dynamic stereo image from free-running oscillators and oscillator drift.
You could resample a softly filtered saw (F#2) from Diva in mono and a detuned wide C3 from Zebralette, keymap everything correctly, and take advantage of phrases, modulation and FX - you could make the stereo sample respond to velocity, for example.
Lastly, I might add that as a new user and a 90s baby I didn’t get to experience the Amiga/FastTracker/PT days that Renoise inherits a lot from, but checking out a bunch of mods online made me realize that - since storage was very limited, or I guess stereo wasn’t even an option - people got by with very short mono samples at sample rates under 32 kHz and bit depths under 16-bit. Crazy how limitations can make you more creative and potentially sound better.
To my ears (and I suppose this is personal taste), panning a mono sample ‘seems’ to push it further to the left or the right. A stereo sample already has purchase in the L/R field simply by existing there in the first place, so the panning effect sounds weaker to me. This is subjective and I’m sure there are arguments for and against it, but that’s one reason I’ll convert a sample to mono.
Another reason is whether the actual sample would benefit from being stereo. A click/pop/bass drum or sub-bass (to me) doesn’t need it. Sometimes a particular snare sound might benefit from it - say I wanted to accent a particular point in the track by having the stereo field punctuated - yep, that’d do nicely.
More like, it needs to not be stereo.
Stereo sub-bass frequencies (especially between different instruments/sounds), when summed to mono, can create nasty phase cancellation problems that wreck the low end of your mixes, especially when playing out on big mono systems. For dance music I almost always use mono samples for my kick, sub, and bass instruments, sometimes using an oscilloscope to check phase alignment in the low end, and I’ll usually check my whole mix in mono towards the end of the mixing process. Phase cancellation is real. More than once I’ve been playing out on some big mono system and whole track elements seemed to completely disappear from the mix, or got much weaker, or things just didn’t hit right. It’s a shitty feeling, lol
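The cancellation is easy to demonstrate numerically. Here’s a toy Python sketch (the sample rate and frequency are arbitrary choices): a 50 Hz “sub” whose right channel is phase-inverted measures at full strength in stereo, but vanishes completely when summed to mono:

```python
import math

SR = 44100  # sample rate (arbitrary choice)
N = SR      # one second of audio

def rms(xs):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# A 50 Hz sub whose right channel is phase-inverted (the worst case).
left = [math.sin(2 * math.pi * 50 * t / SR) for t in range(N)]
right = [-x for x in left]

# The mono sum, as a club PA or a mono check would produce it.
mono = [(l + r) / 2 for l, r in zip(left, right)]

print(rms(left))  # ~0.707 - full-strength sub on each channel
print(rms(mono))  # 0.0   - the sub vanishes entirely in mono
```

Full inversion is the extreme case, but any phase offset between channels costs you some low end when the PA sums to mono.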
Stereo is great and wonderful, but yeah, check your mix in mono, and if there are elements that don’t need to be stereo, go mono for consistency’s sake.
Right, I should have worded that better - subs/kicks/low end are never stereo in my mixes. I was still waking up when I wrote this!
I think stereo vs. mono has different aspects. A mono sound is like a sound object that you can move left and right - or, with more advanced technology, through 3D space around the listener: a single source of sound in space. A stereo sample, on the other hand, probably already has some depth in it, i.e. some kind of spatial info and maybe also panning. If you pan such a sample, you shift the balance of the left/right info to place certain elements of the sound where you want them - but the stronger you pan, the more of the stereo spatial info is lost. And if you want to place it in space, you suddenly have two objects: repositioning becomes like placing two speakers in the space around the listener, so there is no single sound source, but rather something like virtual speakers playing back a stereo sound.
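For what it’s worth, the “sound object” view of a mono source really is just two gain values. A common choice is an equal-power (sin/cos) pan law, which keeps total power constant across the arc - this is a sketch of the general idea, not necessarily Renoise’s internal pan law:

```python
import math

def equal_power_pan(sample: float, pan: float) -> tuple:
    """Place a mono sample in the stereo field with an equal-power pan law.

    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    The sin/cos gains keep left^2 + right^2 constant across the arc.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)
```

At center, both channels get a gain of about 0.707 (-3 dB), which is why a mono signal panned to the middle doesn’t jump in level compared to panning it hard to one side.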
About the bass/sub: yes, it can be better to mono it below 200-400 Hz. One reason is possible cancellation when it is summed to mono in a subwoofer, or random phase cancellations between the speakers in a big room. Another reason is that human hearing cannot spatially locate sounds below a certain frequency very well anyway - below about 80 Hz there is basically no chance to locate a source, and there is probably a range above that where we still locate poorly. Keeping the stereo info in the higher frequencies of a sound will preserve the spatiality. And on headphones, a panned or out-of-phase bass below a certain frequency sounds weird/unnatural (though sometimes cool), so monoing the sub/bass preserves a natural impression there as well. I like to keep just a subtle amount of width in the sub/bass, as this can make the sound feel a bit more organic and also feels nice on headphones - but never so much that there could be major cancellations.
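The “mono below a crossover” idea can be sketched in code. This toy Python version uses a one-pole low-pass to split each channel into low and high bands, averages the two low bands to mono, and leaves the highs stereo - a real elliptical-EQ/crossover plugin would use steeper, phase-matched filters:

```python
import math

def mono_below(left, right, cutoff_hz, sr=44100):
    """Sum the band below cutoff_hz to mono; keep everything above it stereo.

    Toy version: a one-pole low-pass splits each channel into low + high,
    the two low bands are averaged, and the highs pass through untouched.
    """
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sr)
    lo_l = lo_r = 0.0
    out_l, out_r = [], []
    for l, r in zip(left, right):
        lo_l += alpha * (l - lo_l)          # low band, left
        lo_r += alpha * (r - lo_r)          # low band, right
        lo_mono = 0.5 * (lo_l + lo_r)       # mono sum of the lows
        out_l.append(lo_mono + (l - lo_l))  # mono low + stereo high
        out_r.append(lo_mono + (r - lo_r))
    return out_l, out_r
```

Run an out-of-phase 50 Hz sine through it with a 200 Hz cutoff and most of the energy disappears (it was mono-incompatible anyway, so better to find out now than on the club PA), while an out-of-phase 5 kHz sine passes almost untouched.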