How to fight aliasing?

  • = no matter what the band-limiting is based on

Current status: There’s cubic interpolation (and sinc for rendering), and high audio rates are possible, but no filtering and downsampling.

I’m no expert; I tried to find and describe a few ways to get there. It would be great if you’d join in exploring what’s possible and what would make a good feature.

If you create just one sample, 336 samples long, and spread it across the keyboard, it will alias like hell in the higher octaves. That’s normal.
Remedy: create multiple samples of different lengths; record your favorite VST plugin (plugin grabber) per step…
Forget about FFT/iFFT… way too CPU hungry.
Take your time to sample instruments, or just use VSTs.
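The “multiple samples of different lengths” remedy is basically wavetable mip-mapping: one table per octave zone, each built with only the harmonics that stay below Nyquist inside its zone. A minimal numpy sketch of the idea — the function names and the 336-sample cycle length are just illustrative, this is not anything Renoise does internally:

```python
import numpy as np

def bandlimited_saw(cycle_len, max_harmonic):
    """One sawtooth cycle built additively, keeping only `max_harmonic` partials."""
    n = np.arange(cycle_len)
    wave = np.zeros(cycle_len)
    for h in range(1, max_harmonic + 1):
        wave += np.sin(2 * np.pi * h * n / cycle_len) / h
    return wave * (2 / np.pi)  # rough normalization to ~[-1, 1]

def make_mipmaps(cycle_len=336, octaves=8):
    """One table per octave zone; the harmonic budget is chosen for the *top*
    of each zone, so transposing inside a zone never folds partials past Nyquist."""
    max_h = cycle_len // 2  # partials representable at the base pitch
    return [bandlimited_saw(cycle_len, max(1, max_h >> (o + 1)))
            for o in range(octaves)]
```

The playback engine then picks the table for the current octave, so the higher zones simply never contain the partials that would otherwise fold over.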

For instance, Sytrus doesn’t feel CPU-hungry to me, but that’s a subjective matter.

Don’t know how to say, but I’ll try:

I think for capturing and modelling acoustic sounds, the current sampling features are really good.

But for synthesizer sounds… even with great multisamples, I’m not sure how good e.g. a glide can sound (more split zones = more jumps, fewer split zones = more aliasing).
Also, it takes quite a lot of space to capture something that’s evolving. On the other hand, the aliasing is imho too much to justify investing a lot of time into building complex electronic sounds; it’s a limitation that I, at least, find too problematic.
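The split-zone trade-off can be put in numbers: transposing a full-bandwidth sample up by some number of semitones folds a fixed fraction of its spectrum past Nyquist. A tiny sketch (the helper name is hypothetical, just for illustration):

```python
def aliased_fraction(semitones_up):
    """Fraction of a full-bandwidth sample's spectrum that folds past Nyquist
    when the sample is transposed up by `semitones_up` semitones."""
    ratio = 2.0 ** (semitones_up / 12.0)
    return max(0.0, 1.0 - 1.0 / ratio)
```

A one-octave split zone can fold up to half the spectrum (`aliased_fraction(12)` is 0.5), while 3-semitone zones cap it around 16% — which is exactly the jumps-vs-aliasing trade-off described above.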

Renoise’s “sample based” instruments now start looking like a synthesizer. I took this as a sign from the dev team that they want to make VSTs (and all the pitfalls of controlling them) unnecessary in the end. So, well, I’m curious whether generators inside this will be a separate solution (a built-in modular synthesizer, so to say) or a step-by-step approach on top of the current Renoise instruments.

I’ll try to get the sound right in cubic. If it sounds okay to me, I’ll just render it anyway. I’ll turn the aliasing to my advantage instead of muffling it away.

Well, I admit I like the sound of the aliasing when it’s not too loud. It reminds me of tracked music and it’s like a feature. But it’s also like a feature that you cannot turn off.
When I listen to the VSTs I have, I don’t think of “muffled away”. Trying to think of something that sounds muffled, the cubic interpolation comes into my mind. Hm.

shut up lala :D

youuu Internal-Midi-Routing-voter! ;)

haha, how cute :D

shut up lala :D

here’s your routing :lol:

What has Sytrus to do with all this? Sytrus is an FM synthesizer, capable of generating spectra/waveforms by stacking sines; Renoise has no native DSP generators, only effects…
I tend to avoid sampling synthesizers that have modulation/movement in their overall spectrum; as soon as you sample these, the sound becomes lifeless and static when you play the samples as a regular instrument…
Top-end romplers from Roland/Yamaha have top-notch algorithms, compression methods, etc.; expecting the same result with just a few multisamples is wishful thinking (maybe I am getting a bit off topic here).

Sure, some effects can hide these things, but that’s not a solution.
Single-shot samples for drums, or single-cycle waves (although in a limited octave range)… all the way. I love it and do it all the time…

Yes, but Sytrus uses iFFT and might serve as a reference for what the CPU impact could be, with or without the rest of it. If not Sytrus, then ZynAddSubFX or whatever. Your point was “iFFT is way too CPU hungry”, and I thought it might be no more CPU hungry than plugins I already use in Renoise songs.

You say Renoise has no native DSP generators, ok. I’d reframe it: it does have some (it cycles through waveforms), and they, or something in Renoise based on them, could maybe be improved soundwise.

I agree. That’s why a “native synthesizer” (be it a separate one or an improvement of Renoise instruments) sounds so appealing to me.

Yep. It’s hard to hide the “staticness”. As Renoise uses crossfade loops already, maybe it could crossfade between waveforms too (uh, waveguide patent?). Or with iFFT there’s the padsynth idea to get rid of the static sound, though afaik it doesn’t really change the spectrum except for the phases. The static sound is one problem, and the aliasing is another one. If a solution against the aliasing would also enable lively, non-static sounds, that might be the right one.
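For reference, the padsynth idea mentioned above can be sketched in a few lines of numpy. This is a rough reading of Paul Nasca’s published PadSynth algorithm, not anything Renoise does, and all parameter values are just examples: each harmonic is spread into a Gaussian bell in the frequency domain, every bin gets a random phase, and a single iFFT yields a long, seamlessly loopable, non-static pad.

```python
import numpy as np

def padsynth_table(n=2 ** 16, sr=44100, f0=220.0, nharm=32, bw_cents=40.0, seed=0):
    """Sketch of the PadSynth idea: Gaussian bells per harmonic, random phases,
    one iFFT -> a long loopable pad whose sound is not frozen/static."""
    rng = np.random.default_rng(seed)
    bins = np.arange(n // 2 + 1) * sr / n        # bin centre frequencies in Hz
    amp = np.zeros(n // 2 + 1)
    for h in range(1, nharm + 1):
        f = f0 * h
        bw = f * (2 ** (bw_cents / 1200) - 1)    # bell width grows with pitch
        amp += (1.0 / h) * np.exp(-0.5 * ((bins - f) / bw) ** 2)
    phase = rng.uniform(0, 2 * np.pi, n // 2 + 1)
    wave = np.fft.irfft(amp * np.exp(1j * phase), n)
    return wave / np.max(np.abs(wave))           # normalize to +/-1
```

Because only the phases are randomized, the magnitude spectrum (and so the perceived timbre) stays what you asked for — which matches the “not really changing the spectrum except the phases” point above.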

And if there’s something that could be implemented relatively easily, would help, and wouldn’t go against any possible future improvements, why not do that? Imho that would be oversampling of instruments.
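Oversampling as an anti-aliasing measure could look roughly like this: render at a multiple of the target rate, low-pass below the target Nyquist, then keep every Nth sample. A hedged numpy sketch under those assumptions — not how Renoise would necessarily implement it, and the naive sawtooth is just a stand-in for any alias-prone source:

```python
import numpy as np

def naive_saw(freq, sr, n):
    """Trivial (band-unlimited) sawtooth: aliases badly near Nyquist."""
    phase = (np.arange(n) * freq / sr) % 1.0
    return 2.0 * phase - 1.0

def oversampled_saw(freq, sr, n, factor=8, taps=511):
    """Render at factor x the rate, low-pass below the target Nyquist with a
    windowed-sinc FIR, then decimate back down by keeping every factor-th sample."""
    hi = naive_saw(freq, sr * factor, n * factor + taps)
    cutoff = 0.5 / factor                      # target Nyquist, normalized to the high rate
    t = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(2 * cutoff * t) * 2 * cutoff * np.hamming(taps)
    filtered = np.convolve(hi, h, mode="valid")
    return filtered[::factor][:n]
```

For a 5 kHz saw at 44.1 kHz, the 25 kHz partial of the naive version folds down to an audible 19.1 kHz alias; in the oversampled version that partial is filtered out before decimation, so the alias is largely gone while the in-band partials pass through. The price is the extra CPU for rendering at the high rate plus the FIR, which is the trade-off being proposed here.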