Anyone who likes Aphex Twin and trackers...

What really fucks me up about Renoise is that everyone here is commenting on the mouse-clicking, yet ignoring THIS, VERY, SPECIFIC, FEATURE, THAT RENOISE, SHOULD, HAVE:

You could print plugin effects directly & destructively onto the sample, hence freeing up CPU, but you could hear the effect first before you printed it.
I’ve really pecked several people to do this and it finally did get done in Renoise, but it’s still not as accurate as PP. Gain is not handled correctly, last time I checked. Renoise has that great highlight-part-of-the-arrangement thing, but the gain doesn’t get worked out properly when you have a bunch of FX. Be top if this is fixed now?
The other reason this feature is so good and powerful is that most people these days set up EQs on each channel etc. and they just sit there wasting CPU, and most importantly the urge to carry on tweaking them always remains.
You would be amazed how it can train your brain to get it right the first time when you are forced to make a decision about EQ and then can’t change it. A bit like with a digital camera: you just take loads of shit pictures of the same thing instead of one that’s right. I’m generalising.

But every sampler VST I’ve seen does this as well; it’s the wrong way to do it. All your plugins should be available in the sample editor to apply to samples, not on the mixer. Well, you need both.

I think it’s because, in the beginning of audio on DAWs, coders were fixated on replicating real mixing desks and recording bands, but this didn’t take into account the new way people were going to start using DAWs.
But even if you can’t take that discipline, you could just have an undo history on the sample… so you wouldn’t have to re-EQ the EQ if it were wrong… you could also have an amazing CPU-guzzling EQ on every sound.
It just doesn’t make any sense to have a live EQ on static samples… yet every DAW does this, unless I missed one? I’ve checked all of them and they all do that… frustrating when everyone goes down the same wrong road.
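For the curious, what AFX describes boils down to offline rendering plus an undo snapshot. A minimal sketch of that idea in Python/scipy (my own illustration with a made-up `print_eq_to_sample` helper; this is not Renoise or PP code, and a simple high-pass stands in for any plugin effect):

```python
# Print an EQ destructively onto a sample on disk, but keep one undo
# copy so a wrong decision is still recoverable.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, lfilter

def print_eq_to_sample(path, cutoff_hz=200.0):
    rate, data = wavfile.read(path)
    undo = data.copy()  # the "undo history on the sample"

    # A simple 2nd-order high-pass stands in for any plugin effect here.
    b, a = butter(2, cutoff_hz / (rate / 2), btype="highpass")
    processed = lfilter(b, a, data.astype(np.float64), axis=0)

    # Clip back into the integer range before writing, if needed.
    if np.issubdtype(data.dtype, np.integer):
        info = np.iinfo(data.dtype)
        processed = np.clip(processed, info.min, info.max)

    # Once printed, the EQ costs zero CPU at playback time.
    wavfile.write(path, rate, processed.astype(data.dtype))
    return undo  # hand back the pre-EQ audio in case you change your mind
```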

good grief that’d be fucking lush to have in Renoise. Not that any Renoise dev here seems interested in listening to the A to the F to the X.

(╯°□°)╯︵ ┻━┻


What really fucks me up about Renoise is that everyone here is commenting on the mouse-clicking, yet ignoring THIS, VERY, SPECIFIC, FEATURE, THAT RENOISE, SHOULD, HAVE:

good grief that’d be fucking lush to have in Renoise. Not that any Renoise dev here seems interested in listening to the A to the F to the X.

(╯°□°)╯︵ ┻━┻

You mean this feature?

Works as expected, and you CAN use any plugin you use on the track, but you can also create FX chains and apply them right from the Sampler waveform editor. After rendering, the FX chain is disconnected, so you can reconnect and reapply it from the left-side FX dropdown menu. You can deactivate the entire chain from the FX chain tab. The sampler workflow in Renoise is really good btw; it rivals that of the MPC and the NN-XT in Reason.

About gain, however, it’s kinda true: the gain is not too accurate, and the conversion done via rendering seems to miss the mark a little. When you play it live it sounds clearer; after rendering, the FX is printed no doubt, but it lacks some clarity.

About the rest of the paragraph: he is mostly quibbling about how everyone should use EQ printing on samples as a better method of using FX, rather than running a gazillion plugins in the background. It’s a workflow suggestion. However, he can always get one of those gaming laptops like the Predator, or build a PC supercomputer rig, if processing power is all he is complaining about. But I gather he is talking about being more disciplined and decisive, about using EQ once only and tuning your ears or something like that, and not taking CPU resources for granted, which in terms of both CPU power and space are very much available for the right price (memory is cheap, however).

PS. Just noticed my level changed to ‘Advanced Member’. Cool :guitar: :dribble: :drummer:. Thank you very much, dear moderator, for the upgrade to level 1. Much appreciated.

Me so happy [smiles all the way…]

You mean this feature?

Works as expected, and you CAN use any plugin you use on the track, but you can also create FX chains and apply them right from the Sampler waveform editor. After rendering, the FX chain is disconnected, so you can reconnect and reapply it from the left-side FX dropdown menu. You can deactivate the entire chain from the FX chain tab. The sampler workflow in Renoise is really good btw; it rivals that of the MPC and the NN-XT in Reason.

About gain, however, it’s kinda true: the gain is not too accurate, and the conversion done via rendering seems to miss the mark a little. When you play it live it sounds clearer; after rendering, the FX is printed no doubt, but it lacks some clarity.

About the rest of the paragraph: he is mostly quibbling about how everyone should use EQ printing on samples as a better method of using FX, rather than running a gazillion plugins in the background. It’s a workflow suggestion. However, he can always get one of those gaming laptops like the Predator, or build a PC supercomputer rig, if processing power is all he is complaining about. But I gather he is talking about being more disciplined and decisive, about using EQ once only and tuning your ears or something like that, and not taking CPU resources for granted, which in terms of both CPU power and space are very much available for the right price (memory is cheap, however).

Love it. First you say that yeah, there’s this thing that exists, then you say, hey:

  1. the gain is not too accurate (note how AFX actually spoke about this and mentioned it?)

  2. the conversion “misses the mark a little”, “live sounds clearer”, “FX printing is done but lacks clarity”

So basically you’re agreeing with him. It’s not accurate, and it’s not precise, and it’s not as it should be.

The rest of the paragraph is the whole concept of: why should you have dozens of EQs on every channel running live, when you could just imprint them onto the samples themselves directly and save yourself a lot of CPU - and your response?

buy a faster computer

Listen, I’m collecting bottles and cans here in order to be able to afford a brand-new computer (don’t believe me? check http://deposit4se.tumblr.com/), and I’m not interested in this whole concept of purchasing a faster computer when the DAW of choice (in this case Renoise) could do stuff a little more cleverly and accurately, and allow for exactly what AFX is suggesting. Don’t get confused because he talks about the concept of getting it right the first time and becoming quicker at doing music and sound design; that is valid for any DAW, but Renoise could really step up by making the gain + clarity more accurate when imprinting samples with DSP.

Love it.

Admit it - you didn’t know about this feature :stuck_out_tongue:

Love it. First you say that yeah, there’s this thing that exists, then you say… Don’t get confused because he talks about the concept of getting it right the first time and becoming quicker at doing music and sound design; that is valid for any DAW, but Renoise could really step up by making the gain + clarity more accurate when imprinting samples with DSP.

Haha, confusion, let’s talk about more information :). I got a better suggestion: simply use Sonic Foundry’s Soundforge or the circa-2001 versions of Cool Edit Pro, or even the 1998 version of Goldwave, and just use the FX and render them, because they do not work in realtime. You know another genius who works this way? Burial (William Bevan), one of the more cinematic dubstep pioneers, who uses ONLY Soundforge and does all his ‘productions’ on an old computer because he can’t afford new ones. Listen to his tracks on YouTube like ‘SpaceApe’, or ‘Untrue’, or the super-cinematic ‘Pirates’ and you get the drift. It’s so left-field that you can see why his two-page album review in the Guardian (or the Herald, I forget) warrants recognition. He has a very unique sound; I’ve never heard anything like him before or since.

He produces unquantized music, working with everyday samples and obscure snippets; he does not use grids and does not play and record, but programs the whole shit in a waveform view. He reads beats like a ‘fishbone’, in his own words. His vocal manipulation is superb too; the voices sound androgynous. He never uses any VSTi etc. Many do not get his music or complain about his lack of new productions, but his first two albums are genre-bending and very timeless.

For me though, ironically, this is exactly how I started making music: using only wave editors way back in school, when computers were mostly a game machine for me. Just using the copy-paste features and going by ear. Super tedious to do if you think you can work like Burial.

And yeah, since I got a gaming rig, I can run at least 1.5% of a gazillion plugins, which amounts to 20-30 tracks all running at the same time filled with plugins; never had any issues. It’s 2017 bruh, it ain’t Amiga days anymore. It is also a convenience thing: not about whether I can or cannot do it, but whether doing it this way helps me return and change things later if I want to, like a tape machine with a time-travel feature.

Makes you think: if you had a time machine of your own, would you continually use it to go back and fix things in your life, or would you live a loop by choice, reliving the best parts, or would you be more disciplined about progressing in time, not getting old, or getting old then getting young again, back and forth, basically playing with your own existence? I suppose this could be a sci-fi movie script on its own.

Regarding building a better ear for mixing and mastering sessions, that is a more disciplined process, like telling when a kick is off and by what frequency, or whether there is a hum in the final mix somewhere, etc.; those are more essential skills. Also, skills like telling the voicings of a chord and the type of chord, writing melodies you hear to sheet music, being interval-sensitive, and having a high degree of relative pitch are the more useful ones in terms of musicianship.

Geniuses are sometimes called crazy in that, because of their brain calibre, they can sometimes work without certain core skills, because their complementing abilities offset that. For others, skill itself becomes the first and foremost thing to obtain; only then can you actually use it.


computer (don’t believe me? check http://deposit4se.tumblr.com/)

I feel you, but what you are doing is good discipline, and I suppose life in Estonia must be great, living in Europe with great culture and food under the sun :). Btw, for money matters visit http://www.mrmoneymustache.com/. He gives superb advice with a more contemporary focus.

Admit it - you didn’t know about this feature :stuck_out_tongue:

I’ve used it, but it sucked.

I feel you, but what you are doing is good discipline, and I suppose life in Estonia must be great, living in Europe with great culture and food under the sun :).

Why are you talkin about Estonia?

Admit it - you didn’t know about this feature :stuck_out_tongue:

Btw, my comment was followed by a simple sentence where I take encryptedmind down a notch. He’s all like “use this”, then goes on to explain why it isn’t great, fully echoing AFX’s writeup on why the gain is wrong on destructive rendering-to-sample. So why bother suggesting “this is a great solution to this problem” when you then follow it up with “actually not a great solution, because of this, that and the other”?

Last time I checked, the FX button rendered the VSTs into the sample just fine; at least I couldn’t tell any difference from the original. I thought the gain thing Aphex describes might relate to the headroom setting in the song settings, something he may not be aware of?
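To make that headroom hunch concrete: if a song-settings headroom (using -6 dB purely as an example value) is applied on the live playback path but not baked into the rendered file, the printed sample would play back roughly twice as hot. A quick sanity check of the linear factor:

```python
# Hypothetical illustration: a -6 dB headroom applied live but not at
# render time would leave the printed sample ~2x louder on playback.
headroom_db = -6.0
live_gain = 10 ** (headroom_db / 20)  # dB -> linear
print(live_gain)                      # ~0.501, i.e. a clearly audible gap
```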

Can someone like dblue scientifically :wink: prove that there is indeed something fishy going on with the FX button in the sample editor?

I’m not in front of a computer with Renoise to do tests myself.

Why are you talkin about Estonia?

Btw, my comment was followed by a simple sentence where I take encryptedmind down a notch. He’s all like “use this”, then goes on to explain why it isn’t great, fully echoing AFX’s writeup on why the gain is wrong on destructive rendering-to-sample. So why bother suggesting “this is a great solution to this problem” when you then follow it up with “actually not a great solution, because of this, that and the other”?

Your original post was a direct copy-paste of Aphex’s interview snippet. It clearly mentions that he was expecting FX printing from the sampler as well, which FYI is not a ‘missing’ feature. I have yet to see this kind of decoupling in any other DAW’s environment, where the track FX stand on their own and can be rendered to a new wave file, while a separate and more involved DSP chain (or chains) can individually be modulated, and finally the DSP chains can be printed onto the samples. Either AFX missed this feature, or you failed to address this part by first making it clear that this feature exists in Renoise.

Secondly, I was talking in terms of using Renoise’s DSP FX, which in general are not as polished as Waves plugins, but they get the job done. When it comes to printing them onto the sample waveform, I was not making an anti-statement about whether it is worth it or not. Depending on the quality of the plugins, the sample bit depth, the soundcard bit depth etc., the internal conversion can sometimes be a little crude. That is all; the feature exists and it works pretty well too.

When you say it sucks, how come 1) you never addressed that FX printing exists, and
2) if you were never privy to point 1, then how in the world could you tell whether FX printing sucked or not?

Your glaring omissions have nothing on my airtight argument and the ‘solutions’ I suggested. In fact, you always seem to complain about the very medicine for your ailment. For instance, suppose I’m a doctor saying “this medicine is right there in your bedroom closet; it tastes bitter, but it works”, and you will say “oh, so you are saying the medicine is bad”. It seems you have a limited understanding of things. If I say get a faster computer, you will say you have no money; if you read my previous post to you, where I say that you can use Soundforge for offline FX printing as Burial does on his old computer, you will probably say it’s too difficult… I don’t have a solution for both your money and your learning ability at the same time.

Unless you clarify things upfront, it’s very easy to argue post mortem about what you meant in retrospect, because you can interpolate or fabricate facts. Ergo, you have not taken me down one bit.
But I see you are a long-time member who uses Renoise, so no disrespect, but love :slight_smile:

Regarding Estonia, I dunno, I probably saw Estonia written somewhere in the link you sent and thought maybe you are there. Btw, what language were the receipts in?

Regarding the other two features, changing a track’s instrument number from the Pattern Editor and choosing note names from a popup in the Pattern Editor, I am working on a tool that will solve these issues plus a couple of other personal additions, in the Renoise fashion. Renoise is way superior to PPro, so I will add the feature set but not emulate it.

Can someone like dblue scientifically :wink: prove that there is indeed something fishy going on with the FX button

We briefly talked about this yesterday, and he (dblue) pointed out that samples preserve their sample rate as you apply FX to them.
So, if your source sample is 44.1kHz and the song is being played at 96kHz, then yes - there’s an audible difference.

It makes sense. It would be great to automatically (but optionally) adopt the project’s sample rate, of course.
But a simple click on ‘Adjust’ before applying the FX will take care of it as well…
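If that explanation is right, the fix amounts to resampling the source to the song rate before the FX run. A rough sketch of that ‘Adjust first’ step in Python/scipy (the `adjust_then_print` name and the `fx` callable are mine, just to illustrate; this is not how Renoise does it internally):

```python
from math import gcd
from scipy.signal import resample_poly

def adjust_then_print(audio, sample_rate, song_rate, fx):
    """Resample to the song rate, then apply `fx` offline.
    `fx` is any callable (audio, rate) -> audio."""
    if sample_rate != song_rate:
        g = gcd(song_rate, sample_rate)
        # e.g. 44100 -> 96000 resamples up by 320 and down by 147
        audio = resample_poly(audio, song_rate // g, sample_rate // g)
    return fx(audio, song_rate)
```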

We briefly talked about this yesterday, and he (dblue) pointed out that samples preserve their sample rate as you apply FX to them.
So, if your source sample is 44.1kHz and the song is being played at 96kHz, then yes - there’s an audible difference.

It makes sense. It would be great to automatically (but optionally) adopt the project’s sample rate, of course.
But a simple click on ‘Adjust’ before applying the FX will take care of it as well…

Doesn’t seem like clicking on ‘Adjust’ to adjust something briefly is going to be a thing that imaginary people who like complaining about stuff would like to do.


I’m new here, demoing/learning Renoise. This thing about rendering sample FX is quite an issue to me. Just checked it with some reverbs - yes, the difference is huge between “live” and rendered.

Added: so what about rendering the song or a pattern, is it the same problem?


Me like Apex Twich

I never use the “apply track/inst DSP FX to sample” thing. It just isn’t flexible enough. I also didn’t know it could apply an instrument FX chain too, but it seems to just render the active lane for the sample, and ignore sends…?

I normally just make an empty pattern long enough, add the DSP there (then I can preview in realtime at leisure; I just have to keep an eye on the effect of my master chain), and then add a note and render the selection to a sample.

Why? Because this way I have control over sample rate and sample playback pitch etc., and can even automate parameters along the way… and can even make complex variations of it by cloning patterns and editing them, all the way through the town.

To make the ‘apply DSP FX’ option valuable, for me it would need added options for the rendering rate (so Renoise runs at 44.1 for performance reasons, but I want to run the DSP FX at 192 and then downsample afterwards, for quality reasons…), and also for the pitch/speed the sample will be fed through the DSP at.
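A sketch of what that rendering-rate option could look like under the hood, again in Python/scipy with a hypothetical `fx` callable (my own illustration, not a real Renoise option): oversample, run the chain at 192 kHz, then come back down.

```python
from math import gcd
from scipy.signal import resample_poly

def render_hq(audio, src_rate, fx, dsp_rate=192000):
    """Upsample, apply FX at dsp_rate, then downsample back to src_rate."""
    g = gcd(dsp_rate, src_rate)
    hi = resample_poly(audio, dsp_rate // g, src_rate // g)
    hi = fx(hi, dsp_rate)  # the effect chain sees the 192 kHz signal
    return resample_poly(hi, src_rate // g, dsp_rate // g)
```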

Interesting, this idea of applying mixing EQ to a sample, rendered. Now how would this work, like an instrument grabber through the EQ? Because if you render one EQ setting and then repitch the sample, the EQ curve will be transposed as well. Formants would be turned into partials. Maybe this effect is interesting as an alternative mixing approach, preserving the resulting timbre of the instruments in a more direct way? It is different for sure, but I guess it complicates mixing, or rather, turns mixing into something else? Now, I do apply EQing/filtering to samples by resampling them (via the render-selection way…), but I consider this a sound design step, producing waves with a certain shape and/or timbre that I can work with, and I still try to EQ afterwards with a realtime live EQ, to make it fit the other sounds going on without turning into mud…
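The transposition problem is easy to put numbers on: an EQ boost printed at 1 kHz lands somewhere else entirely once the sample is repitched. A small worked example (not tied to any particular DAW):

```python
# Repitching a sample transposes any printed EQ curve along with it.
semitones = 7                    # play the sample a fifth up
ratio = 2 ** (semitones / 12)    # ~1.498 playback speed ratio
printed_peak_hz = 1000.0
print(printed_peak_hz * ratio)   # ~1498 Hz: the baked-in boost moved too
```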

…btw, I just checked, you ‘can’ drop a sample directly onto the pattern in 3.1.

This thing about rendering sample FX is quite an issue to me. Just checked it with some reverbs - yes, the difference is huge between “live” and rendered.

Explained here: https://forum.renoise.com/t/anyone-who-likes-aphex-twin-and-trackers/47971

Explained here: https://forum.renoise.com/t/anyone-who-likes-aphex-twin-and-trackers/47971

Needs to be double/triple-checked, but I’m pretty sure the song sample rate and the actual sample’s rate were the same, 44.1kHz 16-bit.