Anyone who likes Aphex Twin and trackers...

I looked at the code. There is no explicit menu addition. There’s only code that allows it to be bound to a keyboard shortcut.

I did that, and gave it a whirl. It does make certain things easier (pitch change, for example) but there is no option I can see to assign a specific instrument to the selected note.

Being able to change the instrument is less important… for me at least. Plus, if you design your instruments properly, changing the note can change the sample. The lack of sample effects sucks though… maybe someone should add these features to it.

I definitely wish this would be added to Renoise natively though. Seems like it wouldn’t be that hard to do either

Aphex seems to use the mouse quite a lot though. Has he become a ‘clicker’ at the end of the day? Whatever happened to using keyboards in trackers…

The piano roll feature I can do without; I’ve been using that for years and never really liked it. Notation view, however, works for me, if that could be brought to Renoise.

Changing instruments: the menu dropdown is just a workflow shortcut. It can help, though it’s fixed for me once I’m done with the assignments, and the MPCs don’t work this way either. Also, I just mapped mouse-button macros to the Find and Replace tool and the Advanced Editor using keystrokes, which makes pressing Esc, selecting notes, right-clicking and invoking the tools a one-click affair. Just assign a shortcut for the tool first.

One thing I would like with the Find and Replace tool is for it to automatically read the current track’s instrument numbers instead of requiring the ‘source’ to be entered manually. Simple + and - buttons for increment/decrement would also help. This could be done by the tool’s author, and then it would be a one-click affair again.
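That auto-read idea is simple to sketch. A hedged illustration in plain Python (the flat `track_lines` data layout is a hypothetical stand-in; a real tool would walk the pattern lines via the Renoise Lua API instead):

```python
def instruments_in_track(track_lines):
    """Collect the distinct instrument numbers used in a track's note
    columns, in first-seen order (hypothetical flat data layout:
    each line is a list of (note_string, instrument_or_None) pairs)."""
    seen = []
    for line in track_lines:
        for note in line:
            instr = note[1]
            if instr is not None and instr not in seen:
                seen.append(instr)
    return seen
```

Feeding it lines like `[("C-4", 1)], [("E-4", 2), ("G-4", 1)]` returns `[1, 2]`, which could then pre-fill the Find and Replace ‘source’ field automatically.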

These sorts of minor features can certainly help, much like the “reflection” feature in an IDE (Integrated Development Environment), where function names pop up from a menu and fill in our code lines. However, for trackers we already have MIDI key controllers, pad controllers, knobs and sliders, plus the computer keyboard and the mouse. How many more peripherals does Aphex really need? It must be because of his style of production, where he is more into the Max/MSP paradigm of music making.

For an EQ freeze function or FX, he can just render the track and bypass the plugins (they all have a deactivate checkbox). He must have a lot on his mind to quibble about such things. At the end of the day he really is a genius, so maybe he just needs it the way many writers need coffee or a particular brand of pen: not for functionality’s sake but for personal preference and comfort.

I love his music though; I’ve been a big fan of his work since school days, and some of his tunes are real masterpieces, much like Venetian Snares, and they never get old. It’s quite predictable nowadays, though, and similar-sounding, and current toolsets have made this style of music really simple to achieve. I believe my tastes have matured into more organic styles where music theory, instrumentation and musicianship do matter, in addition to production ethics. Being too cerebral or one-track (!) has its own pitfalls.


Where did you find a working PlayerPro build? The OSX version isn’t working :frowning:

thnx!

I would imagine he would approve if this feature got implemented?

https://forum.renoise.com/t/click-n-drag-to-change-values-in-the-patter-editor/42353

I certainly wouldn’t mind :slight_smile:


Where You found working PlayerPro build? OSX version not working :frowning:

thnx!

Try this:

https://woolyss.com/tracking/tracking-trackers/PlayerPro.zip

It’s buggy and crashes sometimes, but it works well enough to see the feature set.


I have a simple idea, but I don’t know the depth of the Renoise API. Is it possible to query a value in the Pattern Editor from any of the columns via an API function? And is it possible to ‘set’ a new value at that location via the Renoise API? If so, we can build a KeyShortcutExtender tool that takes in new key shortcuts, processes them internally via the API and gets the job done. A GUI isn’t even needed beyond specifying the tool shortcuts in the Key Settings config pane. A simple list of internally implemented features could be exposed and shortcuts set for them. If this is possible, which I think it should be since you said that the API is deep and powerful, a simple query-and-set is all that is needed, right? Nothing too extensive really. Two functions for two shortcuts, increment() and decrement(), that are event-processed whenever a shortcut is pressed.
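As a rough sketch of that event-processed shortcut idea (a plain-Python stand-in; in an actual Renoise tool the handlers would be Lua functions registered with `renoise.tool():add_keybinding`, and the 0x00–0xFF range here is just an assumption for illustration):

```python
def make_dispatcher():
    """Build a tiny shortcut dispatcher: each shortcut name maps to a
    handler that is invoked when the key event fires."""
    state = {"value": 0x40}  # stand-in for the value under the cursor

    def increment():
        state["value"] = min(0xFF, state["value"] + 1)

    def decrement():
        state["value"] = max(0x00, state["value"] - 1)

    bindings = {"Increment Value": increment, "Decrement Value": decrement}

    def on_key(shortcut_name):
        bindings[shortcut_name]()  # event-processed, as described above
        return state["value"]

    return on_key
```

The point is just that nothing extensive is needed: one table of handlers, and the host fires them per keypress.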

Is it possible to …

Yes: except for one small detail, everything you described is possible. That “detail” is that the selection you can read is currently limited to a column number.

So you can determine the pattern, the line, the track and the column. But that’s currently where it stops: note, volume, panning or delay column? It won’t tell you. But you can still read these values, of course; it’s just access to the smallest “sub-column” level of the selection (cursor position) that’s currently missing.

So if I were to do such a tool, I would expose separate shortcuts to control each of those values: pan/vol/etc.

And, thinking about it, that might indeed be a _better_ solution, because you would have less need for horizontal navigation?

The Renoise API doesn’t read the mouse-wheel position or track click-drag events either (did I mention that GUI isn’t the strong side of the API? :)), so a tool that fulfills the request in the topic I linked to… well, that can’t be done with scripting atm.


I am browsing the API forum section now :slight_smile: So essentially what you are saying is that the user’s selection area isn’t referenced in the API; the lowest resolution of detail is the column itself. If that is the case, can the Advanced Editor be hacked into? ‘Selection in Pattern’ is the first option at the top of the Advanced Editor, and the ‘Add’ function is in the dropdown menu right at the bottom, along with the increment factor and the Apply button. Once the user makes a selection:

  1. Set the step length to 0.

  2. Enable editing in the pattern editor by pressing Esc.

  3. Set the increment factor to 1 and set the mode to Add.

  4. Somehow activate the “Apply” button. I cannot MIDI-map it, and the Advanced Editor doesn’t come with any keyboard shortcut in the settings either. So hacking in is the only way for now, I suppose, if the API doesn’t support column data types or reading the user’s selection data. This segregation of selection and data ranges is done quite well in the Advanced Editor. We just need to figure out a way to connect the keyboard to the Apply button rather than the mouse.

If so, then maybe we can indirectly plug in another function call that reads the values from the selection range and pipes them into our increment()/decrement() functions. Is the API extensible from outside the Renoise binary or source code? If not, tough luck. Additional shortcuts for V/P/D increment/decrement might not be the best way (2×3 = 6 new shortcuts to learn). A user can just select whatever data type he is using during regular pattern-editor work and press the increment/decrement shortcuts; the tool then intelligently deduces the data type and fixes the range accordingly to prevent overflow or underflow (using simple checks): for V, up to 0x7F only; for Delay, up to 0xFF; and so on.

Regarding mouse-wheel processing, it is not something I would enjoy using; keyboard shortcuts are the way to go. Or else it could be made very interactive, like a pop-up window that lets you choose from the available values, making click-and-drag slower. Things like fast acceleration (pressing Ctrl) or unary increments only (pressing Shift) would be additional range-based features.

If all else fails for now, I can try to use mouse recording macros and apps like AutoIt.

What really fucks me up about Renoise is that everyone here is commenting on the mouseclicking, yet ignoring THIS, VERY, SPECIFIC, FEATURE, THAT RENOISE, SHOULD, HAVE:

You could print plugin effects directly & destructively onto the sample, hence freeing up CPU, but you could hear the effect first before you printed it.
I’ve really pecked several people to do this, and it did finally get done in Renoise, but it’s still not as accurate as PP; gain is not handled correctly, last time I checked. Renoise has that great highlight-part-of-the-arrangement thing, but the gain doesn’t get worked out properly when you have a bunch of FX. It’d be top if this is fixed now?
The other reason this feature is so good and powerful is that most people these days set up EQs on each channel etc., and they just sit there wasting CPU, and most importantly the urge to carry on tweaking them always remains.
You would be amazed how it can train your brain to get it right the first time when you are forced to make a decision about EQ and then can’t change it. A bit like with a digital camera: you just take loads of shit pictures of the same thing instead of one that’s right. I’m generalising.

But every sampler VST I’ve seen does this as well; it’s the wrong way to do it. All your plugins should be available in the sample editor to apply to samples, not just on the mixer. Well, you need both.

I think it’s because, in the beginning of audio on DAWs, coders were fixated on replicating real mixing desks and recording bands, but this didn’t take into account the new way people were going to start using DAWs.
But even if you can’t take that discipline, you could just have an undo history on the sample… so you wouldn’t have to re-EQ if the EQ were wrong… and you could also have an amazing CPU-guzzling EQ on every sound.
It just doesn’t make any sense to have a live EQ on static samples… yet every DAW does this, unless I missed one? I’ve checked all of them and they all do that. Frustrating when everyone goes down the same wrong road.

good grief, that’d be fucking lush to have in Renoise. Not that any Renoise dev here seems interested in listening to the A to the F to the X.

(╯°□°)╯︵ ┻━┻



You mean this feature?

It works as expected, and you CAN use any plugin you use on the track, but you can also create FX chains and apply them right from the Sampler’s waveform editor. After rendering, the FX chain is disconnected. You can reconnect and reapply from the FX dropdown menu on the left side, and you can deactivate the entire chain from the FX chain tab. The sampler workflow in Renoise is really good, by the way; it rivals that of the MPC and NN-XT in Reason.

About gain, however, it’s kind of true: the gain is not too accurate, and the conversion done via rendering seems to miss the mark a little. When you play it live it sounds clearer; after rendering, the FX is printed, no doubt, but it lacks some clarity.

As for the rest of the paragraph, he is mostly quibbling about how everyone should use EQ printing on samples as a better method of using FX rather than running a gazillion plugins in the background. It’s a workflow suggestion. However, he can always get one of those gaming laptops like the Predator, or build a PC supercomputer rig, if processing power is all he is complaining about. But I gather he is talking about being more disciplined and decisive, using EQ once only and tuning your ears, and not taking CPU resources for granted, even though in terms of both CPU power and space they are very much available for the right price (memory is cheap, however).

PS. Just noticed my level changed to ‘Advanced Member’. Cool :guitar: :dribble: :drummer:. Thank you very much dear moderator for the upgrade to level 1. Much appreciated.

Me so happy [smiles all the way…]


Love it. First you say that yeah, there’s this thing that exists; then you say, hey:

  1. gain is not too accurate (note how AFX actually spoke about this and mentioned it?)

  2. conversion “misses the mark a little” “live sounds clearer” “FX printing is done but lacks clarity”

So basically you’re agreeing with him. It’s not accurate, and it’s not precise, and it’s not as it should be.

The rest of the entire paragraph is the whole concept of why you should have dozens of EQs on every channel running live, when you could just imprint them onto the samples directly and save yourself a lot of CPU. And your response?

buy a faster computer

Listen, I’m collecting bottles and cans here in order to be able to afford a brand-new computer (don’t believe me? check http://deposit4se.tumblr.com/), and I’m not interested in this whole concept of purchasing a faster computer when the DAW of choice (in this case Renoise) could do things a little more cleverly and accurately, and allow exactly what AFX is suggesting. Don’t get confused because he talks about getting it right the first time, and about becoming quicker at music and sound design; that is valid for any DAW. But Renoise could really step up by making the gain and clarity more accurate when imprinting samples with DSP.

Love it.

Admit it - you didn’t know about this feature :stuck_out_tongue:

Love it. First you say that yeah there’s this thing that exists, then you say,… Don’t get confused because he talks about the concept of getting it right the first time, and becoming quicker at doing music and sounddesign, that is valid for any DAW, but Renoise could really step up with making the gain + clarity more accurate when imprinting samples with DSP.

Haha, confusion; let’s talk about more information :). I have a better suggestion: simply use Sonic Foundry’s Soundforge, or the circa-2001 versions of Cool Edit Pro, or even the 1998 version of Goldwave, and just use the FX and render them, because they do not work in real time. You know another genius who works this way? Burial (William Bevan), one of the more cinematic dubstep pioneers, who uses ONLY Soundforge and does all his ‘productions’ on an old computer because he can’t afford new ones. Listen to his tracks on YouTube, like ‘SpaceApe’, ‘Untrue’ or the super-cinematic ‘Pirates’, and you get the drift. It’s so leftfield that you would see why his two-page album review in the Guardian (or the Herald, I forget) warrants recognition. He has a very unique sound; I’ve never heard anything like him, before or since.

He produces unquantized music, working with everyday samples and obscure snippets; he does not use grids, and he does not play and record but programs the whole thing in a waveform view. He reads beats like a ‘fishbone’, in his own words. His vocal manipulation is superb too; the voices sound androgynous. He never uses any VSTi, etc. Many do not get his music or complain about his lack of new productions, but his first two albums are genre-bending and very timeless.

For me, though, ironically this is exactly how I started making music: using only wave editors, way back in school when computers were mostly game machines for me. Just using the copy-paste features and going by ear. Super tedious if you think you can work like Burial.

And yeah, since I got a gaming rig, I can run at least 1.5% of a gazillion plugins, which amounts to 20-30 tracks all running at the same time filled with plugins, and I’ve never had any issues. It’s 2017, bruh; it ain’t the Amiga days anymore. It’s also a convenience thing: not about whether I can or cannot do it, but whether doing it this way helps me return and change things later if I want to, like a tape machine with a time-travel feature.

Makes you think: if you had a time machine of your own, would you continually use it to go back and fix things in your life? Would you live in a loop by choice, reliving the best parts? Or would you be more disciplined about progressing in time, not getting old, or getting old and then young again, back and forth, basically playing with your own existence? I suppose this could be a sci-fi movie script on its own.

Regarding building a better ear for mixing and mastering sessions, that is a more disciplined process: telling when a kick is off, and by what frequency, or whether there is a hum somewhere in the final mix. Those are essential skills. So are skills like identifying the voicings and type of a chord, writing melodies you hear to sheet music, being interval-sensitive and having a high degree of relative pitch; those are the more useful ones in terms of musicianship.

Geniuses are sometimes called crazy because, with their brain calibre, they can sometimes work without certain core skills; their complementing abilities offset that. For others, skill itself becomes the first and foremost duty to obtain; only then can you actually use it.


computer (don’t believe me? check http://deposit4se.tumblr.com/)

I feel you, but what you are doing is good discipline, and I suppose life in Estonia must be great, living in Europe with great culture and food under the sun :). Btw, for money matters visit http://www.mrmoneymustache.com/. He gives superb advice with a more contemporary focus.

Admit it - you didn’t know about this feature :stuck_out_tongue:

i’ve used it but it sucked.

I feel you, but what, you are doing is good discipline and I suppose life in Estonia must be great living in Europe with great culture and food under the sun:).

Why are you talkin about Estonia?

Admit it - you didn’t know about this feature :stuck_out_tongue:

btw my comment was followed by a simple sentence where i take encryptedmind down a notch. he’s all like “use this” then goes to explain why it isn’t great, fully echoing afx’s writeup on why the gain is wrong on destructive rendering-to-sample. so why bother suggesting “this is a great solution to this problem” when you then follow it up with “actually not a great solution because of this that and the other”

Last time I checked, the FX button rendered the VSTs into the sample just fine; at least I couldn’t tell any difference from the original. I thought the gain thing Aphex describes might relate to the headroom setting in the song settings, something he may not be aware of?

Can someone like dblue scientifically :wink: prove that there is indeed something fishy going on with the FX button in the sample editor?

I’m not in front of a computer with Renoise to do tests myself.


Your original post was a direct copy-paste of a snippet from Aphex’s interview. It clearly mentions that he was expecting FX printing from the sampler as well, which, FYI, is not a ‘missing’ feature. In no other DAW have I seen this kind of decoupling within the environment, where the track FX stand on their own and can be rendered to a new wave file, while separate and more involved DSP chains can be individually modulated, and finally those DSP chains can be printed onto the samples. Either AFX missed this feature, or you failed to address this part by first making it clear that the feature exists in Renoise.

Secondly, I was talking in terms of the Renoise DSP FX, which in general are not as polished as Waves plugins but get the job done. When it comes to printing them into the sample waveform, I was not making a statement against whether it is worth it or not. Depending on the quality of the plugins, the sample bit depth, the soundcard bit depth and so on, internal conversion can sometimes be a little crude. That is all; the feature exists, and it works pretty well too.

When you say it sucks, how come 1) you never addressed that FX printing exists, and
2) if you were never privy to point 1, then how in the world could you tell whether FX printing sucked or not?

Your glaring omissions have nothing on my airtight argument and the ‘solutions’ I suggested. In fact, you always seem to complain about the very medicine for your ailment. For instance, suppose I am a doctor saying, “this medicine is right there in your bedroom closet; it tastes bitter, but it works”, and you reply, “oh, so you’re saying the medicine is bad”. It seems you have a limited understanding of things. If I say get a faster computer, you will say you have no money; if you read my previous post, where I say you can use Soundforge for offline FX printing as Burial does on his old computer, you will probably say it’s too difficult… I don’t have a solution for both your money and your learning ability at the same time.

Unless you clarify things upfront, it’s very easy to argue post mortem about what you meant in retrospect, because you can interpolate or fabricate facts. Ergo, you have not taken me down one bit.
But I see you are a long-time member who uses Renoise, so no disrespect, only love :slight_smile:

Regarding Estonia, I dunno; I probably saw Estonia written somewhere in the link you sent and thought maybe you were there. By the way, what language were the receipts in?

Regarding the other two features (changing a track’s instrument number from the Pattern Editor, and choosing note names from a popup in the Pattern Editor), I am working on a tool that will solve these issues, plus a couple of other personal additions, in the Renoise fashion. Renoise is way superior to PPro, so I will add the feature set but not emulate it.

Can someone like dblue scientifically :wink: prove that there is indeed something fishy going on with the fx button

We briefly talked about this yesterday, and he (dblue) pointed out that samples preserve their sample rate as you apply FX to them.
So, if your source sample is 44.1kHz and the song is being played at 96kHz, then yes, there’s an audible difference.

It makes sense. It would be great to automatically (but optionally) adopt the project’s sample rate, of course.
But a simple click on ‘Adjust’ before applying the FX will take care of it as well…
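To make the rate mismatch concrete, here is a generic linear-interpolation resampler sketch in plain Python. It is a deliberately simple stand-in for the idea of converting a 44.1kHz sample up to a 96kHz project rate before printing FX; it is not Renoise’s actual ‘Adjust’ converter:

```python
def resample_linear(samples, src_rate, dst_rate):
    """Resample a mono signal via linear interpolation (generic stand-in,
    not Renoise's actual converter)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate  # fractional position in the source
        j = int(pos)
        frac = pos - j
        nxt = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(samples[j] * (1 - frac) + frac * nxt)
    return out
```

A 441-sample buffer at 44.1kHz becomes 960 samples at 96kHz, so the duration is preserved; real converters use much better interpolation than this, which is exactly where the quality differences being discussed come from.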