Sampler extended functionality

As the Renoise sampler performs so well and is the heart of the system, extending that architecture seems a natural choice, though I understand that could be a tricky sell for the same reasons. :) There is a video on YouTube called “Fairlight CMI30A Realtime effects” which shows some wonderful possibilities. I watched it while looking more closely at the Renoise sampler engine to see whether any of these tricks are possible. Mapping velocity and keyboard position looks like something Renoise might easily be capable of already, but more extended functions, such as access to individual filter and resonance envelopes/curves on a per-zone basis, may be out of reach. Combining velocity and key position also looks to be within reach of existing Renoise functionality.

It all seems tantalizingly close, but what’s missing is individually accessible zone filters, or simply the way the CMI30A’s interface makes that level of per-voice control possible.

Sorry if this is a little vague, but watching the mentioned video should make it a whole lot clearer. I’d be interested to hear what people think.
I’m going to try to emulate this functionality with what is currently available and see if I can simply map note position onto a well laid out filter curve.

The excellence of the sampler and its depth of precision control is really what sealed the deal between me and Renoise,
so a big up to the designers here. This is literally the only software I actually enjoy using; all the rest are a chore. :)

Welcome to the forums,

I’m all for new functionality, but fail for not supplying a link to the video! :wink:

I wasn’t sure if you could, so thanks. And thanks. :)

No worries. :) Some nice functionality there in the video. I’m not sure, but maybe by using a Key Tracker meta device, set up to control an LFO with a custom shape driving whatever effect (volume, filter, etc.), you could emulate the Fairlight to an extent?

Yes, you can do a lot and it gets close. The stopper is that there is only ever one global volume, filter, and resonance curve per sampler. I was thinking of making a very long filter vector shape and trying to map the notes to play only portions of the curve,
probably using those Key Tracker meta devices you mentioned. It’s not ideal, but it might be possible to a degree. I get the feeling I’ll lose resolution though, because of the 256-value (hex 00–FF) limit.
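To make the resolution worry concrete, here is a minimal sketch of the idea of mapping note position onto portions of one long 256-step curve. This is plain arithmetic, not the Renoise API; the key range and function names are illustrative assumptions.

```python
# Hypothetical sketch: map a MIDI note onto an offset in a 256-step
# custom curve, as discussed above. Names and the key range are
# illustrative assumptions, not part of any Renoise API.

CURVE_STEPS = 256              # assumed curve resolution (hex 00-FF)
LOW_NOTE, HIGH_NOTE = 24, 96   # assumed playable key range

def note_to_curve_offset(note):
    """Map a MIDI note linearly onto the curve's step range."""
    note = max(LOW_NOTE, min(HIGH_NOTE, note))
    span = HIGH_NOTE - LOW_NOTE
    return round((note - LOW_NOTE) / span * (CURVE_STEPS - 1))

# Resolution check: how many curve steps each semitone gets.
steps_per_key = (CURVE_STEPS - 1) / (HIGH_NOTE - LOW_NOTE)
```

With a 72-semitone range, each key only gets about 3.5 curve steps, which illustrates why a 256-value curve feels coarse for this trick.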

My thinking was that having individual volume, filter, and resonance curves per zone would probably sort that out. That could open up all the requisite possibilities to make it a truly devastating sampler. :)


Maybe you can also incorporate the Meta Mixer device in your set-up, splitting the signal up into low, mid & high frequency regions that can be controlled individually?

Yes, I think that’s also an avenue. You could have a single voice per track and place them all in a grouped track, splitting up the MIDI note data on the inputs, though I can’t remember how many active input splits you can make. You can see how close Renoise is already. Creating one mega 12-voice split-track sampler could get a little hard to handle, though, and then there’s the problem when you might want to remap the notes: you’d have to handle different protocols/approaches. I think the real solution would be individual zones having individual envelopes; after that, I guess even a cleverly designed scripted interface might handle the bulk of the interfacing.
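The note-splitting step above can be sketched as a simple routing function: decide which of the twelve zone-tracks should receive each incoming note. This is a hedged illustration of the idea only; the zone count and range are assumptions, and the actual splitting in Renoise would be done with MIDI input settings, not code like this.

```python
# Illustrative sketch of the "one track per zone" workaround:
# route incoming notes across N zone-tracks, one zone each.
# ZONES and the key range are assumptions for the example.

ZONES = 12                 # the 12-voice split-track idea above

def zone_for_note(note, low=24, high=96):
    """Pick which zone-track (0..ZONES-1) receives this note."""
    note = max(low, min(high, note))      # clamp to the mapped range
    width = (high - low + 1) / ZONES      # keys covered per zone
    return min(ZONES - 1, int((note - low) / width))
```

Remapping notes later means recomputing every zone boundary, which is exactly the management headache described above; per-zone envelopes inside one instrument would avoid it entirely.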

There’s another video where he shows the phase accuracy of the voices playing the same sample at different pitches.
Renoise has the same accuracy from what I could tell. I set up a similar test condition and it met it admirably. :)

Just to report: I’m having some success with this, Djeroek. You currently have to use the tools you suggested in the DSP section. It’s a little convoluted, using things such as a Key Tracker and an XY Pad to visualize and constrain, then feeding an LFO set to 1, which resets another, custom-shaped LFO. That then feeds a Meta Mixer, which in turn feeds a Filter and a Gain DSP, then a compressor. I might be able to simplify it, but it does work; it’s quite a bit more general and less focused than the video example, but that is expected. I might post an example at some point if it proves possible.

These DSP key trackers only route into tracks, of course, and the envelopes in the sampler itself look as though they cannot be targeted. I suppose if key trackers and velocity trackers were available in the sampler, and there were a second LFO to drive the primary LFO’s reset, that could be a very powerful addition to the sampler architecture. It could also potentially drive the playback start position of the sample itself. If the zones were then eventually made independent, the sampler architecture would be even more powerful; it might even help ease up on DSP processor resources.

Here’s an example of my attempt to emulate the CMI system.

I’m trying to figure out a way to emulate traditional velocity-to-amp/volume scaling at the moment.
I understand that the pattern section accepts velocity values and maps them to the volume of the sample/instrument,
but actually remapping that in a sensible way is proving difficult. In the DSP section I am having to map velocity
to gain (via an LFO in this case). The issue there, though, is that the Gain is global and not voice-independent.
The sampler’s volume envelopes do function in a voice-independent fashion, but they have no velocity-to-depth
functionality as in traditional samplers; similar with the cutoff and resonance, I guess. Of course, aside from this,
what the system currently has, and how it sounds, is really good imo. Hence a desire to make it one of the best. :)
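For reference, the behaviour being chased here is the classic per-voice velocity curve found on traditional samplers. A minimal sketch, purely to pin down the maths; the function name and the `depth` parameter are assumptions of mine, not anything in Renoise.

```python
# Hedged sketch of conventional velocity-to-amplitude scaling,
# applied per voice. "depth" is an assumed parameter: 0 ignores
# velocity entirely, 1 tracks it fully.

import math

def velocity_to_gain_db(velocity, depth=1.0):
    """Return the per-voice gain in dB for a MIDI velocity (1-127)."""
    v = max(1, min(127, velocity)) / 127.0
    amp = (1.0 - depth) + depth * v   # crossfade toward full tracking
    return 20.0 * math.log10(amp)     # convert amplitude to dB
```

The key point the post makes is that this has to happen independently for each sounding voice; a single global Gain device in the DSP chain applies one value to all of them at once.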

If anyone downloads the example song file, you might have to check your audio output in preferences;
also, I think I was possibly working at 96 kHz. The song contains one simple looped waveform as the sound source.
I’ve done minimal work on the instrument settings envelopes: just a little high roll-off, a smidgen of pitch bending,
and a simple bit of volume shaping to make the waveform more instrument-like.

Hi Cas.

It’s hard work but I’m getting a better angle on this kind of CMI functionality in Renoise.
I took a look at your tools today and by the looks of things I won’t need to explain what the issues are.

The main difficulty is that when an LFO device is used in the DSP section to emulate how the CMI works,
it has to be connected to drive a Gain (in the case of velocity-to-gain, say).
When a new key is triggered, the LFO jumps to a position just fine, like in the CMI video.
However, because the gain is global to the whole instrument, any other sounding voices also have their gain altered.
This is OK for emulating subtle overall compression effects but soon breaks down with more complex
custom-drawn LFO curves. The same issue applies to mapping key range or velocity to filters.

The only possible third-party solution I could think of is a tool that maps the key-range distribution
of samples in the instrument to independent tracks, and then copies the same DSP architecture to each of these
independent tracks. It’s still not ideal, as you may imagine, but because the core sample engine has no
independent filters, or ways to map key position or velocity to a specific starting point in an independent
filter/volume envelope, you have to go into the DSP section to achieve it. The DSP is global to any single
instrument to boot, so the only workaround I can see is the one above, which is likely
processor-intensive as well as somewhat difficult to manage.

I’ve made a slightly simplified version now, which I may upload at some stage.