Realtime Stretch

How about the glide command (5xx) and a vocoder? Pretty funky stuff!

I’d love to have a timestretch like in Fruityloops, where you can quickly adjust the pitch and length as you wish with two knobs. That's what I miss in Renoise. In Fruityloops, though, it was sometimes too rough, and afterwards you were only allowed to play a single pitch on all keys. Pretty much a death sentence for many applications, though I used it a lot. It turned a Ferrari into a lame horse, but at least it was still a horse.

Great respect for Renoise in any case - very cool to be able to use a good old tracker together with vsts and stuff.

Maybe we should professionally categorize Renoise as a sequencer, not a DAW. It’s a sequencer with balls. I think it shines at two things: external MIDI clocking/sequencing and virtual instrument sequencing. To me, it’s sort of like an Akai MPC with VST support and more functionality. The fact that it supports ReWire makes it easy to bounce your tracks into your DAW of choice. Running it under a DAW and controlling external MIDI gear that way also makes a great combination.

Now, as for timestretching, I always thought implementing a realtime granular capability in the sampler would be beneficial. You could make up for the chops of the 09xx command that way, since you could create short grain clouds with adjustable grain width and speed while the 09xx command sweeps the offset through the sample over time.

Not only that, you could make super small sample-based projects without having to deal with loop points and re-pitching, even creating big textures out of very small sounds for extremely low file sizes. That way you kill two birds with one stone; both are equally useful, and it might even be something the Renoise team would prefer, since they all like the minimalist approach to things.
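
To make the grain-cloud idea a bit more concrete, here is a rough sketch (illustration only, not Renoise code; names like source, GRAIN_SIZE and GRAIN_SPACING are just assumptions): short Hann-windowed grains are read from random positions in a tiny source sample and overlap-added into a much longer output buffer, which is how a big texture can come out of a very small sound.

// Minimal grain-cloud sketch (illustration only, not Renoise code).
// Short Hann-windowed grains from random spots in a small source sample are
// overlap-added into a long, zero-initialized output buffer to build a texture.

#include <math.h>
#include <stdlib.h>

#define GRAIN_SIZE    2048  // grain length in samples
#define GRAIN_SPACING 512   // hop between grain onsets (smaller = denser cloud)

void grain_cloud(const float *source, int sourceLength, // source must be longer than GRAIN_SIZE
                 float *out, int outLength)             // out must start as all zeros
{
    for (int onset = 0; onset + GRAIN_SIZE <= outLength; onset += GRAIN_SPACING) {
        int start = rand() % (sourceLength - GRAIN_SIZE); // random grain start in the source

        for (int i = 0; i < GRAIN_SIZE; i++) {
            // Hann window so each grain fades in and out and overlaps smoothly
            float w = 0.5f * (1.0f - cosf(2.0f * 3.14159265f * i / (GRAIN_SIZE - 1)));
            out[onset + i] += w * source[start + i];
        }
    }
}

Shrinking GRAIN_SPACING or randomizing the grain pitch would give the denser, more fluctuating clouds described above; the core loop stays the same.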

Edit: http://senduit.com/a46d6f - I made an A/B test just for kicks to show the method in action; it's nothing but a Fruity Granulizer re-triggered with automation. All done in real time, so no waiting :)

Yes - put timestretching as an interpolator type directly in the sampler! Then the sample sync could track the tempo, yet you would be able to play different notes without any extra looping.

I’d like to have “rubber points” in the sample editor too, where I could stretch parts of samples freely - to fix or rearrange the timing of notes, for example.

If timestretching exists, you will be able to import a rhythm sample of ANY length and tempo, or any singer's voice, and create the melody or mix you want. Then Renoise mixing will have unlimited combinations :) For me, currently, the best timestretching is implemented in WaveLab 5 - mark the sample and hit T - a window will appear and you can fine-tune your sample. Unfortunately it is not realtime.

Anyway - a looong time ago, there was a WaveLab plugin that did realtime pitch shifting and had just three knobs: one for voice pitch, another for “mouth opening” so it could sound more or less male/female/kid, and a mix knob. And that was in 1999, when I used to work at a TV station. That plugin was perfect for voice.

While a timestretch function on its own is all well and good, it will, in my opinion, need a little more meat to be flexible and really useful. For instance: audio tracks, so you can line up an original beat with the acapella (when you’re doing a remix, for instance) and get the timing right. Also some form of dialog where you input original and destination BPM values, not just manual drag-and-stretch, which is good for some things but not really for vocal timing and so on (well, it’s good for fine tuning, but not for quickly getting a full vocal track in sync). And the E1 and 09xx commands do not cut it (for those who think they do) :) They are, however, very useful for other things.

At the moment I’m using Cubase for all my timestretching needs, and it works fine, but I’d love to do it all “in-house”.
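
For the BPM dialog mentioned above, the underlying math is just a ratio; a hypothetical helper might look like this (the 110/125 numbers are only an example):

// Hypothetical helper for an "original BPM -> destination BPM" dialog:
// the time-stretch ratio is simply the ratio of the two tempos.
double stretch_ratio(double originalBpm, double destinationBpm)
{
    // e.g. 110 -> 125 BPM gives ~0.88, i.e. the audio must be shortened to 88%
    // of its original length to fit the faster tempo (pitch left untouched)
    return originalBpm / destinationBpm;
}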

Once Lua scripting is there, a few extra possibilities will open up. Access to sample data is supposed to be possible, so imagine coding a transient detector, splitting a sample into new instruments, and then populating a pattern with notes set to recreate the whole thing.

Or (if io.popen is exposed) just run some external program to do the work needed.

Of course, such workarounds shouldn’t be needed just to get some timestretched samples. Still, things should at least get easier…
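
The transient-detector part of that idea doesn't depend on Renoise at all; a rough, energy-based sketch of the detection step could look like this (plain C with made-up names, shown only to illustrate the logic; an actual tool would read the sample data through the Renoise Lua API instead):

// Rough energy-based transient detector (illustration only, not the Renoise API).
// Marks block positions whose energy jumps well above the previous block; those
// positions are candidate slice points for splitting the sample.

#define BLOCK     512   // analysis block size in samples
#define THRESHOLD 2.0f  // energy ratio treated as a transient

int detect_transients(const float *samples, int numSamples,
                      int *onsets, int maxOnsets)
{
    int count = 0;
    float prevEnergy = 0.0f;

    for (int pos = 0; pos + BLOCK <= numSamples; pos += BLOCK) {
        float energy = 0.0f;
        for (int i = 0; i < BLOCK; i++)
            energy += samples[pos + i] * samples[pos + i];

        if (prevEnergy > 0.0f && energy > THRESHOLD * prevEnergy && count < maxOnsets)
            onsets[count++] = pos;  // candidate slice point, in samples

        prevEnergy = energy;
    }
    return count;  // number of detected slice points
}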

How about having simple, fine-grained granular time stretching / pitch shifting at first (better than 09xx but in the same vein)? The workflow and GUI would be the same as with some external (and more advanced) library, but it would have no licensing costs and might be useful for some people.

That's quite a good idea actually - not sure if it would actually be possible, guys?

But certainly good for minor changes in tempo…

Just adding a comment to the thread. From time to time I search in the hope of finding an amazing new timestretch VST or some other solution I could use in Renoise.

It seems that while the Elastique 2 algorithm is very good, the overall consensus on the web (on sites like KVR) is that nothing can touch Dirac LE. It sounds like Dirac is a resource hog, but what comes out of your speakers can’t be matched. It sounds like it may be a very good solution to implement, because it’s free and has the power to get top-of-the-line results (even if it can’t be done in real time).

Well put sir.

I have built a lot of time-stretch/pitch-shifters in Max and there is a lot of really tight processing going on even with the simple ones.

I think adding automation to sample loop points might open up some interesting possibilities.

Dblue, I’ve always been very impressed with how responsive (or how close to real-time) your stretchers are.
Do you have any tips for achieving this? My best attempt was a very short look-ahead recording buffer; then, to avoid using overlapping windowed grains, I tried a single windowed grain with a delay timed according to the grain duration… which sounds fine. Any light you could shed on this subject would be greatly appreciated, thanks.

I don’t have any experience working with Max myself, so I’m not sure exactly what sort of limitations might exist when using its standard objects. The basic stretchers I have done - which can only slow the audio down, not speed it up - begin playing the audio buffer while it is simultaneously being recorded, so there is literally no latency at all. I’m not performing any kind of interpolation or advanced processing that would require there to be a delay.

A very simple example (with no range checking!!) might look like this:

  
// setup (state that persists between calls)

#define BUFFER_SIZE 1048576          // room for the longest stretch you want

float buffer[BUFFER_SIZE];           // array of samples to hold the buffered input
int bufferWritePosition = 0;         // buffer write position in samples
int bufferReadOffset = 0;            // buffer read offset in samples
int grainSize = 256;                 // size of grain in samples
int grainPosition = 0;               // playback position within the current grain
float stretchFactor = 0.5f;          // 1 = normal speed, 0.5 = half speed, etc. MUST be within the range [0.0, 1.0]!!!

// processing loop: called once per input sample, returns one output sample

float process(float input)
{
    // record input
    buffer[bufferWritePosition] = input;
    bufferWritePosition = bufferWritePosition + 1;

    // get output
    float output = buffer[bufferReadOffset + grainPosition];

    // update grain
    grainPosition = grainPosition + 1;
    if (grainPosition > (grainSize - 1)) {
        grainPosition = 0;
        bufferReadOffset = bufferReadOffset + (int)(grainSize * stretchFactor);
    }

    return output;
}

Here’s a little visual demo I made a while ago:
http://illformed.org/temp/stretch.swf

Obviously there’s a bit more to it than that, but this is essentially the core logic that I use. Realistically you would want to keep track of at least 2 unique grains so that you could alternate between them, using some appropriate method of fading/smoothing to avoid clicks in the output. You would also want to have some logic in there that decides what to do when the input buffer is full. I personally just let the stretch effect continue to play until the user stops it, but you could also do some interesting stuff that was tempo-synced, where the buffer and stretching are constantly being reset every X beats.
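
To make the “at least 2 unique grains” part a bit more concrete, here is one possible way to do it (my own sketch under those assumptions, not dblue's actual code): two read positions are staggered by half a grain, each is shaped with a Hann window, and the two faded grains are summed so one fades in while the other fades out.

// Sketch of the two-grain crossfade idea (an interpretation, not dblue's code).
// Two read heads are staggered by half a grain; each is faded with a Hann window,
// and at 50% overlap the two windows sum to a constant, so clicks are avoided.
// As with the simple example above, there is no range checking here.

#include <math.h>

#define GRAIN_SIZE 256

static float hann(int i)
{
    return 0.5f * (1.0f - cosf(2.0f * 3.14159265f * i / GRAIN_SIZE));
}

// buffer and stretchFactor as in the simple example above; call once per output sample
float stretch_two_grains(const float *buffer, float stretchFactor)
{
    static int readOffsetA = 0, posA = 0;
    static int readOffsetB = 0, posB = GRAIN_SIZE / 2; // grain B runs half a grain behind

    float out = hann(posA) * buffer[readOffsetA + posA]
              + hann(posB) * buffer[readOffsetB + posB];

    if (++posA >= GRAIN_SIZE) { posA = 0; readOffsetA += (int)(GRAIN_SIZE * stretchFactor); }
    if (++posB >= GRAIN_SIZE) { posB = 0; readOffsetB += (int)(GRAIN_SIZE * stretchFactor); }

    return out;
}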

Anyway, hopefully this is helpful to you.

I don’t understand the need to re-invent the wheel. Dirac LE is free to implement and has fantastic sound quality. I realize that a lot of the Renoise userbase does glitchy and experimental music, and I think there are ways to get interesting stretch effects with things like Paulstretch.

With that said, stretching a vocal from 110 BPM to 125 BPM with as few artifacts as possible requires a pretty good algorithm. Dirac is already built and it’s free… doesn’t it make sense to implement that rather than building something from scratch?

Wow, awesome, that was really helpful - thanks man!
This should work if I can get the recording and playing buffers to sync like that. I wouldn't even bother getting into Max if you know how to code this stuff, though it might make quick experiments or idea sketches easier, and it's definitely great for visualizing ideas like this.

I threw this standalone stretcher together today for anybody who wants it. You’ll also need the Max runtime. Somebody was talking about vocal samples, so I included a loop by MIA. The metro/step timer was originally intended for beats and stuff, but you can just turn that off for straight-up looping. To start, just click the speaker and then the green box next to the wave. The little blue box will freeze it, and then you can scrub around with the slider below the wave. To configure your output, click the little button next to the speaker. Hope it works!

In my opinion, the real benefit of a timestretching/pitchshifting algorithm would be to have it in the sampler!
Then, if you have a sample at C-4 and you play it at C-5, it will only be pitched, but NOT stretched (or shortened, in that case).
I think that is what the focus should be on. Just stretching some long portion of audio can be done in any other audio-oriented DAW anyway, but inside Renoise it should improve the sampler.
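
If the sampler did this by resampling first and then time-stretching back to the original length, the compensation is a simple function of the transpose amount; here is a small sketch of the math (my own illustration of one possible approach, not how Renoise actually works):

// Illustration of decoupling pitch from length (not Renoise's actual implementation).
// Transposing by n semitones via resampling multiplies the playback rate by
// 2^(n/12) and shortens the sound by the same factor; a time-stretch by that
// factor afterwards restores the original length, leaving only the pitch changed.

#include <math.h>

double playback_rate(int semitones)        { return pow(2.0, semitones / 12.0); }
double compensating_stretch(int semitones) { return playback_rate(semitones); } // length multiplier applied after resampling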

What do you think?

fladd

Yup. I can testify that, thanks to this little tip from Conner_Bw, I now have a “built-in” realtime timestretch function in Renoise and was able to ditch Ableton Live for that purpose. All I have to do is loop a sample, check Sync in Sample Properties, fire up AUPitch and tweak the pitch to taste, which is exactly what I wanted.

For non-OS X users, we also have the rubberband Lua tools in 2.6, like
http://tools.renoise.com/tools/rubberband-timestretchpitch-shift
and another one, the latter of which can apply timestretch/pitch-shift to only a selection of a sample, from what I understand.

For general use, I’m really enjoying the sample sync + AUPitch method and find it the easiest to use.

What I miss most in Renoise is timestretch in the sample editor. I got the pitch-shifting / time-stretching script from the tools section but …

I used to create in Ableton Live before switching to Renoise. The most powerful and flexible thing about it, for me, is its sample editor.
It allows you to timestretch slices of a recording, so mistakes by the musician can be harmlessly corrected within the editor just by dragging the marker points of the specific slice into the right place.
Of course, in Ableton this happens in real time.
And yes, technically it would be possible to do the same thing using the timestretching script in Renoise, but without the visual representation and the ability to cut the sample into slices, you can't really do it.

Giving Renoise that kind of slice/timestretch ability plus a beat grid over the waveform would make it one of the best solutions there could be for recording and editing live instruments and vocals :)

That kind of implementation of timestretch in the sample editor is what I dream of at night :wink:

I just looove the timestretch/pitchshifting tool, and it's right there in the sample editor, and I think it's somehow possible to make some great extreme (not as extreme as Paulstretch, but anyway) timestretching with it.

Timestretch in the sample editor is quite a good solution in some situations, but it just can’t compare with real-time stretching/pitch shifting. After remixing some tracks recently, that and sample slicing bound to MIDI notes were the only things I found constantly missing. I am not sure if that’s trivial to implement, but it would make Renoise a much more versatile app. I have used (and am still using, when working on Windows) Livelab Liveslice, a wonderful slicer VSTi with realtime granular-based slicing that sounds just great! Hopefully Renoise will have something like that in the near future.

That Rubberband app sounded awesome.