Awesome update guys, thanks a lot!
When I heard about the pattern matrix my hopes for the feature Cosmiq is referring to went up again, but sadly it seems it has not been included yet.
When starting at some random point in a pattern:
- Renoise should know which samples (WAVs) were triggered earlier and are still supposed to be playing, by calculating the offset into the WAV (taking into account earlier pitch bend/slide and offset effects, etc.)
- Play those WAVs, with track FX applied from that point onward
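To illustrate the first step, here is a rough sketch (plain Python, not Renoise code) of the offset calculation being asked for, under the simplest possible assumptions: constant tempo and no pitch bend/slide or offset effects. The function name and parameters are made up for illustration; `bpm` and `lpb` (lines per beat) are the usual tracker timing parameters.

```python
# Hypothetical sketch: given the pattern line where a sample was triggered
# and the line playback actually starts from, estimate how far into the
# WAV playback should resume. Ignores pitch/offset effects and assumes
# constant tempo throughout.

def resume_offset_frames(trigger_line, start_line, bpm, lpb, sample_rate):
    """Frames into the sample at which playback should resume."""
    seconds_per_line = 60.0 / (bpm * lpb)        # duration of one pattern line
    elapsed = (start_line - trigger_line) * seconds_per_line
    return round(elapsed * sample_rate)

# Sample triggered on line 0, playback started at line 16,
# at 125 BPM, 4 lines per beat, 44.1 kHz:
print(resume_offset_frames(0, 16, 125, 4, 44100))  # 84672
```

Handling pitch slides and 09xx offsets would mean walking the pattern data line by line instead of using a single multiplication, but the basic bookkeeping is the same.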
That would REALLY make my day! And it would perfectly fit with the pattern matrix concept.
Please give it some thought, Mr. Taktik!
If I can control almost everything by MIDI, and I can write MIDI stuff in the sequencer, does that mean I can control almost everything from the sequencer?
Yeah. I tried to do it by hand-editing the XML and managed to crash 2.5b3.
Also, I learned that if you want to make a bunch of crazy copies of signal followers and destination DSP tracks like this, you want to copy the destination DSPs first. This is because the destination in the signal follower is stored as an offset relative to its own track, so as long as the offset stays the same (e.g. a copied follower on track 5 pointing at track 13, the same distance apart as the original pair) you don’t have to re-tweak the destination. Saves a little bit of time.
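A toy model of why this works (plain Python, not the actual Renoise file format): the follower stores its target as a track-relative offset, so a pasted copy automatically points the same distance away, and copying the destination DSPs into place first means the new link is already valid. The class and function names here are invented for illustration.

```python
# Toy model: a signal follower stores its destination as an offset
# relative to the track it sits on, not as an absolute track index.

class Follower:
    def __init__(self, track_offset):
        self.track_offset = track_offset  # destination relative to own track

def resolve(own_track, follower):
    """Absolute destination track of a follower sitting on own_track."""
    return own_track + follower.track_offset

# A follower on track 4 pointing at track 12 stores offset +8.
f = Follower(track_offset=8)
print(resolve(4, f))   # 12

# Paste that same follower onto track 5: it now resolves to track 13,
# so if the destination DSPs were copied to track 13 beforehand,
# nothing needs re-tweaking.
print(resolve(5, f))   # 13
```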
Cheers!! Will study this!! I started something myself too, but I still know too little about what happens ‘inside’ the vocoder to know what I’m doing. Again, thaaanks!!
It’s possible to make a formant filter: just create enough send tracks with a bandpass filter each (routed in parallel), where each bandpass has its own unique frequency (resembling the resonances of the human throat), and assign them all to a Hydra device.
Finally, all the bandpass filters should go through a master bandpass filter; changing the frequency of the master bandpass should also offset the frequencies of the individual bandpass filters.
This wasn’t quite possible with earlier versions, because you couldn’t assign the master Hydra slider to different tracks (meaning the frequencies of the bandpass filters). But now, with the cross-routing option added in 2.5, it is.
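For anyone who wants to hear the idea outside Renoise, here is a rough offline sketch in plain Python: several bandpass filters at formant-like frequencies run in parallel on the same input and are summed, and a single master scale factor offsets all the center frequencies at once, like the master Hydra slider described above. The formant frequencies are ballpark values for an "ah" vowel, and the function names are my own; this is a sketch of the routing idea, not the actual Renoise DSP chain.

```python
import math

def bandpass(samples, center_hz, q, sr):
    """Constant-skirt-gain biquad bandpass (RBJ cookbook coefficients)."""
    w0 = 2 * math.pi * center_hz / sr
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def formant_filter(samples, sr, master_scale=1.0):
    """Sum of parallel bandpasses; master_scale shifts every band at once,
    like one master control offsetting all the individual filters."""
    formants_hz = [730, 1090, 2440]  # rough "ah" vowel formants (assumed)
    bands = [bandpass(samples, f * master_scale, 8.0, sr) for f in formants_hz]
    return [sum(vals) for vals in zip(*bands)]

# Run a sawtooth-ish 110 Hz test tone through it:
sr = 44100
tone = [((i * 110 / sr) % 1.0) * 2 - 1 for i in range(sr // 10)]
voiced = formant_filter(tone, sr)
print(len(voiced))  # 4410
```

Sweeping `master_scale` between, say, 0.8 and 1.3 while the tone plays gives the vowel-morphing effect the master bandpass/Hydra setup is going for.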
Yeah :p I love this vocoder! Before, I didn’t know anything about how it works, but after a couple of minutes investigating this XRNS I got it. Thank you, licence!
I also made a modification of this and noticed that the Butterworth model sounds much clearer for this purpose. Also added the Hydras to make controlling the signal followers easier.
Nice! The Hydras are a great idea. Actually I had never used them before looking at your XRNS.
Those Hydras would also be useful for scaling the frequencies. Too bad they don’t have more inputs but I guess they could be chained.
I hope the scripting has the ability to batch-add/tweak modules; it would make setting up things like this a lot easier.
Awesome! There’s the chart I was looking for:
Also might want to do the uh…fricatives? Consonants… knowledge from my cursory linguistics class is fading away…
Just wanted to say major props to the Renoise developers. The 2.5 beta and the roadmap laid out look awesome. Coming from Ableton Live, it’s really refreshing to see such an open and accessible model compared to most other major software developers.
I think Ableton Live became so popular due to their innovation with the easy-to-use grid matrix view combined with the elastic audio-warping algorithm. They became so widely adopted because they were the only game in town with those two key ingredients, and unfortunately it seems all that success has gone to their heads and made profit their main priority. The cost of entry for a usable version of Live, plus what they make you pay to extend it with Max for Live, is really disheartening. I’m new to trackers (the interface has intimidated me for a long time), but the model you guys use for Renoise and the enthusiasm from the community around it is what finally got me to try it out, and it’s very inspiring so far!