Being able to do realtime rendering of hardware synths via LineIn device input is brilliant and much appreciated, but - as with the multi-pattern automation request also posted today - Renoise’s patterns seem to get in the way… When recording external hardware (and softsynths, for that matter) I usually want to catch things like long releases, delay tails, reverb etc. In most DAWs I’d expect to make a selection longer than the length of the MIDI notes I wanted to record and then bounce all that, trimming it down later.
I want to be able to select two or more patterns on the left-hand side and right-click->Render Selection To Sample. Render to WAV is already there, so I think (hope) this would be an easy thing to implement. It's more like joining the dots between features which already exist.
OR, the Render To Sample function could detect silence and only stop recording when the input has disappeared…but I think the first solution would be more flexible. Not to mention the fact that you often need to render an entire 5-minute song’s worth of hardware synth line (which you can now slice up with the sample slice feature, WEHEY!) and it’s a pain to render it to a WAV file, find that WAV file, then load it back into Renoise…
martyfmelb - that’s what I’ve been doing (well, that or rendering to disk and drag/dropping, whichever I can be arsed with, although they’re equally ungraceful).
As problems go, this seems like a small one. It IS a small one… but it feels more like a hole in Renoise’s native behaviour that defies expectations most DAW-users would have. “Select an arbitrary chunk of project, bounce it to an audio sample”. That’s why I think a native solution would be better than a tool - especially since the capability already exists, and it seems that all that’s required is an extra right-click menu entry beneath ‘Render Selection to WAV’. Or, as Psyj suggests, an option in the ‘Render Selection to WAV’ dialogue to throw the resultant sample into an unused instrument slot.
Well said! With one of my bands, we work with a Virus TI and (due to the TI’s inability to live up to Access’ hype) often need to render off its entire ~5min bassline, or whatever, and use it with Autoseek like a conventional audiotrack stem. Pattern-merging isn’t a feasible solution for that, but connecting/unifying the ‘to WAV’ and the ‘to Sample’ features would avoid having to export from Renoise and then import again.
best solution for that is probably to Mute every unrelated track and then Render To Disk with the new option that includes audio input. That should give you the whole song’s worth of that track, without having to go through every track separately as you would if you ticked the Each Track As Separate File option. Obviously still far from ideal though.
Yes, that’s what we’ve been doing. Actually, we’ve been doing that since before Autoseek was introduced… we suffer for our art.
(But there’s always a better chance that Renoise will improve than that Access Music will improve anything, ever. Or even sell products that do what they say on the box… On the other hand, the latest TI update includes some new chorus types, or something… Whoop-de-fucking-doo!)
Having had to use Ableton Live instead of Renoise today, over this issue, I was reminded of another possible solution which Live uses: waiting ten seconds for the input signal to disappear (which is long enough for almost all delay/reverb tails) after rendering a selection of arbitrary bar-length in realtime. Live explains why it’s waiting, shows a countdown, but also gives you the option to ‘skip’ if you don’t want to wait.
Having said that, there’s still a slim possibility that somebody might want longer delay tails/release fades than 10 seconds, so Live’s solution isn’t perfect. My original suggestion (for the extra context-menu item) would probably still be the most flexible.
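For what it’s worth, the tail-capture behaviour being described here is pretty simple to sketch out. This is just a hypothetical illustration of the logic (none of these names come from the Renoise or Live APIs): keep recording past the end of the selection until the input has stayed below a silence threshold for a set number of seconds, which is effectively what Live’s countdown is doing.

```python
import math

def capture_tail(blocks, sample_rate, block_size,
                 silence_db=-60.0, tail_seconds=10.0):
    """Consume audio blocks and return the recorded blocks, stopping
    once the signal has stayed below silence_db for tail_seconds.

    blocks: iterable of lists of float samples (one list per block).
    All names here are illustrative, not any real DAW API.
    """
    # Convert the dBFS threshold to a linear amplitude.
    threshold = 10.0 ** (silence_db / 20.0)
    # How many consecutive silent blocks equal tail_seconds of silence.
    silent_blocks_needed = int(tail_seconds * sample_rate / block_size)

    recorded, silent_run = [], 0
    for block in blocks:
        recorded.append(block)
        peak = max((abs(s) for s in block), default=0.0)
        # Count consecutive silent blocks; any loud block resets the run.
        silent_run = silent_run + 1 if peak < threshold else 0
        if silent_run >= silent_blocks_needed:
            break
    return recorded
```

With a 10-second default tail this matches Live’s behaviour; making `tail_seconds` user-adjustable would cover the longer-than-10-seconds case mentioned above, though the render-selected-patterns approach is still the more flexible fix.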