Sending MIDI When Rendering

Hi,

I’ve been playing about with the audio/line in on 1.8. Great new feature, it’s opened up a whole new range of possibilities for me, but there is one thing that seems to be lacking:

When I use Render (on real time priority), it does not send any MIDI. I understand that previously this made sense: renoise wasn’t recording anything from the MIDI data, so why send it? But now that I can hear (and apply effects to!) my MIDI synth through the audio in, would it not be possible to send the MIDI data when rendering, so we get the full track recorded? Otherwise I need to use an external recording device.

I’m kinda hoping I’m missing something and this can just be enabled via an option, but if not then I’m really hoping it’s something that’s pretty easy to implement, because it would give me a fantastic all-in-one setup using renoise.

Malcolm.

The easiest thing I’ve found to do is to use the recording feature in the sample editor: record the whole song’s worth of audio from your outboard gear, then just have it render the recorded .wav along with the rest of the song. Makes it kind of a two-step process, which sucks, but on the bright side, it opens up lots of cool editing possibilities with the recorded samples.

As proposed by mr. longname, record it in the sampler of Renoise, then render the final wave.

There are hardware limitations, but there are also CPU power limitations. The closer you get to those limits, the less usable such a feature becomes. When using VST plugins intensively, you reach these limits quite quickly.

Just face it, rendering audio, even on low priority, is not possible with live feeds. Well, it is, but I guarantee 80 to 90% of your rendered audio will contain raw cuts and glitches because the render engine didn’t catch things in time. This is an option you will never see in any audio application, regardless of the machine and platform.

Yes. This has been discussed before as well. I really see no reason why we should not do this automatically. This is such a timesaver for everyone using external equipment.
A simple killer feature that not many other hosts have (if any) :)
It’s especially useful when rendering selection to sample slot.
When renoise detects a MIDI note in the selection, it would start rendering it this way.
It’s just a batch process of things you can already do manually in renoise. So why not?

Yes this would be great and really simplify the process to get a complete rendered song/part.

Oooh, never say never. I understand what you mean though. For this to work, the CPU usage would need to remain low enough for renoise to be able to stream audio from the line in in real time and still be able to write everything to disk. But this is possible: you would just need to make sure that the CPU usage of your track was quite low, or increase the latency on your sound card, which can be compensated for by sending the MIDI data early.
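As a rough, hypothetical sketch of that “send the MIDI data early” idea (the latency value, event list and helper name here are made up for illustration, not anything renoise actually exposes):

```python
# Hypothetical sketch: compensate for soundcard output latency by sending
# MIDI events early, so the externally synthesized audio arrives back at
# the line-in in time for the render.

OUTPUT_LATENCY_MS = 50.0  # assumed soundcard buffer latency (example value)

# (timestamp_ms, midi_event) pairs as the song would normally play them
midi_events = [
    (0.0,    "note_on  C-4"),
    (500.0,  "note_off C-4"),
    (500.0,  "note_on  E-4"),
    (1000.0, "note_off E-4"),
]

def compensated_schedule(events, latency_ms):
    """Shift every MIDI event earlier by the output latency, clamping at 0."""
    return [(max(0.0, t - latency_ms), msg) for t, msg in events]

for t, msg in compensated_schedule(midi_events, OUTPUT_LATENCY_MS):
    print(f"send at {t:7.1f} ms: {msg}")
```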

And it is possible to do real time line in, playback and recording. I use a live setup with a friend where we use reaktor to do real time effects and BPM synced looping from live decks+electribe, and then record the output in soundforge. So we’ve got simultaneous line in, VST style processing, playback, and recording to disk. As long as we keep the CPU usage by reaktor low (~20%) we can do this on a latency of 7ms on a crappy old P4.

What I would love though, is to be able to write tracks in renoise, and use my electribe, and auto render to disk without having to “record what I hear”. Obviously the rendering to disk would also have to happen in real time, but it would be handy.

Recording the line in as a sample and then playing it back is a good suggestion though. I haven’t tried it yet, so would this work if I was sequencing the external gear from renoise, i.e. does it send MIDI while you record to sample? (I could probably have checked this in the time it’s taken me to write this, actually.)

The sampler in Renoise records your chosen line-in signal and allows you to do this during pattern play, so why shouldn’t it record the audio signal of your MIDI device if that one is plugged into your line in?

With the right latency settings you can give the CPU a lot of breathing space, but not every ASIO card allows you to manually set a specific buffering latency in the host, and even the system panels that control the driver configuration may be limited in what they let you set.
You can achieve this with DirectSound, but above 500 ms some audio cards start to choke (some even choke at lower latencies because the audio chipset doesn’t seem to respond properly beyond a certain buffer size). So this would all have to be done using software buffering; most systems can do this, but recording live signals will then not be an option for P400 MHz systems.
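For a rough idea of the numbers involved, latency in milliseconds is just the buffer size divided by the sample rate; a small sketch (the buffer sizes are only examples):

```python
# Rough relationship between buffer size and latency:
# latency_ms = buffer_size_in_frames / sample_rate * 1000

def buffer_latency_ms(buffer_frames, sample_rate=44100):
    return buffer_frames / sample_rate * 1000.0

for frames in (128, 512, 2048, 8192, 22050):
    print(f"{frames:5d} frames -> {buffer_latency_ms(frames):7.1f} ms")

# 22050 frames at 44.1 kHz is already 500 ms, the point where
# some cards mentioned above start to choke.
```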

The better alternative would be that all MIDI-controlled channels are recorded separately to wave files and then merged afterwards.

Recording several live signals is mostly doable when working only with samples and forgetting about heavy VST and VSTi plugins.
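A rough sketch of the merge step from the “record each MIDI-controlled channel separately” idea above, assuming plain 16-bit mono wave files of equal format (the filenames are made up):

```python
import wave
import array

def mix_waves(paths, out_path):
    """Sum several 16-bit mono .wav files sample by sample, with hard clipping."""
    streams = [wave.open(p, "rb") for p in paths]
    params = streams[0].getparams()
    frames = [array.array("h", s.readframes(s.getnframes())) for s in streams]
    length = min(len(f) for f in frames)

    mixed = array.array("h", [0] * length)
    for i in range(length):
        total = sum(f[i] for f in frames)
        mixed[i] = max(-32768, min(32767, total))  # clamp to 16-bit range

    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        out.writeframes(mixed.tobytes())

    for s in streams:
        s.close()

# e.g. mix_waves(["synth_channel.wav", "drums_channel.wav"], "merged.wav")
```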