Realtime Rendering Does Not Work As Expected

Hi all!
:rolleyes:

I’m trying to test all the new features announced for the 2.7 release, such as:

  • Real-time rendering mode: render MIDI instruments and line-in devices, or hybrid VSTs

I said to myself: cool, I’ll be able to compose an XRNS song with a Line Input device feeding a vocoder,
and I’ll render the final result in realtime with the help of this new “realtime rendering” mode.

But:

(0) I compose my song, ok
(1) When the song’s finished, I open the “Render song to disk” dialog box
(2) I choose “Realtime Rendering”
(3) I click on “Start”
(4) I prepare myself to sing on my vocoded track
(5) … I can’t hear anything while the patterns scroll up and the WAV file’s being created
(6) I say “wtf?” into the microphone
(7) I stop rendering and check my soundcard parameters: everything’s right
(8) I open the WAV file and play it
(9) I hear my song, with my vocoded voice saying “wtf?”
(10) but of course, my lyrics weren’t synced at all with the song tempo
(11) because I wasn’t able to hear anything at all

It seems very hard to use the “line in” device or hybrid VSTis during rendering, especially when
nothing can be heard in the background while the song is being rendered.

Could this be fixed or improved?

Thanx for reading!

You really should use Twitter more; the whole story could have been told in one sentence: “When rendering in real time, I can’t hear the music.” ;)

I think you need to set up a specific line-out on a send channel if you want to hear the music too. The audio coming from the master channel is directed to the disk writer.

Isn’t the soundcard disabled while rendering, giving access to sample rates and bit depths that your soundcard can’t handle? That would lead me to expect to hear nothing during a render…

You just want to use a normal Line In device and record your vocal as a sample, then layer it on a normal track and render.

You’re right, I’ve never been able to write short sentences. I’m testing your method: so you add a Send device, select “Keep Source”, then route the sent sound to something other than the Master track, for example an alternative line-out device (on my card I’ve got a bunch of “high definition line out” jacks). Am I right?
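In scripting terms, I imagine it would look roughly like this (a rough Lua sketch only, based on the scripting API’s insert_device_at() and output_routing as I understand them; the track indices and the routing slot I pick are placeholders for my own setup):

  -- Route a track through a Send, then point the send track's output at a
  -- hardware line-out instead of the Master, so it stays audible while the
  -- master bus feeds the disk writer.
  local song = renoise.song()
  local track = song.tracks[1]                  -- the track to monitor
  local send_track = song.tracks[#song.tracks]  -- assuming the last track is a send track

  -- Insert a native Send device after the volume/pan slot;
  -- "Keep Source" is then selected in the device's GUI.
  track:insert_device_at("Audio/Effects/Native/#Send", 2)

  -- List the hardware outputs the soundcard exposes, then pick one of them.
  rprint(send_track.available_output_routings)
  send_track.output_routing = send_track.available_output_routings[2]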

Of course, I could also keep using an external program that captures everything going through my soundcard, as usual. However, I simply expected that Renoise 2.7 would let me drop this old but reliable method.

Thanx for reading

Yeah that should work.

But the realtime mode is mainly intended for rendering outboard synths/effects or JACK devices on Linux, not for realtime jamming. For the latter, I suggest you just record your vocal inside Renoise as a sample and use that with the autoseek option.
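If you go the sample route, the autoseek flag can also be flipped from a script; a tiny sketch, assuming the recorded vocal is the currently selected sample:

  -- Enable autoseek so the long vocal sample follows the song position
  -- during playback and rendering instead of always starting from its top.
  local sample = renoise.song().selected_sample
  sample.autoseek = true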

I make music with guitars and vocals, and lately I’ve used lots of long samples in Renoise. It works quite well if you have a reasonable amount of RAM.

Yes, I know that the “sample recorder” in the “sample edit” section can handle that easily; it can create samples synced to the track/pattern/song. When you play the Hunz track “Soon Soon”, you can guess that this demo track is a good example of that kind of feature in use. But then I don’t get the point of this new “realtime render to disk with a Line-In device” thing… Worse: you’ll find users complaining that they have to disable their Line-In devices before any rendering to disk, because it records unwanted background noises such as the barking dog in the living room, the yelling cat, the birds outside, farts, things like that… you get the picture?

Thanx for reading

In “realtime” mode the soundcard is not disabled; only the master output is not sent to the soundcard. If you want higher bit rates, or you’re worried about the neighbours or dogs, then you must use the offline mode.
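For reference, the scripting API exposes the same choice as a priority option on renoise.song():render(); a hedged sketch (I’m not sure the call is available in 2.7 itself, the option names follow the API docs from memory, and the output path is just an example):

  -- "realtime" keeps hardware and MIDI inputs running during the render;
  -- "high" is the usual fast offline mode.
  renoise.song():render(
    { priority = "realtime", sample_rate = 44100, bit_depth = 16 },
    "/tmp/realtime_render.wav",
    function() renoise.app():show_status("Render finished") end
  )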

Ever heard of MIDI devices?

Yeah, realised this on the way to the shop to get some breaky bits. Obviously it can’t stop the soundcard driver, otherwise there would be no input, rendering the feature useless.