I’m trying to test all the new features announced for the 2.7 release, such as:
Real-time rendering mode: render MIDI instruments and line-in devices, or hybrid VSTs
I said to myself: cool, I’ll be able to compose an xrns song with a line-in device feeding a vocoder,
and I’ll render the final result in realtime, with the help of this new “realtime rendering” mode.
(0) I compose my song, ok
(1) When the song’s finished I load the “Render song to disk” dialog box
(2) I choose “Realtime Rendering”
(3) I click on “Start”
(4) I prepare myself to sing on my vocoded track
(5) … I can’t hear anything while the patterns scroll by and the wav file’s being created
(6) I say “wtf?” into the microphone
(7) I stop rendering and check my soundcard parameters: everything’s fine
(8) I open the wav file and I play it
(9) I hear my song, with my vocoded voice saying “wtf?”
(10) but of course, my lyrics weren’t synced with the song tempo at all
(11) because I couldn’t hear anything during the render
It seems very hard to use the “line in” device or hybrid VSTis during rendering, especially when
nothing can be monitored while the song is being rendered.
You’re right, I’ve never been able to write short sentences. I’m testing your method. So I add a “send track” device, set it to “keep the source”, then route the sent sound to something other than the “master track”, for example an alternative line-out device (my card has a bunch of “high definition line out jacks”). Am I right?
Of course I could also keep using an external program that captures everything going through my soundcard, as usual. But I honestly expected Renoise 2.7 would let me drop that old-but-reliable method.
But the realtime mode is mainly intended for rendering outboard synths/effects or JACK devices on Linux, not for realtime jamming. For the latter, I suggest you just record your vocals inside Renoise as a sample and use that with the autoseek option.
I make music with guitars and vocals, and lately I’ve used lots of long samples in Renoise. It works quite well if you have a reasonable amount of RAM.
Yes, I know that the “sample recorder” in the “sample edit” section can handle this easily: it can create samples synced to the track/pattern/song. The Hunz demo track “Soon Soon” is a good example of that feature in use. But then I don’t see the point of this new “Realtime Render to Disk Line-In Device” thing… Worse: you’ll find users complaining that they have to disable their line-in devices before any rendering to disk, because otherwise it records unwanted background noises such as the barking dog in the living room, the yelling cat, the birds outside, farts, things like that… you get the picture?
In “realtime” mode the soundcard is not disabled; only the master output is not sent to the soundcard. If you want higher bitrates, or you’re worried about neighbours or dogs, then you must use the offline mode.