ASIO Input Channels

Yep!
I actually already made a thread about this, sorry for that…

We must all agree that the MOST important thing to be added is:
ASIO input recording.

Like: I’ve got a hardware synth, OK?! I route it into the DSP and then into Renoise, and there I can record the line-in input with effects if I want to.

For now I have to use Sound Forge and Renoise together to make this a reality:
Renoise plays the notes on the hardware and Sound Forge grabs the audio… OK, fine… but it would be much cooler if Renoise could handle that itself!

The most important thing to add…!

I can donate some $ for that specific feature!

:rolleyes:
Welcome aboard :)
And welcome also as a fellow member of the “We want to live-record” exclusive club… where all members share the common hope of seeing live-recording among the future features.
Now take a seat, serve yourself your favourite drink and chill out with us :lol:
You’ll not see it popping up in a matter of minutes :)
But you can be sure of this: the Holy Devs have been widely informed (they are ALL-seeing, mind you) of our needs and hopes for the “live-record” feature.
And unless you’re going to place some very serious amount of cash on their altar… I think you can put away your wallet at the moment ;)
Our prayers will be heard.

Oh, god, please let there be audio-in! ;)
I’ve done it before using CoreAudio on the Mac, and it ain’t hard. I think the biggest problem facing the developers is how to integrate it seamlessly with the existing functionality. I’ll take the initiative here and include some ideas I already wrote to Blackis.

“…the most cohesive solution I could come up with is to introduce another type of track (similar to a send track) in which the user can select which channel pair (or mono channel) to take input from. This choice could be made in the hardware interface box in the track DSP area.
Audio-in could also be recorded into the track in the same way as a track can be rendered, i.e. recording the audio into an instrument sample. That would make it really easy to re-use the recording in other parts of a module.”

Well, that’s embellishing it a little, but the general idea is there.

Come to think of it, a new type of track wouldn’t be needed at all: just use an ordinary track and add audio-in capability. Hmm… it can’t be that easy, I must have missed something.

Sounds good.
But I use the Sound Forge method at the moment, and that’s better when I think about it, because I always want to do some kind of standard effecting, normalization and so on to my new samples… so either way, it always ends up in Sound Forge… and if I had to first record in Renoise, save it, and then load it in SF, then… heh… yeah… it’s obvious, for me…
My method is better for the way I work.
But it could be cool to have the feature though; I’m sure some people would like it a lot!

BTW, I blew the right monitor one weekend (had a wet t-shirt contest, and boom…), so I’ve stalled music production until Nov 1st when I get my $$… hmm…
I guess I don’t have any more to say about this.

gkmotu

:huh: … uh… well…

and then…

…how it… hem… what do you… errr… why… yes, why the hell did you start this thread then?
:huh:

well… I see… yes… maybe that’s better :)

Arr. It seems that the Renoise community has varied interpretations of the concept of audio input. I might take this opportunity to say a few more things on the subject.

Firstly, my interpretation of audio-in support is the live processing of audio from external sources. That means running any input through track DSPs. I would personally use this in composition AND live situations, mainly because I want to run my hardware synths through some of my favourite effects, which are software (some of which I’ve written; I may release them later).

Recording samples for instruments is an entirely different way of interpreting it (as far as I can see) and maybe it is better to record into a more professional wave editor for that purpose. If there are any rebuttals, now should be the time to voice them.

OK, yeah, well…

I started it back when I thought recording inside Renoise was the best thing to do. But I quickly found out that I always return the sample to my sample editor, which is what it is: really good at dealing with samples.
If Renoise had a sampler with the wide possibilities, ease of use, yada yada, that I needed, then I would not use Sound Forge.

All there is to it is opening the two programs at the same time. And if I use some VSTi’s, then SF is actually almost like another plugin; well, not really, but it runs alongside to catch the line-in audio through the DSP.

That’s by far the best way to record line-in for use in Renoise, I think! :)

In fact, I don’t think I’d trade SF for an internal sampler in Renoise, and that sounds a bit like me having two opinions.
I don’t; I just realized what I want, and so forth.
So I changed my mind.
I don’t really need the sample input channels at the moment, because my way works for the way I produce.

The render-to-sample button is, however, a great thing. Now that is nice. :)

The way I understand Lawrence is that you don’t actually have to have all features in one program; well, you can’t, the app would fill 100 MB or so. Renoise is basically just a master trigger, and it does that 110% well.

I think combining programs that are the best in their fields is good.

So, at the moment, with the new release, I don’t really think I need new stuff in Renoise.

Oh well, now that we’re at it…

A full-screen mixer with assignable knobs. I KNOW it’s already there… spread out over my tracks. But it could be nice to have anyway; I think some will agree.

BTW, check out this fella making music without Renoise or Sound Forge or, in fact, any programs besides the small app that comes with Windows.
Quite original, I must say! :D

http://www.albinoblacksheep.com/flash/noises.php

I could have done it some years back when I smoked loads of strong ziggys… but now I only use Renoise. It’s way better :)

gkmotu

Your post just gave me an idea! Here’s a thought:
How about an audio-in VSTi? I’m going to write one, but if anyone wants to beat me to it…

I don’t know if it is going to work, because VSTi’s are always fed by the host. I’ve never seen a grabber where you can select the inputs of your soundcard.

:blink: :lol: Man! You can -write- your own VSTi? And you make music with Renoise? Then WHERE is the huge list of Renoise 100% compatible free plugins? :lol: :lol:
Hint: In case… I have ideas and a decent hand at pixel gfx… ;)

Audio-in VST: use NI Reaktor.

That’s partly what gave me the idea, but since I don’t have $1,000,000,000,000,000 to give to NI for their overpriced software, I think I’ll stick with doing it the long way. VSTi’s aren’t hard to write if you don’t bother with a GUI. I’m sceptical of fancy GUIs because they can clutter up the screen and imply that an instrument only sounds good if it has a nice picture to look at. But thanks anyway, Parsec, I’ll keep it in mind.

Anyway, back on topic: since VST instruments are just loadable code bundles, it is (I assume) possible to write one which interfaces with the computer’s sound hardware; it just makes it platform specific. CoreAudio is a feature-rich system, but it’s also complicated, which is why it’s going to take me a while (and I have exams at the moment which are getting in the way). Hence the call for someone to beat me to it!
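
To make that concrete, here’s a rough sketch of what the skeleton could look like against the VST 2.x SDK’s AudioEffectX base class. This is just an outline, not working code: the class name, the ‘AuIn’ ID, and the capture FIFO are all made up, and the platform-specific capture code (which runs on another thread) is left out entirely.

```cpp
// Rough skeleton of an audio-in VSTi against the VST 2.x SDK.
// "AudioInVsti", the 'AuIn' ID, and g_captureFifo are all made up.
#include <cstddef>
#include "audioeffectx.h"

// Hypothetical FIFO filled by the capture thread; a concrete lock-free
// version is sketched further down this post.
struct FifoRing { void popIntoProcess(float* out, std::size_t frames); };
extern FifoRing g_captureFifo;

class AudioInVsti : public AudioEffectX {
public:
    AudioInVsti(audioMasterCallback master)
        : AudioEffectX(master, 1, 0) {  // 1 program, 0 parameters
        setNumInputs(0);                // takes no audio from the host...
        setNumOutputs(2);               // ...feeds a stereo pair to its track
        setUniqueID(CCONST('A', 'u', 'I', 'n'));  // illustrative plugin ID
        canProcessReplacing();
        isSynth();                      // present as an instrument
    }

    // Runs on the host's audio thread: no blocking, no hardware access.
    void processReplacing(float** /*inputs*/, float** outputs,
                          VstInt32 sampleFrames) {
        // Pull previously captured samples from the shared FIFO;
        // mono capture duplicated to both output channels for simplicity.
        g_captureFifo.popIntoProcess(outputs[0], (std::size_t)sampleFrames);
        for (VstInt32 i = 0; i < sampleFrames; ++i)
            outputs[1][i] = outputs[0][i];
    }
};
```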

The beauty of having an instrument in Renoise which just feeds audio from a hardware input into a track is that I can sequence MIDI notes in Renoise which are fed into an external sound device, and then receive on the same instrument instance a stream of audio. This is especially useful for rendering tracks.

Well, it’s only an idea at the moment. I’m still reading up on Apple’s CoreAudio buffering and converting code. The problem is that the process() method has to read from a buffer in memory, because firstly the audio must be converted from integer to 32-bit floating point, and secondly no non-threadsafe calls can be made from within process() (that is, no interfacing directly with hardware). If a Renoise developer wants to tell me it can’t be done, then now would be the best time.
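
For what it’s worth, here’s the buffering scheme I have in mind, written out in modern C++ terms. The names (FifoRing and its methods) are made up, and a real version would also guard against the producer overwriting unread data:

```cpp
// Single-producer/single-consumer ring buffer: the hardware capture
// callback converts 16-bit integer samples to 32-bit float and pushes
// them in; process() pops them out without blocking or touching hardware.
#include <atomic>
#include <cstddef>
#include <cstdint>

class FifoRing {
public:
    // Producer side: called from the capture (hardware) thread only.
    void pushFromHardware(const int16_t* in, std::size_t frames) {
        std::size_t w = write_.load(std::memory_order_relaxed);
        for (std::size_t i = 0; i < frames; ++i) {
            // Integer-to-float conversion: map [-32768, 32767] to ~[-1, 1).
            buf_[w++ % kSize] = in[i] / 32768.0f;
        }
        write_.store(w, std::memory_order_release);
    }

    // Consumer side: called from process() only; never waits.
    void popIntoProcess(float* out, std::size_t frames) {
        std::size_t r = read_.load(std::memory_order_relaxed);
        const std::size_t w = write_.load(std::memory_order_acquire);
        for (std::size_t i = 0; i < frames; ++i) {
            // On underrun, emit silence instead of stalling the audio thread.
            out[i] = (r < w) ? buf_[r++ % kSize] : 0.0f;
        }
        read_.store(r, std::memory_order_release);
    }

private:
    static const std::size_t kSize = 1 << 16;  // ~1.5 s of mono @ 44.1 kHz
    float buf_[kSize] = {};
    std::atomic<std::size_t> read_{0};
    std::atomic<std::size_t> write_{0};
};
```

The acquire/release pair is what lets process() only ever see samples that were fully written, without taking a lock; the underrun branch makes the “silence instead of stalling” trade-off explicit.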

Just got an audio feed into Renoise from an external synth working for the first time! :yeah:

A demo of a VST plugin called Apulsoft Wormhole did the trick:
http://www.apulsoft.ch/

I’ve tried others like it (such as Tobybear’s MixBox) but have been unable to get them going.

Here’s what I did:
(1) I opened Wormhole up in Tobybear Minihost (which can be downloaded from http://www.tobybear.de/p_minihost.html ) and let Minihost monopolise my Creamware card’s ASIO driver, so it was receiving input from ASIO.
(2) I disconnected the routing from the ASIO Source in the Creamware routing window (effectively silencing Minihost’s output to my soundcard) and set the Wormhole inside Minihost to “Sender On”.
(3) I opened Renoise, let it use DirectSound, opened up a Wormhole there, and set it to “Receiver On”.

Note that it didn’t work when I opened the Renoise receiver Wormhole first. You’ll know they’re working when both have a solid yellow LED (which means connected) rather than a flashing yellow LED (which means trying to connect).

The result is external audio inside Renoise! Woohoo! I can see the output of my creamware synths through the Renoise scopes, and effect them with Renoise effects! :)

Unfortunately, neither “Render Selection to Sample” nor rendering a pattern or song in realtime is any use with this method (the latter presumably because Renoise doesn’t send anything through MIDI out when rendering, which is a bit of a pity, as it would have been quite useful with this workaround). So it doesn’t seem possible to render this external audio along with the rest of the track, or sample it into a sample slot.

Heh, nice work there!
Too bad the render thingy won’t send MIDI, otherwise problem solved…

NI? I guess I have to read about that then…

Doesn’t there exist some kind of record-on-the-fly plugin… with really low latency… for playback… then render selection might be enabled… I don’t know :)

Hmm… can’t be that hard to make the render engine trigger MIDI notes, can it? And merge the stuff into a stream… ??

:D

If you mean the “render in realtime” option, then unfortunately that didn’t work with this method (I don’t think Renoise is sending any MIDI out messages when it does this).

There’s another one I found called Freewire that might be worth checking out; will try it out tonight:
http://web.archive.org/web/20040216035910/…mi/effects.html

Whoa, thanks Rounser!
It’s strange that Renoise doesn’t send MIDI out messages when rendering to a sample in real time. Shouldn’t that be considered a bug?
I checked it on my version just now (sending note messages to MidiPipe) and the same thing happens.

EDIT:
I think it’s also worth asking how the audio from a VSTi is assigned to a track “automatically”. The (obscure) reason I ask is that the thought of introducing input functionality as another aspect of an instrument, which (as everyone knows) currently has three modes of operation (MIDI note out, sampler, and VSTi), prompts more discussion of the native implementation of audio input in future versions of Renoise.
So far, the possibilities are:
(1) Using a meta device for input into a track (ugly);
(2) Having a dedicated track for input (OK, but what if you want to switch between different DSP chains?);
(3) An audio-in option added to the structure of an instrument. The audio stream would switch between tracks in the same way as VSTi streams do when there are note instances in more than one track (I don’t know exactly how this is done).
The last one makes the most sense to me at the moment; a rough sketch of what I mean by (3) is below. Any more ideas?
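
Purely to illustrate (3), here’s a toy sketch of the routing rule. None of these types exist in Renoise, and the “follow the last note’s track” behaviour is just my guess at how VSTi streams get assigned:

```cpp
// Toy illustration of option (3): an instrument-level audio-in whose
// output follows the track where the instrument was last triggered,
// mirroring (by guesswork) how a VSTi's audio is assigned to a track.
#include <cstddef>

struct Instrument {
    bool audioInEnabled = false;
    int  inputChannel   = 0;    // soundcard channel (pair) to capture from
    int  lastNoteTrack  = -1;   // track of the most recent note event
};

// Called whenever a note for this instrument appears in a pattern track.
void onNoteEvent(Instrument& ins, int trackIndex) {
    ins.lastNoteTrack = trackIndex;
}

// Per-block mixing: add the captured audio into the instrument's current
// track buffer, so it runs through that track's DSP chain like a VSTi would.
void mixAudioIn(const Instrument& ins, float* const* trackBuffers,
                const float* captured, std::size_t frames) {
    if (!ins.audioInEnabled || ins.lastNoteTrack < 0) return;
    float* dest = trackBuffers[ins.lastNoteTrack];
    for (std::size_t i = 0; i < frames; ++i)
        dest[i] += captured[i];  // mono for brevity
}
```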

I like option 3 too. An “Audio In” block along the same lines as the existing “VST Instrument” and “MIDI” settings blocks, so you could (in theory) have the one instrument triggering MIDI, playing a VSTi, playing a sample, and channelling audio all at the same time.

I’d suggest that it be unaffected by any note commands or regular pattern effects, but have just a drop-down to select the ASIO or WAV source, buttons for On/Off, and perhaps “Record” and “Stop” icons to render the input to the currently selected sample slot. New pattern effect commands could be added to turn the audio channel on and off in the pattern editor, and “Render Selection to Sample” could be extended to capture the audio-in as well (assuming the selection is triggering MIDI, meaning the rendering would have to be extended to send MIDI out as it rendered).

Anyway, I’m sure the devs have some ideas.

I like it.
Are there any free slots left in the list of pattern effect commands, though?
I heard somewhere that this problem was being considered and that 0-Z codes are possible, though confusing.

Anyway, it sounds like you’ve thought this through quite a bit. I’m wondering if there should be another drop-down menu under the “On/Off” button for a default track to stream the audio into when not explicitly set by a pattern effect command.