There are “OSS” emulation “drivers” in the kernel (part of alsa)
Jack is a low latency sound server which can run on multiple backends (alsa (linux), coreaudio (os x), ffado (linux firewire devices) or even portaudio).
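To make the layering concrete, here is a minimal sketch of a JACK client in C (client and port names are made up, and this is not any particular application's code): the application only ever talks to the JACK server, while the backend jackd actually drives (alsa, coreaudio, ffado, ...) is chosen when the server is started, not by the client.

```c
/* Minimal sketch of a JACK client (client and port names are made up).
 * The client only talks to the JACK server; the backend (alsa, coreaudio,
 * ffado, ...) is chosen when jackd itself is started. */
#include <string.h>
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *out_port;

static int process(jack_nframes_t nframes, void *arg)
{
    /* just write silence; a real client would render audio here */
    float *out = jack_port_get_buffer(out_port, nframes);
    memset(out, 0, nframes * sizeof(float));
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("hello_jack", JackNullOption, NULL);
    if (!client)
        return 1;

    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    if (jack_activate(client))
        return 1;

    /* keep running; connect hello_jack:out to system:playback_* as needed */
    for (;;)
        sleep(1);
}
```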
PulseAudio is a different sound server, designed to intercept all audio (by emulating a whole bunch of different APIs - libalsa, libxine, libao - think virtual alsa device, etc.) and route it to various audio interfaces / over the network. It is the PulseAudio layer which allows things such as per-application volumes etc.
As I have mentioned before, I have no experience of PulseAudio. Direct jack on alsa is how I have always run things.
As someone else has mentioned, it’s usually the desktop environments (Gnome / KDE / XFCE?) that run all these server layers in the background. Just run a lightweight X11 window manager (e.g. openbox, i3-wm) which doesn’t start all that junk in the background and you’ll probably be better off.
As for the Renoise sh installation script: I always had to set permissions manually, because the script failed to set the execute and read flags for group and others, so I couldn’t start the program as a non-root user.
Indeed, I have also always asked myself how hard this could be to implement. After all, if even the MacOS developers have managed to do it, it can’t be that hard.
Are you sure you’re not thinking about this? Guitar Jack 2
I know iOS has had Core Audio since iOS 5, but I have not heard about jack (other than the Sonoma Wire Works physical interface) being used on iOS. However, there is something similar coming to iOS 6:
That’s the one. I haven’t seen more than a peek at this, but a friend got access to the iOS 6 SDK and noted that his app was definitely compiling against a “libjack”. Perhaps not the same one? But jack2 definitely has support for the platform…
Great thread! I’ve been running an Ubuntu/jack installation of Renoise for a couple of years now, and I also find it strange that the whole sound-server thing is still problematic. However, as you have probably pointed out already, this is mostly due to the fact that Renoise can’t be routed to PulseAudio, which seems to have emerged as the favourite of most distros quite recently.
I, too, only use jack because of Renoise; every other program I run would have no trouble switching to PA. I’ve been told that one of the reasons Renoise is without PA support is that PA is supposedly too slow with regard to latency etc., but I can’t find any indication of this being a problem in newer versions. These remarks, to me, seem somewhat ungrounded.
I’m running Renoise in ALSA mode on 12.04 LTS. Works fine on my desktop and laptop. Would be great if PA were supported soon. Every distro today uses PA by default.
I suppose it’s because Renoise is reserving a different interface of ALSA than PortAudio or GStreamer does. I’m using Debian Squeeze and, as others have mentioned, the default installation doesn’t come with PulseAudio. ALSA has supported multiple streams for a long time, but Pulse somehow succeeded in getting supported by many applications. So, as you mentioned, it is possible to have Totem (GStreamer) and Audacity (PortAudio) playing some stuff via ALSA at the same time.
Renoise, however, seems to access a lower-level interface of ALSA, where you can also set the buffer size and the periods. This interface seems to be exclusive, so ALSA’s multiple streams won’t be available again until Renoise releases the device.
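For the curious, here is a rough sketch in C of what that lower-level access looks like (the device name, sample format, rate, buffer and period values are only examples, not Renoise’s actual settings):

```c
/* Rough sketch (not Renoise's actual code): opening an ALSA PCM directly
 * on "hw:0,0" and configuring the buffer via period size and period count.
 * All device names and values here are just examples. */
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 48000;
    snd_pcm_uframes_t period_size = 256;  /* frames per period */
    unsigned int periods = 2;             /* periods per buffer */

    /* "hw:0,0" bypasses dmix/Pulse, so the device is grabbed exclusively;
     * "default" (or "plug:...", "dmix:...") would allow sharing. */
    if (snd_pcm_open(&pcm, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 2);
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, NULL);
    snd_pcm_hw_params_set_period_size_near(pcm, hw, &period_size, NULL);
    snd_pcm_hw_params_set_periods_near(pcm, hw, &periods, NULL);
    if (snd_pcm_hw_params(pcm, hw) < 0)
        return 1;

    /* ... write interleaved frames with snd_pcm_writei() ... */
    snd_pcm_close(pcm);
    return 0;
}
```

Opening “hw:0,0” like this grabs the device exclusively; opening “default” (which usually goes through dmix or Pulse) would leave it shareable.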
Interesting that you quote that link. That was actually my previous website (tapas.affenbande.org). The information contained in that post has since been absorbed into many other places. Luckily, these days it’s much easier to set up an RT system under Linux. But if you are not interested in low latencies it’s totally unnecessary anyway. The vanilla kernel with realtime priorities set up for jack is enough for most Renoise work…
FYI: The Wayback Machine has the site archived here:
BTW: Since another Linux audio user asked about it, I ran a small benchmark on the MIDI jitter produced by Renoise’s ALSA SEQ interface. While not quite up to the level of a reference implementation I wrote, which just produces a constant stream of MIDI notes (jitter of about 0.5 ms), Renoise is actually quite usable, with MIDI jitter in the 1-2 ms range (the result of a relatively short measurement)…
That tarball contains three small programs: one that produces a constant MIDI stream using the RTC (requires setting up privileges on /dev/rtc), one that produces a constant MIDI stream using a sleep()-based regimen (the one I referred to above), and one that measures the difference in samples (using jack’s timing mechanism) between two consecutive MIDI notes (which I used to measure Renoise’s jitter)…
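For anyone who wants to reproduce something similar without the tarball, here is a rough sketch of the measurement idea (not the actual tool): a JACK client that timestamps incoming note-ons in frames, using jack_last_frame_time() plus the event offset, and prints how much each inter-note interval deviates from the running average. The client name is made up.

```c
/* Rough sketch of the measurement described above (not the actual tool
 * from the tarball): count the frames between consecutive MIDI note-ons
 * and print the deviation from the running average interval. */
#include <stdio.h>
#include <unistd.h>
#include <jack/jack.h>
#include <jack/midiport.h>

static jack_client_t *client;
static jack_port_t *in_port;
static jack_nframes_t last_note;
static double avg_interval;
static long count;

static int process(jack_nframes_t nframes, void *arg)
{
    void *buf = jack_port_get_buffer(in_port, nframes);
    jack_nframes_t cycle_start = jack_last_frame_time(client);

    for (jack_nframes_t i = 0; i < jack_midi_get_event_count(buf); ++i) {
        jack_midi_event_t ev;
        if (jack_midi_event_get(&ev, buf, i) != 0)
            continue;
        if (ev.size < 1 || (ev.buffer[0] & 0xf0) != 0x90)
            continue;                           /* only note-on events */

        jack_nframes_t now = cycle_start + ev.time;
        if (count > 0) {
            double interval = (double)(now - last_note);
            avg_interval += (interval - avg_interval) / count;
            double deviation_ms = (interval - avg_interval)
                                  * 1000.0 / jack_get_sample_rate(client);
            /* printf in the process callback is not realtime-safe,
             * but it is good enough for a rough measurement */
            printf("interval %.1f frames, deviation %+.3f ms\n",
                   interval, deviation_ms);
        }
        last_note = now;
        count++;
    }
    return 0;
}

int main(void)
{
    client = jack_client_open("midi_jitter", JackNullOption, NULL);
    if (!client)
        return 1;
    in_port = jack_port_register(client, "in", JACK_DEFAULT_MIDI_TYPE,
                                 JackPortIsInput, 0);
    jack_set_process_callback(client, process, NULL);
    if (jack_activate(client))
        return 1;
    /* connect the MIDI source (e.g. Renoise) to midi_jitter:in and let it run */
    for (;;)
        sleep(1);
}
```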
I figured this might be of interest to other inquiring minds that wanna know
BTW: One more note about Renoise’s use of the ALSA PCM (audio, not MIDI) interface. It makes the same mistake that many other applications (especially closed-source ones - they never seem to get it right ;D) make: the choice of which ALSA PCM device to use by default.
The default should ALWAYS be the PCM called “default”, not “hw:0,0” or any other hardcoded string. If you want to offer the user a choice of which ALSA PCM device to use, it is perfectly fine to enumerate the existing devices somehow (to offer some common choices like “hw:0,0”, “hw:1,0”, etc.). But ALWAYS also add a free-form text field which allows the user to select a PCM device that your enumeration mechanism might have failed to catch. It is, for example, always possible to put “plug:” in front of an ALSA PCM device name to get automatic sample rate conversion, etc. And there are other cases where a user might form a desired PCM device name that would not occur to you at application-writing time, or which will DEFINITELY be missed by your device enumeration scheme.
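As a rough illustration of that suggestion (the device names here are only examples, and this is not Renoise’s code): the ALSA “hint” API can provide the list for a drop-down, while whatever string the user types is passed straight to snd_pcm_open().

```c
/* Rough sketch (not Renoise's code): list the PCMs ALSA knows about via
 * the hint API, then open whatever device string the user supplied. */
#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>

static void list_pcm_devices(void)
{
    void **hints;
    if (snd_device_name_hint(-1, "pcm", &hints) < 0)
        return;
    for (void **h = hints; *h; ++h) {
        char *name = snd_device_name_get_hint(*h, "NAME");
        char *desc = snd_device_name_get_hint(*h, "DESC");
        if (name)
            printf("%s\t%s\n", name, desc ? desc : "");
        free(name);
        free(desc);
    }
    snd_device_name_free_hint(hints);
}

int main(int argc, char **argv)
{
    list_pcm_devices();

    /* The free-form part: whatever the user typed goes straight to ALSA,
     * so names missed by the enumeration above ("plug:...", "pulse", ...)
     * still work. Falls back to "default". */
    const char *dev = argc > 1 ? argv[1] : "default";
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, dev, SND_PCM_STREAM_PLAYBACK, 0) < 0) {
        fprintf(stderr, "cannot open PCM '%s'\n", dev);
        return 1;
    }
    printf("opened PCM '%s'\n", dev);
    snd_pcm_close(pcm);
    return 0;
}
```

With something like this, a user can still type e.g. “plug:hw:1,0” or “pulse” even if the drop-down never listed them.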
NOTE: If you use “default” as the default, Renoise will automatically work with PulseAudio (how well it works is a different question ;D)