From OS X to Ubuntu

They are the same from the perspective of the Renoise end user.

Renoise uses ALSA in an attempt to get good performance, but every other app on Ubuntu doesn’t do this out of the box. This isn’t obvious to an end user.

Let’s go back to my platform of origin, OSX. JACK, from an OSX user’s POV, is ReWire. Everything else is CORE AUDIO. On OSX, there aren’t half a dozen CORE AUDIO systems. CORE AUDIO is designed to handle all audio needs. No one tries to write a new CORE AUDIO, fork it, put a CORE AUDIO on top of a CORE AUDIO routed through a CORE AUDIO… Anyone trying this stuff falls under fringe, unsupported user. I’ll get back to this point later.

In contrast, here’s what happens when a user “plays an MP3 using GStreamer”:

“The file source reads an MP3 file from a computer’s hard-drive and sends it to the MP3 decoder. The decoder decodes the file data and converts it into PCM samples which then pass to the ALSA sound-driver. The ALSA sound-driver sends the PCM sound samples to the computer’s speakers.” (from Wikipedia)
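That pipeline can be spelled out on the command line with gst-launch. This is a sketch only: the filename is a placeholder, and the exact element names depend on the GStreamer version and which decoder plugins are installed (here the 0.10-era `mad` MP3 decoder from gst-plugins-ugly):

```
# read an MP3, decode it to PCM, convert, and hand it straight to ALSA
gst-launch filesrc location=song.mp3 ! mad ! audioconvert ! alsasink
```

On a stock Ubuntu desktop the equivalent happens behind the scenes, except the sink is the Pulse one rather than alsasink.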

Sound familiar? Yes, sounds like Pulse Audio.

From a Renoise user’s perspective, after having been forced to read a bunch of bullshit I don’t care about: instead of PulseAudio blocking ALSA, it would be GStreamer blocking ALSA, but on an Ubuntu system GStreamer is more or less routed through PulseAudio, so double the pleasure, double the dumb. A PulseAudio-configured OS tries to re-route all sound streams through itself. This adds a layer of duplication and latency (yo dawg, I heard you like sound servers, so I put a sound server in your sound server…), but it is done to avoid confusing messages to the end user.

Renoise, on Ubuntu, is the first (and only) app that caused me problems. Renoise opens a Pandora’s box of what the fuck. The resulting behaviour (blocked sound, weird errors, …) makes no sense. If I open Audacity and go to its settings, it also says it’s playing through ALSA (using PortAudio), but Audacity, allegedly using ALSA, has no problem running at the same time as a YouTube video in Chrome.

TODO: Someone explain the above paragraph to me. Both use ALSA? Renoise really uses ALSA, but Audacity is being routed into ALSA through libalsa’s Pulse plugin?
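One way to test that theory: PulseAudio can be told to step aside and release the ALSA devices for the duration of one program. A sketch using the stock pasuspender tool that ships with PulseAudio (the `renoise` binary name is assumed; use whatever your launcher is called):

```
# suspend PulseAudio's hold on the ALSA hardware while renoise runs,
# so renoise can open the device directly; Pulse resumes on exit
pasuspender -- renoise
```

If Renoise works under pasuspender but not otherwise, the conflict really is Pulse sitting on the hardware device.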

If we compare other similar aspects of Linux, we have clear winners and clear fringe. For example: the Hurd kernel vs the Linux kernel, Wayland vs X11. For some reason Linux sound is a clusterfuck of anti-cooperation trying to cooperate, and no “sound server” wants to admit it’s over for their project and die.

In conclusion, they are “all the same” in the sense that I don’t give a shit about any of them, Renoise should just work on Ubuntu like it does on OSX and Windows.

Yes, I know this is “wrong” but that’s the point of this thread, natch? Renoise: From OSX to Ubuntu.

Good times.

At this point in time I think it’s important to point out that I’m not a Linux newbie. I’ve been using Linux since 1997. Several distros, several jobs, many reasons.

The difference is that I never used it as my default desktop, full time, until last month. I always had a fallback OSX or Windows computer that was my “main machine” for “getting work done”.

Now I’m trying something new for three years.

Linux is the new veganism?

Audacity is probably playing to some virtual ALSA device which is routed to Pulse. Renoise apparently ignores those virtual devices for a reason or two.
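For the curious, that virtual device is usually the alsa-plugins Pulse plugin, selected by a stock config that distros ship. A sketch of the relevant part of /etc/asound.conf (or a per-user ~/.asoundrc); exact contents vary by distro:

```
# make the ALSA "default" device a bridge into PulseAudio,
# so any app that opens plain "default" is silently routed to Pulse
pcm.!default {
    type pulse
}
ctl.!default {
    type pulse
}
```

Apps that open `default` get mixed by Pulse; apps that insist on `hw:0,0` (raw hardware) bypass this entirely, which is presumably what Renoise does.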

Pssst… I heard OSX borrowed several elements from various Linux/FreeBSD distributions… Why can’t the Linux community steal Core Audio and use that as the main kernel audio component? I mean, how hard can it be? We only need a driver component and a kernel component to control the driver, nothing much else besides the application controlling it. PulseAudio would then be something like what the mixer is for Windows, but the Windows mixer doesn’t interfere with the audio drivers or block applications’ access to them; that is where PulseAudio does it the wrong way. GStreamer ditto (a sort of CopperLan idea: routing through networks). The purposes these tools are built for are okay; it is just the place where they are put that is incorrect.

Ok, Linux audio. This is my understanding:

  • The “drivers” in the kernel are part of alsa
  • There are “OSS” emulation “drivers” in the kernel (part of alsa)
  • Jack is a low latency sound server which can run on multiple backends (alsa (linux), coreaudio (os x), ffado (linux firewire devices) or even portaudio).
  • Pulseaudio is a different sound server which is designed to intercept all audio (via emulating a whole bunch of different APIs: libalsa, libxine, libao; think virtual alsa device, etc.) and route it to various audio interfaces / over the network. It is the pulseaudio layer which allows things such as per-application volumes etc. (Portaudio, by contrast, is just a cross-platform I/O library that apps like Audacity link against, not a server.)
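To see what the kernel-side ALSA drivers actually expose, independent of any sound server, the proc interface can be inspected. A sketch (requires the ALSA drivers to be loaded; aplay comes from the alsa-utils package):

```
cat /proc/asound/cards      # sound cards the kernel drivers registered
aplay -l                    # PCM playback devices on each card
```

Anything listed here exists below jack and Pulse; the servers are just clients fighting over these devices.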

As I have mentioned before, I have no experience of pulseaudio. Direct jack on alsa is always how I have run things.

As someone else has mentioned, it’s usually the desktop environments (Gnome / KDE / XFCE?) that run all these server layers in the background. Just run a lightweight X11 window manager (e.g. openbox, i3-wm) which doesn’t start all the junk in the background and you’ll probably be better off.

As for the Renoise .sh installation script: I always had to set permissions manually, because the script failed to set the execute and read flags for group and others, so I couldn’t start the program as non-root.

Indeed, I also always asked myself how hard this could be to implement. After all, if even MacOS developers have managed to do it, it can’t be that hard ;)

You probably have an uncommon umask.
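A quick way to see what that means: the umask decides which permission bits newly created files lose. This sketch shows why files written under a restrictive 077 umask end up unreadable to group and others, while the common 022 default does not:

```shell
umask 077
touch restricted.txt            # created as mode 600: group/others get nothing
umask 022                       # the common default
touch readable.txt              # created as mode 644: group/others can read
stat -c '%a %n' restricted.txt readable.txt
```

If the installer ran under a 077 umask, a one-off `chmod -R a+rX` on the install directory fixes the files already on disk.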

Fun fact: Apple is using jack2 on iOS now. It’s part of the “inter-app audio” feature coming in iOS 6.

I don’t understand, IAC has been in OSX since at least 2007.

Jack is GPL (server) and LGPL (libs). I’m not sure Apple is cool with that license?

More details please.

Are you sure you’re not thinking about this?
Guitar Jack 2

I know iOS has had Core Audio since iOS 5, but I have not heard about jack (other than the Sonoma Wire Works physical interface) being used in iOS. However, there is something similar coming in iOS 6:

musicradar - iOS 6 inter-app audio

Also, there is a project called Audiobus that will stream audio between apps locally and on separate iOS devices:

audiobus tumblr

I am super excited about both of these projects.

That’s the one. I haven’t seen more than a peek at this, but a friend got access to the iOS 6 SDK and noted that his app was definitely compiling against a “libjack”. Perhaps not the same one? But jack2 definitely has support for the platform…

https://github.com/jackaudio/jack2/tree/master/macosx/iphone

Also, this appears to just be MIDI, or does it do audio as well?

Audio as well.

Actually, I don’t remember.

Negative, I installed it on the same partition as system which is ext4 with default settings.

Great thread! I’ve been running an Ubuntu/jack installation of Renoise for a couple of years now, and I also find it strange that the whole sound-server thing is still problematic. However, as you have probably pointed out already, this is mostly due to the fact that Renoise can’t be routed to Pulseaudio, which seems to have emerged as the favourite for most distros quite recently.

I too only use jack because of Renoise; every other program I run would have no trouble switching to PA. I’ve been told that one of the reasons Renoise is without PA support is that PA is supposedly too slow with regard to latency etc., but I can’t find any indication of this being a problem in newer versions. These remarks, to me, seem somewhat ungrounded.

I’m running Renoise in ALSA mode on 12.04 LTS. Works fine on my desktop and laptop. Would be great if PA were supported soon. Every distro today uses PA as default.

I suppose it’s because Renoise reserves a different interface of ALSA than Portaudio or Gstreamer does. I’m using Debian Squeeze, and as others have mentioned, the default installation doesn’t come with PulseAudio. ALSA has supported multiple input streams for a long time, but Pulse somehow succeeded in being supported by many applications. So, as you mentioned, it is possible to have Totem (gstreamer) and Audacity (portaudio) playing some stuff via ALSA at the same time.

Renoise, however, seems to access some lower-level interface of ALSA, where you can also set the buffer size and the periods. This interface seems to be exclusive, so the multiple input streams of ALSA won’t be available until Renoise releases ALSA again.
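That difference can be reproduced with plain aplay from alsa-utils. A sketch; the card/device numbers and the test file are placeholders, and the busy behaviour depends on what else currently holds the device:

```
# open the raw hardware device: exclusive access; fails with
# "Device or resource busy" if another program (e.g. Renoise in
# ALSA mode) already holds it
aplay -D hw:0,0 test.wav

# open the "default" device: goes through the dmix/Pulse plugin
# layer, so several programs can play at once
aplay -D default test.wav
```

The exclusivity isn’t a bug; `hw:` is where you get to set buffer sizes and periods directly, which is exactly why a low-latency app wants it.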

Interesting that you quote that link. That was actually my previous website (tapas.affenbande.org). The information contained in that post has since been absorbed into many other places :D Luckily, these days it’s much easier to set up an RT system under Linux. But if you are not interested in low latencies, it’s totally unnecessary anyway. The vanilla kernel with realtime priorities set up for jack is enough for most Renoise work…
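For reference, “realtime priorities set up for jack” mostly boils down to granting the audio group realtime privileges via PAM limits. A sketch of an /etc/security/limits.d/audio.conf as shipped by many distros (group name and values vary; takes effect on next login):

```
# allow members of the "audio" group to request realtime scheduling
# and to lock memory, which jack needs for low-latency operation
@audio   -   rtprio     95
@audio   -   memlock    unlimited
```

Without this, jack falls back to non-realtime scheduling and xruns become much more likely at small buffer sizes.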

FYI: The Wayback Machine has the site archived here:

http://web.archive.org/web/20080225092828/http://tapas.affenbande.org/wordpress/?page_id=40

Maybe this can be pinned?

BTW: Since another Linux audio user asked about it, I ran a small benchmark on the MIDI jitter produced by Renoise’s ALSA SEQ interface. While not quite up to a reference implementation I wrote that just produces a constant stream of MIDI notes (jitter of about 0.5 ms), Renoise is actually quite usable, with MIDI jitter in the 1 to 2 ms range (result of a relatively short measurement)…

http://fps.io/~tapas/midi_timer-1.tgz

That tarball contains three small programs: one that produces a constant MIDI stream using the RTC (requires setting up privileges on /dev/rtc), one that produces a constant MIDI stream using a sleep()-based regimen (which I referred to above), and one that measures the difference in samples (using jack’s timing mechanism) between two consecutive MIDI notes (which I used to measure the jitter of Renoise)…
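The measurement itself is simple arithmetic: given the onset time of each note in samples, the jitter is the spread of the inter-onset intervals. A minimal sketch with made-up onsets at 48 kHz (the numbers are illustrative, not from the benchmark above):

```shell
# five note onsets in samples; nominal spacing is 48000 (one note per second)
printf '%s\n' 0 48010 95980 144030 192000 > onsets.txt

# peak-to-peak jitter of the inter-onset intervals, converted to ms
awk 'NR > 1 { d = $1 - prev
              if (NR == 2 || d < min) min = d
              if (d > max) max = d }
     { prev = $1 }
     END { printf "jitter: %.2f ms\n", (max - min) / 48.0 }' onsets.txt
# prints: jitter: 1.67 ms
```

The intervals here are 48010, 47970, 48050, 47970 samples, so the spread is 80 samples, i.e. about 1.7 ms at 48 kHz, which is the same ballpark as the Renoise measurement quoted above.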

I figured this might be of interest to other inquiring minds that wanna know :D