Right now Renoise uses a fairly unorthodox way of enumerating and opening ALSA devices. Instead of enumerating so-called PCMs, it enumerates hardware devices.
PCMs, or device handles, can be defined by the user in ~/.asoundrc or by applications. These pseudo devices are used for a variety of things like advanced device/speaker mapping, software mixing (dmix), etc. More importantly, it’s also the mechanism that ALSA uses to provide a default audio device. If an application simply opens the device named “default”, the audio device the user probably wants will be opened on a correctly configured system.
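For illustration, here is a minimal, hypothetical ~/.asoundrc that redefines the “default” PCM (routing it to card 1 is just an example):

```
# Hypothetical example: make card 1 the system-wide default PCM,
# going through the "plug" plugin for automatic format conversion.
pcm.!default {
    type plug
    slave.pcm "hw:1,0"
}

# Point the default control (mixer) interface at the same card.
ctl.!default {
    type hw
    card 1
}
```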
Limiting the user to hardware devices only also makes life unnecessarily hard for users who run distributions that use a sound server, such as PulseAudio, by default (which is most of them). Sound servers generally offer completely transparent ALSA support for applications by providing a PCM called e.g. “pulse”, which is also usually the same thing as “default”.
Doing what virtually every other ALSA application does, namely using the “default” device handle by default, would solve most of these problems.
I would suggest one of the following:
A )
Use snd_device_name_hint() to enumerate devices instead of snd_card_next(). This is the difference between running aplay -L and aplay -l. See the function pcm_list in aplay.c in alsa-utils for an example of how to implement it easily (a rough sketch follows after option B below). Use the index of the default device as the default. Or,
B )
Let the user optionally specify a free-text device name in the GUI (or even just the config file) and pass it on verbatim to snd_pcm_open() as the device handle. Set it to “default” as standard, so the system default audio device is used instead of whatever happens to be hw:0,0. Sketches of both options are below.
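As a rough sketch of option A, modeled on pcm_list() in aplay.c (error handling and the actual UI integration are omitted; the printing is only for illustration):

```c
#include <stdio.h>
#include <string.h>
#include <alsa/asoundlib.h>

int main(void)
{
    void **hints;

    /* -1 = all cards; "pcm" = the PCM interface. Equivalent to aplay -L. */
    if (snd_device_name_hint(-1, "pcm", &hints) < 0)
        return 1;

    for (void **n = hints; *n != NULL; n++) {
        char *name = snd_device_name_get_hint(*n, "NAME");
        char *desc = snd_device_name_get_hint(*n, "DESC");
        char *ioid = snd_device_name_get_hint(*n, "IOID");

        /* IOID is NULL for duplex PCMs; skip capture-only entries. */
        if (name && (ioid == NULL || strcmp(ioid, "Output") == 0))
            printf("%s\n    %s\n", name, desc ? desc : "");

        free(name);
        free(desc);
        free(ioid);
    }
    snd_device_name_hint_free(hints);
    return 0;
}
```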
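And a minimal sketch of option B, where open_pcm and device_name are hypothetical names; the point is only that the string from the GUI/config is handed to snd_pcm_open() untouched:

```c
#include <alsa/asoundlib.h>

/* Hypothetical helper: device_name would come straight from the GUI
   or config file; an empty value falls back to "default". */
static int open_pcm(const char *device_name, snd_pcm_t **handle)
{
    if (device_name == NULL || *device_name == '\0')
        device_name = "default";

    /* Passed verbatim: ALSA resolves "default", "pulse", "hw:0,0",
       or any PCM the user has defined in ~/.asoundrc. */
    return snd_pcm_open(handle, device_name, SND_PCM_STREAM_PLAYBACK, 0);
}
```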
Hi, I think you’d have to factor the real-time nature of programs like Renoise into the equation.
I’m mostly using Jack for making music on Linux, but when driving pure ALSA, Renoise kind of does what Jack also does - grabbing the hardware device for privileged operation at low latencies. If you used a PulseAudio or ALSA dmix device, you’d have to cope with high latencies or dropouts. Most music users do it the other way round: they drive their system with low-latency Jack grabbing the hardware, and plug ALSA or PulseAudio bridges into that system for sharing sound with “normal” programs.
The only case I’d see where it’d be useful to drive through a “soft” device like pulse or dmix would be to plainly listen to some songs in Renoise while doing other stuff, when latency doesn’t matter. Making music with pulse’s resampling, added latency and lack of real-time stability wouldn’t make sense, I guess. Maybe that’s what you want to do?
What maybe very few people might like to do would be to define an asoundrc for devices with silly ports, or for merging hard-synced sound cards into one. But I guess this can be done better with Jack anyway.
It doesn’t really matter whether JACK is better or not. Renoise has ALSA support, but it’s severely handicapped by a weird implementation. It’s a bug.
I personally don’t really care that much about 50-100 ms latency for Renoise, especially when using it as a pure soft synth. Literally the only effect the latency has for me is the time it takes from when I press a button to when Renoise starts playing a sound or the song. It doesn’t affect anything else at all. The playback is 100% consistent and there is 0 difference in the end result between using ALSA and JACK. Going through the additional hassle to set up a JACK server which wants exclusive access to my sound card is not worth the benefits for me, at all.
So if you like JACK, good for you, but please don’t derail posts about a bug in the ALSA code with an irrelevant discussion about JACK vs. ALSA.
This is a problem that could literally be fixed by the devs in a couple of hours, at most. Hell, I’d do it myself if I had access to the source code.