I recently made some soundfield mic field recordings of the sea and would like to use these recordings in Renoise, but that is not possible at the moment.
I don’t know how difficult this is to implement, but a SoundField B-format .wav consists of a four-channel signal encoded in one file. “Three of the signals, the X, Y and Z channels, describe the space around the microphone in the X (Front/Back), Y (Left/Right) and Z (Up/Down) dimensions, and are equivalent to recordings made with three figure-of-eight microphones at right angles to each other. The fourth channel, known as W, is equivalent to a recording made with an omnidirectional microphone, and provides a reference for the three other channels.” (quoted from: http://www.soundfield.com/soundfield/soundfield.php )
More info + diagram pics can be found here:
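Since Renoise can’t read the four-channel file directly, one workaround is to decode the B-format to stereo yourself before importing. Below is a minimal sketch in Python/NumPy, assuming the classic FuMa convention the SoundField quote above describes (channels W, X, Y, Z, with W carrying a 1/√2 gain factor): a pair of virtual cardioid microphones aimed left and right. The function name and azimuth parameter are my own illustration, not part of any particular tool.

```python
import numpy as np

def bformat_to_stereo(w, x, y, azimuth_deg=45.0):
    """Decode horizontal first-order B-format (FuMa W, X, Y) to stereo
    using two virtual cardioid microphones at +/- azimuth_deg.

    w, x, y: equal-length 1-D arrays; the Z channel is not needed for a
    flat (horizontal-only) stereo decode.
    """
    theta = np.deg2rad(azimuth_deg)
    # Virtual cardioid aimed at angle t: 0.5 * (sqrt(2)*W + X*cos(t) + Y*sin(t))
    # The sqrt(2) undoes the -3 dB scaling FuMa applies to the W channel.
    left = 0.5 * (np.sqrt(2) * w + x * np.cos(theta) + y * np.sin(theta))
    right = 0.5 * (np.sqrt(2) * w + x * np.cos(-theta) + y * np.sin(-theta))
    return left, right
```

As a sanity check, a plane wave arriving from +45° (front-left, with Y = left/right as in the quote) should come out noticeably louder in the left virtual mic than in the right one.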
Are you looking to preserve the ambisonic properties, or just get a usable stereo recording out of the ambisonic source file?
Both would be cool to have… but I guess it is rather specific.
Right now I wanted them for stereo use, and I have used another program to convert them.
I recently started a field recording project using ambisonics. I recorded cyclist demonstrations (for example at a Critical Mass event), and I’m also planning two events with ringing cyclist groups in a more controlled environment with more sophisticated microphone placement. The goal is to make a sound installation with four speakers in a room, and I want to arrange and distribute my sounds across the speakers in some interesting ways.
I came across this old thread because I wonder how you treat ambisonics in post-production. I’m experimenting with the ICST Ambisonics externals in Max/MSP and the Envelop for Live plugins in Ableton.
They work pretty well, and I can place the sound in different positions. I listened to the results on headphones, encoded to binaural.
But: how do you handle things like cutting, placing, fading and layering of the samples, given that Renoise and Ableton cannot read multichannel .wav files? Do you use another DAW like Reaper?
I also do not understand what happens when I split a B-format (AmbiX, in my case) file into its four mono .wav files (for example with Wave Agent from Sound Devices). I don’t understand what I am hearing then, or whether I can use the files in a group track. I think when you split an A-format file, you just hear the raw recordings of the four capsules, while B-format gives you the derived W, X, Y and Z components.
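For what it’s worth, AmbiX stores the components in ACN channel order (W, Y, Z, X) with SN3D normalization, so the four split files are the omni W signal plus the three figure-of-eight components, not the raw capsule feeds. If you want to do the split yourself rather than use Wave Agent, here is a rough sketch using only Python’s stdlib `wave` module. Caveat: it only handles integer PCM, and many ambisonic files are 32-bit float, which `wave` cannot read (a library like soundfile covers those). The file naming is just illustrative.

```python
import wave

def split_channels(path, out_prefix="split"):
    """Split an interleaved multichannel PCM .wav into one mono .wav per
    channel (e.g. a 4-channel AmbiX file into its ACN channels 0..3)."""
    with wave.open(path, "rb") as src:
        n_ch = src.getnchannels()
        width = src.getsampwidth()   # bytes per sample
        rate = src.getframerate()
        frames = src.readframes(src.getnframes())

    outputs = []
    for ch in range(n_ch):
        # Take every n_ch-th sample, starting at this channel's offset.
        mono = b"".join(
            frames[i:i + width]
            for i in range(ch * width, len(frames), n_ch * width)
        )
        name = f"{out_prefix}_ch{ch}.wav"
        with wave.open(name, "wb") as dst:
            dst.setnchannels(1)
            dst.setsampwidth(width)
            dst.setframerate(rate)
            dst.writeframes(mono)
        outputs.append(name)
    return outputs
```

The resulting mono files can then be dropped onto separate tracks in a group, keeping the channel order in mind when routing or decoding later.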
I haven’t had the time to dive in deeply yet (but I will), so I thought I’d make this post to see how people do this kind of thing, because the handling of these recordings is totally different.
(p.s.: I was using an Ambeo microphone from Sennheiser with the Zoom F8)