Test Mix and Approach To XRNI building

Feel free to move this thread to its appropriate place;
I’m only posting it here since it’s 3b4 related.

This is a test mix… I’m adjusting some 2.8 mixing methods to 3b4
as well as testing out what is appropriate to fit inside an xrni.

The download is 1.54 MB: “upload_bitkits.xrni”

The drumkit mix consists of a Kick, Tom, Snare, and Open and Closed Hi-Hats,
picked from a free full kit of other items, courtesy of Icebreaker Audio.
“Bitkits” link : http://icebreaker-audio.com/freeware/battery-kits/

Please read before loading and allowing audio output to speakers or headphones.

  1. load “upload_bitkits.xrni”

  2. change tempo to 90…

  3. …the tempo can of course be changed, but this was mixed at 90 BPM…

  4. …I’m guessing if the tempo changes, I will have to change the gain values as well

  5. press C-0 on your keyboard…

  6. …it’s set to loop and hold, so there’s no need to program it in the pattern editor

  7. I did not add a maximizer; the master output is fairly low, at about -10 dB true peak…

  8. …add a maximizer if you like and boost to your liking.

  9. Are the Sub-Bass, Bass/Medium, and High-Medium to High bands balanced?
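Since step 7 leaves about 10 dB of headroom before 0 dBTP, here is a small sketch of the dB-to-gain arithmetic behind "add a maximizer and boost to your liking" (the helper names are mine, not from the thread):

```python
import math

def db_to_gain(db):
    """Convert a dB change to a linear amplitude factor."""
    return 10 ** (db / 20)

def gain_to_db(gain):
    """Convert a linear amplitude factor back to dB."""
    return 20 * math.log10(gain)

# The master peaks around -10 dB true peak, so bringing it up to 0 dBTP
# means roughly +10 dB of boost, i.e. about a 3.16x amplitude increase.
print(round(db_to_gain(10), 2))  # 3.16
```

This is just the standard 20*log10 amplitude convention; a maximizer also applies limiting, which this arithmetic ignores.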

…How did you get it to play random samples from a pool like that? That’s awesome, the beat’s a little different every time!

edit: I also didn’t know you could do sends within an xrni.

Set the “Overlap” to Random, located at the Keyzones (lower right).

There have been suggestions to expand the Overlap. At the moment, you’d have to split
an xrni, making two or more xrni’s out of what was originally just one, in order to have the Overlap
play in more than one intended mode. Basically, the Overlap is global per xrni, which I suppose
is fine since its typical use is to avoid that “machine gun” effect.
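As a hypothetical illustration (not the Renoise API), here is roughly what the Overlap modes do when a note triggers and several samples share the same keyzone; the pool and sample names are made up:

```python
import random

def overlap_random(pool, rng=random):
    """'Random': any sample from the pool may play on each hit."""
    return rng.choice(pool)

def overlap_cycle(pool):
    """'Cycle': step through the pool in order, wrapping around."""
    i = 0
    while True:
        yield pool[i]
        i = (i + 1) % len(pool)

snares = ["snare_a", "snare_b", "snare_c"]
cycled = overlap_cycle(snares)
print([next(cycled) for _ in range(5)])           # a, b, c, a, b
print([overlap_random(snares) for _ in range(5)])  # varies on every run
```

Either mode avoids the machine-gun effect of retriggering one identical sample; Random is what makes the beat slightly different every time.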

The sends/multiband-sends within an xrni are great; they make a whole world of difference
when mixing inside an xrni. Most of the time I mix using multiband-sends, so I can see/hear
the loudness levels per band and mix accordingly. It’s more compact now compared to 2.8.

Thanks for the reply; it gets the ball rolling from a non-me perspective.

The main goal in this thread is to get some dialogue going about typical and non-typical
uses of an xrni. The upload in the first post is part of a series of steps towards something
I can play live with samples only. I still have to figure out whether effects should be live
or rendered, and whether to just keep a pool of samples and phrases.

Is this normal behaviour:

If I load your drumkit to a Renoise 3.0 song,

I can only play the same note on the keyboard on every second press. First one is note-on, second one is note-off…?

Yes; if you want a playback option other than that…

  1. go to Sampler

  2. on the top right, you should see an option for “hold”

  3. disable that

  4. you may also want to fiddle with the loop option

  5. in Sampler, turn on Phrase Editor…

  6. …bottom left, between the LPB and Key Tracking options is where you can disable “loop”

  7. you may also change the phrase note

  8. on top of the Phrase Editor Keyboard, move the phrase block to your desired note

  9. C-4 for example, and then right click on the C-4 keyboard…

  10. …so both the Base Note and phrase block matches

Just a quick note on the approach to xrni building.

I was trying to figure out whether to sample single shots, or render in phrases and then slice them up.

These two options are the most prominent in the sampling world, at least from what I see on Rekkerd.org.

There’s a high contrast between these two workflows.

The reason I prefer to render in phrases is that tempo and rhythm are the context I put
everything else in.

Everything else is a consequence of that initial groundwork or checkpoint.

This is in light of trying to figure out what I think would be best for a pseudo-live show
or a controlled random song render.

Maybe if I were selling or giving away free samples, then a single-shot workflow might be better.

Hmm - you are considering making sample sets, 00.1? That would def. be interesting to hear about (I personally find your approach to composing in Renoise quite unique and inspiring).

I hardly ever render anything. I like to be able to go back to the source at any given moment and tweak things. But I can see the point of doing so :)

Thanks for the compliment in regards to approaches…

“…the longer we can say its not finished, the possibility of something great happening exists.” - Hans Zimmer

Full 36 minute interview by DP/30 on youtube: DP/30: Sherlock Holmes, composer Hans Zimmer

Yes, the thought of making sample sets has crossed my mind. I don’t have a mic or recording
equipment, but I could call a friend or schedule a studio appointment if it boils down to that.

I don’t think I’d be able to compete with the big guns but if I had the opportunity,
I’d probably sample specific foley type sounds and experimental instruments.

As far as the render vs. source-tweak question goes… currently I’m experimenting with a two-XRNS system.

XRNS 1 would contain all sources, both vst and multi-sample instruments.

XRNS 2 is where I load all the renders.

Here’s a typical name scheme…

xrns_030s_72done920625_01 (the word “done” is actually d#1)
xrns_030s_72done920625_02 (tempo is pitched 5 semitones down from d#1 72.920625)
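The "5 semitones down" part of the naming scheme follows the usual equal-temperament rule: repitching a rendered phrase by n semitones scales its tempo by 2**(n/12). A small sketch, with a made-up 120 BPM base tempo for illustration (the thread's actual base tempo is not stated):

```python
def pitched_tempo(base_tempo, semitones):
    """Tempo of a render after repitching by the given semitone offset.
    Negative semitones pitch down, which slows the phrase proportionally."""
    return base_tempo * 2 ** (semitones / 12)

# Hypothetical example: a phrase rendered at 120 BPM, played 5 semitones down.
print(round(pitched_tempo(120, -5), 6))
```

This is why a repitched render's tempo ends up as a long decimal like 72.920625, and why the gain/mix may need revisiting after a tempo change.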

Here’s a SoundCloud example of render and source… I can’t upload the XRNS because it’s messy
and has VST compatibility issues; I will try to simplify the description.

Render (The phrase is rendered)

Electronic drums: dry and progressively distorted versions.
By “progressive” I mean it is rendered one effect at a time, each on top of the last.
I render one effect at a time because I multiband-mix the loudness levels to hit a certain Low/Mid/High scale
before the render goes through the next effect.
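The progressive-render loop above can be sketched like this; the effect names and the level check are placeholders for illustration, not a real DSP chain:

```python
def render_progressive(dry, effects, check_levels):
    """Render one effect at a time, each on top of the previous take."""
    takes = [dry]
    current = dry
    for fx in effects:
        current = fx(current)   # render one effect on top of the last take
        check_levels(current)   # mix to the Low/Mid/High targets before moving on
        takes.append(current)
    return takes  # the dry take plus one take per added effect

distort = lambda take: f"distort({take})"
crush = lambda take: f"crush({take})"
takes = render_progressive("drums_dry", [distort, crush], check_levels=print)
print(takes)  # ['drums_dry', 'distort(drums_dry)', 'crush(distort(drums_dry))']
```

The point of the intermediate takes is that each one has already been balanced per band, so the next effect starts from a known loudness baseline.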

Source (No phrase render, live)

Taiko drums and rims: Now this is where it gets messy for me, and I’m still trying to figure out
a decent workflow. Because this is a Kontakt VST, I have to think in those terms.
The company selling these instruments also included the samples, so the possibility of an XRNI version is there.
But other Kontakt instruments do not provide the samples, so I have to experiment with rendering “instrument to sample”,
or render the notes and articulations as I need them.

The concept is one rhythm for percussion, with timbre/instrument switching via the MaYbe command
(MaYbe switching between 4 electronic drum versions and the Taikos).

To put it simply, the electronic drums (dry and distorted) have each been mixed in the context of the phrase,
while the Taikos have not.

The problem for me is that this is MaYbe-command driven.

The electronic drums have all been sliced, so at any given slice the MaYbe command may or may not reverse it.

I don’t mind the sliced/reversed glitch sounds from the electronic drums, but the Taikos I want to play organically.
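As a hypothetical simulation of the behaviour described above: each sliced hit may or may not play reversed on any given pass. The 50% chance and the slice names are my assumptions for illustration, not values from the thread:

```python
import random

def play_pass(slices, reverse_prob=0.5, seed=None):
    """One pass through the sliced phrase; each slice maybe plays reversed."""
    rng = random.Random(seed)
    return [(s, "reverse" if rng.random() < reverse_prob else "forward")
            for s in slices]

slices = ["kick", "snare", "hat", "snare"]
print(play_pass(slices))  # a different mix of directions on every pass
```

This randomness is exactly why a mix done in the context of the rendered phrase and a mix done in the context of the MaYbe command can sound different.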

So the question for me is: how do I mix the organic sounds?

To reiterate: for this SoundCloud example, I mixed the Taikos in the context of the MaYbe command, not in the context of a rendered phrase.

An alternative solution to try out… it may or may not work:

Mix each instrument in the context of the phrase.

Render single shots in the context of the phrase, instead of rendering in the context of the MaYbe command.

Rendered playback is then in the context of the phrase while maintaining the freedom of the MaYbe command.

The original instruments would be left untouched for other phrase-contextual use.