Excel to Renoise

Noob! :lol:

Do you really get music just by opening this in Renoise? I figure you have to add instruments manually?
How much of a composition is that sheet? Is it a complete track, or how does this work? Does it sound good?
Can we hear?

Those word problems :wacko:

Everything is a manual transfer from Excel to Renoise, or the other way around.
Excel’s job is more or less that of a statistics-driven rhythm and pitch sequence manager.

It’s still at an experimental stage; the picture is one XRNS.
The goal is to auto-pilot the composition process, both when it is tied to a subject and when it isn’t.

At its most basic level, the rhythm alone with a simple click tone should sound good.
In the overall process I do choose instruments first, before rhythm.

If I don’t give up, I’d like to publish something on Bandcamp as a proof of concept.

Sounds to me like an extremely ineffective way to compose, but nonetheless quite awesome!

Beautiful, in a techy sort of way.

Looks to me like Schrödinger’s cat barfed on the screen…

I wonder if you’d mind sharing the XRNS?

The Excel image looks absolutely fascinating, but I’m really struggling to interpret what it means, what it might sound like, and so on. I’m very curious to see and understand the Renoise song data this actually translates into.

Heck, even a screenshot of the pattern editor might be enough, if you don’t feel like sharing the whole thing. I’m mainly curious to see if there’s some kind of 1:1 relationship between the Excel data and the pattern data.

Orange cells in Excel are patterns in Renoise.
This is a way to organize the 1:1 relationship between two programs.

On the very left of the Excel sheet, you’ll see a description of what those numbers mean.
It’s not a detailed description, as this is becoming a personal system of sorts,
which means changes are made constantly to satisfy my perception of music
and to try different ways of achieving a faster composition process;
and if the whole process can run on auto-pilot, great.

Order: In the Excel example you’ll see that the entire row is all ones.
In other words, this is the first layer, which dictates the rest of the layers.
Sort of like a slipstream.

Vary: You’ll see that it’s a double digit; it’s just a way to keep tabs on
which pattern I did first, second, etc., and on the total number of patterns.
Pattern 5 of 7 total, for example.

Gear vary: Some patterns may share the exact same rhythmic units.
This is a way, just like “Vary”, to keep tabs on which of the patterns sharing
the same rhythmic units I did first, second, etc., and on their total number.
1_77_12 and 1_77_22 share the exact same rhythmic units: 8 lines, 3 lines, and 5 lines.

In Renoise this labeling will simply be placed in the Pattern Matrix.
1_17_11 = Order, Vary, Gear Vary.
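
In case it helps to see those three fields side by side, here’s a minimal sketch in Python (my own illustration, not part of the actual workflow; the field names and the “first digit = index, second digit = total” reading are my interpretation of the description above):

```python
# Hypothetical helper: split a Pattern Matrix label like "1_17_11" into its fields.
# Order = which layer dictates the rest, Vary = "pattern X of Y total",
# Gear Vary = the same idea, but only among patterns sharing the same rhythmic units.

def parse_label(label: str):
    order, vary, gear_vary = label.split("_")
    return {
        "order": int(order),
        "vary": (int(vary[0]), int(vary[1])),            # e.g. "17" -> pattern 1 of 7
        "gear_vary": (int(gear_vary[0]), int(gear_vary[1])),
    }

print(parse_label("1_17_11"))
# {'order': 1, 'vary': (1, 7), 'gear_vary': (1, 1)}
```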

SRX unit: SRX stands for Start Return Exit. Like geometry, rays and lines…
I’m still fresh with my math, so bear with me if there are mistakes in my math lingo.
SR is a way to label a pattern that loops: Start, Return.
SX is a way to label a pattern that does not loop: Start, Exit.
In 1_17_11 there are 7 rhythmic units and the SRX length, or pattern length, is 48 lines.

Underneath SRX unit and SRX length is the rhythmic units pool. The XRNS is at 16 LPB;
1_17_11 has 4-line and 8-line units. 4 lines = a 1/16th and 8 lines = a 1/8th.
8 is color-coded green, which means the pattern starts with 8 lines and ends with 8 lines.
The yellow color code simply indicates that the pattern ends with that many lines.
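
To make that arithmetic concrete, here’s a hedged Python sketch (my reconstruction, not the spreadsheet’s formulas). The particular split of 1_17_11’s seven units is a guess that happens to fit the numbers given: it starts and ends with an 8-line unit and totals 48 lines.

```python
LPB = 16  # lines per beat in the XRNS

def note_value(lines: int, lpb: int = LPB) -> str:
    # lpb lines = one beat = a quarter note, so 4 lines at 16 LPB = a 1/16th
    return f"1/{lpb * 4 // lines}"

# A guessed split of 1_17_11: 7 rhythmic units, starting and ending on 8 lines
units = [8, 4, 8, 8, 8, 4, 8]

print([note_value(u) for u in units])  # ['1/8', '1/16', '1/8', '1/8', '1/8', '1/16', '1/8']
print(len(units), sum(units))          # 7 units, 48 lines = the SRX length
```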

indi V: short for independent variable. In 1_17_11 I put in 8, meaning this
pattern will loop 8 times, which then calculates some stats: the SRX unit count, originally 7, is now
56, and the length is now 384 lines. I’m currently trying to figure out a way to
express these in minutes and milliseconds. I may also use percentages later during
the arranging process.
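
Since a line’s duration only depends on BPM and LPB, the minutes/milliseconds calculation could look like this (a sketch; the 120 BPM is an assumed tempo, not taken from the song):

```python
def lines_to_seconds(lines: int, bpm: float, lpb: int = 16) -> float:
    # one line lasts 60 / (bpm * lpb) seconds
    return lines * 60.0 / (bpm * lpb)

loops = 8                  # the "indi V" value for 1_17_11
srx_units = 7 * loops      # 56 rhythmic units after looping
length = 48 * loops        # 384 lines after looping

seconds = lines_to_seconds(length, bpm=120)   # assumed tempo
print(srx_units, length, f"{seconds:.3f} s")  # 56 384 12.000 s
```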

Finally, Sequence and Dynamics. In 1_27_11 the sequence is 10, 5, 10, with dynamics
at mid to low, then high.

These patterns are recyclable; some adjustments may be required when going from a slow to a fast BPM.
In the XRNS I used Taiko drums because of their ambiguity, but you can use your own samples.
To give each pattern greater variation and a longer shelf life, use the maybe command
on the snare placement if you are simply using a kick/snare combo instrument.
It doesn’t have to be drums; use melodic or harmonic instruments if you like.
Check out Achenar’s acoustic samples:
https://forum.renoise.com/t/video-2-recording-new-sounds/41810
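
If it helps to picture what the maybe command does to the snare, here’s a rough Python simulation of the idea, a per-line trigger probability (the 50% value is arbitrary and this is only an analogy, not Renoise’s actual implementation):

```python
import random

def maybe(probability: float = 0.5) -> bool:
    # the gist of a "maybe" trigger: the note only plays some of the time
    return random.random() < probability

for line in range(16):
    kick = line % 8 == 0                    # fixed kick placement
    snare = line % 8 == 4 and maybe(0.5)    # snare placement left to chance
    print(line, "K" if kick else ".", "S" if snare else ".")
```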

14 MB of rendered WAV files.
xrns_exceltorenoise

The link will take you to my OneDrive, along with a bunch of other files. I haven’t figured out
a direct download link; if you want, just use “find” in your browser and copy-paste “xrns_exceltorenoise”
to find it quickly, although there aren’t that many files to sort through.

How do you export the .xls to .xrns?

Do you simply rename it?

Everything is a manual transfer, Excel to Renoise or vice versa.

Manual may sound like it takes time, but it really doesn’t once you know how both programs work.

I haven’t scratched the surface of Excel, but I like the results so far.

Hey kids, this is why you shouldn’t consume too much LSD.

Could we hear something that has been composed this way?

Mix priority is low on this demo loop, plus I need to update my mixing skills, especially creative stereo mixing.

The math used to arrange the Taiko drums was percentages.

I mostly use ratios for writing melodies. This demo uses two-part counterpoint.

Concepts of momentum and trajectory are used intuitively, but not mathematically.

Statistics (mostly finding the range, median, and interquartiles) are used, however, in conjunction with momentum and trajectory.
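
For the statistics side, this is roughly the kind of summary meant by range, median, and interquartiles (a generic Python sketch; the values are made up, not taken from the demo):

```python
import statistics

# hypothetical pool of values to summarise: note numbers, velocities, line positions, ...
values = [48, 50, 53, 55, 57, 60, 62, 65, 67, 72]

value_range = max(values) - min(values)
median = statistics.median(values)
q1, q2, q3 = statistics.quantiles(values, n=4)  # quartiles; Q3 - Q1 is the interquartile range

print("range:", value_range, "median:", median, "IQR:", q3 - q1)
```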

Here’s an unofficial Mad Max Trailer Remix. I just swapped the Taiko drums from before, and to experiment further, the first half is Excel or math based, the second half is more sample looping and just feeling it out.

Not much, if any, intricate sample editing or pattern command usage. Just making sure the sample cuts sound proportionate.

Headphone mixed…

I’ve also been trying out the R programming language for this type of approach to music, though it’s going to be a while before anything fruitful happens.

Very interesting. Do you find the excel process removes you from the music in a sense? By that I mean do you experience a level of surprise and novelty when you hear the results rendered by Renoise?

I am wondering if it’s a kind of partial, basic control of the sonic structure that discourages the on-the-fly modifications you normally get from the feedback loop of the composition process when using more orthodox methods.

Yes, the Excel process removes me from the music in the traditional feedback-loop sense.

I don’t think I experience much surprise or novelty; the Excel, or numbers, process is meant to do the opposite: no surprise, more of a practice in precision. However, the process is in modular units, nothing completely set in stone; I wanted it to be recyclable and reconfigurable with parts done in the past and parts done in the future.

I suppose it’s Beethoven-esque. A long time ago I went to an AES (Audio Engineering Society) convention and attended a lunchtime keynote about Beethoven, his deafness, and his music, by Dr. Charles Limb. I’m googling that from memory; photos of him do look like the speaker at the keynote.

Long story short, he talked about the possible causes and aggravations of Beethoven’s deafness, then played Beethoven’s symphonies from the stages of his deafness, each symphony with a filter to mimic his hearing at that stage; the last symphony was no audio.

I think I read elsewhere that Beethoven had already memorized all he needed to in order to make music that way, plus his assistant might have helped.

Pseudo-random suggested reading; Google search terms: aes engineering ear doctor beethoven

http://edn.com/electronics-blogs/brians-brain/4304878/The-Audio-Engineering-Society-Convention-Deafness-Doesn-t-Dampen-Musicians-Talents

On a side note, if you play video games: doing music this way is like city-building games, whereas faster feedback-loop gaming, like real-time combat games, is much like the traditional way of making music.

I haven’t studied combinations and permutations; I’ve only heard about permutations through the Permut8 plugin.

I have some understanding from a musician’s point of view, currently not from a mathematician’s point of view.

I’ve been using a 4-bit binary system to organize sounds.

Lucky for me, I’ve narrowed my instruments down to 4 categories: Atmosphere, Percussion, Bass, Mid, in that order. So if I had just a Percussion and Bass-frequency track, in binary it would look like 0110.

I’ve updated my template so all 15 (counting from zero) combinations are ready; if I wanted to create a bunch of Percussion-and-Bass-only tracks, they would get organized in the 0110 section in the sequencer.

This method also helps in managing pattern data, which I suppose is configurable: pattern data in one section can go to a different section, e.g. 0101 (Percussion, Mid) to 0011 (Bass, Mid).
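
As a quick illustration of that labelling (my own sketch, not the actual template), mapping a set of track categories onto the 4-bit code could look like this:

```python
CATEGORIES = ["Atmosphere", "Percussion", "Bass", "Mid"]  # bit order as described above

def section_code(active: set) -> str:
    # one bit per category: "1" if that category is present in the track set
    return "".join("1" if c in active else "0" for c in CATEGORIES)

print(section_code({"Percussion", "Bass"}))  # 0110
print(section_code({"Percussion", "Mid"}))   # 0101
print(section_code({"Bass", "Mid"}))         # 0011
```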

Sorry, nothing to see here, I just needed to backup an experiment somewhere quick.

...... FRAMES / EVENTS
K.LOUD 160 / 13 = 12 (4 LEFT OVER)
K.SOFT 160 / 13 = 12 (4 LEFT OVER) SHIFT DOWN 6 INDUCE W/ X 2'S
S..... 160 / 8 = 20
RICK.. 40 / 5 = 8 
...... 1 OF 2 @ "S" EVENT 1 + 2
...... 2 OF 2 @ "S" EVENT 5 + 6
UNA... 40 / 3 = 13 (1 LEFT OVER)
...... 1 OF 2 @ "S" EVENT 3 + 4
...... 2 OF 2 @ "S" EVENT 7 + 8
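
Reading those numbers as frames divided into a number of events, with the remainder noted, the arithmetic is just integer division with a remainder (a sketch of my reading, not the actual workflow):

```python
# reproduce the divisions from the note above
for name, frames, events in [("K.LOUD", 160, 13), ("K.SOFT", 160, 13),
                             ("S", 160, 8), ("RICK", 40, 5), ("UNA", 40, 3)]:
    step, left_over = divmod(frames, events)
    print(f"{name:7s} {frames} / {events} = {step} ({left_over} left over)")
```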

Man, this is one crazy but fascinating thread!!! :w00t:

I guess all I really want to do is automate the post-inspiration process.

Very much debatable, but in my opinion, finding inspiration is way more fun than sitting in front of your computer crunching numbers.

Ideally I’d like to just upload a life experience, captured in video format, and have the computer translate that to music.

It feels odd not to have a fourth sentence, so this sentence is just a filler.

Humans Need Not Apply