
RAS - Renoise Accompaniment System



#151 Renoised


Posted 24 July 2017 - 13:56

@Renoised,

 

Very OT:

 

I was playing around with a waveform generator concept, meant to mimic the native modulation system. Not sure if that's what you mean?

 

The operand buttons and lots of other features are missing here, though, and all parameters should be modulatable by another OSC. A good idea would also be to have something akin to a "formula device", plus the ability to render to a single cycle and/or as multi-samples.

 

On the other hand, there are already awesome VSTis for these kinds of sounds, including support for converting to/from overtone envelopes, morphing and whatnot. MPowersynth is one of them.

 


 

 

Man, that's twice you've amazed me with your stuff, and yes, that's absolutely what I mean!

 

If you were to take that, and allow math operators between the waveform objects (as in the Renoise Modulation System) that would be absolutely spot-on!

And what's extra cool about this stuff is that, because it's procedural, you can ensure the end always blends perfectly into the start without clicking.

 

Thumbs-up to you, Joule, I do like the way you think, you definitely have the right ideas for tools that should be part of Renoise :)

Actually, I do too, trouble is I can't program them :unsure:

 

 

 

 

To summarize the essential idea of the thread (the dynamic RAS), in case some new forum reader is tuning in to this topic and having a hard time understanding what it's all about:

 

Take a look at this video, where some guy demonstrates how to create a so-called "style" on his Yamaha PSR-S950 Arranger Keyboard:

 

 

 

Now, at the end of that video, did you observe that he's playing chords with his left hand and melodies with his right hand, and the style will automatically harmonize to play the entire style arrangement in accordance with the chosen chord -- chosen, as in "any chord"?

 

Imagine a similar Accompaniment System implemented in Renoise (the so-called RAS). You edit/record an arrangement in a single pattern, everything -- bass, piano, strings, etc -- playing in (let's say) C Major. You press the "Enable RAS" button. All the tracks which should be left "untouched" by the RAS algorithms are marked (with a specific color or track name or whatever). Such tracks could be, for example, drums and various one-shot sound effects.

 

Then you hit playback and the Renoise pattern starts playing in a loop. You switch over to your MIDI-keyboard and start experimenting or improvising an entire song structure just by playing the chosen chords with your left hand and the melodies with your right hand.

 

The power of the RAS is that it would add a new value dimension to the tracker. A single style could be stored as a Renoise song (an .xrns file) containing all the template patterns that make up the style, each template pattern in that song being a variation (such as a break, an intro, an outro, etc).

 

Renoise code could then also possibly be compiled into a new side-product with a different name (such as "Re-Play", "Renoise Live Arranger", "Renoise Instant Gratification" or whatever) and sold or distributed as a separate product that wouldn't show all the "tracker editor stuff", just the necessary basics for people to start jamming without knowing anything about trackers at all. Maybe such a separate product would also load encrypted .xrns songs (i.e. the "styles") so that it would be possible to distribute "style packs" separately, creating an incentive for Renoise style creators to make some $$$ in the process. Then also implement a ChordPulse/Band-in-a-Box type of chord progression editor into that new "Renoise spin-off" product so that users could just type "Fsus4, Dm, G7, A" etc and hit play -- with some well-placed marketing, this thing will sell thousands of copies within weeks!

 

So, is the RAS concept a good idea?

 

No. It's an EXCELLENT idea!!

 

 

Yes! Yes!! Yes!!! - That's exactly how I envision RAS to be. And just think: all that chord-recognition he was showing off at the end could work by pressing on-screen chord-pads as well, so even easier than he demonstrated -- even a complete noob with no music theory whatsoever could do it. Like I said, some people have their reservations about this, but I'm pretty sure that if it were implemented correctly, in a simple and obvious manner, everyone would be so damn in love with it they'd never want to put it down!

 

We definitely need this stuff, it would be super-addictive and productive :walkman:


Edited by Renoised, 24 July 2017 - 16:16.


#152 Renoised


Posted 24 July 2017 - 15:19

LIFE BEFORE RAS

 

:excl: :( :excl:

 

 

 

LIFE AFTER RAS

 

:yeah: :yeah: :yeah: :drummer: :yeah: :yeah: :yeah:



#153 encryptedmind


Posted 24 July 2017 - 23:17

" with some well-placed out marketing, this thing will sell thousands of copies within weeks!"

 

For some strange reason, I am liking the sound of that. :)

 

RAS can be a game changer for newcomers and for existing Renoisers who prefer to use trackers. It could be seen as a third product category in the Renoise lineup, one that truly brings both tracking and ease of use to a wider audience. Branching out and re-inventing certainly helps once in a while.




#154 Garrett Wang


Posted 25 July 2017 - 00:36

Actually, I'm wanting to tease for another feature as well right now, but thought I might be pushing my luck a bit!

 

I was going to ask about a procedural waveform generator, something that could procedurally generate the usual Sine, Triangle, Saw Up, Saw Down, Square, noise etc, but with the added ability to procedurally blend more than one wave together, and with the ability to soften the corners, bend the ramps, and procedurally add noise and irregularity to the waveform. The resulting waveform could be used as a single-cycle waveform for the sampler and even external samplers etc. I was going to ask about something like that, but thought it best not to, seeing as I already have one thread going!

 

So I've decided not to mention it at all, no one has any clue I would like to see a procedural waveform generator ...  :P :D ;)

 


 

:D

 

Have you tried these two tools here:

 

custom wave synth:

 

https://www.renoise....stom-wave-synth

 

morphsynth:

 

https://www.renoise....ools/morphsynth



#155 Garrett Wang


Posted 25 July 2017 - 00:44

Branching out and re-inventing certainly helps once in a while. 

 

In terms of reinventing trackers and bringing them to a wider audience, I was thinking that a tracker which does not use hexadecimal and is based on 96 ppqn [pulses per quarter note] could do that. It would bring "interoperability of groove" with the MPC-style groove.

But if people couldn't learn hex, they probably can't divide up 96 either, so maybe not. Anyway, Renoise can already do it, but it's complex for those people who want simplified stuff. If Renoise is set at 12 TPL, that's 48 ppqn (12 x 4, with 4 lines per beat). Dude, I think the LFO module has the bipolar.
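
To make the arithmetic concrete, a tiny Lua sketch (assuming the usual 4 lines per beat; the numbers are just the ones above):

-- tick-resolution arithmetic (LPB assumed to be 4, i.e. one line = one 16th note)
local lpb = 4            -- lines per beat
local tpl = 12           -- ticks per line
print(tpl * lpb)         --> 48 ppqn, as stated above
print(tpl * (lpb * 2))   --> 96 ppqn if the line resolution is doubled to 8 LPB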



#156 encryptedmind


Posted 25 July 2017 - 01:14

@garrett: Hardy har har :) The LFO tool is cool.

Edited by encryptedmind, 25 July 2017 - 01:15.



#157 encryptedmind


Posted 25 July 2017 - 04:31

In terms of reinventing trackers and bringing them to a wider audience, I was thinking that a tracker which does not use hexadecimal and is based on 96 ppqn [pulses per quarter note] could do that. It would bring "interoperability of groove" with the MPC-style groove.

But if people couldn't learn hex, they probably can't divide up 96 either, so maybe not. Anyway, Renoise can already do it, but it's complex for those people who want simplified stuff. If Renoise is set at 12 TPL, that's 48 ppqn (12 x 4, with 4 lines per beat). Dude, I think the LFO module has the bipolar.

 

Well, that's not really much of a difference, given that MPC swing templates do exist (Ableton). Divisions are mostly a resolution thing for the hardware design. With the advent of software, it's much more convenient to change them since they are not tied to music-hardware-specific designs. The Renoise environment captures Dilla-esque hip hop grooves very nicely, so the delay lane and the TPL at 16 get my deal done, and I can analyze the delay settings for a beat just by slicing it in the sampler and reading the note delay values in the Phrases editor. Entering the same delay sequence results in the same groove. For live playing even that is not required, just press ESC and play. Simple, also because I do not use quantize for any of my rhythm section (beats, bass, keys, solos), resulting in very 'unquantized' electronic music.
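
Just to illustrate that delay-column arithmetic, a rough Lua sketch (assumptions: LPB 4 so each line is a 16th, and a delay column spanning one line as 0x00-0xFF; the swing values are only examples):

-- convert an MPC-style swing amount into a delay-column value for every second 16th
local function swing_to_delay(s)       -- s = 0.50 straight, ~0.66 triplet feel
  local line_fraction = 2 * s - 1      -- how late the 2nd 16th of each pair lands
  return math.floor(line_fraction * 256 + 0.5)
end

print(("%02X"):format(swing_to_delay(0.58)))  --> 29
print(("%02X"):format(swing_to_delay(0.66)))  --> 52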


Edited by encryptedmind, 25 July 2017 - 04:34.



#158 encryptedmind


Posted 25 July 2017 - 05:00

After watching many Yamaha Tyros style file demos and tutorials on Youtube, as well as playing with my Casio arranger keyboard, I got a very simple idea that I can use quite easily to implement styles support in RAS prototyping.


1) The rhythm section for every style is basically programmed or played and heavily quantized (at least on Tyros).
Next, the rhythm variants are manually programmed as per the hardware sequencer buttons for INTRO, VARIANT A to E, ENDING, FILLS, CHORUS etc.
Most of these sections are first chosen and recorded in a similar fashion. Rhythm variants are simple in the sense that they just add or remove elements.
For each of these variants, some additional changes like INSTRUMENT PRESET changes are also made to alter the overall feel.

2) For the bass rhythm, much of the purpose of the bass track, whether monophonic or polyphonic, is basically just to capture the note groove and not the nuances of the bass playing. That is because the Tyros manual recommends that you record the bass notes only on C, not even on other notes of the C scale (the C scale is implied).

This makes you think that it might be WAY easier to develop our RAS system. Hear me out.

3) Same for the chord sequence: the C major chord is played in the required rhythm, and the chord changes themselves are not recorded.

Now at this point you already have a foundational style, and if you do all the variants, the intro, the ending etc for each of the buttons, you will eventually have a fully complete style. Name the style and save it.


We can then do a very simple thing because of the following assumptions:

1) Right after opening Renoise, the user will not look for the user manual or manually create a new style. He will just press play and jam on the keyboard.
2) He will expect it to follow certain industry standards, like Yamaha Tyros style creation and playback.

This means a lot to us:

1) We basically just need 5-8 virtual song sections with their own drum tracks. This gives the overall song an AABA or similar starting point, and mostly lets the user choose which variant or intro/ending he would like to play with. A Synchro-Start feature in RAS would be great, where RAS starts playing the moment the user presses a key on the MIDI keyboard.
2) Since the rhythm tracks are taken care of, the chord and bass tracks can essentially be just rhythm patterns, since we can eliminate the bass-note fluctuations and chord changes and record only their rhythm values, much like a drum track. Basically all the bass notes and chord notes can be normalized to a C note or a C Maj chord, but these entries will store the TIMING information of their playing.
3) After that, the style will seem to play only the C Maj chord and a C-note bass pattern, but the user will notice that they are playing with a specific rhythm along with the drum track.

4) THIS IS WHERE WE NEED TO DO WORK. First of all we need the first feature that I wanted inside RAS - a Chord Analyzer, even a basic one, because unless we have this module, RAS will not be able to understand what the user's input chord is. How else would it 'know' what the chord is, since it is live input? This will without a doubt be a make-or-break feature. Manual chord entry is not really the USP for RAS. If you look at the Tyros style creator, I do not see any easily visible Chord Entry tool that lets the user press a chord name on the screen and program it like that; they all enter the chords via the keybed. The style creator shows the chord entered, the bar position etc, and it has a Chord Analyzer function.

5) Now let's look at the data we have once the chord analyzer module does its job. First, we have the original chord and bass timing templates. We also have live chord input from the user; however, he is not required to play the chords in time or in the style's rhythm, he just has to enter the chord of HIS choice. Pressing it once is enough, and it will LATCH to the song until he enters a new chord or the variant finishes a cycle.

So basically our RAS has to MAP the user chords and play them according to the timings in the style template stored for the chord/piano section. The bass section will be automatically mapped to the chord root note. This information will be generated by our Chord Analyzer function and passed along accordingly.

Our job is greatly simplified if this preliminary analysis is correct. This way we can even incorporate an arpeggiator, since the user can play ANY chord and the requisite chord notes will be arpeggiated accordingly, either as a preset default or according to the information we store in the styles. This also ensures that complex chords will not be dumbed down just to fit the style. I can play Gospel chords over a country track and it will still sound 'Country', just a little harmonically denser. But, this way, you will NOT be able to play jazz-style piano rhythms if the Country style is just playing four-to-the-bar chords, because that is what is stored in the Country style.

Now, the user can play and press the chords on the computer keyboard or via MIDI input, and the chord type will play at the style's preset rhythm, arpeggiate if required, and latch on the last played chord. The SPL will be relevant here since the MIDI data generated by the USER will simply be played back at the current SPL, BUT in accordance with the style rhythm, i.e. the style rhythm section will always take precedence.
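
To make the Chord Analyzer idea from point 4 a bit more concrete, here is a minimal Lua sketch. It uses no Renoise API, and the chord-shape table is only a tiny illustrative subset:

-- Input: the held MIDI note numbers. Output: root pitch class (0 = C) and chord quality.
local CHORD_SHAPES = {
  ["0,4,7"]    = "maj",
  ["0,3,7"]    = "min",
  ["0,4,7,10"] = "7",
  ["0,5,7"]    = "sus4",
}

local function analyze_chord(midi_notes)
  -- collect the unique pitch classes of the input
  local seen, classes = {}, {}
  for _, n in ipairs(midi_notes) do
    local pc = n % 12
    if not seen[pc] then
      seen[pc] = true
      classes[#classes + 1] = pc
    end
  end
  table.sort(classes)
  -- try every pitch class as a candidate root and match the interval pattern
  for _, root in ipairs(classes) do
    local intervals = {}
    for _, pc in ipairs(classes) do
      intervals[#intervals + 1] = (pc - root) % 12
    end
    table.sort(intervals)
    local key = table.concat(intervals, ",")
    if CHORD_SHAPES[key] then
      return root, CHORD_SHAPES[key]
    end
  end
  return nil, "unknown"
end

print(analyze_chord({ 53, 57, 60, 63 }))  --> 5  7   (an F7 voicing)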

In our style creator module we have to map sound presets and be able to read and write MIDI data effectively. A solid MIDI library is essential for our coding purposes. However, the logic of how to get this done seems pretty straightforward; no wonder those simple keyboards back in the 1990s already had all this, when we did not have any 'sophisticated' Ableton-like 'warping' features or granular synthesis -- the virtual band has been manufactured for decades already.


What are your thoughts on this? Am I on the right track or is Style creation a very complicated job indeed?

Does anyone use a Tyros in the Renoise forums? Maybe you can give us some insider tips?



Another thing I noticed is that some styles make use of C Maj chord substitutions played rhythmically, meaning that some amount of variation is permitted when storing chord tracks, provided our RAS can read those variations and transpose them for the user according to the Key the user has set.

So a simple C Major substitution chord-track template, which is essentially playing a C Major chord, could be: CEG, CEG, CFG, CEG.

The C major triad is changed to a Csus4 chord for one chord only. However, when the user plays the style and presses the F chord, since the key is already preset to C Major, F Major is the chord that will play. While following the original timing template above, it will not play Fsus4, since the Bb is not in the C Major scale (or C key). Therefore it will either play an available chord note (like G) or simply skip it if there is no 'available or valid' scale note for that particular chord. On the contrary, if the user presses G major, it will result in a Gsus4 being played for the 3rd chord, according to the timing template.
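
For illustration, a minimal Lua sketch of that scale-validity check (the key, the template and the pitch-class numbers are just assumptions for this example):

-- C major pitch classes (0 = C). Template stored relative to C: CEG, CEG, CFG, CEG.
local C_MAJOR  = { [0]=true, [2]=true, [4]=true, [5]=true, [7]=true, [9]=true, [11]=true }
local template = { {0,4,7}, {0,4,7}, {0,5,7}, {0,4,7} }

-- Transpose the template to the user's chord root, dropping notes outside the key.
local function realize(user_root)
  local out = {}
  for i, chord in ipairs(template) do
    out[i] = {}
    for _, interval in ipairs(chord) do
      local pc = (interval + user_root) % 12
      if C_MAJOR[pc] then
        out[i][#out[i] + 1] = pc   -- keep the note
      end                          -- else: skip it (or substitute a valid chord tone)
    end
  end
  return out
end

local f_chords = realize(5)  -- user presses F: the 3rd chord loses its Bb (not in C major)
local g_chords = realize(7)  -- user presses G: the 3rd chord keeps its C, so Gsus4 plays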

Seems logical, and simpler to implement than I expected.

Edited by encryptedmind, 25 July 2017 - 19:12.



#159 encryptedmind


Posted 25 July 2017 - 11:12

Btw just read about https://toplap.org/siren/.

 

It seems to be inspired by the tracker interface and by algorithmic composition for live-coding music performances.

 

I wonder if RAS could also implement a live coding framework that melds into the accompaniment system. More than just Tyros or BIAB, it could incorporate code snippets as well, and whatever cool features this Siren tool seems to offer. The Renoise API can use live coding principles as well; I wonder if there is a live coding tool for Renoise yet.

 

For a new user or existing Renoise user to extend RAS with his own sounds -- rather than being stuck with Tyros or BIAB 'stock' sounds, which are not really suited for thumping electronic music -- the sound sources have to be directly linked with the Renoise instruments list, or else RAS must have its own method of maintaining samples and loading VSTis and FX plugins. Just like Redux, the feature set we are talking about calls for a VST-plugin-like approach. However, for starters, the tight integration with Lua and OSC means that this 'outside' application would still be able to control Renoise and convey the arranger benefits. If it is bound only to Renoise, others who see its potential might feel very limited by having to use Renoise only.

 

The styles features are quite similar between keyboard versions like the Tyros and more extensive ones like BIAB. I like the BIAB version better because it offers so much more. You can play with BIAB via MIDI as well, and while it is exponentially more complex for the average Tyros keyboard player, for detailed song production BIAB really does have a very complete feature set, for a majority of styles and genres. I say once we have the Tyros system in place, we should aim for the BIAB essentials feature set that matches our expectations. A vocal harmonizer with vocoder is a final component that we would need to implement as well if we are to call our new software complete. BIAB does jazz analysis and solo composition really well; Tyros-based Yamaha styles do not do any sort of solo or lead melody generation. Tyros is much simpler to use and to implement than something like BIAB, which under the hood actually looks quite beautiful.

 

In the current climate of styles products, audio tracks are an advertised feature. For Renoise, audio generated or used internally can be fed as the source for the specific audio-related tracks in the RAS. By default we will have to package a good number of real-sounding and groovy audio beat tracks to round out the RAS feature set.

 

As we get deeper into RAS development, more features will be perfected or added; that is normal. We first have to draw up a top-5 feature to-do list. We need to agree on a programming language or API and then start with the pseudocode, mockups and feature/algorithm research to get ahead with this particular project. We need to appoint a project overseer who will catalog the development and all related assets and information required to see this through. Developers on their own will do their part, add features and modules, and do research for finding solutions.

 

 

My suggestion: more than Tyros, investigate BIAB really well, not to copy-paste anything, but to gather the most useful bits for our purposes. Check out the Siren app as well, btw; somewhere between all the features discussed above lies our RAS tool.


Edited by encryptedmind, 25 July 2017 - 11:13.



#160 danoise


Posted 25 July 2017 - 12:56

I wonder if there is a live coding tool for Renoise yet. 

 

xStream is all about those things. Which is also why it's arguably a bit hard to use, heh.

But then, I did point out that an xStream spinoff would be an obvious choice for building RAS, or a prototype thereof. 


  • encryptedmind likes this



#161 teis


Posted 25 July 2017 - 14:15

is this about getting more and more lazy or what is this about?

 

there are several vsti's out there with tons of chords and stuff, triggering other vsti's and playing whatever you tell them to play by pressing 1 note.
2 have been mentioned here already.

 

and then you have pattern phrases ..

 

i mean. is it really about getting lazy? .. about pads getting automatically chorded to a baseline ??

 

i really don't get it.. please help





 


#162 joule


Posted 25 July 2017 - 14:43

For those of you who don't understand the concept of auto accompaniment ("styles") and keep referring to phrases, note to chord midi tools - I suggest trying the demo of chordpulse to get an idea of what these systems are about.

 

A very simple explanation is that it's an intelligent arpeggiator (remapping voices), as opposed to the 'stupid' arpeggiators mentioned that only deal with absolute inputs and simple transposing algorithms (phrases, note to chord tools).

 

PS. It's laziness in the same sense that loading a third party drum loop is lazy, instead of making your own beat from scratch. Or using a VSTi instead of calculating or drawing your own samples.


Edited by joule, 25 July 2017 - 14:46.

  • Fsus4 likes this

#163 Fsus4


Posted 25 July 2017 - 15:25

For those of you who don't understand the concept of auto accompaniment ("styles") and keep referring to phrases, note to chord midi tools - I suggest trying the demo of chordpulse to get an idea of what these systems are about.

 

A very simple explanation is that it's an intelligent arpeggiator (remapping voices), as opposed to the 'stupid' arpeggiators mentioned that only deal with absolute inputs and simple transposing algorithms (phrases, note to chord tools).

 

PS. It's laziness in the same sense that loading a third party drum loop is lazy, instead of making your own beat from scratch. Or using a VSTi instead of calculating or drawing your own samples.

 

^^^^^  This.  ^^^^^^

 

Regarding ChordPulse, here's a video if anybody just needs to watch that software in action:

 

 

 

is this about getting more and more lazy or what is this about?

 

there are several vsti's out there with tons of chords and stuff, triggering other vsti's and playing whatever you tell them to play by pressing 1 note.
2 have been mentioned here already.

 

and then you have pattern phrases ..

 

i mean. is it really about getting lazy? .. about pads getting automatically chorded to a baseline ??

 

i really don't get it.. please help

 

OK. Imagine you control an entire orchestra that's playing a 64-track Renoise pattern arrangement in C major -- "control" in the sense that you're directing that orchestra in realtime (i.e. no stopping the playback, no manual editing of notes) to switch from C major to ANY chord you desire on a moment-to-moment basis. You simply signal that you want this switch to happen by doing one of these things:

 

1) forming the new harmony (chord) on your MIDI-keyboard

2) pressing a visual chord-pad in a grid of pads with chord names on them

3) writing short-hand for the chord, e.g. "Dm" would mean D Minor

 

When you do any of these three things at any moment, the orchestra will instantly adapt every track and instrument that needs to be adapted to this new chord and continue to play the same carefully programmed pattern arrangement as before, but automatically and dynamically harmonized on-the-fly as you go forward with your live chord progressions.
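
As a small aside, option 3 only needs a trivial parser. A rough Lua sketch (the note table and the couple of chord qualities handled here are purely illustrative):

-- parse shorthand like "Dm", "G7" or "Fsus4" into a root pitch class and a quality string
local NOTE = { C = 0, D = 2, E = 4, F = 5, G = 7, A = 9, B = 11 }

local function parse_chord(text)
  local letter, accidental, quality = text:match("^([A-G])([#b]?)(.*)$")
  if not letter then return nil end
  local root = NOTE[letter]
  if accidental == "#" then root = (root + 1) % 12 end
  if accidental == "b" then root = (root - 1) % 12 end
  if quality == ""  then quality = "maj" end
  if quality == "m" then quality = "min" end
  return root, quality
end

print(parse_chord("Dm"))     --> 2  min
print(parse_chord("Fsus4"))  --> 5  sus4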

 

So, a RAS is very much about getting a better song-making and sound-design experience. The gap between your mind and reality becomes much smaller; you can spend more time focusing on the harmonies and the music rather than the manual editing of note data. With a RAS, one can be more experimental and creative with harmonies (just play the chord on the MIDI keyboard or press a graphical chord-pad on the screen) and have the computer instantly adapt a full pattern arrangement to such experiments and creativity.

 

Yesterday I also summarized the essential concept of a RAS, maybe you could check it out and gain a better understanding. ;)

 

Yeah, but I still don't  understand WHY anyone would need this RAS thing. You just seem like a bunch of lazy dudes. Renoise is a TRACKER. So why do you need to...

  • say "fruit" when you could just say "apples, oranges, bananas"?
  • multiply 3*10 instead of doing the simple addition of 3+3+3+3+3+3+3+3+3+3?
  • use variables like x and y and equations such as y = 3x+1 when you could just write 7 = 2+2+2+1?

And why -- WHY??? -- do you instantly wish to transform entire pattern arrangements on the fly in realtime to harmonize with any chord, when you could just spend hours having fun with manually editing new patterns??? (*)

*)   Quoting nobody specific in the above quote, just pointing to the deeper issue that probably would influence some people to express negativity about the RAS


Edited by Fsus4, 25 July 2017 - 18:43.


#164 Garrett Wang


Posted 26 July 2017 - 17:28

is this about getting more and more lazy or what is this about?

 

there are several vsti's out there with tons of chords and stuff, triggering other vsti's and playing whatever you tell them to play by pressing 1 note.
2 have been mentioned here already.

 

and then you have pattern phrases ..

 

i mean. is it really about getting lazy? .. about pads getting automatically chorded to a baseline ??

 

i really don't get it.. please help

 

This is exactly what I was thinking as well...the tool is not really necessary at all.

 

@Fsus4 and encryptedmind:

 

Good luck building your tool.

I'm interested to see what the final result will be.

When will it be finished?

Looking forward to it.

 

Seems like maybe what you guys are looking for is the Yamaha PSR-S950?


Edited by Garrett Wang, 26 July 2017 - 17:29.

  • encryptedmind likes this

#165 Fsus4


Posted 26 July 2017 - 20:48

All right guys, kudos to everyone who participated in this thread in a constructive way and made an effort to understand the value of a RAS and the potential of such a powerful feature. Hopefully this thread will inspire more people out there to go Lua scripting as well. (Maybe even seasoned Lua scripters such as joule and danoise will beat us all to the task, who knows...)  

 

I now need to learn Lua, that's the only way forward for me since I want to create my own customized tools. You might see more of me in the Lua scripting forums here @renoise.com, as I'll try my best to invest 1-2 hours a week in learning Lua scripting -- that's all I can afford, unfortunately -- and I'll also explore in private some other stuff related to batch creating .xrns files (from a server) for my specific needs.

 

The RAS will be built sooner or later, it's only a matter of time.

 

See you around in Lua land. :)


Edited by Fsus4, 26 July 2017 - 21:47.


#166 encryptedmind


Posted 26 July 2017 - 22:29

@garrett: Yeah all the Yamaha keyboards have the same essential style maker algo. Thanks for the encouragement.

 

 

TL;DR: StyleMaker algorithm analysis of BIAB; RAS feature priority list; RAS mockup draft; ETA.

 

BTW I wanted to draw some parallels with BIAB, but for our purposes it's way more involved -- not in terms of the feature complexity, but in the time it will take for us to get it right, because we are not pro audio devs as of yet (pro in other things though :)).

 

However, some very important things came out when I was making my own styles in BIAB. I had never really used the accompaniment feature before, but I can see it can be a very good jamming tool in RAS.

 

1) USER EXPERIENCE: Load a style, adjust the tempo to taste, enable Synchro Start in RAS and just press the Chord Pads in the RAS UI or play the MIDI keyboard for chord input. The chords you play will adapt to the style parameters and the music will progress accordingly.

 

2) STYLE CREATION: Create your own style in RAS inside Renoise. The RAS UI will show the 5 most important elements for the style - DRUMS, BASS, KEYS, GUITAR & PADS (slow synth).

 

         This recorded pattern will be represented internally in RAS as a row of pattern boxes. We can fill each pattern box with our user patterns. In pattern entry mode, RAS will switch to the Renoise view and work with the editors (Phrase and Pattern editors) to collect and process data.

               e.g.   

                        DRUMS [] [] [] []

                        BASS    [] [] [] [] 

                        PIANO  [] [] [] []

                        GUITAR [] [] [] [] []

 

        # TRACK 1: Record a 2 bar drum beat. Sound source: drum vst (Kontakt) or just samples in a drumkit layout in Instrument 1

                            Choose another pattern box and record the sequence for a fill.

 

        # TRACK 2: Record a 2 bar bass pattern in the C Major scale, playing the bass riff as if it were for the C7 chord (music theory reasons; it makes it easier to 'transpose' (see modes below) for blues etc).

 

        # TRACK 3: Record a 2 bar piano pattern in C Major Scale with the pattern playing around the C7 chord. You can use other notes as well in the scale if required since RAS will decide finally what to actually play.

 

We can leave out the guitar and pads for now, as even this suffices for our first working example.

 

Next, the user has to save the style to disk so that it can be recalled later.

 

Each pattern box will have a conditions dialog where we can further configure the playing conditions, like the number of bars to play or when to play it in the song section (e.g. after how many bars). That dialog can be invoked by right-clicking the box itself.
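
Just to sketch how such a style with pattern boxes and per-box conditions might be represented in Lua (all the field names here are made up for illustration, not an actual file format):

-- hypothetical in-memory layout of a RAS style: tracks hold pattern boxes,
-- and each box carries its phrase reference plus the playing conditions above
local style = {
  name   = "Demo Style",
  tempo  = 120,
  tracks = {
    DRUMS = {
      { phrase = "main_beat", bars = 2, weight = 9 },
      { phrase = "fill_1",    bars = 1, weight = 3, play_after_bars = 8 },
    },
    BASS = {
      -- notes recorded around C; mostly the rhythm/groove matters
      { phrase = "c7_bass_riff", bars = 2, weight = 9 },
    },
    PIANO = {
      { phrase = "c7_comping", bars = 2, weight = 9 },
    },
  },
}
-- saving the style could then be as simple as serializing this table
-- alongside the .xrns template patterns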

 

Next, RAS will have the CONDUCTOR section in the UI like INTRO, VERSE A, VERSE B, CHORUS A, CHORUS B, MIDDLE 8, ENDING which are further user customizable. These can be a set of buttons on a horizontal groupbox inside RAS on the main toolbar. 

 

The rest of the work will be done by our algorithms. Looking closely at BIAB, I see 3 main modes of 'intelligent' chord analysis for live 'tracking' and 'mapping' of user input. Similar to the PSR keyboards, but a lot more musical if we get it right and implement it.

 

MODE A: RAS Noob Mode, or Simple Transpose. CEGBb will be transposed for a user F7 chord as FACEb.

MODE B: RAS Smarter Mode, or Voice Leading. CEGBb will be transposed for a user F7 chord as CEbFA, taking into account the closest movements to the destination intervals.

MODE C: RAS Mimic Mode, or Riffs Based. A 2 bar (8 beats in 4/4) pattern with notes stored in the style pattern box as C,C,E,G,Bb,E,G, as a short phrase or riff, will be played in accordance with the validity of the user chord's notes against the current key setting, while also taking the style 'riff pattern' into account. Thus an F7 user chord will play as F,F,A,C,Eb,A,C in the same rhythm as the style pattern.
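
A minimal Lua sketch of MODE A (the note numbers are only illustrative, and the user chord root is assumed to come from the chord analyzer):

-- MODE A, simple transpose: shift every stored style note by the user chord's root offset
local function mode_a_transpose(style_notes, root_offset)
  local out = {}
  for i, note in ipairs(style_notes) do
    out[i] = note + root_offset
  end
  return out
end

local c7 = { 48, 52, 55, 58 }        -- C E G Bb voicing stored in the style
local f7 = mode_a_transpose(c7, 5)   -- user plays F7 (5 semitones up): F A C Eb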

 

That is all there is to it. If our RAS can get a very simple style working -- one pattern box filled with just drums, bass and piano -- take a user input, use MODE A for starters, and play in sync with our expectations, we already have an accompaniment system in place. We can then just extend the RAS code base to include MODE B and MODE C later on. First and foremost we need a very basic working prototype that does not have delays or other issues.

 

There is another factor in the StyleMaker in BIAB: it's called 'weights'. It works like this: each of the pattern boxes I mentioned above has a number from 1-9 filled in that dictates the probability of that pattern playing. Again, this is something we can certainly implement right after we get the first prototype working and sounding good enough.

 

BIAB however has one major problem: it is not really a LIVE accompaniment system, in that you still have to enter the chords and let BIAB perform ALL this analysis, which makes sense if you read the manuals and demo the Melodist and Soloist modules. If the style or song is familiar to you, then of course BIAB will work as an auto accompaniment. The BIAB algo actually 'looks' into the 'future' based on the user's input text chords and performs post analysis on the style's pattern generation and note sequences, based on things like "if the next chord is a mediant minor to the current chord in the measure, play pattern 7 and not pattern 5, which is the default". It takes 2-5-1 progressions into account and THEN generates the full backing track, whereas playing live with our RAS there is no way of knowing all this beforehand. Of course some amount of intelligence can be programmed -- if the user DOES input a 2-5-1, RAS can be in expectant mode and let a pattern box play accordingly -- but that is not what we are looking for right now. BIAB has very strong points in analysis, but it does them a posteriori, after the user has entered all the chords in text form, not LIVE.

 

RAS DEV FEATURE PRIORITY LIST:

 

I will post a sample mockup in GIMP later on for the UI.

I also think programming our own sequencer etc is not the best use of time, unless we are looking for a full VSTi conversion. For our purposes, RAS must work in Renoise first. Renoise already does all the transport, navigation and VSTi integration etc, so we can concentrate on the more essential components.

 

1) Lua GUI for RAS.

 

2) Processing user input for chord analysis.

 

3) For pattern box filling we can use another Lua function internally to copy-paste from the pattern editor instead of making our own capturing mechanism. This also makes good use of the Renoise GUI instead of overly complicating our RAS. Renoise handles MIDI very well and we can leverage that instead of using external MIDI libraries too soon in this project.

 

4) Implementing the live 'in sync' transposition (MODE A/B/C) without stopping playback. This will be the second most important feature after the chord analyser, because if this breaks, then all we have is a ChordAnalyser or a ChordBuilder (like @joule's tool), which is a step forward but not the real USP of RAS.

 

5) In RAS mode, Renoise MIDI input has to be split: the lead MIDI notes are directed to the synth or piano, while the left-hand range of MIDI notes (C-1 to B-2) does not sound on its own; instead, RAS fills in those sounds from the style. This is an implementation topic. Internally, I think the best way to handle the data mapping between the live input (via chord builder/chord pads or user MIDI input) and the pattern boxes is to use the Phrase editor functions: RAS creates a new phrase for every new chord played, from the analysed template of the style pattern that will play. It can use 2 phrases as a double-buffering system, writing to one while clearing the other, then triggering that particular phrase, and it should also be able to start it from somewhere in the middle depending on when the user inputs the chord. Renoise will handle all the multi-threaded playing etc; we just need to capture and process (map) the live input in time for RAS/Renoise to play the mapped input without glitches.
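
The double-buffering idea in point 5, sketched in plain Lua (write_phrase and trigger_phrase are placeholders here, not actual Renoise API calls):

-- two phrase slots used alternately: write the freshly mapped pattern into the idle
-- slot, trigger it, and the old slot becomes free for the next chord change
local active_slot = 1

local function on_new_chord(mapped_notes, write_phrase, trigger_phrase)
  local idle_slot = (active_slot == 1) and 2 or 1
  write_phrase(idle_slot, mapped_notes)   -- render the style template mapped to the new chord
  trigger_phrase(idle_slot)               -- switch playback to the freshly written phrase
  active_slot = idle_slot
end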

 

@joule obviously has a lead on us in terms of Lua knowledge and a similarly purposed tool getting ready behind the scenes, but it all depends on how quickly we can get up to speed with the Renoise API, Lua, and everything else in between.

 

 

TIMELINE:

 

I will take one month's time to research the API and learn Lua. Eventually I also want to convert RAS into a VSTi, so I have other things to do in parallel. I am also building the TOC for the book and will start contacting publishers to put my pitch forward, so I estimate a good 3-4 months before I personally can come up with a working version of something. Plus we have our own lives to live, and this is all behind-the-scenes, unpaid work, so like Open Source software, we will have to be patient. But from my analysis and prima facie research on the available software and hardware, I am very sure that it's totally doable with a little persistence from our end.

 

Some Renoise dev help is certainly appreciated, and if senior devs can come up with a prototype sooner than us, we all win either way. @danoise and @raul have already said that they are willing to help in the learning process; however, the doing, I suppose, will have to come from us, the new guys, who don't know shit. But it will be fun. I am not the person to give up on things, especially when I can learn so much from them.

 

 

 

B4un6Y0.png

 

 

Proof of Concept Prototype of the Renoise Ztyl Zyztim or RZZ.

 

Faster to do this on http://moqups.com even if it lacks more advanced Photoshop features.

 

Features (from top to bottom):

1) Conductor Section

2) Chord Analyzer LCD and current Style/Song Key

3) Pattern manager, with current Tab selection on the Core or the Pattern Manager pattern box area. The Conditions tab will open the pattern box conditions pane. The Config tab is for other tool-wide settings.

4) ChordPad section for input of chord and chord type while the RZZ is playing.

5) Parts of the GUI can be collapsed or hidden.


Edited by encryptedmind, 27 July 2017 - 06:07.



#167 encryptedmind


Posted 26 July 2017 - 22:36

xStream is all about those things. Which is also why it's arguably a bit hard to use, heh.

But then, I did point out that an xStream spinoff would be an obvious choice for building RAS, or a prototype thereof. 

 

XStream looks like a great place to get lost :) But I see the power in it, so all in good time. In fact it might be very useful to learn XStream first if it gets the job done more quickly. I will investigate this too. Live coding will be added in another take once I get the hang of the Lua API.

 

[after playing with XStream edit:]

 

Playing with XStream, and it looks like this could very well be the RAS in the making, provided we script it very well. It feels good to already have a data manipulator and inserter right inside Renoise. @danoise has done a phenomenal job with this tool :) From here, manipulating input data into output data in a 'stream' fashion is exactly what the RAS might need. Fsus4 and others, I suggest you play with XStream and see if it handles the use cases for RAS.

 

I just finished reading the Renoise Lua API in detail and I now get the idea of what the Renoise Song object exposes (properties, functions, iterators etc). Good stuff, with some glaring omissions or rather incomplete parts, as @danoise says (in another post about selecting events from a user selection in the V/P/D columns) regarding the level of detail it is possible to read from the API, for instance.

 

 
-- The currently edited sub column type within the selected note/effect column.
renoise.song().selected_sub_column_type
  -> [read-only, enum = SUB_COLUMN]
 
-- Read/write access to the selection in the pattern editor.
-- The property is a table with the following members:
--
--  {
--    start_line,     -- Start pattern line index
--    start_track,    -- Start track index
--    start_column,   -- Start column index within start_track   
-- 
--    end_line,       -- End pattern line index
--    end_track,      -- End track index
--    end_column      -- End column index within end_track
--  }
--

Edited by encryptedmind, 27 July 2017 - 06:11.



#168 danoise


Posted 26 July 2017 - 23:31

and I'll also explore in private some other stuff related to batch creating .xrns files (from a server) for my specific needs.

 

Here are a couple of good links then: 

 

Forum dedicated to the XML fileformat (preceded the lua API) 

http://forum.renoise...eformats-tools/

 

XRNS-PHP: Php scripts that can parse/transform Renoise XML 

http://xrns-php.sourceforge.net/

 

The XRNS-PHP project actually got parked in 2013. Yet, four years later I still use some of the scripts from time to time. As a PHP platform it has continued to work, since it's parsing Renoise XML schemas and not assuming anything about the songs. But the *scripts* themselves do assume things... so only the simplest of them are still working.


  • Fsus4 and encryptedmind like this



#169 encryptedmind


Posted 26 July 2017 - 23:53

The XStream Arpeggiator model is a very good starter for the RAS. I was jamming with it just now and it plays notes in a single column based on the chord input via MIDI. There are options to arp in different directions. There is no glitch in playback and the integration with Renoise is tight. I think skinning XStream later on as the RAS tool might also just work, with @danoise's help. Playback in the Arp can be scheduled per Beat and per Pattern, which also matches the expectations we have for the RAS. XStream can also send and receive notes back and forth to the pattern editor.

 

Keeping the above in mind: if, for now (the ver 0.01 demo), we can just make XStream play back one user-recorded 2 bar pattern in a buffer track (say Track 10), and let the live coding in XStream do similar arpeggiation logic but with a little more extensive mapping -- using the timings in the 2 bar pattern and the relative interval sequence to transform the live input in XStream (via the MIDI options) -- I say we already have the RAS skeleton and muscle. Next we can add a style pattern bank, GUI controls and a skin. Life will breathe on its own.

 

@danoise: Can you put some extra work into the RAS feature set? It seems that, with what you have already achieved with XStream, writing a style 2 bar pattern mapping script would be at most a day or two's work for you. Are you willing to do this from your end, while we focus on the VSTi version of it? Of course we will be learning the Lua language, the API and XStream side by side, since whatever format we choose we still have to implement the logic and learn from all around. So it's not being evasive, but rather than re-inventing the wheel -- you yourself said in the above posts that XStream can do pretty much all of the PSR styles algo that I described -- why don't you take the lead on this project? It would be great if your expertise could expedite the development of RAS.

 

I am reading older posts from the Scripting Forums and seeing that XStream was being developed over the last 2 years. That is 2 years of hard work and midnight oil. If many see this as a viable entry point in completing a 3rd product line up for Renoise, I think your tool will be the best way to move forward towards RAS.

 

I am playing with XStream, and with live MIDI input, pattern note read and write support, Lua script logic incorporation and scheduling, I really don't see why we, the new guys, should go through all the pain of learning bit by bit what took you years to develop and figure out. Our involvement will start with ideas and initiative, and conversion to other formats like VSTi, at least for starters. What do you think?

 

The core PSR Styles logic and the Chord Analyzer can be two individual models in XStream. You can simply pipe Model A (Chord Analyzer) into Model B (Styles Mapper). I see from the forums that @joule and yourself have had many conversations about using models for chords and Euclidean rhythms; I wonder if the Chord Analyzer module is done by now? It would be super fast if we could collect all the old scripts (conversations) and secret tools (non-NDA parts :)) and start working on new ones.


Edited by encryptedmind, 27 July 2017 - 07:30.



#170 encryptedmind


Posted 27 July 2017 - 05:15

.


Edited by encryptedmind, 27 July 2017 - 06:04.



#171 encryptedmind


Posted 27 July 2017 - 06:11

.


Edited by encryptedmind, 27 July 2017 - 06:21.



#172 joule


Posted 27 July 2017 - 09:40

xStream might be useful if you want to make a realtime system (akin to playing a hardware keyboard arranger).

 

However, I've skipped that idea for the simple reason that I actually don't think that the API song access is fast enough. My guess is that you will reach the limit after 4 tracks or so, given a moderate BPM.

 

I have gone for a custom system, aimed at being used as an offline production tool and not as a playback tool. It's difficult making a tool that covers both cases, and xStream fits the latter better than the former. The former is obviously more important to me. The main selling point to me has always been rapid song prototyping, not realtime keyboard/chordpad accompaniment.

 

Just some short advice..


Edited by joule, 27 July 2017 - 09:44.


#173 Circe


Posted 27 July 2017 - 09:55

@Joule....Check your pm, please.

#174 encryptedmind


Posted 27 July 2017 - 10:30

xStream might be useful if you want to make a realtime system (akin to playing a hardware keyboard arranger).

 

However, I've skipped that idea for the simple reason that I actually don't think that the API song access is fast enough. My guess is that you will reach the limit after 4 tracks or so, given a moderate BPM.

 

I have gone for a custom system, aimed at being used as an offline production tool and not as a playback tool. It's difficult making a tool that covers both cases, and xStream fits the latter better than the former. The former is obviously more important to me. The main selling point to me has always been rapid song prototyping, not realtime keyboard/chordpad accompaniment.

 

Just some short advice..

 

Valid points, though prototyping a live system is exactly what RAS will be about as its primary feature set. If you look at the PSR series, it's the live feel and construction of a song played live via the styles that is the USP. The users are not expecting a DAW inside their accompaniment module. BIAB is more involved and is currently the best-selling 'song prototyping' tool for offline songwriting; it cannot be used directly in Renoise (even as a VSTi), but its export features with MusicXML, MIDI files and WAV are good enough, along with the pretty cool Melodist and Soloist generators. The good side of using styles is the lack of genuine complexity; it's mostly just implementation issues once we get there. The drum track on its own needs no transforming, just switching. Bass and guitar will be guided by the Model A Chord Analyzer, which will output the bass note, root note and chord quality, as well as the voicing templates for the next chord, to check against when aligning its own voicings. Thus the piano track is the most important, because it will contain the chord and bass info. The bass track will mostly just provide the rhythmic template. The rest of the tracks, like leads etc, can just be recorded straight in the pattern editor, and maybe we can add a looper module to collect the data and shift the tracks automatically so that the notes do not overlap. Renoise will take care of playing the tracks themselves.

 

A single XStream instance that can transform user MIDI is all we need, and the piano and bass can both be guided by just writing them to their respective pre-arranged tracks. The new elements like guitar etc will mostly be repetitive riffs; they are always 2 bars long, but may play for only 1 bar. 8 beats is the longest in a BIAB style for the most part. They use pattern chains and conditions to offset the lack of more bars, and a probability system to choose which patterns to play. All of this can certainly be done in XStream. A GUI facelift will be needed, though, for the Conductor parts to work properly, but that can be offset by using XStream presets themselves. Let's see if @danoise sees the feasibility analysis as positive.

 

A song prototyping tool really does not get better than BIAB; for electronic styles you have a multitude of options, but Chord Pads are just one element of the process. In fact, an XStream model can create an entire pattern list for a song, place separator markers such as INTRO, ENDING etc, and populate the drum tracks by copy-pasting pattern style data. Repeat for the bass; at most, XStream has to handle 3-4 tracks to achieve the PSR Styles effect. XStream has everything else in place, and without reinventing the wheel I see it as a perfect tool both for learning Lua and scripting and for developing models, and finally getting the RAS done. After that, using this as the playground, we can prototype a VSTi version for the next iteration.

 

Lets see what happens. 

 

I would love to see your tool as well, for the offline prototyping experience. I wonder if you plan to implement a drum sequencer as well and slowly morph your tool into a mini DAW of sorts? Or will it be like ChordPulse, since Renoise itself is a song prototyping and production tool? Or do you mean Cognitone-type tool features with a composer algo as well? In that case your use case is quite different from RAS.


Edited by encryptedmind, 27 July 2017 - 10:37.



#175 joule


Posted 27 July 2017 - 10:48

I would love to see your tool as well, for the offline prototyping experience. I wonder if you plan to implement a drum sequencer as well and slowly morph your tool into a mini DAW of sorts? Or will it be like ChordPulse, since Renoise itself is a song prototyping and production tool? Or do you mean Cognitone-type tool features with a composer algo as well? In that case your use case is quite different from RAS.

 

Nope.

 

Maybe the next version of the API will make a realtime tool possible. I wish you good luck.