Feature Design: Zoomable Pattern Editor Discussion

So, my wet dream for the longest time now has been to have patterns as instruments… this clip idea comes damned close… so will we be able to change the pitch of all the notes in a clip simply by playing the clip’s master instrument with a different note? If so, pwnage… and also, will there be a way to change the clip’s tempo too? This wouldn’t be as important as pitch imo, but cool nonetheless.

… I guess the other question would be… if it does change pitch, will it be the pitch of individual samples, or the notes played by the instruments in the clip?

Also, will retrig of clips be possible? how about offsets? will we be able to play multiple instances of the same clip at the same time? on different note columns? how about different tracks?

[edit] just noticed you guys already started discussing this… sorry about the redundancy… some of my questions still apply though :P

Very nice flash presentation! :)
I would want to do away with the whole speed concept if this was introduced, as mixing speed and zoom levels could be pretty weird.
I’ve been longing for clips though, they just offer so much flexibility! I remember an old Amiga tracker that had this feature, but I can’t remember its name for the life of me…

+11!

+328947239849237498237492398.333333333333333 etc etc

Very sexy.
A couple of things, tho:

  1. When we zoom out to 25%, you say that notes are represented by single pixels, but in the mockup I am unable to see the bunch of notes in the rightmost track. My suggestion is to keep the color coding of the original track data when representing steps as single pixels, in this case it would be white-green-yellow-blue from left to right. That way we could easily tell which part of a track is populated by data, and not only notes but also commands.

  2. Also at 25% zoom level, you say that the cursor acts as a magnifying glass. If we scroll with cursors up or down, does it move by 4 steps or by 1 step?

  3. What if we place clips at positions that are not multiples of 4? How is this represented at 25%?

  4. Sort of regarding BYTE-smasher’s point no. 5: I find the 400% zoom a bit confusing. If sub-tick notes are converted to note delays, then aren’t we supposed to make this a persistent logic at zoom levels less than 100% too, and make similar representations of the data we don’t actually see?
    Perhaps not, but the whole issue is problematic, imo. Here’s why.
    There are two options:
    a) 400% (or 8x or 16x…) is the largest zoom
    b) there is infinite zoom

While keeping in mind that:
“0dxx - Delay notes in track-row xx ticks before playing. (00 - speed)”

If we stick to a), then not all delay commands will be translatable into this zoom level, which is bad, because then instead of seeing notes you get to see more delay commands, only with different params. What I mean is: say the speed is 6; it is not divisible by 4, 8 or 16.
Also, if we have any fixed maximum zoom, then why not just set the speed higher and get the larger resolution to begin with (like we do now)? Is limited zoom in fact useless?
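To illustrate the divisibility problem, here’s a rough sketch (all names are hypothetical, not actual Renoise code): it checks which tick delays would land exactly on a sub-row for a given speed and zoom factor.

```python
# Illustrative sketch (all names hypothetical): given `speed` ticks per
# pattern row, which tick delays (0Dxx) land exactly on a sub-row when
# one row is split into `zoom` sub-rows?
def representable_delays(speed, zoom):
    # A delay of d ticks falls on sub-row d * zoom / speed, which is
    # only an exact sub-row when d * zoom is divisible by speed.
    return [d for d in range(speed) if (d * zoom) % speed == 0]

print(representable_delays(6, 4))  # speed 6: only delays 0 and 3 map cleanly
print(representable_delays(8, 4))  # speed 8: every other delay maps cleanly
```

With speed 6, only delays 0 and 3 survive a 4x zoom, while a power-of-two speed like 8 maps every other delay cleanly, which is roughly the problem described above.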

In the case of b) we are dealing with a possibly infinite amount of data per step, and that is, for one thing, not possible to represent with delay commands when zoomed out, even at 100%. If we don’t use delay to indicate that there is more data than we can see, we’d have to use another method.

Also, on a sort of unrelated note, I’d like to be able to compress tracks horizontally (like a horizontal zoom out) to be able to see all tracks on the screen while playing a song. Also useful for getting around the tracks.

If we do away with speed, then we deffo need infinite zoom, or we have limits that we didn’t have before.

no matter if this is feasible or not in the end, you did a very good job on sketching this. splendid approach!

Very good addition.

I’m interested - can you do a small sketch of this idea?

I had imagined that keyboard interaction was unchanged (single-row increments), but that mouse interaction was quantized with the current zoom factor (x4, in this case). So if a clip was dragged using the mouse, it would quantize to the zoom factor (every 4th row in the pattern), but if using keyboard, the cursor would allow us to edit each row.
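As a rough sketch of that interaction (function names are hypothetical, just to illustrate the idea):

```python
# Hypothetical sketch of the proposed interaction: mouse drags snap to the
# current zoom factor, while the keyboard cursor still moves row by row.
def snap_to_zoom(row, zoom_factor):
    # Quantize a mouse-dragged clip position down to the nearest
    # zoom-step boundary (every 4th row at 25% zoom, for example).
    return (row // zoom_factor) * zoom_factor

def move_cursor(row, delta):
    # Keyboard movement stays unquantized: single-row increments.
    return max(0, row + delta)

print(snap_to_zoom(5, 4))  # dragging to row 5 snaps back to row 4
print(move_cursor(5, 1))   # keyboard still reaches row 6
```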

Good point, and my guess is as good as yours. But I’d say that, intuitively, the best thing would be to allow the clip to be drawn with an offset (between the rows, so to speak) until we did something with it (possibly quantizing its position by using the mouse).

I’d say this is a general problem with any tick-based concept: if timing resolution will increase, the delay commands would need to change - meaning that old xrns songs would have to be converted to a new format. And this is where we could ask ourselves, if we should ditch the antiquated tick-delay commands, and go for a more fluid system. But since this is all speculation, I decided to base the concept on the existing tick/delay implementation, which works rather nicely I think :)

You did see the 6.25% zoom level, right? Question is: should this work independently of horizontal zoom?

Speed is limited as it is, and what’s more, it’s quite confusing. A lower speed means visibly faster progress through the pattern even though the BPM stays the same. In my book this is bad for a number of reasons.

What if each clip could have an editable resolution? One clip would play as fast as the next one in overview-mode but two clips could have different zoom-depth allowing for different timing needs.

Here it is. Of course, I’d vote against using yellow as it represents no data, but I stuck with original color codes from your mockup.

Now it’s SORT-OF visible that every line contains a note, but only every second line has a command. Also, differently colored commands would be discernible at this zoom level. In any case, it should be much easier to find the exact bit you want to edit and zoom in.
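The single-pixel color-coding idea could be sketched roughly like this (the data structures are hypothetical, just for illustration):

```python
# Illustrative sketch (structures hypothetical): collapse each group of
# `zoom` rows into one pixel row, keeping the per-column color coding so
# that notes and commands stay distinguishable even as single pixels.
COLORS = {"note": "white", "vol": "green", "pan": "yellow", "fx": "blue"}

def collapse(rows, zoom=4):
    pixel_rows = []
    for i in range(0, len(rows), zoom):
        group = rows[i:i + zoom]
        # Light a column's pixel (in its own color) if any row in the
        # group contains data for that column.
        pixel_rows.append({col: any(r.get(col) for r in group)
                           for col in COLORS})
    return pixel_rows

rows = [{"note": "C-4"}, {}, {"fx": "0D03"}, {}] + [{}] * 4
print(collapse(rows))  # first pixel row lights note+fx, second stays dark
```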

This is a very usable concept (like in Reason, for example), but it would make it impossible to view all the data side by side, which is one of the main strengths of trackers, imo.
Dunno, maybe there IS a way to get the best of both worlds.

Indeed. That is the main problem with this concept, and is also why IMO we should separate the clips from the instrument list and let that be optional.

Then the separate clip editor with independent speed is made in the instrument itself, something like this (from the RNI Future thread):

Then you choose to use clips side by side directly in a normal pattern, or to use/track separate individual patterns. You then set the settings for individual patterns in the instrument settings, where you choose whether to sync the pattern or not. And you map the pattern like any sample.
A clearer difference in my opinion between clips and instrument-patterns.
But the basic idea is the same…

Ace concept, how long till we’ll see this for real?

The idea of triggering clips as instruments is the BEST concept in all of Renoise’s history IMO…

danoise: any ideas on the questions I asked regarding clips?

+1
Saw this concept in AHX, and unless I’m mistaken that oldie Future Composer had something like it as well. It would also be great if there was a way to visually see the contents of the ‘MultiInstrument’ encapsulated somehow in the main pattern editor.
There’s even a lot of unused space in the instrument editor, so it’s an ideal match! FTW

Hey, you tell me! This is a discussion :P

But OK, these were the ones you meant?

yes, it’s supposed to work that way.

Conceptually, yes. But we would need to introduce a new, clip-only, tempo command, since the normal F1xx and F0xx commands deal with the global tempo.

I see what you mean, especially when working with a multisampled instrument. Perhaps it should be an optional setting for each clip?

To demonstrate, imagine that we have a clip containing notes from a multisampled drumkit, and we trigger that clip at various pitches: c4, c#4, f4 and so on… With simple note transpose, the drums would swap samples (hihat becoming snare, kick becoming crash cymbal or whatever). With the second mode, the pitch of each sample would transpose, making the hihat stay a hihat, its pitch changing according to the notes being played. I think both are valid options!
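A rough sketch of the two modes (the keymap and function names are hypothetical, just to make the difference concrete):

```python
# Hypothetical sketch of the two clip-transpose modes described above,
# using a drumkit-style keymap (note number -> sample name).
KEYMAP = {48: "kick", 49: "snare", 50: "hihat", 51: "crash"}

def transpose_notes(clip, semitones):
    # Mode 1: shift the note values themselves, so the keymap picks
    # different samples (a hihat can become a crash, and so on).
    return [KEYMAP[note + semitones] for note in clip]

def transpose_samples(clip, semitones):
    # Mode 2: keep each note's original sample and shift only its
    # playback pitch, returned here as (sample, pitch offset) pairs.
    return [(KEYMAP[note], semitones) for note in clip]

clip = [48, 50]  # kick, hihat
print(transpose_notes(clip, 1))    # samples swap: ['snare', 'crash']
print(transpose_samples(clip, 1))  # same samples, pitched up one semitone
```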

Retrig: yeah, I think so. Actually, we need to consider how each and every command would work.
Different note columns: I thought about this, and my conclusion (and the way I’ve visualized it in the screenshots) is to limit it to a single clip per track. Why? Because the maximum number of note columns is 12, and having two clips, each with, say, 10 columns, would exceed that. But we really need Taktik’s input on this before any of us says what is possible and what isn’t.
Multiple playing instances/different tracks: sure. Also, with the additional benefit of “dynamic clip automation”, which would let you have the same automation curve control different parameters depending on what track the clip is located in (this is described in depth in the feature design).

Two things I don’t get:
1: Surely the notes would not become visible side by side just by embedding them inside instruments?
2: Note sequences would be confined to using whatever samples the instrument has to offer. How is that more flexible?

Great job on visualizing. Personally, I’d prefer a zoom-in function like the one you’ve shown, but with an arranger instead of the zoom-out. To me it makes much more sense to arrange clips in a separate step.

1: That’s right. You only see the note of the instrument that is triggering the instrument-pattern, whereas a clip is only like a block selection.
If you add 2 clips side by side, you don’t see two notes, one representing each clip; instead you see the entire content of the clips side by side. The clip is not hooked up to any instrument.
If you, however, hook it up to an instrument, then it will become embedded and you will see the clip represented as an instrument note. That way you have both worlds.

  1. Not true. You can add shortcuts to any instrument in the project. You just drag and drop any instrument into the key mapper:
    Something like this picture.

From that list you can very easily drag/drop clips onto instruments.

Now, with that said, there is nothing stopping us from having options to automatically insert clips using the keyboard. But you will then not see a note inserted, but the entire clip content, just like you would copy/paste a block. But if you want instrument behavior (instrument commands and fx, note-offs etc.), then you must embed the clip inside an instrument slot (you then convert it to an instrument-pattern).

To sum it up:
clip = a block selection.
instrument-pattern = an entire independent pattern embedded in an instrument structure.

With your concept, how will you decide whether you want to insert a single note or the entire clip content (extracted content) into the pattern when arranging?

OK, I understand it a bit more now, but still not completely. And perhaps my preferred workflow is different from yours, but I’d prefer to have the ability to edit clips while simultaneously having access to the pattern arranger + editor. And, lest we forget, two of the three basic points of the zoomable editor concept deal with something other than clips:

  • There is currently no visual representation of the song flow
  • Renoise provides only a few tools for working on song-wide scope (Advanced Edit)

By the “entire clip content”, you mean the actual notes embedded inside the clip?
It would be simple to convert a sequence of notes into a clip, and the opposite would of course have to be possible as well (extracting the clip’s notes, turning them into normal notes) - is that what you meant?

And there’s no way of confusing notes and clips, since they are visually very different. Clips are also stored in another part of the instrument table, since they can be dragged into the pattern (unlike instruments):

(BTW picture is outdated - I’ve ditched the “automation” clips in favor of “dynamically linked automation” :) )

Ok… let’s not talk past each other here…
The last two points here are obvious. Of course we want that; I have never said anything else.
Also, seeing several things at once: that can be done no matter what clip concept is used…

Exactly.
You can for instance make a clip by just marking a block and choosing ‘add clip’ (this is, however, also done automatically as you start entering data into an empty pattern; you will get track-sized clips by default).

How would that be simple?
Let’s say I have made an instrument that contains a pattern (I trigger a pattern inside the instrument with it); now I play a chord and also add some pattern commands to that.
How could I possibly convert that into extracted data? What about the timing, the commands, the space it would occupy, the DSPs used inside the instrument, the ADSR envelopes, etc.?

Ok, so the clips are NOT connected to instruments, then? That was ‘my’ concept all the way ;)
But why do you have to keep the clips in the instrument list, then?
If you track normally there will be tons of clips (one for each track in each pattern, normally).
I would then much prefer to have them separated into their own list:

We are talking the same language here. But I store them in another folder in the same list you see in the picture above. You can use and reuse the automation blocks on anything. They are also optionally layered.
One of the top-left buttons you see there decides whether you want to include layered clips or not (if you collapse the automation clips like in the picture below, then you will copy/paste all clips if the ‘layered’ button is activated):

Compare the second track from the top in the pictures that have the automation clips collapsed (last picture) and extracted (first picture).
As long as I have the layered button activated (the rightmost button), then all clips (automation and fx clips) will also be copied/pasted along. The other buttons you see there are visual buttons: to see the automation/fx clips, and to see details on the clips.

You see, we are in the same ballpark here. Just a bit of disagreement on how to structure a few things…

I’m just trying to point out the main differences between what I refer to as clips and what I refer to as instrument-patterns.
All you see above here is about clips. No instrument is involved at all (except for the instruments the clips point to, of course).
When talking about instrument-clips, or what I really refer to as instrument-patterns, all that is very well explained in the RNI future thread…

Now it is up to the user how they will use this.
If you just want a simple clip arranger, there is no need to use instrument properties for that. It is just a way to copy/paste pattern and automation data much faster (blocks of data are stored as clips in a clip list).
If you, however, wanna ‘track’ your own instrument to trigger entire patterns (Buzz style), then you do that in the instrument section of Renoise.
If you have a cool clip and wanna put instrument properties on it, you could simply drag/drop that clip from the clip list into an empty instrument slot, then put instrument gadgets on it :)

If you wanna use both methods, then just track individual instrument patterns, then insert the note for that instrument and make a clip out of it (the clip then represents the instrument pattern).
If your instrument patterns are one real pattern in length, then you just insert one note and the clip will be made automatically at the same size (the default auto-create clip size).

You then have your ‘external’ clip editor in the instrument editor. If Renoise becomes more flexible in the future, you could see both windows at the same time, if you prefer that…

Pysj, the more I understand your ideas, the more I think we talk about the same thing :D
Just a couple of quick notes: I decided to integrate clips into the instrument table since they are input in the pattern editor similarly to instruments. Putting them in the same spot would make that obvious to everyone. Also, they are NOT created automatically; this would be optional. And, in my previous post to byte-smasher, I was arguing that clips would perhaps be limited by the current track structure, so no full patterns inside clips. However, as I also pointed out, it might be too early to decide what is possible and what isn’t. To answer that, I really think Taktik needs to get involved.

I’m giving this topic a bit of a rest now, and going to spend some time making music for a change!
People - please continue to bring your ideas forth. We will do a revised version later on.

Yes, most parts are the same. This is just what has been discussed for more than 5 years now anyway :)

But just answer this simple question:

When you insert a clip into the clip arranger (= into the main pattern),
what will you see? Will you see one single note representing that clip? Or will you see the content of the clip (the extracted clip)?

In other words, are you forced to use an external clip editor where you edit one clip at a time? Or can you directly edit the source of many clips side by side in the pattern editor?

I never got a straight answer from you about that… or I missed it…

The reason for automatically making clips is to keep backward compatibility with today’s pattern structure. Otherwise you will have problems, and you would force people to make their own clips, which is not acceptable IMO. But you can read more about that specific problem quite early in the large pinned arranger thread.