Brainstorming: Audio Tracks

0l3ks4, that visualization is perfect for me… +100

I’ve been dreaming of this for a long, long time. I had the exact same pictures in my head of the vertical audio tracks; it just seems so obvious and logical.

+1

I’ve been wanting this too.
I’ve been wanting this too.
Another way it could be implemented: if you Ctrl-click (or something) on a note, the waveform of that note appears scaled to the pattern, maybe even as an overlay for as long as you hold the click. It would be really good to see exactly where an individual note will end; probably most useful for reversed perc hits or samples that stop suddenly.

To add a few more thoughts:

Renoise is my DAW, I don’t look at it as a tracker anymore because tracking has become one small component of Renoise.

Renoise already does what I’m suggesting, it just doesn’t display it the way I visualized. Audio tracks can have multiple columns just like tracker tracks and really only need to retain the start point of the samples being played (which it already does now via a note). If a pattern with audio is reused, the audio track would behave just like a tracker track: when the start point of the audio is triggered again, it will either cut off the audio that is already playing, let it continue underneath the new audio, or send it a note off (this would use the same setting in the sample instrument properties window).
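
A rough sketch of that retrigger behaviour in Python (the voice objects and their stop/note-off methods are made up for illustration, not Renoise’s actual internals):

```python
from enum import Enum

class RetriggerMode(Enum):
    CUT = "cut"            # stop the audio that is already playing
    CONTINUE = "continue"  # let it keep playing underneath the new audio
    NOTE_OFF = "note_off"  # send a note-off so its release phase plays out

def on_start_point_retriggered(playing_voice, new_voice, mode):
    """What happens to a voice that is still playing when its start point
    comes around again in a reused pattern. Returns the voices left alive."""
    if playing_voice is None:
        return [new_voice]
    if mode is RetriggerMode.CUT:
        playing_voice.stop()           # hypothetical method: kill the old voice
        return [new_voice]
    if mode is RetriggerMode.NOTE_OFF:
        playing_voice.note_off()       # hypothetical method: enter release phase
    return [playing_voice, new_voice]  # CONTINUE / release tail plus new audio
```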

Obviously, if a sample is longer than the pattern, the remaining audio will automatically trail into the following pattern (unless there is no pattern there, in which case Renoise could warn you that there is overset audio on pattern X, similar to how graphic design software warns you about overset type on a page layout).

Once the visualization is established, the fun part comes into play: editing the audio with the full functionality of the sample editor right from the pattern editor (clicking the instrument editor tab could bring these functions up in the bottom tray, like the automation editor).

This would make live recordings so much easier to integrate into our music, and it is what I believe is really holding Renoise back from being a complete package right now. We can keep making excuses that other software does it better, or that there’s a complicated plugin trick to solve this, but it will never truly be solved until it is built in, pure and simple.

Oh, and one more thing - don’t be prejudiced against vertical waveforms, that’s audio discrimination ;)

Brendan

…and Zeus himself would step down out of the clouds.

I agree, unless there was some way of having the audio track not linked to the pattern itself (even though it’s still visible in the patt, as a guide).

THISS!!

What this really boils down to is freezing a track. Renoise is a sampler/sequencer through and through; you can picture a note event triggering a sample as having a “wave trail” or “wave cascade” or whatever you want to call it following the note event. The difference is that we’re not seeing it, and we’re not able to rewind in it or shuttle/jog in it.

An “audio track” is a misnomer, since this is what Renoise is doing all the way through anyway. The problem for me is in creating a new track type. I don’t think we need that, and I think it nudges close to breaking the GUI “language” of the software.

What I think is a more solid compromise is the idea of a frozen track, which basically does a render of that track and makes it uneditable until you unfreeze it, but syncs its playback offset to the line the playhead is currently at. After all, we don’t really need to be able to scrub so much as we need to be able to jump three minutes into a voice take without having to listen to the whole track over again or cut the take up. I don’t think calculating the playback start offset is tough math, knowing where the playhead is within the application.
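
A minimal sketch of that offset calculation in Python, assuming a constant BPM and lines-per-beat (Renoise’s real tempo/speed handling is more involved, so treat this as illustrative):

```python
def playback_offset_frames(playhead_line, trigger_line, bpm, lines_per_beat,
                           sample_rate):
    """How many sample frames into a frozen (rendered) track playback should
    start, given the line the playhead sits on and the line the take was
    triggered at. Assumes a constant tempo."""
    lines_elapsed = playhead_line - trigger_line
    seconds_per_line = 60.0 / (bpm * lines_per_beat)
    return int(lines_elapsed * seconds_per_line * sample_rate)

# Jumping to line 512 of a take triggered on line 0, at 120 BPM, 4 lines
# per beat, 44.1 kHz: 512 * 0.125 s * 44100 = 2,822,400 frames (64 seconds).
```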

In this sense, starting an “audio track” is just a case of doing the take, triggering the recorded sample on line 1, and freezing the track. This way we maintain flexibility, and add a sexy performance control tool to other tracks as well.

Interesting thoughts, sunjammer.
I recall a player on the Amiga (I think DeliPlayer or HippoPlayer) that just started samples anywhere while you dragged the play cursor around.

Even though Renoise goes well beyond a player, I think your suggestion of a freeze function for a track, making it only possible to use samples there, would benefit both ease of use and ease of implementation.

Please explain why this would be a problem compared to now. I mean, you can already have long samples running into the next pattern…

I think your suggestion is more like a very bad workaround. It would be close to just using EnergyXT as a VST. Why are you afraid of getting a new track type (an audio track) next to your step tracks? Yes, Renoise is a sampler/sequencer. So was the Akai MPC 1000, until some independent Japanese guys wrote a new OS for the machine which, among a hundred other cool additions, featured AUDIO TRACKS. It most certainly did not break any GUI language. JJ OS

It wouldn’t be like using EnergyXT as a VST, because you wouldn’t have to buy EnergyXT or learn a new package to use it. It’s really that simple. Hypothetically, if a track is frozen, you can’t change its contents (pre- or post-FX, optionally), but you can scrub it. This solves the problem, period.

I’m not sure why this is such an either-or situation for some users. I suggest a “workaround”, if you will, that fits within the existing framework and adds functionality you can use beyond just dropping a wav in there and shuffling it about a little. What people otherwise seem to want is the ability to go to any position within a song and have Renoise “look back” at preceding notes and calculate playback offsets from there. A friend of mine worked on a tracker ages ago that did exactly this, and from all my use of it, I don’t miss that feature whatsoever.

The only times I need “audio tracks” as described in this thread is when I work with vocals or accompaniment tracks. And the problem in those cases is scrubbing and playback.

Explain to me the difference between an audio track and an existing note track that looks back for sample triggers, then explain to me why we need a distinction. Then explain to me how such a distinction benefits the software and the end user.

If the intent is to actually sequence using samples in this way, Acid-style, might I suggest looking into other software alternatives?

And just to clarify, I am VERY interested in this problem being solved. I’ve needed this functionality for ages, and I have friends for whom it’s even more important. I’m just not certain the audio track solution is the right way to go with the future in mind.

Right-click on the note -> show as wave file (it starts rendering and pastes the wave as in the pic at the beginning of the thread).

In the pattern sequencer, add the option “duplicate without audio tracks”.

Could that be a solution?

I suggest that there be two track types: a) a regular pattern notes track, and b) a continuous track (the one that contains those waves in the first graphic at the thread start). This new type of track is not locked to any pattern length, but is stretched along the overall length of the sequence: if the sequence consists of only one pattern, then it is just as long; if two patterns, then two patterns long, and so on and so forth.
Length adjustment calculations would only need to be made on tempo/speed changes, where present, in the pattern data.
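
A minimal sketch of that length calculation, assuming each pattern in the sequence reports its line count and the tempo/speed in effect (mid-pattern tempo changes ignored for simplicity):

```python
def continuous_track_length_seconds(patterns):
    """Total length of a continuous track spanning the whole sequence.
    `patterns` is a list of (num_lines, bpm, lines_per_beat) tuples, one per
    pattern; only re-run this when a tempo/speed change is edited in."""
    total = 0.0
    for num_lines, bpm, lines_per_beat in patterns:
        total += num_lines * 60.0 / (bpm * lines_per_beat)
    return total

# Two 64-line patterns at 125 BPM, 4 lines per beat:
# 128 * 60 / 500 = 15.36 seconds of continuous track.
```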

Now, any sample can be used as a) a regular instrument note, or
b) dragged and dropped onto a continuous track, where it becomes a continuous wave and can be split/nudged/glued (probably including other operations from the sample editor) like a regular wave file.

I don’t get you. You’re against audio tracks, but you’re not. You don’t miss them whatsoever, yet you have been needing them for ages…

I’m sorry if I totally misunderstand you, but it just seems weird to me that you put so much effort into trying to “talk” this suggestion down to less than it could be.

Is it just the visual thing? Like the vertical audio track view, or what?

One useful thing audio tracks could bring, besides the visual aspect and the ability to start playing in the middle of the file, would be direct recording (recording directly into the track). Also, it would just make the whole thing a lot easier if you could see where the peaks are when starting and stopping at a certain point.

I’d also like to be able to cut, copy, paste, etc. in the sequencer as well.

“If the intent is to actually sequence using samples in this way, Acid-style, might I suggest looking into other software alternatives?”
-This is just plain arrogant, and now you’re being either-or. I LOVE to track, especially in RENOISE, but I also love sequencing with waves. Why can’t I have both in the same app?

It’s plain to see that I’m not the only one longing for this…

Yep +1 on this

I’m not talking it down as though I don’t want it implemented. I’m trying to narrow the suggestion down to what’s as easy as possible to implement. My point really is that we are extremely close to this already, and I’d like to see the primary problem of playback offset solved in a practical and easily doable way that doesn’t require a lot of the UI to be redone or reshuffled, so we might conceivably see this before 2.5 or later.

We can talk this into the CLOUDS if we want, but I think it’s selling the dev team short if we think they don’t realize what the best UI solution is, and it’s not getting us any closer to seeing it implemented ahead of the other stuff that takes precedence. We have literally been asking about this EXACT solution since 1.2 or even before (before my time), vertical wavs and all, but it’s obvious that solving the timing issue, the mixer view, etc. (features that improve and really nail the core functionality of the package) is simply more important to get done. This is a tracker first, and the goal is to make it badass.

I want this. But there are levels of complexity to the problems it solves, and lots of them are already easily worked around. The one that ISN’T easily worked around is playback offset for long samples, and the faster we can get that done, the better. If we get it done in a way that also offers CPU performance gains, we’ve hit two flies with one swat. I’m personally more keen on generic tools that improve the overall package than on simply smacking another sequencer style in there, as in the endless piano roll debate.

I see track freezing as more commonly useful than wave editing as part of the sequencing.

But I guess it’s all subjective.

I came up with an idea of my own, and then accidentally found this thread, which in fact is a better definition of what I was going to propose.

Anyway, what do we really miss in Renoise (and, basically, in all trackers in general)? I think it is the graphical shape of the stuff we are creating. The screenshots presented in this thread are very impressive. My thought would be that the creator should be able to switch between “note/volume/effect” and “audio” (or “waveform”) track presentations (yes, simultaneous views should be available as a preference too). Anyway, I understand that this type of feature isn’t so easy to implement.

What I propose as a temporary (maybe Renoise 2.2) solution: when a note is inserted in a track in the pattern editor, the following lines in that track are dimmed to some other color for the length of the sample played. Meaning that when you look at the pattern editor, you can always see how long the note will be audible compared to other notes in other tracks.
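
A rough sketch of the dimming calculation, assuming the sample plays at its original pitch and the tempo stays constant (it also covers the case where the tail trails into the next pattern, as discussed earlier):

```python
import math

def lines_to_dim(note_line, sample_frames, sample_rate, bpm, lines_per_beat,
                 pattern_lines):
    """Return (last_dimmed_line_in_this_pattern, lines_trailing_into_next)
    for a note entered on `note_line` of a `pattern_lines`-long pattern."""
    seconds_per_line = 60.0 / (bpm * lines_per_beat)
    span = math.ceil((sample_frames / sample_rate) / seconds_per_line)
    last_line = note_line + span - 1
    trailing = max(0, last_line - (pattern_lines - 1))
    return min(last_line, pattern_lines - 1), trailing

# A 2-second one-shot on line 56 of a 64-line pattern at 140 BPM, 4 LPB
# spans ceil(2 / 0.107) = 19 lines: dim through line 63 and trail 11 lines
# into the following pattern.
```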

It is just a concept, and I understand that it involves several problems (for example, how to act when a new note is entered in the same track during the previous note’s play duration, or how to present VSTi-generated sound data), but those are details.

Such a solution would provide a modest (but very useful) graphical presentation of the sample relations in the whole tune. Of course, the idea of a vertical waveform presentation of the track is the obvious follow-up, and would be very much appreciated.

The thing, of course, is: what about MIDI or VSTi notes? Boil it down to simply “on/off” and trust that?

There are so many “funky” ways to get the wave display done, such as redrawing the waveform on a per-line basis during playback: whenever you play a line, the waveform render for that line is redrawn. This is a very “dumb” way to do it, like instruments painting the tracks they play on… but it would conceivably solve the VST and line-in problem…?
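
A toy sketch of that “dumb” per-line approach, assuming some callback hands us the audio a track produced while a line played (the callback and the repaint hook are invented for illustration; tapping the rendered signal is what would make it work for VSTi and line-in too):

```python
import numpy as np

# one small peak strip per (track, line), redrawn whenever that line plays
line_waveforms = {}

def on_line_played(track_index, line_index, audio_block, peaks_per_line=32):
    """`audio_block` is the (channels, frames) float array the track produced
    while this line played. Store a downsampled peak strip for drawing."""
    mono = np.abs(audio_block).mean(axis=0)
    chunks = np.array_split(mono, peaks_per_line)
    peaks = [float(chunk.max()) if chunk.size else 0.0 for chunk in chunks]
    line_waveforms[(track_index, line_index)] = peaks
    # a real implementation would now invalidate that line's strip in the
    # pattern editor so it gets repainted with the fresh peaks
```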

A man can dream ;)

Maybe some kind of channel “volume threshold” value, after which the note is considered inaudible.
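
Something along these lines, as a sketch (the -48 dB default is an arbitrary placeholder for whatever the per-channel setting would be):

```python
import numpy as np

def audible_length_frames(samples, threshold_db=-48.0, window=512):
    """End (in frames) of the last `window`-sized chunk of a mono sample whose
    peak still exceeds the threshold; past that point the note counts as
    inaudible."""
    threshold = 10.0 ** (threshold_db / 20.0)
    last_audible = 0
    for start in range(0, len(samples), window):
        chunk = np.abs(samples[start:start + window])
        if chunk.size and chunk.max() >= threshold:
            last_audible = min(start + window, len(samples))
    return last_audible
```

That frame count could then feed the dimming idea above instead of the full sample length.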

I thought about this and I agree: I’d rather have track freezing and the option to display frozen tracks as waveforms.

Track freezing via a background thread, especially! So when you edit stuff in a frozen track, Renoise changes the color of the waveform to show it’s not up to date anymore, and then begins to render the track again (in the background, with low priority).
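
A toy model of that, assuming some slow `render_track` function that bounces the track to audio (nothing here is Renoise-specific; it just shows the dirty-flag-plus-background-worker shape of the idea):

```python
import threading

class FrozenTrack:
    def __init__(self, render_track):
        self._render_track = render_track   # slow bounce-to-audio callable
        self._lock = threading.Lock()
        self._version = 0                   # bumped on every edit
        self.waveform = None
        self.stale = True                   # UI recolors the waveform when True

    def mark_edited(self):
        """Called whenever the user changes something in the frozen track."""
        with self._lock:
            self._version += 1
            self.stale = True
            version = self._version
        threading.Thread(target=self._refreeze, args=(version,),
                         daemon=True).start()

    def _refreeze(self, version):
        fresh = self._render_track()        # runs off the UI thread
        with self._lock:
            if version == self._version:    # drop renders outdated by newer edits
                self.waveform = fresh
                self.stale = False
```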

In short: make it ninja enough and track freezing would be just as good as, or even better than, what this thread is about!

Definitely. It goes better hand in hand with Renoise as a tracker environment.