Brainstorming: Audio Tracks

It’s not that I’m against this suggestion. Not at all. I would even be happy to have this, since I use long samples too.

But the main idea behind this feature is to make samples fit in a certain time area. Couldn’t this theoretically be solved with effect commands?
If the developers decided to expand the effect commands up to Z, we could arrange this simply by using those.

For example:

0P01 sets the start position of the sample. It can be combined with offset triggering (09xx). This effect command alone does nothing, since it needs its “counterpart”: the ending note of the sample.

This would then be 0P00. The sample would be played within this exact time area, regardless of its length, and hence be time-stretched if needed.
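To make the arithmetic concrete, here is a tiny sketch (purely illustrative Python; nothing like this exists in Renoise) of the stretch factor such a 0P01/0P00 pair would imply:

```python
# Illustrative only: the stretch factor implied by a 0P01...0P00 region.
# One pattern line lasts 60 / (bpm * lpb) seconds.

def stretch_ratio(region_lines, bpm, lpb, sample_seconds):
    """>1.0 means the sample is slowed down to fill the region,
    <1.0 means it is sped up."""
    region_seconds = region_lines * 60.0 / (bpm * lpb)
    return region_seconds / sample_seconds

# Example: a 3.5 s vocal asked to fill 32 lines at 120 BPM / 4 LPB
# (32 * 60 / 480 = 4.0 s) would be stretched by 4.0 / 3.5 ≈ 1.14x.
print(stretch_ratio(32, 120, 4, 3.5))  # ~1.143
```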

This would be a long-term solution for most of us, even if it doesn’t use actual soundwaves.

Just a thought I had when reading this thread.

I thought the idea of this was to get away from having to stick to lots of individual samples.
i.e. I could record my singing straight into the track without the hassle of chopping up samples and arranging them (and I would be able to hear them wherever I press Play, without having to wait for the start of the note).

Yes, if this is about improving workflow at the same time as overcoming the ONLY thing which keeps Renoise behind other DAWs for all-round, standalone use, it should be a solution which doesn’t require making the offset command more complex.

I guess this has been discussed a million times, but it seems to me that the most straightforward means of implementing what people want is to do something like this (see the rough sketch after the list):

  • calculate, for any given playback-start position (mid-song or mid-pattern), which long samples should currently be playing (i.e. they’ve been triggered in the past; their length exceeds the time between their initial triggering and THIS playback time; they haven’t been interrupted by a note-off or another NNA)
  • calculate the offsets for those samples AUTOMATICALLY and INVISIBLY (not with 09xx commands), which would allow a much higher resolution than the 09xx commands for long samples and avoid a lot of fiddly, time-consuming work on the part of the user
  • play the damn things
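Here’s a rough Python-flavoured sketch of those three steps. All the helper names (`last_note_on_before`, `seconds_between`, etc.) are invented; this is a thought experiment, not Renoise internals:

```python
def resume_voices(song, play_pos, sample_rate):
    """Find the long samples that should already be sounding at play_pos,
    and the frame offsets to resume them from."""
    voices = []
    for track in song.tracks:
        note_on = track.last_note_on_before(play_pos)   # hypothetical lookup
        if note_on is None:
            continue
        # Skip samples cut off by a note-off or another NNA since then.
        if track.interrupted_between(note_on.pos, play_pos):
            continue
        elapsed = song.seconds_between(note_on.pos, play_pos)  # tempo-aware!
        offset = int(elapsed * sample_rate)   # frame accuracy, not 09xx steps
        sample = note_on.instrument.sample
        if offset < sample.num_frames:        # still audible at play_pos?
            voices.append((track, sample, offset))
    return voices  # step 3: play the damn things from these offsets
```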

XMPlay does this when replaying modules, since it does a certain amount of calc on songload so that even if you skip to halfway through the song, you don’t get weird patches of silence like you do when skipping through the sequence list in ProTracker or FastTracker. It also does this for cutoff/res envelopes in those few .IT modules which use them. Obviously it’s devoted to playback and not all manner of cool CPU-honking plugin shit like Renoise, but I’m sure Renoise can spare the extra few cycles.

A solution like this would be compatible with the 09xx command, because that would just be interpreted as an override of the type described before: if anything overrides the continued playback of a sample, Renoise acts accordingly. Actually, this is just a Renoise playback issue/solution, and I don’t think it would impact upon fileformats or anything else. I reckon there wouldn’t even be a compatibility issue with older Renoise songs, since a rendered full song from a replayer that used this behaviour wouldn’t ultimately be any different to one rendered with the current behaviour.

Am I just rehashing old discussions or foolishly second-guessing how the dev team would implement this? (I know Taktik always tells us not to speculate on how tricky an issue would be to solve, but simply to make our demands and then let them sort out the details :P )

Anyway, the sooner we get this, the sooner I never again have to use Reaper, Acid, Soundforge, Sonar, ANY of that whack shit that takes me away from Renoise :(

[EDIT: I realise I didn’t address in-pattern visual representation of audio waveforms…because I don’t think it’s half as important as being able to HEAR the audio. This is a digital AUDIO workstation, after all. If the playback engine is playing long samples at the correct offsets/etc, we can jump to Sample view to check out the waveform, and (until in-pattern visual representations are eventually implemented, which I imagine they will be) use our EARS to do the work in DSP automation envelopes etc.

I’m not advocating a particular workflow, or my desired workflow, or anything. I’m just saying that, like with everything else, we have to prioritise…and while there’s still a lot of debate about how best to visualise waveforms, that seems to me a little more of a luxury than the fundamental compositional benefit of being able to work with long recorded audio samples alongside VST/DSP output. I’d love to see wee waveforms one day, but I’ll never rely more on the arbitrary and often misleading peaks and troughs of a complex waveform than on the no-lies assurance that what I hear is…well, what I hear. And what I want an audience to hear.]

Well let’s think about the process.

In this example we are going to say we have a song with 12 tracks and we want to play from the 17th pattern in the sequence (just so we have some numbers to use.)

You hit Play at the 17th pattern.
Renoise now has to look back through all 12 tracks and see whether the last event was a Note On or a Note Off (ignoring tracks that have a Note On at the first line of the pattern; you may lose a little depending on NNA settings, and you will also lose a little from not having previous samples’ effects, such as reverb tails).
Let’s say Renoise finds 3 tracks whose last event was a Note On. (You could also argue it should take envelopes after a Sustain into account, but I think that’s getting too deep.)
Now it has to get the length of these samples and see if they should still be playing. **
Any sample long enough that it would still be audible is started at the correct point.

** Now this sounds easy, but please bear in mind bpm (and lpb) settings. These adjust the speed at which Renoise plays, and thus any change means a different amount of time has elapsed between the trigger and the wanted start point, especially with the old tracker technique of using bpm changes to set a groove for a song.
So to calculate the time from the last Note On to the current play position, Renoise again has to look through Every Single Track (this time including all Sends and the Master!) all the way back to the earliest open Note On, check for tempo-change commands, and then use these to calculate the amount of elapsed time. If Renoise were to assume the value at the Note On, or at the desired playback point, it could be vastly off, especially with tempo-groove changes.
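To illustrate why that backwards tempo scan matters, here’s a minimal sketch of the tempo-aware elapsed-time calculation (hypothetical names, simple per-line accounting); assuming a single constant tempo instead would drift badly across groove-style bpm changes:

```python
def seconds_per_line(bpm, lpb):
    # One pattern line lasts 60 / (bpm * lpb) seconds.
    return 60.0 / (bpm * lpb)

def elapsed_seconds(trigger_line, play_line, tempo_events, bpm, lpb):
    """tempo_events: (line, bpm, lpb) changes collected by scanning every
    track (sends and master included) back to the earliest open Note On."""
    changes = dict((line, (b, l)) for line, b, l in tempo_events)
    total = 0.0
    for line in range(trigger_line, play_line):
        if line in changes:                  # apply changes as we pass them
            bpm, lpb = changes[line]
        total += seconds_per_line(bpm, lpb)
    return total

# e.g. 64 lines at 125 BPM / 4 LPB with no changes:
# 64 * 60 / (125 * 4) = 7.68 seconds elapsed since the trigger.
```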
Now, if Renoise were to implement a global Tempo Track (or even limit tempo commands and the like to the Master track, although I don’t personally see that happening), this could be helped somewhat. Although I called it a track, I don’t necessarily mean a track in the pattern editor, but rather somewhere you can see all tempo changes; if you added them from there, rather than in the pattern on a track of your choosing, they would be entered into the Master by default, so you would still have the visual representation in the patterns.

And yes, all this is going to add some CPU cycles, as it all takes processing.
As it only ever happens when you press Play, a tiny pause while it calculates hopefully won’t be too off-putting.
If you could set a flag per track so that only tracks with long samples were scanned, it would help quite a lot.
To me it looks like the tempo scanning and correction will be more labour-intensive than the sample scanning, so I really think Renoise needs to add some kind of tempo track to make this easier and lighter to implement.

Hope some of that makes sense to somebody other than me and might be useful.

Thanks for working that through, kazakore - many good points. I think a slight pause before playback begins is quite acceptable - indeed, most DAWs have this sort of gap after you hit Play. And if it isn’t widely seen as acceptable, your proposal of a per-track flag to warn Renoise of where a long sample is being used is a good idea. Renoise’s flexibility (all types of commands can be entered on any track for any instrument or parameter) is admirable, but I don’t think many people would mind having to exercise just a tiny bit of self-discipline in keeping long-sample commands in a flagged long-sample track. Besides, when you’re producing a serious piece of work that involves long samples, send groups, etc., you’ll probably be adhering to a sensible track structure with sensible track naming anyway.

One question, though, is this: if there’s a calc delay when starting playback, would there also be a calc delay during live playback, when jumping between patterns on the fly? Does the unpredictable nature of live playback mean that any long-sample solution has to be toggleable global behaviour or (as you say) toggleable per-track behaviour?

To combine some previous suggestions: if this behaviour were implemented, it would be cool to have a ‘split to patterns’ command which splits a long sample into a multi-sample instrument and places a 09xx offset command on each pattern automatically. This could potentially need very high resolution, so 09xx would maybe need an overhaul… This would negate any calc issues during live playback and give users a trade-off between saving CPU cycles and being able to view one huge sample in the Sample view.
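For what it’s worth, the slicing arithmetic behind such a ‘split to patterns’ command is simple at a fixed tempo. A hypothetical sketch (invented names, and it ignores tempo changes mid-sample):

```python
def split_to_patterns(total_frames, sample_rate, bpm, lpb, lines_per_pattern):
    """Return (start_frame, end_frame) slices, one per pattern."""
    pattern_seconds = lines_per_pattern * 60.0 / (bpm * lpb)
    frames_per_pattern = int(round(pattern_seconds * sample_rate))
    slices = []
    start = 0
    while start < total_frames:
        end = min(start + frames_per_pattern, total_frames)
        slices.append((start, end))
        start = end
    return slices

# A 64-line pattern at 140 BPM / 4 LPB lasts 64 * 60 / 560 ≈ 6.857 s,
# i.e. 302,400 frames at 44.1 kHz, so a 60 s take yields 9 slices.
```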

BTW, I totally agree about the tempo track. After all, it’s not like Renoise allows desynced sub-patterns of different tempos (yet :P). This would help a huge amount with the calc process you outlined, and the ‘invisible’ approach would be perfect.

As far as I can see, there would be the same calculation delay if it were enabled for live pattern switching. I did actually mean to include that as one of the closing points, saying how it may be worth (at least for now) only activating it when playing from a point, not when jumping from point to point. Although if audio continued (or faded out) until the calculation was complete, it might be barely noticeable under most normal-world conditions with a reasonably powerful computer.

Of course that is pure speculation and the only way to really know would be through testing. Things like tempo tracks and audio track flags should help keep computing to a minimum though.

How’s this for an idea: load the sample into a song at the tempo you want, with the pattern size you want, then render each pattern to a separate file. Bingo, you have your audio cut into pattern-length chunks ready to be loaded back in.

As I understand the thread so far, I really love the idea of having an optional toggle to make a single sample “persistent” from the last time it was triggered (or putting the option or an effect in a track to do the same thing).

It might get complex with tempo changes, but for more basic usage it seems it would be simpler (or at least more tracker-familiar) than adding visual waveforms to the pattern editor. I really don’t need to see the waveform, just as long as I can keep it playing. That way, sample offset commands would still work, without having to worry about slicing and moving a waveform around like in a DAW.

Do you really need to see the waveform? I think that fundamentally changes the pattern view with very little pay-off. Even if you do, I can imagine just occasionally needing to switch to the sample view and seeing where the play-head is from there.

I’m glad this is being discussed. This and finer 09offset resolution are the only features I miss in Renoise right now.
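To put a number on the resolution problem (assuming, as I understand it, that the xx byte of 09xx divides the sample into 256 equal start points):

```python
# Back-of-envelope: how coarse 09xx gets on a long sample.
sample_seconds = 60.0                    # a one-minute vocal take
sample_rate = 44100
frames = sample_seconds * sample_rate    # 2,646,000 frames
step = frames / 256                      # ~10,336 frames per 09xx step
print(step / sample_rate)                # ~0.234 s between start points
```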

(edit: yes, I realize it’s all far more complicated than it sounds. :) )

-sage

+1000 when it’s done :D

I’m going to add a +1 for every fresh discovery of another way in which this would rule:

I’ve got a guitar part that’s currently split into 16 samples over 16 patterns, and normalising the levels over the entire part (which from Sonar would be a quick job in Soundforge, then a reload) will require me to stick all 16 samples into one huge sample, export, process, import, and split it up into 16 again…

So +1 for the fact that this kind of task would be made hundreds of times easier :)

And it gave me another idea… instrument envelope drawing overlaid on the waveform in the Sample view? I could fix my fluctuating guitar-part levels (as described above) with curves and points, clicky-clicky, knowing that the long sample was going to be triggered just once. I think this should be a straightforward thing, and it seems like a sensible extension of the current instrument envelope principle, which currently involves lots of guesswork, trial and error, and flicking between views. And it’s STILL a much more useful visualisation fix than having waveform data displayed vertically in the pattern (mental!).

Oh, did I say +1? +1.

++1 for Audio Tracks.
Besides some features I’m missing in XRNI, this is my greatest request for Renoise.
:)

Tempo changes will not be that big a problem, but sample commands that happened in previous patterns are.
On the other hand, we could simply skip retriggering such sounds and only retrigger samples which have no effects at all. For vocal tracks, one rarely uses sample commands anyway?
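A minimal sketch of that compromise, assuming each track can expose its effect events as simple (line, is-sample-command) pairs (an invented layout, not real internals):

```python
def is_resumable(effect_events, trigger_line, play_line):
    """Only resume a sample if nothing sample-specific (offset, reverse,
    retrig, ...) touched it between its trigger and the play position."""
    return not any(
        trigger_line < line <= play_line and is_sample_cmd
        for line, is_sample_cmd in effect_events
    )

# A clean vocal track would be resumable; a mangled drum track is skipped.
```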

Still I’m not sure if that’s really the way to go if it only works “sometimes”.

Actually, I use a lot of public domain samples, and I’ll also be doing a lot of acoustic guitar/bass and singing in future tracks, but the main feature I love about tracking is the ability to do tick-by-tick sample-mangling. I use sample commands a lot, to mangle and restructure. So you hit the nail on the head when you say the solution we go with shouldn’t be one that works “sometimes”. We can’t assume people use a feature a certain way.

I did this in a recent track where I sampled Mia Doi Todd and rearranged parts of her vocals to make new sentences. But I had to change my workflow to fake having better sample offset resolution: I would copy and paste the little chunks I wanted to use out of the larger sample, and then use that as an extra instrument. (but still there are large portions of silence if I hit play in the middle of the song, because I’m still mostly using the long sample.)

There’s also one place where it would be especially useful to have persistent samples. I use the signal follower a lot to create funky filter effects, and so I’ll sometimes use a silent drum loop to modulate a bassline. But if I am trying to nail down the bass track, I have to keep scrolling up to the last drum loop trigger to hear it properly, and then scroll down to change the effect. But the drum loop isn’t just playing straight-ahead, I am re-triggering, playing backwards and otherwise mangling it as well. (otherwise I would just “lay it down” tick-by-tick with 09 commands.)

-sage

Well we have ‘Sync’ in Sample Properties, which locks the sample pitch/key and disregards any variation in note value in the pattern: the note is either triggered or not.

What about a similar overriding toggle, perhaps also in Sample Properties, to achieve the long-sample behaviour we’re discussing? ‘Persistent’ on/off, or something. When it’s enabled, sample-specific effect commands would be disregarded, while volume/panning/pitch/cutoff/resonance envelopes in the instrument editor would be used for such adjustments (even better if the same values could be altered using an envelope overlay in the Sample view!!!). Make sense?
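As data, the toggle could be as small as this (a hypothetical sketch, mirroring how Sync already sits in Sample Properties):

```python
from dataclasses import dataclass

@dataclass
class SampleProperties:
    sync: bool = False        # existing toggle: lock pitch/key behaviour
    persistent: bool = False  # proposed: resume across patterns on playback

def accepts_sample_commands(props: SampleProperties) -> bool:
    # Persistent samples disregard sample-specific commands; level/pitch
    # shaping is left to the instrument envelopes instead.
    return not props.persistent
```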

What about ghost-layering the snippet the command applies to, and disregarding the actual sample piece in that spot?
This means you would work with “prerendered” snippets wherever sample-specific commands are applied somewhere in between places.
Every time a command is entered or used on an audio track, the snippet is prerendered on the fly.
Direct-from-disk streaming would fit perfectly into this concept.
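A very loose sketch of that ghost-layering idea, assuming a per-track snippet cache keyed by line span (all names invented):

```python
class AudioTrack:
    def __init__(self):
        self.snippets = {}  # (start_line, end_line) -> prerendered audio

    def on_command_entered(self, start_line, end_line, render_fn):
        # Prerender the affected snippet on the fly, as suggested above;
        # direct-from-disk streaming could serve these cached chunks.
        self.snippets[(start_line, end_line)] = render_fn(start_line, end_line)

    def audio_for(self, span, raw_region):
        # Ghost-layer: a prerendered snippet overrides the raw sample piece.
        return self.snippets.get(span, raw_region)
```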

I agree with vV; some prerendering/intelligent freeze function could solve this? It would give the user more options as well.
However, I can’t see this replacing true audio tracks (with waveforms).
Having “audio tracks” without waveforms is like working in the sample editor without any visual information? I don’t get it… sure, it’s possible to do that, but it would be very awkward IMO.
I don’t see the problem with having some separate (visual) audio tracks together with an intelligent freeze function for any type of track.

We could also discuss this on a per-instrument basis, so that the instrument type you put into a track determines how it appears in the pattern editor.
That way, notes/commands and waveforms could be mixed in the same track. I guess the waveform clips/instruments would then sit in a separate subtrack, without sample commands being applied to them.

Visual information in most DAWs is for the purpose of being able to detect where the samples begin and end - usually represented by coloured blocks. Waveform info started to appear in DAWs when systems were able to draw that stuff without wasting valuable CPU cycles, but it’s rarely useful and usually quite vague - especially where complex drum samples are concerned. For example, where people retain brief attack sections of drum hits, the visual ‘beat’ point is at the loudest point of the waveshape, while that’s not necessarily where the ear places the accurate beat point. And in even more complex, bounced samples from multitracked sources (say a pad mixed with melody and ambient noise), the visual waveform data is even less helpful as it ONLY reflects amplitude and, when shrunk proportionally according to the number of tracks (width) and the BPM/patternlength (height), becomes a mess. Even more so again when it’s been loaded in from a compressed format such as MP3 - to see what I mean, load an MP3 into SoundForge and zoom way out.

My point is that people seem to be hung up on the idea of waveform display in the pattern, and I think it’s a red herring. Looks pretty, but wastes cycles on slower machines. However, to adopt the sole genuine advantage of other DAWs’ displaying of waveforms, we could shade/highlight blocks of patterndata between long-samples being triggered and their ultimate end points (as calculated across one or multiple patterns). This could be a transparent block of the track’s colour. That’d also be a nice alert to the user that a sample would be playing right now, and that they might want to avoid disrupting it with new notes or commands.

Unless I’m misunderstanding, in which case I’m sorry :)

Well, I agree that we do not necessarily need waveforms in the pattern editor. They could be in an arranger window/matrix, or even in the automation window.
I simply disagree that waveforms are not useful. I and lots of other people use them in many DAWs. It’s not just blocks you move around: you ‘read’ the sound structure. It’s faster than listening over and over again till it’s perfect (of course, in reality you do both listen and read).
And yes, the amplitude makes you instantly see where to cut things, where to fine-tune things, how to put things you have recorded on the beat, where to draw envelopes directly on it, etc.
IMO it’s much more efficient that way for these tasks than scrolling up and down in a pattern editor and typing numbers.
In general, I like to do fine-tuning graphically, while well-known, exact, repeated data is much more efficient to type into the pattern editor, with less graphical input needed.

Pysj - I agree wholeheartedly with the usefulness of detailed waveform display. That’s why I advocate envelope overlay in Sample view and why I’d also advocate waveform overlay in automation envelopes, if long-samples were to come about.

What I’m saying is that considering the degree to which the accuracy of visual waveform data would have to be compromised in order to fit into the vertical patterndata area (i.e. into tracks), the visual data we’d be left with would be indistinct and of questionable usefulness. Especially if we aren’t able to arbitrarily zoom horizontally and vertically in the Pattern Editor in the way that most DAWs can in their horizontal multitrack sequencer views.

Cutting, fine-tuning: why can’t those functions be performed in views which are more suited to them, like the Sample Editor? And, if the overlay suggestion gets picked up on, in automation/envelope views? Personally, if I’m doing any chopping or fine-tuning, I want to do as good a job as possible, which necessitates having the best, largest, widest possible view of the sample data’s waveform. Arbitrary zooming like I describe above could be implemented if sample-editing functions were overlaid on automation envelopes (which would be useful for all sorts of reasons), and then we’d have automation, sample waveform AND patterndata all visible on the same screen - brilliant! ;)

Does that go some way towards reconciling all of these concerns? I didn’t want you to think I was an anti-visualisation pariah, I just wanted to be clear that I didn’t think it should be literally IN the Pattern Editor, which it seems you agree with :)

+1 what Syphus says.

Yeah, I think having waveforms in the pattern editor would be a nightmare of clutter and annoying inaccuracy. Upgrading the Sample Editor seems like a much better idea. I also particularly like the idea of having the waveforms display horizontally since that’s what I think most people, including me, are used to.

Envelopes would be great too. I posted a somewhat similar idea a few months ago with a mockup, just so you know how much I approve of such things. :)

+1 to all of what Syphus said.