Yes, if this is about improving workflow at the same time as overcoming the ONLY thing which keeps Renoise behind other DAWs for all-round, standalone use, it should be a solution which doesn’t require making the offset command any more complicated.
I guess this has been discussed a million times, but it seems to me that the most straightforward means of implementing what people want is to do something like this:
- calculate, for any given playback-start position (mid-song or mid-pattern), which long samples should currently be playing (i.e. they’ve been triggered in the past; their length exceeds the time between their initial triggering and THIS playback position; and they haven’t been interrupted by a noteoff or another NNA)
- calculate the offsets for those samples AUTOMATICALLY and INVISIBLY (not with 09xx commands), which would allow a much higher resolution than the 09xx commands for long samples and avoid a lot of fiddly, time-consuming work on the part of the user
- play the damn things
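The three steps above could be sketched roughly like this. To be clear, this is pure illustration and not Renoise internals: the `Note` record, measuring sample length in rows, and the offset-as-a-fraction idea are all my own assumptions about how it *could* work.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Note:
    start_row: int             # absolute row where the sample was triggered
    length_rows: float         # sample length expressed in rows at current tempo
    cut_row: Optional[int]     # row of a noteoff/NNA that interrupted it, if any

def active_samples(notes, seek_row):
    """Return (note, offset_fraction) for every sample that should still be
    sounding when playback starts at seek_row."""
    playing = []
    for n in notes:
        triggered = n.start_row <= seek_row                       # fired in the past
        still_running = seek_row < n.start_row + n.length_rows    # long enough to reach us
        not_cut = n.cut_row is None or n.cut_row > seek_row       # no noteoff/NNA yet
        if triggered and still_running and not_cut:
            # Offset as an exact fraction of the sample, not one of the
            # 256 coarse steps of 09xx -- the "automatic, invisible" part.
            offset = (seek_row - n.start_row) / n.length_rows
            playing.append((n, offset))
    return playing
```

The point of the fractional offset is that for a long sample, 09xx’s 256 steps can each be hundreds of milliseconds apart, while an internally computed offset can be sample-accurate.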
XMPlay does this when replaying modules, since it does a certain amount of calc on song load, so that even if you skip to halfway through the song, you don’t get weird patches of silence like you do when skipping through the sequence list in ProTracker or FastTracker. It also does this for cutoff/res envelopes in those few .IT modules which use them. Obviously it’s devoted to playback and not all manner of cool CPU-honking plugin shit like Renoise, but I’m sure Renoise can spare the extra few cycles.
A solution like this would be compatible with the 09xx command, because that would just be interpreted as an override of the type described before: if anything overrides the continued playback of a sample, Renoise acts accordingly. Actually, this is just a Renoise playback issue/solution, and I don’t think it would impact upon file formats or anything else. I reckon there wouldn’t even be a compatibility issue with older Renoise songs, since a full song rendered by a replayer that used this behaviour wouldn’t ultimately be any different to one rendered with the current behaviour.
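The 09xx compatibility point really just amounts to a priority rule. Again only a sketch with invented names, assuming the offset is resolved per note at the seek position:

```python
def resolve_offset(auto_offset, explicit_09xx=None):
    """An explicit 09xx on the trigger row overrides the computed offset;
    otherwise the invisible, high-resolution offset is used.

    auto_offset:   fraction of the sample (0.0-1.0) computed by the replayer
    explicit_09xx: the xx byte of a 09xx command, or None if absent
    """
    if explicit_09xx is not None:
        # 09xx addresses the sample in 256 coarse steps, so honour it as-is.
        return explicit_09xx / 256.0
    return auto_offset
```

So older songs play back identically: wherever the user wrote a 09xx, that wins, and the automatic offset only fills in where nothing else says otherwise.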
Am I just rehashing old discussions or foolishly second-guessing how the dev team would implement this? (I know Taktik always tells us not to speculate on how tricky an issue would be to solve, but simply to make our demands and then let them sort out the details.)
Anyway, the sooner we get this, the sooner I never again have to use Reaper, Acid, Soundforge, Sonar, ANY of that whack shit that takes me away from Renoise.
[EDIT: I realise I didn’t address in-pattern visual representation of audio waveforms…because I don’t think it’s half as important as being able to HEAR the audio. This is a digital AUDIO workstation, after all. If the playback engine is playing long samples at the correct offsets/etc, we can jump to Sample view to check out the waveform, and (until in-pattern visual representations are eventually implemented, which I imagine they will) use our EARS to do the work in DSP automation envelopes etc.
I’m not advocating a particular workflow, or my desired workflow, or anything. I’m just saying that like with everything else, we have to prioritise…and while there’s still a lot of debate about how best to visualise waveforms, that seems to me like a little more of a luxury than the fundamental compositional benefit of being able to work with long recorded audio samples alongside VST/DSP output. I’d love to see wee waveforms one day, but I’ll never rely more on the arbitrary and often misleading peaks and troughs of a complex waveform than on the no-lies assurance that what I hear is…well, what I hear. And what I want an audience to hear.]