Sample waveform view in tracks

Hi, I believe this feature could be a good workflow tool. It's not to be mistaken for actual samples in the track editor, but rather a "reference view".

The idea would be called "comparison track", and any sample loaded into a slot would have a context-click option to compare it to whichever track is highlighted.

Things I have already considered:

(1) What if the comparison track is longer than the actual track?
[a] Then the comparison track will span the track's length and will scroll with the song as different matrix blocks play. This would be good for comparing whole songs, so one could see things like where an intro ends or where a drop starts.

(2) Would one view be enough at any given time?
[a] A user will possibly require more than one view, so there may need to be multiple reference views. For example, a user may want to see where the kicks are in a drum loop while a riff plays on another track, and to track in his/her own pattern to match those steps as closely as possible.

(3) What if you are using a sample from the sampler that has been sliced and/or synced?
[a] This one's tricky, I guess, but I imagine it could still show up from the sampler too, perhaps with the possibility of showing where the slices are in the "comparison track".

So, to recap: this is only a waveform view; no audio will be heard from the comparison track. I hope I have made myself clear enough, as I know how I want this but might not be explaining it too well. Thanks. Below is a mockup I made of roughly how it would look :)

A single combined "mono" waveform view would probably suit best, I guess; the image was just a rough mockup.

I like it, I think it’s a good idea.

Here's a similar concept: audio views. They're not "audio tracks"; they just allow you to view unrendered/unprocessed raw waveforms (mostly samples from the active instrument).

For CPU reasons, and to keep latency small, only one audio view can be displayed at a time. You could display it with a CTRL+A shortcut under Windows, for example.

The audio view displays a highlighted frame on the line where your edit block is waiting.
Let's zoom in on this frame and see what you can do with it:

So the audio view would allow some "basic" manipulations of the notes: volume, pan, and fine delay. The classic track editor would interact with the views: manually editing the notes would modify the audio view automatically, and tweaking the frame-view controls would likewise automatically refresh the content of the associated cell.

Once again, this is not exactly an audio track. But I see it as a way to "work graphically" within a tracker.


Sexy. Not just because of how it looks, but because of how it brings the user closer to editing the audio.

The only thing lacking is that the audio views won't render post-DSP effects. That would have opened up a new market for Renoise, IMO: an audio editor with unparalleled control over its elements (notes). Considering the "render to sample" feature, it wouldn't be too far-fetched to also implement a 'real-time' view, even if it adds some additional lag.

+1, I like your concept; this should definitely be seriously considered by the main developers.

The problem is that we already experience some strange lags with the new interface. If Renoise had to fully render each track in real time and draw views for it, I would expect a massive slowdown.

Imagine that some of us are tracking with 280 BPM modules, 48 LPB, and huge patterns. I thought about unprocessed samples because I told myself it could be faster than a true rendering.


The pattern editor has to display not only the notes that are playing, but also the notes that WILL be played (at the bottom of the screen).

Drawing a "realistic waveform" would mean completely pre-rendering three whole tracks in the background: the current one, the previous one, and the next one. Otherwise you would not have a true, vertically continuous visualisation.

The only way to do this is to create a kind of real-time background rendering process that works even when there is no activity from the user, filling a pre-rendering buffer from which the GUI can quickly build a waveform.
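A rough Python sketch of that background pre-render idea, just to make the concept concrete: a worker thread keeps the previous, current and next pattern rendered ahead of the GUI. Everything here is made up for illustration; `render_pattern` stands in for the real (expensive) audio render, and Renoise itself exposes no such API.

```python
import threading
import time


class PreRenderCache:
    """Background worker that keeps patterns current-1 .. current+1 rendered
    so the GUI can fetch waveform data without ever blocking on a render."""

    def __init__(self, render_pattern, pattern_count):
        self.render_pattern = render_pattern   # hypothetical expensive render callback
        self.pattern_count = pattern_count
        self.cache = {}                        # pattern index -> rendered waveform
        self.lock = threading.Lock()
        self.current = 0
        self._stop = threading.Event()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def seek(self, pattern_index):
        # Called by the GUI when playback moves; worker re-targets on its next pass.
        self.current = pattern_index

    def get_waveform(self, pattern_index):
        # Non-blocking: returns None if the pattern is not pre-rendered yet.
        with self.lock:
            return self.cache.get(pattern_index)

    def _run(self):
        while not self._stop.is_set():
            wanted = {max(0, self.current - 1), self.current,
                      min(self.pattern_count - 1, self.current + 1)}
            for idx in wanted:
                with self.lock:
                    done = idx in self.cache
                if not done:
                    data = self.render_pattern(idx)   # expensive work, off the GUI thread
                    with self.lock:
                        self.cache[idx] = data
            with self.lock:                           # evict patterns outside the window
                for idx in list(self.cache):
                    if idx not in wanted:
                        del self.cache[idx]
            time.sleep(0.01)                          # idle politely between passes

    def stop(self):
        self._stop.set()
        self.worker.join()
```

The point of the sketch is the shape of the mechanism, not the numbers: the renderer runs even when the user is idle, and the GUI only ever does a cheap dictionary lookup.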

This pre-rendering system would have some limitations anyway: it could only apply to normal tracks (no send tracks, no groups, no master track), and you could only see the audio view in one track at a time.

Since audio views are just graphics, maybe it would be interesting to build them through a "lo-fi background rendering mechanism", for example with 8-bit 11 kHz mono resampling.
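To illustrate how cheap that lo-fi view could be, here is a minimal sketch of reducing a waveform to 8-bit 11 kHz mono for display purposes only. It assumes float samples in [-1, 1] and uses naive decimation (no anti-aliasing filter), which is fine when the result is just pixels; all names and rates are illustrative, not anything Renoise actually does.

```python
import numpy as np


def lofi_preview(samples: np.ndarray, src_rate: int = 44100,
                 dst_rate: int = 11025) -> np.ndarray:
    """Reduce a waveform to a cheap 8-bit ~11 kHz mono preview for drawing.

    `samples`: float array in [-1, 1], shape (frames,) or (frames, channels).
    """
    if samples.ndim == 2:                # mix stereo (or more) down to mono
        samples = samples.mean(axis=1)
    step = max(1, src_rate // dst_rate)  # naive decimation: keep every Nth frame
    decimated = samples[::step]
    # quantise to signed 8-bit, the resolution suggested in the post
    return np.clip(decimated * 127.0, -128, 127).astype(np.int8)
```

A 44.1 kHz stereo source shrinks by a factor of 32 in memory (4 in time, 4 in bit depth, 2 in channels), which is the whole appeal for a purely visual buffer.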