The last major shortcoming of Renoise: Insert audio tracks

I’m completely stuck in my workflow due to the inability to insert audio tracks into Renoise.

By an audio track, I mean a recording that starts and stops at any position and is not triggered like a note.

I use a monophonic (analog) synth and I want to sequence it and tweak the knobs track by track. This can only be done with overdubbing techniques, as it is impossible to let the synth play one part and record a second track at the same time.
(It is impossible anyway to properly control multiple tracks at the same time, being one single human being.)

This is a basic feature of all the other DAWs… but… I am a tracker. Totally melted together with hexadecimal numbers, patterns and… well, I probably don’t need to explain why I use Renoise here… I could write a book about it.

I don’t want horizontal layouts, piano rolls and all that stuff. Not for composing and sequencing.
I want to work in Renoise as it is.
But I also want to capture my live output, put it in a track in Renoise, and sequence/compose further based on that recording, so I can tweak my knobs (for instance on my favourite monophonic synth, which has no ability to save patches; saving patches is for control freaks, not for me).

Basically, the main difference would simply be that instead of being a sound recording that needs to be triggered, it plays from whatever point in the project you are working at.

To make it workable, the following is needed:
-Most important of all → it ALWAYS plays without a trigger, even from the middle of the audio recording. (Technically, if only this were possible, even with the audio loaded in an instrument, the workflow would already be possible; all the other stuff is just “luxury”.)
-Show the waveform (vertically, of course!!) instead of notes. (One single note at the beginning isn’t very helpful, as a recording may easily last multiple minutes.)
-It is best implemented as an independent, synced timeline. (Inserting patterns, adjusting lengths etc. should not have any influence on the continuously running recording.)
-It would probably best be inserted between the Pattern Sequencer and the Pattern Editor (and be collapsible, only visible when one wishes so).
-Of course it can have FX attached (I would not use this a lot personally, but that’s purely a personal choice).

It would make life much easier, and let me run Renoise completely on its own to do everything.

Of course, just some checkbox marking a certain recording/instrument as an “audio recording”, which should play from whatever position the sequence is started (not when it is triggered as a note), would already do the job, and could probably be implemented in a rather short timespan. It would lack the visual luxury, but after all, as long as we can hear what we do, it’s workable.

Now I’m really stuck…
Renoise can’t handle an audio track properly (yet intends to be a full-featured DAW).
The other DAWs… yeah, they still resemble Cubase on the Atari. They offer nice ways to record audio, but anything related to sequencing and sculpting a track in detail… pfff, not my workflow.

I really hope this gets picked up, like my previous proposition about 20 years back.
(That was lifting the oldskool limitation on the number of patterns possible, as I was stuck back then too. I couldn’t finish my track because I ran into the 256-pattern limit, and that was it… Most were joking about it, suggesting workarounds, but I was happy when the developers simply realised that limitations based on 8-bit numbers were nothing more than a legacy from the past.)

Audio tracks are, in my opinion, the last thing that Renoise is lacking. It is essentially a basic feature of any DAW today. The combination of recordings and sequences is heaven.
(Or a nightmare if you mess things up, but hey… great possibilities always demand some learning, every Renoise adept knows that!)

Well, if you are trying to produce music rather than perform live,
I use this trick all the time: the “Line Input” device.

Everything is sequenced from Renoise, and the audio comes back into the track with the device.
Then simply render the song to disk; rinse and repeat, building layers if wanted.

Hth

I know how that works.
But I cannot hear my recording while sequencing the next track.
It’s a monophonic synth with one single patch at a time, so I cannot record a second track while the first track is driving the synth.

Recording is not a problem, I can connect through MIDI to another DAW.
But Renoise isn’t able to send a timecode to sync the timeline between them.

It is simply not possible, and it is a major shortcoming. I have seen some misplaced purism around tracking in topics here, looking down on audio recording, but don’t tell that to someone with tracking experience on the C64, Amiga & DOS.

The essence is: I simply cannot sequence over a recording I made.
And I cannot tweak multiple tracks live.
And today, semi-modular analog synths are on the market which are impossible to control completely over MIDI anyway, but offer such great sound palettes.

So my music production is completely stuck.
I tried running another DAW in parallel, but it’s like producing blind (or rather deaf).
And the Atari-Cubase-style DAWs are simply not up to the task either, as they lack decent sequencing capability. (They are more like cutting and pasting tape recordings!)

Can’t be that hard to implement that small checkbox that allows a sample to be played based on the timeline position, instead of based on a single trigger.

I’m not sure about my next comment, but I think the problem here is more about loading long audio tracks, and also about the graphical interface.
Translating the data of a stereo two-channel audio waveform into a vertically (or horizontally, it doesn’t matter) elongated graph is, I think, a significant burden.
Every time I think about these things, I think about the extreme cases. How long could an audio track last? 30 seconds, 4 minutes, 10 minutes, 30 minutes?

For this to really work, the graphical representation would have to be very simple and low resolution; from a sound point of view there would be no big problems.
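A simple, low-resolution representation is exactly how most DAWs handle this. Below is a hedged sketch (in Python, just to illustrate the idea; nothing here is Renoise code) of min/max peak reduction: each display column stores only the minimum and maximum sample of its bucket, so the drawing cost stays constant no matter how long the recording is.

```python
# Min/max peak reduction: collapse a long waveform into a fixed number of
# display columns, each holding only the bucket's (min, max) pair.
def peak_reduce(samples, columns):
    """Reduce a list of sample values to roughly `columns` (min, max) pairs."""
    bucket = max(1, len(samples) // columns)
    peaks = []
    for start in range(0, len(samples), bucket):
        chunk = samples[start:start + bucket]
        peaks.append((min(chunk), max(chunk)))
    return peaks

# A 30-minute mono file at 96 kHz has 172,800,000 frames, but reduced to
# 1000 columns it is only 1000 value pairs to draw, whatever its length.
demo = [i % 100 - 50 for i in range(10_000)]  # short synthetic signal
print(len(peak_reduce(demo, 1000)))           # 1000
```

So even an extreme case of 30 minutes is cheap to display, as long as the peak data is computed once when the recording is loaded.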

The demand for audio tracks has been around for many years. If it hasn’t been implemented by now, there may never be a compelling enough reason.

Why don’t you ask the creator of Renoise directly why he doesn’t implement audio tracks, and at the same time make recording (with microphone, guitar, and other instruments) easier, so that working with audio waves rather than prefabricated samples is more direct?

Finally, the audio wave tracks should be time-stamped, but graphically they should also adapt to any effect parameters the composer introduces that change the playback speed. Designing all this is an art in itself.

In theory, graphically this would be as if the audio wave were a stretchy rubber band, which contracts or lengthens depending on the playback speed. That is, if you introduce an effect parameter that influences the speed, the audio wave should automatically lengthen or contract in that visible section. Would something like this work correctly?

Loading long audio tracks isn’t much of a problem.
I just loaded a 6-minute-30-second track as an instrument. It runs smooth as butter.
It is mono, 24-bit, 96 kHz.

The graphical representation is actually the least important part (after all, I also have the muted sequence, which shows me all the notes of the audio track properly). As far as I’m concerned, it can be very basic, minimal, or even completely absent (a coloured line indicating that an audio track is running would do).

The only thing it should do is start playing at any point in the song, without needing the “note on” trigger. (I would be fine if placing a note were all there is to it, as long as playback doesn’t depend on passing that trigger.)

I think graphical representation, drag & drop and that kind of thing are mostly extras in the context of a tracker. Essentially, the biggest need is to hear the audio: when using hardware, there are many circumstances where an audio recording is the only way to create multiple tracks using the same instrument.

Even the stretching is not needed. An audio recording created by rendering or recording a sequence is in sync with that sequence. I know changing the tempo after recording would be a wrong move, but I’m not scared of a more complex workflow where one needs to anticipate these kinds of things.
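The tempo point can be made concrete with a bit of arithmetic. Assuming the usual tracker timing of one line = 60 / (BPM × LPB) seconds (the numbers below are just an example, not anything from a real project):

```python
# The recorded audio has a fixed length in seconds, but the sequence's
# length in seconds depends on the tempo, so changing BPM after recording
# makes the two drift apart.
def pattern_seconds(lines, bpm, lpb):
    """Duration of a pattern, assuming one line = 60 / (bpm * lpb) seconds."""
    return lines * 60.0 / (bpm * lpb)

recorded = pattern_seconds(64, 125, 4)  # 7.68 s of audio rendered to disk
replayed = pattern_seconds(64, 140, 4)  # same pattern after a tempo change
print(round(recorded - replayed, 3))    # 0.823 s drift per pattern
```

Almost a second of drift per pattern, which is why keeping the tempo fixed after recording is the one discipline this workflow demands.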

I think audio tracks are a crucial feature, but I don’t think we should make it too complex to start with. Just the basic feature of hearing the audio from any position in the song, without needing a “note on” trigger, covers 95% of what we need. Especially if it is aimed at overdubbing a song, pattern or series of patterns sequenced in Renoise.

I would of course be happy with all the extras too (but understand it makes implementing harder.)

Have you tried enabling ‘Autoseek’ on the recording?

With Autoseek you do not have “direct visual control” of your audio track. But hey, it’s the closest thing we’ll see.

In case there is any doubt about using Autoseek: just place a C-4 note on the first line at the start of the track, using the sample that has Autoseek activated in the Sampler. And that’s it.

What?
Seriously?

Thanks a lot!!!
That seems to work.
It might not be a fully implemented audio track feature, but it covers the needed basics.

How did I miss this? (Maybe this should show up when searching for terms like “audio track” etc.)

Some improved visualisation is still welcome, but hey… you made my day!

In reality, we could say that Renoise already has audio wave tracks; the windows just need to be placed properly, separately.

Here I have placed the Sampler window horizontally above, and the pattern editor window below.

I don’t usually arrange it like this, but otherwise a lot of monitor space is lost in empty horizontal bars, and it looks very ugly if you put the two windows the other way around.

But hey, it’s the closest thing to audio tracks. And yes, visual time control appears there too.

Below, the two windows upside down (lots of empty horizontal bar):

In my use case, it looks more like this to visualize:

The first track is muted and present in the second (collapsed) track as an audio recording using Autoseek. But I can see exactly what happens in the muted track.

For inserting live played instruments, this is of course less useful.

(Sorry for the dark screenshot, I like to make music in low light.)

Through the recording tool for audio input, you can synchronize recording with the playback of the song, starting either with the project or with a specific pattern.

The only problem I see is that the Sampler does not let you display the exact frame number being played at the beginning of each pattern, for later editing tasks that need to maintain synchronization.

If the API had a way to capture the exact frame of the playback marker at each moment, we would have practically everything necessary even for post-editing tasks of the audio waveform itself.

The point is always to be able to maintain synchronization in case of post-editing.
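Under a constant tempo, the frame numbers can at least be derived by hand, even without API support. A hedged sketch (the function and the pattern lengths below are illustrative, not part of the Renoise API):

```python
# Map each pattern in the sequence to the sample frame at which it starts,
# assuming a constant tempo: frames_per_line = 60 / (BPM * LPB) * rate.
def frame_at_pattern_starts(pattern_lines, bpm, lpb, sample_rate):
    """Return the first sample frame of each pattern in the sequence."""
    frames_per_line = 60.0 / (bpm * lpb) * sample_rate
    frames, elapsed = [], 0
    for lines in pattern_lines:
        frames.append(round(elapsed * frames_per_line))
        elapsed += lines
    return frames

# Three 64-line patterns at 125 BPM, LPB 4, with a 96 kHz recording:
print(frame_at_pattern_starts([64, 64, 64], 125, 4, 96000))
# [0, 737280, 1474560]
```

These derived frame positions could serve as cut points for post-editing the recording while keeping it aligned with the pattern boundaries.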

Of course, this is a very interesting topic!

That would indeed be great.

A possibility to display a timecode per voice in the pattern editor, for instance.
Interesting indeed!