Renoise is still the best for expressive sequencing — do you agree?

Place the notes (Melody… Harmony)… then sculpt them

Via ‘modulations’ and ‘fx chains’ triggered by velocity

Care to elaborate?

Velocity level in Renoise doesn’t really mean “velocity”… you can disable ‘Vel->Vol’

You can assign mods and FX per velocity level…

You can duplicate samples

So you can assign expressivity per velocity level

An “expressivity matrix”
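
In code terms it is just a lookup from velocity ranges to sound-shaping layers. A loose sketch of the idea (not Renoise’s actual API; the sample and FX names are invented):

```python
# Loose sketch of an "expressivity matrix" (hypothetical names, not Renoise's API):
# each velocity range selects its own sample, modulation set and FX chain.
EXPRESSIVITY_MATRIX = [
    # (vel_min, vel_max, sample,          modulation,    fx_chain)
    (0x00, 0x3F, "pluck_soft.wav",   "slow_attack", "dark_filter"),
    (0x40, 0x6F, "pluck_medium.wav", "mid_attack",  "open_filter"),
    (0x70, 0x7F, "pluck_hard.wav",   "fast_attack", "bright_drive"),
]

def layers_for_velocity(velocity):
    """Return the (sample, modulation, fx) assigned to a 7-bit velocity."""
    for lo, hi, sample, mod, fx in EXPRESSIVITY_MATRIX:
        if lo <= velocity <= hi:
            return sample, mod, fx
    raise ValueError(f"velocity {velocity} out of range")
```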

But I think you know that… Renoise team joke

Ah yes, assigning samples to different velocities is great — and of course, nothing new. Glad to see the Renoise team has a sense of humour. :v:

That said, I personally find filter modulation to be a much faster way to achieve what velocity layering is often trying to do — revealing more of the sound as you play harder. Unless you’re doing something more complex, like a glide, which could just as well be handled with a tracker effect.
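
In other words, a single continuous mapping instead of discrete layers. A minimal sketch, with arbitrary frequency bounds:

```python
def velocity_to_cutoff(velocity, min_hz=200.0, max_hz=12000.0):
    """Map a 7-bit velocity (0-127) to a filter cutoff in Hz.

    Exponential interpolation, so equal velocity steps feel like equal
    musical steps; harder playing opens the filter further.
    """
    t = velocity / 127.0
    return min_hz * (max_hz / min_hz) ** t

print(velocity_to_cutoff(32))   # soft hit  -> ~561 Hz
print(velocity_to_cutoff(127))  # hard hit  -> 12000 Hz
```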

Mapping a bunch of samples or modulation layers isn’t always the most inspiring task, at least for me.

Let me know if I missed your point.

I thought you were part of the Renoise team… I’m not from the Renoise team

@Logickin has talked about the modwheel… it can be a key (in real time)

For those who aren’t tracker experts, tracker commands are not so easy to master

The essential thing is that each person finds a workflow that works for them

Real-time playing and sculpting are different things

But seriously, I think that for fun, some real-time input is needed

(There is a problem… some modwheels are shit nowadays… “drift”… the Nektar Impact, for example)
(I don’t know if “optical modwheels” exist) (ALPS potentiometers are the best)

It also seems hard to learn and not future-proof because of its unique interface. Imagine owning a Seaboard that breaks once it is no longer available (a valid concern, because ROLI did file for bankruptcy). All the magic of expression through that controller would be gone, and there aren’t many alternatives, besides perhaps the Continuum by Haken Audio, which is even more expensive and requires relearning the controller from the ground up.

I do believe MPE or MIDI 2.0 is possible in trackers too, but we need to figure out how to hide that abstraction behind tracker commands so that we can use it as well. I also think it might require rethinking the tracker format, because many trackers are based on their older ancestors and a lot of legacy still remains.

Exactly, and I also think there are a lot of cool theoretical features for the tracker workflow that we never seem to explore, like tuning systems other than 12-tone equal temperament: we only need some characters to represent the pitches, instead of being stuck with a piano layout like a piano roll. It is also possible to control things other than samples: VCV Rack and SunVox (via its GPIO module) can send voltages to certain outputs. Or we could do linear-DAW things the tracker way. Why doesn’t anyone think of slicing video instead of audio, just like linear DAWs do with their video clips? Why don’t we use a tracker to control graphics and build demoscene-ish music videos, something like ZGameEditor in FL Studio?

Every time I see an opinion like this, I have mixed feelings, because while you are right… I keep thinking: do trackers always have to be this hardcore for beginners?

The concept is actually simple: it is basically just a step sequencer running vertically, using the keyboard as input instead of the mouse. The hard part is getting used to the keyboard workflow without much visual feedback, but because the rules are consistent, it is not that difficult to understand once you know how it works.

Perhaps we have forgotten to explore music trackers in a more abstract and visual way. Instead of showing a bunch of numbers, we could learn from notation software, where most features and effect commands are exposed as icons and sliders, and adding those effects is a clickable button in a clear location:

This way, beginners can add effects by clicking buttons on a toolbar, without needing to memorize all the FX commands. They can tell what kind of FX is on the current row by the icon, and read its intensity from the sliders and text. For example, you can more or less tell that my mockup image has a slide-down effect applied from row 2 to 5 in the left track, while the right track runs a major upward arpeggio all the way down with the volume fading out. (I know my mockup is missing the instrument column, but you get the idea.)

Technically, we could add a button toggling between beginner mode and normal mode, where beginner mode converts those sliders and icons into actual FX values, so experienced users can still use their original tracker commands instead of being stuck with the more visual approach.

I don’t know; there seem to be a lot of ways to explore the tracker workflow without destroying its spirit (the way adding piano rolls would), while offering better quality-of-life features for beginners. But I really have the feeling that the workflow in many trackers is stuck in the late 90s, with few new ideas brought into them, while other DAWs keep up with new standards and technologies.

It does! It is the Leap Motion Controller:

It is not cheap and a bit CPU-intensive though, especially the second version.


I’m not from the Renoise team. That was a fun misinterpretation. :joy_cat:

Anyway, I agree — tracker commands can be hard at first. It took me years before I really got into it, but once I did, it was worth it. It’s still trial and error sometimes, but in terms of workflow, I think the alternatives are even harder. We’ve yet to find a worthy replacement.

I’d still love to have an MPE keyboard — and I’m sure we’ll see cheaper alternatives soon.

Absolutely, I agree. My idea for a future tracker takes a different route though. I’m skipping MIDI entirely and focusing on raw sample playback. The idea is to treat pitch, glide, velocity and delay as time-domain modulations — not performance data.

Instead of adapting the tracker to modern MIDI, I’m thinking of a system where expressivity is written directly into time and playback rate, using a delay pivot (0x80) and millisecond-based offsets.

So rather than expanding the tracker to meet MIDI, I’m simplifying the performance into something that’s both trackable and tweakable — especially for people who don’t play instruments but want full control.

Yeah — there’s so much unexplored potential in trackers once we let go of the traditional sample-player mindset. I really like that you brought up alternate tunings. The tracker model is actually ideal for things like just intonation, microtonality, or Scala-mapped notes — since we’re not tied to a visual keyboard or equal-spaced piano roll. A character-based pitch system is inherently abstract — and that’s a good thing.

Controlling other domains like video, voltage, or graphics could definitely work too. My current focus, besides actually making music with what’s available :upside_down_face:, is to build a solid core: writing expressive musical performance directly on a time grid — so people can compose performance, not just notes.

Realistically, this will start as an AU/VST plug-in (using Max/MSP). Since I’m sharing this on the Renoise forum, I clearly have no issue if Renoise gets inspired by any of it. :wink:

Totally — and I’ve been thinking along similar lines. One idea is to offer a switchable view: vertical for tracking (note/fx input), and horizontal for waveform and automation editing. Each view optimized for its task.

This is the future! Imagine playing a melody in the air (maybe using neon-colored laser indicators :smile:), and using your hand to glide between notes. Then you simply go back and correct or finesse the result by editing the automation or typing tracker commands.

What do you think — am I being naive, or are these ideas for a plug-in actually doable?


If I am not mistaken, does that mean every row has a delay time, and it advances a row once it has timed out? That’s an interesting take for trackers, and using millisecond delays could be handy for delay compensation.

This is because I really want to have a music tool for writing music in different tunings. I have recently become addicted to 15edo, finding that this tuning more or less aligns with traditional 12-tone equal temperament while also fitting my world-building idea. I also know there are quite a few SunVox users who write music in other tuning systems; however, I don’t really see any music tool that is good for writing such music when there are more than 12 notes per octave.
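
For what it’s worth, the math side is trivial; the hard part is the notation and the tooling, not the numbers. My own sketch, nothing SunVox-specific:

```python
def edo_freq(step, edo=15, base_freq=440.0):
    """Frequency of `step` equal steps above base_freq in an N-EDO tuning."""
    return base_freq * 2.0 ** (step / edo)

# One step above A440 in 12edo vs. 15edo:
print(edo_freq(1, edo=12))  # ~466.16 Hz (a 12edo semitone)
print(edo_freq(1, edo=15))  # ~460.81 Hz (a narrower 15edo step)
```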

Me too. I guess it is time for some innovation in the tracker world; at the very least, those features could attract a new generation of potential tracker users.

Well, it is hard to know at the current stage. I am still learning lower-level programming and audio programming, so I can’t give you good answers, but I am really interested in what a tracker would look like if you used modulation and delays over a playback rate instead of a traditional fixed grid.

Thanks, and I really appreciate your thoughtful responses.

I can totally relate to your interest in alternate tunings and the expressive potential beyond 12EDO — and you’re absolutely right that the tracker format should be well-suited for those ideas.

To clarify your earlier question: the system I’m working on doesn’t wait for a row to “time out” before advancing. Instead, the host DAW (like Logic) keeps the global clock running, and my plug-in (GlideSync) listens and reacts in real time — sample-exactly.

Every row can hold a note event, and each event has a delay value from 00 to FF, giving 256 fine-timing steps per row. There’s a pivot at 0x80, which represents “exactly on the beat.” Values below 0x80 (00–7F) play early (push), and values above (81–FF) play late (drag). Internally it’s all translated into millisecond/sample offsets depending on BPM and LPB — there’s no tick system like in traditional trackers.

My AI friend (:joy_cat:) even calculated this for me:

delay_ms = ((hex - 0x80) × samplesPerLine / 256) × (1000 / sampleRate)

So basically, you’re composing relative timing, not just placing notes on a grid. This allows for grooves, laid-back notes, early triggers — all visible and tweakable directly.
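
For anyone who wants to sanity-check that formula, here it is as a small runnable sketch (simplified, not the actual plug-in code; LPB means lines per beat, as in Renoise):

```python
def glidesync_delay_ms(hex_value, bpm, lpb, sample_rate=48000):
    """Offset in ms for a delay byte where 0x80 = exactly on the line.

    Negative results play early (push), positive results play late (drag).
    """
    samples_per_line = sample_rate * 60.0 / (bpm * lpb)
    offset_samples = (hex_value - 0x80) * samples_per_line / 256.0
    return offset_samples * 1000.0 / sample_rate

# At 120 BPM and 4 LPB, 0x90 drags the note ~7.81 ms behind the line:
print(glidesync_delay_ms(0x90, bpm=120, lpb=4))
```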

GlideSync is being built as an AU/VST plug-in using Max/MSP. The goal is to let you program “artificial performances” — like a synth solo, a bass groove or fill — with full expression but no need to record live. Everything is editable tracker-style, but designed to complement your DAW, not replace it.

I’m still very new to coding as well, but I’ve come a fair way for a beginner. There’s still a long journey ahead — and I’m balancing this with my other projects, including music-making. But I’m always open to collaborations or ideas. :v:

Slower than a modwheel, and it consumes more energy… but more precise.
Not the ultimate option, but a nice one (it reminds me of a Theremin)

Jean-Michel Jarre did it with lasers (something like that)

Neither is better… it depends on the task

I was talking about modwheels which rely on old mouse technology.
Dumb modwheels have no basic drift correction.

A clock could check for small drift and correct it… a sort of ‘watchdog’

Very, very small amplitude = correction
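
Something like this, as a rough sketch (the threshold value is arbitrary):

```python
def watchdog(raw_value, last_stable, threshold=2):
    """Deadband 'watchdog' for a drifting modwheel (7-bit CC values).

    Changes smaller than `threshold` are treated as drift and snapped
    back to the last stable value; larger moves pass through as gestures.
    """
    if abs(raw_value - last_stable) <= threshold:
        return last_stable  # very small amplitude -> assume drift, correct it
    return raw_value        # real movement -> accept the new value
```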