Revisiting Resynth/Native Synthesis

I’m no programmer, but I’m damn sure a script kiddie: VERY basic C++, Python and, because of Renoise, Lua knowledge.

I’m interested in input from you guys, and I would like to know what could theoretically be done to get some form of native synthesis within Renoise. What’s possible? The old tool Resynth had some bugs but came pretty close to something solid, even letting you save your patches as XRNI instruments.

This is solely for informational purposes as of now, but a goal of mine is to have a hand in bringing some form of modular synthesis (à la Phase Plant) to Renoise.

2 Likes

It is possible to start with a 1-sample-length pulse wave, then filter from there to create saws and sines, much like hardware. You can also use simple single-cycle waveshapes; some examples:

2_Osc_Monosynth_1.xrni (3.3 KB)
Wave Folder Sine.xrni (3.4 KB)

Probably much more is possible. I, for one, am glad you’re asking, so thanks for that; hth.
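If you want to script that instead of drawing it by hand, here is a minimal Lua sketch using the Renoise sample buffer API: it writes one cycle of a pulse wave into the currently selected sample slot and loops it, so you can then filter it towards saws/sines with modulation or fx. The cycle length, pulse width and sample name are arbitrary example values, and it assumes an instrument with at least one sample slot is selected.

```lua
-- Sketch: write a single-cycle pulse wave into the selected sample and loop it.
-- Assumes the selected instrument has at least one sample slot.

local CYCLE_FRAMES = 168   -- one cycle; 44100 / 168 ≈ 262 Hz (around middle C)
local PULSE_WIDTH  = 0.25  -- 25% duty cycle (example value)

local sample = renoise.song().selected_sample
assert(sample, "select a sample slot in the Sampler first")
local buffer = sample.sample_buffer

-- rebuild the buffer: mono, 44.1 kHz, 32 bit, one cycle long
if buffer.has_sample_data then
  buffer:delete_sample_data()
end
buffer:create_sample_data(44100, 32, 1, CYCLE_FRAMES)

buffer:prepare_sample_data_changes()
for frame = 1, CYCLE_FRAMES do
  local phase = (frame - 1) / CYCLE_FRAMES
  local value = (phase < PULSE_WIDTH) and 1.0 or -1.0
  buffer:set_sample_data(1, frame, value)
end
buffer:finalize_sample_data_changes()

-- loop the single cycle so it sustains like an oscillator
sample.loop_mode = renoise.Sample.LOOP_MODE_FORWARD
sample.loop_start = 1
sample.loop_end = CYCLE_FRAMES
sample.name = "Pulse 25%"
```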

2 Likes

Just as a reference (Renoise + API).

From the API, with Lua, for the Sampler/waveform it is possible to destroy and rebuild an audio wave, that is, its buffer (mono or stereo), changing its number of frames and its quality (sample rate, bit depth), or to change the amplitude of its frames without having to destroy the buffer (this is processed much faster!).

As long as it doesn’t involve destroying and rebuilding the entire buffer, the results are virtually instantaneous on samples of short duration (1-15 seconds).

Destroying and rebuilding the buffer does not imply deleting the containing sample (the sample keeps several associated parameters). Therefore, for certain uses it would be possible to build “real-time” Lua tools (note the quotes).
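As a concrete example of the “change the amplitude without destroying the buffer” case, a minimal sketch that halves the level of every frame of the selected sample (it assumes the selected sample already contains data):

```lua
-- Sketch: scale the amplitude of every frame in place, without rebuilding
-- the buffer (frame count, sample rate and bit depth stay untouched).

local buffer = renoise.song().selected_sample.sample_buffer

if buffer.has_sample_data then
  buffer:prepare_sample_data_changes()
  for channel = 1, buffer.number_of_channels do
    for frame = 1, buffer.number_of_frames do
      local value = buffer:sample_data(channel, frame)  -- value in -1..1
      buffer:set_sample_data(channel, frame, value * 0.5)
    end
  end
  buffer:finalize_sample_data_changes()
end
```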

There are already several Lua tools that deal with synthesis. But most of them can be processing very large buffers (millions of frames), which gives a sensation of slowness.

Do not confuse transforming an audio wave with applying effects or modulation (with the latter, the original audio wave always stays intact)…

From the API it is not possible to apply effects or modulation (as far as I know). It is only possible to manipulate the accessible parameters of existing effect and modulation devices (native or plugins), as well as the parameters of the sample itself…

2 Likes

Would it be possible to send whatever audio from this hypothetical synth to the FX chain in the sampler?

Placing the synth output in between the sampler and the sampler fx in the dsp chain? For instance, sending osc/generator output to a native fx chain?

I know you can emulate a pretty decent subtractive synth setup using samples of basic waveforms and sending them to fx chains with sends. But since they are samples, even when looped, aliasing becomes very apparent at extreme pitches.

I am going to expand here because I am passionate about this topic…

From a native Renoise instrument you can generate sound in several ways:

  1. With the native sampler, using an audio wave (what appears in the waveform). For this to work, a sample slot must contain a buffer with data (with its number of channels, sample rate and bit depth); this path supports modulation, instrument fx and track dsp. From a note you can trigger several audio waves (layers in keyzones).
  2. With a single instrument VST plugin (no modulation or instrument fx).
  3. With MIDI sound through the MIDI OUT (no modulation, no instrument fx, no track dsp; the notes sound, but no audio is sent to any track).
  4. Or any combination of the 3 previous options.

Basically, any of these options is the sound base that you trigger via notes.

Starting from a native audio wave, a sample from the sample box (what appears in the waveform), you can modulate it and send it to instrument fx in a later stage. The result is played on the audio tracks that have notes associated with the instrument, then passes through the track’s dsp effects. The track output (including send tracks) eventually goes to the master track.

A simple diagram from an audio wave (base sound, the waveform) on a native instrument (with 6 positions):
1- base sound → 2- modulation ins → 3- fx ins → 4- dsp track → 5- dsp master track → 6- render song.

So, starting from a native sample (position 1) with its data buffer, it is possible to manipulate the data in that buffer. If we do not change the number of frames in the sample, it is possible to manipulate the amplitude of all the sample values “practically in real time”, as long as the number of frames is not very large…

The buffer is made up of consecutive frames, “digital data containers”. One way to explain it is that each frame stores, per channel, a sample value (its amplitude) at that position in time. It is this value that can be manipulated, specifically its amplitude in the -1 to 1 range, practically in real time, as long as the audio wave does not contain a very high number of frames. This way the buffer does not need to be destroyed, because you do not change the number of frames (rebuilding the buffer can take too long). If you iterate over all the frames with some specific function, you can change the base sound of the audio wave, since you are manipulating the amplitude of its samples (the data inside each frame).

So you can change the sound by manipulating the data in the buffer (without destroying it), specifically the amplitude of each sample, using iteration functions in a Lua tool: either run it from a button, or create a tool that detects the trigger and release of each note and, during that time, transforms the amplitude of all the samples of the audio waveform in some way (see the sketch below).
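A sketch of the “run it through a button” idea, wrapped as a tool menu entry. The helper name, menu path and the soft-saturation transform are just example choices of mine, not part of any existing tool:

```lua
-- Sketch: a tool menu entry that applies an arbitrary per-frame transform
-- to the selected sample's waveform (here: a tanh-style soft saturator).

local function transform_selected_sample(shape_fn)
  local sample = renoise.song().selected_sample
  if not sample or not sample.sample_buffer.has_sample_data then
    return
  end
  local buffer = sample.sample_buffer
  buffer:prepare_sample_data_changes()
  for channel = 1, buffer.number_of_channels do
    for frame = 1, buffer.number_of_frames do
      buffer:set_sample_data(channel, frame,
        shape_fn(buffer:sample_data(channel, frame)))
    end
  end
  buffer:finalize_sample_data_changes()
end

renoise.tool():add_menu_entry {
  name = "Sample Editor:Process:Soft Saturate (example)",
  invoke = function()
    -- tanh keeps the result inside -1..1 whatever the input level
    transform_selected_sample(function(v) return math.tanh(1.5 * v) end)
  end
}
```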

Aside from manipulating the base sound of the audio wave, you can apply modulation and instrument fx.

So in summary, in Renoise there are only 2 things:

  1. the audio waveform, which is the base of the sound and which you can manipulate (a sequence of frames), which is equivalent to granular digital synthesis,
  2. and also, add modulation and effects on top of that sound base.

If you rebuild the buffer, you can work with subtractive digital synthesis, generating audio waves that are mathematically “easy” to create, such as a sine, sawtooth, square, etc. You can even employ additive digital synthesis techniques, which involve generating different audio waves and mixing them into the result (see the sketch below).
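For instance, a minimal sketch of the additive case: rebuild the selected sample’s buffer and fill one cycle with a few mixed sine harmonics (the harmonic amplitudes and cycle length are arbitrary example values):

```lua
-- Sketch: additive synthesis by mixing a few sine harmonics into one cycle.
-- Assumes an instrument with at least one sample slot is selected.

local CYCLE_FRAMES = 168
local HARMONICS = { 1.0, 0.5, 0.33, 0.25 }  -- fundamental + 3 overtones

local sample = renoise.song().selected_sample
local buffer = sample.sample_buffer

if buffer.has_sample_data then
  buffer:delete_sample_data()
end
buffer:create_sample_data(44100, 32, 1, CYCLE_FRAMES)

buffer:prepare_sample_data_changes()
for frame = 1, CYCLE_FRAMES do
  local phase = (frame - 1) / CYCLE_FRAMES
  local value = 0
  for harmonic, amp in ipairs(HARMONICS) do
    value = value + amp * math.sin(2 * math.pi * harmonic * phase)
  end
  -- rough normalization so the mix stays inside -1..1
  buffer:set_sample_data(1, frame, value / #HARMONICS)
end
buffer:finalize_sample_data_changes()

sample.loop_mode = renoise.Sample.LOOP_MODE_FORWARD
sample.loop_start = 1
sample.loop_end = CYCLE_FRAMES
sample.name = "Additive mix"
```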

As an example of a Lua digital sound synthesis tool, you have the MIDI Universal Controller (MUC) tool. It has a section called “Sampler Waveform”, which gives access to a sub-tool called “Wave Builder”, which, based on an audio wave buffer, works with granular synthesis (basically manipulating the amplitude of the samples in each frame), subtractive synthesis (with up to 8 wavetables) and additive synthesis, mixing these wavetables.

But a “real-time” tool should only involve manipulating the amplitude of the samples, without touching the number of frames in the buffer, and only as long as said buffer does not contain a large number of frames (because you have to iterate over the entire range to change each amplitude value, multiplied by 2 if you use 2 channels, left and right).

2 Likes

IMHO, better to use a workstation VSTi and not waste your time, except maybe as a sporting exercise. Renoise is very attractive due to its very nice GUI, but the instrument section would need a complete overhaul to be a real synth: envelopes are far from precise, you can’t route filters, and modulation isn’t sample accurate either. There are no wavetables at all, no cross-osc modulation, no modulation matrix. The Renoise instrument section/Redux is a sampler.

Wait for Black Friday, stuff will be cheap. Some examples:
Falcon 3.0 (synth + sampler, MPE), ZebraHZ/Legacy, Phase Plant (MPE), MSoundFactory (synth + sampler, MPE)

Or try one of the many extremely good free synths:
Surge XT (MPE), Vital (MPE)

Better to get used to a modern VST. It saves you a lot of lifetime.

2 Likes

The GUI and the overall theme of having a high level of control are definitely where the idea comes from. It’s still missing some VERY crucial elements, like you mentioned.

But it’s like we’re so close to having a DAW that can do virtually anything, especially with tools (API update?). Hell, we really don’t need a native piano roll at all; enter Piano Roll Studio.

I actually own Phase Plant, so yeah, no need to ever buy another synth :stuck_out_tongue_closed_eyes: I just think the sampling capabilities of Renoise, expanded with synthesis, would be :ok_hand:t6:

Or at least a way to insert instrument plugins in the sampler dsp chain, maybe parallel to the sampler. I HATE using the track fx for sound design, and you’re forced to do that when using plugin sounds you want to mess around with without turning them into samples. I use my mixer section exclusively for compression, EQ, etc.

2 Likes

Parallel chains and complex fx chain routing in the Doofer would do the trick as well…

1 Like

How do?

Edit: How so?

We can’t yet. I’m saying we need that :slight_smile:
But there are many methods of native synthesis already available to us. Basically all my earlier tutorial videos outline various methods of native synthesis. Plus a couple other methods I haven’t demoed yet :upside_down_face:

1 Like

Aww come on man, quit holding out on us :smirk: That granular tutorial was :fire: btw. There is a 30-minute tutorial on modular synthesis somewhere on YT.

Maybe with some ingenuity my goal could be accomplished, which is pretty much Phase Plant, Renoise style, with perhaps the entire process streamlined in a tool.

I’m gonna check out MUC pretty soon and see what else I can squeeze out of Renoise.

1 Like

Or try Cardinal… Could be synthesis heaven once you get used to it. It also comes with all the Surge XT modules. And it’s free.

1 Like

I recall doing wavetables before VSTs existed, on FT2: you just needed to change instruments to scan the table (aka a synth sweep with small loop points defined across multiple instruments). PWM, yep, that too, again with multiple instruments (Square 10% PW, Square 20% PW, etc.). There is an xrns here: Just a bunch of squares.. (aka "Four Squares"); if you remove the filters you can see the PWM clearly in the oscilloscope. Just saying… they (the code pasters) got a lot of this stuff from trackers.

1 Like

You can do PWM by using one saw wave and an inverted saw wave, slightly detuned. That’s exact PWM then. The only problem is you can’t change the phase…
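If anyone wants to set that up quickly, here is a rough Lua sketch: it writes a single-cycle saw into one new sample slot of the selected instrument and an inverted copy into a second slot, then detunes the second slightly so the two drift against each other like PWM. The `write_saw` helper, cycle length and fine-tune amount are my own example choices, and you may still need to layer both slots over the same key range in the keyzone view.

```lua
-- Sketch: PWM from a saw plus a slightly detuned, inverted saw,
-- each in its own sample slot of the selected instrument.

local CYCLE_FRAMES = 168
local instrument = renoise.song().selected_instrument

local function write_saw(sample, inverted, name)
  local buffer = sample.sample_buffer
  if buffer.has_sample_data then
    buffer:delete_sample_data()
  end
  buffer:create_sample_data(44100, 32, 1, CYCLE_FRAMES)
  buffer:prepare_sample_data_changes()
  for frame = 1, CYCLE_FRAMES do
    local value = 2 * (frame - 1) / CYCLE_FRAMES - 1  -- ramp from -1 to +1
    if inverted then value = -value end
    buffer:set_sample_data(1, frame, value)
  end
  buffer:finalize_sample_data_changes()
  sample.loop_mode = renoise.Sample.LOOP_MODE_FORWARD
  sample.loop_start = 1
  sample.loop_end = CYCLE_FRAMES
  sample.name = name
end

local saw_a = instrument:insert_sample_at(#instrument.samples + 1)
local saw_b = instrument:insert_sample_at(#instrument.samples + 1)
write_saw(saw_a, false, "Saw")
write_saw(saw_b, true, "Saw (inverted)")
saw_b.fine_tune = 4  -- small detune; adjust to taste
```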

2 Likes

PWM nowadays is just considered syncing, because it’s no longer exclusive to pulse waves on most synths.

…ahh the good ole days when a subtractive synth could have two oscillators, one 12 dB LP filter, MAYBE an LFO, and be called a BEAST lol

So here is the tutorial where the ring mod is used to generate a sine wave using the DC offset of a blank sample… can this be further explored? I would think a tool could automate this process, at least to the point where we get generated waveforms vs sampled ones.

And what exactly does the Hydra do? Thanks again for all the input.

2 Likes