[idea] General wave for the whole song. Wave tab

As a result of the discussion in this thread, I am opening this new thread so as not to derail it. If you are curious about the discussion, read the content around the previous link.

@ffx said:

A waveform could be pre-generated in the background, e.g. when you stop playing. It wouldn’t be 100% exact, but it would surely help a lot. But then other details, like note blocks instead of note-offs, should arrive too, IMO.

What I’m wondering is whether drawing the wave in the background requires a lot of CPU processing or is something light. That way, the wave would only have to be updated each time parameters or notes are entered in the pattern editor/phrase editor (which is something I do not like). It seems a very complicated subject, but I would not rule out that in the future this could be perfectly possible, given the power of existing hardware. The question is what impact all of this has on overall performance.

For me, it would be great if this were possible. Each time you enter or remove anything from the song, you would see the impact directly on that general audio wave. At some point this will be implemented in other audio programs, even if it involves some delay. But do not forget that this can be very complex to implement. Imagine very long songs: any effect, equalization, or filter influencing the whole song would change the entire general wave. It would be a little beast. Therefore, it is necessary to know the real performance impact of having to redraw the wave, or parts of it, constantly. It must be very well thought out so that it does not require a lot of CPU processing.
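As a point of reference on the CPU question: overview waveforms in editors are typically drawn from a reduced per-pixel min/max "peak" table rather than from the raw audio, so the redrawing itself is cheap once the audio exists. A minimal sketch in Lua (the function and its shape are purely illustrative, nothing from Renoise):

```lua
-- Reduce a table of sample values (-1..1) to per-bucket min/max pairs,
-- the usual data behind a zoomed-out waveform overview (one bucket per pixel).
local function peaks(samples, buckets)
  local out = {}
  local per = math.ceil(#samples / buckets)
  for b = 1, buckets do
    local lo, hi = 1, -1
    for i = (b - 1) * per + 1, math.min(b * per, #samples) do
      local s = samples[i]
      if s < lo then lo = s end
      if s > hi then hi = s end
    end
    out[b] = { min = lo, max = hi }
  end
  return out
end
```

The expensive part is therefore not the drawing but producing the audio that feeds this reduction, which still requires a full render through every plugin and DSP in the chain.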

The other path would be a wave generated through an update button. There could also be a pattern range selector: show me patterns 10 to 30; you do not have to show the whole wave. All this would be useful in post-production. OK, your song is practically finished; now check that everything is in place.

It would be very useful for correcting the volume of certain areas compared to the rest of the song.

One way to do all this in steps is to render the song, then load that render into a sample to see what you’ve done. How many minutes fit in a sample? But this process is useless: the interesting thing is to be able to do all that before rendering the song.
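On the side question of how many minutes fit in a sample: the memory cost is simply seconds × sample rate × channels × bytes per sample. A quick illustrative helper:

```lua
-- Rough memory footprint of audio held uncompressed in a sample buffer.
local function sample_bytes(seconds, rate, channels, bit_depth)
  return seconds * rate * channels * (bit_depth / 8)
end

-- A 3.5-minute stereo render at 48 kHz, 32-bit:
-- sample_bytes(210, 48000, 2, 32) --> 80640000 bytes (roughly 77 MiB)
```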

Otherwise, you have no choice but to use an additional program, and do the process in steps:

  1. Your song is in the post-production process, and you need a general view of it.
  2. Save the entire song rendered in WAV format.
  3. Run another program that lets you visually analyze a complete audio wave down to the millimeter.
  4. Determine the areas to correct. You will easily find the exaggerated peaks and the poor valleys. There is nothing better than a general wave to determine whether your song has the volume structure you want.
  5. Go back to Renoise and locate, through the time clock, the conflicting zone or zones to be corrected.
  6. Readjust the overall volume of the song.
  7. Render the song again.
  8. Reload it in the wave analyzer program.
  9. Check again that everything is in place.
  10. Repeat the process as many times as necessary until you get the volume structure you want for your song.

If Renoise had something to accelerate this whole process, it would be magnificent. With the spectrum analyzer or the equalizer you can guide yourself at each point, but they do not give you that global vision of the whole song. I’m thinking at all times about the post-production process, the mastering. That’s why a “wave update button” would not be crazy: only use the general wave when necessary, even if that requires a short waiting time while it loads.

This would also help the composer create albums whose tracks are not out of balance with each other due to volume issues. This is a fairly common problem in Renoise.

This could be the general wave of a 3.5-minute song:

[attachment: wave_1.png]

With a general view, it is very easy to locate the conflictive zones and, above all, to give more dynamism to the general audio wave, which implies weaker areas with very little volume and others at maximum volume, with great force. Obviously, it will also depend on the musical style: an orchestral piece is much more dynamic than electronic music, which is usually very compressed, with the volume almost always at its maximum throughout the wave.

If the audio is already rendered, it takes less than a second to load the wave. Would it be possible to draw a general wave of the song without doing the whole rendering process, to speed things up?

The interesting thing about all this is that the composer can modify any part of his song without leaving Renoise, finding those conflicting zones. A global view is gold for solving them. It is not about modifying an audio wave, but about accessing the conflicting pattern and correcting the corresponding parameters there.

It would be good to discuss this issue together and see if there is any way to approach a solution. I guess it would need a timeline in minutes and seconds, a progress bar, an update button, and a range of patterns to show. Syncing the progress bar with the position in the pattern editor would be more than enough; it would be quick enough to correct a song this way.

the waveform view is nothing else than a visual representation of the actual audio in uncompressed state…with all details summing up to the result…there is no nice “shortcut” to generating this…you need to render the actual audio for it, through all plugins/DSP…even if you simplify the wave, you need the real thing first in order to simplify it…you know, you could for example render at 11025 instead of 44100 to speed up rendering…but then, as you probably know, the results would not be 100% of what could be expected when rendering in normal or high quality…transients messed up, filters behaving totally differently, etc…

I like another idea though, that might intersect with your vision…Renoise would need audio tracks with visualisation, and a function to render/freeze/convert “midi” tracks to audio tracks with all their notes and automations, and also render meta-device actions based on audio (signal followers, key/vel trackers, synced LFOs…) to new automation tracks…then visualise them…you can kind of already do this rendering to visualise (I do, for example, to align transients), but with lots of manual steps involved and without the nice bird’s-eye view of watching zoomable waveforms next to each other in the pattern or sequence editor…

Normally you would want to work with oscilloscope plugins to get hold of the waveforms…you know, to see how well-formed and well-aligned your transients are, how much amplitude space the low end takes in a master, etc…the oscilloscope plugins visualise the audio fed through them in realtime and turn it into a waveform…many plugins can be time-scale tuned, so you see a longer waveform of some seconds instead of the direct oscilloscope mode…you can also tune the Renoise master scope to display a longer time frame instead of the short-time oscilloscope (I use that), but I guess proper plugins are much better…

This question goes to Danoise:

Is it possible to create a tool that does the following?

  1. Render the song.
  2. Load this rendered song into the first sample of the selected instrument, or wherever you prefer (a new instrument < 255)…

[attachment: wave_2.png]

The sample editor is quite complete for this task, even if it means rendering the song first. I remember seeing a tool that allowed you to render the song, in MP3?

Then, setting the time bar to “Minutes” is enough to jump to the area you want to correct.

[attachment: wave_3.png]

If this were possible, I would look into creating the tool. For me it would be quite useful, and I would not need another program for this task, even if the solution is not ideal.

renoise.song():cancel_rendering()

-- Start rendering a section of the song or the whole song to a WAV file.
-- Rendering job will be done in the background and the call will return
-- back immediately, but the Renoise GUI will be blocked during rendering. The
-- passed 'rendering_done_callback' function is called as soon as rendering is
-- done, e.g. successfully completed.
-- While rendering, the rendering status can be polled with the song().rendering
-- and song().rendering_progress properties, for example, in idle notifier
-- loops. If starting the rendering process fails (because of file IO errors for
-- example), the render function will return false and the error message is set
-- as the second return value. On success, only a single "true" value is
-- returned. Parameter 'options' is a table with the following fields, all optional:
--
-- options = {
--   start_pos,     -- renoise.SongPos object. by default the song start.
--   end_pos,       -- renoise.SongPos object. by default the song end.
--   sample_rate,   -- one of 22050, 44100, 48000, 88200, 96000, 192000. \
--                  -- by default the player's current rate.
--   bit_depth,     -- number, one of 16, 24 or 32. by default 32.
--   interpolation, -- string, one of 'default', 'precise'. by default 'default'.
--   priority,      -- string, one of "low", "realtime", "high". \
--                  -- by default "high".
-- }
--
-- To render only specific tracks or columns, mute the undesired tracks/columns
-- before starting to render.
-- Parameter 'file_name' must point to a valid, maybe already existing file. If it
-- already exists, the file will be silently overwritten. The renderer will
-- automatically add a ".wav" extension to the file_name, if missing.
-- Parameter 'rendering_done_callback' is ONLY called when rendering has succeeded.
-- You can do something with the file you've passed to the renderer here, like
-- for example loading the file into a sample buffer.
renoise.song():render([options,] filename, rendering_done_callback)
  -> [boolean, error_message or nil]

-- See renoise.song():render(). Returns true while rendering is in progress.
renoise.song().rendering
  -> [read-only, boolean]

-- See renoise.song():render(). Returns the current render progress amount.
renoise.song().rendering_progress
  -> [read-only, number, 0.0-1.0]
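Going by the documentation quoted above, a well-formed `options` table could be built like this (a sketch; only the field names and allowed values come from the docs, the helper itself is hypothetical):

```lua
-- Build an options table for renoise.song():render(), following the API
-- documentation above. Omitted fields fall back to the documented defaults.
local function make_render_options(sample_rate, bit_depth)
  return {
    -- start_pos / end_pos omitted: defaults are the song start and end
    sample_rate = sample_rate or 44100,  -- 22050/44100/48000/88200/96000/192000
    bit_depth = bit_depth or 32,         -- 16, 24 or 32
    interpolation = "default",           -- 'default' or 'precise'
    priority = "high",                   -- "low", "realtime" or "high"
  }
end
```

Inside Renoise this would then be passed along as `renoise.song():render(make_render_options(48000, 32), filename, done_callback)`.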

Apparently, it renders the song and saves it to a file called xxx.wav. Afterwards, the tool should be able to access that xxx.wav and put it in a new sample. It could be in a new instrument < 255. It would be great to be able to do all this with a single mouse click.

Edit: from what I see, it would also be possible to create a tool window with several settings, and even indicate where to load the new WAV (into a new instrument, or into the selected sample…).

This question goes to Danoise:

Is it possible to create a tool that does the following?

Sure - I think you answered the question yourself by quoting from the API documentation :slight_smile:


@Danoise. Yes!

I have built this function to run some tests, but I get strange errors depending on where I place this line:

song:instrument( song.selected_instrument_index):sample(1).sample_buffer:load_from( filename )

function rnd_render_song()
  local song = renoise.song()
  local settings = {}
  --renoise.SongPos( sequence, line )
  local start_pos = renoise.SongPos()
  local end_pos = renoise.SongPos()
  start_pos.sequence = 1
  end_pos.sequence = 2
  settings["start_pos"] = start_pos
  settings["end_pos"] = end_pos
  --
  settings["sample_rate"] = 48000
  settings["bit_depth"] = 32
  settings["interpolation"] = "precise"
  settings["priority"] = "high"
  ---
  local filename = os.tmpname("wav")
  --temporary folder: C:\Users\USER_NAME\AppData\Local\Temp\Renoise-0-3644\
  ---
  --called only once rendering has finished successfully
  local function rendering_done_callback()
    --load the rendered wav into the selected instrument's first sample
    print(filename)
    song:instrument( song.selected_instrument_index):sample(1).sample_buffer:load_from( filename )
    song:instrument( song.selected_instrument_index).name = "Rendered"
    os.remove( filename )
  end
  ---
  --render the song (returns immediately; rendering runs in the background)
  song:render( settings, filename, rendering_done_callback )
end
---

It returns this error (window): “Failed to open the file for writing.”

function rnd_render_song()
  local song = renoise.song()
  local settings = {}
  --renoise.SongPos( sequence, line )
  local start_pos = renoise.SongPos()
  local end_pos = renoise.SongPos()
  start_pos.sequence = 1
  end_pos.sequence = 2
  settings["start_pos"] = start_pos
  settings["end_pos"] = end_pos
  --
  settings["sample_rate"] = 48000
  settings["bit_depth"] = 32
  settings["interpolation"] = "precise"
  settings["priority"] = "high"
  ---
  local filename = os.tmpname("wav")
  --temporary folder: C:\Users\USER_NAME\AppData\Local\Temp\Renoise-0-3644\
  ---
  local function rendering_done_callback()
  end
  ---
  --render the song (render() returns immediately; the wav is written in the background)
  song:render( settings, filename, rendering_done_callback )
  --load the wav: note this code runs BEFORE rendering has finished, so the
  --file is still empty or incomplete here, which is why the import below fails
  print(filename)
  song:instrument( song.selected_instrument_index):sample(1).sample_buffer:load_from( filename )
  song:instrument( song.selected_instrument_index).name = "Rendered"
  os.remove( filename )
end
---

It returns this error (window): “Sample import failed with the error: ‘Windows DirectShow Audio: no decoder can handle the given audio file, or decoding failed (Internal Error: ‘Failed to render the graph [80070020]’).’”

How do I fix this? I can build a window for the rest of the parameters myself, in order to change them. But I do not know why it does not correctly load the temporary WAV file into the sample buffer.

Can you try it?

Edit: It seems that the first function is the more correct one. Is it possible that the error “Failed to open the file for writing” is a file permissions problem? I use Windows 10, 64-bit.

You can’t render in the background, so the tool is pretty useless.


I do not want to render in the background. :slight_smile:

Well, I would imagine a really simple tool: You open the tool window, boom, it shows you a preview of the current song position (master), maybe with zoom in / out, or one pattern pre / post. You close it, boom, rendering in the background stops again.


What I am trying to build is a simple tool: just render the whole song and put it into a sample. There you will have the whole wave of the song. I suppose that later I will be able to add a button to jump to the pattern position that corresponds to the marker in the wave, simply by synchronizing the time, although it is very easy to do manually.
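For the “jump to the pattern position that matches the wave marker” idea, the time sync is mostly arithmetic as long as the tempo is constant (tempo automation would break this): lines per second is BPM × LPB / 60. A hypothetical helper, whose values in a real tool would come from `renoise.song().transport`:

```lua
-- Map a frame offset in the rendered sample back to a song line (1-based),
-- assuming a constant tempo: lines per second = bpm * lpb / 60.
local function frame_to_line(frame, sample_rate, bpm, lpb)
  local seconds = frame / sample_rate
  local lines_per_second = (bpm * lpb) / 60
  return math.floor(seconds * lines_per_second) + 1
end

-- One second into a 48 kHz render at 120 BPM and 4 LPB lands on line 9:
-- frame_to_line(48000, 48000, 120, 4) --> 9
```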

The issue is getting the general audio wave of the song quickly. Then add a “delete sample or instrument” button, and that’s it.

I think I can program everything. But I need to avoid the error mentioned.

There we go! I have restarted Renoise and it seems it no longer returns the error “Failed to open the file for writing”.

It seems that the function is correct. I just need to create the window with the options and that’s it! :smiley:

Here it is: https://forum.renoise.com/t/new-tool-3-1-1-samrender-v1-3-build-007-january-2019/49294

[attachment 8087]

I wish there were some way to do this faster: getting a general wave of the song faster than the audio rendering process allows. Anyway, the SamRender tool gets the general wave, and with that it is possible to work without leaving Renoise…