Minor (hopefully) feature request

I’m sure this must have come up before, but it would be great to be able to render a selection/track/etc. to the current or a new sample slot of an existing instrument.

Example usage: rendering stutters/fills/glitches that have been programmed in the pattern editor down into variations that can be triggered by single notes. This makes it easy to build up lots of variations on a beat and experiment with them.

Thanks for listening!

If I understood you correctly, this is already possible. Just highlight the selection in the pattern editor, right click and select “Render to sample.”

-Edit- Oh, to an existing instrument! I guess some smarty pants would be able to script it for you.

The behaviour of the rendering is not changeable… I really wouldn’t know why you would want this, btw. You can easily remove the old instrument?

So it’s not possible to make a tool that adds a menu entry “Render selection to sample slot”? It doesn’t change the behavior of the render, it just adds a new option.

Instruments can contain a number of samples. Right now, render selection just puts the result into a whole new instrument containing one sample. You might want to make an instrument made up of many rendered-to-sample samples.

Exactly! It’s about being able to render samples into “kits” (for want of a better word) that can be spread across the keyboard and played. Stutters, fills, glitches, even just plain old “variations” on a short beat, whatever it might be. Rendering to a new instrument, copying that into an existing sample slot, then deleting the unwanted new instrument could be made a much quicker and easier process.

Worst case scenario, copy/paste samples across instruments?

Sure, I know it’s possible now, it’s just a bit of a fiddle. This is very much a convenience feature, and surely one that is minimal in terms of development time. If the render can target the first slot of a new instrument, I presume it can’t be a huge stretch to make it target the currently selected slot or the next slot of an existing instrument.

I know there must be a long list of FRs so I’m not trying to rush anyone! I’m a software developer, I know from experience that these things take time!

What are you waiting for?

Drawing on experience from the number of tools I’ve made, I would like to help you get started by warning you about the problems you’ll face.
First of all, the API equivalent of “Render selection to sample” does not yet exist. The relevant function is the one below (from the documentation, Song.API.lua):

  
-- Start rendering a section of the song or the whole song to a WAV file.  
-- Rendering job will be done in the background and the call will return  
-- back immediately, but the Renoise GUI will be blocked during rendering. The  
-- passed 'rendering_done_callback' function is called as soon as rendering is  
-- done, e.g. successfully completed.  
-- While rendering, the rendering status can be polled with the song().rendering  
-- and song().rendering_progress properties, for example, in idle notifier  
-- loops. If starting the rendering process fails (because of file IO errors for  
-- example), the render function will return false and the error message is set  
-- as the second return value. On success, only a single "true" value is  
-- returned. Param 'options' is a table with the following fields, all optional:  
--  
-- options = {  
--   start_pos,     -- renoise.SongPos object. by default the song start.  
--   end_pos,       -- renoise.SongPos object. by default the song end.  
--   sample_rate,   -- number, one of 22050, 44100, 48000, 88200, 96000. \  
--                  -- by default the player's current rate.  
--   bit_depth,     -- number, one of 16, 24 or 32. by default 32.  
--   interpolation, -- string, one of 'cubic', 'sinc'. by default 'cubic'.  
--   priority,      -- string, one of "low", "realtime", "high". \  
--                  -- by default "high".  
-- }  
--  
-- To render only specific tracks or columns, mute the undesired tracks/columns  
-- before starting to render.  
-- Param 'file_name' must point to a valid, maybe already existing file. If it  
-- already exists, the file will be silently overwritten. The renderer will  
-- automatically add a ".wav" extension to the file_name, if missing.  
-- Param 'rendering_done_callback' is ONLY called when rendering has succeeded.  
-- You can do something with the file you've passed to the renderer here, like  
-- for example loading the file into a sample buffer.  
renoise.song():render([options,] filename, rendering_done_callback)  
 -> [boolean, error_message]  
  
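Just to give an idea, here is a rough, untested sketch of filling the options table from the current pattern selection and kicking off a render. The use of selection_in_pattern and the placeholder filename are my guesses; muting the unwanted tracks/columns and loading the result back are left out here.

-- Rough sketch (untested): render the current pattern editor selection.
local song = renoise.song()
local sel = song.selection_in_pattern  -- nil when nothing is selected
assert(sel, "Make a selection in the pattern editor first")

-- map the selection onto the song positions the renderer expects
local start_pos = renoise.SongPos()
start_pos.sequence = song.selected_sequence_index
start_pos.line = sel.start_line

local end_pos = renoise.SongPos()
end_pos.sequence = song.selected_sequence_index
end_pos.line = sel.end_line

local options = {
  start_pos = start_pos,
  end_pos = end_pos,
  sample_rate = 44100,
  bit_depth = 24,
  interpolation = "sinc",
  priority = "high",
}

-- placeholder path; a real tool would ask for a proper temp filename instead
local ok, err = song:render(options, "render_selection_test.wav", function()
  renoise.app():show_status("Selection rendered.")
end)

if not ok then
  renoise.app():show_error(err)
end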

The Freeze Tracks tool is the only one I can remember that used this function. Using it means you’ll have to “solo out” everything that’s not in the selection, be it tracks or note columns. Computing start_pos and end_pos from the selection is not difficult. Then you have to ask the OS for a temp filename via another call (the Akaizer tool does this, for example), render to that file, and then load the file back into the currently selected (or whichever) sample slot via

  
-- Load sample data from a file. Files can be any audio format Renoise supports.  
-- Possible errors are shown to the user, otherwise success is returned.  
renoise.song().instruments[].samples[].sample_buffer:load_from(filename)  
 -> [boolean - success]  
  
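Putting those two calls together, the core of such a tool could look roughly like the untested sketch below. The temp file handling via os.tmpname and the fallback of inserting a sample slot are my guesses; muting/unmuting the unwanted tracks and columns is left to the caller, and 'options' would be built from the selection as sketched above.

-- Sketch (untested): render with the given options to a temp WAV file and,
-- when rendering succeeds, load that file into the currently selected
-- sample slot.
local function render_into_selected_sample(options)
  local song = renoise.song()
  local temp_file = os.tmpname() .. ".wav"

  local ok, err = song:render(options, temp_file, function()
    -- only called when rendering succeeded
    local sample = song.selected_sample
      or song.selected_instrument:insert_sample_at(1)
    if sample.sample_buffer:load_from(temp_file) then
      renoise.app():show_status("Rendered selection into the current sample slot.")
    end
    os.remove(temp_file)
  end)

  if not ok then
    renoise.app():show_error(err)
  end
end

A finished tool would then hook this up to a context menu entry in the pattern editor via renoise.tool():add_menu_entry, but that part is the easy bit.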

The question is, is this much of a time saver after all? In the end I think you’ll be coding, and especially debugging, a lot, and even years of using Renoise might not make up for the time you lost doing it. I would focus on a tool that simply takes all the instruments called “Rendered Sample xx”, combines their samples into one new instrument, and then creates somewhat logical keymaps. That’s probably the best compromise right now between saving time with tools and using the fast built-in render-selection functions.
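Something like this rough, untested sketch is what I have in mind. The “Rendered Sample” name prefix, the C-3 base note and the sample_mapping calls (Renoise 3.x API) are guesses on my part, and deleting the source instruments afterwards is left out:

-- collect the source samples first, so inserting the kit doesn't shift indices
local song = renoise.song()
local sources = {}
for i = 1, #song.instruments do
  local instrument = song.instruments[i]
  if instrument.name:find("Rendered Sample", 1, true) == 1 then
    for s = 1, #instrument.samples do
      local sample = instrument.samples[s]
      if sample.sample_buffer.has_sample_data then
        table.insert(sources, sample)
      end
    end
  end
end

-- combine everything into one new "kit" instrument, one key per sample
if #sources > 0 then
  local kit = song:insert_instrument_at(#song.instruments + 1)
  kit.name = "Rendered Kit"

  for i, source in ipairs(sources) do
    local target = kit:insert_sample_at(i)
    target:copy_from(source)
    -- "somewhat logical" keymap: one semitone per sample, starting at C-3 (48)
    local note = 48 + (i - 1)
    target.sample_mapping.base_note = note
    target.sample_mapping.note_range = { note, note }
  end
end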