As a result of the discussion in that thread, I'm opening this new one so as not to derail it. If you are curious about the discussion, read around the content of the previous link.
A waveform could be pre-generated in the background, e.g. when you stop playing. It wouldn't be 100% exact, but it would surely help a lot. But then other details, like note blocks instead of note-offs, would need to arrive too, IMO.
What I'm wondering is whether generating the wave in the background requires a lot of CPU processing or is something light. That way it would only have to update the wave every time parameters or notes are entered in the pattern editor/phrase editor (which is something I do not like). It seems a very complicated subject, but I would not rule out that in the future this will be perfectly possible, given the power of existing hardware. The question is what impact all this would have on overall performance.
For me, it would be great if this were possible. Each time you enter or remove anything from the song, you would see the impact directly on that general audio wave. At some point this will be implemented in other audio programs, even if it involves some delay. But do not forget that this can be very complex to implement. Imagine very long songs: any effect, equalization, or filter influencing the whole song would change the whole general wave. It would be a little beast. Therefore, it is necessary to know what real impact constantly redrawing the wave, or parts of it, would have on performance. It must be very well thought out so that it does not require a lot of CPU processing.
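One reason a constantly updated overview doesn't have to be expensive: waveform displays are usually drawn from cached min/max "peak" pairs, and a local edit only invalidates the buckets it touches. This is a minimal sketch of that idea (my own illustration, not anything Renoise actually does; the bucket size is an arbitrary assumption):

```python
# Sketch: cache one (min, max) pair per display bucket, and after a local
# edit recompute only the buckets overlapping the edited sample range.
SAMPLES_PER_BUCKET = 1024  # assumed display resolution

def build_peaks(samples):
    """Full pass over the audio: one (min, max) pair per bucket."""
    peaks = []
    for i in range(0, len(samples), SAMPLES_PER_BUCKET):
        chunk = samples[i:i + SAMPLES_PER_BUCKET]
        peaks.append((min(chunk), max(chunk)))
    return peaks

def update_peaks(peaks, samples, start, end):
    """Incremental pass: recompute only buckets overlapping [start, end)."""
    first = start // SAMPLES_PER_BUCKET
    last = (end - 1) // SAMPLES_PER_BUCKET
    for b in range(first, last + 1):
        lo = b * SAMPLES_PER_BUCKET
        chunk = samples[lo:lo + SAMPLES_PER_BUCKET]
        peaks[b] = (min(chunk), max(chunk))
    return peaks
```

With something like this, editing one pattern would only redraw a handful of buckets instead of the whole song; the expensive part is re-rendering the audio itself, not the drawing.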
The other path would be a wave generated through an update button. There could also be a pattern range selector: "show me patterns 10 to 30", so you do not have to render the whole wave. All this would be useful in post-production. Once your song is practically finished, you check that everything is in place.
It would be very useful for correcting the volume of certain areas compared to the rest of the song.
One way to do all this in steps is to render the song, then load that rendering into a sample to see what you've done. How many minutes fit in a sample? But this process is useless; the interesting thing is being able to do all that before rendering the song.
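To get a rough sense of what "loading a whole song into a sample" means in memory, here is some back-of-the-envelope arithmetic (assuming 44.1 kHz stereo with 32-bit float samples, which is just one common configuration):

```python
# Rough arithmetic: memory per minute of uncompressed audio,
# assuming 44.1 kHz, stereo, 32-bit float samples.
sample_rate = 44_100       # samples per second, per channel
channels = 2
bytes_per_sample = 4       # 32-bit float
bytes_per_minute = sample_rate * channels * bytes_per_sample * 60
print(bytes_per_minute / 1024 / 1024)  # ≈ 20.2 MiB per minute
```

So a five-minute song is on the order of 100 MiB as a raw sample, which is workable on modern hardware but not free.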
Otherwise, you have no choice but to use an additional program, and do the process in steps:
- Your song is in the post-production process, and you need a general view of the song.
- Save the entire song rendered in wav format.
- Run another program that allows you to visually analyze a complete audio wave to the millimeter.
- Determine the areas to correct. You will easily find the exaggerated peaks and the poor valleys. There is nothing better than a general wave to determine if your song has the volume structure you want.
- Go back to Renoise and use the time clock to locate the conflicting zone or zones to be corrected.
- Readjust the overall volume of the song again.
- Render the song again.
- Reload it in the wave analyzer program.
- Check again that everything is in place.
- Repeat the process as many times as necessary until you find the volume structure you want for your song.
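The "find the exaggerated peaks and the poor valleys" step above could be sketched as a small script, for anyone who wants to automate the external-analyzer part of the loop. This is purely illustrative (my own hypothetical helper, assuming a 16-bit PCM WAV; the thresholds and window size are arbitrary examples):

```python
# Hypothetical helper: scan a rendered WAV in 1-second windows and report
# the positions of loud peaks and quiet valleys, so you can jump to those
# times in Renoise via the time clock.
import math
import struct
import wave

def loudness_map(path, window_s=1.0, peak_db=-1.0, valley_db=-30.0):
    """Return (time_s, 'peak'|'valley', dBFS) for windows crossing the thresholds."""
    hits = []
    with wave.open(path, "rb") as w:
        rate, nframes = w.getframerate(), w.getnframes()
        assert w.getsampwidth() == 2, "sketch assumes 16-bit PCM"
        win = int(rate * window_s)
        for start in range(0, nframes, win):
            frames = w.readframes(win)
            samples = struct.unpack(f"<{len(frames) // 2}h", frames)
            peak = max(abs(s) for s in samples) / 32768.0
            db = 20 * math.log10(peak) if peak > 0 else -120.0
            if db >= peak_db:
                hits.append((start / rate, "peak", db))
            elif db <= valley_db:
                hits.append((start / rate, "valley", db))
    return hits
```

Of course, this is exactly the kind of round trip the feature request would eliminate: a general wave inside Renoise would show the same information at a glance.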
If Renoise had something that would accelerate this whole process, it would be magnificent. With the spectrum analyzer or the equalizer you can guide yourself at each point, but they do not give you that global view of the whole song. I'm thinking at all times about the post-production process, the mastering. That's why a "wave update button" would not be crazy: only generate the general wave when necessary, even if that requires a short wait while it loads.
This would also help the composer create albums whose tracks are not out of step with each other due to volume issues. This is a fairly common problem in Renoise.