Is there a way to work out how many samples per beat at different BPMs?
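For anyone who finds this later: a beat lasts 60/BPM seconds, so samples per beat is just samplerate × 60 / BPM. A quick sketch of that arithmetic (the 44100 Hz rate and the BPM values are only examples):

```python
SAMPLE_RATE = 44100.0  # assumed fixed project sample rate

def samples_per_beat(bpm: float) -> float:
    """One beat lasts 60/bpm seconds, so it spans SAMPLE_RATE * 60 / bpm samples."""
    return SAMPLE_RATE * 60.0 / bpm

print(samples_per_beat(120.0))  # 22050.0 samples per beat
print(samples_per_beat(140.0))  # 18900.0 samples per beat
print(samples_per_beat(174.0))  # ~15206.9 samples per beat
```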

I always leave the Renoise samplerate at 44100 no matter what pitch is being played

You might leave it alone, but does Renoise? ->

" Interpolation: This is the quality of re-sampling used when samples are played at pitches other than the original."

https://tutorials.renoise.com/wiki/Sampler#Sample_Properties

I think you’re talking about the clock rate of the audio hardware, but that shouldn’t ever change during a track, which is why you re-sample and interpolate a fixed-rate sample to create different pitches.

The sample rate is constant but the sample-length (which can be measured in samples) is not.

For example, if I start with a single cycle sine wave which is 256 samples in length and loop it so it becomes an oscillator, then increasing the number of times it loops per second increases its frequency, making it higher in pitch.

When I tune a single cycle sine which started at 256 samples in length up to A4 it becomes about 100 samples in length, but because the samplerate is constant (44100 Hz) it loses resolution and accuracy…there are fewer sample points available to plot out the sine…the higher you go, the more inaccurate the wave becomes. Renoise rounds off to the nearest whole sample…this is aliasing I think…somehow there is a way around it with the anti-aliasing button. I have no clue about the details of how that works.
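For reference, the lengths involved here are just the samplerate divided by the frequency (assuming the usual A4 = 440 Hz tuning):

```python
SAMPLE_RATE = 44100.0

def samples_per_cycle(frequency_hz: float) -> float:
    """How many sample points one period of the waveform spans."""
    return SAMPLE_RATE / frequency_hz

def cycle_frequency(length_in_samples: float) -> float:
    """Frequency you get by looping a single cycle of the given length."""
    return SAMPLE_RATE / length_in_samples

print(cycle_frequency(256))        # ~172.27 Hz for the original 256-sample cycle
print(samples_per_cycle(440.0))    # ~100.23 samples per cycle at A4
print(samples_per_cycle(22050.0))  # 2.0 samples per cycle at the Nyquist limit
```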

However, with the bytebeat method of sound design you are writing a mathematical formula to build waves and alter them…I would like to learn, but it's maybe too complicated for me right now. For some reason they cannot achieve what Renoise achieves in speeding up the looping of the wave to reach particular pitches, so they change the samplerate to change pitch, which is crazy.
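For anyone curious what those formulas look like in practice: each output byte is a function of the sample index t, and the result is traditionally played back as raw 8-bit audio at 8000 Hz, so the playback samplerate really is part of the "instrument". A rough sketch using one of the widely circulated classic formulas (the file name and 10-second length are arbitrary):

```python
RATE = 8000          # bytebeat is traditionally rendered/played at 8000 Hz
SECONDS = 10

# Classic one-liner formula: the output byte is a pure function of t.
data = bytes((t * (42 & (t >> 10))) & 0xFF for t in range(RATE * SECONDS))

# Write raw unsigned 8-bit mono; import it as a sample at 8000 Hz to hear it.
with open("bytebeat.raw", "wb") as f:
    f.write(data)
```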

It's just madness. I think I will stick to Renoise and leave this formula shit alone, but yeah, I have some more to learn about bit rate, samplerate, aliasing, interpolation and all those crazy things

A single cycle sine wave at 22050 Hz requires only two samples to be fully replicated at the reconstruction output; this is what Nyquist is all about. So 100 samples for a sine wave is not a loss of accuracy or resolution in any way at all. Renoise only rounds off if you set the interpolation option to ‘None’. With that option chosen, aliasing may come about because the truncation creates high-frequency harmonics which are then reflected back into the audible range, as far as I can tell.
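To illustrate the "reflected back" part: anything above the Nyquist frequency folds (mirrors) back down below it. A quick sketch of that folding, using the harmonics of a naive 6000 Hz square wave as an example (the 6000 Hz figure is arbitrary):

```python
SAMPLE_RATE = 44100.0
NYQUIST = SAMPLE_RATE / 2

def alias_of(freq_hz: float) -> float:
    """Frequency a tone lands on when generated without band-limiting."""
    f = freq_hz % SAMPLE_RATE
    return f if f <= NYQUIST else SAMPLE_RATE - f

# A square wave contains odd harmonics; for a naive 6000 Hz square,
# the higher ones fall above Nyquist and fold back into the audible range.
for k in (1, 3, 5, 7):
    harmonic = 6000.0 * k
    print(f"harmonic {k}: {harmonic:.0f} Hz -> heard at {alias_of(harmonic):.0f} Hz")
# 5th (30000 Hz) folds to 14100 Hz, 7th (42000 Hz) folds to 2100 Hz.
```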


a sample two samples in length can only be a square wave…

think of samples as being like pixels, but for audio…you can't build a sine with two samples.

samples as in ‘samplerate’, not samples as in short recordings.

Nyquist is just based on whatever samplerate you have: half of that is the maximum frequency that can be achieved.

So for ‘CD quality’ 44100 Hz, you can play back things at up to 22050 Hz accurately…

Human hearing goes up to roughly 20000 Hz; that's why they chose the 44100 Hz sample rate for ‘CD quality’…a bit more than two times the top of the hearing range, to be safe.

Divide 44100 by 22050, what do you get?

you can't draw a sine with 2 pixels; similarly, you can't make a sine with two samples.

yes, it's the highest possible frequency at 44100 Hz, a square wave.

if you want to try it, open the Renoise sample editor, right click - create sample - number of samples: 2, then try drawing any wave; you will see only a square is possible.

You’re incorrect.

It’s true that the digital waveform only contains two samples, and that in Renoise you may see it represented as a “square” wave simply due to the lack of other sample data, but when it comes time to reconstruct that waveform (either via resampling within Renoise itself, or when playing the raw audio back through a digital-to-analog converter) the waveform will be reconstructed back into a sine wave.

This is precisely why the Nyquist–Shannon sampling theorem allows us to capture frequencies up to 22,050 Hz at a sampling rate of 44,100 Hz, even at just 2 samples per cycle, while still being able to accurately reconstruct the waveform back into its “clean” analogue signal.
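If you want to see this outside Renoise, here is a rough numpy sketch of the ideal (Whittaker–Shannon) reconstruction: the stored data is the alternating +1/-1 "square", but summing shifted sinc functions gives back a pure 22050 Hz cosine. (Buffer length, evaluation window and oversampling factor are arbitrary choices.)

```python
import numpy as np

FS = 44100                          # sample rate
N = 2000                            # length of the stored buffer
x = np.cos(np.pi * np.arange(N))    # +1, -1, +1, ... : 2 samples per cycle

# Evaluate the band-limited reconstruction x(t) = sum_n x[n] * sinc(t - n)
# on a 16x finer grid, a few cycles around the middle of the buffer so the
# truncation error of the finite sinc sum stays tiny.
oversample = 16
t = np.arange(N // 2 - 4, N // 2 + 4, 1.0 / oversample)
n = np.arange(N)
reconstructed = (x[None, :] * np.sinc(t[:, None] - n[None, :])).sum(axis=1)

# The reconstruction is a smooth cosine at FS/2 = 22050 Hz, not a square.
ideal = np.cos(np.pi * t)
print(np.max(np.abs(reconstructed - ideal)))  # on the order of 1e-3: finite-sum error
```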

Very basic example: dblue-sine-example.xrns (4.1 KB)


OK, I looked at the example.
It is represented onscreen as a square if the sample is only 2 samples in length.
As you say, this is because of the lack of other sample data.

I see you transposed -84 semitones and turned the quick fade button on.
I played your instrument at C2 in a new track and rendered selection as sample.
You are right, the rendered sample displayed a perfect sine.

I created my own sample, 2 samples in length, transposed -84 semitones, put the quick fade button on, rendered a C2 to sample…it was the same result, a perfect sine.

So, why is a square a sine when it is transposed down?
I mean, why can't I create a square wave two samples in length?
If I create a square, transpose down, it becomes a sine.

How and why does that happen?

I see that if I draw a square into a sample created at 4 samples in length it becomes a sine when transposed down as well…how long does a sample have to be before it stays a square as it is represented onscreen when transposed down?

If I create a sample 16 samples long and go through the same process, the result of the rendering looks like a rounded square…somewhere between a square and a sine.

I went as high as 1024 samples in length. When rendered, even played back at ‘A’ (with tuning -5 st, finetune +30), the result was like a distorted square with some ‘sine-bumps’ where I would have expected to see sharp corners.

I think it’s something like this: the example using two sample points is not a square wave because it does not have any harmonics higher than the fundamental (22050 Hz), and therefore it can only be reconstructed as a sine wave (a single frequency).

A square wave is not characterised by its geometrical shape, the shape we see, but by its frequency content. I’d speculate that the longer example you made allows higher harmonic content to become available for the reconstruction at the output, as it is in fact a sample with a lower fundamental frequency (2756.25 Hz), and therefore leaves much more room for harmonics above the fundamental which fall under the Nyquist frequency. The odd shape is probably caused by the quick fade, but I can’t be certain. Perhaps @dblue can answer that.
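That speculation about fundamental frequency vs. available harmonics can be checked with a bit of arithmetic. A rough sketch counting how many of a square's odd harmonics fit under Nyquist for the cycle lengths tried above (2, 4, 16 and 1024 samples), assuming an ideal square that contains only odd harmonics:

```python
SAMPLE_RATE = 44100.0
NYQUIST = SAMPLE_RATE / 2

def square_content(cycle_length: int):
    """Fundamental of a single-cycle loop and the odd harmonics (a square's
    only components) that fall at or below the Nyquist frequency."""
    fundamental = SAMPLE_RATE / cycle_length
    odd = [k for k in range(1, cycle_length, 2) if k * fundamental <= NYQUIST]
    return fundamental, odd

for length in (2, 4, 16, 1024):
    f0, odd = square_content(length)
    print(f"{length:5d} samples: fundamental {f0:9.2f} Hz, "
          f"odd harmonics below Nyquist: {len(odd)}")
# 2 and 4 samples keep only the fundamental (so they reconstruct as sines);
# 16 samples keeps 4 odd harmonics (a 'rounded square'); 1024 keeps 256.
```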

OK, I think I get it now: the odd harmonics of the square would exceed the maximum frequency possible at that sample rate…those frequencies get cut, so it becomes the fundamental only - a sine.

Because of the resampling that occurs when playing the sample at non-native frequencies, i.e. pitching down.

Change the sample Interpolation mode to “None” if you wish to hear it played raw, without any resampling, and then it will sound like a naive square wave.
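If you want to poke at a "naive" square outside the tracker, it is just the sign of a sine, with no band-limiting at all, so everything above Nyquist folds back as aliasing. A minimal sketch (the 6000 Hz pitch and one-second length are arbitrary):

```python
import numpy as np

SAMPLE_RATE = 44100

def naive_square(freq_hz: float, seconds: float = 1.0) -> np.ndarray:
    """Hard-switching square with no band-limiting: its harmonics above
    the Nyquist frequency alias back into the audible range."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    return np.sign(np.sin(2.0 * np.pi * freq_hz * t))

tone = naive_square(6000.0)  # a high pitch makes the aliasing very obvious
```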

@dblue Are there then inter-sample peaks with an interpolated 22.05 kHz square wave? Or does Renoise clip those if they exceed 0 dB? Lately I really wonder about these; even if I set the limiter to “true limiting” I often get (inter-sample) peaks above 0 dB (not Renoise related).

thanks…I'll have to go and read about interpolation.

there is an explanation of the naive square wave at the link below, if anyone comes across this later and doesn't yet know what that is

https://tomroelandts.com/articles/naive-square-wave

something about harmonics of the different waves here too…

http://synthesizeracademy.com/harmonics/