Ok. So lately I’ve been learning a bunch about the fundamental frequency of notes and the harmonic series.
From what I understand, the fundamental frequency is the lowest frequency of a note, and the rest of the sound is made up of its overtones.
For example, an A at 110 Hz may have various overtones, which will always be multiples of the original frequency, and which will differ depending on what instrument the note is coming from.
What my question is
Is
What kind of overtones do various waveforms like a sine and square wave have?
Or
Can somebody point me toward a resource where I can find lists of different instruments’ harmonic series?
Or!
Can somebody teach me how to use a waveform viewer to pick apart and figure this stuff out?
OR!
Better yet, just tell me any information about this topic that you think I might be curious to know.
You’ll find various math calculators for harmonic series.
Consider that this is a mathematical point of view about how sounds are produced.
However, if you want to “view” / “hear” all this in Renoise, it won’t be that easy.
Creating a native mono synth inside Renoise that uses those kinds of harmonic tables has been done with the help of a Renoise user known as dBlue (he also coded the Glitch VST; check it out if you’re curious: http://illformed.org/plugins/glitch/ ).
A sawtooth wave has all integer harmonics: 440 Hz fundamental, 880 Hz (440 x 2), 1320 Hz (440 x 3), 1760 Hz (440 x 4), 2200 Hz (440 x 5), and so on to infinity with ever-decreasing amplitude (roughly 1/n for the nth harmonic).
The triangle wave and square wave both have only odd-numbered integer harmonics: 440 Hz fundamental, 1320 Hz (440 x 3), 2200 Hz (440 x 5), 3080 Hz (440 x 7), and so on to infinity with ever-decreasing amplitude. The difference is how fast the amplitudes fall off: roughly 1/n for the square but 1/n² for the triangle, which is why the triangle sounds so much mellower.
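If you want to see those series side by side, here’s a minimal Python sketch that just prints the first few harmonics and their ideal Fourier-series amplitudes. The 1/n and 1/n² weights are the textbook values for these ideal waveforms, not anything measured from a real instrument:

```python
# Print the first few harmonics of ideal saw, square, and triangle waves.
# Amplitudes follow the standard Fourier series: saw ~ 1/n for every
# integer n, square ~ 1/n for odd n only, triangle ~ 1/n^2 for odd n only.
fundamental = 440.0  # Hz

print(f"{'n':>3} {'freq (Hz)':>10} {'saw':>7} {'square':>7} {'triangle':>9}")
for n in range(1, 10):
    freq = fundamental * n
    saw = 1.0 / n
    square = 1.0 / n if n % 2 == 1 else 0.0
    triangle = 1.0 / n**2 if n % 2 == 1 else 0.0
    print(f"{n:>3} {freq:>10.0f} {saw:>7.3f} {square:>7.3f} {triangle:>9.3f}")
```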
Some other stuff to keep in mind when working digitally…
When dealing with discrete signals such as digital sampled audio, we obviously cannot have infinite harmonics because we only have a limited sampling rate, so this is where band limiting comes into play.
An ideal digital signal should have been band limited (perfectly lowpass filtered, in other words) at Nyquist frequency (sampling rate / 2) before sampling it, so that we can capture as much useful data as possible while avoiding aliasing. So if we have a sawtooth wave or a square wave which has been correctly sampled from an analogue source, then it should not contain any harmonics higher than the Nyquist frequency.
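To make that concrete, here’s a small additive-synthesis sketch with NumPy: it builds a sawtooth by summing 1/n-weighted harmonics, but stops below the Nyquist frequency so nothing can alias. The function name and output scaling are just illustrative choices:

```python
import numpy as np

def bandlimited_saw(f0, sr, seconds):
    """Additive sawtooth: sum 1/n-weighted sine harmonics, but only those
    below the Nyquist frequency (sr / 2), so nothing can alias."""
    t = np.arange(int(sr * seconds)) / sr
    out = np.zeros_like(t)
    n_max = int((sr / 2) // f0)  # highest harmonic that fits under Nyquist
    for n in range(1, n_max + 1):
        out += np.sin(2 * np.pi * f0 * n * t) / n
    return out * (2 / np.pi)  # scale roughly into [-1, 1]

# 440 Hz at 44.1 kHz: exactly 50 harmonics fit under the 22050 Hz Nyquist limit
wave = bandlimited_saw(440.0, 44100, 1.0)
```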
I’m not 100% sure on the terminology everybody prefers to use, since I’m definitely not an expert on this topic, but everything I’ve read generally says that it only has its fundamental frequency and contains no harmonics. But now that I think about it, I think I have seen some places refer to it differently. I guess I could have said: no additional harmonics.
Although “fundamental” is the more commonly used term, it is also the first harmonic. “Real” harmonics start at the second harmonic, i.e. two times the fundamental frequency.
Renoise is software that produces rapidly varying electric signals in your soundcard
a soundcard has a digital-to-analog conversion (DAC) module
converting, for example, 16-bit values from -32768 to +32767 (2 bytes) to an electric signal (negative and positive) (there’s a small sketch of this after this list)
this conversion has to be done at a high enough rate, otherwise your speaker sounds like a toy (lo-fi/aliasing side effects)
a variable electric signal (with positive and negative current) goes out of your soundcard, into your speaker
in your speaker, it produces the movement of an electromagnet (modulated by this electric current)
a plastic membrane is attached to the electromagnet
the faster the current changes direction, the faster the membrane vibrates
this vibration of the air is the origin of the sounds you hear from your speakers
the “frequency” of this vibration is a parameter (given in Hz)
a higher amplitude of membrane movement is produced by a higher current and higher digital values
there is a relationship between the “tones” you hear and the “frequency” of the current’s oscillations
there is a relationship between a loud sound from the speaker and a high voltage through the coil
the lower the frequency, the lower the tone sounds
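As a rough illustration of those digital values, here’s a small Python sketch (standard library only) that writes one second of a 440 Hz sine as signed 16-bit samples into a WAV file; the file name and parameters are arbitrary example choices:

```python
import math, struct, wave

SR = 44100       # samples per second
FREQ = 440.0     # Hz
SECONDS = 1.0

frames = bytearray()
for i in range(int(SR * SECONDS)):
    sample = math.sin(2 * math.pi * FREQ * i / SR)      # -1.0 .. +1.0
    frames += struct.pack("<h", int(sample * 32767))    # signed 16-bit

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 2 bytes = 16-bit, values -32768 .. +32767
    w.setframerate(SR)
    w.writeframes(bytes(frames))
```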
composing a melody with a soundtracker is based on that principle :
small samples are played faster, which makes higher tones, or played slower, which makes lower tones
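A quick sketch of that principle, assuming standard equal temperament (playback rate doubles per octave, i.e. scales by 2^(n/12) per semitone):

```python
# Equal-tempered pitch: playing a sample n semitones higher means
# playing it 2**(n/12) times faster (one octave up = 2x speed).
base_freq = 110.0  # an A, as in the original question

for semitones in (0, 12, 24):
    rate = 2 ** (semitones / 12)
    print(f"+{semitones:>2} semitones -> {rate:.3f}x speed -> {base_freq * rate:.1f} Hz")
```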
you can easily check this in Renoise: load a ChipSine from the instruments library
then play with it and watch what happens, first in the Scope view:
the Scope view shows a smooth periodic wave that becomes more and more compact as you play higher tones
the Spectrum view of a sine wave shows just a single peak moving from left to right, from lower values to higher values
put your mouse directly on the Spectrum: you’ll find the exact value of the “basic” (fundamental) frequency in Hz
lower frequencies will make a bass sound, higher frequencies will make a lead sound
warning: a spectrum viewer is not a scope
a scope shows in realtime the electric signal that goes out, or equivalently the membrane’s vibration
while a spectrum is the result of an analysis of a whole sound
on scopes, sine waves look like a sine wave and have a typically regular, sweet beep sound
but on a spectrum viewer you won’t see anything like a sine wave; you’ll just see which frequencies are involved
the analysis of a sine wave generally shows one and only one frequency; this is simple
while the analysis of other types of sound waves on the spectrum reveals other, smaller frequencies
the ChipSaw instrument (saw waves), for example, shows a first frequency followed by other ones
the first one is called the fundamental
the multiple following frequencies on the spectrum generally have decreasing amplitude
you call the other ones “harmonics”; those harmonics are very important in defining a sound
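For the original question about picking this apart yourself: here’s a minimal NumPy sketch that builds a saw-like wave, takes an FFT, and prints the strongest peaks. With a one-second window the FFT bins land exactly on whole Hz, so the fundamental and its multiples show up cleanly; the 20-harmonic test signal is just an example:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                      # exactly one second of audio
f0 = 440.0
saw = sum(np.sin(2 * np.pi * f0 * n * t) / n for n in range(1, 20))

spectrum = np.abs(np.fft.rfft(saw))
freqs = np.fft.rfftfreq(len(saw), d=1 / sr)

# Pick out the strongest bins: they land on f0 and its integer multiples.
peaks = np.argsort(spectrum)[-5:]           # indices of the 5 tallest peaks
for i in sorted(peaks):
    print(f"{freqs[i]:8.1f} Hz  amplitude {spectrum[i]:.1f}")
```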
each instrument has its own harmonic behaviour, its own signature
there are lots of algorithms and models that try to imitate or synthesize each instrument
the evolution of the harmonics (their number, periodic values, amplitude, and location on the spectrum, in realtime) globally defines a sound
some synths are able to nearly sound like real instruments
however, real instruments also produce other, unpredictable sounds that synths recreate only with difficulty
in a guitar sound for example, when the finger slaps or slides it makes typical “clicks” and “zaps”
there are also sound distortions, colours, and resonances, due to different parameters such as the way instruments are built and how they react
all that must also be added to the definition of the sound, but that’s not so easy
the easiest and simplest sounds are sine waves; everybody says it and it’s true
the most difficult sounds to synthesize are probably human vocal timbres and everything the throat can do
(I mean, good luck to anyone who wants to synthesize Pavarotti, for example)
the only way to deal with human voices is to forget Vocaloid VSTis and all those crappy virtual instruments
because your soundcard also has an ADC, an analog-to-digital conversion module
that can convert the electric signal from a line-in or a microphone into 16-bit values, for example -32768 to +32767
today’s soundcards can basically capture stereo sound at a minimum rate of 44,100 Hz
theoretically sampling “works”, but in practice it is highly hardware-dependent and produces some side effects
one of those side effects is aliasing, which could be called a loss of quality and definition
dBlue suggested that a pre-treatment of sounds (through analog or good mechanical filters, maybe?) could be done “before” sampling
note :
when sampling a human voice through a microphone,
let’s say you can’t pre-filter the recorded sound;
then you’ll probably need to post-filter it,
for example with a small bandstop filter
because of a thin “noise” or resonance,
or, most of the time,
with a lowpass filter to work out the “air” problem…
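As an illustration only (not a recipe), here’s what such a filter rack might look like with SciPy. The corner frequencies and filter orders below are made-up example values, and the random noise just stands in for a recorded take:

```python
import numpy as np
from scipy.signal import butter, filtfilt

sr = 44100
x = np.random.randn(sr)  # stand-in for one second of a recorded vocal take

# narrow bandstop around a hypothetical resonant peak near 5 kHz
b, a = butter(2, [4800, 5200], btype="bandstop", fs=sr)
x = filtfilt(b, a, x)

# lowpass to tame the "air" above a hypothetical 18 kHz
b, a = butter(4, 18000, btype="lowpass", fs=sr)
x = filtfilt(b, a, x)
```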
I didn’t suggest that you should pre-treat or pre-filter anything yourself, and in fact there wouldn’t be much point in doing so anyway, since the sampling device itself will already perform the band limiting that is necessary. If the sampling device didn’t perform any band limiting, then the captured signal would potentially contain a lot of aliasing and would therefore be completely useless.
I was just trying to highlight this important aspect of sampling theory, so that yourlocalloser and others would be aware of how it affects signals they might analyse themselves. For example, I’ve seen quite a few people mistakenly think that a sampling rate of 48kHz must be able to capture any frequency from 0Hz to 48kHz, when in actual fact the highest possible frequency you can accurately capture (without aliasing) under absolutely perfect conditions is only 24kHz. So it’s just important to keep this kinda stuff in mind when you start looking into various forms of signal analysis, especially if you’re trying to measure harmonics and things like that.
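A tiny sketch of that point, if you want to see it numerically: sample a tone above Nyquist and the FFT peak shows up somewhere else entirely. The 30 kHz / 48 kHz numbers are just an example:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                 # one second sampled at 48 kHz
tone = np.sin(2 * np.pi * 30000 * t)   # a 30 kHz tone, above Nyquist (24 kHz)

spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), d=1 / fs)
print(f"peak lands at {freqs[np.argmax(spectrum)]:.0f} Hz")  # 18000 Hz: the alias
```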
If it’s aliasing we’re talking about, then post-filtering would be useless. Aliasing is just one of those things you cannot fix afterwards, except in some very rare and specialised cases. So the signal must be band limited before it is sampled, which as I already mentioned will be done by the sampling device/hardware anyway, so you don’t have to worry too much about this aspect of it really.
(I know I’ve taken this out of context here as you’ve stated from 0Hz - blah.)
But Nyquist’s theory states that the bandwidth (not the highest frequency) that can be sampled is less than half of the sample rate. If you wanted to, you could sample a 16 GHz signal as long as you didn’t want a bandwidth of more than 24 kHz (well, actually a little less), e.g. 15,999,988 kHz to 16,000,012 kHz, which may be useful if you wanted to record the drift of a 16 GHz clock and not use masses and masses of data by recording the full bandwidth for no reason. Aliasing is actually a very useful artefact when used correctly!
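For anyone curious, the folding arithmetic behind that trick can be sketched in a few lines; the helper function and the example frequencies around 16 GHz are purely illustrative:

```python
def alias(f, fs):
    """Frequency at which a real tone f appears after sampling at fs:
    fold f down into the 0 .. fs/2 baseband."""
    f = f % fs
    return min(f, fs - f)

fs = 48_000
for f in (15_999_988_000, 16_000_000_000, 16_000_012_000):  # Hz, around 16 GHz
    print(f"{f} Hz -> {alias(f, fs)} Hz")
```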
But I admit it has almost nothing to do with music, you yourself likely know it, and I’m probably just confusing people now.
Thanks for those clarifications, dBlue; I had misunderstood your previous words about the sampling rate / 2. I understand now that today’s sound capture devices perform the necessary “band limiting” to avoid aliasing, and that it’s something automatic.
To be honest, I believed that a kind of pre-filtering could be necessary because of something I experienced in the past, while taking my very first steps with digital music and samplers; I never knew if it was aliasing or not. When I was very young I had an old Casio SK-1, one of the first toy samplers for people on a low budget ( http://www.casiosk1.com/sk1.cfm ). I’ve kept it; it’s still in my room. I don’t play with it anymore, but I like it because it’s so lo-fi.
9.38 kHz sampling frequency
8 bit
1.4 seconds of sampling
The SK-1 has a microphone but also a line-in. When I was younger I naively believed that the bizarre quality of the sampled sounds came from the integrated microphone, but it came from the sampling frequency and the 8-bit resolution. A CD-quality input into the line-in jack sounds like a toy after it’s been sampled by the SK-1, especially when playing lower tones. I’ve successfully re-created that kind of poor sound with the LofiMat2 DSP in 8-bit mode, though let’s say the emulation is better when I don’t click the “smooth” parameter.
It’s because of this old SK-1, which probably didn’t sample sounds as well as today’s samplers, that I believed a kind of pre-filtering might eventually be necessary. Today, because the technology is far better, I’m not really bothered by that toy effect anymore. I must say that my ears and associated neurons are probably fucked up by years of loud playback, and I don’t really hear a big quality difference between playback at 22,000 Hz and at 44,100 Hz… (shame)… The spectrum viewer I often use to check my recordings stops working above 22,000 Hz anyway (btw, the internal Renoise spectrum focuses its analysis between 20 Hz and 20,000 Hz).
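For what it’s worth, here’s a rough NumPy sketch of that kind of SK-1 flavour: sample-and-hold down to roughly 9.38 kHz with no smoothing, then quantise to 8 bits. The function name and test tone are made up, and this is only a crude approximation of what LofiMat2 or the real hardware does:

```python
import numpy as np

def sk1ify(x, sr, target_sr=9380, bits=8):
    """Rough SK-1 flavour: hold each sample for sr/target_sr frames
    (no smoothing, like LofiMat with 'smooth' off), then quantise to 8 bits."""
    step = int(round(sr / target_sr))
    held = np.repeat(x[::step], step)[:len(x)]   # sample-and-hold downsample
    levels = 2 ** (bits - 1)
    return np.round(held * levels) / levels      # coarse 8-bit quantisation

sr = 44100
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 220.0 * t)
crushed = sk1ify(clean, sr)
```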
Concerning post-filtering, I had in mind my own experiments with my crappy microphone + hardware, which together produce a brilliant peak of 1 dB at 5,000 Hz and a bit of “air” above 18,000 Hz. Most of the old microphones I’ve used so far have a frequency response that falls off after 13,500 Hz, and I’ve noticed a predictable lack of quality and fidelity between 16,000 and 20,000 Hz. It could probably be solved with a better microphone, but in the meantime I have to fix all this with a rack of filters (bandstop, lowpass, eventually a gate…).
I don’t want to derail this thread too far… but since we’re already kinda talking about sampling and you’ve specifically mentioned the SK-1, I thought I would share this video of an upcoming effect from Plogue (which will emulate the SK-1, among other things):
Some great posts in this thread, I’ve learned quite a few things.
Overtones, harmonics and upper frequencies are where all the texture of music and sound is. Think of the fundamental notes as the ‘soul’ and the texture as the ‘conscious character’. Modulating, morphing and adjusting the texture of sounds in real time, slowly or quickly, is where the excitement and joy of sonic appreciation is focused. Well-organised, dynamic activity in the texture of sound makes for excellent listening. This is why real acoustic instruments often sound much more satisfying than basic synthesised sounds: the texture is changing in an organic and complex way, and you have to work hard with synthesisers to achieve that same sonic feeling and complexity. This can be done through modulation and various types of effects.
Of course digital converters don’t capture it all. The higher the frequencies the more truncated the shapes are. Converters are generally getting better all the time at doing this job, the higher the sample rate and bit depth the better. But there are some pretty crummy converters out there, and even the best ones still only go part of the way of capturing the full analogue feel. You work the best you can with what you’ve got.
What matters most in music with human beings, though, is the soul. So getting those fundamentals right and giving them priority will make the deepest impression on people. When we walk away humming and inspired from music, we are resonating with the fundamentals. Texture is an afterthought. Texture is academic. Melody is primary. Melody, harmony, and their organisation into rhythm and dynamics are soul.