How Come Software Samplers Use Multi-Layer Samples For Velocity Layers?

Am I missing something? It just seems to me that a software sampler could take a single sample and then use an algorithm to decide things like velocity, and maybe even offer all kinds of options for the type of attack: “hammer strike, finger pluck, or mallet whack.” You know…

Do they already do this?

If a Renoise instrument has 1 sample of a snare, does Renoise calculate volume and velocity, assuming you use the pattern editor and input the values? Then what would be the point of creating multi-sample instruments?

That said… I definitely notice a huge quality difference with instruments that are built from layers of multi-samples. How come? Doesn’t it make sense that, at this point, a software sampler would be programmed to “create the proper velocity based on an algorithm”?

Just trying to expand the mind…

Cheers guys

There are several reasons to use multi-sampled instruments.

Here is the main one:
If you use a one-shot sample of a piano note struck at C, Renoise will only change the volume based on velocity. This is unrealistic, because if you played a real piano note at a lower velocity, many more things would change in the tonal quality of the sound than just its volume.
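To make that limitation concrete, here is a minimal Python sketch (not Renoise’s actual code, just an illustration) of what a single-sample instrument can do with velocity: scale the loudness of one recording, nothing else.

```python
# A minimal sketch of single-sample velocity handling (illustrative only).
import numpy as np

def play_single_sample(sample: np.ndarray, velocity: int) -> np.ndarray:
    """Scale one recorded waveform by MIDI velocity (0-127).

    Only the loudness changes; the timbre of the recording is identical
    at every velocity, which is exactly the limitation described above.
    """
    gain = velocity / 127.0          # simple linear velocity-to-gain curve
    return sample * gain
```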

That’s why you’d use a multi-sampled instrument. Recording the piano note at several stages, from the lowest velocity to the highest, will do a decent job of capturing the tonal inflections of the instrument.
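Here is a hedged sketch of what that looks like inside a sampler: several recordings of the same note, each assigned a velocity range, and the incoming velocity picks the one to play. The layer boundaries and placeholder waveforms are illustrative assumptions.

```python
# A sketch of velocity-layer selection (layer ranges are made up for the example).
import numpy as np

# (velocity_ceiling, waveform) pairs, ordered from soft to hard strike
velocity_layers = [
    (40,  np.zeros(44100)),   # soft strike recording (placeholder data)
    (90,  np.zeros(44100)),   # medium strike recording (placeholder data)
    (127, np.zeros(44100)),   # hard strike recording (placeholder data)
]

def pick_layer(velocity: int) -> np.ndarray:
    """Return the recording whose velocity range covers `velocity`."""
    for ceiling, waveform in velocity_layers:
        if velocity <= ceiling:
            return waveform
    return velocity_layers[-1][1]   # clamp to the hardest layer
```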

It’s not so important for electronic sounds such as analogue-type synth tones or percussion hits, but to accurately emulate acoustic instruments (one of the earliest hardware samplers was literally called the Emulator, btw!) it is necessary to capture their dynamic behaviour when struck, plucked, etc.

A piano, guitar, snare drum, etc. are all made of different woods and metals which respond with different sonic characteristics and overtones when played softly or forcefully. Their physical construction and the interaction of their components also make a large difference to the sound. A snare drum, for example, is actually at least 4 sounds: the drum head itself being struck by the stick, the short resonance of the head afterwards, the drum shell conducting and resonating the sound outward, and the rattle of the actual ‘snare’ wires against the bottom head.

Similarly, a piano sound involves the click of the key going down, the sound of the hammer cocking back and then percussively striking the strings inside, the long resonance of the strings, and all the resonant properties of the piano’s frame (normally cast iron for tensile strength), soundboard and body (normally wood, to project and colour the sound).

Therefore it is impossible to recreate such nuances and subtleties with a single sample, even if it is multi-sampled per key / note, as the dynamic sound of most ‘real-world’ instruments is a great deal more complex than simply getting louder or quieter with velocity. Hence most modern sample libraries use either dense velocity layering, physical behaviour modelling, or (most often) a combination of the two to recreate these sorts of sounds. For example, the latest Roland V-Drums kits use 127 samples in the velocity layers for some of the hi-hat patches, as the hi-hat is one of the most frequently struck and dynamically responsive parts of an acoustic drum kit.
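One common refinement when a library has many velocity layers is crossfading between the two nearest layers instead of hard-switching at a boundary, so there is no audible step between recordings. The sketch below assumes a simple list of equal-length layer waveforms; it is an illustration of the general technique, not any particular library’s engine.

```python
# A sketch of velocity crossfading between adjacent layers (illustrative).
import numpy as np

def crossfade_layers(layers: list[np.ndarray], velocity: int) -> np.ndarray:
    """Blend the two recordings that bracket `velocity` (0-127).

    Assumes all layer waveforms are the same length and ordered soft to hard.
    """
    # map velocity onto a fractional position inside the layer list
    pos = velocity / 127.0 * (len(layers) - 1)
    lower = int(np.floor(pos))
    upper = min(lower + 1, len(layers) - 1)
    frac = pos - lower                      # 0.0 = all lower, 1.0 = all upper
    return (1.0 - frac) * layers[lower] + frac * layers[upper]
```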

A piano has multiple strings that one hammer can strike, and with certain pedals you can shift the hammers slightly to the side so that each one strikes only two strings or one (una corda mode), so some acoustic instruments are more complex than they appear.

Well, electronic instruments can also be hard to reproduce with static samples: most synths have parameters linked to velocity and pitch (for example, the higher the velocity, the more resonance on the filter), so a simple single-layer recording would not suffice.

That is true, but it is also fairly easy to work around by taking a static sample of the synth and filtering it as necessary.
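As a rough sketch of that workaround: one static synth sample, with a velocity-driven low-pass filter faking the “higher velocity = brighter” behaviour of the original synth. The cutoff mapping here is an arbitrary assumption, not any particular synth’s curve.

```python
# A sketch of velocity-dependent filtering of a single static sample (illustrative).
import numpy as np

def one_pole_lowpass(sample: np.ndarray, cutoff_hz: float, sr: int = 44100) -> np.ndarray:
    """Very simple one-pole low-pass filter."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sr)   # standard one-pole coefficient
    out = np.zeros_like(sample)
    acc = 0.0
    for i, x in enumerate(sample):
        acc += alpha * (x - acc)   # exponential smoothing of the signal
        out[i] = acc
    return out

def play_static_synth(sample: np.ndarray, velocity: int) -> np.ndarray:
    """Darker at low velocity, brighter (and louder) at high velocity."""
    cutoff = 300.0 + (velocity / 127.0) * 8000.0   # assumed velocity-to-cutoff mapping
    gain = velocity / 127.0
    return one_pole_lowpass(sample, cutoff) * gain
```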