Samples Sound Darker/Less Vibrant In Renoise?

i know you’re winking, but i have to say that the perceived differences i am inquiring about actually STOP me from making music. it’s a strong psychological negative when i work on a sample in, say, Sound Forge, get it exactly how i want it, bring it into Renoise and … wait… that’s not how i left it. it’s… “dim.” i had this problem with a bass drum/kick/effect mix sample and a pizzicato string instrument. i messed with it many times in many ways without success. i even used EQ in Renoise to try helping the problem. i ended up just moving my whole song into Reason/Record (note, the pizzicato in the Orkester Sound Bank sucks) because i was tired of fiddling with things in Renoise. this situation jammed up my creative process for over a day because the tool wasn’t doing what it seemed to me it ought to do. i’m not abandoning Renoise, but i’m less enthusiastic about working in it now. i know mixing and mastering is a necessary evil with audio production, but the problem i’m experiencing happens notably before even that stage is relevant. so… it’s a creative block.

thanks, everyone, for the really good discussion. i’m so glad this didn’t turn into a flamewar or pissing contest. i’m very appreciative of everyone’s efforts at looking at this topic seriously and especially thankful for dblue’s data.

dblue said: “As with the other tests, there are some tiny differences that you can observe with a high resolution frequency analysis and things like that, but they’re so insignificant that it’s not really worth considering them.”

here’s where it gets subjective. everyone’s hearing is slightly different. i’m particularly sensitive to high frequencies (all extreme frequencies, really, high or low). dunno if it’s autism neurology or what, but i hear things many people don’t hear that SHOULD be outside of human hearing range. as i age, i expect this will decrease, but so far it hasn’t (i’m almost 35). i was a happy user of MiniDisc for ages, and after a long time of listening to just my minidisc copies, i listened to my CD originals and i noted a considerable difference. the idea of the compression is psychoacoustic; it’s supposed to eliminate frequencies human hearing tends not to pay attention to. apparently my hearing system DOES pay attention to them, though :wink:

sorry for my big pile of replies back to back!! i was away

Before we get any further into this, can I just ask if you listened to the example files I provided in my earlier post? I’ll repost it here for convenience:

Download: sawtooth_110hz_test.zip

Files:

  • sawtooth_110hz_original.wav - Original sound generated with Sound Forge.
  • sawtooth_110hz_voxengo.wav - Captured live from Renoise using Voxengo Recorder.
  • sawtooth_110hz_renoise_cubic.wav - Rendered from Renoise using cubic interpolation.
  • sawtooth_110hz_renoise_sinc.wav - Rendered from Renoise using sinc interpolation.

Please listen to each of these files in Sound Forge (or your preferred listening environment) and tell me if you notice any difference between them.

If you can ignore the psychological influence of knowing which one is which (by their filename), and you can honestly hear differences between them, then I will gladly provide more blind listening tests to see if we can really get to the bottom of this. If what you are hearing is so clear and obvious to you, then it should be no problem to identify a ‘good’ vs ‘bad’ sound in a blind listening test. Do you agree?

reading through all of this, without listening to the samples (edit: didn’t listen because i know i’ve already fucked up my hearing too much to hear any difference, so my input on that would be useless), i’ve got two points i think are relevant to the discussion.

  1. i believe it would be possible for dysamoria to hear subtle differences in certain frequencies that most people do not hear, simply because his autism (as he states himself) seems to have a focus on that sort of thing. in that sense it might be an endless debate, because the basis for this will be extremely subjective, in the sense that most people do not hear the differences but dysamoria does. he will be (almost) on his own in this, due to the way his brain is wired, so to speak.
  2. if there is indeed a difference in the way Renoise reproduces the samples, or maybe in the way Kontakt or Sound Forge produce the samples, for whatever reason that might be, it seems to me this is not so different from (as mentioned before) the sound-quality of for example the MPC. in other words: analog gear is always hailed for having a certain ‘sound’… isn’t it ‘fair’ for software to have its own sound as well? or should it reproduce exactly as is?

it’s an interesting discussion in any case, and i really enjoy reading about stuff like dysamoria being autistic and hearing stuff differently. this interests me to no end.

In the modern world where everything is designed to systematically rob you of your last penny, it immediately rouses suspicion when something is simultaneously excellent at what it does and ludicrously cheap compared to other products of its kind. I freely admit that I am no better than the next man and often suffer delusions of relative ‘quality’ inflicted by a ruthlessly capitalistic world.

I act on this by episodically sitting in front of Logic and a suite of NI plug-ins (which I have paid top dollar for fully licensed versions of) trying to thrash out some ideas for hour upon frustrating hour before I inevitably, constantly return to Renoise and get the job done in five minutes and sounding the absolute bollocks to boot!

Let’s be honest, I think a lot of other users associate cost with quality which is clearly not proportional where Renoise is concerned. I found an interesting quote on the Reaper website (from one of the UK’s leading music technology magazines) which I think is equally applicable to Renoise:

PS. Hope this does not compromise any copyright shenanigans - just trying to make a point!

Word, yeah dude, every component in a system has subtle variables and affects the sound/output to some degree, and the zeros and ones that make up a host/DAW should be no different to the subtle effects of the wood that makes a guitar or the cartridge in a turntable. TBH, Renoise is the first tracker I have ever used and I don’t even regard it as such; I cut my teeth on an Akai MPC2000 and a Korg Prophecy before going all software. I simply regard it as a brilliant integrated sequencer/sampler/etc.

On the subject of the romanticism often bestowed upon the MPC, remember Renoise only constitutes the equivalent of an O/S on such a hardware sequencer/sampler as the MPC or Ensoniq ASR-X. A lot of the ‘sound’ of these things (which operated at 44.1 kHz) is due to the fact that you had to put sounds into them through an external source and then route them out to record them via cables/mixer/soundcard. So before the signal even hits the computer or hard disk recorder (remember those?) it’s been through at least two conversion stages: A/D at the point of sampling the source material, then D/A and A/D again when routing it out of the machine itself.

Even Renoise poster-boy Venetian Snares says he ‘puts everything through a mixer’ to make sure ‘it sounds right’ in the interview D/L’d in this post (cheers again Bantai!):

I think Richard D James / Aphex Twin summed it up pretty well*:

"some people bought the analogue equipment when it was unfashionable and very cheap though.
some of us are over 30 you know!
anyone remember when 303’s were £50? and coke was 16p a tin? crisps 5p

also you have overlooked A LOT of other points, because it’s not all about the overall frequency response of the recording system, it’s how the sound gets there in the first place.
here are some things which you can’t get from a plugin; they are often emulated, but due to their hugely complex nature the emulations are always pretty crass approximations…

the sound of analogue equipment, including EQ, changes very noticeably over even a few hours due to temperature changes within a circuit.
Anyone who has tried to make tracks on a few analogue synths and keep them in tune can tell you this: you leave a track running for a few hours, come back and think I’m sure I didn’t fucking write that, I must be going mental!

this affects all the components in a synth/EQ in an almost infinite amount of tiny ways.
and the amount differs from circuit to circuit depending on the design.

the interaction of different channels and their respective signals within an analogue mixer is very complex: EQ, dynamics…
any fx, analogue or digital, that are plugged into it all have their own special complex characteristics and all interact with each other differently and change depending on their routing.
Nobody that i’ve heard of has even begun to start emulating analogue mixer circuitry in software, just the aesthetics; it will come, but i’m sure it will be a crap half-hearted effort like most pretend synth plugins are.
they should be called PST synths, P for pretend not virtual.

Every piece of outboard gear has its own sound: reverbs, modulation effects, etc.
real room reverb: companies have spent decades trying to emulate this in itself and have not even got close in my opinion; even the best attempts like Quantec and EMT only scratch the surface.

analogue EQ is currently impossible, in theory, to emulate digitally; quite intense maths shit involved in this if you’re really that interested, you could look it up… good luck.

your soundcard will always make things sound like they’ve come from THAT soundcard… they ALL impose their different sound characteristics onto whatever comes out of them; they are far from being totally neutral devices.

all the components of a circuit like resistors and capacitors subtly differ from each other depending on their quality but even the most high quality military spec ones are never EXACTLY the same.

no two analogue synths can ever be built exactly the same; there are tiny human/automated errors in building the circuits, tweaking the trimpots for example, which is usually done manually in a lot of analogue shit.
just compare the sound of 2 808 drum machines next to each other and you will see what I mean; you always thought an 808 was an 808, right?
same goes for 303’s: they all sound subtly different, different voltage scaling of the oscillator is usually quite noticeable.

VST plugins are restricted by a finite number of calculations per second; these factors are WAY beyond their CURRENT capability.

Then there is the question of the physicality of the instrument; this affects the way a human will emotionally interact with it and therefore affects what they will actually do with it! often overlooked by the maths heads, this is probably the biggest factor I think.
for example, the smell of analogue stuff, as well as the look of it, puts you in a certain mental state which is very different from looking at a computer screen.

then there is analogue tape…ah this really could go on forever…

I’m quite drunk, can’t be bothered to type anymore…
so yeah, whatever, you obviously don’t have to have analogue equipment to make good music, in case that’s the impression I’m giving; EVERYTHING has its uses. And not all analogue equipment is expensive, you can still get bargains like old high-end military audio devices, tape machines, fx, etc. just go for the unfashionable stuff.

Richard."

*Original thread has now been 404’d on planet-mu forum

shit, that last post made me wanna post:-)

oh again I’ve been absent… as I came back to refresh myself on this thread I realized I said something stupid… I said I sometimes disable “resampling” in trackers to avoid the effect of muddying the frequencies present… duh me. I meant I disable interpolation! embarrassing …

also, I wish to state right away that I have no pretenses that an expensive tool sounds better than an inexpensive tool :slight_smile: that’s not why I mentioned Kontakt. this also brings me back to a related question I’d like to ask: is it possible that interpolation is really the sum total of the problem I’m experiencing? does the “render plugin to samples” feature sound 100% identical to the plugin if I disable playback interpolation? is a waveform more accurate if interpolation is off? I’m fully acknowledging that I’m not understanding some of the technical details about interpolation. I’ve assumed that it exists solely to deal with low quality sampling artifacts, but maybe I missed the point. is interpolation a necessary feature of resampling itself regardless of sample quality (to play back samples at different pitches/speeds)? I have assumed it is a feature intended only for “cleaning up” low qual samples… maybe my Kontakt rendered to samples would sound 1:1 if the playback engine had interpolation disabled? I know … I should be testing these theories myself … :wink:

i took a listen by drag-&-dropping all of them into Sound Forge (i ignored the file names so i listened without bias). i hear no difference. but i didn’t expect to, as these are pure and basic sounds at a relatively low frequency. i think the changes i’m noticing are impossible to catch in such pure, low frequency samples. maybe a better test would be to try something that’s more rich with frequencies/harmonics/etc. if the frequencies that i find are being affected are not present to begin with, the simple sawtooth wave test won’t be able to prove anything one way or the other. for example, i noticed a lot of change between Kontakt as a plug-in hosted in Renoise (playing the Kontakt 3 pizzicato string around C3 to C4) and the same instrument after being converted to samples (with Renoise’s “render plugin to samples” tool). that was a dramatic difference to me between plugin and samples. it’s what drove me to move my project to a different tool outside of Renoise.

I love the smell of my C64 when it heats up.

The interpolation in this case refers to the method used when resampling, so you’re talking about the same thing anyway.

Even when interpolation is ‘disabled’, in some cases linear interpolation will still be used, because although it sounds pretty crappy, it’s quick and easy to compute and will still sound a lot better than nothing at all.

For interpolation to be completely disabled, a method like sample and hold or nearest-neighbour would be used, but it’s quite rare to see this because it’s pretty useless from a musical point of view when compared to something even as basic as linear interpolation. If interpolation is totally disabled like this, then playing samples at different pitches results in a ‘blocky’ lofi sound with a lot of nasty aliasing, which can be nice for creative purposes, but is quite useless for high quality resampling. For what it’s worth, Renoise currently uses the sample and hold method when interpolation is set to ‘None’, so it’s there if you really want it.

(Small update: Edited the above paragraph a little bit just to clarify some things)
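
To make the difference concrete, here’s a minimal numpy sketch (my own illustration, not Renoise’s actual code) of what ‘None’ (sample and hold) versus linear interpolation looks like when a sample is read back at a new pitch:

```python
import numpy as np

def resample(x, semitones, mode="linear"):
    """Naive playback of sample x at a new pitch.
    mode='none'   -> sample and hold (repeat the last known sample)
    mode='linear' -> linear interpolation between neighbouring samples
    """
    step = 2 ** (semitones / 12.0)           # read-pointer increment per output sample
    pos = np.arange(0, len(x) - 1, step)     # fractional read positions
    idx = pos.astype(int)
    if mode == "none":
        return x[idx]                        # 'blocky' output with aliasing
    frac = pos - idx
    return (1 - frac) * x[idx] + frac * x[idx + 1]

# toy input: a short ramp, played back 3 semitones up
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(resample(x, 3, mode="none"))    # [0, 1, 2, 3]
print(resample(x, 3, mode="linear"))  # approx [0, 1.19, 2.38, 3.57]
```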

This is a tricky question to answer. Many plugin synths or samplers will apply a lot of subtle variations to the sound each time a new note is played. There might be many different types of modulation being applied, in order to vary the tone/timbre and create a more natural sound, etc. In the case of a synth, every note is being synthesised uniquely. In the case of a sampler, there may be several multi-sampled layers or different velocities per note, or even multiple different samples of each note which are selected randomly or in a round robin style, all of which combine to help give the instrument a more natural and authentic sound across the entire range.

If you render the plugin to samples in Renoise, then at most you are only getting one captured sample per note. You may be losing a lot of variations that you’d get from the plugin itself when playing in realtime, and this could very easily change the whole feeling of the instrument. Depending on the options you choose in the ‘render plugin to instrument’ dialog, you might be capturing an even smaller number of samples, like one sample for every 3 notes, for example. In this case, interpolation will have to be used in order to play the notes in between each captured sample. If the instrument only has captured samples for C-4 and F-4, but you want to play a D#4 note, then one of those samples must be resampled/interpolated at some point. It sort of depends on how you configure things in the instrument settings tab, and how you arrange all the multisamples across the note range.
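
To make that last point concrete, here’s a rough sketch with hypothetical note numbers (not Renoise’s actual keyzone logic): with only C-4 and F-4 captured, playing a D#4 means picking the nearest captured sample and resampling it by a semitone-based ratio:

```python
# Hypothetical note numbers, one per semitone: C-4 = 48, D#4 = 51, F-4 = 53.
captured = {48: "C-4 sample", 53: "F-4 sample"}

def pick_sample(note):
    base = min(captured, key=lambda n: abs(n - note))  # nearest captured keyzone
    ratio = 2 ** ((note - base) / 12.0)                 # playback speed factor
    return captured[base], ratio

sample, ratio = pick_sample(51)    # play a D#4
print(sample, round(ratio, 4))     # 'F-4 sample' resampled at ~0.8909x (2 semitones down)
```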

Bottom line: rendering a plugin instrument to samples (which you then intend to play back as a new instrument) probably won’t sound 100% identical to the original plugin, but there are so many other factors at play here that you cannot simply blame it on the interpolation.

If we’re talking about simply playing back a sample in Renoise at its original pitch (ie. C-4), then the interpolation method does not matter here. There is no interpolation necessary when playing samples at their original pitch/speed, so there is no interpolation being applied. You are getting a 1:1 copy of the original data. Interpolation is only applied when playing the sample at a different pitch/speed.

It’s not strictly necessary, as I’ve explained above, but it will dramatically improve the quality of samples played at different pitches/speeds.

The quality of the original sample is not really the point here. High quality samples still need to be resampled with a good quality interpolation method if you want to accurately play them back at different speeds. In fact, you could argue that high quality samples probably benefit more from good interpolation, because there’s a lot more detail and/or frequency content that could be damaged and distorted if the signal is not properly reconstructed.
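
One rough way to see this, a sketch only: compare linear interpolation of a pure sine against the exact analytic values at the new read positions, and watch the error grow as the frequency approaches the top of the spectrum:

```python
import numpy as np

sr = 44100
step = 2 ** (3 / 12.0)                     # play back 3 semitones up

def linear_resample(x, step):
    pos = np.arange(0, len(x) - 1, step)
    idx = pos.astype(int)
    frac = pos - idx
    return pos, (1 - frac) * x[idx] + frac * x[idx + 1]

for freq in (100.0, 5000.0, 15000.0):
    n = np.arange(sr)                                # one second of signal
    x = np.sin(2 * np.pi * freq * n / sr)
    pos, approx = linear_resample(x, step)
    exact = np.sin(2 * np.pi * freq * pos / sr)      # analytic ground truth
    err = np.sqrt(np.mean((approx - exact) ** 2))
    print(f"{freq:7.0f} Hz  linear interpolation RMS error: {err:.5f}")
```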

I chose a sawtooth waveform because it is one of the most harmonically rich sounds that exist. Even though the fundamental frequency of my example sound was 110Hz, the sawtooth itself has strong harmonics that reach all the way up through the frequency spectrum. This sharp, rich sound is what makes the sawtooth an excellent basis for sound design, since you can use filtering to shape it into a huge range of diverse tones and timbres. Because it contains such rich harmonics in its raw form, any changes (ie. dulling) of the frequencies would be very easy to hear, so if there was a real difference in tone between playing the raw sound in Sound Forge vs playing the sound in Renoise, you would be able to identify it immediately.

If you really want to satisfy your own curiosity, then you could try the same test but with a sample of white noise instead. Generate some white noise in Sound Forge, then compare how it sounds in Renoise. Since white noise contains all frequencies distributed evenly, any change in tone in Renoise should also be pretty easy to spot this way.
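
If it helps, here’s a rough sketch for generating your own test files (the file names, levels and the naive non-band-limited sawtooth are my assumptions, not what Sound Forge would produce):

```python
import numpy as np
from scipy.io import wavfile

sr = 44100
t = np.arange(sr * 2) / sr                             # two seconds

# naive (non band-limited) 110 Hz sawtooth, at -6 dB to leave some headroom
saw = 0.5 * (2.0 * ((110.0 * t) % 1.0) - 1.0)

# white noise at the same level
noise = 0.5 * np.random.uniform(-1.0, 1.0, len(t))

for name, sig in (("test_saw_110hz.wav", saw), ("test_noise.wav", noise)):
    wavfile.write(name, sr, (sig * 32767).astype(np.int16))
```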

This could be related to what I’ve already said, regarding the fact that the plugin could be applying all sorts of subtle nuances and changes to the sound, compared to the sampled version in Renoise which is very static and could very easily sound different because of that. I don’t have Kontakt 3 or the instrument you’re talking about, so unfortunately I really cannot test this myself.

I’ve already asked a few times in this thread, but could you please, please, please just upload some example sounds that we can listen to? Render a sequence of notes played by the Kontakt plugin itself, then render the same sequence of notes played by the Renoise sampled version. Or render out any other kind of test you think is appropriate. Show me the differences you are experiencing. This is the only way that I can possibly hear exactly what you’re talking about.

If you don’t upload some examples that I can listen to and compare directly, then we’ll be going round and round in circles forever! :)

Sorry to nitpick but you have already pointed out yourself that nearest neighbour is a form of interpolation. It still has to check the position against both values and calculate which it is closest to. The alternative (no interpolation) would be to say “We don’t have a value here, I’m not going to look at what was closest, I am going to put a default value (0) in/repeat the last value.” This would result in nothing but an output of silence with glitches on it, or a change of level each time an old sample position coincides with a new one, unless you have pitched by a nice round division, such as an octave.

So yes, some form of interpolation is 100% required.

Also I don’t agree that a saw wave is a good test bed. Our ears very quickly adjust and get used to any steady tone. Transients are very important when trying to pick out finer detail without making the ears tired. It doesn’t matter how harmonically rich your source is if it’s a constantly repeated cycle (IMHO obviously.) Especially when using such a low base note, where the harmonics will be very close together in terms of perceived pitch at the upper registers, and thus much more susceptible to the auditory masking effect.

No interpolation means it does not calculate/synthesize missing samples, but replaces them with the previous sample:

Original: 1, 2, 3, 4

Twice slower:
No interpolation: 1, 1, 2, 2, 3, 3, 4
Linear interpolation: 1, 1.5, 2, 2.5, 3, 3.5, 4
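
A tiny numpy check of that same worked example (playing [1, 2, 3, 4] at half speed), a sketch only, just to show the two methods side by side:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
pos = np.arange(0, len(x) - 0.5, 0.5)      # half-speed read positions: 0, 0.5, ..., 3.0
idx = pos.astype(int)

hold = x[idx]                              # 'no interpolation': repeat the previous sample
frac = pos - idx
nxt = np.minimum(idx + 1, len(x) - 1)      # clamp at the end of the sample
linear = (1 - frac) * x[idx] + frac * x[nxt]

print(hold)    # [1. 1. 2. 2. 3. 3. 4.]
print(linear)  # [1.  1.5 2.  2.5 3.  3.5 4. ]
```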

Something has to be done in order to reconstruct the signal, I will agree with you there. But as Suva points out, no interpolation could mean simply repeating the last known sample, which is what Renoise currently does when interpolation is set to “None”. Nearest-neighbour is of course a very primitive form of interpolation, but for practical purposes it will produce almost identical results to simply repeating the last known sample, ie. not very good results at all for resampling. In my mind, neither method is useful except for novelty usage, ie. when specifically aiming for a very lofi sound.

I only used a sawtooth because of what dysamoria said in his original post:

You can’t really get much more harmonically rich than the sawtooth, so based on his own words I thought that would be a good place to start. The base frequency doesn’t have a lot to do with it in this particular instance, imho, as I believe it’s quite easy to spot a ‘dulling’ effect on a sawtooth over quite a wide range of base frequencies.

I don’t know exactly what constitutes a ‘dulling’ effect, but to hopefully give a very quick demonstration here’s another .wav to listen to:
test_saw_880hz_vs_110hz.wav

This is what you’ll hear:

  • sawtooth at 880Hz, unfiltered
  • sawtooth at 880Hz, filtered at 10kHz with an 8th-order Butterworth lowpass
  • sawtooth at 110Hz, unfiltered
  • sawtooth at 110Hz, filtered at 10kHz with an 8th-order Butterworth lowpass

Can we agree that a ‘dulling’ effect is observable in both instances?
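
For anyone who wants to reproduce the idea of that file themselves, here’s a rough scipy sketch (naive sawtooths and an assumed 8th-order Butterworth at 10 kHz; the exact rendering of the original .wav is not known to me):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

sr = 44100
t = np.arange(sr) / sr                                      # one second per segment
sos = butter(8, 10000, btype="low", fs=sr, output="sos")    # 8th-order lowpass at 10 kHz

def saw(freq):
    return 0.5 * (2.0 * ((freq * t) % 1.0) - 1.0)           # naive sawtooth at -6 dB

segments = []
for freq in (880.0, 110.0):
    raw = saw(freq)
    segments += [raw, sosfilt(sos, raw)]                    # unfiltered, then the 'dulled' version

wavfile.write("saw_880hz_vs_110hz_demo.wav", sr,
              (np.concatenate(segments) * 32767).astype(np.int16))
```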

Edit:
Anyway… until I actually hear some example sounds from dysamoria, this is all getting a bit silly. :)

Yeah sorry, I was being a bit blindsided: last sample that would have played, rather than last sample that was in the correct position, which in all likelihood would be the same as nearest neighbour 50% of the time (the next sample the other 50%.) In your example No Interpolation would also be the same as Nearest Neighbour (should have done a third to illustrate it better.)

I have personally admitted to very rarely being able to hear any difference, and that wasn’t a deadening of sound (which I still believe is often psychoacoustic and related to the -6dB drop in level.) Although so much of the time these days I’m on laptop sound and headphones anyway…

Trying to upload something that does have some noticeable differences when comparing inverted/mixed waves but the ftp access to my site is making me want to throw hammers at people!

Right I give up with my server, seems OVH are having serious FTP Upload issues (I can browse and delete fine but not a chance of uploading) so here it is on Mediafire.

http://www.mediafire.com/?4oiz51iext0i7zc

Process:
Select as high a quality break loop as possible, with good top end rides (24-bit, but unfortunately 44.1 kHz is all I had.)
Load into Renoise.
Add gainer in Master set to +6.021dB (can we get the Preset +6.021 rather than +6.00 dB please?)
Play sample at +3 semitones.
Render Song using both Cubic and Arguru’s Sinc.
Load rendered files, play at -3 semitones to play again at original pitch and export again using both interpolation methods.
So Arguru’s Sinc is used twice in the AS file, Cubic is used twice in the C file, and both have a double run of a 3 semitone change.

Personally I can’t hear the difference, and doubt I would if I had better than onboard chipset sound from my laptop through headphones, but you can see some differences when inverting and mixing with the original. This shows Arguru’s Sinc is definitely closer to the original waveform than Cubic. There may be some additional differences made by the fact 6.021dB isn’t quite exact, but it is pretty close.
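
For reference, the +6.021 dB figure is just 20·log10(2) ≈ 6.0206 dB, i.e. an exact doubling of amplitude, presumably there to compensate the default -6 dB drop mentioned earlier so the files can null properly. The invert-and-sum comparison itself can be sketched roughly like this (placeholder file names, assumes 16-bit PCM):

```python
import numpy as np
from scipy.io import wavfile

# Placeholder file names for the original break and one of the re-rendered versions.
sr, original = wavfile.read("break_original.wav")
_,  rendered = wavfile.read("break_cubic_roundtrip.wav")    # or the Arguru's Sinc version

a = original.astype(np.float64)
b = rendered.astype(np.float64)
n = min(len(a), len(b))
residual = a[:n] - b[:n]                                    # invert one file and sum

peak = np.max(np.abs(residual)) / 32768.0 + 1e-12           # assumes 16-bit PCM scaling
rms = np.sqrt(np.mean(residual ** 2)) / 32768.0 + 1e-12
print(f"null-test residual: peak {20 * np.log10(peak):.1f} dBFS, RMS {20 * np.log10(rms):.1f} dBFS")
```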

So can anybody actually hear much subjective difference in the samples? (I should probably have given them random names…)

i just want to say, i don’t really care if the issue gets resolved or not, i just enjoy reading all this stuff from people who understand way more of the technicalities of sound than i do, and understanding half of it, and hoping to understand it all when people stop replying to this thread :D

@dblue: your explanations are awesome and detailed and you should consider writing an article for Renoise: In Depth, as articles get posted there way too rarely (because WHEN an article is posted, it is always a great read, thanks to mr_mark_dollin for that most of the time i think)

@suva: thank you for the visual explanation of this interpolation stuff. i was going to ask for this and you beat me to it. i completely understand that now.

keep up the discussion guys! valuable information here :)

Basically this is correct; both are sub-par quality by today’s standards, although there is a real harmonic difference between no interpolation and nearest neighbor interpolation. The former produces some wrong harmonics in the sound, because the up-ramps will get unnaturally sharp.

EDIT: Okay maybe I was wrong, I sketched the idea and only thing I came up with was some slight phase shift. Otherwise they seem to be identical. Can’t say for sure, don’t have time to dig in more deeply at the moment. :)
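
For what it’s worth, a rough numeric check (a sketch only, not a proof) suggests the same thing: repeat-previous and nearest-neighbour pick values at most one source sample apart, so the outputs differ mainly in sub-sample timing rather than in tone:

```python
import numpy as np

sr, freq = 44100, 1000.0
x = np.sin(2 * np.pi * freq * np.arange(sr) / sr)

step = 2 ** (3 / 12.0)                      # play back 3 semitones up
pos = np.arange(0, len(x) - 1, step)

hold = x[pos.astype(int)]                   # repeat the previous sample (Renoise 'None')
nearest = x[np.round(pos).astype(int)]      # nearest-neighbour

# They only disagree where the fractional read position is above 0.5 ...
print("fraction of output samples that differ:", np.mean(hold != nearest))

# ... and their magnitude spectra should come out very close,
# i.e. the difference is mostly sub-sample timing, not harmonic content.
win = np.hanning(len(hold))
H = np.abs(np.fft.rfft(hold * win))
N = np.abs(np.fft.rfft(nearest * win))
print("relative spectral difference:", np.max(np.abs(H - N)) / np.max(H))
```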


Audio Myths Workshop - http://www.youtube.com/watch?v=BYTlN6wjcvQ

Strongly recommended