There seems to be a lot of this useless rubbish floating around the web right now. It's even getting to the point where people actually think they can't make music unless they're working at 24/96 or whatever the current buzzword in audio tech is.
Take a step back, take a deep breath and write some music, because guess what: it doesn't matter. 90% of the music-listening population of the world is now listening on some crap MP3 or such, and years ago 90% were listening to some crap recording of a copied cassette.
A good tune is a good tune, no matter how or what it is recorded on.
I agree with your opinion on the impact (or lack thereof) of audio quality on the musical value of a song. However, I'm not trying to make my music "better" by raising the sample rate. My music will get better with experience and dedication, not with advancements in technology. I have way more than I need in the technological toolbox to make a million dollars overnight. I just don't have the talent. That comes with time, as I'm sure you know.
So go listen to your crappy 128 kb/s 44.1 kHz MP3s and I will have no problem with it. I will continue to reach nirvana through audio quality as a personal experience; you don't have to have a problem with that, do you?
It's important to make a distinction between two kinds of adequacy in sampling rate. Research has shown rather conclusively that in double-blind tests, humans can't perceive a change in audio quality above 44.1–48 kHz. This is because the limit of human hearing is around 20 kHz at best, and the Nyquist theorem says that a sampling rate of twice the highest frequency you want to capture (here, 40 kHz) is enough to reproduce it faithfully, so anything much beyond that is superfluous for playback. Audiophiles say all sorts of things, but when it comes down to hard (i.e. not anecdotal) evidence, in a properly controlled study with high-end equipment, sampling rates of 44.1 kHz and 96 kHz (or higher) should be indistinguishable.
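The Nyquist limit above can be sketched in a few lines of Python. This is a toy example (the tone frequencies and sample count are just illustrative): a tone above half the sampling rate produces, sample for sample, the same values as a lower "alias" tone, so the converter literally cannot represent it as anything else.

```python
import math

# Nyquist: a sampling rate sr can only represent frequencies below sr / 2.
# A tone above that limit "folds back" (aliases) onto a lower frequency.
# Here we sample a sine at f and one at sr - f and show the sampled
# values are identical up to sign.

def sample_sine(freq_hz, sr_hz, n_samples):
    return [math.sin(2 * math.pi * freq_hz * n / sr_hz) for n in range(n_samples)]

sr = 44_100            # CD sampling rate
f_high = 25_000        # above the 22,050 Hz Nyquist limit
f_alias = sr - f_high  # 19,100 Hz: where the high tone folds to

a = sample_sine(f_high, sr, 64)
b = sample_sine(f_alias, sr, 64)

# Once sampled, the two tones are indistinguishable (b is exactly -a):
print(max(abs(x + y) for x, y in zip(a, b)))  # a value near 0
```

In other words, nothing above 22,050 Hz survives a 44.1 kHz capture in the first place, which is why rates beyond that buy nothing for straight playback.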
However, that doesn't mean higher sampling rates and bit depths are pointless; far from it. Even if you're dithering down to 44.1 kHz/16-bit, which you will eventually be doing in most cases, a higher sampling rate gives you more breathing room for manipulating the playback of samples while avoiding undesirable artifacts. For instance, you've likely noticed that if you slow the playback of a sample down, as the rate approaches zero you lose more and more harmonic content: the sound becomes darker and duller. By sampling at a higher rate, you can retain high-frequency content in your sound as you slow it down. This is especially useful for a tool like Renoise, where users are accustomed to creating sample-based instruments from single-cycle waveforms. Similarly, working with a high bit depth gives you extra headroom, which can be useful for preventing digital clipping during recording, even though 16-bit audio has a perfectly acceptable noise floor for playback.
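That breathing-room argument is easy to put in numbers. Assuming the simple model that halving playback speed halves every frequency in the sample, here is a rough sketch of how much audible content survives a half-speed slowdown at different recording rates (the 20 kHz hearing limit is the usual rule of thumb):

```python
# Sketch: slowing a sample down shifts all its frequencies down by the
# same factor, so content captured above the audible band "drops into"
# hearing range. The recording's own ceiling is its Nyquist limit.

AUDIBLE_LIMIT = 20_000  # Hz, roughly the top of human hearing

def highest_content_after_slowdown(record_sr_hz, speed_factor):
    """Highest frequency left after slowdown, assuming content was
    captured all the way up to the recording's Nyquist limit."""
    nyquist = record_sr_hz / 2
    return nyquist * speed_factor

for sr in (44_100, 96_000, 192_000):
    top = highest_content_after_slowdown(sr, 0.5)  # half speed
    verdict = "still fills the audible band" if top >= AUDIBLE_LIMIT else "sounds dull"
    print(f"{sr} Hz recording, half speed: content up to {top:.0f} Hz ({verdict})")
```

A 44.1 kHz recording tops out at about 11 kHz after a half-speed slowdown, while a 96 kHz one still reaches 24 kHz, which is the whole point of capturing high.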
As an analogy, think about video. We generally accept that once you reach a certain frame rate, motion is perceived as smooth and continuous, so video capture and playback systems are typically designed to operate at a frame rate around that threshold (24 fps, 30 fps, etc.). But try slowing down a video captured at 30 fps and it will quickly become choppy: to make those silky smooth slow-motion effects you see in movies, you need to film at a higher fps than would ordinarily be necessary, so that when you slow it down it still looks like continuous motion.
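The video analogy is the same arithmetic. With made-up but typical numbers (24 fps as the smoothness threshold, quarter-speed slow motion):

```python
# Slowing footage down divides the frames available per second of
# output by the slowdown factor, just as slowing audio divides its
# frequency content.

SMOOTH_THRESHOLD = 24  # fps commonly accepted as smooth motion

def effective_fps(capture_fps, slowdown_factor):
    """Frames per output second after slowing playback (0.25 = quarter speed)."""
    return capture_fps * slowdown_factor

for fps in (30, 120):
    out = effective_fps(fps, 0.25)  # quarter-speed slow motion
    verdict = "smooth" if out >= SMOOTH_THRESHOLD else "choppy"
    print(f"captured at {fps} fps, quarter speed: {out} fps ({verdict})")
```

30 fps footage drops to 7.5 fps at quarter speed and looks choppy; 120 fps footage still delivers 30 fps, which is why high-speed cameras exist for slow motion, and why high sampling rates exist for slowed-down audio.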
Yeah I see your point there. I’ve recorded stuff at 192kHz with the sole purpose of slowing it right down to make interesting noises. Hitting a glass ashtray with a pencil --> giant haunted cathedral bell, etc.
But as far as rendering a finished song is concerned, 192kHz is just insane.
I completely agree, sir, it's not that easy to tell the difference between 96 kHz and 192 kHz, but in theory it can be very useful.
And for the record, I can tell the difference between 48 kHz and 96 kHz, although admittedly it's a very small one.
But there is a MASSIVE DIFFERENCE between 44.1 and 48 kHz for me. It's crazily obvious to me.
I agree, although I wouldn't call it an onslaught… that's kind of like a war horse that has big numbers… hehehe, see what I did there?
This will sound silly to some of you, but before claiming that you don't hear a difference between 44.1 and 96, be sure that your soundcard can output 96k and that your speakers don't cut off around 18 kHz. I'm pretty sure all of you can feel the difference, which is not that big but enough to be worth considering, I think (better dynamics, a feeling of a larger sound, clarity).
Plus, I recently read an interview about analog-modeling plugins where the guys said that it's hard to simulate the behaviour of an analog compressor because the attacks were faster than one cycle at 44.1; that's probably why there is a huge gap between analog comps and digital ones.
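That claim is easy to sanity-check with arithmetic. The 10 µs attack time below is an assumed figure for a fast analog compressor, purely for illustration, compared against the duration of one sample at each rate:

```python
# One sample period vs a hypothetical fast analog attack time.
# If the attack completes inside a single sample period, a naive digital
# model running at that rate cannot resolve the gain change in time.

ATTACK_US = 10  # assumed attack time of a fast analog compressor, microseconds

for sr in (44_100, 96_000, 192_000):
    period_us = 1_000_000 / sr  # duration of one sample in microseconds
    resolved = period_us <= ATTACK_US
    print(f"{sr} Hz: one sample = {period_us:.1f} us, "
          f"{'can' if resolved else 'cannot'} resolve a {ATTACK_US} us attack")
```

At 44.1 kHz one sample lasts about 22.7 µs, longer than the assumed attack, while at 192 kHz a sample is about 5.2 µs, so higher internal rates give such models finer time resolution.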
Plus, consider that between the moment you start your music from samples and the moment it's eventually released on CD, for example, you will go through 3 to 5 conversions, which are all destructive and kill some of the harmonics that you had carefully crafted with your EQs. I think, but correct me if I'm wrong, that mixing is calculated more accurately at higher sample rates.
Really, I would focus more on your craft. We can all say we have heard some absolutely amazing tracker songs done on the Amiga at 8k back in the day, while hearing radio rubbish created on the "latest and greatest tech".
Some of the best music was created before the digital age; just remember it's a tool, not a necessity. Sometimes overthinking tech can cloud one's creative thoughts. Just have fun and seek better methods of mastering. Mastering is the craft that can make or break your 44.1, 48 or 192 kHz masterpiece.
I think we all totally agree with that, but I think we would also all agree that nothing sounds better than analog, so why consider it worthless to aim for a more transparent/analog sound?
Don't think mastering can do miracles either; I think all mastering engineers would agree that the better the mix is, the better the mastering will be.
I didn't do a proper blind test, but last year a teacher played us different recordings of strings at 24-bit 44.1, 48 and 96 PCM and 320 kb/s MP3 (on a pair of Genelec 1031s / Digidesign 192 / SSL AWS 900), and the difference was clearly audible, though I agree most people don't listen on such gear (which is not the best, but already really good). Then we did a summing test with drum stems recorded at 96, converted to 48, 44.1 and 320 kb/s MP3, and bounced all these mixes to 16-bit 44.1, all in Pro Tools. Even after the 16-bit 44.1 bounce we could tell which one came from 96 and which one from MP3, and some phase conflicts on the 48 allowed us to tell it apart from the 44.1. To put it simply, the 96 one was wider, seemed to have better dynamics, seemed more alive. (I specified Pro Tools because if you try the same bounce from stems in Cubase/PT/DP, you will hear that they all come out different.)
I agree we are on trackers, which is more bedroom-related than what you'd expect from a professional studio, but some of us are doing extreme sample manipulation which requires that kind of "high quality", and I think that's enough to be worth considering by the developer team (and it's on the to-do list, as Pysj said before).
But we don't want this thread to turn into another "44.1 is enough, everybody listens to MP3" vs "we need 320 kHz 32-bit float" fight, so come on people, respect everybody's point of view.
If one has a 192 kHz audio card, that would make sense. But I believe most don't go over 96 kHz.
So for whatever purpose one would need 192 kHz, it is pointless without a 192 kHz audio card, because you can't hear what you're doing, and the outcome mixed back down to 96 or 48 kHz always sounds different.
Another thing: audio always gets downgraded again anyway, because most folks don't have audio gear that plays above 44/48 kHz.
192 kHz processing, perhaps, if you like the degraded outcome after processing; nothing wrong with that.
But not for playback or mixdown rendering.