Renoise And Other DAWs' Audio Engines

Thanks for that reply :)

I completely understand that, and I very much respect your scientific approach to analyzing, comparing and demonstrating things. Also, I wish I had found the thread kazakore pointed me to before posting.
That being said, the question I asked is still valid. I didn’t imply that Renoise’s audio engine sucks; I just wanted to know what would make it different from another, and whether there’s one that is objectively better (i.e. more neutral?) than the others. I expected people like you to give a precise, objective and cautious explanation (so I could shut my friend’s mouth, I admit).
For instance, I’ve been told that Pro Tools doesn’t sum/mix (I’m not sure of the right English term for that) tracks the same way as other DAWs. So, do Renoise, Cubase, FL Studio, Live, Reaper, Reason, etc. all have their own way of summing/mixing tracks, coloring the sound, or would they all give a closely similar output for the same file? I guess that was the question :)

All DAWs use different code; they are not the same audio algorithms. The only way to know this would be to compare all the code used. Hell, even in Renoise you get different rendering options on the bounce-down. If they all sound the same, then why does Renoise even give you options? All MP3 encoders are different to a degree as well. They all sound different to me: the VSTs, the FX sends, the plugins. Cubase, to me, is the worst sounding of all of them, while I think Logic is the best sounding for that style of DAW; SAW Studio and Renoise kick the crap out of everything else, in my honest opinion. A DAW is like picking a favorite synth: all DAWs sound different, all synths sound different. Get it?

Some people prefer the way Cubase sounds; I am not one of those people.

Whatever the outcome of whatever a/b listening test, I’d rather hear a great piece of music played back shitty than hear a perfect mix of shit music.

I did do this with Cubase SX and Live. They play back differently; we already established that some DAWs play back louder than others, which can color your perception. But they render the same. And if you account for the difference in volume, they sound the same, too.

No man, I’m telling you from my experience: Cubase ruins your high freqs, they are not there at all, not compared to real analog sounds or to other DAWs. They don’t render the same; like I stated above, even Renoise has rendering options to render the track differently. It is not just a louder-vs-quieter argument. Cubase has big bass and no top end; Ableton has top end and lacks bass. People like bass, so they think Cubase sounds better. I like all freqs; SAW and Renoise give you that. Others don’t.

The test here is simple.

Test 1

  1. Import a sample into Renoise
  2. Place sample on track
  3. Export a WAV (do this as many times as there are rendering options)
  4. Repeat with another program
  5. Compare files

Test 2

  1. Import two samples into Renoise
  2. Place sample 1 on track 1, place sample 2 on track 2
  3. Export a WAV (do this as many times as there are rendering options)
  4. Repeat with another program
  5. Compare files

I can tell you that in Garageband, for example, there will be echo and reverb because they are on the master track by default. Does it sound better? Maybe, but then this isn’t about rendering, it’s about colouring. I can also hint that if you are importing an MP3 in Step 1, it will also sound different because MP3 is a lossy format. And if you are using a VSTi instead of a sample, then you are introducing variables from another program. Again, “sounds better” is not a methodological approach. What are you comparing, and are they even the same thing?
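
For what it’s worth, the “compare files” step in those tests doesn’t have to be done by ear. Below is a minimal null-test sketch in Python, assuming both renders are 16-bit PCM WAVs with the same length and sample rate; the file names are just placeholders:

    import wave
    import numpy as np

    def load_wav(path):
        # Assumes 16-bit PCM; returns samples scaled to [-1.0, 1.0] plus the sample rate
        with wave.open(path, "rb") as w:
            raw = w.readframes(w.getnframes())
            data = np.frombuffer(raw, dtype=np.int16).astype(np.float64) / 32768.0
            return data, w.getframerate()

    a, rate_a = load_wav("render_renoise.wav")      # placeholder file names
    b, rate_b = load_wav("render_other_daw.wav")
    assert rate_a == rate_b and len(a) == len(b), "renders are not directly comparable"

    # Null test: invert one render and sum. If the renders are identical, the residual is silence.
    residual = a - b
    peak_db = 20 * np.log10(max(np.max(np.abs(residual)), 1e-12))
    print(f"Peak residual: {peak_db:.1f} dBFS")

If the peak residual sits down at the rounding-error floor, the two DAWs rendered the same thing, whatever impression anyone has while listening.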

PRO TIP: This must be the 50th time I’ve linked it in a thread like this. LOOK HERE!!! Sampler anti-aliasing and pitch shifting comparison

EDIT: Is the above quote out of context? Yes. Read “What is aliasing?” linked on the side of the article. Then ask yourself, what is this thread about?

Good times.

Let me just stop you there and say that the sound of the native filters and effects bundled with each DAW has nothing to do with the “audio engine” of that DAW. Those filters and effects are obviously going to sound different from host to host, because they’re probably all using slightly different techniques and algorithms, as you correctly pointed out. There will most certainly be some noticeable differences in the sound of each DAW’s native effects: differences in tone, character, colour, frequency response, stereo image, and countless other things. But once again, those filters and effects have nothing to do with the “audio engine” of the DAW. They are simply not part of the main argument here, and it doesn’t make any sense to consider them when comparing the “sound” of a DAW.

The “audio engine” would be responsible for things like routing signals between tracks and mixing them together, resampling audio files to different rates (i.e. playing samples at different pitches), and other low-level processes like that. If there are any major differences between DAWs (and there most certainly are), then they will occur mostly at this level.

When you break it down to the most basic situation - playing/mixing unprocessed tracks through the DAW - then all DAWs should behave in the same way and produce almost identical results. Essentially, what you put into the DAW should be the same as what you get out of the DAW (not counting differences in gain due to mixing or whatever).

But there’s really nothing fancy going on at this level. Digital “summing” or “mixing” is nothing special. It’s literally just adding numbers together. Fundamentally speaking, it’s 1 + 1 = 2.
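
To put that concretely, here is essentially all that digital summing amounts to, sketched in Python with two made-up sine-wave “tracks”:

    import numpy as np

    rate = 44100
    t = np.arange(rate) / rate                    # one second of time
    track1 = 0.3 * np.sin(2 * np.pi * 220 * t)    # two unprocessed "tracks"
    track2 = 0.3 * np.sin(2 * np.pi * 330 * t)

    mix = track1 + track2                         # "summing": sample-wise addition, nothing more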

Most DAWs process audio in floating-point format, which computers can’t handle 100% perfectly to begin with, so there will be some low-level differences in how these numbers are processed and how each programmer decides to calculate the output of various functions, but we’re talking about unbelievably tiny differences that are essentially impossible for humans to detect without additional analytical tools. Many of the differences are so far beyond the range of human hearing that it’s not even worth mentioning.
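
As a rough illustration of just how tiny those differences are, here’s a toy example of my own (not any DAW’s actual code) that sums the same thirty 32-bit float “tracks” in two different orders; the residual between the two mixes typically comes out somewhere around -120 dBFS or lower:

    import numpy as np

    rng = np.random.default_rng(0)
    tracks = rng.uniform(-0.1, 0.1, size=(30, 44100)).astype(np.float32)   # thirty fake tracks

    mix_fwd = np.zeros(44100, dtype=np.float32)
    for trk in tracks:                      # sum tracks 1..30
        mix_fwd += trk

    mix_rev = np.zeros(44100, dtype=np.float32)
    for trk in tracks[::-1]:                # same tracks, summed 30..1
        mix_rev += trk

    worst = np.max(np.abs(mix_fwd - mix_rev))
    print(20 * np.log10(max(float(worst), 1e-30)), "dBFS")   # on the order of -120 dBFS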

If Cubase’s audio engine - arguably the most fundamental part of the entire application - was ruining high frequencies and artificially boosting low frequencies when the user simply played an uneffected track through the DAW, do you seriously think that this behaviour would be acceptable in any way? Do you seriously think that people would continue to pay $$$ for Cubase and just accept that it ruins the sound? Really? I call bullshit! Nobody buys a DAW that ruins their sound.

When simply playing audio (or output from a VST, or whatever) through the DAW and mixing tracks together with clean settings, there is no conceivable or logical reason why Cubase (or any other DAW) would ever colour the sound in that way. No reason at all (!), unless the user had specifically added some kind of EQ or filter to the track itself. If the user chooses to use Cubase’s (apparently shitty) native effects and ruin their sound, then it’s got nothing to do with Cubase’s underlying audio engine, and everything to do with the person that programmed the shitty native effects and the user that chose to use those shitty effects.

The same goes for Renoise and any other DAW. The native bundled effects will always have their own character in some way, but this has nothing to do with the underlying audio engine of that DAW.

If you load an audio file into Renoise, Cubase, Ableton Live, FLStudio, or any other DAW; then you play it through a track uneffected; then you render the resulting song/project to a new audio file; what you actually hear will be identical. There is no logical reason whatsoever for it to sound different. Any DAW that does produce different results or ruins the sound in some way is fucking broken and should be avoided!

I assumed he was talking about interpolation and dithering, both of which do have many different methods and algorithms. The first can make a lot of difference (and different options are provided within Renoise for this reason); the second causes a lot of arguments amongst the audiophiles, but all agree that dithering of some kind is important.

Interpolation in audio:
http://www.earlevel.com/main/1996/10/19/oversampling/

Conclusion:
In at 44,100, out at 44,100? No interpolation occurs. Upsample? This is “colouring”.

Dithering
http://www.earlevel.com/main/1996/10/20/what-is-dither/

Conclusion:
This is “colouring”.
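
For anyone curious what dither actually does, the simplest common flavour (TPDF) just adds a tiny amount of triangular noise before truncating down to 16 bits. A rough sketch, not any particular DAW’s implementation:

    import numpy as np

    def to_16bit_tpdf(signal, rng=None):
        # Quantize a float signal in [-1.0, 1.0] to 16-bit with TPDF dither.
        rng = rng or np.random.default_rng()
        scaled = signal * 32767.0
        # Triangular noise spanning roughly +/- 1 LSB (sum of two uniform +/- 0.5 LSB noises)
        dither = rng.uniform(-0.5, 0.5, len(signal)) + rng.uniform(-0.5, 0.5, len(signal))
        return np.clip(np.round(scaled + dither), -32768, 32767).astype(np.int16)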

Q: If a tree falls in a forest does anybody hear?
A: If a user colours their sound but doesn’t know, is that the audio engine?

The real problem with these kind of threads is that people throw around terms, urls, and bullshit based on what they imagine programming to be without ever doing any themselves.

Guilty as charged.

It also occurs every time you play a sample at something other than its original pitch, as you are not playing the sample back at the same rate it was recorded at!

In fact that point is MUCH more important than with oversampling, as you don’t have the original in-between points any more either.
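
To make that concrete: playing a sample a fifth up means reading it back roughly 1.5x faster, so most read positions fall between the original recorded points and have to be interpolated somehow. A naive linear-interpolation sketch (real samplers offer better options than plain linear, which is part of why Renoise exposes different interpolation settings):

    import numpy as np

    def repitch_linear(samples, ratio):
        # Read `samples` back at `ratio` times the original speed using linear interpolation.
        positions = np.arange(0, len(samples) - 1, ratio)   # fractional read positions
        idx = positions.astype(int)
        frac = positions - idx
        # Weight the two neighbouring original points by how close the read position is to each
        return (1.0 - frac) * samples[idx] + frac * samples[idx + 1]

    # e.g. repitch_linear(sample, 2 ** (7 / 12)) plays `sample` roughly a perfect fifth higher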

aren’t interpolation and dithering done with industry-standard algorithms? so any decent DAW should colour the same way?

Ok, I agree.

But is that the audio engine? Sounds more like a sampler to me.

Of course in Renoise, the sampler is pretty much the entire app… But in something like Audacity, changing the pitch is a destructive action and is one of many effects.

IMHO it’s a matter of reducing the scope of the discussion to something that can be proven.

To be honest it has been so long since I used a traditional DAW, but I thought most would have a way of playing melodies with samples without having to use an external plugin such as a sampler. Possibly if they also include their own packaged…

Also, it still comes into play anyway. I start a project where half my samples are 44.1 kHz and the other half are 48 kHz. No matter what rate you choose for your song, the resampling has to be performed on half of the samples upon loading.

imo these type of threads are a waste of everybody’s valuable time and should be closed by a moderator as soon as possible. Maybe this thread or the other one can be pinned permanently and used as reference in future closed subjective bull threads.

I’m sorry I always get suckered in, I think the arguments and differences are bullshit but the technology interests me.

If you want to read more about it, check these 7 pages of rigorous testing methodology, one of the four links provided in the Simon V test.

http://www.maz-sound.com/index.php?show=mpcs&mpc_id=34

Of note is the first phrase: “The most important instrument for many musicians is the sampler […]” The article then goes on to discuss resampling and aliasing, with testable results. For me, sampling is not the audio engine. And even if we were discussing sampling, Renoise has been tested and proven to be “perfect.”

With that out of the way, and as a final thumbs up from me, the problem with these threads:

User states Renoise sounds bad or good. Asked why?

  • 64-bit math
  • Summing engine
  • Ears are plated with gold
  • Other insane theory

All of which are easily proven wrong with some quiet reading time and reflection. Instead people smoke a fat joint and “just know things” in public.

The most heart-wrenching part is that for several years dblue has politely tried to help find real problems. Even when he wasn’t on the team he was in there trying to get to the bottom of things. At any level of the app, with regards to sound quality, he’s willing to work it out and try to solve issues. If there’s a problem, he’s the guy who will put in a huge amount of work to help anyone with a legitimate claim and an inkling of proof, screenshots included.

Instead we get these threads. I’m derailing with Twix while boooooo-ing loudly, others do more or less the same thing, and the original poster turns out to be some sort of dunce; possibly driving dblue crazy in the process… Choo choo!

:clownstep:

@Conner_Bw: i know i can always count on you to say the things i did not have the eloquence to say. couldn’t agree more.

No, people are lacking experience on this subject. How many people in the world buy more than one DAW?

I have. I had Cubase and Logic, a friend had Ableton, and guess what, I used each of them hardcore; I made LPs with each of them. Cubase does in fact ruin your high end, but only when it renders in any form; it doesn’t record it in wrong. I took songs I made in Cubase and exported them as OMF, put all the sliders in the right places, pan laws, everything. Logic rendered them with a tighter, cleaner high-freq sound. Way better, way way way way better; it was not subtle. These were like 30 tracks with 8 sub-mixes, not simple tracks at all, all mixed at -10 to -16 dB per track, no clipping.

Cubase ruins your music if you play back with it or sum with it. Seriously, even playing back through an external mixer out of high-end converters, it will taint your sound compared to Logic. I have experienced this. Is what I am saying scientific? No, I’m talking from my experience only. I’m not trying to tell anyone what to do. I am saying that from my experience with all of these DAWs, Cubase is severely lacking in high-end freqs, and attack transients suffer because of it.

For the record, go to any Apple Logic forum and read what fleeing Cubase users have to say; tons of them will tell you what I am telling you. Cubase colors the sound big time, it’s how they coded their whole audio engine. They tried to make it sound like some kind of analog tape where the high freqs are rolled off like analog tape used to do, but it fails so fucking bad at it.

Compare hats and hissy sounds, vocal S sounds; golden ears can hear Cubase dulling those sounds out big time. A while back I rendered the same songs in SAW, Logic, and Cubase, and every one of my fans that downloaded the stuff said they could tell Cubase was the most muddy and dull sounding compared to SAW and Logic.

Here is my comparison. Logic Vs cubase. Mixed the same. This is how different the audio engines sounded for me.

http://www.velvetacidchrist.com/vaccubasevslogic.rar

They sound very different to me.

They definitely sound different to me, too. Looking at and listening to each waveform, it’s immediately obvious that the snare and vocal elements (and possibly other elements) in the Logic render are substantially louder than in the Cubase render, so they are certainly not mixed the same way at all and therefore this is not a fair comparison.

For what it’s worth, the kick drum in each render is all but identical. So much so that doing an inverted mix-paste causes the kick to cancel/null itself out to silence.

Were these files both genuinely rendered from their respective DAW, or did you simply create these differences to demonstrate roughly how it sounded to you at the time?