Renoise and Other DAWs' Audio Engines

Do it yourself!!

I have done it a lot. All DAWs sound different, and the sum tests don't test for that. I think null tests are stupid; I use my ears.

Why do DAWs sound different? They all use different rendering code. They all use different FX. They all color the sound in their own ways as well: Cubase makes everything sound so soft and dark, while Renoise can get nasty and mean. They all handle being overdriven in different ways.

To think all code sounds the same is silly… thinking that would mean all virtual analog synths sound the same, and they most certainly don't. I have used most of the DAWs on the market extensively to make music. I know that I prefer the sound of Renoise over all of them.

Back in the day, if I had a lot of white on my computer monitor I would hear a faint buzz in my audio. This was the DOS era, where everything was normally black; that is to say, little light on a monitor that otherwise created enough heat for a Canadian winter.

Back in the day, if I dragged Winamp across the screen I would hear a giant crackle. I had to move my soundcard one slot over so it wasn’t next to the video card.

Back in the day, I would hear faint Morse code in my speakers whenever wireless internet traffic passed through my router, which sat, badly placed, right next to my speakers.

In all cases some form of electrical interference was fucking with the audio path. These are real things that happened, and can still happen.

I suspect in all cases it's a mix of marketing placebo, mental disorder, being high at the time, and a shitty understanding of how electrical devices interact with each other.

In terms of “bits and bytes” it's all the same.

On the minuscule chance something is wrong:

No one is talking about distortion or compressor DSPs or Glitch vs Electrix here. In these cases something can indeed sound better because they do different things and people like “MORE DISTORTION!!!” or whatever.

In terms of straight-up representation of audio in the digital domain, it's all the same. If it isn't, it should be fixed methodologically, not psychoacoustically with subjective mumbo jumbo.

Ears aren't tools to prove things; solid measurements are. Noticeable differences are experienced, but that doesn't always mean the audio is malformed in some way.
A softer volume level doesn't make the sound dull, only the experience of it.

So more highs, mids and lows. Isn't that the full range? So you mean it's louder :P

So who wants to try rendering the same MIDI track in a number of sequencers using exactly the same plugs with the same settings? I only have a licence for one, so I can't. Also, how they handle samples played internally at different pitches, rather than audio once it has been generated by a plug-in, is quite important in a lot of cases.

Not that I currently have a listening setup on which I think I would be able to tell the difference, though some people might. Still, there are scientific tests which can be done.

hahahahahaha fucking genius

To end this debate, check out this thread: https://forum.renoise.com/t/why-does-renoise-sounds-so-good/32414, specifically the question in the first post and the answers in the following two.

vV.

Ears are the best tool, and they made all the music from back in the day sound a million times better than half of the crap that comes out today. Back in the 80s they did not have null tests and stupid digital mixers :) That was before limiters destroyed all musical dynamics.

I have made things that come close to nulling, and things that null completely.

My friends and I still did blind tests on each other, and we all could hear the differences between the DAWs on our music.

If you have doubts, please: write a complex track, make it the same in every DAW, render them all and listen to them. Fuck null tests, LISTEN.

Thanks for that reply :)

I completely understand that, and I very much respect your scientific approach to analyzing, comparing and demonstrating things. Also, I wish I had found the thread kazakore pointed me to before posting.
That being said, the question I asked is still valid. I didn't imply that Renoise's audio engine sucks; I just wanted to know what would make it different from another, and whether there's one that is objectively better (i.e. more neutral?) than the others. I expected people like you to give a precise, objective and cautious explanation (so I could shut my friend's mouth, I admit).
For instance, I've been told that Pro Tools doesn't sum/mix (I'm not sure which is the right English term) tracks the same way as other DAWs. So, do Renoise, Cubase, FL Studio, Live, Reaper, Reason, etc. all have their own way to sum/mix tracks, coloring the sound, or would they all give a closely similar output of the same file? I guess that was the question :)

All DAWs use different code; they are not the same audio algorithms. The only way to know this would be to compare all the code used. Hell, even in Renoise you get different rendering options on the bounce-down. If they all sound the same, why does Renoise even give you options? All MP3 encoders are different to a degree as well. They all sound different to me: the VSTs, the FX sends, the plugins. Cubase to me is the worst sounding of all of them, while I think Logic is the best sounding for that style of DAW; SAW Studio and Renoise kick the crap out of everything else, in my honest opinion. A DAW is like picking a favorite synth: all DAWs sound different, all synths sound different. Get it?

Some people prefer the way Cubase sounds; I am not one of those people.

Whatever the outcome of whatever a/b listening test, I’d rather hear a great piece of music played back shitty than hear a perfect mix of shit music.

I did do this with Cubase SX and Live. We already established that they play back differently: some DAWs play back louder than others, which can color your perception. But they render the same. And if you account for the difference in volume, they sound the same too.

No man, I'm telling you from my experience: Cubase ruins your high freqs; they are not there at all, not compared to real analog sounds or to other DAWs. They don't render the same. Like I stated above, even Renoise has rendering options to render the track differently. It is not just a louder-vs-quieter argument. Cubase has big bass and no top end; Ableton has top end and lacks bass. People like bass, so they think Cubase sounds better. I like all freqs; SAW and Renoise give you that. Others don't.

The test here is simple.

Test 1

  1. Import a sample into Renoise
  2. Place sample on track
  3. Export a WAV (do this as many times as there are rendering options)
  4. Repeat with another program
  5. Compare files

Test 2

  1. Import two samples into Renoise
  2. Place sample 1 on track 1, place sample 2 on track 2
  3. Export a WAV (do this as many times as there are rendering options)
  4. Repeat with another program
  5. Compare files

I can tell you that in GarageBand, for example, there will be echo and reverb because it's on the master track by default. Does it sound better? Maybe, but then this isn't about rendering but about colouring. I can also hint that if you are importing MP3 in Step 1 then it will also sound different, because MP3 is a lossy format. I can also hint that if you are using a VSTi instead of a sample then you are introducing variables from yet another program. Again, “sounds better” is not a methodological approach. What are you comparing? Are they even the same thing?
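And for step 5, the comparison shouldn't be done by ear at all. Here is a minimal sketch in Python of a sample-by-sample comparison, with optional gain matching since some hosts play back louder; the filenames are hypothetical placeholders, and it assumes both renders are 16-bit PCM WAVs with the same sample rate and channel count:

```python
# Minimal null-test sketch for step 5 ("Compare files").
# Assumes both renders are 16-bit PCM WAVs at the same sample rate
# and channel count; the filenames are hypothetical placeholders.
import wave
import numpy as np

def load_wav(path):
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    return np.frombuffer(frames, dtype=np.int16).astype(np.float64)

a = load_wav("render_renoise.wav")
b = load_wav("render_other_daw.wav")
n = min(len(a), len(b))            # tolerate slightly different lengths
a, b = a[:n], b[:n]

# Optional: match overall gain first, since some DAWs play back louder.
gain = np.sqrt(np.sum(a * a) / max(np.sum(b * b), 1e-12))
residual = a - gain * b

# 0 means the renders null perfectly; a step or two is rounding/dither.
print("peak residual (16-bit steps):", int(np.max(np.abs(residual))))
```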

PRO TIP: This must be the 50th time I've linked it in the 50th thread like this. LOOK HERE!!! Sampler anti-aliasing and pitch shifting comparison

EDIT: Is the above quote out of context? Yes. Read “What is aliasing?” linked on the side of the article. Then ask yourself, what is this thread about?

Good times.

Let me just stop you there and say that the sound of the native filters and effects that come bundled with each DAW has nothing to do with the “audio engine” of that DAW. Those filters and effects are obviously going to sound different from host to host, because they're probably all using slightly different techniques and algorithms, as you correctly pointed out. There will most certainly be some noticeable differences in the sound of each DAW's native effects: differences in tone, character, colour, frequency response, stereo image, and countless other things. But once again: those filters and effects do not have anything to do with the “audio engine” of the DAW. They are simply not part of the main argument here, and it doesn't make any sense to consider them when comparing the “sound” of a DAW.

The “audio engine” would be responsible for things like routing signals between tracks and mixing them together, resampling audio files to different rates (i.e. playing samples at different pitches), and other similar low-level processes. If there are any major differences between DAWs (and there most certainly are), then they will occur mostly at this level.

When you break it down to the most basic situation (playing or mixing unprocessed tracks through the DAW), all DAWs should behave in the same way and produce almost identical results. Essentially, what you put into the DAW should be the same as what you get out of the DAW (not counting differences in gain due to mixing or whatever).

But there's really nothing fancy going on at this level. Digital “summing” or “mixing” is nothing special. It's literally just adding numbers together. Fundamentally speaking, it's 1 + 1 = 2.
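In code, that really is all there is to it; a toy sketch (the sample values are made up):

```python
import numpy as np

# Two "tracks" as floating-point sample buffers (values are made up).
track1 = np.array([0.25, -0.50, 0.10], dtype=np.float32)
track2 = np.array([0.30,  0.20, -0.40], dtype=np.float32)

# Digital "summing" on the master bus: element-wise addition, nothing more.
master = track1 + track2
print(master)  # -> roughly [ 0.55 -0.3  -0.3 ]
```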

Most DAWs process audio in floating-point format, which computers can't handle 100% perfectly to begin with, so there will be some low-level differences in how these numbers are processed and how each programmer decides to calculate the output of various functions. But we're talking about unbelievably tiny differences that are essentially impossible for humans to detect without additional analytical tools. Many of the differences are so far beyond the range of human hearing that it's not even worth mentioning.
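To make those “unbelievably tiny differences” concrete: floating-point addition is not even associative, so two engines that merely sum the same tracks in a different order can disagree in the last bit. In plain Python:

```python
# Floating-point addition is not associative, so two engines that sum
# the same tracks in a different order can differ in the last bits.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)                 # 0.6000000000000001
print(a + (b + c))                 # 0.6
print((a + b) + c == a + (b + c))  # False
```

The discrepancy here is on the order of 1e-16, hundreds of dB below anything audible.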

If Cubase’s audio engine - arguably the most fundamental part of the entire application - was ruining high frequencies and artificially boosting low frequencies when the user simply played an uneffected track through the DAW, do you seriously think that this behaviour would be acceptable in any way? Do you seriously think that people would continue to pay $$$ for Cubase and just accept that it ruins the sound? Really? I call bullshit! Nobody buys a DAW that ruins their sound.

When simply playing audio (or output from a VST, or whatever) through the DAW and mixing tracks together with clean settings, there is no conceivable or logical reason why Cubase (or any other DAW) would ever colour the sound in that way. No reason at all (!), unless the user had specifically added some kind of EQ or filter to the track itself. If the user chooses to use Cubase's (apparently shitty) native effects and ruin their sound, then it's got nothing to do with Cubase's underlying audio engine, and everything to do with the person who programmed the shitty native effects and the user who chose to use them.

The same goes for Renoise and any other DAW. The native bundled effects will always have their own character in some way, but this has nothing to do with the underlying audio engine of that DAW.

If you load an audio file into Renoise, Cubase, Ableton Live, FL Studio, or any other DAW; then you play it through a track uneffected; then you render the resulting song/project to a new audio file; what you actually hear will be identical. There is no logical reason whatsoever for it to sound different. Any DAW that does produce different results or ruins the sound in some way is fucking broken and should be avoided!

I assumed he was talking about interpolation and dithering, both of which have many different methods and algorithms. The first can make a lot of difference (and different options are provided within Renoise for this reason); the second causes a lot of arguments amongst audiophiles, but all agree that dithering of some kind is important.

Interpolation in audio:
http://www.earlevel.com/main/1996/10/19/oversampling/

Conclusion:
In at 44,100, out at 44,100? No interpolation occurs. Upsample? This is “colouring”.
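To make that concrete, here is a minimal sketch of the simplest interpolator, linear interpolation. Real engines offer better ones (cubic, sinc, etc.), and that choice is exactly where the “colouring” comes from:

```python
import numpy as np

def resample_linear(x, ratio):
    """Resample buffer x by `ratio` (e.g. 48000/44100) using linear
    interpolation: each output sample is a weighted mix of the two
    nearest input samples. DAWs differ in which interpolator they
    use (linear, cubic, sinc...), and that choice is the colouring."""
    positions = np.arange(0, len(x) - 1, 1.0 / ratio)  # fractional read points
    idx = positions.astype(int)
    frac = positions - idx
    return x[idx] * (1.0 - frac) + x[idx + 1] * frac

tone = np.sin(2 * np.pi * 1000 * np.arange(441) / 44100)  # 1 kHz test tone
upsampled = resample_linear(tone, 48000 / 44100)          # 44.1 kHz -> 48 kHz
```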

Dithering
http://www.earlevel.com/main/1996/10/20/what-is-dither/

Conclusion:
This is “colouring”.
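As a rough illustration, here is a sketch of one common variant, TPDF dither, applied when truncating floats to 16 bits; actual dithering and noise-shaping algorithms differ per DAW, which is what the audiophile arguments are about:

```python
import numpy as np

def dither_to_16bit(x, rng=np.random.default_rng(0)):
    """Quantise float samples in -1.0..1.0 to 16-bit integers with TPDF
    dither: add triangular noise of about one quantisation step before
    rounding, trading correlated truncation distortion for benign noise."""
    noise = rng.uniform(-0.5, 0.5, len(x)) + rng.uniform(-0.5, 0.5, len(x))
    return np.clip(np.round(x * 32767.0 + noise), -32768, 32767).astype(np.int16)

quiet = 0.0001 * np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
print(dither_to_16bit(quiet)[:8])  # low-level detail survives as noise
```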

Q: If a tree falls in a forest does anybody hear?
A: If a user colours their sound but doesn’t know, is that the audio engine?

The real problem with these kinds of threads is that people throw around terms, URLs, and bullshit based on what they imagine programming to be, without ever doing any themselves.

Guilty as charged.

It also occurs every time you play a sample at something other than its original pitch, as you are not playing the sample at the same rate it was recorded at!

In fact that point is MUCH more important than with oversampling, as you don't have the original points in between any more either.
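A quick way to see this: playing one semitone up means stepping through the sample at 2^(1/12) ≈ 1.0595 times its recorded rate, so essentially no read position lands on a recorded point any more:

```python
import numpy as np

# Playing one semitone up = stepping through the sample at 2**(1/12)
# times the recorded rate. Count how many read positions land exactly
# on a recorded point versus in between (where a value must be invented).
step = 2 ** (1 / 12)                     # ~1.0595 per semitone
positions = np.arange(0, 44100, step)    # one second of 44.1 kHz audio
on_grid = np.isclose(positions % 1.0, 0.0)
print(f"{int(on_grid.sum())} of {len(positions)} reads hit a recorded sample")
# Expect only the very first read (position 0) to hit the grid.
```

Virtually every output sample has to be invented from its neighbours, which is why the choice of interpolation algorithm matters so much here.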

Aren't interpolation and dithering done with industry-standard algorithms? So shouldn't any decent DAW colour the sound the same way?

Ok, I agree.

But is that the audio engine? Sounds more like a sampler to me.

Of course in Renoise, the sampler is pretty much the entire app… But in something like Audacity, changing the pitch is a destructive action and is one of many effects.

IMHO it’s a matter of reducing the scope of the discussion to something that can be proven.

To be honest, it has been so long since I used a traditional DAW, but I thought most would have a way of playing melodies with samples without having to use an external plugin such as a sampler. Possibly if they also include their own packaged…

Also, it still comes into play anyway. Say I start a project where half my samples are 44.1 kHz and the other half are 48 kHz: no matter what rate you choose for your song, the conversion has to be performed on half the samples upon loading.