What's next for Renoise?

Take all that into account and guess what the next version of Renoise will have. I'll guess: its own DSP language for coding native plugins directly in the software, of which ten will get made in the first few months by the stalwart geeks around here, then it will never get used (much like the current scripting), and another three years will have passed.

Yeah, could be. Maybe LuaJIT.

But I'd guess if there is such a thing as a NEW "non-geeky" feature they're testing (or will be testing later on) in the alphas, it's audio tracks and HDD audio streaming. Probably some more polishing on the instruments/sampler too, in both Renoise and Redux.

I wouldn't hold your breath for audio tracks; just go use a DAW.

I really hope the team is busy fixing the most archaic limitation that Renoise currently has, the one that has kept it behind other DAW software for years.

I mean I hope they have decided to fix the audio/channel routing interface, allowing for real audio-rate sidechaining, multichannel and parallel-processing doofers and send groups, better layering of instruments, and such stuff. And also smarter CPU usage spreading, control, and monitoring across cores as a side effect.

I think this would most probably mean really fucking around with Renoise's core engine, maybe even redesigning big parts of it. But it needs to be done!

Lightning-fast envelopes. No more f…n interpolation. Breakpoints and tension curves included.

Freely routable signal flow in the instrument section.
DSP-generated oscillators.
Filters and DSP-generated oscillators capable of audio-rate modulation… FM, cross-mod, etc.
Math/DSP function effects performing at sample rate… think r.s. func. shaper.
We've had the same distortion effects for ages.
All this and I am a happy camper… something tells me I will be disappointed… (again).

Lightning-fast envelopes. No more f…n interpolation. Breakpoints and tension curves included.

Freely routable signal flow in the instrument section.
DSP-generated oscillators.
Filters and DSP-generated oscillators capable of audio-rate modulation… FM, cross-mod, etc.
Math/DSP function effects performing at sample rate… think r.s. func. shaper.
We've had the same distortion effects for ages.
All this and I am a happy camper… something tells me I will be disappointed… (again).

Turn up the pressure.

I really hope the team is busy fixing the most archaic limitation that Renoise currently has, the one that has kept it behind other DAW software for years.

I mean I hope they have decided to fix the audio/channel routing interface, allowing for real audio-rate sidechaining, multichannel and parallel-processing doofers and send groups, better layering of instruments, and such stuff. And also smarter CPU usage spreading, control, and monitoring across cores as a side effect.

I think this would most probably mean really fucking around with Renoise's core engine, maybe even redesigning big parts of it. But it needs to be done!

+1

My personal high priority:

  • Sidechaining, yes, sample accurate (through a send device, targeting any right/bottom FX listening to input 3/4)
  • Parallel container device (to make dry/wet processing possible + real parallel processing!)
  • Dry/wet amount for any effect/doofer/container (in a hidden advanced dialog)
  • Highly improved doofer concept (multiband splitting possible, insert-send possible, making lots of sends obsolete)
  • Overhauled LFO (reset → mini slider, 2 points per column, last column = 1st column, real curve mode)
  • Overhauled automation recording (including high-precision recording, support for moving sliders in VSTs, real curve mode)
  • Editing beyond pattern boundaries (dragging, including advanced options)
  • A better solution for MIDI command recording (it sucks to have it recorded as a pattern command)
  • Placing sends anywhere (just as a visual alias, keeping the current routing rules)
  • Indication of on/off automation

Nice to have:

  • Note-offs as graphical blocks (draggable, resizable via mouse; turns Renoise into a cool piano roll without the headache and idiocy)
  • Slow, smooth scrolling in the track scopes (so they turn into a waveform display, which is much more useful; scopes only make sense in basic synthesis)
  • Horizontal aliases (instance doubling/multiplying, with overwritable data in the alias)
  • A Lua DSP FX device (which can receive multiple meta signals + sidechain too, so it fits nicely into a doofer)

For getting a piano roll, simply use Bitwig, as it has quite a similar dsp concept, but provides you with your beloved pianolol.

Ok, I can't make all my dreams come true, but I hope at least some of these points will… :w00t:

If you do the basic/important ones, I would write a tool for multi-device chaining (to make it possible to save/reuse cross-track constructions) to contribute some work.
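The "parallel container" wish above boils down to running an effect alongside the dry path and crossfading the two. A minimal sketch of that signal math, with a toy hard-clip standing in for a hypothetical effect:

```python
import numpy as np

def parallel_dry_wet(signal, effect, wet=0.5):
    """Run `effect` in parallel with the dry path and crossfade the two."""
    dry = signal
    processed = effect(signal)
    return (1.0 - wet) * dry + wet * processed

x = np.array([1.0, -1.0, 0.5, -0.5])
hard_clip = lambda s: np.clip(s, -0.25, 0.25)   # toy "distortion" effect
y = parallel_dry_wet(x, hard_clip, wet=0.5)     # 50/50 blend of dry and clipped
```

A real implementation would also have to delay the dry path to compensate for any latency in the wet chain, otherwise the two paths comb-filter against each other.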

For getting a piano roll, simply use Bitwig, as it has quite a similar dsp concept, but provides you with your beloved pianolol.

Just go out and spend three times as much as Renoise costs to get a piano roll.

(You do realise that the whole PianoLOL thing is very old now, very tired, and about as funny as finding a two-inch wart in your nether regions. Or maybe you are just trying to disrespect people who work in piano rolls and find them useful; not sure a bigot has any place on any kind of public forum. You are just asking for trouble.)

If anybody here starts using Bitwig, they won't be using Renoise ever again; it does pretty much everything better than Renoise, except tracking.

What’s next :

Okay, the dev team has already added a cool feature: the convolver.

The NEXT Renoise should feature an “HRTF Binaural 3D Simulator” that positions every mono input sound source in a virtual 3D space.

Let's call this release: Renoise 3.D

How?

It has to convolve a mono signal with a Head-Related Impulse Response (HRIR) for the left and right ear respectively.

See this article explaining how to code it.
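The binaural trick described above is just two convolutions per source: one with the left-ear HRIR and one with the right-ear HRIR. A minimal NumPy sketch of the idea (the Hanning-window "HRIRs" are placeholders, not real measured responses):

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Convolve one mono source with left/right HRIRs to get a binaural stereo pair."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy example: a single click through 8-tap placeholder "HRIRs".
mono = np.zeros(16)
mono[0] = 1.0
hrir_l = np.hanning(8) * 0.9   # louder left ear...
hrir_r = np.hanning(8) * 0.5   # ...quieter right ear: source sits to the listener's left
stereo = binauralize(mono, hrir_l, hrir_r)
```

Real HRIRs also encode the inter-aural time difference (a few hundred microseconds of extra delay in the far ear), which matters as much for localisation as the level difference shown here.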

Pianorofl.

What’s next :

Okay, the dev team has already added a cool feature: the convolver.

The NEXT Renoise should feature an **“HRTF Binaural 3D Simulator”** that positions every mono input sound source in a virtual 3D space.

Let's call this release: Renoise 3.D

How?

It has to convolve a mono signal with a Head-Related Impulse Response (HRIR) for the left and right ear respectively.

See this article explaining how to code it.

Sounds cool and not that difficult to program, seen from a non-coder perspective; way cooler than a piano roll. :slight_smile:

I only skimmed through the article and I certainly did not understand it fully. I did, however, download the recommended IR package, and it only contained 25 azimuth samples per channel; where are the elevation samples? Shouldn't there be 1250 samples in all per channel, and shouldn't those only cover one permanently set distance? If I understand correctly, we would need a vast number of samples to cover a greater distance. And how about sound reflection and such? The way I see it, if you were to take one conventional stereo IR, the way we normally use it, and turn it into an equivalent binaural HRIR bank, we would need 2500 versions of it just to cover this single IR. So wouldn't it be easier and less resource hungry to simulate the effect instead?

Have I misunderstood the way it works, perhaps?
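On the bank-size worry above: 25 azimuth steps times 50 elevation steps would indeed be 1250 HRIR positions per ear at a single distance. The usual workaround is to store a sparse grid and interpolate between the nearest measured responses. A rough sketch along the azimuth axis only (the grid layout and 15-degree spacing are assumptions for illustration):

```python
import numpy as np

def interpolated_hrir(hrir_grid, azimuth_deg, az_step=15.0):
    """Blend the two nearest measured HRIRs for an arbitrary azimuth.

    hrir_grid: shape (n_azimuths, taps), one HRIR per measured azimuth,
    assumed evenly spaced every `az_step` degrees around the full circle.
    """
    n = hrir_grid.shape[0]
    pos = (azimuth_deg % 360.0) / az_step
    lo = int(np.floor(pos)) % n          # nearest measured azimuth below
    hi = (lo + 1) % n                    # nearest measured azimuth above (wraps around)
    frac = pos - np.floor(pos)
    return (1.0 - frac) * hrir_grid[lo] + frac * hrir_grid[hi]

# 24 measured azimuths every 15 degrees, 64-tap HRIRs (random placeholders)
grid = np.random.default_rng(0).normal(size=(24, 64))
h = interpolated_hrir(grid, 22.5)        # halfway between the 15- and 30-degree measurements
```

Time-domain linear interpolation like this is crude (neighbouring HRIRs can partially phase-cancel); practical systems interpolate minimum-phase responses plus a separate delay, but the storage argument is the same.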

https://youtu.be/iC8UUNt0fiU

aaah, that’s where the sample was lifted from at the end of this track:

https://youtu.be/iC8UUNt0fiU

Original, never had that before.

Personally I would like to see:

Track presets/templates

Sub Projects

Both ideas from Reaper, but exceptionally useful.

Pianorofl.

Sounds cool and not that difficult to program, seen from a non-coder perspective; way cooler than a piano roll. :slight_smile:

I only skimmed through the article and I certainly did not understand it fully. I did, however, download the recommended IR package, and it only contained 25 azimuth samples per channel; where are the elevation samples? Shouldn't there be 1250 samples in all per channel, and shouldn't those only cover one permanently set distance? If I understand correctly, we would need a vast number of samples to cover a greater distance. And how about sound reflection and such? The way I see it, if you were to take one conventional stereo IR, the way we normally use it, and turn it into an equivalent binaural HRIR bank, we would need 2500 versions of it just to cover this single IR. So wouldn't it be easier and less resource hungry to simulate the effect instead?

Have I misunderstood the way it works, perhaps?

It's true, we need 3 params: azimuth, elevation, and distance.

The previous method would require lots of (too many) IR files and would dramatically increase the size of the Renoise package.

Also, binaural resynthesis of an acoustic environment in real time is a very complex and CPU-demanding thing.

However I keep on believing in that kind of feature http://www.pcjv.de/vst-plugins/stereo-plugins/

or

https://facebook360.fb.com/spatial-workstation/

(See the FB360 Spatializer; it is only available for OS X, but a Windows version is coming soon.)

… that would be included in renoise as a DSP, fully automatable

… that would be included in renoise as a DSP, fully automatable

Sounds good. :slight_smile:

halamus Today, 12:55

Personally I would like to see:

Track presets/templates

Sub Projects

Both ideas from Reaper, but exceptionally useful.

Hi, do you know that you can open two Renoise instances simultaneously?

I like this more than sub-projects, since:

  • you can play both Renoise instances at the same time, with no loading involved while switching
  • copy/paste works flawlessly
  • you can place each Renoise instance on a different desktop

My personal high priority :

Sidechaining, yes, sample accurate (through a send device, targeting any right/bottom FX listening to input 3/4), parallel container device (to make dry/wet processing possible + real parallel processing!)…

Absolutely 100%!

Also the “Layers / Instrument-Container” would be really cool and handy.

For parallel processing you just need Redux with some VST wrapper:

(screenshot: renoise_www.kepfeltoltes.hu_.png)

…and its Line-In devices. BC PatchWork can display (make accessible to the host DAW) the macros of Redux with two clicks; IMO it is the most comfortable option (and with BC PatchWork you can also add VSTs to the effect chain(s)). Drag and drop works between them (on the device level too).


Probably dead.