I use a laptop. But even if it had a switch, I doubt I’d use it. I think something like this would need to be automatic.
But I genuinely believe this is the main reason we’re not seeing electronic music evolve - in fact, it’s barely able to keep up with its own heritage - and why you’re not seeing a new generation of Mozarts, Debussys, Fripps, etc. Nowadays your so-called cutting-edge IDM or dubstep album is really put to shame by the likes of Girls Aloud in terms of taking risks musically and trying out new things - even in terms of writing a single memorable musical line… Which is pathetic.
I spent a long time obsessing over sound quality; then a long time re-learning everything I thought I knew about composition… and it’s all been hugely beneficial and opened a lot of other doors for me… But I now realise the single biggest problem with electronic music today is the way the modern studio/computer environment is so completely at odds with being in any sort of creative/musical state…
But… at the same time it does open up boundless creative potential… and I think that’s what we all find interesting and why we all stick with it, but I don’t think any technology or knowledge is going to allow us to go any further until the ergonomics and interaction issues are really dealt with.
-1 stoner rambling. Close your eyes; conditioning your own mind is an act of willpower, not a feature request. The fact that you can’t see this is a recursive metaphor.
maybe renoise needs a built-in artificially intelligent listener with a cynical sense of humour that doesn’t hold back from sharing its critique relentlessly, no-nonsense, no-nuance, notorious, northpole, nothing.
imagine a voice coming from somewhere halfway through your track playback:
“WHAT THE HELL DO YOU THINK YOU’RE DOING?!”
Yeah I understand what you’re saying, music does sound different when I’m listening to it elsewhere completely detached from the sequencer. But it’s pretty obvious that you are going to have some sort of visual connection when you’re making music…
The difference between written scores and modern sequencing environments, is that scores are visually more asynchronous to time. There is no quick way of telling how a score of music flows by looking at it from a distance, you have to read the notes in succession.
I don’t think that this visual map of music is made by watching your music zip by though; it’s something that’s being built while you’re working on it, so your solution of blanking the screen would not have the effect you want. What you could do to counter this problem you’re having is make the screen area smaller by setting your resolution to 1024x768, and double your BPM (or make your patterns shorter).
On more or less the same subject, something I would be keen on trying out is a tracker that doesn’t use empty space for silence but one that has a separate symbol for it, so you have to think more about it. It would be a more additive way of making music, instead of subtracting the music out of slabs of silence.
Oh yeah, the whole cubase syndrome thing is even worse… When you can see your notes or blocks on an arranger plotted against the time axis… It massively affects how you perceive music, but I think just knowing it’s there and making the mental connection does the damage… and this is the reason so many people are still paying through the nose for very old h/w sequencers… I think the tracker perspective is infinitely superior to that - score notation is coded and symbolic for that reason, and it forces you to think musically where a piano roll makes it almost impossible to think musically…
Blanking the screen whenever anything’s playing would be a crude solution, but in the same way, have you ever tried doing an IQ test while listening to 80s music? Something with lyrics, structure, lots of musicality - it’s impossible to concentrate, it’s as if just having it on in the background knocks 20 IQ points off because of the amount of attention (which is how much resource your brain’s pumping into that area at the time) used up interpreting it… I think that’s the effect you want to minimise… and you can find hundreds of examples where producers who’ve switched from a h/w sequencer to a computer have lost something significant: Liam Howlett, Carl Craig, Ken Ishii, DJ Shadow, etc.
I think the way to make a feature like this sellable would be to have a light synthesizer or visualizer which switches in whenever you press play (optional of course)… Something which is easy to ignore… Or something which makes a more direct visual connection with what you’re hearing.
Don’t get me wrong… I love the flexibility of software… Cutting up drums in my Ensoniq used to take days on end sometimes.
And it’s a revolution having unlimited compressors and enhancers on hand - But the “consumerism” trap is SUCH a nightmare…
Even if you’re not actually paying for the stuff, there’s this idea magazines are selling nowadays that every kid with a PC, and (the right) plug-ins, is a potential Quincy Jones…
It’s become the new “boy racer” sport… Upgrading your computer, soundcard, plug-ins, Cubase version, etc… And to what end?
Well, much the same as the boy racer with his souped up Nova… You can spend all your life doing up your car, and never actually get any good at driving it… Let alone entering the world of racing and getting sponsored.
It can all too easily turn into one big distraction…
The reality is, many of the finest sounding dance records were made on little more than a 2 meg sampler and a desk - The idea that to produce dance music you need 1gb of RAM and hundreds of plug-ins and softsynths is ridiculous…
To actually get good at making music does involve endless hours experimenting and producing shit - Fiddling with a compressor on a bass drum for weeks on end… I remember when I got my first Behringer composer and Ultrafex I spent months just experimenting… I fear many of the new generation of Computer Music producers bite off FAR more than they can chew from the start… Then blame their lack of progression on a lack of “technology”…
Plus, it’s MUCH less intuitive trying to work out a plug-in than a piece of hardware… The effects, particularly of compression, can be a lot more subtle (in some ways) too…
There’s also nothing new about using computers to process audio… I started off using Stereo Master 2 and Quartet on the ST… You could sample, sequence, manipulate audio… I think that got up to 22khz, 16-bit… There was software to time stretch, pitch shift, compress, EQ, add delays, etc…
In the last 15 odd years, what, the sample rate’s got higher, a few more frills here and there, but the only revolution is a marketing revolution… Why not market a product with next to no overheads instead of building gear…?
I was speaking to someone from the Synthvox forum the other day, who thought sampling breaks was lazy, because you could just record your mates’ drumkit in a barn and go to work with the plug-ins… It just makes you think… You can easily get SO caught up in the technology, and the marketing hype, that your perception and common sense goes out the window.
You know what helps too? Not giving a f****. Intelligence doesn’t just dissipate when you listen to bad music or watch patterns on a screen, you have to have some sort of intent to use it, that’s all that matters.
there’s a study being done at the moment into Human-Computer Interaction in music
i’m low on details, but i’m probably going to write it up at some point… anyway, technology is undeniably having a profound effect on the way we interact with music, perceive music, get feedback from music, make decisions, etc.
you can’t get away from it - one major part of most music technology is the nature of a visual interface, and visually representing music… we start hearing with our eyes as soon as the connection’s been made once - so we start structuralizing
is it any wonder 4/4 and 32-bar sections and things are de rigueur nowadays? there’s nothing less audibly natural about 3/4 or 5/4 - it’s just less visually structured… a lot of musicians nowadays can’t even think out of 4/4
when you’ve got a piano or guitar in front of you, that interface disappears with experience and you get a much more direct aural interaction with your music
of course each of them has its own physical interface - you notice how piano music is often very regular, 16ths or 8ths - whereas guitar music is all over the place rhythmically
I think that part of the struggle in overcoming problems you face with your tools is what validates your music; it is part of the effort you make to create something. And all the problems you identify are things you can overcome exactly because you have identified them as being a problem. Something I learned by posting insane feature requests.
Yeah but not giving a f**** seems to be at the heart of this… Indie music is wiping electronica off the map at the moment, and you don’t find many decent guitarists who’re willing to settle for cheap knock-off guitars and line 6 pods… Musicians are meant to be anal, over-reaching, ambitious and perfectionistic, not just convenience and price conscious.
Try writing a masters thesis while listening to Thriller at full volume… It’ll just take a lot more effort writing something which will probably end up sounding like it was written by a 15 year old…
i think producers/musicians need to feel inspired to produce their best work
jungle was groundbreaking when it first surfaced - there was a buzz, a whole world of new beats, samples and ideas to explore
when i make dnb now i can’t help but feel i’m just recycling old ideas, and that doesn’t inspire me to make music, that inspires me to tweak a snare drum for 5 days
re: production values
i know it’s not so important, but just listen to things like Earth Volume One, anything by Source Direct, Timeless, etc. that is what great production’s all about
it’s immensely listenable - there’s texture and soul in the sounds
what some people call good production now, which is the sound of tonnes of multiband compression and digital limiting making everything sound flat and devoid of dynamics, is not good production, for so many reasons
Optionally using the editstep interval as the realtime recording quantize setting. So if it’s set to 4, hitting a key close to line 4 will place the note at line 4.
Per-instrument default velocity value
User definable sample offset “bookmarks”
Slider device mappable to any parameter on the chain, so you can collect all the parameters you want to automate in your chain under one device
One-shot LFO device option (so it oscillates once and then stops when the reset command is called)
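The edit-step quantize idea above (snapping a live-recorded note to the nearest multiple of the edit step) is easy to pin down. A minimal sketch in Python, with hypothetical names - nothing here is Renoise’s actual API:

```python
def quantize_line(hit_line: float, edit_step: int) -> int:
    """Snap a live-recorded note to the nearest multiple of the edit step.

    hit_line: the (fractional) pattern line where the key was pressed.
    edit_step: the editstep interval used as the quantize grid.
    """
    if edit_step <= 1:
        # No grid to snap to: just land on the nearest line.
        return round(hit_line)
    return round(hit_line / edit_step) * edit_step

# With editstep 4, a key pressed near line 4 lands exactly on line 4,
# and one pressed closer to line 8 snaps up to line 8:
print(quantize_line(3.6, 4))  # 4
print(quantize_line(6.1, 4))  # 8
```

A real implementation would also have to decide whether notes ahead of the grid point get delayed or triggered immediately and merely written to the snapped line; the sketch only covers the write position.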
ReWire slave and master support. Personally I’d love to incorporate Renoise in Ableton (and vice versa) for a live PA setup.
support for multiple VST instrument outputs and routing, like for Kontakt
those are only 2 off the top of my head that i have yet to discover/see in renoise
But then you could just as well make a beatslice function since you already have the slice points, and then use the sample offset we have today on the individual samples.
I never said I wouldn’t want user defined offsets to be mappable to notes in an instrument … in fact, I’d like a shared sample pool too, so one sample could be sliced/mapped/reused across multiple instruments
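A shared sample pool with note-mapped slices could be sketched as a data structure like the one below. This is purely illustrative - the class and field names (`SamplePool`, `slice_points`, `note_map`) are my invention, not Renoise internals:

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    name: str
    frames: int  # total length in sample frames
    slice_points: list[int] = field(default_factory=list)  # user-defined offsets

@dataclass
class Instrument:
    name: str
    # note number -> (sample index in the shared pool, slice index)
    note_map: dict[int, tuple[int, int]] = field(default_factory=dict)

class SamplePool:
    """One pool of samples; many instruments can map slices from it."""
    def __init__(self) -> None:
        self.samples: list[Sample] = []

    def add(self, sample: Sample) -> int:
        self.samples.append(sample)
        return len(self.samples) - 1

    def slice_offset(self, sample_idx: int, slice_idx: int) -> int:
        return self.samples[sample_idx].slice_points[slice_idx]

# One break, sliced once, reused by two different instruments:
pool = SamplePool()
brk = pool.add(Sample("break", frames=200_000,
                      slice_points=[0, 50_000, 120_000]))
drums = Instrument("drums", note_map={48: (brk, 0), 49: (brk, 1)})
fx = Instrument("fx", note_map={60: (brk, 2)})
print(pool.slice_offset(*drums.note_map[49]))  # 50000
```

The point of the indirection is that slicing happens once, in the pool, and every instrument just references (sample, slice) pairs - so re-slicing the break updates every instrument that uses it.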