When you launch, are you looking at the file browser (explore tab) or libraries?
For the file browser, it needs to index files. Libraries just need to initialize their databases…
No, I mean I have about 100k samples, and when I start Sononym it takes around 50 seconds until I can finally scroll the files in the library tab, on a pretty slow HDD, with 6 libraries.
Then I moved all my sample dirs to a brand-new 860 EVO SSD into one single library (90k samples now) and the startup time is down to 2 s… Weird. Maybe it gets slow as soon as there are multiple libraries?
I wonder if you could add some caching mechanism to reduce startup time, e.g. re-scanning just with a directory listing (dir/“ls”) instead of reading whole files. Anyway, once it has started, it is a bliss to use.
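To show what I mean, here is a rough sketch of mtime-based change detection (all the names are made up, I have no idea how the crawler actually stores its data):

import { readdirSync, statSync } from "fs";
import { join } from "path";

type MtimeCache = Map<string, number>; // path -> last seen mtime in ms

// Return only the files whose mtime changed since the last scan, so unchanged
// samples can be skipped without reading their audio content.
function changedFiles(dir: string, cache: MtimeCache): string[] {
  const changed: string[] = [];
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) {
      changed.push(...changedFiles(path, cache));
    } else if (entry.isFile()) {
      const mtime = statSync(path).mtimeMs;
      if (cache.get(path) !== mtime) {
        cache.set(path, mtime);
        changed.push(path); // new or modified -> needs a full re-crawl
      }
    }
  }
  return changed;
}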
Well, I guess you have had all these ideas already. One more thing: you could run one crawler per CPU core, since the crawler seems to access the drive with lots of pauses, and have the crawlers coordinate so that only one accesses the drive at a time (while one processes data, another reads from the drive).
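Just to illustrate the overlap I mean, here is a minimal single-process sketch (analyzeSample is a made-up placeholder; real per-core crawlers would be separate processes coordinating via a lock):

import { readFile } from "fs/promises";

// Placeholder for whatever analysis the crawler does per sample (made up).
async function analyzeSample(path: string, data: Buffer): Promise<void> {
  // ... feature extraction, database insert, etc.
}

// Start reading the next file from disk while the previous one is analyzed,
// so the drive and the CPU are busy at the same time instead of alternating.
async function crawl(files: string[]): Promise<void> {
  let pendingPath = "";
  let pendingRead: Promise<Buffer> | null = null;
  for (const path of files) {
    const nextRead = readFile(path);                       // kick off the next disk read
    if (pendingRead) {
      await analyzeSample(pendingPath, await pendingRead); // analyze while it loads
    }
    pendingPath = path;
    pendingRead = nextRead;
  }
  if (pendingRead) {
    await analyzeSample(pendingPath, await pendingRead);
  }
}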
Btw, is it based on Chromium or WebKit? I tried some “GPU-enabling WebKit CSS tricks” in Sononym’s index.css, but did not notice any difference in GUI speed, like:
-webkit-transform: translateZ(0px);
-moz-transform: translateZ(0.00001px);
If you want to speed up your CSS development, maybe have a look at Less. It really makes CSS work a bliss, simplifies the structure a lot, and only requires a JavaScript pre-compiler.
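For example (selector names made up, just to show the nesting and variables):

@accent: #4da6ff;

.browser {
  .row {
    padding: 2px 6px;
    &:hover { background: fade(@accent, 15%); }
    &.selected { background: @accent; color: #fff; }
  }
}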
EDIT: Impulse responses are not categorized so nicely. Maybe you could train it to detect impulse responses, too?
EDIT2: If you reindex and then quit Sononym while indexing is still running, you will get an error after the next start of Sononym if you start indexing again, since the crawler process won’t have exited. Maybe that is intended, but it feels weird to me.