It sounds quite nice. The multitap delay bandpass filters seem to be able to do stuff the normal filter can't, especially because they have drive. But is this just an adaptation of the device-chain technique and the "formant synth" technique mentioned above?
To make the thing manual, you can easily bind the f_dispatch slider (or was it?) to a macro (haha, modwheel ftw) and disable the random LFO at the beginning of the chain. Then maybe set the f1 and f2 LFO tables to linear instead of points, so they interpolate. You could also use one inertia device before the dispatcher and scrub the other inertia devices. But this will make the vowel always sweep through the range instead of jumping straight from one position to the (possibly distant) next one. It would save one formula device, though.
The vowel tables seem a bit odd to me, at least in the last range around o and u. There's big jumpiness there, so sliding through the range won't sound like a clean sweep but rather like some psytrancy wobble, because the formants jump around in frequency a lot in that range.
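For what it's worth, that jumpiness can be smoothed by interpolating linearly between neighbouring vowel entries instead of stepping. A minimal sketch of the idea (the F1/F2 values below are rough textbook-style placeholder numbers, not the actual tables from this chain):

```python
# Hypothetical F1/F2 table (Hz); a real table for o/u jumps around far more.
VOWELS = {"a": (700, 1220), "e": (530, 1840), "i": (300, 2300),
          "o": (500, 900), "u": (320, 800)}
ORDER = ["a", "e", "i", "o", "u"]

def formants_at(pos):
    """pos in [0, 1] sweeps the vowel range; linear interpolation
    between neighbouring table entries avoids stepwise jumps."""
    pos = min(max(pos, 0.0), 1.0)
    x = pos * (len(ORDER) - 1)          # position in table units
    i = min(int(x), len(ORDER) - 2)     # index of the left neighbour
    frac = x - i                        # blend factor toward the right one
    (f1a, f2a), (f1b, f2b) = VOWELS[ORDER[i]], VOWELS[ORDER[i + 1]]
    return (f1a + frac * (f1b - f1a), f2a + frac * (f2b - f2a))
```

Scrubbing a single 0..1 slider through `formants_at` then gives a continuous glide; halfway between o and u, for example, you land on averaged formants instead of a hard switch.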
It proves that two varying formants plus two high static ones are enough to make vowels really clean and recognisable, and that the two static ones enhance the vowel impression a lot. I wouldn't have believed that. I'm still puzzling over how it gets that cool robot-like sound, because:
I'm working on a vowel synth myself from time to time. It's a bit more complex: it interpolates all five formants via parallel bandpass filters, and also the amplitudes, which get normalised per formant via pre-amplification and compression. That amplitude handling has the advantage that it doesn't matter much what you feed in spectrally to get good results. It gets me very "natural" sounding vowels, but I'd really like to be able to "robotize" it on demand too; I just can't get it to sub-ring the right way. The lofi effect in this chain doesn't seem to be what completes the robotness, so there must be other factors earlier in the chain making that synthetic sound. I don't know what I haven't already tried. It always sounds cheesy, partly because the formant table I got hold of is apparently for some classical basso singing voice, so at the moment it sounds really off. I'm thinking about finding better data tables or analysing my own speech, or whatever.
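To make the parallel-bandpass idea concrete, here's a minimal sketch of that kind of formant filter bank (RBJ-cookbook biquads, pure Python). The formant frequencies and amplitudes are made-up "ah"-ish placeholder values, not my actual table, and the per-formant normalisation/compression stage is left out:

```python
import math

def biquad_bandpass(freq, q, sr):
    # RBJ cookbook bandpass, constant 0 dB peak gain
    w0 = 2 * math.pi * freq / sr
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    # return normalised (b0, b1, b2, a1, a2)
    return (alpha / a0, 0.0, -alpha / a0,
            -2 * math.cos(w0) / a0, (1 - alpha) / a0)

def filter_signal(x, coeffs):
    # direct form I biquad over a list of samples
    b0, b1, b2, a1, a2 = coeffs
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y

# Hypothetical (frequency Hz, amplitude) pairs for an "ah"-like vowel
FORMANTS_AH = [(700, 1.0), (1220, 0.5), (2600, 0.25),
               (3300, 0.1), (4000, 0.05)]

def vowel_filter(x, formants, sr=44100, q=8.0):
    # run the source through all bandpasses in parallel, sum weighted outputs
    out = [0.0] * len(x)
    for freq, amp in formants:
        coeffs = biquad_bandpass(freq, q, sr)
        for i, s in enumerate(filter_signal(x, coeffs)):
            out[i] += amp * s
    return out

# a raw sawtooth as the spectrally rich source signal
sr = 44100
saw = [2.0 * ((i * 110.0 / sr) % 1.0) - 1.0 for i in range(2048)]
voiced = vowel_filter(saw, FORMANTS_AH, sr)
```

Interpolating the five (freq, amp) pairs per sample block instead of keeping them fixed is then what gives the vowel morphing.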
What's the secret to making it sound like a robot screaming?
As for the macros, you only get 8, so I'd suggest putting the meta devices at the very beginning of the chain, all next to each other, or, better, into a "dummy controllers" dead extra FX chain where only the relevant meta devices sit next to each other, and letting users map their macros themselves. In my thing I'm already using all 8 macros, but that's partly because I use two of them for pitch bend. Still, it's fun to map them all to sliders/knobs to dial in the perfect sound. We need more macros.