Google today shared this video demo of NSynth Super, an open hardware controller for its NSynth Neural Synthesizer.
NSynth Super comes out of Magenta, a research project within Google that explores how machine learning tools can help artists create art and music in new ways. Magenta created NSynth, a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds and then generate new sounds based on those learned characteristics.
NSynth Super, made in collaboration with Google Creative Lab, lets you make music using new sounds generated by the NSynth algorithm from 4 different source sounds.
In the video demo, an NSynth Super prototype is played by Hector Plimmer. 16 original source sounds, across a range of 15 pitches, were recorded in a studio and then fed into the NSynth algorithm to precompute the new sounds.
The outputs, over 100,000 new sounds, were then loaded into the experience prototype. Each dial was assigned 4 of the source sounds, and players use the dials to select which source sounds to explore between.
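The article doesn't detail how a position between four selected source sounds maps onto a mix, but a natural way to picture it is bilinear interpolation between four corners. Here's a minimal illustrative sketch in Python; the function, corner names, and values are hypothetical, not from the NSynth Super codebase (which precomputes its interpolated sounds with the neural network rather than mixing audio directly):

```python
def blend_corners(nw, ne, sw, se, x, y):
    """Bilinearly weight four corner values by a 2-D position.

    nw, ne, sw, se: values at the four corners (e.g. per-sample amplitudes)
    x, y: position in [0, 1]; (0, 0) selects nw, (1, 1) selects se.
    """
    top = (1 - x) * nw + x * ne          # mix along the top edge
    bottom = (1 - x) * sw + x * se       # mix along the bottom edge
    return (1 - y) * top + y * bottom    # mix the two edge results

# The center position (0.5, 0.5) is an even mix of all four corners.
print(blend_corners(1.0, 2.0, 3.0, 4.0, 0.5, 0.5))  # → 2.5
```

In the real instrument the blending happens in the network's learned sound space before playback, which is why the 100,000+ results had to be precomputed rather than mixed live.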
NSynth Super can be played via any MIDI source, like a DAW, sequencer or keyboard.
Here’s a look at the creation of NSynth Super:
NSynth Super is available as an open hardware DIY project via GitHub. No kits or pre-built options are currently available, so it's a relatively hardcore DIY project at this point.