fragment
Joined: Apr 09, 2017 Posts: 1 Location: France
Posted: Sun Apr 09, 2017 9:20 am Post subject:
Fragment Subject description: Collaborative spectral/additive audio/visuals live-coding synthesizer
Hello everyone,
I have been building my own spectral softsynth for a while; it is now a collaborative, free spectral musical instrument available on the web.
Here are some demonstration videos:
https://www.youtube.com/playlist?list=PLYhyS2OKJmqe_PEimydWZN1KbvCzkjgeI
You can try it at: https://www.fsynth.com
Documentation : https://www.fsynth.com/documentation.html
The best way I could describe it is: a collaborative live-coding additive audio/visual synthesizer.
Fragment associates visuals and audio through direct manipulation of the spectrum. The visuals are generated by a GLSL script (the GPU produces the visuals), which is shared between users on a per-session basis. The visuals represent a kind of possibility space from which you choose what to hear by "slicing" it: spectrum slices are fed to a pure additive synthesis engine in real time. Some weird and not-so-weird sounds can be made with this synthesizer.
Visuals and sounds can be manipulated by code or through MIDI and MIDI-enabled control widgets (a Chrome- and Opera-only feature, because Firefox does not implement Web MIDI right now). It is also possible to manipulate the spectrum with a camera or with images.
In the near future I hope to integrate audio inputs as well, so that samples can be imported and played back; this feature is almost ready but requires more development for good conversion of audio data to graphical data.
Another useful feature, recently added, is that you can use the previous spectrum frame in the current frame. This is useful for feedback effects such as multitap delay, reverb, or just pure weirdness.
The instrument is mostly web-based, but there is also an app written in C which provides a faster additive synthesis engine and can output any number of slices to different audio channels; this is especially useful for making it work as a "regular" virtual instrument in any audio sequencer. The program works as a server and all of its data comes from the network, meaning it can be put on any independent hardware. This was successfully used with a Raspberry Pi and HiFiBerry DAC+ (~700 oscillators).
Using the GPU allows visuals to be produced at the same time: in monophonic mode, the RGB channels are available for any kind of visuals to be synthesized while the alpha channel is sent to the additive synthesis engine.
This synth is DIY: almost everything is built from scratch, the sources are available on GitHub, and it requires some knowledge of GLSL in order to use it.
The main inspirations were the Russian ANS synthesizer and the Virtual ANS software; many ideas also come from Sound in Z by Andrey Smirnov and from Shadertoy.
If you have any questions, I will be glad to answer them here.