News on Somax - Mikhail Malt, Marco Fiorini
Posted by: Mikhail Malt, 1 year, 8 months ago
It is based on machine listening, a reactive engine, and a generative model that together provide stylistically coherent improvisation while continuously adapting to the external audio or MIDI musical context. It uses a cognitive memory model built from the music corpora it analyzes and learns as stylistic bases, rendering its output through a process similar to concatenative synthesis, and it relies on a globally learned harmonic and textural knowledge representation space built with machine learning techniques.
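To make this corpus-as-memory idea concrete, here is a deliberately simplified Python sketch (Python being one of Somax2's implementation languages). It is not Somax2 code: the analysis is reduced to a single toy pitch-class label, where the real system learns harmonic and textural representations. It only illustrates how a corpus can be indexed into a navigable stylistic memory whose output recombines original slices, in the spirit of concatenative synthesis.

```python
# Illustrative sketch only -- not the Somax2 codebase. The "analysis" is a
# toy pitch-class label; Somax2 learns far richer representations.
from collections import defaultdict
import random

class ToyCorpusMemory:
    def __init__(self):
        self.slices = []                # original corpus events, kept for rendering
        self.index = defaultdict(list)  # label -> indices of slices with that label

    def learn(self, events):
        """Analyze a sequence of (pitch, duration) events into the memory."""
        for i, (pitch, dur) in enumerate(events):
            self.slices.append((pitch, dur))
            self.index[pitch % 12].append(i)  # toy harmonic label

    def generate(self, n, start=0):
        """Navigate the memory: either continue linearly or jump to another
        slice sharing the current label, recombining the corpus while staying
        stylistically close to it (concatenative-synthesis-like rendering)."""
        pos, out = start, []
        for _ in range(n):
            out.append(self.slices[pos])
            candidates = self.index[self.slices[pos][0] % 12]
            pos = (random.choice(candidates) + 1) if len(candidates) > 1 else pos + 1
            pos %= len(self.slices)
        return out
```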
Somax2 is a complete rewrite of Somax, one of the many descendants of the well-known Omax developed in the Music Representation team over the years, and now offers a powerful and reliable environment for co-improvisation, composition, installations, etc. Written in Max and Python, it features a modular multithreaded implementation, multiple wireless interacting players (AI agents), a new UI design with tutorials and documentation, as well as a number of new interaction flavors and parameters.
Benny Sluchin and Mikhail Malt - Somax 2, a reactive multi-agent environment for co-improvisation
Posted by: Mikhail Malt, 2 years, 9 months ago
Abstract:
Somax 2 is a multi-agent interactive system performing live machine co-improvisation with musicians, based on machine listening, machine learning, and generative units. Agents provide stylistically coherent improvisations based on learned musical knowledge while continuously listening and adapting to input from musicians or other agents in real time. The system is trained on any musical material chosen by the user, effectively constructing a generative model (called a corpus) from which it draws its musical knowledge and improvisation skills. Corpora, inputs, and outputs can be MIDI as well as audio, and inputs can be live or streamed from MIDI or audio files. Somax 2 is one of the improvisation systems descending from the well-known Omax software, presented here in a totally new implementation. As such, it shares with its siblings the general loop [listen / learn / model / generate], using a form of statistical modeling that builds a highly organized memory structure through which it can navigate to create new musical organizations while preserving stylistic coherence, rather than generating unheard sounds as other ML systems do. However, Somax 2 adds a totally new versatility: it is highly reactive to the musician's decisions, and its creative agents can communicate and work together in the same way, thanks to cognitively inspired interaction strategies and a finely optimized concurrent architecture that makes all its units cooperate smoothly.
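The [listen / learn / model / generate] loop can be caricatured in a few lines, reusing the ToyCorpusMemory sketch above. This is again an assumption-laden illustration rather than Somax2's engine (which layers several listening viewpoints and runs concurrently); it only shows the key idea that each incoming live event re-weights navigation through the learned memory, so the output remains corpus-coherent while reacting to the musician.

```python
# One iteration of a toy [listen / learn / model / generate] loop, reusing
# ToyCorpusMemory from the sketch above; not Somax2's actual engine.
import random

def react_and_generate(memory, live_pitch, pos):
    """Listen to one live event, then pick the next corpus slice,
    favoring slices whose label matches the live input (the 'influence')."""
    candidates = memory.index.get(live_pitch % 12)
    if candidates:                            # influence matched: jump there
        pos = random.choice(candidates)
    else:                                     # no match: keep navigating linearly
        pos = (pos + 1) % len(memory.slices)
    return memory.slices[pos], pos
```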
Somax 2 allows detailed parametric control of its players and can even be played alone as an instrument in its own right, or used in a composition workflow. It can listen to multiple sources, making it possible to create entire ensembles of agents in which the user controls in detail how these agents interconnect and “influence” each other.
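This ensemble behavior can likewise be sketched. The Agent class and listen_to connection below are invented for illustration and are not part of the Somax2 interface; they only show how agents can be wired so that each one's output becomes another's influence.

```python
# Hypothetical two-agent ensemble wiring; Agent and listen_to are invented
# names for this sketch, not the Somax2 API. Reuses ToyCorpusMemory and
# react_and_generate from the sketches above.
class Agent:
    def __init__(self, name, memory):
        self.name, self.memory, self.pos = name, memory, 0
        self.sources = []            # agents whose output influences this one
        self.last_event = (60, 1.0)  # (pitch, duration) placeholder

    def listen_to(self, other):
        self.sources.append(other)   # configurable influence connection

    def step(self):
        # Naive influence merge: follow the most recently wired source.
        pitch = self.sources[-1].last_event[0] if self.sources else self.last_event[0]
        self.last_event, self.pos = react_and_generate(self.memory, pitch, self.pos)
        return self.last_event

mem = ToyCorpusMemory()
mem.learn([(60, 0.5), (62, 0.5), (64, 1.0), (67, 0.5), (65, 0.5), (64, 1.0)])
a, b = Agent("A", mem), Agent("B", mem)
a.listen_to(b)                       # A listens to B ...
b.listen_to(a)                       # ... and B listens to A
for _ in range(4):
    print(a.step(), b.step())        # mutually influenced outputs
```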
Somax 2 is conceived as a co-creative partner in the improvisational process: after some minimal tuning, the system is able to behave in a self-sufficient manner and to take part in a diversity of improvisation setups and even installations.
This presentation will introduce the software environment, demonstrate its learning and interaction modes, explain the basic and advanced controls in the user interface, and present a real musical situation with famous trombonist Benny Sluchin improvising along with Somax 2.