Posted by: Gerard Assayag, 2 years, 9 months ago
Somax 2 is a multi-agent interactive system performing machine co-improvisation with live musicians, based on machine listening, machine learning, and generative processes.
Agents provide stylistically coherent improvisations based on learned musical knowledge while continuously listening and adapting to input from musicians or other agents in real time. The system is trained on any musical material chosen by the user, effectively constructing a generative model (called a corpus) from which it draws its musical knowledge and improvisation skills. Corpora, inputs, and outputs can be either MIDI or audio, and inputs can be live or streamed from MIDI or audio files.
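To make the corpus idea concrete, here is a minimal sketch of what segmenting source material into feature-labelled slices could look like. The `Slice` fields and the pitch-class labelling are illustrative assumptions, not Somax 2's actual corpus format.

```python
from dataclasses import dataclass

@dataclass
class Slice:
    pitch: int       # MIDI pitch of the event (hypothetical slice format)
    onset: float     # onset time in seconds
    duration: float  # duration in seconds

def build_corpus(notes):
    """Index (pitch, onset, duration) events by a feature, here pitch class."""
    corpus = {}
    for pitch, onset, duration in notes:
        corpus.setdefault(pitch % 12, []).append(Slice(pitch, onset, duration))
    return corpus

# Toy material; real events would be parsed from a MIDI or audio file.
corpus = build_corpus([(60, 0.0, 0.5), (64, 0.5, 0.5), (67, 1.0, 1.0)])
print(corpus[4])  # every learned slice whose pitch class is E
```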
Somax 2 is one of the improvisation systems descended from the well-known OMax software, presented here in a totally new implementation. As such, it shares with its siblings the general [listen/learn/model/generate] loop: some form of statistical modeling builds a highly organized memory structure through which the system navigates to create new musical organizations while preserving stylistic coherence, rather than generating unheard sounds as other machine-learning systems do.
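As a rough illustration of that loop, the following sketch (all names hypothetical, and a drastic simplification of Somax 2's actual model) learns a short sequence, indexes each position by a feature, and generates by jumping to memory states matching what it "hears", falling back to linear continuation otherwise.

```python
import random

class SequenceMemory:
    """A learned sequence plus an index from feature (pitch class) to positions."""
    def __init__(self, events):
        self.events = events
        self.index = {}
        for pos, pitch in enumerate(events):
            self.index.setdefault(pitch % 12, []).append(pos)
        self.cursor = 0  # current position in the memory

    def generate(self, heard=None):
        """Emit the next event, jumping to a matching region when influenced."""
        if heard is not None:
            matches = self.index.get(heard % 12)
            if matches:
                # Jump to a state sharing the heard feature, then continue
                # linearly from there to keep local stylistic coherence.
                self.cursor = random.choice(matches)
        out = self.events[self.cursor]
        self.cursor = (self.cursor + 1) % len(self.events)
        return out

memory = SequenceMemory([60, 62, 64, 65, 67, 65, 64, 62])  # learned material
for heard in [None, 64, None, 60, None]:  # None = no influence this step
    print(memory.generate(heard))
```

With no influence the memory simply replays its learned continuations; each incoming event pulls the navigation toward regions of the memory that resemble it, which is the intuition behind the reactivity described next.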
However, Somax 2 adds a whole new versatility: it is highly reactive to the musician's decisions, and it lets its creative agents communicate and work together in the same way, thanks to cognitively inspired interaction strategies and a finely optimized concurrent architecture that make all its units cooperate smoothly.
Somax 2 allows detailed parametric control of its players and can even be played alone as an instrument in its own right, or used in a composition workflow. It is possible to listen to multiple sources and to create entire ensembles of agents, where the user can control in detail how the agents listen and respond to one another (a simplified sketch of such routing follows below).
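A sketch of such routing under the same simplified agent model as above: agent B takes agent A's output as its influence, exactly as it would a live musician's input. All names are hypothetical.

```python
import random

class Agent:
    """Same simplified memory/navigation model as the previous sketch."""
    def __init__(self, events):
        self.events, self.cursor, self.index = events, 0, {}
        for pos, pitch in enumerate(events):
            self.index.setdefault(pitch % 12, []).append(pos)

    def generate(self, heard=None):
        if heard is not None and self.index.get(heard % 12):
            self.cursor = random.choice(self.index[heard % 12])
        out = self.events[self.cursor]
        self.cursor = (self.cursor + 1) % len(self.events)
        return out

agent_a = Agent([60, 62, 64, 65, 67])  # trained on one corpus
agent_b = Agent([48, 52, 55, 59, 60])  # trained on another
for heard in [60, 64, 67]:             # e.g. pitches played by a live musician
    a_out = agent_a.generate(heard)    # agent A reacts to the musician
    b_out = agent_b.generate(a_out)    # agent B listens to agent A instead
    print(a_out, b_out)
```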
https://www.stms-lab.fr/projects/pages/somax2
https://forum.ircam.fr/projects/detail/somax-2/