Posted by: Mikhail Malt, 2 years, 9 months ago
Abstract:
Somax 2 is a multi-agent interactive system performing live machine co-improvisation with musicians, based on machine listening, machine learning, and generative units. Agents provide stylistically coherent improvisations based on learned musical knowledge while continuously listening and adapting to input from musicians or other agents in real time. The system is trained on any musical material chosen by the user, effectively constructing a generative model (called a corpus) from which it draws its musical knowledge and improvisation skills. Corpora, inputs, and outputs can be MIDI as well as audio, and inputs can be live or streamed from MIDI or audio files. Somax 2 is one of the improvisation systems descending from the well-known OMax software, presented here in a completely new implementation. As such, it shares with its siblings the general loop [listen / learn / model / generate], using a form of statistical modeling that builds a highly organized memory structure through which it can navigate toward new musical organizations while preserving stylistic coherence, rather than generating unheard sounds as other machine-learning systems do. However, Somax 2 adds a whole new versatility: it is highly reactive to the musicians' decisions, and its creative agents communicate and work together in the same way, thanks to cognitively inspired interaction strategies and a finely optimized concurrent architecture that let all its units cooperate smoothly.
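To make the [listen / learn / model / generate] loop concrete, here is a deliberately simplified Python sketch. All names (Corpus, Agent, listen, generate) are invented for illustration and are not the actual Somax 2 API; the real system models rich, multidimensional audio and MIDI features rather than bare pitch numbers.

```python
# Hypothetical toy sketch of the [listen / learn / model / generate] loop.
# NOT the Somax 2 API: class and method names are invented for illustration.
import random
from collections import defaultdict

class Corpus:
    """A tiny statistical memory: for each event, remember what followed it."""
    def __init__(self):
        self.transitions = defaultdict(list)  # pitch -> list of next pitches

    def learn(self, events):
        for current, nxt in zip(events, events[1:]):
            self.transitions[current].append(nxt)

class Agent:
    """Navigates the corpus, biased toward the most recent outside influence."""
    def __init__(self, corpus, seed):
        self.corpus = corpus
        self.state = seed

    def listen(self, influence):
        # If the influence exists in memory, jump to it (reactivity).
        if influence in self.corpus.transitions:
            self.state = influence

    def generate(self):
        candidates = self.corpus.transitions.get(self.state)
        if not candidates:  # dead end: restart from a random known state
            self.state = random.choice(list(self.corpus.transitions))
            candidates = self.corpus.transitions[self.state]
        self.state = random.choice(candidates)
        return self.state

# Learn from a short "training" melody (MIDI pitches), then improvise
# while reacting to a stream of simulated live input.
corpus = Corpus()
corpus.learn([60, 62, 64, 62, 60, 67, 65, 64, 62, 60])
agent = Agent(corpus, seed=60)
for live_pitch in [64, 61, 67, 60]:
    agent.listen(live_pitch)
    print(live_pitch, "->", agent.generate())
```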
Somax 2 allows detailed parametric control of its players and can even be played alone as an instrument in its own right, or used in a composition workflow. Agents can listen to multiple sources, and the user can create entire ensembles of agents, controlling in detail how these agents interconnect and "influence" each other.
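Continuing the toy sketch above (and again purely hypothetical rather than the actual Somax 2 interface), such an ensemble could be wired by routing each agent's output into the others' listening inputs, with a per-connection weight standing in as a crude placeholder for the detailed influence controls mentioned above:

```python
# Toy "ensemble": reuses the Corpus and Agent classes from the sketch above.
# influence_weight is an invented stand-in for Somax 2's detailed routing
# controls: the probability that one agent reacts to another's output.
import random

corpus = Corpus()
corpus.learn([60, 62, 64, 65, 64, 62, 60, 67, 65, 64])
ensemble = [Agent(corpus, seed=60), Agent(corpus, seed=64)]
influence_weight = 0.5

for step in range(8):
    outputs = [agent.generate() for agent in ensemble]
    for i, agent in enumerate(ensemble):
        for j, pitch in enumerate(outputs):
            if i != j and random.random() < influence_weight:
                agent.listen(pitch)  # cross-influence between agents
    print("step", step, ":", outputs)
```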
Somax 2 is conceived as a co-creative partner in the improvisational process: after some minimal tuning, the system behaves in a self-sufficient manner and can participate in a diversity of improvisation setups and even installations.
This presentation will introduce the software environment, demonstrate its learning and interaction modes, explain the basic and advanced controls in the user interface, and present a real musical situation with famous trombonist Benny Sluchin improvising along with Somax 2.
Bios:
Benny Sluchin
Benny Sluchin studied at the Tel-Aviv Conservatory and the Jerusalem Music Academy, in parallel with pursuing degrees in mathematics and philosophy at the University of Tel-Aviv. He joined the Israel Philharmonic Orchestra and was engaged as co-soloist with the Jerusalem Radio Symphony Orchestra. A member of the Ensemble intercontemporain since 1976, he has premiered numerous works and recorded Keren by Iannis Xenakis and the Sequenza V by Luciano Berio, in addition to 19th- and 20th-century works for trombone.
A doctor of mathematics, Benny Sluchin is involved in acoustic research at IRCAM. Passionate about teaching, he edited Brass Urtext, a series of original texts on teaching brass instruments, and published Le trombone à travers les âges (Buchet-Chastel) with Raymond Lapie. Two of his books have been awarded the Sacem Prize for pedagogical publications: Contemporary Trombone Excerpts and Jeu et chant simultanés sur les cuivres. His publication on brass mutes is a benchmark reference, and his research on computer-assisted interpretation has been the subject of several presentations and scientific publications.
As an application of his research, Benny has released a number of recordings of John Cage's music. His recent film, Iannis Xenakis, Le dépassement de soi, was produced by Mode Records.
Mikhail Malt
I am a researcher in the Musical Representations team at IRCAM, a Computer Music Design teacher (within the IRCAM Department of Pedagogy), an associate research director at Sorbonne University, and a composer. I have a scientific and musical background (engineering, composition, and conducting), and my research focuses mainly on computer-assisted music writing (computer-assisted composition) and musical formalization.
Since my arrival at IRCAM (in October 1990 as a student and in 1992 as a research composer), my main activity has been divided between research and teaching, especially in the composition and computer music curriculum.
Currently, my work develops along three axes:
• modeling and musical representation: the study of the expressivity of formal models in computer-assisted composition and in real-time generative music, and the modeling of open works;
• the development of interfaces and tools for computer-assisted composition;
• musical analysis, computer-assisted musical performance, and musical creation.