The ACIDS research group at the Beijing Central Conservatory

To mark the 60th anniversary of diplomatic relations between China and France, the 2024 International Music Festival opened on Monday at the Central Conservatory of Music in Beijing, a major cultural celebration highlighting educational exchanges between the two nations.

With the aim of illustrating the richness of the Sino-French cultural fusion through music, this festival welcomes the active participation of prestigious institutions such as the Conservatoire national supérieur de musique et de danse de Paris, the École normale de musique de Paris, the Conservatoire Maurice-Ravel, the Association des professeurs de formation musicale (APFM), the Institut de recherche et coordination acoustique/musique (IRCAM), as well as various Chinese music schools and artistic troupes.

IRCAM's ACIDS research group will be present at the festival to give two workshops:

1) **Scientific conference on AI**:

AI in 64 KB: can we do more with less?

The group's main object of study is the properties and perception of musical synthesis and artificial creativity. In this context, ACIDS experiments with deep AI models applied to creative materials, with the aim of developing artificial creative intelligence. Over the last few years, the team has developed several objects that integrate this research directly as real-time objects usable in MaxMSP, and has produced numerous prototypes of innovative instruments and musical pieces in collaboration with renowned composers. However, an often overlooked drawback of deep models is their massive complexity and enormous computational cost. This is particularly critical in audio applications, which rely heavily on specialised embedded hardware with real-time constraints. Consequently, the lack of work on lightweight and efficient deep models is a major obstacle to the real-world use of deep models on resource-constrained hardware. The ACIDS team shows how these goals can be achieved through several recent theories (the lottery ticket hypothesis (Frankle and Carbin, 2018), mode connectivity (Garipov et al., 2018), and information bottleneck theory) and demonstrates how this research has led to lightweight embedded deep audio models, namely:

1/ RAVE on Raspberry Pi // real-time embedded deep synthesis at 48 kHz.
2/ FlowSynth // a learning-based device that lets you travel through the auditory spaces of synthesizers simply by moving your hand.
3/ AFTER // a new approach to diffusion models for real-time generation of audio streams.
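The lottery ticket line of work mentioned above prunes a trained network down to a small subnetwork that can then be retrained to comparable accuracy. As a purely illustrative sketch (not the ACIDS tooling), the core operation, global magnitude pruning, can be written in a few lines of NumPy:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries across all layers,
    keeping only the top (1 - sparsity) fraction of weights globally."""
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(all_mags, sparsity)
    masks = [np.abs(w) >= threshold for w in weights]
    pruned = [w * m for w, m in zip(weights, masks)]
    return pruned, masks

# Toy two-layer weight set standing in for a trained model.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)), rng.standard_normal((64, 10))]

pruned, masks = magnitude_prune(layers, sparsity=0.9)
kept = sum(int(m.sum()) for m in masks)
total = sum(m.size for m in masks)
print(f"kept {kept}/{total} weights")  # roughly 10% survive
```

In the lottery ticket procedure, the surviving mask would then be applied to the network's original initialisation and the sparse subnetwork retrained from there; this sketch only shows the masking step.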

2) **Music conference on the application of AI to music composition**:

Using deep neural audio synthesizers for music composition in Ableton Live and MaxMSP

The aim is to provide new tools for modelling musical creativity and extending sonic possibilities using machine learning approaches. In this context, ACIDS experiments with deep AI models applied to creative material, with the aim of developing artificial creative intelligence. Over the last few years, the team has developed several objects that integrate this research directly as real-time objects usable in MaxMSP and Ableton Live, and has produced numerous prototypes of innovative instruments and integrated lightweight deep audio models.

After a brief introduction to the concepts of artificial intelligence and machine learning, participants will dive into the mechanisms and concrete use of nn~ and RAVE, learning both to use the models and to train their own. All the tools will be covered both conceptually and practically. By the end of the course, participants will be able to work with the existing generative AI tool environments and, in particular, to go beyond the existing possibilities by training their own models. The course also aims to provide the basic notions needed to include existing open-source deep models in a creative workflow.

A demo/performance will take place on 27 November during the closing concert, with Pierce Warnecke, Axel Chemla-Romeu-Santos, Benoît Alary and Philippe Esling.