Abstract:
The goal of this residency is to propose gestural control of Artificial Intelligence models in real-time scenarios. The idea is to develop a dedicated control interface (hardware and software) offering novel and innovative ways of generating electroacoustic sounds with the most recent deep neural network models. A dedicated electronic interface makes it possible to develop instrumental gestures precisely linked to sound, engaging the body while producing expressive electroacoustic materials. AI systems have great potential to generate expressive and highly musical sound. Combining these two approaches to contemporary sound generation would yield fascinating and unexpected expressiveness from machines.
Bio:
Maxime Mantovani (France, born 1984) is a composer and improviser of mixed and electroacoustic music. He holds a Master's degree in composition from the CNSMD (Conservatoire National Supérieur de Musique et de Danse de Lyon), as well as an engineering degree in electronics and computer science. In 2020, he graduated from the IRCAM Cursus in composition and computer music. His multi-faceted background in music and engineering forms the basis of his artistic practice, which blends composition, improvisation, sound installations and computer music technologies. He uses dedicated tools to facilitate musical expression, including DIY interfaces and custom-made instruments. His goal in creating these tools is to achieve precise control of electroacoustic sound, as responsive and expressive as a typical acoustic instrument, and to deepen his instrumental writing methods. As he continues his journey of self-discovery as an artist, his highest aim is to write music that engages him on both a technical and a poetic level. Over the years, he has obtained several grants and the support of institutions that have helped him build his interfaces, perform and compose.