
Presented by Frederic Bevilacqua, Diemo Schwarz, Riccardo Borghesi, Benjamin Matuszewski, Jérôme Nika

We will present new features of the MuBu for Max framework for multimodal analysis of sound and motion, interactive sound synthesis, and machine learning; the CataRT and SKataRT corpus-based synthesis tools for Max and Ableton Live; the Gestural Sound Toolkit for prototyping gesture–sound interaction scenarios; and the new version of the Soundworks framework for JavaScript, together with tutorials.
We will also show Koral, a new series of Max for Live plugins tailored to using smartphone movement sensors through our CoMote application, developed in collaboration with the association Arts Convergence, and give insights into current research and developments on composing interaction with music synthesis processes.
