Sound-Music-Movement Interaction team / IRCAM

Riccardo Borghesi

Developer in the ISMM team, I am in charge of the team's software, such as MuBu, GestureAndSound, and MaxSoundBox. After a master's degree in computer music at the University of Pisa, I joined IRCAM in 1996, where I have worked in several teams on the development of specialised graphical interfaces for musical interaction.

Frédéric Bevilacqua

Head of research at IRCAM, leading the Sound Music Movement Interaction team, my work concerns gestural interaction and movement analysis for music and the performing arts. The applications of my research range from digital musical instruments to rehabilitation guided by sound feedback. I have coordinated several projects, such as Interlude (ANR Prize for Digital Technologies 2013, Guthman Prize 2011 for new musical instruments) on new interfaces for music, and the ANR Legos project on sensorimotor learning in interactive music systems. After scientific and musical studies (Master in Physics and PhD in Biomedical Optics from EPFL, Berklee College of Music in Boston), I was a researcher at the University of California, Irvine (UCI), before joining IRCAM in 2003.

Diemo Schwarz

The Sound-Music-Movement Interaction team will present its latest developments in the fields of individual and collective musical interaction:

MuBu:

- new versions and features (JavaScript integration, machine learning)

- new examples of granular spatialization in HOA (ambisonics), additive (re-)synthesis, and recording with a ring buffer (see the sketch below).
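
The ring-buffer idea behind the recording example can be sketched in a few lines. The following Python class is purely illustrative (it is not part of MuBu, and all names are invented): a circular buffer keeps only the most recent samples and can be read back in chronological order at any time.

```python
import numpy as np

class RingRecorder:
    """Record audio into a fixed-size circular buffer, keeping the last `size` samples."""

    def __init__(self, size: int):
        self.buffer = np.zeros(size, dtype=np.float32)
        self.size = size
        self.write_pos = 0   # next write index
        self.filled = 0      # how many valid samples are stored

    def write(self, block: np.ndarray) -> None:
        """Append a block of samples, overwriting the oldest data when full."""
        for sample in block.astype(np.float32):
            self.buffer[self.write_pos] = sample
            self.write_pos = (self.write_pos + 1) % self.size
            self.filled = min(self.filled + 1, self.size)

    def read(self) -> np.ndarray:
        """Return the recorded samples in chronological order (oldest first)."""
        if self.filled < self.size:
            return self.buffer[:self.filled].copy()
        # buffer is full: unroll it starting at the oldest sample
        return np.concatenate((self.buffer[self.write_pos:], self.buffer[:self.write_pos]))

# Example: record three blocks into a one-second buffer at 48 kHz
rec = RingRecorder(size=48000)
for _ in range(3):
    rec.write(np.random.uniform(-1.0, 1.0, 512))
print(rec.read().shape)  # (1536,) until the buffer wraps around
```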

 

MuBu (for "Multi-buffer") is a set of modules for multimodal real-time signal processing (audio and motion), automatic learning, and sound synthesis by descriptors. The MuBu multimodal container makes it possible to store, edit and display synchronized time tracks of various types: audio, sound descriptors, gesture capture data, segmentation markers, MIDI scores. Simplified symbolic musical representations of synthesis control parameters and spatialization can also be integrated.

MuBu also integrates modules for interactive machine learning, used to recognize sound or gesture patterns.
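
The typical interactive workflow is: record a few labelled examples, train, then recognize incoming data. The sketch below illustrates that loop with a plain nearest-template classifier on resampled gesture trajectories; it is not MuBu's machine-learning API, and every name in it is invented for the example.

```python
import numpy as np

def resample(gesture: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample a (frames x dims) gesture to a fixed number of frames."""
    t_old = np.linspace(0.0, 1.0, len(gesture))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, gesture[:, d])
                     for d in range(gesture.shape[1])], axis=1)

class GestureRecognizer:
    """Train on a few labelled example gestures, then recognize new ones."""

    def __init__(self):
        self.templates = []   # list of (label, resampled gesture)

    def train(self, label: str, gesture: np.ndarray) -> None:
        self.templates.append((label, resample(gesture)))

    def recognize(self, gesture: np.ndarray) -> str:
        """Return the label of the closest stored template (Euclidean distance)."""
        g = resample(gesture)
        dists = [(np.linalg.norm(g - tpl), label) for label, tpl in self.templates]
        return min(dists)[1]

# Example: two demonstrated gestures (x, y traces), then a noisy query
rng = np.random.default_rng(0)
circle = np.column_stack((np.cos(np.linspace(0, 6.28, 50)),
                          np.sin(np.linspace(0, 6.28, 50))))
swipe = np.column_stack((np.linspace(-1, 1, 40), np.zeros(40)))
rec = GestureRecognizer()
rec.train("circle", circle)
rec.train("swipe", swipe)
query = circle + 0.05 * rng.normal(size=circle.shape)
print(rec.recognize(query))   # "circle"
```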

 

CataRT:

- new modular version based on MuBu, with tutorials, created in collaboration with Aaron Einbond, Christopher Trapani, and a working group on sample-based synthesis.

- integration into Ableton Live as "SkataRT" (joint project with IRCAM’s Sound Design and Perception team and Music Unit)

 

CataRT is a technology for structuring sound grains obtained from large databases of sounds of all kinds, according to sound characteristics that are automatically analyzed and chosen by the user. Sound databases and archives can thus be explored quickly and intuitively for use in music composition, sound design, performance, and improvisation.
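
The underlying principle can be illustrated with a toy nearest-neighbour selection in descriptor space: each grain of the corpus is described by a few analyzed features, and playback picks the grain closest to a target point chosen by the user. This is only a conceptual sketch, not CataRT's implementation; the descriptors and grain names are invented.

```python
import numpy as np

# Hypothetical corpus: one row per grain, columns = analyzed descriptors
# (e.g. pitch in MIDI note numbers, loudness in dB, spectral centroid in Hz)
descriptors = np.array([
    [60.0, -20.0, 1500.0],
    [62.0, -18.0, 2100.0],
    [45.0, -25.0,  800.0],
    [70.0, -12.0, 3500.0],
])
grain_ids = ["flute_003", "flute_017", "cello_042", "bell_008"]

def select_grain(target: np.ndarray, k: int = 1) -> list:
    """Return the ids of the k grains nearest to the target descriptor values."""
    # Normalize each descriptor column so distances are comparable across units
    mean = descriptors.mean(axis=0)
    std = descriptors.std(axis=0) + 1e-9
    d_norm = (descriptors - mean) / std
    t_norm = (target - mean) / std
    distances = np.linalg.norm(d_norm - t_norm, axis=1)
    return [grain_ids[i] for i in np.argsort(distances)[:k]]

# Example: the user points at "around MIDI 61, -19 dB, fairly bright"
print(select_grain(np.array([61.0, -19.0, 1800.0]), k=2))  # the two flute grains are the nearest matches
```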