Diemo Schwarz, born in Germany in 1969, is a researcher at IRCAM, and a musician and creative programmer. His scientific research on sound analysis/synthesis and on gestural control of musical interaction is the basis of his artistic work, and makes it possible to bring advanced and playful musical interaction to expert musicians and the general public via installations such as the dirty tangible interfaces (DIRTI) and augmented reality (Topophonie mobile). In 2017 he was DAAD Edgar-Varèse guest professor for computer music at TU Berlin. He performs on his own digital musical instrument based on his CataRT open source software, exploring different collections of sound with the help of gestural controllers that reclaim musical expressiveness and physicality for the digital instrument, bringing the immediacy of embodied musical interaction back to the rich sound worlds of digital sound processing and synthesis. He interprets and performs improvised electronic music as a member of the 30-piece ONCEIM improvisers orchestra, and with musicians such as Frédéric Blondy, Richard Scott, Gael Mevel, Pascal Marzan, Massimo Carrozzo, Nicolas Souchal, Fred Marty, and Hans Leeuw. He composes for dance and performance (Sylvie Fleury, Frank Leibovici), video (Benoit Gehanne and Marion Delage de Luget), and installation (Christian Delecluse, Cecile Babiole).
Interactive Concatenative Synthesis with CataRT and MuBu
CataRT is a technology for structuring sound grains drawn from large sound databases of any kind, according to automatically analyzed sound characteristics chosen by the user. Sound databases and archives may thus be explored quickly and intuitively for use in musical composition, sound design, live performance, and improvisation. The workshop will be mainly devoted to presenting the technology and giving participants the chance to use it in hands-on sessions, in order to acquire the skills necessary for creating compositions, installations, digital instruments, and personalized creative tools. The workshop will be led by Diemo Schwarz, ISMM team, and is aimed at participants with a good knowledge of Max programming.
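The core idea, selecting grains by proximity in descriptor space, can be illustrated outside of Max. The following is a minimal, hypothetical Python sketch (not CataRT's actual implementation): each grain in a corpus carries analyzed descriptors, and playback picks the grain nearest to a target point. The descriptor names and values here are invented for illustration.

```python
import math

# Toy corpus: each grain is a dict of a start time (seconds) plus
# hypothetical analyzed descriptors (spectral centroid in Hz, loudness in dB).
corpus = [
    {"start": 0.00, "centroid": 440.0, "loudness": -12.0},
    {"start": 0.25, "centroid": 880.0, "loudness": -6.0},
    {"start": 0.50, "centroid": 1760.0, "loudness": -3.0},
]

def nearest_grain(target, grains, keys=("centroid", "loudness")):
    """Return the grain whose descriptors lie closest to the target
    point in descriptor space (Euclidean distance over the chosen keys)."""
    return min(grains, key=lambda g: math.dist([g[k] for k in keys],
                                               [target[k] for k in keys]))

# A gestural controller would continuously supply target points like this one;
# the grain starting at 0.25 s is the nearest match here.
grain = nearest_grain({"centroid": 900.0, "loudness": -5.0}, corpus)
print(grain["start"])
```

In practice the descriptors would be computed by automatic analysis over many thousands of grains, and the nearest-neighbor lookup would use a spatial index rather than a linear scan.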
MuBu (for “multi-buffer”) is a set of modules for real-time multimodal signal processing (audio and movement), machine learning, and descriptor-based sound synthesis. Using the multimodal MuBu container, users can store, edit, and visualize different types of temporally synchronized channels: audio, sound descriptors, motion capture data, segmentation markers, and MIDI scores. Simplified symbolic musical representations and parameters for synthesis and spatialization control can also be integrated.
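The multi-buffer concept can be sketched as a container of named tracks sharing one timeline. This is a hypothetical illustration in Python, not MuBu's API: each track is a list of (time, value) events, and a query over a time range returns the synchronized events from every track.

```python
class MultiBuffer:
    """Toy sketch of a multi-buffer: several named, time-synchronized
    tracks (markers, descriptors, etc.) sharing one timeline."""

    def __init__(self):
        self.tracks = {}

    def add_track(self, name, events):
        # events: list of (time_seconds, value) pairs
        self.tracks[name] = sorted(events)

    def slice(self, t0, t1):
        """Return every track's events falling within [t0, t1)."""
        return {name: [(t, v) for (t, v) in ev if t0 <= t < t1]
                for name, ev in self.tracks.items()}

mb = MultiBuffer()
mb.add_track("markers", [(0.0, "attack"), (0.5, "sustain"), (1.2, "release")])
mb.add_track("loudness", [(0.0, -20.0), (0.5, -6.0), (1.0, -9.0)])
print(mb.slice(0.4, 1.1))
```

Because all tracks reference the same timeline, slicing yields aligned audio, descriptor, and marker data, which is what makes synchronized editing and visualization straightforward.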
MuBu integrates modules for interactive machine learning for the recognition of sound or motion shapes. MuBu also includes PiPo (Plugin Interface for Processing Objects) for signal processing.
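The plugin-pipeline idea behind PiPo can be suggested with a short, hypothetical sketch (this mirrors only the general pattern of chainable processing modules, not PiPo's actual C++ interface): each module transforms a stream of frames, and modules compose into a chain.

```python
class Bias:
    """Toy processing module: add a constant offset to each frame value."""
    def __init__(self, offset):
        self.offset = offset
    def process(self, frames):
        return [x + self.offset for x in frames]

class Scale:
    """Toy processing module: multiply each frame value by a factor."""
    def __init__(self, factor):
        self.factor = factor
    def process(self, frames):
        return [x * self.factor for x in frames]

def run_chain(modules, frames):
    # Feed the output of each module into the next, as in a plugin chain.
    for m in modules:
        frames = m.process(frames)
    return frames

print(run_chain([Bias(1.0), Scale(2.0)], [0.0, 0.5, 1.0]))  # [2.0, 3.0, 4.0]
```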