IRCAM Forum Workshops Paris – Preliminary Program 2018


Please note: this is a preliminary program. It is subject to modification and additions.

IRCAM LIVE 2018 CONCERT: March 7, 8:30pm, Centre Pompidou, Grande Salle
OPERA: La princesse légère, March 9, 8:00pm, Théâtre national de l’Opéra Comique

Wednesday, March 7



Time Stravinsky Room – Conference room
9am-9:30am Registration
9:30am-10am Gregory BELLER and Paola PALUMBO (IRCAM)
Welcome Session
10am-10:30am Hugues VINET and Brigitte d’ANDRÉA-NOVEL (IRCAM)
IRCAM research and development news
10:30am-11am Axel ROEBEL and Charles PICASSO (IRCAM)
News from the Analysis Synthesis Team
11am-11:30am Break
11:30am-12pm Olivier WARUSFEL, Markus NOISTERNIG and Thibaut CARPENTIER (IRCAM)
News from the EAC Team
12pm-12:30pm Thomas HELIE, Robert PIECHAUD, Jean LOCHARD (IRCAM) and Hans Peter STUBBE 
News from S3AM Team
12:30pm-1pm Jérôme NIKA, Jean-Louis GIAVITTO, Philippe ESLING and Gérard ASSAYAG (IRCAM)
News from RepMus Team
1pm-2:30pm Lunch buffet
2:30pm-3pm Frédéric BEVILACQUA, Diemo SCHWARZ, Riccardo BORGHESI and Benjamin MATUSZEWSKI (IRCAM)
News from ISMM Team
3pm-4pm Jean-Julien AUCOUTURIER, Marco LIUNI, Pablo ARIAS (IRCAM)
News from the CREAM Project Team
Conveners: Laura RACHMAN, STMS Lab (IRCAM/CNRS/UPMC UMR 9912) and Brain and Spine Institute (CNRS UMR 7225 / UPMC / INSERM U 1127), Paris, France; Jean-Julien AUCOUTURIER, STMS Lab (IRCAM/CNRS/UPMC UMR 9912), Paris, France
In recent years, the experimental sciences of emotion perception and production have greatly benefited from software tools able to synthesize realistic facial expressions, which can be used as stimuli in experimental paradigms such as reverse correlation. In the audio modality, however, tools to similarly control or synthesize the acoustic characteristics of emotional speech or music typically do not exist. The objective of this symposium is to present four new open-source software tools, developed in the past two years in the context of the ERC CREAM project (“Cracking the Emotional Code of Music”), that attempt to fill this methodological gap.

In more detail, the tools presented here are designed as transformation techniques: they do not synthesize artificial sound, but rather work on genuine audio recordings, or sometimes even on real-time audio streams, which they parametrically manipulate to make them sound more or less emotional. Three of the tools (DAVID, ZYGi, ANGUS) are computational models of a specific vocal behavior, such as the shape of the speaker’s mouth conveying the sound of a smile, or the roughness of a voice expressing arousal. The fourth tool (CLEESE) was developed not to generate emotional speech per se, but rather to generate infinite prosodic variations, which can then be used as stimuli in reverse-correlation paradigms to uncover people’s mental representations of specific emotional or attitudinal vocal expressions.

In the four presentations of this symposium, each tool will be presented along with a demonstration of possible applications in experimental research. All the tools presented in this symposium are made available open-source to the community, in the hope that they will foster new ideas and experimental paradigms to study emotion processing in speech and music.

Authors: Laura RACHMAN, Marco LIUNI and Jean-Julien AUCOUTURIER, STMS Lab (IRCAM/CNRS/UPMC UMR 9912) and Brain and Spine Institute (CNRS UMR 7225 / UPMC / INSERM U 1127), Paris, France

DAVID is a tool developed to apply infra-segmental cues related to emotional expressions, such as pitch inflections, vibrato, and spectral changes, onto any preexisting audio stimuli or direct vocal input through a microphone. Users can control the audio effects in a modular manner to create customized transformations. Three emotion presets (happy, sad, afraid) have been thoroughly validated in English, French, Swedish, and Japanese, showing that they are reliably recognized as emotional and not typically detected as artificially produced [1]. When applying the emotion effects to real-time speech, the latency of the software is less than 20 milliseconds, short enough to leave continuous speech unaffected by any latency effect. This notably makes the tool useful for vocal feedback studies [2] and investigations of emotional speech in interpersonal communication. DAVID can be controlled through a graphical user interface, which is practical for exploring different combinations of the audio effects, and can also be piloted from experimental software via a Python module, pyDAVID. This extension allows trial-by-trial control of, for example, the onset or the intensity of the emotion effects. Finally, time stamps can be stored with pyDAVID, making the tool not only appropriate for various behavioral paradigms, but also ideally suited for use in conjunction with neurophysiological recordings, such as electroencephalography (EEG).
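By way of illustration, a vibrato cue of the kind DAVID applies can be sketched as a sampled pitch-offset contour (a minimal sketch, not DAVID’s actual implementation; the rate, depth, and control-step values below are assumptions):

```python
import math

def vibrato_contour(dur_s, rate_hz=6.0, depth_cents=50.0, step_s=0.005):
    """Sinusoidal pitch-offset contour, in cents, sampled every step_s
    seconds; a pitch shifter would apply one offset per control step."""
    n = round(dur_s / step_s)
    return [depth_cents * math.sin(2 * math.pi * rate_hz * i * step_s)
            for i in range(n)]

contour = vibrato_contour(0.5)  # 0.5 s of control values
```

Applied in a real-time pitch shifter with a 5 ms control period, such a contour stays well within the sub-20 ms latency budget mentioned above.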

1. Rachman, L., Liuni, M., Arias, P., Lind, A., Johansson, P., Hall, L., Richardson, D., Watanabe, K., Dubal, S. and Aucouturier, J.J. (2017) DAVID: An open-source platform for real-time transformation of infra-segmental emotional cues in running speech. Behavior Research Methods. doi: 10.3758/s13428-017-0873-y

2. Aucouturier, J.J., Johansson, P., Hall, L., Segnini, R., Mercadié, L. & Watanabe, K. (2016) Covert Digital Manipulation of Vocal Emotion Alter Speakers’ Emotional State in a Congruent Direction. Proceedings of the National Academy of Sciences, vol. 113 no. 4, doi: 10.1073/pnas.1506552113

Authors: Pablo Arias and Jean-Julien Aucouturier, STMS Lab (IRCAM/CNRS/UPMC UMR 9912), Paris, France

ZYGi is a digital audio processing algorithm designed to model the acoustic consequences of smiling (Facial Action Unit 12) in speech. The algorithm is able to simulate the subtle acoustic consequences of zygomatic contraction in the voice while leaving other linguistic and paralinguistic dimensions, such as semantic content and prosodic features, unchanged. The algorithm, which is based on a phase vocoder technique, uses spectral transformations (frequency warping and dynamic spectral filtering) to implement the formant movements and high-frequency enhancements that characterize smiled speech. Concretely, the algorithm can either shift the first formants of the voice towards the high frequencies to give the impression of a smile during production, or shift them towards the low frequencies, giving the impression of a closed, rounded mouth. In a series of recent studies, we showed that such manipulated acoustic cues are not only recognized as smiled and as more positive, but that they can also trigger unconscious facial imitation [1]. ZYGi exists as a Python wrapper around IRCAM’s SuperVP voice transformation software and is open to the research community.
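The frequency-warping idea can be illustrated on a discrete spectral envelope (a minimal sketch under simplified assumptions, not the SuperVP-based processing ZYGi actually performs):

```python
def warp_envelope(env, alpha):
    """Warp a spectral envelope along the frequency axis by factor alpha:
    alpha > 1 moves formant peaks up in frequency ("smiled"), alpha < 1
    moves them down ("rounded mouth"). env holds magnitudes at linearly
    spaced frequency bins; each output bin reads the input envelope at
    f/alpha with linear interpolation."""
    n = len(env)
    out = []
    for k in range(n):
        src = k / alpha
        i = int(src)
        if i >= n - 1:
            out.append(env[-1])
        else:
            frac = src - i
            out.append(env[i] * (1 - frac) + env[i + 1] * frac)
    return out

env = [0.0, 1.0, 0.0, 0.0]        # one "formant" peak at bin 1
smiled = warp_envelope(env, 2.0)  # peak now sits at bin 2
```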

1. Arias, P., Belin, P., & Aucouturier, J.-J. (2017). Auditory smiles trigger unconscious facial imitation, in review.

Authors: Emmanuel Ponsot, Laboratoire des Systèmes Perceptifs (CNRS UMR 8248) and Département d’études cognitives, Ecole Normale Supérieure, PSL Research University, Paris, France ; Juan-Jose Burred, Independent Researcher, Paris, France  ; Jean-Julien Aucouturier, STMS Lab (IRCAM/CNRS/UPMC UMR 9912), Paris, France

CLEESE (Combinatorial Expressive Speech Engine) is a tool designed to generate an infinite number of natural-sounding, expressive variations around any speech recording. It consists of a voice-processing algorithm based on the phase vocoder architecture. It operates by generating a set of breakpoints in a given recording (e.g., every 100 ms in the file) and applying a different audio transformation to every segment. In doing so, it can modify the temporal dynamics of any recorded voice’s original contour of pitch, loudness, timbre (spectral envelopes), and speed (roughly defined, its prosody), in a way that is both fully parametric and realistic. Notably, it can be used to generate thousands of novel, natural-sounding variants of the same word utterance, each with randomly manipulated relevant dimensions. Such stimuli can then be used to access humans’ high-level representations of speech (e.g., emotional or social traits) using psychophysical reverse-correlation methods. By providing a computational account of such high-level auditory “filtering”, we believe this tool will open a vast range of experimental possibilities for future research seeking to decipher the acoustical bases of human social and emotional communication [1], hopefully as successfully as analogous tools have done in vision science [2]. CLEESE is available open-source as both a Matlab and a Python toolbox.
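The breakpoint logic can be sketched as follows (an illustrative sketch, not CLEESE’s actual interface; the window size and deviation values are assumptions):

```python
import random

def random_pitch_bpf(n_windows, win_s=0.1, sd_cents=100.0, seed=None):
    """One Gaussian pitch offset (in cents) per analysis window; a phase
    vocoder would interpolate between these breakpoints to produce one
    random prosodic variant of the recording."""
    rng = random.Random(seed)
    return [(round(i * win_s, 3), rng.gauss(0.0, sd_cents))
            for i in range(n_windows)]

bpf = random_pitch_bpf(5, seed=1)  # five 100 ms windows for a 0.5 s utterance
```

Drawing thousands of such break-point functions, resynthesizing each, and regressing listeners’ judgments against the random offsets is the reverse-correlation recipe described above.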

1. Ponsot, E., Burred, JJ., Belin, P. & Aucouturier, JJ. (2017) Cracking the social code of speech prosody using reverse correlation. In Review.

2. Yu, H., Garrod, O. G., & Schyns, P. G. (2012). Perception-driven facial expression synthesis. Computers & Graphics, 36(3), 152-162.

Authors: Marco Liuni, Luc Ardaillon, and Jean-Julien Aucouturier, STMS Lab (IRCAM/CNRS/UPMC UMR 9912), Paris, France

ANGUS is a software tool for high-quality transformation of natural voice with parametric control of roughness. Recent psychophysical and imaging studies suggest that rough sounds, characterized by specific spectro-temporal modulations, target neural circuits involved in fear/danger processing; the brain extracts such features from human voices to infer socio-emotional traits of their speakers [1]. Our software aims at the design of reproducible psychophysical experiments imposing a parametric, scream-inspired effect on natural sounds, with the aim of investigating the emotional response to this sound feature. Analysing and synthesizing rough vocals is challenging, as roughness is generated by highly unstable modes in the vocal folds and tract: compared to standard production, rough vocals present additional sub-harmonics as well as nonlinear components. Our approach is based on multiple amplitude modulations of the incoming sound that are automatically adapted to the sound’s fundamental frequency, which leads to a realistic, but also highly efficient, parametric effect well suited for real-time applications.
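The core mechanism, amplitude modulation locked to the fundamental, can be sketched on a synthetic carrier (a toy illustration, not ANGUS itself; all parameter values are assumptions):

```python
import math

def rough_voice(f0=220.0, mod_ratio=0.5, depth=0.6, dur=0.1, sr=8000):
    """Amplitude-modulate a sine carrier at a sub-harmonic of its
    fundamental (f0 * mod_ratio); the modulation adds sideband
    components around each partial, which is what is heard as
    roughness. On real voice, f0 would be tracked, not fixed."""
    fm = f0 * mod_ratio
    out = []
    for n in range(int(dur * sr)):
        t = n / sr
        carrier = math.sin(2 * math.pi * f0 * t)
        mod = 1.0 - depth * 0.5 * (1.0 - math.cos(2 * math.pi * fm * t))
        out.append(carrier * mod)
    return out

sig = rough_voice()
```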

1. Arnal, L. H., Flinker, A., Kleinschmidt, A., Giraud, A. L., & Poeppel, D. (2015). Human screams occupy a privileged niche in the communication soundscape. Current Biology, 25(15), 2051-2056.

4pm-4:30pm Break
4:30pm-5:30pm Marta GENTILUCCI with Jérôme NIKA, Axel ROEBEL and Marco LIUNI
End of artistic research residency, with demo: Female singing voice vibrato and tremolo: analysis, mapping, and improvisation
5:30pm-6:00pm Rama GOTTFRIED and RepMus Team (IRCAM)
Introducing Symbolist, a graphic notation environment for music and multimedia, developed by Rama Gottfried and Jean Bresson (IRCAM – Musical Representations) as part of Rama’s 2017-18 IRCAM-ZKM Musical Research Residency. Symbolist was designed to be flexible in purpose and function, capable of controlling computer rendering processes such as spatial movement, and to serve as an open workspace for developing symbolic representations for performance with new gestural interfaces. The system is based on an Open Sound Control (OSC) encoding of symbols representing multi-rate and multidimensional control data, which can be streamed as control messages to audio processing or any kind of media rendering system that speaks OSC. Symbols can be designed and composed graphically, and placed in relationship with other symbols. The environment provides tools for creating symbol groups and stave references, by which symbols may be timed and used to constitute a structured and executable multimedia score.
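To give a flavour of the OSC encoding, a nested symbol description can be flattened into address/value pairs (the address scheme below is illustrative, not Symbolist’s actual schema):

```python
def flatten_osc(bundle, prefix=""):
    """Flatten a nested dict describing a symbol into OSC-style
    (address, value) messages that any OSC-speaking renderer could
    consume."""
    msgs = []
    for key, val in bundle.items():
        addr = f"{prefix}/{key}"
        if isinstance(val, dict):
            msgs.extend(flatten_osc(val, addr))
        else:
            msgs.append((addr, val))
    return msgs

symbol = {"symbol": {"type": "circle", "x": 0.25, "y": 0.5, "time": 1.2}}
msgs = flatten_osc(symbol)  # [("/symbol/type", "circle"), ...]
```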


2:30pm-3:30pm Shannon Room – Classroom

Hands-on: Spat


6:30pm-7:30pm Announcement of the laureates of the Artistic Research Residency Program 2018-2019

Drinks under the glass roof


8:30pm-10:00pm IRCAM LIVE CONCERT

Centre Pompidou, Grande Salle

Thursday, March 8


Conference room, demos and posters

Time Stravinsky Room Studio 5 – Demos and workshops
Session: Active musicology – sound library Session: New interfaces / new instruments
9:30am-10am Maurilio CACCIATORE 
The name comes from the French expression “musique mixte”, which designates the tradition of live electronics (or even electronics without real-time interaction) combined with acoustic instruments on stage. It works like a middleware: the modules are ready to use, but all their parts are open, so users can modify them locally in their own patch as needed for their project. MMixte targets intermediate to advanced Max users; less experienced programmers can learn how to organize what is generally called a “concert patcher” (a Max patcher used in concert to manage the electronics of a piece), improve their programming technique, or simply avoid the risk of a crash during a concert caused by badly programmed modules. Advanced Max users can rely on the extreme simplicity of this collection – only the basic Max library has been used – and build in a few minutes an environment for a piece, to be developed further on their own. The preparation time for a concert patcher can be reduced dramatically; after a few steps, the programmer can start working on the audio treatments, the spatialization, and the other creative parts of the Max patching.
This collection comes from a personal need: I started to develop for myself the core of a standard concert patcher to be used in my pieces, and then formalized these modules so as not to have to rewrite them from scratch each time. MMixte is, in this sense, a collection made by a composer for other composers.
The presentation will show the architecture of the package, its modules and their use.
MMixte is a Max package released in July 2017.
New Instrument
10am-10:30am Rémi ADJIMAN 
Over the past decade, first the SATIS department, then the ASTRAM laboratory, and now the PRISM laboratory (Perception Représentation Image Son Musique) have developed a project to create a browsable, online library of ambient sounds called “Sons du Sud”. This project is currently entering a new phase of development with the creation of a specially developed thesaurus and the realization of an interface that provides an ergonomic and didactic tool for indexing sounds (sounds intended for audio-visual professionals, more specifically sound editors). Rémi Adjiman will present his current work, research subjects, and possibilities. He may also present the current “Sons du Sud” website, keeping in mind that further development is scheduled through the end of 2018. This project is supported by the SATT PACA, PRIMI, and AFSI (Association Française du Son à l’Image).
10:30am-11am Break
Session: Artificial intelligence & sound design
11am-11:30am Sahin KURETA 
The presentation is intended as an introduction to deep learning and its applications in music. It features the use of deep auto-encoders for generating novel sounds from a hidden representation of an audio corpus, audio style transfer as shown by Ulyanov, and future directions based on CycleGAN, WaveNet, etc., as well as a high-level introduction to some basic concepts in machine learning.
I will present my ongoing work with the Modalys Induction Connection, created as a result of my Musical Research Residency in 2013. This presentation (an updated version of the one I gave in Santiago at a previous Forum) will begin with a general overview, followed by a series of audio examples drawn from my recently created project documentation pages.
This will be followed by additional examples from a piece I am currently composing, for piano and electronics, commissioned by Keith Kirchoff as part of his EAPiano Project. These new examples feature two main components: my experiments with modeling a grand piano bichord struck by a hammer, and additional experiments with the use of the Induction Connection. For most of these examples I first created a ModaLisp text document to generate audio files, then added code to generate an mlys script for real time use in a Max patch.
11:30am-12pm Karolína KOTNOUR  
Within this project, I aim to define a mutual synthesis of sound and the position of borderlines in space. Space, like a word, has its own shape, meaning, and wording. How does it affect our perception of architecture as an acoustic experience of space? What is the relationship between sound and vision, and how does the brain interpret multidimensional space? How does the sound spectrum affect brain activity and spatial perception within the audible frequency range, given the physiological structures of the auditory organ of a living organism?
The main purpose of this project is the implementation of a structure that, on the basis of this interaction, creates a spatial envelope (the veil) that changes in time and space. This creates an elastic, fluid structure of matter and sound reflecting itself. The situation is modeled on the basis of information obtained from sensors located in the space and on the basis of sensory evaluation of human auditory perception in that space. The realization of this structure will be preceded by testing and visualization of such structures in interactive virtual reality using 3D and 4D programs such as Rhino, Grasshopper, and Max 7/MSP for Ableton Live, which allow a creative representation of a sound as a shape in a common space.
Summary of fundamental objectives:
Visualization of human sensory perception of an environment in relation to architectural forms.
Specifics of psychoacoustic space and visualization of reality and the inner structure of reality.
Possibilities and contexts of subjective visualization of sound in art, and subjective graphical scores for musical compositions. All objectives will be evaluated in the design details of the structure of “The Veil.”
The project ‘invisible synthesizer’ proposes an approach to support the divergent thinking processes of artists navigating the enormous range of sounds of a synthesizer. The project lives at the intersection of music and interaction design and is still at an early prototyping stage. Using the hand-tracking device Leap Motion, the parameter space of a synthesizer is translated into a physical 3D space on the artist’s desk, where the musician can navigate through the possible combinations with a set of different hand postures and hand movements. Richard will talk about why he believes in the power of early prototyping and how he did user testing before writing a single line of code. At the IRCAM Forum 2016 Richard presented the theoretical foundations of this project in a talk; now he will present the enhancements of his prototype. The main elements of the prototype are:

  • Leap Motion device
  • Wekinator (Software used to recognize the hand posture via machine learning)
  • Python script (to translate hand posture + hand position into synthesizer parameters that get sent to a synthesizer via the OSC protocol)
  • A software synthesizer
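The posture-plus-position mapping stage can be sketched as follows (an illustrative mapping with made-up parameter names and ranges, not the author’s actual code; the recognized posture would come from Wekinator, and the result would be sent to the synthesizer over OSC):

```python
def posture_to_params(posture, pos, param_ranges):
    """Map a recognized hand posture and a normalized 3-D palm position
    (x, y, z in 0..1) to synthesizer parameters: the posture selects a
    parameter set, and each spatial axis is scaled into one range."""
    params = {}
    for axis, name in zip("xyz", param_ranges[posture]):
        lo, hi = param_ranges[posture][name]
        params[name] = lo + (hi - lo) * max(0.0, min(1.0, pos[axis]))
    return params

# Hypothetical ranges for one posture.
ranges = {"open_hand": {"cutoff": (200.0, 8000.0),
                        "resonance": (0.0, 1.0),
                        "lfo_rate": (0.1, 20.0)}}
p = posture_to_params("open_hand", {"x": 0.5, "y": 1.0, "z": 0.0}, ranges)
```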


12pm-12:30pm Robert B. LISEK 
We observe the success of artificial neural networks in simulating human performance on a number of tasks, such as image recognition and natural language processing. However, there are limits to state-of-the-art AI that separate it from human-like intelligence. Humans can learn a new skill without forgetting what they have already learned, and they can improve their activity and gradually become better learners. Today’s AI algorithms are limited in how much previous knowledge they are able to keep through each new training phase and how much they can reuse. In practice, this means that a new algorithm must be built for each new specific task. Solutions to this problem are sought in the field of artificial general intelligence (AGI), which describes research that aims to create machines capable of general intelligent action: “general” means that one AI program realizes a number of different tasks and that the same code can be used in many applications. We must focus on self-improvement techniques, e.g. reinforcement learning, and integrate them with deep learning and recurrent networks.
The multidisciplinary study of the trumpets (cornua in Latin) of Pompeii, ancestors of today’s brass instruments, involves the humanities, acoustics, materials science, instrument making, and sound and music synthesis, and is part of the project Paysages sonores et espaces urbains de la Méditerranée ancienne supported by the École Française de Rome. The presentation will focus primarily on one of the study’s aspects: the creation of virtual copies of five trumpets discovered in 1852 and 1884. These models help us understand their performance as well as their sonic and musical possibilities. The study began by analyzing the current reconstitutions of these instruments, which have undergone several restorations, and establishing a profile that is as exhaustive and as accurate as possible based on a number of indicators. This profile made it possible to calculate, using Resonans, a software program created to assist in the design of wind instruments, the resonance of these trumpets, giving us information about the notes that could be played on them and about a range of characteristics such as accuracy, timbre, and ease and power of sound emission.
After this, thanks to the software program Modalys, a full, real-time model of these instruments was created in Max, letting us effectively test their sonic and musical possibilities.
12:30pm-1pm Knut KAULKE 
“Where do beats begin and where does tonality end? You can find the answer in a forest of steel trees composed of resonant bark during a spectral thunderstorm.” The aim of my research project is the merger of melodic and percussive elements into one conglomeration. Drum sets become more tonal without losing their characteristic noise-like sound.
The ambition is to create entirely new sounds and complex forms driven by beat sequences which can be played via claviature.
During my doctoral thesis I was driven by a passion to discover something new and to increase the understanding of complex molecular networks. In the life sciences, hypotheses often have to be modified due to the functional complexity of living things. Experimental results may disprove theoretical assumptions – a novel, surprising thought can appear! My curiosity and excitement about organic/natural complexity as a scientist also emerge as a musician when I perform sound studies.
My musical research focuses on a merger of melodic and percussive elements into one conglomeration. I approach my theme at two levels: 1. the sound design of percussive elements, and 2. the tonal sequencing in and of these special sounds.
The applied Kyma instruments and effects are almost exclusively based on the SlipStick model. SlipStick operates as an engine of sound synthesis; simultaneously, it induces and influences other forms of synthesis such as frequency modulation, physical modelling, and resynthesis, including their combination and mutual influence. Subsequently, single sounds are multiplied by the Replicator, creating highly complex and lively sounds.
The intended tonality of percussive sounds is underpinned by the modification of Euclidean and non-Euclidean rhythms. These rhythms were chosen because the structure of the algorithms can generate very natural percussive sequences, which can be modified by shifting pitches and other variances, leading to a merging of tonality and a subsequent collapse. A specially developed Kyma sequencer, the core of my setup, can be played in half-tone steps and creates instrumental sounds that will surprise everyone.
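Euclidean rhythms distribute a given number of onsets as evenly as possible across a cycle; a compact generator can be sketched as follows (a generic sketch, unrelated to the Kyma implementation):

```python
def euclidean(pulses, steps, rotate=0):
    """Bresenham-style Euclidean rhythm: True marks an onset. Rotation
    is one of the simple modifications that reshape the pattern."""
    pattern = [(i * pulses) % steps < pulses for i in range(steps)]
    return pattern[rotate:] + pattern[:rotate]

tresillo = euclidean(3, 8)  # the classic 3-against-8 pattern
```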
The content of the demo:
The applied Kyma instruments and effects in my demo performance are almost exclusively based on the SlipStick model. SlipStick operates as an engine of sound synthesis; simultaneously, it induces and influences other forms of synthesis such as frequency modulation, physical modelling, and GrainCloud resynthesis, including their combination and mutual influence. Subsequently, single sounds are multiplied by the Replicator, which can be used in Kyma’s programming environment for multiplying variables (e.g. voices, controller values, instruments, etc.).
In my demo I will show some techniques for exciting SlipStick and for using SlipStick as an exciter of sound synthesis. Furthermore, I will share some modification hints leading to a more lively sound.
Raphael PANIS 
The percussion is a wooden box on legs. It can be played on its top surface with the hands (you can hit or rub the surface). The box itself works as a sound box. Of course, the table makes sound on its own, but the addition of electronics makes it possible to hear a larger range of timbres. Sensors detect when the surface is touched, and the information is sent to a microprocessor (a BELA board). After analysis, the board sends an audio signal that vibrates the sound box, transforming the sound created by the instrument.
This table is part of my final student project, a mixed composition for solo instrument and spatialized electronics using Max with Antescofo, Spat, and the language FAUST. It also includes a video projection on several screens. The work is presented here as an interactive installation and the audience can play on the table; their actions are echoed in the electronic sounds and the image.
1pm-2pm Lunch Buffet Installation: Mux
Session: Career path with secondary-school pupil Session: From the lab to the scene
March 8, International Women’s Day: Women in Sound Professions – The composer Violeta CRUZ
Since 2011, I have been working on a research and writing project based on the musical and staged dialogue between symphonic instruments and electroacoustic sonic objects. Following up on the creation of three objects (the electroacoustic fountain, the little man machine, and the light rattle) that led to 7 concert works and 4 installations and performances, the project has recently expanded to encompass the conception and construction of the sonic décor of my opera La Princesse Légère. This décor was designed in collaboration with the set designer Oria Puppo and the director Jos Houben. In the context of an opera, the theatrical dimension of the objects becomes more important, providing new leads for their musical exploitation and offering new challenges.
Franck VIGROUX and Antoine SCHMITT  
Chronostasis is a temporal illusion that affects the neurons responsible for predicting the immediate future and for listening to music: time seems to stand still. But time is elastic, and a stretched elastic always returns to its original form. The audio-visual performance Chronostasis pushes this logic to its limits by diluting a catastrophic moment with temporal stretches and inversions throughout the performance. The present is frozen and diffracts forever; the past and the future cease to exist. The music is performed live with electronic instruments; the video is generative.
2:30pm-3pm Suguru GOTO 
This work is based on sensor technology, mapping-interface programming, and robotics used to construct virtual instruments, while at the same time exploring the relationship between interfaces and humans (man and machine). Suguru Goto has been Associate Professor at Tokyo University of the Arts since April 2017, where he has been further developing this research and these productions. Building on an interactive environment, the work renders images and sound in a virtual-reality space. As a feature of the reproduction of virtual space in this research, a simulation of zero gravity was conceived during development. The results and knowledge of this research can perhaps be applied to new devices, including new forms of artistic expression, and will create a new flow in the field of experiential expression using sounds and images.
3pm-3:30pm Julia BLONDEAU (IRCAM)
This presentation will focus on the creation of the work Namenlosen, premiered at the Philharmonie 2 de Paris in June 2017 by the Ensemble Intercontemporain. I will discuss the use of the language Antescofo and its connections with Panoramix (the new graphical interface for Spat) and with Csound (for the generation of synthesis in real time). I will provide a few examples of the use of multiple times in writing, as well as a spatialization library with automatic source assignment.
3:30pm-4pm Pedro GARCIA-VELASQUEZ and Augustin MULLER (IRCAM)
Results of the IRCAM/ZKM artistic research residency
The aim of this project is to explore the possibilities of characterizing imaginary and virtual spaces. Rather than trying to create an acoustic simulation of a place, we will explore the expressive and musical possibilities of particular acoustics connected to the evocative power of sound and the expressivity of memory.
After numerous concerts and binaural sound experiences, we created a library of acoustic imprints in Higher Order Ambisonics (HOA) format that can be used in a variety of situations to reproduce the acoustic and oneiric characteristics of existing venues, stressing the immersive possibilities of spatialized listening. This library focuses on the remarkable acoustics of certain places, but also on their poetic nature and their evocative power. These acoustic imprints and a few of the ambiances captured in situ are used here as études, or sketches, that make it possible to explore certain possibilities and offer an acoustic journey through these re-imagined spaces.
4pm-4:30pm Break
Session: Collective interaction and geolocation Session: Gesture – therapy – bioart
4:30pm-5pm Jan DIETRICH 
StreamCaching is a musical project that began in June 2017 in Hamburg. It was launched at the Blurred Edges festival of contemporary music: 10 compositions were commissioned, digitized, tagged with GPS data, and located throughout the city. The public could search for the tracks with a smartphone and listen to them directly on site.
The presentation will cover the history and idea of the StreamCaching project: explaining the concept of locating 10 works of art along the satellite’s ground tracks, showing excerpts from the compositions and the visual art, giving an overview of how the project will continue, and leaving room for discussion of technical and social questions.
Andreas BERGSLAND and Robert WECHSLER 
The MotionComposer is a therapy device that turns movement into music. The newest version uses passive stereo-vision motion-tracking technology and offers a number of musical environments, each with a different mapping (previous versions used a hybrid CMOS/ToF technology). In serving persons of all abilities, we face the challenge of providing the kinesic and musical conditions that afford sonic embodiment, in other words, that give users the impression of hearing their movements and shapes. A successful therapeutic device must a) have a low barrier to entry, offering an immediate and strong causal relationship, and b) offer an evocative dance/music experience, to assure motivation and interest over time. To satisfy both priorities, the musical environment “Particles” uses a mapping in which small discrete movements trigger short, discrete sounds, while larger flowing movements make rich conglomerations of those same sounds, which are then further modified by the shape of the user’s body.
5pm-5:30pm Oeyvind BRANDTSEGG 
The project explores cross-adaptive processing as a drastic intervention in the modes of communication between performing musicians. Digital audio analysis methods are used to let features of one sound modulate the electronic processing of another. This allows one performer’s musical expression on their instrument to effect radical changes in another performer’s sound, which in turn affects the performance conditions for both musicians. The project method is based on iterative practical experimentation sessions: the processing tools and the composition of interaction mappings are refined in each iteration, and different performative strategies are explored. All documentation and software are available online as open source and open access.
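The basic signal flow, one signal’s analyzed feature modulating another’s processing, can be sketched as follows (a minimal stand-in: an envelope follower on one channel controlling the gain of the other; the project’s actual feature set and mappings are far richer):

```python
def cross_adapt(a, b, smooth=0.5):
    """Track the amplitude envelope of signal `a` with a one-pole
    follower and use it to modulate signal `b` (here simply as a gain;
    any processing parameter could be driven the same way)."""
    env, out = 0.0, []
    for sa, sb in zip(a, b):
        env = smooth * env + (1.0 - smooth) * abs(sa)
        out.append(sb * env)
    return out

silent = cross_adapt([0.0] * 4, [1.0] * 4)  # no input energy -> no output
```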
The project is run by the Music Technology section of the Norwegian University of Science and Technology (NTNU), Trondheim. Collaboration partners include De Montfort University, Maynooth University, Queen Mary University of London, the Norwegian Academy of Music, the University of California San Diego, and a range of fine freelance music performers.
The presentation will look at key findings, artistic and technical issues, and future potential.
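The core idea of cross-adaptive processing, a feature of one sound modulating the processing of another, can be sketched as follows. This is a generic illustration under our own assumptions, not the project's actual tools: here the frame-by-frame RMS envelope of one performer's signal ducks the gain of the other's.

```python
import math

def rms(frame):
    """Root-mean-square level of one frame of samples."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def cross_adaptive_gain(modulator, carrier, frame_size=64, depth=1.0):
    """Minimal cross-adaptive sketch: the analyzed feature (RMS) of the
    modulator signal controls the gain applied to the carrier signal,
    frame by frame. Louder playing by one performer ducks the other."""
    out = []
    for i in range(0, len(carrier), frame_size):
        m = modulator[i:i + frame_size]
        c = carrier[i:i + frame_size]
        level = rms(m) if m else 0.0
        gain = max(0.0, 1.0 - depth * level)   # simple ducking law
        out.extend(s * gain for s in c)
    return out
```

In the project itself, any analyzed feature (pitch, noisiness, etc.) could drive any processing parameter; gain ducking is just the simplest case.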
Thomas DEUEL 
The Encephalophone is a hands-free musical instrument and musical prosthetic. Using unique Brain-Computer Interface (BCI) algorithms, it measures the user’s EEG ‘brain-wave’ signals to let them generate music in real time through mental imagery alone, without any movement. It can therefore serve paralyzed individuals as a musical prosthetic. It has been experimentally proven to work with reasonable accuracy, and is now being used in clinical trials with patients who are paralyzed from stroke, MS, ALS, or spinal cord injury. These patients, who have lost their musical abilities due to neurological disease, are empowered to create music in real time for the first time since their injury, without needing movement.
5:30pm-6pm Garth PAINE
Garth Paine presents his artistic residency project in the context of the IRCAM and ZKM residencies.
Future Perfect will be a concert performance using smartphone virtual reality technologies and ambisonic/Wave Field Synthesis sound diffusion. Future Perfect explores the seam between virtual reality as a documentation format for environmental research and archiving nature, combining the thoughts that:

1) ‘nature’ as we know it may, in the near future, only exist in virtual reality archives, and

2) the notion of the virtual, a hyper-real imaginative world contained by a technological mediation, can be presented to individuals as a personal experience.

The Future Perfect performance will not have a fixed point of view. Interactive crowd mapping using smartphone beacons will generate personal journeys through the work and determine each audience member’s own viewing and listening perspectives. The work will draw on the deep expertise at IRCAM in Wave Field Synthesis techniques, which, through smartphone tracking, will allow sonic objects to be attached to and follow people within the concert space. HOA ambisonics will use Spat to create an immersive sound field. Smartphone tracking of people within the concert space will use flocking and spatial spread to drive interactive musical and animation parameters.

The work will be made from 360° VR footage shot by Paine in nature preserves in Paris and Karlsruhe, blended with procedural animations derived from plant images and HOA recordings made by the composer at the same locations. Participants will be able to walk freely through the space, with vector lines being drawn between people subject to proximity and vectors of movement. Other individuals will be indicated in the VR space as outlines to make movement safe and to help develop a collective consciousness.

6pm-7pm Emanuele PALUMBO 
I would like to present my research work as a performative installation that will address three areas of interest: the relationship between the musical gesture and the physiological response of a saxophonist, the relationship between listening and the physiological “resonance” of a physiological performer, and finally, the relationship between the two. These different types of interaction will be explored throughout the spaces and moments of the installation, which, via a computer, automatically generates its form. The physiological parameters of the musician and physiological interpreter are captured via the LISTEN system, processed by a computer, and used to create—in real time—both the electronic sounds and the score. The physiological interpreter is a dancer who will take different positions in the space: standing, sitting, lying down. Here my work is combined with that of a colleague, the choreographer Zdenka Brungot Svitekovà. Zdenka works on the somatic power of certain techniques to manipulate the body: another dancer will be responsible for managing the “physiological interpreter”, creating changes in the quality of the fabric of the body; the result will therefore be a change in the music generated. The people who enter the installation space are also invited to interact with the physiological interpreter. In a third part of the installation, we will explore the relationship between the saxophonist and the physiological interpreter.
This installation will also display the technology used and the data captured. Monitors with this information will be displayed to the participants.
At the end of the 30-minute performance, I will present the installation.


Time Nono ClassRoom Shannon Meeting and ClassRoom
10am-10:30am Alexander MIHALIC 
Sampo is an extension for a performer playing an acoustic instrument. It was designed to play all types of electroacoustic music with an acoustic instrument. It gives direct access to the settings: triggering sound files and triggering control sequences. Sampo lets the performer play works already in the mixed-music repertory, a repertory of several hundred works written since the 1960s. Works—ensembles of electroacoustic configurations and fixed contents—are accessible via a graphical interface on the Sampo’s touch screen. Distribution of the electroacoustic settings and sound files is carried out on a server and is available from the Sampo, which is equipped with a Wi-Fi connection and automatic access to the database.
10:30am-11am Alexander MIHALIC 
11am-12pm Marco LIUNI and Emanuele PALUMBO
This hands-on session will let participants experiment with physiological sensors (breathing, heartbeat, electrodermal response) and use them to send streams of messages to a Max patch. Musical applications will be proposed, in line with the work carried out for the piece Artaud overdrive (2015-2016) for ensemble, three Listen devices, and electronics.


7:30pm–10pm MEET-UP HACK DAYS
Audio professionals community

Studio 5

Friday 9th, March


Conference room, demos and posters

Time Stravinsky Conference Room Studio 5 – Demo and Workshops
Session: Notation – perception – languages
9:30am-10am Daniel MANCERO BAQUERIZO 
In the electroacoustic music field, compositions based on soundscapes, or “soundscape compositions”, can be characterized by the presence of sound elements from the environment throughout the repertoire. In contrast to sound art, this is a form of composition that uses musical language: rather than the result being a soundscape, it is a composition that uses a sound environment as source material for musical creation, implying a structural strategy for composition. In my PhD thesis, I start with the idea that these compositions, using a very specific logic of sound organization on a poetic-perceptive level, can be distinguished and classified into 3 groups according to morphological criteria applied to the sound mass: 1/ spectral and brilliance flattening, 2/ distribution, spectral symmetry, and brilliance, 3/ inharmonicity, amplitude, and brilliance. After characterizing and categorizing a corpus representative of the repertoire [Mancero, Bonardi, and Solomos 2017], I developed computer tools for the segmentation, description, instantiation, and harmonic analysis of the pertinent sound materials with the goal of consolidating a few harmonic models for musical composition. These models respond to two logics of analysis [Bregman 1990]. The first follows the principle of sequential grouping of formant frequencies. The second follows the principle of simultaneous grouping of formant peaks, made up of chords and non-octave modes. I developed a few patches for acoustic segmentation and description with the MuBu and PiPo libraries [Norbert Schnell] and ircamdescriptors [IRCAM’s Analysis/Synthesis team]. I also developed a few tools for harmonic analysis in Max that primarily use the FTM & Co and Gabor libraries [IRCAM’s Real-Time Musical Interactions team] and bach [Andrea Agostini & Daniele Ghisi]. In addition, I used the EAnalysis software [Pierre Couprie] for the characterization of pertinent materials and the choice of acoustic descriptors.
A few musical compositions were carried out during the research and perfection of the tools for harmonic analysis, making it possible to associate research with creation, notably “Chant Elliptique n°2” for Celtic harp and electronics, “la rugosité de la nuit” for accordion and electronics, “Turgescences” for mandolin, guitar, and flute, and “Estambre urdido” for an ensemble of 5 percussionists.
10am-10:30am Nadir BABOURI
A presentation of an Antescofo library that converts mathematical parametric functions into curves controlling the trajectories of Spat sources. The outcome of these processes is X, Y, and Z coordinates, or azimuth, elevation, and distance values, that you can scale and convert to suitable data.
My aim is to present the library and to propose a collaboration with a Forum member, a programmer, to develop it further and achieve simpler, human-readable scripting, similar to Python's ‘turtle’ library.
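The conversion step the abstract mentions, from a parametric function in Cartesian coordinates to azimuth/elevation/distance triplets, can be sketched as follows. The sample circle function and the angle conventions (azimuth measured counterclockwise from the x axis) are our own assumptions, not necessarily the library's.

```python
import math

def circle_xyz(t, radius=2.0, height=0.0):
    """A sample parametric function: a horizontal circle around the listener."""
    return radius * math.cos(t), radius * math.sin(t), height

def to_aed(x, y, z):
    """Convert Cartesian coordinates to (azimuth, elevation, distance),
    angles in degrees, the kind of triplet handed to Spat sources."""
    distance = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(y, x))
    elevation = math.degrees(math.asin(z / distance)) if distance else 0.0
    return azimuth, elevation, distance
```

Sampling `circle_xyz` over time and feeding each point through `to_aed` yields a circular trajectory around the listener at a constant distance.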
10:30am-11am Andrea AGOSTINI 
The project I present here is a simple textual programming language meant to ease the manipulation of the data structures of bach (an extension of Max for musical representation and computer-aided composition). Its goal is to facilitate the representation of non-trivial processes and algorithms, specifically in the context of musical formalization. This is, in general, not easily achievable in Max without resorting to writing code in some established, general-purpose programming language (C, C++, Java, JavaScript, and more) through dedicated APIs whose bindings with Max tend to be cumbersome and, in some cases, inefficient because of the deep underlying differences in programming paradigms and data representations. On the contrary, the language I propose is meant to be extremely simple and tightly integrated with the Max environment and the data structures of bach. The project is the outcome of the Musical Research Residency I carried out at IRCAM in 2017.
11am-11:30am David ZICARELLI and Emmanuel JOURDAN 
We will present the latest news from Cycling ’74 regarding Max and Max for Live.
11:30am-12pm Break
12pm-12:30pm Denis BEURET
I developed a program that simulates a virtual quintet (drums, bass, piano, and another instrument) accompanying one or more jazz soloists. This ensemble grooves and has a wide range of controls for the rhythmic parameters of the groove. It is easy to make all the instruments play together following a soloist, give them more freedom, have them play polyrhythmically, follow different inputs individually (audio or MIDI file, microphones, OMax, etc.), control different parameters of the performance, and so on. This gives a more natural feeling to the instruments. The ensemble can also follow variations in tempo and variations in the intensities of any given inputs. Concerning harmony, six-note chords are generated in real time depending on the notes played by the instruments. This virtual quintet is an innovative tool that can be set up to play different styles with a broad range of sounds.
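One way to picture the real-time six-note chord generation is sketched below. This is purely illustrative: the interval stack and the choice of the lowest input note as root are our assumptions, not the program's actual voicing rules.

```python
def six_note_chord(input_notes, stack=(4, 3, 4, 3, 4)):
    """Hypothetical sketch: build a six-note chord from the notes
    currently played, by taking the lowest note (MIDI pitch) as the
    root and stacking five intervals (in semitones) above it."""
    if not input_notes:
        return []
    root = min(input_notes)
    chord = [root]
    for interval in stack:
        chord.append(chord[-1] + interval)
    return chord
```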
David ZICARELLI and Emmanuel JOURDAN (Cycling `74)
Meeting Max experts
12:30pm-1pm Juan Manuel ABRAS CONTEL 
‘Diálogos franciscano(n)s’, for flute and electronics, draws on new technologies—voice and sonic modelling (provided by IRCAM Trax v3), audio spatialization, spectral morphing—as well as musical ekphrasis, numinosity, intertextuality, and extended performance techniques. The piece is an artistic transmedialization of a stained-glass window made by Frère Éric for the Église romane de Taizé (France) depicting St. Francis of Assisi surrounded by six birds. The listener seems to hear six different sound sources (representing birds) appear at four equidistant spots (representing trees) and start emitting signals from the periphery to the center, where the flutist (representing St. Francis) is located, while rotating counterclockwise at constant speed and transforming into children’s voices (representing angels), just before the birds seem to take off rapidly. The flutist rotates almost constantly on his or her vertical axis to follow the apparition of these sound sources and answers each of the emitted bird songs on the flute by playing, in a somewhat canonical way (hence the title), their corresponding transcriptions, which are connected by passages characterized by the use of extended performance techniques.
1pm-2pm Lunch Buffet Lunch Buffet
2pm-2:30pm Demo-poster 
Robert LISEK Performance
Music Dices is one of the student projects created in the framework of the master’s program in Music Design at the Faculty of Digital Media of Furtwangen University, started last autumn. The installation consists of a musical dice game that allows for creating music arrangements by chance. In the game, up to three players throw foam dice into the room. The motion and position of the three dice determine the concatenation and combination of three music tracks (i.e. drums, rhythm guitar, and lead guitar). The implementation of the game is entirely based on mobile web technologies and integrates the Soundworks framework developed at IRCAM.
Design: Tanita Deinhammer, Supervisor: Prof. Dr. Norbert Schnell

Aurelian BERTRAND – ZEF, the Electric Violin
3pm-3:30pm Forum : perspectives and new platform


Time Nono ClassRoom Shannon Meeting and ClassRoom
9:30am-10am Hugo SILVA (PLUX) and ISMM Team (IRCAM)
Physiological data has had a transforming role in multiple aspects of society, going beyond the health sciences domains with which it was traditionally associated. While biomedical engineering is a classical discipline in which the topic is amply covered, today physiological data is a matter of interest for students, researchers, and hobbyists in areas ranging from the arts to programming and engineering. Regardless of the context, the use of physiological data in experimental activities and practical projects is heavily bounded by the cost of, and limited access to, adequate support materials.
In this workshop we will focus on BITalino, a versatile toolkit composed of low-cost hardware and software, created to enable anyone to create projects and applications involving physiological data. The hardware consists of a modular wireless biosignal acquisition system that can be used to acquire data in real time, interface with other devices (e.g. Arduino or Raspberry Pi), or perform rapid prototyping of end-user applications. The software comprises a set of programming APIs, a biosignal processing toolbox, and a framework for real-time data acquisition and postprocessing.
10:00am-11am Hands-on
Alain BONARDI ( IRCAM)  Philippe GALLERON, Eric MAESTRI, Jean MILLOT, Eliott PARIS, Anne SEDES ( University Paris 8) 
The ANR-funded MUSICOLL project (2016-2018) aims at redesigning the musical practice of real-time graphical computing in a collaborative manner. Hosted at the Maison des Sciences de l’Homme Paris Nord, it brings together the Centre de Recherche en Informatique et Création Musicale (CICM), belonging to the MUSIDANSE Lab at Paris 8 University, and OhmForce, a company specialized in collaborative digital audio. In this framework, we are developing Kiwi, an environment for real-time collaborative music creation that enables several creators to work simultaneously on the production of a sound process hosted online. We are concurrently designing a new course in patching for beginners that will be taught at Paris 8 University from February to May 2018. During this hands-on session, we will offer participants a first introduction to Kiwi on their laptop computers as well as a first collaborative approach to patching.


1:30pm-4pm Diemo SCHWARZ
Dirty Tangible Interfaces (DIRTI) are a new concept in interface design that forgoes the dogma of repeatability in favor of a richer and more complex experience, constantly evolving, never reversible, and infinitely modifiable. We built a prototype based on granular or liquid interaction material placed in a glass dish, that is analyzed by video tracking for its 3D relief. This relief, and the dynamic changes applied to it by the user, are interpreted as activation profiles to drive corpus-based concatenative sound synthesis, allowing one or more players to mold sonic landscapes and to plow through them in an inherently collaborative, expressive, and dynamic experience.
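The chain from tracked relief to synthesis control can be pictured with a small sketch. This is our own simplification, not the DIRTI implementation: the relief arrives as a 2D grid of heights, and every cell above a threshold becomes an activation that could drive grain selection and amplitude in a corpus-based concatenative synthesizer.

```python
def relief_to_activations(relief, threshold=0.1):
    """Minimal DIRTI-style control sketch: scan a 2D grid of heights
    (the video-tracked 3D relief of the interaction material) and emit
    one (x, y, weight) activation per cell above the threshold."""
    activations = []
    for y, row in enumerate(relief):
        for x, h in enumerate(row):
            if h > threshold:
                activations.append((x, y, h))
    return activations
```

In a full system, the (x, y) position would index into the sound corpus's descriptor space and the weight would scale the grain's amplitude, so that reshaping the material continuously reshapes the sonic landscape.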

Salle Shannon


8pm-9:30pm La princesse légère

Théâtre national de l’Opéra comique


Time Studio 1 Under the glass roof
10am-4:30pm Pedro GARCIA-VELASQUEZ and Augustin MULLER (IRCAM)
Presentation of the IRCAM/ZKM artistic research residency
The aim of this project is to explore the possibilities of characterizing imaginary and virtual spaces. Rather than trying to create an acoustic simulation of a place, we will explore the expressive and musical possibilities of particular acoustics connected to the evocative power of sound and the expressivity of memory.
After numerous concerts and binaural sound experiences, we created a library of acoustic imprints in High Order Ambisonics (HOA) format that captures the acoustic and oneiric characteristics of existing venues and can be used in a variety of situations, stressing the immersive possibilities of spatialized listening. This library focuses on the remarkable acoustics of certain places, but also on their poetic nature and their evocative power. These acoustic imprints and a few of the ambiances captured in situ are used here as etudes, or sketches, that make it possible to explore certain possibilities and offer an acoustic journey through these re-imagined spaces.
Tristan SOREAU
void -. ..- . .() is an interactive installation that places the visitor in a listening situation opposite a digital biotope. The listener finds herself in the presence of a simulator of biological entities, an invisible swarm with only a sonorous manifestation. The quieter the visitor, the more the swarm makes itself manifest. Conversely, the louder the visitor, the quieter the swarm. The starting point for this work is the desire to let the behavior of swarms (be they birds or insects) be heard; swarms present unusual characteristics well suited to spatialized sound. We can observe a cloud that is both a mass and a sum of individuals; the movement of a swarm can be seen both as an ensemble and as individual trajectories. On one hand, the installation is made up of a program—belonging to the family of multi-agent systems—that simulates the movements of a digital cloud. These movements are decided based on the public’s actions. The quieter the audience, the more the cloud manifests itself and the more individuals can be found in the cloud. Conversely, the louder the visitors, the sparser the cloud, which assumes an escape behavior. On the other hand, the sound manifestation of the cloud is carried out in an “environment” made up of nests designed algorithmically with forms borrowed from nature. The mass’s movement is re-transcribed via a sound layer that moves from one nest to another depending on the virtual movement generated by the program. Individual trajectories manifest themselves through succinct sounds that are scattered through the space. void -. ..- . .() is born of the association of a program that simulates the behaviors of swarms with the fabrication of an environment in which the swarm can manifest itself through sound, in reaction to the public’s behavior.
The installation troubles the listener by simulating a swarm equipped with affects; it makes one believe there is a biological entity, forcing a paradoxical strangeness to surface: the entity is completely digital, yet its behavior and the nature of its manifestation seem organic.
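The audience-driven swarm behavior described above can be reduced to a toy multi-agent update rule. All names and the specific attraction/escape law are our own assumptions, not the installation's code: a quiet audience pulls agents toward the nests' center, while a loud audience pushes them away in an escape behavior.

```python
def update_swarm(agents, center, loudness, step=0.1):
    """Toy sketch of the installation's multi-agent logic. Each agent is
    an (x, y) position; loudness is assumed normalized to 0..1. The net
    pull toward the center is positive when the room is quiet and
    negative (escape) when it is loud."""
    new_agents = []
    for (x, y) in agents:
        pull = 1.0 - 2.0 * loudness   # +1 when silent, -1 at full loudness
        dx = (center[0] - x) * pull
        dy = (center[1] - y) * pull
        new_agents.append((x + step * dx, y + step * dy))
    return new_agents
```

A real implementation would add the usual flocking terms (separation, alignment, cohesion between agents) and map each agent's position onto the nests for sound diffusion.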

Last update : March 6, 2018