All conferences and pre-recorded videos will be in English.
Sound design and multimedia interactions - Pre-recorded videos
- Minh Boutin, Marin Scart, Jean-Baptiste Krauss (Ecole Estienne) : Bolidarium
- Rosie Ann Boxall (Royal College of Art) : Magical Stones of Old Britain
- Luxury Logico, Andrea Cera (C-Lab Taïwan) : Points of Light in the Night : A Conversation on The Insomnia Sketchbook
- Nicholas Faris (Royal College of Art) : Performative Audio Feedback
- JP Guerrier (Royal College of Art) : Texture Translation
- Lera Kelemen (Royal College of Art) : Tender Controller
- Clovis McEvoy : Pillars of Introspection - a work for virtual music
- Wu Shangyun (Royal College of Art) : Human Who Seeks Eternity
- Carlo Siega (Anton Bruckner Private University) : Re-actualisation practices through video-art music. A case study.
- Shontelle Xintong Cai (Royal College of Art) : Launch a dream
- Buket Yenidogan (Royal College of Art) : Becoming with
- Benjamin Zervigón (Peabody Conservatory) : Soundings in Fathoms - reconstructing the concert experience in digital media
- Nicolas Misdariis, Diemo Schwarz (Perception and Sound Design Team) : SkataRT, sound design and creation tool
- Nicolas Misdariis, Patrick Susini, Romain Barthelemy (Perception and Sound Design Team) : SpeaK – Lexicon of sounds for sound design
- Frédéric Bevilacqua, Diemo Schwarz, Jean-Etienne Sotty (ISMM Team) : News from the Sound Music Movement Interaction Team
- Jean Lochard (Ircam) : New Ircam Forum Max For Live devices
- Leon Hapka (Royal College of Art) : Sonic Harmony Systems
- Qinyi Wang (Royal College of Art) : Self Independences and groups
- Wenqing Yao (Royal College of Art) : Land Speech
- Nirav Beni, Lucy Papadopoulos, Susan Atwill (Royal College of Art) : Spin
- Carina Qiu (Royal College of Art) : < Spinning Brother __/''''\__ *3 >
- Rosie Ash (Royal College of Art) : Home
- Frances Allen (Royal College of Art) : Rubiks Cube
- Jiuming Duan (Royal College of Art) : Motion-sonic Installation
Audio Production - Pre-recorded videos
- Charles-Edouard Platel : Ondolon, sculpture of recorded sound
- William Petitpierre : IRCAM Technologies - vector of expressiveness
- Benjamin Hackbarth (University of Liverpool) : Concatenative Sound Synthesis with AudioGuide / Workshop AudioGuide
- Kei Nakano (Osaka University of Arts) : New Sounds by the "Through the Looking Glass Guitar"
- David Shulman (Royal College of Art) : Journey into the Body Electric
- Jean-Marc Jot (iZotope) : Objects of immersion - unlocking musical spaces
- Pierre Guillot (Ircam) : Above AudioSculpt: TS2 and Analyse
- Thibaut Carpentier, Olivier Warusfel (Ircam) : News from the EAC Team
- Thomas Hélie, Robert Piéchaud, Jean-Etienne Sotty : News from the S3AM Team
- Paul Escande (ENSA, TALM-IRCAM) : Augmented Architectures
Composition - Pre-recorded videos
- Vincenzo Gualtieri (Conservatory of Music of Avellino) : the (BTF) project
- Kai Yves Linden : Anatomical insights into "Vertical Structure"
- Marco Bidin (ALEA) : OMChroma new tutorials and documentation
- Johnny Tomasiello : Moving Toward Synchrony
- Claudia Jane Scroccaro (HMDK Stuttgart) : Detuning the space in I Sing the Body Electric (2020), for Double Bass and Electronics
- David Zicarelli, Emmanuel Jourdan (Cycling '74) : Max News
- Miller Puckette : Modular software and remote workflows
- Karim Haddad, Jean-Louis Giavitto (RepMus Team Ircam) : News (OM and Antescofo)
- Jacopo Greco d'Alceo (CNSM Lyon) : Hybridization, a mind-body problem: live coding
- Johann Philippe (CNSM Lyon) : Digital-analog hybridization
- Nishi Daiki (HMDK Stuttgart) : Creations of Instrumental Music Made by the Noise in the Natural Phenomenon
- Bengisu Onder (HMDK Stuttgart) : Creation of supernatural formantic sounds with OM Chant Library in Open Music
- Delia Ramos Rodriguez, Brandon Lincoln Snyder (HMDK Stuttgart) : Meta-Music: Creating Multimedia Music From a Multilayered Compositional Process
- Niklas Lindberg (HMDK Stuttgart) : Presenting a piece for a vocal performer and electronics
- Eduardo Valiente (HMDK Stuttgart) : OM-Chant as a filtering tool
Artificial Intelligence - Pre-recorded videos
- Matt Lewis (Royal College of Art) : Becoming Soundscape
- Robert B. Lisek (Institute of Advanced Study CEU) : Meta-composer
- Jason Palamara (Indiana University - Purdue University) : AVATAR - A Machine Learning Enabled Performance Technology for Improvisation
- Anna Huang (Google Magenta) : From generative models to interaction to the AI Song Contest
- Doug Eck (Google Magenta) : An overview of AI for Music and Audio Generation
- Jérôme Nika (Ircam) : Interaction with musical generative agents
- Neil Zeghidour (Google Magenta) : From psychoacoustics to deep learning: learning low-level processing of sound with neural networks
- Axel Roebel (IRCAM) : Xtextures - Convolutional neural networks for texture synthesis and cross synthesis
- Philippe Esling (RepMus Team Ircam) : Tools for creative AI and noise
Voice - Pre-recorded videos
- Fabio Cifariello Ciardi (Conservatorio "F.A. Bonporti" di Trento e Rive del Garda) : Shrinking voices: strategies and tools for the instrumental transcription of speech segments
- Greg Beller (Ircam) : Melodic Scale and Virtual Choir, Max ISiS
- Axel Roebel, Nicolas Obin, Yann Teytaut (AnaSynth Team Ircam): Deep learning for Voice processing
- David Guennec (Viadialog) : Towards helpful, customer-specific Text-To-Speech synthesis
Sound design and multimedia interactions
BOLIDE is a multidisciplinary collective of young designers whose ambition is to bring together graphic, interactive and sound design in interactive experiences and installations.
Bolidarium
The BOLIDARIUM is an interactive musical playground that aims to turn its music into a vector for social bonding. Its design mixes algorithmic creation and physical interaction to drive a participatory performance, serving as a device for social and musical experimentation.
--------------------------------------
Rosie Ann Boxall is a London based sound designer and installation artist. After her background training in sound for TV, film and radio, she has expanded her practice at the Royal College of Art into art installations for education. She believes that the future of education is interactive, moving away from linear curriculums and towards more positive learning spaces and practices. She aims to do this by creating sound installations designed to teach art, history and science, as well as listening and communication skills.
Magical Stones of Old Britain
This is a sound sculpture focused on teaching children about British folklore and mythical creatures. Taking the form of a stone circle, speakers play spoken versions of myths from the British Isles for interactors to listen to and engage with. There are five stones, representing five folkloric figures, each with their own personality and way of telling a story. For IRCAM, I aim to present one completed stone through a video demo and gather feedback for my final project at the RCA.
--------------------------------------
Andrea Cera is an Italian electroacoustic composer and sound designer. His musical work is concerned with hybridization and distraction. His design activity explores cross-modality and intrusiveness. He collaborates with IRCAM's Sound Perception & Design team, and with the InfoMus Research Centre of the University of Genova. Among his creations: the music for works by choreographer Hervé Robbe, director Yan Duyvendak and the ricci/forte company; sounds for Renault's electric concept cars; and algorithms for several movement sonification projects. He received a nomination in the category "Research & Development" at the International Sound Awards 2018 for the sound design of Renault's Symbioz concept car.
Based on the concept of the “hybrid,” Luxury Logico was created by four contemporary artists born in the 1980s: Chih-chien Chen, Llunc Lin, Keng-hau Chang (1980-2018) and Geng-hwa Chang, known for their lighthearted style centred on the idea of “DELIGHTFUL WEIRD-LAND.” Inspired by the natural environment, tackling thoughts and ideas that fill the spectacles of contemporary society, and integrating modern technology with the cultivation of the humanities, they represent their ideas via “music,” “visuality,” “installation,” and “text”; their works of fantasy manifest in various forms and genres, including drama, movies, dance, architecture, pop music and economic behavior. Given force by this unceasing integration, LuxuryLogico comes into being.
Points of Light in the Night : A Conversation on The Insomnia Sketchbook
LuxuryLogico and Andrea Cera are working together to construct a sketchbook comprising short stories that take place in the darkness: these stories revolve around the sound fields or sounds of consciousness, clumsiness, pureness, dances, conflicts and destruction, as if we were half awake and eulogizing the mirages amidst dreams and realities under the guidance of a light spot. There is always a paradox in the mechanized simulation of life. How can mechanical movements be demarcated from natural ones? Which kind of voice can animate mechanical movements? Which type of acoustic mapping in spatial environments will interact with these mechanical devices? How will ambient soundscapes affect people’s perception of light?
--------------------------------------
Nicholas Faris is a London based sound designer and musician whose interests lie in natural phenomena, spatial listening and cognitive perception. With a background in performance and creative music technology, Nicholas is currently studying for an MA in Sound Design within Information Experience Design at the Royal College of Art. In recent work, Nicholas has been exploring the sonic properties of material as a sculptural tool to explore composition and arrangement beyond conventional forms of musical listening. His current research investigates how silence, in spiritual and meditative practices, informs responses to ‘noise’ within creative practice. The majority of his work consists of compositional pieces influenced by techniques used in electroacoustic music and musique concrète. Nicholas’ compositional work explores the crossover between synthesis and organic material to exploit natural sonic gestures and tonal content found within his recordings.
Performative Audio Feedback
This project shows some developments utilising audio feedback and its potential to build sculptural instruments with composition and performance in mind. The intention of this project is to merge musical gesture and timbre as a way of generating complex sonic palettes from materials and hardware. The importance of gesture in this project refers to Smalley’s theory of spectromorphology and the importance of motion, gesture and its surrogates. This project has been particularly informed by third-order surrogacy and the potential to ignite curiosity within the listener through a symbiotic movement between sonic gesture and interaction. Cybernetics, as a process-oriented way of designing, has influenced the discourse of these experiments, which benefit from the interactivity between the performer and the behaviours of audio feedback.
--------------------------------------
I am a sound designer and artist, currently studying Sound Design at the Royal College of Art as part of the MA Information Experience Design course. Through my practice I am interested in investigating the sonic possibilities of texture, along with our relationships to human-made and natural materials and spaces. I am also interested in the interaction of art and science, and the possibilities of cross field collaboration and public outreach. I come from a background in theatrical sound design, having previously graduated from drama school in London and worked on various productions as a sound designer. Performance is another focus of my practice, and I am interested in bringing experimental sound design and new sonic perspectives to dance and theatre practice.
Texture Translation
The piece is an investigation into the sonic relationship between touch, gesture, and texture. The concept behind my project is to find ways to listen to our interactions with our immediate environment, composing by using the objects and materials around us, and discovering new ways to play with our surroundings. It is easy to take for granted the myriad textures which surround us on a daily basis; my piece allows us to consider these textures in a different context, and “zoom in” to their micro detail, providing a new method of examining the spaces we inhabit. The piece will take the form of a performance, with sound reflecting the body’s movements and interactions with its surroundings and material environment.
--------------------------------------
Lera Kelemen (b. 1994) is an artist and sound designer from Timișoara, Romania, currently completing her MA in Sound Design at the Royal College of Art in London, UK. Her practice has an interdisciplinary approach working predominantly with installations and sound through which she interrogates notions of spatiality, interactivity, body, personal narratives and the dynamics of the built environment. She graduated with a degree in Fine Arts in 2018 and was the recipient of the Art Encounters Award for her graduation project. She has participated in a number of residencies and coproductions across Timisoara, Bucharest, London and Hannover.
Tender Controller
Tender Technology is a series of design-fiction objects that attempt to envision a future in which technology has a tender rather than technical function, where the body is re-centralised as the main interface that triggers digital signals. In an increasingly digitised world, where devices are optimised to perform functions fast in order to be profitable, the body becomes an accessory that performs mechanical gestures, and therefore a mere extension of the machine. Aspects of care, tenderness, warmth, and even affection become remote, although it is the body through which they are usually experienced.
--------------------------------------
Clovis McEvoy is an award-winning composer, sound artist and researcher currently based in Berlin, Germany. Clovis specialises in music for virtual reality, interactive music and works for instruments and electronics. His works have been performed in America, France, England, Italy, South Korea, Australia, New Zealand, Germany, Switzerland and Portugal. From February-August 2020 Clovis was hosted as the Artist-In-Residence by the TAKT Institute in Berlin, Germany and by DME-Seia at the Serra da Estrela nature reserve, Portugal. Clovis’ most recent project is a collaboration with architectural firm, Pac Studio, on a new virtual reality sound installation that premiered at Ars Electronica’s ‘Aotearoa Garden’ in New Zealand and will be exhibited at the Centre of Contemporary Art, Christchurch from late 2020 to early 2021.
Pillars of Introspection - a work for virtual music
This presentation examines one of my own works written for the emerging medium of virtual music – i.e., musical works designed to be experienced primarily within a virtual reality environment. Pillars of Introspection (2019) explores issues of disempowerment, empowerment and the journey between these extremes through shifting levels of interactivity, spatial constructs and isomorphic audiovisual materials. The development of Pillars of Introspection has been a highly experimental process for me as a composer. Without any well-trodden repertoire to look to, or any formal training in coding or game design, my creative direction and technical skills were forced to evolve in tandem through a process of intuition, trial and error and inadvisable amounts of caffeine. Through discussion of the work and its development, the presentation offers both an artistic and a technical overview of the project, highlighting in particular the area of embodied interactivity and how forms of interaction can serve as practical metaphors for a work’s underlying themes. The software elements of the work’s construction are also detailed, with in-depth descriptions of how Unreal Engine 4, FMOD Studio and Max may be utilized by other composers working in this field. Finally, my own perspective on the work’s successes and flaws is outlined, and possible future developments in this area of compositional practice are discussed.
--------------------------------------
Shangyuan Wu, a multimedia and sound designer from Taiwan, is one of the sound design students in the IED program at the Royal College of Art. Her research focuses on digital media, sonic interaction and soundscape composition. She was selected as one of the artists of the marine science residency program at Science Gallery Venice in 2020. She has also composed for children's film festivals and designed experiments for Academia Sinica in Taiwan. Papers she co-authored with Academia Sinica were selected as conference papers for DH 2019 and DH 2020.
Humans Who Seek Eternity
Music is time. Time is the dimension that humans have not passed over. We spend time pursuing knowledge, wealth, fame and status, then we pursue time itself, hoping to live longer. In the end, we pursue eternity. The project 《 Humans who pursue eternity 》 therefore tries to describe this intention of pursuing eternity through a web-based sonic interaction artwork. The music in the work is the concrete form of time; the soundscape the audience hears at any moment is the present of the space. Pursuing is an action that is translated into the audience's moving speed. Speed detection and sound playback rely on the audience's cell phones, using their internet connection and built-in gyroscope. The music (sounds) change with the audience's movement. When the space holds more than two audience members, the sounds and movements together create a multidimensional meaning of humans pursuing eternity.
--------------------------------------
In 2018 Carlo Siega won the renowned Kranichsteiner Music Prize for interpretation at the Darmstadt Summer Courses for New Music. As a performer devoted to contemporary music, he has actively worked with composers such as Peter Eötvös, Pauline Oliveros, Stefan Prins, Rebecca Saunders, Simon Steen-Andersen, and many others. He has performed as a soloist and with ensembles in many venues throughout Austria, Belgium, Croatia, Finland, Germany, Italy, Poland, Slovenia, Spain, Sweden, and elsewhere. As a scholar, he has been invited as a lecturer at the ‘PARL_Next Generation’ Symposium (Linz), the Re-envisaging Music Conference 2020 at the Accademia Chigiana (Siena), The 21st Century Guitar Conference (Lisbon), and others. After completing his music studies in Venice, Milan and Brussels (Ictus Ensemble Academy), and philosophy studies at Ca' Foscari University in Venice, he is currently a doctoral candidate at the Anton Bruckner Private University in Linz, Austria.
Re-actualisation practices through video-art music. A case study.
Within the current experimental music field, new approaches embrace transdisciplinary practices that also belong to other performative and visual arts. New technologies extend the performative possibilities of acoustic sounds, translating musical gestures into images. This generates so-called video-art music, in which the world of the image augments the sound production of music compositions, unifying the aural and visual senses in the same artwork. Here the concert format is extended, and the bodily presence of the musician is explored as part of the composition itself. How does the musician's performativity answer to the visual materiality of the sound? Does it affect the perception of the performativity itself? The purpose of this contribution is to describe an approach within interpretation practice in which video-media technologies serve as a creative tool for a re-actualisation process of already existing repertoire. The intervention proposes a case study based on a 'remaking' of "Serenata per un Satellite" by Bruno Maderna (1920 – 1973) for guitarist and random video-electronics. The core of this presentation is the performer’s creative agency, which extends and converts the 'music composition' into a performative video installation through a Max/MSP & Jitter environment. The aim is to underline the internal performative patterns of the re-interpretation process, in which the video medium augments the performer's physical engagement. The remaking of a musical artwork opens towards new perceptions of bodily performativity, where musical and visual domains are connected and hybridised.
--------------------------------------
Shontelle Xintong Cai is a research-based artist and designer, currently based in Toronto and London. She engages with posthumanism, scientific systems, informatics, transmedia storytelling, and diverse approaches to cross-sensory interactive experience. Her art and design practices are influenced by her academic background in visual communication, human ecology, and plant physiology. Her works create innovative dialogues, envisioning the relationships between post-human and non-human entities. She experiments with information visualization and material interactivity via graphics, audio-visual communication, creative coding and installation. She opens discussions with her audience on emerging technologies, synthetic bio-design, and tangible objects through the lenses of political ecology and animism.
Launch a dream
“Launch A Dream” is a project based on telemetry data analysis of a stratospheric balloon flight, interacting with sonic information produced by data sonification and sonic improvisation. My intention is to create an immersive sonic experience of space exploration, particularly with a stratospheric balloon, enabling my audience to get involved and improvise a part of the sound making during the performance. After "launching the dream of space flight", the audience reflects on the relationship among selfhood, system, and space. The installation is presented in two parts: a demonstration of data sonification, and sonic interaction based on the logic of the autopoietic system. I embed systemic thinking in the design of the interactive installation. An autopoietic system is a self-creating and self-organizing system, sometimes needing outside activity to trigger certain actions and maintain its balance or functions. I consider my participants' actions as an element of foreign, low-level randomness in an autopoietic system, activating a part of this complex system. The two dependent autopoietic systems intertwine with each other and influence the collective environment. The effectiveness and stability of the balloon flight are affected by the Earth's magnetic field. I thus test the adaptability and application of the autopoietic system, experimenting with how the system stimulates human perception of, and interaction with, the data sonification and a DIY instrument. I analyze open data from the Canadian Space Agency (CSA) database from 2018 and create moving images and a soundtrack from the telemetry data collected during the balloon flight.
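Since the abstract centres on a data-sonification stage that turns balloon telemetry into sound, here is a minimal parameter-mapping sketch of that general idea. It is not the artist's pipeline: the CSV layout, the "altitude_m" column name and the altitude-to-pitch mapping are assumptions made purely for illustration.

```python
# Hypothetical sketch: map one telemetry column (altitude) onto oscillator pitch,
# rendering one short windowed sine grain per telemetry sample.
import csv
import numpy as np
from scipy.io import wavfile

SR = 44100
GRAIN = 0.05  # seconds of audio rendered per telemetry sample

def sonify(csv_path: str, out_path: str) -> None:
    # Read the (assumed) "altitude_m" column of a telemetry log.
    with open(csv_path) as f:
        alt = np.array([float(row["altitude_m"]) for row in csv.DictReader(f)])
    # Normalise altitude to 0..1, then map onto a two-octave range above 220 Hz.
    norm = (alt - alt.min()) / (np.ptp(alt) or 1.0)
    freqs = 220.0 * 2.0 ** (norm * 2.0)
    grains = []
    for f0 in freqs:
        t = np.arange(int(SR * GRAIN)) / SR
        grains.append(np.sin(2 * np.pi * f0 * t) * np.hanning(t.size))
    wavfile.write(out_path, SR, (np.concatenate(grains) * 32767).astype(np.int16))
```

Any other telemetry channel (pressure, temperature, position) could be routed to a different sound parameter in the same way.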
--------------------------------------
Using creative coding, 3D design and ML technologies as well as paint and film as media, I create multisensory experiences, experimental musical instruments and performances which communicate the information gathered from philosophy, science, research and self-discovery. Real-time is a fascination for me since I believe it can capture the ever slippery “now”. My process usually focuses on translation of information between human sensorium and machine algorithms, creating something in-between and unknown to both. Currently I’m continuing my postgraduate studies at Royal College of Art, Information Experience Design M.A.
Becoming with
Becoming With is an experimental musical instrument that produces audio/music in real time from improvised body movements and dance, using a Kinect sensor, TouchDesigner and Wekinator machine learning. Dance and music become with each other. Traditionally, dance is seen as a response to music; however, in this project I wanted to destroy this dualism which separates music and dance. I imagined them as one, becoming with each other in a constant flux. As in quantum physics, this performance is indeterminate, given its improvisatory nature. There is no way to imagine, decide or produce the sounds or the movements before they actualise. It is inspired by the views of quantum physicist and philosopher Karen Barad, who argues that no entities exist before intra-actions: everything emerges through various and numerous intra-actions at the quantum level. As in quantum theory, in this project the subject/object dualism is under interrogation. There is no input/output relation between dance and music. Indeterminacy and post-dualism are the core of the performance. It is both a tool for improvised performance and an interactive sound installation. Becoming With is a manifestation of intra-actions where sound and dance emerge.
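The piece routes its movement-to-sound mapping through Wekinator; as a rough stand-in for that regression stage, the sketch below trains a small neural-network regressor that maps joint coordinates to a few synthesis parameters. The joint count, the parameter set and the placeholder random training data are assumptions for illustration, not the artist's patch.

```python
# Hypothetical sketch of a Wekinator-style mapping: pose in, synth parameters out.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training data: Kinect joint positions paired with the synth
# parameters chosen while those poses were "recorded" by the performer.
poses = np.random.rand(200, 25 * 3)      # 25 joints x (x, y, z)
params = np.random.rand(200, 3)          # e.g. pitch, filter cutoff, amplitude

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(poses, params)

def map_pose(pose: np.ndarray) -> np.ndarray:
    """Return synthesis parameters for one live pose (a 75-value vector)."""
    return model.predict(pose.reshape(1, -1))[0]
```

In a live setting the predicted parameters would be streamed on (for example over OSC) to whatever engine renders the sound.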
--------------------------------------
Born in 2000 in New Orleans, LA, B.K. Zervigón began composing at a young age. Often dealing with the challenges facing the Gulf Coast due to climate change and industrialization, his work seeks to create soundscapes which reflect the interplay between heavy industry and ancient, sickly nature. This Southeast Louisiana landscape seeps into his work through often massive architectural processes set against intensely emotional and intuitive feeling, like water moving through an inundated oil refinery or a child standing on a Mississippi floodwall built in the 1930s (such as Venice, Norco, & Destrehan, LA and the Bonnet Carré Spillway). His output includes works for piano, retuned piano, instrumental solos, chamber works and electronic music. He currently has the privilege of studying under and playing the music of Michael Hersch at Peabody Conservatory. Previously, he studied under Yotam Haber at the University of New Orleans while at Benjamin Franklin High School. Recent collaborations include a setting of poet Nicole Cooley’s Breach to commemorate the 15th anniversary of Katrina, and multimedia projects with photographer Luca Hoffmann. Ben’s scores can be found on IMSLP.
Soundings in Fathoms - reconstructing the concert experience in digital media
Soundings in Fathoms was commissioned by the New York New Music Ensemble as part of their “Socially Distant Micro-Commission” initiative. The ensemble invited composers to reimagine the concert experience and create moving music heard and conceived through digital media. The resulting work combines aspects of concert music with rigorous electronic processing and editing to create a massive sound world from only four incredible instrumentalists. Through the recording and post-production processes, the piece was able to achieve technical aspects not possible in a chamber concert, such as many retuned pianos, scordaturas, dense manipulation of recorded instrumentals, and much more. Place is conjured in an unusual way through extensive use of well-blended field recordings from industrial coastal Louisiana. Soundings in Fathoms reimagines sound design, attempting to go beyond the live concert experience and create a concert work conceived purely for digital media. This is sound design which aims only to place the listener in a landscape in which music takes place. In this presentation, I would like to share my perspective on composing this work and the ideas it opens up about digital media and portraying location in music. I will deal with issues of intonation in such densely microtonal music, as well as with going beyond the possible in recording. I would like to challenge everyone composing right now to consider the new possibilities of a fragmentary, distanced recording process.
--------------------------------------
Nicolas Misdariis : I am a research director and the head of the Ircam STMS Lab Sound Perception & Design team. I graduated in 1993 from CESTI-Supmeca, an engineering school specialized in mechanics. I then completed a Master's thesis on acoustics at the Laboratoire d’Acoustique de l’Université du Maine (LAUM, Le Mans) and a PhD at the Conservatoire National des Arts et Métiers (CNAM, Paris) on the synthesis, reproduction and perception of musical and environmental sounds. I recently defended my HDR (Habilitation à Diriger des Recherches), affiliated with the University of Technology of Compiègne (UTC), on the sciences of sound design. I have been working at Ircam as a researcher since 1995. I contributed in 1999 to the introduction of sound design at Ircam (Sound Design team / L. Dandrel), and to its evolution in 2004 (SPD team / P. Susini). During these years, I especially developed research and industrial applications related to sound synthesis and reproduction, environmental sound and soundscape perception, auditory display, human-machine interfaces (HMI), interactive sonification and sound design. Since 2010, I have also been a lecturer in the Sound Design Master at the Fine Arts School of Le Mans (ESBA TALM, Le Mans).
Diemo Schwarz, born in Germany in 1969, is a researcher at IRCAM, and a musician and creative programmer. His scientific research on sound analysis/synthesis and gestural control of interaction with music is the basis of his artistic work, and allows him to bring advanced and fun musical interaction to expert musicians and the general public via installations like the dirty tangible interfaces (DIRTI) and augmented reality (Topophonie mobile). In 2017 he was DAAD Edgar-Varèse guest professor for computer music at TU Berlin. He performs on his own digital musical instrument based on his CataRT open source software, exploring different collections of sound with the help of gestural controllers that reconquer musical expressiveness and physicality for the digital instrument, bringing back the immediacy of embodied musical interaction to the rich sound worlds of digital sound processing and synthesis. He interprets and performs improvised electronic music as a member of the 30-piece ONCEIM improvisers orchestra, or with musicians such as Frédéric Blondy, Richard Scott, Gael Mevel, Pascal Marzan, Massimo Carrozzo, Nicolas Souchal, Fred Marty, Hans Leeuw. He composes for dance and performance (Sylvie Fleury, Frank Leibovici), video (Benoit Gehanne and Marion Delage de Luget), and installation (Christian Delecluse, Cecile Babiole).
SkataRT, sound design and creation tool
The SkataRT environment is built at the intersection of research and development work on concatenative corpus-based sound synthesis (CataRT) and a European research project on voice imitation as a tool for sketching and rapid prototyping (Skat-VG). It thus offers different fields of application and use, ranging from musical creation to sound design, through the exploration of sound corpora, to performance. From a technological point of view, it is embodied in a "Max for Live" device developed by Manuel Poletti and Thomas Goepfer (Music Unit), in the framework of a collaboration with the ISMM and PDS teams of Ircam. After a brief general introduction, a demonstration of the current state of development of the software and its various sound possibilities will be given during the presentation.
--------------------------------------
- Nicolas Misdariis, Patrick Susini, Romain Barthelemy (Perception and Sound Design Team)
Nicolas Misdariis : I am a research director and the head of the Ircam STMS Lab Sound Perception & Design team. I graduated in 1993 from CESTI-Supmeca, an engineering school specialized in mechanics. I then completed a Master's thesis on acoustics at the Laboratoire d’Acoustique de l’Université du Maine (LAUM, Le Mans) and a PhD at the Conservatoire National des Arts et Métiers (CNAM, Paris) on the synthesis, reproduction and perception of musical and environmental sounds. I recently defended my HDR (Habilitation à Diriger des Recherches), affiliated with the University of Technology of Compiègne (UTC), on the sciences of sound design. I have been working at Ircam as a researcher since 1995. I contributed in 1999 to the introduction of sound design at Ircam (Sound Design team / L. Dandrel), and to its evolution in 2004 (SPD team / P. Susini). During these years, I especially developed research and industrial applications related to sound synthesis and reproduction, environmental sound and soundscape perception, auditory display, human-machine interfaces (HMI), interactive sonification and sound design. Since 2010, I have also been a lecturer in the Sound Design Master at the Fine Arts School of Le Mans (ESBA TALM, Le Mans).
Patrick Susini : As Director of Research at IRCAM, an important issue in my work was to design an experimental and theoretical framework making it possible to combine research on perception with sound design applications by addressing several questions: How do we communicate the characteristics of a sound? What are the perceptive dimensions of timbre that underlie the qualities and identity of a sound? How does the signification of a sound depend on a process of interaction between an individual and their environment? These works are still in progress, but we can find some of the answers to these questions in recent works in which I participated: Measurements with Persons (Psychology Press, 2013), Sonic Interaction Design (MIT Press, 2013), and Frontiers of Sound in Design (Springer, 2018). This research is carried out through collaborations with composers as well as in an educational setting, notably with the creation of the diploma in Sound Design at école des Beaux Arts du Mans where I have taught since 2010. Another focus in my work is the cognitive processes involved in the perception of complex, multi-source soundscapes (non-stationary), notably in terms of interaction between local and global information. This subject is studied in close collaboration with two other CNRS laboratories: the LMA in Marseille and the CRNL in Lyon.
Romain Barthelemy : Trained in classical and contemporary music composition (Conservatoire Massenet, NUIM), then in sound design (ESBAM-IRCAM), I work as a freelancer in the fields of sound design, digital art and industrial sound design. I am a regular collaborator of the Laps agency and a founding member of the AAIO collective.
SpeaK – Lexicon of sounds for sound design
SpeaK is a sound lexicon that offers definitions of the main sound properties. Each term of the lexicon is illustrated by sound examples that have been created or recorded specifically to highlight the given property. The tool is embedded in a Max interface, developed by Frederic Voisin, that can be customized by adding new terms and/or sounds according to the needs of a given use case. One goal of SpeaK is to foster a shared understanding of sound phenomena within a collaborative sound design process. For that, it is often associated with intermediary objects of conception (typically, a card set) that allow general insights to be transposed into sound characteristics, and then inform the final stage of sound design. Over the past few years, the ergonomics of SpeaK have been developed through different sound design projects that will be detailed during the presentation and will serve as an illustration of the validity of this approach.
--------------------------------------
- Frédéric Bevilacqua, Diemo Schwarz, Jean-Etienne Sotty (ISMM Team) : News from the Sound Music Movement Interaction Team
Frédéric Bevilacqua : Head researcher at IRCAM, leading the Sound Music Movement Interaction team, my work concerns gestural interaction and movement analysis for music and the performing arts. The applications of my research range from digital musical instruments to rehabilitation guided through sound feedback. I coordinated several projects such as Interlude (ANR Prize for Digital Technologies 2013, Guthman Prize 2011 for new musical instruments) on new interfaces for music, and the ANR Legos project on sensorimotor learning in interactive music systems. After scientific and musical studies (Master in Physics and PhD in Biomedical Optics from EPFL, Berklee College of Music in Boston), I was a researcher at the University of California, Irvine (UCI), before joining IRCAM in 2003.
Diemo Schwarz, born in Germany in 1969, is a researcher at IRCAM, and a musician and creative programmer. His scientific research on sound analysis/synthesis and gestural control of interaction with music is the basis of his artistic work, and allows him to bring advanced and fun musical interaction to expert musicians and the general public via installations like the dirty tangible interfaces (DIRTI) and augmented reality (Topophonie mobile). In 2017 he was DAAD Edgar-Varèse guest professor for computer music at TU Berlin. He performs on his own digital musical instrument based on his CataRT open source software, exploring different collections of sound with the help of gestural controllers that reconquer musical expressiveness and physicality for the digital instrument, bringing back the immediacy of embodied musical interaction to the rich sound worlds of digital sound processing and synthesis. He interprets and performs improvised electronic music as a member of the 30-piece ONCEIM improvisers orchestra, or with musicians such as Frédéric Blondy, Richard Scott, Gael Mevel, Pascal Marzan, Massimo Carrozzo, Nicolas Souchal, Fred Marty, Hans Leeuw. He composes for dance and performance (Sylvie Fleury, Frank Leibovici), video (Benoit Gehanne and Marion Delage de Luget), and installation (Christian Delecluse, Cecile Babiole).
Jean-Etienne Sotty : Born on March 20, 1988, a native of Bourgogne, France, he trained with the most distinguished professors: O. Urbano, C. Girard, P. Bourlois and finally T. Anzellotti, under whose teaching he obtained his Master's at the HKB Bern. His excellence opened the doors of the CNSMDP (PhD), and his musical knowledge earned him the agrégation in music. With this background, there is no limit to his musical desires: recitals, contemporary creation, improvisation, concertos with orchestra... his activities are as diverse as possible. Insatiably creative, he collaborates with many composers and gives first performances for accordion in France with the duo XAMP, which he forms with Fanny Vicens. He performs in the most prestigious venues: Théâtre du Châtelet, KKL Lucerne, Les Subsistances (Lyon), Centre Georges Pompidou / IRCAM, Spring Arts Festival (Monaco), Teatro Mayor (Bogota), Sadler's Wells (London), Konzerthaus Vienna, Festival Présences...
--------------------------------------
Jean Lochard has been a Computer Music Designer at Ircam since 2001. He teaches synthesis techniques, musical acoustics, studio techniques and real-time musical creation tools to the young composers of the institute's Computer Music Cursus. As a researcher and developer at Ircam, he notably programmed the Ircamax Vol. 1 & 2 plugins for Ableton Live and Najo Modular Interface, a user interface for producing a larger variety of sounds in Max. As an electronic music composer, he has worked on many projects: cinema screenings with François Régis, interactive music installations with Pierre Estève, live performance with Suonare e Canta. He was involved in the creation of musical software for the Karlax, a new instrument for live performance. He has also collaborated with various artists: Emilie Simon, Avril, Jackson and his Computer Band, Jean-Michel Jarre… He has produced two albums of his own music: A Quiet Place on This Planet (2015) and New Brain (2019).
New Ircam Forum Max For Live devices
During this session, five new devices distributed by the Forum will be introduced: MarblesLFO and PendulumsLFO use Max's physics capabilities to emulate two physical systems (marbles falling in a box and a double pendulum) and transform their data into low-frequency oscillators that can be mapped to any Live session parameter. SuperVPSourceFilter and SuperVPCross are focused on performing stereo cross-synthesis with the SuperVP™ engine. LadderFilter integrates an emulation of the famous Moog ladder filter alongside functionalities usually available in filter pedal effects.
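To make the idea behind PendulumsLFO concrete, here is a rough sketch, in Python rather than Max and in no way the device's implementation, of how a simulated double pendulum can be read as a slowly evolving, chaotic low-frequency control signal.

```python
# Hypothetical sketch: integrate the standard double-pendulum equations of motion
# and use the lower arm's angle as an LFO-style control curve.
import numpy as np
from scipy.integrate import solve_ivp

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0

def deriv(_t, y):
    th1, w1, th2, w2 = y
    d = th1 - th2
    den = 2 * M1 + M2 - M2 * np.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * np.sin(th1)
          - M2 * G * np.sin(th1 - 2 * th2)
          - 2 * np.sin(d) * M2 * (w2**2 * L2 + w1**2 * L1 * np.cos(d))) / (L1 * den)
    a2 = (2 * np.sin(d) * (w1**2 * L1 * (M1 + M2)
          + G * (M1 + M2) * np.cos(th1)
          + w2**2 * L2 * M2 * np.cos(d))) / (L2 * den)
    return [w1, a1, w2, a2]

t = np.linspace(0, 30, 3000)                       # 30 s of control signal
sol = solve_ivp(deriv, (0, 30), [np.pi / 2, 0, np.pi / 2, 0], t_eval=t)
lfo = np.sin(sol.y[2])                             # lower-arm angle folded into [-1, 1]
# `lfo` could then be rescaled to whatever parameter range a device exposes.
```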
--------------------------------------
Rosie harnesses the evocative power of sound to amplify the voices of refugee and migrant communities. Her work focuses on collaborating with communities to develop participatory projects that empower diverse voices. More broadly, Rosie’s research focuses on social inclusion and community resilience through collaborative projects centered around voice and conversation as artistic practice. Whilst studying, some of her works made in collaboration with the Compass Collective have been exhibited at ‘Late at the Tate’s’ online showcase; in an online sharing of a short film for Refugee Week supported by Migrant Arts; and played live as part of a performance at the Southwark Playhouse. Independently, her previous work made with the Say It Loud Club was commissioned and exhibited by Usurp Arts.
Home
“The spirit of a home can affect our whole life.” – Daphine Adikini, Member of Say It Loud Club.
Over the past year, our homes have become more important than ever. In collaboration with eight members of the Say It Loud Club, a collective of LGBTQ+ asylum seekers and refugees, this film looks at how we define home, how this definition can change overnight, and what it means to have access to a space we can call our own. Made during lockdown with words, films and images from eight unique and powerful voices - “Home is what we make it and want it to be.” https://youtu.be/0b0Y1eVtpxY
--------------------------------------
Frances Allen is a Sound Design student at the Royal College of Art, having previously studied music at the Reid School of Music, University of Edinburgh. In her compositional works, Frances is interested in the relationships that can be created between art/design and sound. Much of her work focuses on the idea of rotational structures and sequences to create sound environments or soundscapes to accompany visual components.
Rubiks Cube
A sound art piece which recreates the movements of the Rubik’s Cube. The puzzle distorts and resolves through the movements of the faces, which create harmonic discord and resolution. To create the work, each colour of the cube has been assigned a pitch and each face has been assigned an octave range. Then, using Max/MSP, the frequencies of each colour on each face are charted through each sequence to create a rotating soundscape.
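The colour-to-pitch and face-to-octave mapping can be sketched in a few lines. The specific pitch classes and octaves below are placeholders, since the abstract does not state the assignments actually used in the piece.

```python
# Hypothetical sketch of the mapping: colour -> pitch class, face -> octave,
# so that one face of the cube becomes a chord of nine frequencies.
A4 = 440.0
PITCH_CLASS = {"white": 0, "yellow": 2, "red": 4, "orange": 5, "green": 7, "blue": 9}
FACE_OCTAVE = {"U": 5, "D": 2, "F": 4, "B": 3, "L": 3, "R": 4}

def sticker_freq(colour: str, face: str) -> float:
    """Semitones above the A of the face's octave, converted to Hz via MIDI numbers."""
    midi = 12 * (FACE_OCTAVE[face] + 1) + 9 + PITCH_CLASS[colour]
    return A4 * 2 ** ((midi - 69) / 12)

# e.g. a solved "up" face sounds as nine identical pitches:
chord = [sticker_freq(c, "U") for c in ["white"] * 9]
```

Rotating a face then amounts to re-evaluating the chord for the new colour layout, which is what produces the gradual discord and resolution described above.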
--------------------------------------
Jiuming Duan is an artist and designer born in China and raised in Beijing. He came to London to study for an MA in Information Experience Design at the Royal College of Art. Before that, he completed his bachelor degree at the Communication University of China in art design for film and theatre, where he focused on theatre design. Currently, he works across installation, sound, performance, interaction and set design. His work often offers a poetic focus. He looks at ways of engaging the audience with complicated problems simply and tangibly.
Motion-sonic Installation
This presentation includes a series of experiments and processes connecting an accelerometer sensor to oscillators. With the data from the accelerometer, the frequency of the sound can be changed instantly. The system is used to capture the movements of wind and water ripples in nature and transform them into a sonic experience in a poetic way. It explores the relationship between spatial movements and sonic experience, giving another perspective on how we perceive natural movements.
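As an offline illustration of the accelerometer-to-oscillator connection described above (a sketch under assumed units and sample rates, not the installation's code), the magnitude of each acceleration sample can set the instantaneous frequency of a sine oscillator:

```python
# Hypothetical sketch: acceleration magnitude -> oscillator frequency -> audio file.
import numpy as np
from scipy.io import wavfile

SR = 44100
SENSOR_RATE = 100                  # assumed accelerometer sample rate (Hz)

def accel_to_audio(accel_xyz: np.ndarray, out_path: str) -> None:
    """accel_xyz: array of shape (n, 3) in m/s^2."""
    mag = np.linalg.norm(accel_xyz, axis=1)
    # Map roughly 0..2 g onto 100..1000 Hz.
    freqs = 100.0 + 900.0 * np.clip(mag / 19.6, 0.0, 1.0)
    freqs = np.repeat(freqs, SR // SENSOR_RATE)        # hold each value for one sensor frame
    phase = 2 * np.pi * np.cumsum(freqs) / SR          # integrate frequency into phase
    audio = 0.5 * np.sin(phase)
    wavfile.write(out_path, SR, (audio * 32767).astype(np.int16))
```

In the installation itself the same mapping would run continuously on the live sensor stream rather than rendering a file.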
--------------------------------------
Leon Hapka is a researcher and new media artist who straddles the line between the creative and the scientific. Apart from his background in medical sciences (MSc in Medical Biotechnology) from a Western perspective, he has a special interest in Vedic sciences as well as Chinese philosophy. He is currently studying MA Information Experience Design at the Royal College of Art in London, with a special focus on sound.
Sonic Harmony Systems
Everything in our universe is in motion due to the constant vibrations that every object produces at the atomic level; this would mean that every object is sensitive to its own frequencies as well as to everything that emits frequencies around it. Even when something seems and feels hard or static, it resonates at various frequencies and is therefore oscillating between at least two states at all times. It is important to understand these occurrences and their meaning fully, as this can help us recognize and interpret how the noise pollution of cities, the radio-frequency waves of mobile phones and so on really affect us. Visualisations of the power of simple tones might be crucial to realising how great an effect the constant mixture of unnatural sounds we are surrounded by can have on our bodies. They can also show us a potential way of developing very effective, economically cheap and easily accessible healing methods. Sonic frequencies seem to hold power over matter, which means they could help to balance our health at the molecular and cellular level, as well as bring our attention to the destructive powers and increasing danger of the noise pollution and electromagnetic fields around us, showing that this is something that cannot be ignored and can potentially be a cause of serious disease.
--------------------------------------
I am a postgraduate student majoring in Information Experience Design at the RCA; my BA was in Fashion Design in China. Last year I worked for Vogue China and have a range of experience working with commercial brands such as Charles & Keith, Bio-Oil, Dior and BMW. During my BA I also worked as a freelancer on leather, graphic design, textile and fashion projects, and I was involved in organizing lectures by people from different areas, such as fashion magazines and brand leaders. Some of my fashion design work has received awards and been displayed at New York Children's Fashion Week. In my graduation project I tried new media, using AR in fashion design to mix visual sensations, and I began moving into a more technical area. I am searching for more ways to mix diverse techniques and art to present more productive work.
Self Independences and groups
Nowadays, groups constantly limit and standardize us in society. If you are self-independent and different from others, you become isolated from the group. From my perspective, this social phenomenon can create unfavourable limitations for human development. At the same time, the public voice can be too harsh towards those who show their self-independence. Through in-depth consideration of and research into this problem, I argue that we should seek common ground while reserving differences. Furthermore, I have read a book by the well-known Chinese writer Chiang Hsun, titled 'Six Lectures on Loneliness'. He observes that, in Confucian culture, when self-independent individuals come into conflict with the group, they can experience an unexpected degree of loneliness: others feel pity for them, ask them why they persist, and even regard them as unacceptable outsiders. This led to my research on the relationship between individuals and groups.
--------------------------------------
I'm Wenqing Yao. I am currently studying Information Experience Design (Moving Image Design pathway) at the Royal College of Art. My research focuses on communication through moving images and sound interaction. I am interested in the topics of nature and philosophy.
Land Speech
Human life is inseparable from the land, whether in the past or now: ploughing land, building houses, planting trees, and so on. People keep asking of the land, and the land constantly gives. However, how does the land feel when people perform these actions on it? What is the emotion of the land? The sound of the land is often overlooked, and the land often plays a silent role. Therefore, I wish to make the land speak: to amplify the feedback sound of the land through simple sound technology and let the land speak out about its feelings.
--------------------------------------
Nirav Beni is a South African new media artist and engineer currently based in London. Having a background in Mechatronics Engineering, he often positions himself at the intersection of art and technology, where he aims to incorporate AI, machine learning and other emerging technologies into his practice in order to create evocative and immersive audiovisual experiences.
Lucy Papadopoulos: Creative practitioner testing the boundaries between prevailing dichotomies such as micro/macro, subject/object and human/nonhuman. I have a trans-disciplinary approach drawing on physics, neuroscience, climate change and philosophy to laterally explore fundamental questions of epistemology. In a world where science fiction feels more and more believable, and concepts like ‘fake news’ increasingly rot inner dynamics of trust, I am interested in how meaning and agency are created at the outer boundaries of what is known and what is knowable.
Susan Atwill is a sound hunter and artist, merging physical and audio experimentations of everyday materials to create kinetic, abstract audiovisual performances and installations. She is currently completing a Master's in Information Experience Design at the Royal College of Art.
Spin
We are a collective of sound artists and designers from the Royal College of Art. We have combined light, sound, and the idea of rotation, or ‘spin’, to produce an experimental film, ideally for live performance using Max. We use ordinary everyday objects to consider life in the pandemic and how to connect with people whilst not being physically with them. We are both presence and absence at the same time. Subject/object, human/nonhuman. We are going around in circles.
--------------------------------------
< Spinning Brother __/''''\__ *3 >
My study focuses on transmitting the information and experience of animals to humans. In my practice, I minimize the technology and materials while experimenting with them, maximizing my own full participation. <Spinning Brother __/''''\__ *3 > is an audio + video piece showing the mixed perspectives of two different species: human and spider. The visuals show the human's sight; the audio represents the spider's angle.
--------------------------------------
Audio Production
Charles-Edouard Platel is a self-taught composer of electroacoustic music, active since 2006. His catalog of about forty works is available on CD and in streaming. Described as a "free electron", he organizes two or three spatialized concerts per year himself within the framework of the Helium Artists of the Chevreuse Valley. His music has been played regularly at the Futura Festival and broadcast on France Musique and Radio Libertaire. A former engineer, he develops in Max/MSP/Jitter the musical creation instruments he needs: Randolon (a random generator) and Ondolon (a sound sculpture tool).
Ondolon, sculpture of recorded sound
Ondolon is a software instrument for practicing sound sculpture through improvisation: all the settings, gathered in a single window, can be modified in real time while playing, according to the musician's feedback; these actions can be memorized as the session progresses so that the work session can be replayed identically. The technological artifacts are not new, but the goal is to exploit them with ease in the service of an aesthetic intention.
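The "memorize the actions so the session can be replayed identically" idea can be illustrated with a tiny automation recorder. This is a generic sketch, not Ondolon's implementation: any object exposing its settings as attributes could stand in for the instrument here.

```python
# Hypothetical sketch: timestamp every parameter change so the whole gesture
# sequence can be played back later at the same points in time.
import time

class SessionRecorder:
    def __init__(self):
        self.events = []                      # (elapsed_seconds, parameter, value)
        self.t0 = time.monotonic()

    def set_param(self, instrument, name, value):
        # Apply the change and remember when it happened.
        self.events.append((time.monotonic() - self.t0, name, value))
        setattr(instrument, name, value)

    def replay(self, instrument):
        # Re-apply every recorded change with its original timing.
        start = time.monotonic()
        for t, name, value in self.events:
            time.sleep(max(0.0, t - (time.monotonic() - start)))
            setattr(instrument, name, value)
```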
--------------------------------------
William Petitpierre is a sound designer, musician and performer. Recently graduated with a DNSEP in Sound Design, his practice is oriented towards interaction design, field recording, music for picture, and real-time sound manipulation during live electronic performances by the duo parhelic shell. He is also a speaker and instructor for the Mu collective and for Soundways, a technology allowing the creation of geolocated sound walks.
IRCAM Technologies - vector of expressiveness
The objective of this demo is to present an artistic use of Modalys and CataRT in the form of a live performance. The performance is based on the use of Sensel's Morph to control instruments in real time in Max/MSP, responding to pre-established gestures. The project seeks to strengthen the gesture-sound relationship in electronic creation in order to find an almost tangible relationship with the sound, made possible through the use of physical models and a reflection on mapping strategies.
--------------------------------------
Benjamin Hackbarth directs the Interdisciplinary Centre for Composition and Technology at the University of Liverpool, where he teaches composition and writes music for instruments and electronic sound. He has a Ph.D. from the University of California, San Diego, where he studied composition with Roger Reynolds, and received a Master's degree while working with Chaya Czernowin. He has a bachelor's degree in composition from the Eastman School of Music. Ben has been composer in research at IRCAM three times since 2010. He was also a composer affiliated with the Center for Research and Computing in the Arts (CRCA) and a Sonic Arts Researcher at CalIT2. He has had residencies at the Cité des Arts, the Centre International des Récollets, Akademie Schloss Solitude and the Santa Fe Chamber Music Festival. In addition to writing concert music, he has collaborated with other artists to create multimedia installations with real-time graphics, sound and motion tracking. Notable performances include those by the Arditti String Quartet, Ensemble InterContemporain, the New York New Music Ensemble, the L.A. Percussion Quartet, the Collage New Music Ensemble, Ensemble Orchestral Contemporain, Ensemble SurPlus and the Wet Ink Ensemble. His work has been presented in venues such as the Cité de la Musique, Akademie Schloss Solitude, the MATA festival, SIGGRAPH, the Florida Electro-acoustic Music Festival, the Santa Fe Chamber Music Festival, the Ingenuity Festival, E-Werk, the Pelt Gallery, the San Diego Museum of Art, the Los Angeles Municipal Art Gallery, the Roulette Concert Space and the Espace de Projection at IRCAM. Ben's music can be heard on CD releases by the Carrier Records and EMF labels.
Concatenative Sound Synthesis with AudioGuide / Workshop AudioGuide
This demonstration gives an overview of recent developments in the standalone corpus-based concatenative sound synthesis program AudioGuide. New features and methods include revamped hierarchical sound segment matching routines and polyphonic support via target sound signal decomposition. The talk also covers new ways of integrating AudioGuide's output into different musical workflows, including new Max patches for manipulating concatenated outputs, bach.roll support for sound concatenations and descriptor data, and a new interface in AudioGuide that permits the creation of acoustic scores.
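For readers unfamiliar with corpus-based concatenation, the core move can be sketched in a few lines: cut the target and the corpus into frames, describe each frame with audio descriptors, and replace every target frame by its nearest corpus frame. The sketch below is a generic illustration of that technique using librosa; it is not AudioGuide's own hierarchical matching algorithm, and the two descriptors chosen here are arbitrary.

```python
# Hypothetical sketch of descriptor-based concatenative resynthesis.
import numpy as np
import librosa

def frame_features(y, sr, hop=2048):
    # Two simple descriptors per frame: spectral centroid and RMS energy.
    cent = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]
    rms = librosa.feature.rms(y=y, frame_length=hop, hop_length=hop)[0]
    n = min(len(cent), len(rms))
    return np.stack([cent[:n], rms[:n]], axis=1)

def concatenate(target_path, corpus_path, sr=22050, hop=2048):
    tgt, _ = librosa.load(target_path, sr=sr)
    crp, _ = librosa.load(corpus_path, sr=sr)
    tf, cf = frame_features(tgt, sr, hop), frame_features(crp, sr, hop)
    # Normalise each descriptor so centroid and energy weigh comparably.
    mu, sd = cf.mean(axis=0), cf.std(axis=0) + 1e-9
    tf, cf = (tf - mu) / sd, (cf - mu) / sd
    # Replace every target frame by the corpus frame with the closest descriptors.
    dists = np.linalg.norm(tf[:, None, :] - cf[None, :, :], axis=2)
    choice = dists.argmin(axis=1)
    return np.concatenate([crp[i * hop:(i + 1) * hop] for i in choice])
```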
--------------------------------------
Kei Nakano is a surrealist and sound artist, and an Associate Professor at the Osaka University of Arts. He has won prizes at international exhibitions for his sound/visual work, and has also worked at an international jazz festival in Russia as a visual artist and VJ. He has advocated visualization for the Open Data project in Japan (LOD, the Semantic Web, XML). Working as a researcher at Osaka University and the National Institute of Advanced Industrial Science and Technology (AIST), he was involved in projects on signal processing and machine learning applied to sound and music recognition. As a leader, he was selected for a support group for the art project in MM21, Yokohama. He has been invited by universities and art spaces such as the DOM Culture Centre (Moscow), IRCAM (Paris), Lin's Culture (Taipei) and National Tsing Hua University (Hsinchu City, Taiwan). He has been working on the "Through the Looking Glass Guitar" project since its debut concert at the DOM Culture Centre in 2015; the project was later developed into a patent in 2020. He is interested in and learns from Vocaloid, visual kei and b-boy (hip-hop) culture, not only for their sound but also for their fashion and visuals.
New Sounds by the "Through the Looking Glass Guitar"
A demo of the sounds of the "Through the Looking Glass Guitar", using an electric guitar and an Artiphon (guitar synthesizer), with some explanation of the concept and the patent.
--------------------------------------
Dave is a multidisciplinary creative practitioner whose approach to artistic research and making is informed by his background in musical improvisation. His current research, drawing on psychogeography to explore liminal spaces where technology and nature are intertwined, makes use of this intuitive approach and utilises an assortment of ad-hoc hacks, DIY equipment and devices to listen to and experience sounds generated by electromagnetic fields in unforeseen ways.
Journey into the Body Electric
The project was devised to generate an embodied sonic experience of electromagnetic fields using intuitive devices, including a glove with several induction coils and a multichannel omnidirectional E.M. microphone acting as a sonic 'compass'. Textural variations of the sonified electromagnetic fields captured through these devices can in turn be experienced binaurally through a headphone mix in which each channel has its own panning position, through a multichannel speaker array, or through an improvised, ad-hoc, multichannel sound-diffusion headset using surface transducers and bone conduction. These experimental wearables encourage critical reflection on our relationship with technology while exploring the speculative possibility of a human sixth sense akin to magnetoreception in animals.
--------------------------------------
Dr. Jean-Marc Jot currently serves as VP of Research & Chief Scientist at iZotope, a leading developer of breakthrough, intuitive audio and music production tools found in professional facilities and home studios alike. Prior to iZotope, Jean-Marc headed the development of pioneering immersive audio processing technologies, platforms and standards for virtual or augmented reality and for home and mobile entertainment at Magic Leap, DTS and Creative Labs. Before relocating to California in the late nineties, he conducted research at IRCAM in Paris, where he designed the original Spat software suite for spatial audio creation and performance, which has since spawned several award-winning products. Jean-Marc is a regular invited speaker at industry and academic events, and a recipient of the Audio Engineering Society Fellowship award.
Objects of immersion - unlocking musical spaces
During the past decade, new object-based immersive audio content formats and creation tools were developed for cinematic and musical production. These technologies free the music creator from the constraints of normalized loudspeaker configurations. They also support head rotations along three degrees of freedom (3-DoF), thus unlocking a natural immersive listening experience through headphones or wearable audio devices. Meanwhile, interactive audio experiences in video games and virtual or augmented reality require a scene representation supporting 6-DoF listener navigation, where an audio object models a natural sound source having controllable distance, orientation, and directivity properties. Additionally, acoustic environment properties must be explicitly included and decoupled from the sound source description. We examine and compare these two conceptions of object-based spatial audio and seek to unify them with a view to connecting previously disparate digital media applications and industries.
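As a hedged sketch (not any particular format, nor an iZotope/DTS/Magic Leap API) of the two object notions compared above, the following code contrasts a channel-independent music object carrying only a direction for 3-DoF rendering with a 6-DoF scene object that has position, orientation, directivity and a decoupled acoustic-environment description. All field names and the toy gain rule are illustrative assumptions.

```python
# Toy data structures and a toy 6-DoF gain rule; purely illustrative.
import math
from dataclasses import dataclass

@dataclass
class MusicObject3DoF:
    name: str
    azimuth_deg: float        # direction relative to the listener's head
    elevation_deg: float
    gain_db: float = 0.0

@dataclass
class Room:
    rt60_s: float             # environment kept separate from the source descriptions
    volume_m3: float

@dataclass
class SceneObject6DoF:
    name: str
    position: tuple           # (x, y, z) in metres, world coordinates
    orientation_deg: float    # facing direction of the source in the horizontal plane
    directivity_order: int    # 0 = omnidirectional, higher = more directional
    gain_db: float = 0.0

def rendered_gain_db(obj: SceneObject6DoF, listener_pos: tuple) -> float:
    """Toy 6-DoF rendering rule: 1/r distance attenuation combined with a
    cos^n directivity weighting toward the listener."""
    r = max(math.dist(listener_pos, obj.position), 0.1)
    dx = [l - p for l, p in zip(listener_pos, obj.position)]
    angle_to_listener = math.degrees(math.atan2(dx[1], dx[0]))
    off_axis = math.radians(angle_to_listener - obj.orientation_deg)
    directivity = max(math.cos(off_axis), 0.0) ** obj.directivity_order if obj.directivity_order else 1.0
    return obj.gain_db + 20 * math.log10(directivity / r + 1e-6)

lead_vocal = MusicObject3DoF("lead vocal", azimuth_deg=-30.0, elevation_deg=0.0)
hall = Room(rt60_s=1.8, volume_m3=12000.0)
cello = SceneObject6DoF("cello", position=(2.0, 1.0, 0.0), orientation_deg=180.0, directivity_order=2)
print(lead_vocal, hall)
print("gain at origin:", round(rendered_gain_db(cello, (0.0, 0.0, 0.0)), 1), "dB")
```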
--------------------------------------
Pierre Guillot has a doctorate in Aesthetics, Science and Technology of Arts, specialising in Music. He presented his thesis at the University of Paris in 2017 as part of the Arts-H2H Laboratory of Excellence programs. Throughout his research career, he has participated in the creation of numerous projects and tools for music, including the HOA ambisonic sound spatialization library, the Kiwi collaborative patching software and the multi-format, multi-platform plugin Camomile. In 2018, he joined IRCAM within the Innovation and Research Means department, where he is in charge of the development of the AudioSculpt project and its derived products.
Above AudioSculpt: TS2 and Analyse
This presentation introduces two software developments, TS2 and Analyse, heirs to the tools offered by the AudioSculpt environment. IRCAM Lab TS2 is a sound processing application developed within IRCAM's IMR department. Built around the SuperVP (Super Phase Vocoder) audio engine developed by the Analysis-Synthesis team, TS2 makes it possible to modify and create new sounds using transposition, time-stretching and resynthesis tools. In the framework of the IRCAM Forum, the new possibilities related to multichannel support and the spectral clipping filter in the latest version of TS2 will be presented, among others. Analyse is a software project for visualizing and editing audio analyses generated by plugins; its objective is to offer a simple but dynamic interface that meets the expectations of research in signal processing, musicology and compositional practices.
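TS2 itself is built on IRCAM's SuperVP engine; as a rough point of reference for the kind of operations it performs (time stretching and transposition via a phase vocoder), the hedged sketch below uses the open-source librosa implementation instead. Quality, options and workflow differ substantially from TS2/SuperVP, and the file names are placeholders.

```python
# Phase-vocoder-style time stretch and transposition with librosa (not SuperVP).
import librosa
import soundfile as sf

y, sr = librosa.load("input.wav", sr=None, mono=True)

# Stretch to twice the duration without changing pitch (rate < 1 slows down).
stretched = librosa.effects.time_stretch(y, rate=0.5)

# Transpose up a perfect fourth (5 semitones) without changing duration.
transposed = librosa.effects.pitch_shift(y, sr=sr, n_steps=5)

sf.write("stretched.wav", stretched, sr)
sf.write("transposed.wav", transposed, sr)
```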
--------------------------------------
Thibaut Carpentier studied acoustics at the École centrale and signal processing at Télécom ParisTech, before joining the CNRS as a research engineer. Since 2009, he has been a member of the Acoustic and Cognitive Spaces team in the STMS Lab (Sciences and Technologies of Music and Sound) at IRCAM. His work focuses on sound spatialization, artificial reverberation, room acoustics, and computer tools for 3D composition and mixing. He is the lead developer and head of the SPAT project as well as the 3D mixing and post-production workstation Panoramix.
After a scientific and musical education, Olivier Warusfel obtained a doctorate in acoustics from the University of Paris 6. He currently leads the Espaces Acoustiques et Cognitifs team in IRCAM's research and development department. His main fields of research are signal processing applied to sound spatialization techniques, as well as hearing and spatial cognition. He has also taught musical acoustics at the University of Paris 8, and he teaches room acoustics and handles pedagogical coordination for the DEA ATIAM, accredited by the University of Paris 6 (Acoustics, Signal Processing and Computer Science applied to Music).
News Team EAC
In this session, we will present the software developments made in 2020 around Spat (for Max) and Panoramix (standalone). These developments concern the addition of new functionalities (notably for the manipulation and decoding of Ambisonics streams), bug fixes, and CPU optimization of the tools. We will demonstrate the features through several example patches in Max.
--------------------------------------
News Team S3AM
This year, the S3AM team's presentation highlights the following: 1) the just-released v3.6 of Modalys (physical-model-based sound synthesis), with a major leap forward in 3D design and script-based lutherie in the Max environment; 2) Jean-Étienne Sotty's musical residency on the hybridization of the accordion.
--------------------------------------
French sound designer Paul Escande began his career with architecture before turning to electronic music as an artist and DJ. His skills as a producer opened the doors of specialized music publishers for which he creates samples, sound banks, and presets (Native Instruments, Sample Magic, Splice). In order to expand his practice, he joined the DNSEP Sound Design (TALM-IRCAM) in 2018 and started a fruitful collaboration with the digital art studio ExperiensS. There he continues his Space/Sound experiments on 3D sound, interaction, and real-time in the context of immersive sound installations and experiences.
Augmented Architectures
At the intersection of architecture, sound design, and video games, "augmented architectures" is part of the DNSEP Sound Design diploma project (TALM-IRCAM) presented in October 2020. The project aims to develop a prototyping tool for previewing sound sculptures in real time.
The work was initially based on formal experimentation and the assembly of materials in the workshop, but the health crisis in the spring of 2020 led to the choice of a game engine as the space for design and expression.
The project presents a state-of-the-art solution for real-time sound manipulation in a 3D engine and mobilizes a number of technologies developed by IRCAM (e.g. Spat, Modalys, Evertims) whose possibilities overlap with those of virtual reality and computational design. Virtualization then becomes the engine of design and an autonomous means for the simulation of sensory experiences.
--------------------------------------
Composition
Vincenzo Gualtieri is a composer and performer of acoustic and electro-acoustic music. He has been a finalist in several national composition competitions. In 2004, his work "Field" won first prize at the International Competition of Electroacoustic Miniatures of Seville, "Confluencias". His compositions have been performed in both international and national contexts. On several occasions he has also performed, as sound engineer, electro-acoustic works by many composers.
Some of his writings tackle issues such as when (and in what ways) technological innovations in musical contexts invite, suggest or inhibit non-conventional (subversive?) approaches. In September 2016, the work "(BTF)-5, for augmented Cajon and live electronics" was premiered at De Montfort University in Leicester (UK). In a lecture he also laid out the guidelines of his current project, (BTF). The (BTF) project focuses on the development of interactions between analog and digital audio systems that are at once adaptive, site-specific and autopoietic (via feedback loops with self-regulating behaviours). In October 2017, the work "(BTF)-7 for Tibetan bells and live electronics" was performed at the Norwegian Museum of Science and Technology in Oslo during the annual Kyma International Sound Symposium. Some of his works are published by the publishing house TAU-KAY of Udine.
He is professor of composition at the Conservatory of Music "D. Cimarosa" of Avellino (Italy). https://www.vincenzogualtieri.com/
the (BTF) project
One of the aims of this project is to let a machine system simulate the "improvisational freedom" of a biological system (a human performer) while carrying out a kind of sound organization. Both systems are subject to causal responsibility, which is voluntary (an effect of its will) in a living system, and mechanical/automatic/involuntary in a machine. The machine's behaviours, however, depend on history (and not necessarily only its own history) and are therefore unpredictable and analytically undeterminable: a machine that can thus be considered "non-trivial". The project attempts to establish a shared authorship of cause-and-effect responsibility while experimenting with negative feedback in audio systems. This raises issues such as: complex behaviours arising from interactions between multiple, strictly integrated systems that are mutually co-determined; and emerging sonorities that are defined as second-order. The latter are often unpredictable because they depend on the quality of the interactions between certain systems and one or more sub-systems (e.g. the environment where the performance takes place). Systems and sub-systems generate networks of circularly causal relationships together with continuous forms of self-organization. One therefore composes: interactively, "with" the instant verification of the audible outcomes; and "the" interactions (or at least the ground where interactions take place) between processes (DSP, scheduling, etc.) internal to the machine. The goal is to make the systems somehow available and adaptable, able to read/listen to both their internal and external (environmental) conditions and to modify them accordingly, where possible.
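As a purely illustrative aside (not Gualtieri's system), the toy sketch below shows the kind of negative-feedback self-regulation the project refers to: a feedback path whose loop gain is continuously corrected so that the output level settles around a target despite an unpredictable excitation. All constants are arbitrary assumptions.

```python
# Toy self-regulating feedback loop: negative feedback on the loop gain.
import numpy as np

target_rms = 0.2          # level the loop tries to hold
gain = 1.2                # initial loop gain (deliberately too high)
level = 0.0               # slow envelope follower state
y = 0.0                   # feedback signal
rng = np.random.default_rng(0)

levels, gains = [], []
for n in range(5000):
    excitation = 0.05 * rng.standard_normal()   # the unpredictable "environment"
    y = np.tanh(gain * y + excitation)          # feedback path with soft clipping
    level = 0.995 * level + 0.005 * abs(y)      # slow level estimate
    gain += 0.01 * (target_rms - level)         # negative feedback on the gain itself
    levels.append(level)
    gains.append(gain)

print(f"final level = {levels[-1]:.3f}, final gain = {gains[-1]:.3f}")
```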
--------------------------------------
Kai Yves Linden, born 1960, studied composition with Wolfgang Hufschmidt at the Folkwang Hochschule in Essen, Germany, from 1982 to 1986. Currently working as a software engineer in a standardisation institute he pursues compositional projects in his free time. His compositions comprise chamber works for solo instruments and ensembles and vocal music.
Anatomical insights into "Vertical Structure"
"Vertical Structure" is a mixed music for flute and electronics depending on IRCAM technologies Antescofo and Mubu. In my presentation I will explain the conceptual principles of the piece and dissect some excerpts from its 19 minutes duration. I will also consider practical strategies in the compositional process to gain productivity and creativity when working with a complex software set-up.
--------------------------------------
Marco Bidin is a composer, organist and harpsichord player. He worked as a Lecturer for the Studio for Electronic Music at the HMDK Stuttgart, where he completed the Composition and Computer Music studies under the guidance of Prof. Marco Stroppa. He also studied Organ in Udine, Early Music in Trossingen, and Contemporary Music Performance in Stuttgart. He performed as a soloist in major international festivals, and his compositions have been premiered in Europe, Asia and Canada. He realized recordings for Taukay, Vatican Radio and other labels. As a lecturer and researcher, he has been invited to hold masterclasses and conferences at institutions such as Lisboa INCOMUM (Portugal), Pai Chai University (South Korea), Silpakorn University (Thailand), Shanghai Conservatory (China) and IRCAM (Paris).
OMChroma new tutorials and documentation
This presentation will focus on my new set of tutorial patches, videos and documentation for the OpenMusic library OMChroma. After five years of teaching at the HMDK Stuttgart, I collected the requests and demands of our dedicated students regarding the OMChroma library. The existing documentation and tutorial patches, created by Luca Richelli, are very detailed and exhaustive, but they demand some pre-existing knowledge of the OpenMusic environment and a certain degree of proficiency in CAC, which proved difficult for composition students without such a background. The most efficient learning strategy turned out to be starting from the existing tutorials, extracting some material and focusing on its development. By combining the technical approach with a compositional one, new users can blend learning how to use the library, how to formalise compositional thinking in the OpenMusic environment, and how to develop an aesthetic approach to designing virtual instruments and composing sound. For algorithmic procedures that proved challenging for beginners, I created videos that guide the user step by step through the construction of the patch. Furthermore, a whole new chapter is dedicated to works composed with OMChroma by HMDK Stuttgart students, to let new users see (and hear) the enormous potential this library offers after just a few months of dedicated practice.
--------------------------------------
Johnny Tomasiello is a multidisciplinary artist and composer living and working in NYC. Tomasiello approaches sound-making not only through his choice of instruments, but by pushing the technical limitations of the musical systems employed and questioning their validity and truth. His work is malleable and is informed by extensive research into history and technology, neuroscience, and political movements. He uses that knowledge to develop significant, collaborative works that examine new means of production. Evidence, education and research are integral aspects of Johnny's projects, as seen, for example, in his exploration of how humans interact with, and are affected by, the external world on a physiological level. He started his academic career as a painter. Interested in the connections between science and art, he completed a Bachelor of Science degree in cognitive science/psychophysics and was awarded a research grant at the University of Medicine and Dentistry of NJ, Medical School. His thesis was entitled "Musical Stimuli and its Effects on Human Physiology". In a broad sense, that research was the beginning of a lexicon for sound and color, defining auditory and visual stimuli through their quantitative effects on human physiology. He has looked to the shared characteristics between art and science for inspiration. His personal work balances social and historical references with allegorical storytelling, designed to provoke awareness and critical thinking. He performs his work live and organizes performances and public workshops.
Moving Toward Synchrony
Moving Towards Synchrony, by Johnny Tomasiello. Link to the work: https://player.vimeo.com/video/447983253
Moving Towards Synchrony is an immersive audio and visual work whose purpose is to explore the reciprocal relationship between electrical activity in the brain, as well as other biorhythms, and external stimuli that have been generated and defined by those same physiological events.
Ultimately, the aim of the work is to present an interactive, computer-assisted compositional performance system as an installation artwork and to teach participants how to bring about a positive change in their own physiology by learning to influence the functions of the autonomic nervous system. In addition to the neuroscience concerns mentioned above, the work is designed to explore the validity of using the scientific method as an artistic process: the methodology is to create an evidence-based system for the purpose of developing research-based projects.
The project collects physiological data through non-invasive neuroimaging by means of a Brain-Machine Interface (BMI) designed in Max 8. Brainwave and heart-rate data are used to generate real-time, interactive music and visual compositions that are simultaneously experienced by a subject. The melodic and rhythmic content, as well as the visuals, are derived from, and constantly influenced by, the subject's EEG and heart-rate readings. A subject focusing on the generative stimuli attempts to elicit a change in their physiological systems through their experience of this bidirectional feedback.
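As a hedged illustration of the general mapping principle described above (and not of the project's actual Max 8 patch), the following sketch turns synthetic EEG band power and a heart-rate value into simple musical control parameters; the band edges, mapping ranges and parameter choices are assumptions.

```python
# Toy biosignal-to-music mapping: band power and heart rate -> tempo, density, pitch.
import numpy as np

sr = 256                                    # EEG sample rate (assumption)
t = np.arange(sr * 4) / sr                  # 4 seconds of synthetic signal
eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
heart_rate_bpm = 72.0                       # placeholder biometric input

def band_power(x, fs, lo, hi):
    """Mean spectral power of x between lo and hi Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    sel = (freqs >= lo) & (freqs < hi)
    return spec[sel].mean()

alpha = band_power(eeg, sr, 8, 12)          # relaxation-related band
beta = band_power(eeg, sr, 13, 30)          # activation-related band
relaxation = alpha / (alpha + beta + 1e-12)

# Map biosignals to musical controls: calmer readings -> slower, sparser output.
tempo_bpm = float(np.interp(relaxation, [0.0, 1.0], [140, 60]))
note_density = float(np.interp(relaxation, [0.0, 1.0], [8, 2]))   # notes per bar
root_midi = int(np.interp(heart_rate_bpm, [50, 120], [60, 72]))   # C4..C5

print(tempo_bpm, note_density, root_midi)
```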
--------------------------------------
Claudia Jane Scroccaro is an Italian composer of instrumental and electroacoustic music. The sonic aspect holds a dominant role in her work, reflecting deep interests in electronic music and in music of the aural tradition. Her creative approach explores a musical dramaturgy shifting between a humanly perceivable listening experience and microphonic projections of the dynamic properties of sound onto multi-dimensional spaces, resulting in an alternation between kaleidoscopic movements and introspective explorations. She attended the 2019-2020 IRCAM Cursus and, as a DAAD scholarship holder, is currently enrolled in the Konzertexamen program in Composition at the HMDK Stuttgart, where she obtained her Master's degree in the class of Marco Stroppa while attending masterclasses held by Philippe Leroux and Franck Bedrossian. She was selected for the 2020-21 SWR Vokalensemble Academy and is a tutor for the Electronic Music Studio at the HMDK Stuttgart. Her music has been performed in Europe and in the USA (Ensemble Ascolta, ECCE Ensemble, Ensemble Suono Giallo, Concrete Timbre, EchtZeit Ensemble, Ensemble Musikfabrik) and she has been composer in residence at the Music Innovation and Science Centre in Vilnius. https://soundcloud.com/cjscroccaro
Detuning the space in I Sing the Body Electric (2020), for Double Bass and Electronics
In composing I Sing the Body Electric, for double bass and electronics, I was intrigued by the possibility of discovering an unheard voice of the double bass, dynamically detuning the strings in order to create new acoustic relationships that would also correspond to changes in the shape of the space. The initial idea sprang from the possibility of designing a musical dramaturgy that could explore the physical tension between the body of the instrument and that of the musician, Florentin Ginot. The listener is guided through a search for an inner balance between the electronics and the instrumental sounds, and this search becomes the formal process and dramaturgy of the piece. To attain this result there is a gradual transition from an eight-channel tape towards live-electronic sound processing, relying on two basic treatments, an Fq-Shift (frequency shift) and a multi-band filter, engineered inside Max through a real-time spectral analysis, controlled through Spat and performed live. The promises implied in the sonic and physical desires of these bodies, determined by their relationships, evolve through a process of spatial transformation in which the actions of deformation and distortion project the listener into an immersive, ever-changing and unpredictable space, shaped in real time according to the spectral changes of the sound of the double bass and controlled by a live-electronics performer.
IRCAM Cursus 2019-2020, Manifeste Festival 2020 - IRCAM, Paris. Double bass: Florentin Ginot. Live electronics: Claudia Jane Scroccaro. https://medias2.ircam.fr/x1e9fff / https://youtu.be/LyBUzCCSBVE (video recording with binaural mix)
--------------------------------------
Patching With Complex Data
I will show some recent experiments using the Max dictionary in a variety of scenarios related to generative sequencing. Since a patcher itself is in the form of a dictionary, we can also capture and manipulate this data, which may lead to applications ranging from remote collaboration to guided educational experiences.
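Because a Max patcher is saved as JSON, the "patcher as dictionary" idea can also be explored outside Max. A hedged sketch: load a .maxpat file, list its object boxes and shift them before saving a copy. The key names used here ("patcher", "boxes", "box", "maxclass", "patching_rect") reflect the usual .maxpat layout but should be treated as assumptions, and the file name is a placeholder.

```python
# Treating a Max patcher file as plain dictionary data.
import json

with open("example.maxpat", "r", encoding="utf-8") as f:
    patch = json.load(f)

boxes = patch["patcher"]["boxes"]
for entry in boxes:
    box = entry["box"]
    # Print the object class and, when present, its text (e.g. "metro 100").
    print(box.get("maxclass"), box.get("text", ""))
    # patching_rect is assumed to be [x, y, width, height]; nudge every box right.
    if "patching_rect" in box:
        box["patching_rect"][0] += 20.0

with open("example_shifted.maxpat", "w", encoding="utf-8") as f:
    json.dump(patch, f, indent=2)
```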
--------------------------------------
Dr. Miller Puckette (Harvard; mathematics) is known as the creator of Max and Pure Data. As an MIT undergraduate he won the 1979 Putnam mathematics competition. He was a researcher at the MIT Media Lab from its inception until 1986, then at IRCAM, and is now professor of music at the University of California, San Diego. He has been a visiting professor at Columbia University
and the Technical University of Berlin, and has received two honorary degrees and the SEAMUS award.
Modular software and remote workflows
My musical collaboration with percussionist Irwin took an unplanned turn when we started working remotely. Over the past year we've developed a workflow that allows us to perform together in real time using instruments that I write in Pure Data but Irwin plays in Ableton Live, with audio, control, and video streams bouncing back and forth between our offices. The solutions we've found
are interesting both in how we deal with latency limitations and also in that the distinction between environments and pluggable modules has shifted, so that an entire software environment can pretend to be a module inside a different one.
--------------------------------------
Karim Haddad was born in 1962 in Beirut, Lebanon. He studied at the national conservatory there until it closed its doors in 1975 due to the civil war, and then went on to study philosophy and literature. Haddad received six awards from the CNSMD de Paris in addition to the Diplôme Supérieur de Composition with honors. He has worked with composers such as A. Bancquart, P. Mefano, K. Huber and Emmanuel Nunes. This learning period was marked by his keen interest in non-tempered spaces and their strong relationship with temporal poetry. In 1992 and 1994 he took part in the Ferienkurse für Musik in Darmstadt, where he received a scholarship. In 1995, he took a class in computer music at IRCAM, and from that point on the computer became the only tool he used for the elaboration of his works. As a computer music expert, and more particularly as an expert in computer-assisted composition, he was given responsibility for technical support of the IRCAM Forum in 2000. He has developed several tools for the OpenMusic environment (synthesis control via Csound), as well as interfaces between this environment and score editors such as Finale and Lilypond. From 2008 to 2014, he taught computer-assisted orchestration at the CNSMD de Paris. In 2015, he joined the Musical Representations team at IRCAM as a scientist and simultaneously began the doctoral program in music research and composition.
Jean-Louis Giavitto: Senior computer scientist at CNRS, my work focuses on the development of new programming paradigms based on temporal and spatial relationships. In my previous life, I applied this research to the modelling and simulation of biological systems, especially in the field of morphogenesis, at the University of Evry and at Genopole, where I co-founded the IBISC laboratory (Informatics, Integrative Biology and Complex Systems). Since my arrival at IRCAM (January 2011), my work has focused on the representation and manipulation of musical objects for musical analysis, composition and performance on stage. I am especially interested in the specification of real-time interactions involving fine temporal relationships during performances, in the context of the Antescofo system used for the production of mixed-music pieces at IRCAM and elsewhere in the world. This technology now benefits everyone, thanks to the creation of a spinoff. I participated with Arshia Cont in the creation of the INRIA MuTAnt project-team within the RepMus team, and I am also deputy director of the joint research lab Ircam-CNRS-Sorbonne University.
--------------------------------------
Jacopo Greco d'Alceo, composer, director, dancer and choreographer, began studying contemporary dance at a young age in Austria. Because of his paternal Slavic origins, he was then obliged to take up musical studies. He settled in Milan, where he started practicing composition with Giovanni Verrando. Completely captivated by musical composition, he chose to become a composer. He then moved to the CNSMD de Lyon, where he found a prolific environment for his research. He believes music begins at the source of a movement, and all his pieces break down the boundaries between musical and dance composition. These interests lead him to explore space in a different way. His piece "soffio" (2018), for video, electronics and double bass, was created for the Pinacoteca di Brera gallery (Milan) and was also performed at the New York University Bobst Library (New York), where it received an award from the Mise-en Festival.
He pursued these ideas in his recent work "TRICOT" (2021), a video-choreography in the space of the Médiathèque Nadia Boulanger of the CNSMD, for which he is composer, director and choreographer.
Movement is always present in his pieces: "A letter to Taipei" (2020) is an electroacoustic, almost cinematographic piece in which the space between the artist and his work is so narrow that the piece becomes the composer.
Hybridation, a mind-body problem: live coding
In live coding, the possibilities are infinite. Every performance is like drawing a line between dots in space. Ideas can pour out suddenly or be prepared for later. My approach to this path came from some specific concerns: I was thinking in terms of structures and ways in which systems could react quickly to real-time inputs, with all the unchained, limitless freedom of a dancer moving in a studio, while at the same time mastering the potential expression of events conceived ahead of time. 1. Embodying the abstraction of code to become physical: contemporary dance as an inspiration for algorithms. 2. Composition and improvisation: a balance between live performance and deferred events. 3. Notation: code can be visualised as anything. 4. A trans-disciplinary approach to live performance and composition.
--------------------------------------
An electroacoustic music composer, I graduated from the Conservatory of Toulouse and the National Conservatory of Music and Dance of Lyon. I compose music for fixed media, as well as musical performances and multimedia installations (...). In every project, I take particular care with everything related to poetic expression. I truly believe that the tools and devices we use to compose are strongly related to musical perception. I have worked with several audio programming environments, such as Max MSP and Csound. I then went further by learning and using more general-purpose programming languages - C++, Lua, Ruby (...) - in order to build new musical software and tools. I wrote jo_tracker, a soundtracker for Csound, and I am currently working on Nocturnal, a 3D digital interactive score system for electroacoustic music. These projects offer different points of view on the act of composing. They all allow hybridization between different tools, whatever their nature (sensor, analog device, software, etc.).
Digital-analog hybridation
A hybrid relationship between digital and analog devices means that an analog tool can be controlled through a digital signal, combining the rich, crisp tones of electronic circuits with the extreme precision of a computer. Drawing on examples of technical achievements and reflections from my work over the last few years, I will give an overview of the main techniques and their musical implications in software and hardware. 1. Analog control: from simple automation to a signal philosophy. 2. Hybrid modules: Nebulae, Bela, DC-coupled interfaces (Expert Sleepers). 3. Writing music for analog modular devices: soundtracker, Csound and 3D digital scores.
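As a hedged sketch of the DC-coupled approach mentioned in point 2 above, the following code renders a slow control curve (an LFO plus a one-shot envelope) as an audio file; a DC-coupled interface such as those made by Expert Sleepers can, in principle, play such a file out as a control voltage. The sample rate, scaling and calibration are assumptions and would have to match the actual hardware.

```python
# Render a sub-audio control curve as a float WAV for a DC-coupled output.
import numpy as np
import soundfile as sf

sr = 48000
dur = 8.0
t = np.arange(int(sr * dur)) / sr

lfo = 0.4 * np.sin(2 * np.pi * 0.5 * t)         # 0.5 Hz sine LFO
envelope = np.exp(-t / 2.0)                      # slow exponential decay
cv = np.clip(lfo + 0.5 * envelope, -1.0, 1.0)    # combined control curve, full scale = +/-1.0

# A float WAV keeps the DC / sub-audio content intact for a DC-coupled converter.
sf.write("control_voltage.wav", cv.astype(np.float32), sr, subtype="FLOAT")
```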
--------------------------------------
Daiki NISHI, a composer and conductor originally from Japan, studies composition as a Kontaktstudent of Prof. Carlo Forlivesi at the HMDK Stuttgart. By 2020, he had received a diploma in composition from the Schola Cantorum with Nicolas Bacri, a superior diploma in composition from the École Normale de Musique de Paris with Michel Merlet, a music-researcher diploma in composition from the Conservatoire à Rayonnement Régional d'Aubervilliers with Jonathan Pontier, and a diploma in orchestral conducting from the Schola Cantorum de Paris with Adrian McDonelle. Most of his works are based on French neo-romanticism. He also researches algorithmic composition, mainly with Python and OpenMusic, especially in the field of fully or semi-automatic composition of traditional occidental music. His works have been performed in France, Japan and Germany, as well as Austria, Italy, South Korea and elsewhere.
Creations of Instrumental Music Made by the Noise in the Natural Phenomenon
For this presentation, I will illustrate how I use the application SPEAR (an analyser, editor, and converter from WAV files to SDIF files) to prepare SDIF files for input to OpenMusic and to generate simple music scores. The two sound materials I chose are the applause and the clock bell that marks 20:00 ("the twenty o'clock"), from the soundscape of my memories of "le premier confinement" (the first lockdown) in March 2020 in Paris. The goal is to show one possible path for music creation, but also to investigate why those soundscape features, such as their "harmonies" and "almost noise" components, had such a strong impact on me in those days. The three main objectives of this study are: detection of harmonic elements (overtone constructions) in the "almost noise" of natural phenomena; detection of dissonant elements (noise constructions) in tuned sound; and simplification of the two materials mentioned above to generate a purely instrumental music score. The first material, "almost noise" from a natural phenomenon, is not pure artificial noise such as white noise, but has features that specifically identify its sound, like a sea wave, for example. My hypothesis concerns in particular the constructions of harmonic overtones contained in the noise. The second material, "tuned sound", usually has certain harmonic-overtone constructions; however, the definitive element that identifies its sound is the "noise" component rather than its main harmonic construction.
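As a hedged stand-in for the first objective (SPEAR's partial analysis itself tracks peaks over time and exports them to SDIF), this sketch simply picks the most prominent peaks in the average spectrum of a recording, which gives candidate "harmonic elements" hidden in the almost-noise material; the file name and thresholds are assumptions.

```python
# Crude detection of prominent spectral components in a noisy recording.
import numpy as np
import soundfile as sf
from scipy.signal import find_peaks

x, sr = sf.read("applause.wav")
if x.ndim > 1:
    x = x[:, 0]

# Average magnitude spectrum over Hann-windowed frames.
N = 8192
hop = N // 2
frames = [x[i:i + N] * np.hanning(N) for i in range(0, len(x) - N, hop)]
mag = np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)
freqs = np.fft.rfftfreq(N, 1 / sr)

# Keep only clearly prominent peaks: candidate "harmonic elements" in the noise.
mag_db = 20 * np.log10(mag + 1e-12)
peaks, props = find_peaks(mag_db, prominence=12)
strongest = peaks[np.argsort(props["prominences"])[::-1][:10]]
for p in strongest:
    print(f"{freqs[p]:7.1f} Hz   {mag_db[p]:6.1f} dB")
```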
--------------------------------------
Born into a family of musicians, she received her first music lessons, learning piano from her father, the Turkish composer Burhan Önder. From the age of 11, she continued her piano studies at the Ankara State Conservatory, also attending elective preparatory lessons in composition. After graduating with honours from the Ankara Conservatory, she was accepted at the University of Music and Performing Arts Stuttgart, in the class of Prof. Marco Stroppa, for the Bachelor's degree in composition. In 2020 she was an active participant in the IRCAM Forum Paris as a representative student of the HMDK Stuttgart. Her music has been heard in concerts and festivals in Europe and Turkey and has been performed by Ensemble Ascolta and echtzeitEnsemble, among others.
Creation of supernatural formantic sounds with OM Chant Library in Open Music
For this presentation I will offer an insight into the artistic approach I used to compose the electronics for In Depth of a Dream. Beyond an extensive overview of the different creative techniques I combined to obtain the electronic sounds, the discussion will focus mostly on my personal use of FOF synthesis in the OpenMusic environment, more specifically with the Chant library, in order to create outlandish formantic sounds. In modifying the control parameters, I had the chance to discover a variety of sounds with a broad range of characteristics. We will therefore see how the different parameter settings I used led to these results, and how I achieved the various alien timbres by setting unusual control parameters in the OM Chant library. Finally, I will show how I integrated these different possibilities to create "the dream dimension" in my piece.
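For readers unfamiliar with FOF (the synthesis technique behind CHANT and OM-Chant), a minimal numpy sketch of a single formant follows: a short, damped sinusoidal grain at the formant frequency is fired once per fundamental period. The parameter values are arbitrary, and the real OM-Chant control structures are far richer than this.

```python
# Minimal single-formant FOF-style synthesis.
import numpy as np
import soundfile as sf

sr = 44100
dur = 2.0
f0 = 110.0          # fundamental: grain repetition rate
formant = 650.0     # formant centre frequency (roughly an "a" first formant)
bandwidth = 80.0    # controls how fast each grain decays
attack = 0.003      # grain attack time in seconds

grain_len = int(0.02 * sr)
tg = np.arange(grain_len) / sr
decay = np.exp(-np.pi * bandwidth * tg)
rise = np.where(tg < attack, 0.5 * (1 - np.cos(np.pi * tg / attack)), 1.0)
grain = rise * decay * np.sin(2 * np.pi * formant * tg)

# Fire one grain per fundamental period and overlap-add them.
out = np.zeros(int(sr * dur) + grain_len)
period = int(sr / f0)
for onset in range(0, int(sr * dur), period):
    out[onset:onset + grain_len] += grain

out /= np.max(np.abs(out))
sf.write("fof_formant.wav", out.astype(np.float32), sr)
```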
--------------------------------------
Brandon Lincoln Snyder is a Korean American composer currently based in Stuttgart, Germany. He received his Bachelor of Arts at Harvard University under Chaya Czernowin and Steven Kazuo Takasugi and is currently pursuing his Master's in Composition under Martin Schüttler. He is the founding director of Browser, a commissioning project for web-based music, and his music has been commissioned by the Bach Society Orchestra, Face the Music, Nebula Ensemble, and the Brattle Street Chamber Players. Delia Ramos Rodríguez received her first violin lesson at the age of 5 and has since won multiple prizes as a student under Álvaro Puyou at the "Conservatorio Profesional de Música Amaniel" in Madrid (Spain), where she played several solo concerts with its orchestra. The BBVA-Foundation scholarship-holding violinist finished her Bachelor's under Joaquín Torre at the "Real Conservatorio Superior" in Madrid in 2016 and her Master's studies under Prof. Anke Dill at the "Hochschule für Musik und Darstellende Kunst" Stuttgart (Germany) in 2019. She currently pursues her Master's in Contemporary Music at the same institution under Melise Mellinger and Nurit Stark and works as an assistant at the "Studio Neue Musik".
Meta-Music: Creating Multimedia Music From a Multilayered Compositional Process
Brandon Lincoln Snyder and Delia Ramos Rodríguez will present their piece "Music About Music About Background Music - II". For three performers, speech, instruments, electronics and video, the piece stacks multiple layers of speech representation on top of each other. The semantic meaning of the words layers on top of the acoustic musicality of the spoken voice, all of which covers a layer of visual text, all of which is represented as a video that plays simultaneously over live performances by Delia and Brandon of a similar layering. Delia and Brandon met several times to develop the material of the piece, primarily through structured improvisations and small sketches. This ultimately allowed the piece to find material hyper-specific to Delia and Brandon's strengths as performers and interpreters. Each layer of the piece was constructed separately, handling the same set of material through different formats and software: Open Music, DaVinci Resolve, MaxMSP, Ableton Live, and handwritten sheet music. In the end, the piece applies hyper-specific material to a chaotic structure. The multiple layers of the piece were reached through a multi-layered compositional process, one in which no single score, sketch, or video fully documents a performance of the work. This presentation illustrates how this collaborative process was tailored specifically to the needs and goals of the artistic project.
--------------------------------------
Niklas Lindberg is a composer and performer working with extended and custom formats, defining and designing the framework and the medium as well as the parameters and forms within them - ideally, with the media serving not only as transparent containers for the piece as a structural entity, but as self-reflective, conspicuous parts of the work in and of themselves, emphasizing the compounding material foundations of a work. While the format takes precedence in their work, the question steadily arises of where format or mediation begins and the mediating body ends - where the body meets interface, and to which agent the interface is an appendix. Largely, a musical situation is understood as happening when body meets interface meets body, and their work is concerned both with building abstracted musical interfaces and with transforming and embellishing more basal kinds of interfaces, like the(ir own) human voice, into other, impossible sounds as a musical expression. After studies in classical piano with Taru Kurki at the music conservatory in Falun, Niklas attended composition lessons at the Gotland School of Music Composition, the Norwegian Academy of Music and the Darmstädter Ferienkurse, with teachers including Per Mårtensson and Natasha Barrett. Currently they are doing a year abroad at the Hochschule für Musik und Darstellende Kunst Stuttgart with Piet Meyer and Martin Schüttler.
Presenting a piece for a vocal performer and electronics
I will be presenting a piece for a vocal performer and electronics which I am currently developing. The piece explores the thresholds between digital augmentation as hyper-instrument and more detached, superimposed processes. It is perhaps not so much about advanced and exotic-sounding processes as about the kitsch and the ubiquitous; a central idea is the contiguity of the over-polished with the raw and untreated, and common "beautifying" treatments taken beyond the confines of fidelity and used as musical syntax, articulation and transformation. In this spirit of beautification and over-polishedness, I will seek to integrate a process that is in "editing time" into a live performance with a high degree of responsiveness, along with more augmentative real-time processes, and possibly introduce the possibility of interacting with this as a performer in ways that allow for spontaneity and spur the performance in directions that are not entirely pre-determined. The piece will employ a combination of tracking and sequencing in different layers of processes which are more or less detached from the performed vocal but all stem from it, ranging from real-time interactive augmentations all the way to offline, edited surrogates for the real-time performance. Hopefully, this hybrid offline and real-time workflow and aesthetic will unlock some interesting potentials, and the perceptual thresholds (subtle or not) between a process that is in varying degrees "aligned with" the gestures of the voice, or independent of the voice it is treating, could be made into an expressive parameter.
--------------------------------------
Composer and pianist born in Toledo (Spain) in 1992, he is currently a member of Marco Stroppa's composition class at the Staatliche Hochschule für Musik und Darstellende Kunst of Stuttgart, where he continues his master's studies and is also a student of the contemporary-piano specialist Nicolas Hodges. He previously studied at the Conservatoire à Rayonnement Régional de Paris, the Conservatori Superior del Liceu de Barcelona, and the Conservatorio Superior de Música de Aragón. His main mentors in the field of composition have been personalities such as José María Sánchez-Verdú, Carlo Forlivesi, Édith Canat de Chizy and Ramon Humet. His works have been performed in different countries by BCN216, Nou Ensemble and Echtzeit Ensemble; directed and performed by musicians such as Nacho de Paz, Mikhaïl Bouzine and Francesc Prat; programmed at international festivals and projects such as Musica Senza Frontiere in Perugia or Musicant Picasso at the Museu Picasso of Barcelona; and commissioned in the frame of the Beyond Project at St. Eberhard (Stuttgart). He has attended master classes held by Oliver Knussen, Tristan Murail, Raphaël Cendo, Ramon Lazkano, Luis de Pablo, Alberto Posadas, Ramon Coll and Guido Arbonelli, among others. Likewise, he has actively participated in the workshops of different editions of the Mixtur festival in Barcelona and followed the courses of the Institut Français - Barcelona Modern and the Rafael Orozco Superior Conservatory of Córdoba.
OM-Chant as a filtering tool
The OpenMusic library OM-Chant provides a high-quality filter that makes possible a very accurate manipulation of previously recorded sounds. This aspect of the library may seem secondary in comparison with its varied synthesis processes, but it is actually a broad working path to explore. The OM-Chant filter offers several remarkable advantages not found in other software: the quality of the filtered sounds is clearly superior to that of programs designed to process sound live, so the results obtained from OM-Chant are suitable for mixing and composition. Moreover, the parameter control allowed by OpenMusic expands the precision of detail that today's composers seek to apply to the music of our times. In this presentation we will see three kinds of patches that represent three ways of applying the OM-Chant filter: using formant data, using break-point functions, and using the Lisp procedures typical of OM. After that we will hear a musical example in which all the sounds result from filtering the lowest note of a piano played mute, pizzicato or ordinario.
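As a loose, hedged analogue of "filtering from formant data" (and not CHANT's filter model, so the sonic result will only roughly resemble what OM-Chant produces), the following sketch passes a recording through a small bank of band-pass filters centred on vowel formants and sums the outputs; the file name and formant table are assumptions.

```python
# Formant-style filtering with a small bank of Butterworth band-pass filters.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

x, sr = sf.read("piano_lowest_note.wav")
if x.ndim > 1:
    x = x[:, 0]

# (centre frequency Hz, bandwidth Hz, linear gain): rough "a" vowel formants.
formants = [(700, 130, 1.0), (1220, 160, 0.5), (2600, 200, 0.25)]

y = np.zeros_like(x)
for fc, bw, gain in formants:
    sos = butter(2, [fc - bw / 2, fc + bw / 2], btype="bandpass", fs=sr, output="sos")
    y += gain * sosfilt(sos, x)

y /= np.max(np.abs(y)) + 1e-12
sf.write("filtered.wav", y.astype(np.float32), sr)
```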
--------------------------------------
Artificial Intelligence
Dr Matt Lewis is a sound artist and musician and leads the Sound Pathway at the Royal College of Art in London. He has exhibited and performed nationally and internationally in countries including Austria, Brazil, Portugal, Serbia and the USA, in festivals and venues such as Whitechapel Gallery, Café Oto, The Roundhouse, Diapason NYC, MK Gallery and Centro Cultural Sao Paulo. Recent commissions include Clandestine Airs with Resonance FM and VOID; no such thing as empty space, in collaboration with the deafblind charity Sense; Where is the Rustling Wood?, part of Metal Culture's Harvest 15 with Studio Orta; Music for Hearing Aids, as part of Unannounced Acts of Publicness in Kings Cross; and Exploded Views with Turner Contemporary. From 2012-13 he was an Artist Fellow at Central St Martins and was twice a resident artist with Metal Culture. Matt previously taught at CSM, the University of Greenwich and LCC, and is co-director of Call & Response, based at Somerset House, one of Europe's only independent sound spaces.
Becoming Soundscape
Drawing on collaborative work with residents, acousticians, social scientists, musicians and local government, this project explores how Artificial Intelligence and Machine Learning might be made more inclusive when it comes to participatory sonic design. The work uses immersive listening spaces and AI to create critical environments and experiences where iterations can be experienced, mixed and problematised by those directly affected by planning and development. The aim is to suggest ways in which we might both critique AI and take a more holistic approach to sound in the designed environment, thereby honouring the affective, contextual nature of sonic experience. These suggested approaches echo some of the principles of the Design Justice Network and Sound Thinking (Henriques, 2011), as well as traditional methodologies and strategies from group musical improvisation. The work is realised in a series of immersive and device-based speculative fictions that employ AI, audio description and generative soundscape composition; examples will be presented in an interactive, participatory format during the presentation.
--------------------------------------
Robert B. Lisek PhD is an artist, mathematician and composer who focuses on systems, networks and processes (computational, biological, social). He is involved in a number of projects focused on media art, creative storytelling and interactive art. Drawing upon post-conceptual art, software art and meta-media, his work intentionally defies categorization. Lisek is a pioneer of art based on Artificial Intelligence and Machine Learning. He is also a composer of contemporary music, the author of many projects and scores at the intersection of spectral, stochastic and concrete music, musica futurista and noise. Lisek is also a scientist who conducts research in the foundations of science (mathematics and computer science); his research interests are category theory and higher-order algebra in relation to artificial general intelligence. Lisek is the founder of Fundamental Research Lab and the ACCESS Art Symposium. He is the author of some 300 exhibitions and concerts, including: GOLEM - ZKM Karlsruhe; QUANTUM ENIGMA - Harvestworks Center New York and STEIM Amsterdam; TERROR ENGINES - WORM Center Rotterdam; Secure Insecurity - ISEA Istanbul; DEMONS - Venice Biennale (accompanying events); Manifesto vs. Manifesto - Ujazdowski Castle of Contemporary Art, Warsaw; NEST - ARCO Art Fair, Madrid; Float - Lower Manhattan Cultural Council, NYC; WWAI - Siggraph, Los Angeles.
Meta-composer
Meta-Composer is a neural network equipped with the ability to combine structures and partial compositions in a flexible, combinatorial way to create a new, consistent overall composition. Meta-Composer is therefore an intelligent meta-agent that monitors worlds of sound events and intervenes in their structure according to a previously trained model. In my approach, the performer interacts with sound sequences. The goal of Meta-Composer is to understand the user's behaviour and create new, interesting compositions by adding new musical elements and structures. Meta-Composer creates possible future trajectories by combining and generalising previously created sub-structures, and selects the potentially most interesting configurations of sound events. The software is autonomous, but the elements, goals and rules of the initial story are created by the human composer. Meta-Composer can therefore be treated as a creative partner and as a tool supporting the work of a composer or performer. The Meta-Composer framework is based on meta-learning, the next generation of artificial-intelligence systems: the artificial agent does not learn how to master a particular task but how to adapt quickly to new tasks. The goal of meta-learning is to train agents to learn knowledge that can be generalised to new problems and performances.
--------------------------------------
Jason Palamara is a technologist, composer-performer, and educator from New Jersey, serving as an Assistant Professor of Music Technology at Indiana University-Purdue University Indianapolis (IUPUI). He specializes in the development of machine learning enabled performance technologies for music, AI music software and the creation of new music for dance. He is the founder and director of IUPUI’s DISEnsemble (Destructive/Inventive Systems Ensemble). His latest album, [bornwith 2brains] is available on iTunes, Spotify, YouTube, CDBaby and anywhere else one might look for new music. He regularly performs and composes music for modern dance as a solo artist and maintains a long-term creative partnership with percussionist-composer Scott Deal, with whom he has designed the AVATAR, an application which uses machine learning to play along with a live improvisation.
AVATAR - A Machine Learning Enabled Performance Technology for Improvisation
AVATAR is a system designed by Jason Palamara and Scott Deal which actively listens to live audio and plays along appropriately in a style patterned after a given model. The AVATAR system is made to be user-friendly and works as a standalone Max patch or as a M4L device in Ableton Live. Using AVATAR, a musician can easily create their own machine learning models from any standard MIDI file and use that model to create new improvisations in tandem with a live improvised performance. This presentation will go into depth about the inner workings of how AVATAR was constructed, show some clips of AVATAR performances and give a quick demonstration of AVATAR being set up and used in a new Ableton session.
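AVATAR's internals are not described here; purely as a hedged illustration of the general workflow ("learn a model from a standard MIDI file, then improvise in its style"), this sketch builds a first-order Markov chain of pitches from a MIDI file and samples a short continuation. The file name is a placeholder, and AVATAR itself runs in Max/M4L with a considerably more sophisticated model.

```python
# Toy "learn from a MIDI file, then improvise" workflow with a Markov chain.
import random
from collections import defaultdict
import mido

# Collect the pitch sequence from a standard MIDI file.
pitches = []
for msg in mido.MidiFile("model_source.mid"):
    if msg.type == "note_on" and msg.velocity > 0:
        pitches.append(msg.note)

# Transition table: which pitches tend to follow which.
transitions = defaultdict(list)
for a, b in zip(pitches, pitches[1:]):
    transitions[a].append(b)

# Sample a 16-note "improvisation" in the style of the source file.
random.seed(3)
current = random.choice(pitches)
phrase = [current]
for _ in range(15):
    current = random.choice(transitions.get(current, pitches))
    phrase.append(current)

print(phrase)
```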
--------------------------------------
Anna Huang is a Research Scientist at Google Brain, working on the Magenta project. She is the creator of Music Transformer, and the ML model Coconet that powered Google’s first AI Doodle, the Bach Doodle. In 2 days, users around the world spent 350 years composing and the ML model harmonized more than 50 million melodies. Her work is at the intersection of machine learning, human-computer interaction, and music, publishing at conferences such as ICLR, IUI, CHI, and ISMIR. She is currently an editor for TISMIR's special issue on AI and Music Creativity, and a judge and organizer for the AI Song Contest. She holds a PhD in computer science from Harvard University, a masters in media arts and sciences from the MIT Media Lab, and a B.S. in computer science and B.M. in music composition both from the University of Southern California.
From generative models to interaction to the AI Song Contest
I'll start by discussing how we address some of the ML challenges in music modeling. The result is Music Transformer, a self-attention-based model that generates music that sounds coherent across multiple time scales, from the 10-millisecond scale of expressive timing in performance to the minute scale of storytelling. Next, I'll illustrate how taking a human-centered approach enabled us to go beyond common sequence-modeling assumptions and support user interaction. The result is Coconet, which powered Google's first AI Doodle, the Bach Doodle, in two days harmonizing more than 50 million melodies from users around the world. Yet in the AI Song Contest last year, we found that teams faced three common challenges when using ML tools in their songwriting process: end-to-end ML models were not decomposable enough for musicians to tweak individual musical components, were not aware of the larger musical context they were generating for, and were not easily steerable toward a certain effect or mood. These observations bring us back to the drawing board: how can we learn from musicians' needs to design our next iteration of ML models and tools?
Douglas Eck is a Principal Scientist at Google Research and a research director on the Brain Team. His work lies at the intersection of machine learning and human-computer interaction (HCI). Doug created and helps lead Magenta, an ongoing research project exploring the role of machine learning in the process of creating art and music. He is also a leader of PAIR, a multidisciplinary team that explores the human side of AI through fundamental research, building tools, creating design frameworks, and working with diverse communities. Doug is active in many areas of basic machine learning research, including natural language processing (NLP) and reinforcement learning (RL). In the past, Doug worked on music perception, aspects of music performance, machine learning for large audio datasets and music recommendation. He completed his PhD in Computer Science and Cognitive Science at Indiana University in 2000 and went on to a postdoctoral fellowship with Juergen Schmidhuber at IDSIA in Lugano, Switzerland. Before joining Google in 2010, Doug was faculty in Computer Science at the University of Montreal (MILA machine learning lab), where he became Associate Professor. For more information see http://g.co/research/douglaseck.
An overview of AI for Music and Audio Generation
I'll discuss recent advances in AI for music creation, focusing on Machine Learning (ML) and Human-Computer Interaction (HCI) coming from our Magenta project (g.co/magenta). I'll argue that generative ML models by themselves are of limited creative value because they are hard to use in our current music creation workflows. This motivates research in HCI and especially good user interface design. I'll talk about a promising audio-generation project called Differentiable Digital Signal Processing (DDSP; Jesse Engel et al.) and about recent progress in modeling musical scores using Music Transformer (Anna Huang et al.). I'll also talk about work done in designing experimental interfaces for composers and musicians. Time permitting, I'll relate this to similar work in the domain of creative writing. Overall my message will be one of restrained enthusiasm: recent research in ML has offered some amazing advances in tools for music creation, but aside from a few outlier examples, we've yet to bring these models successfully into creative practice.
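The core of DDSP is a differentiable harmonic-plus-noise synthesizer whose controls (fundamental frequency, per-harmonic amplitudes, filtered noise) are predicted by a neural network. The hedged, non-differentiable sketch below reproduces only the harmonic part, with hand-drawn curves standing in for the network's outputs; it is not the DDSP library API.

```python
# Harmonic synthesizer driven by explicit control curves (DDSP-style, simplified).
import numpy as np
import soundfile as sf

sr = 16000
dur = 2.0
n = int(sr * dur)
n_harmonics = 16

# Hand-drawn "controls" standing in for a network's predictions.
f0 = np.linspace(220.0, 330.0, n)                        # gliding fundamental
overall_amp = np.hanning(n)                               # swell in and out
harmonic_weights = 1.0 / np.arange(1, n_harmonics + 1)    # gentle spectral roll-off
harmonic_weights /= harmonic_weights.sum()

phase = 2 * np.pi * np.cumsum(f0) / sr                    # integrate f0 to get phase
audio = np.zeros(n)
for k in range(1, n_harmonics + 1):
    partial = np.sin(k * phase) * harmonic_weights[k - 1]
    partial[k * f0 >= sr / 2] = 0.0                       # mute aliasing partials
    audio += partial

audio *= overall_amp
sf.write("ddsp_sketch.wav", audio.astype(np.float32), sr)
```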
--------------------------------------
Jérôme Nika is a researcher and a computer music designer / musician specialized in human-machine musical interaction. He graduated from the French Grandes Écoles Télécom ParisTech and ENSTA ParisTech. In addition, he studied acoustics, signal processing and computer science as applied to music and composition. He specialized in the applications of computer science and signal processing to digital creation and music through a PhD (Young Researcher Prize in Science and Music, 2015; Young Researcher Prize awarded by the French Association of Computer Music, 2016), and then as a researcher at Ircam. In 2019-2020, he was in residency at Le Fresnoy – Studio National des Arts Contemporains, and worked as a computer music designer / musician / researcher on 3 projects: Lullaby Experience, an evolutive project by composer Pascal Dusapin, and two improvised music projects: Silver Lake Studies, in duo with Steve Lehman, and C'est pour ça, in duo with Rémi Fox. In 2020, he became a permanent researcher in the Music Representations Team at Ircam. More than 60 artistic performances have brought the tools conceived and developed by Jérôme Nika into play since 2016 (Onassis Center, Athens, Greece; Ars Electronica Festival, Linz, Austria; Frankfurter Positionen festival, Frankfurt; Annenberg Center, Philadelphia, USA; Centre Pompidou, Collège de France, LeCentquatre, Paris, France; Montreux Jazz festival, etc.). Among them, the DYCI2 library of generative musical agents combines machine learning models and generative processes with reactive listening modules. This library offers a collection of "agents/instruments" embedding a continuum of strategies ranging from pure autonomy to meta-composition thanks to an abstract "scenario" structure. His research focuses on the introduction of authoring, composition, and control in creative interactions with generative agents. This work has led to numerous collaborations and musical productions, particularly in improvised music (Steve Lehman, Bernard Lubat, Benoît Delbecq, Rémi Fox), contemporary music (Pascal Dusapin, Marta Gentilucci), and contemporary art (Vir Andres Hera, Le Fresnoy - Studio National des Arts Contemporains).
Interaction with musical generative agents
The Musical Representations team explores the paradigm of computational creativity using devices inspired by artificial intelligence, particularly in the sense of new symbolic musician-machine interactions. The presentation will focus in particular on a line of research mixing the notions of interaction, musical memory and generative processes: composing generative agents at the scale of narration or behaviour. We will present the novelties of the DYCI2 library, which implements the results of this research: a library of generative agents for performance and musical composition combining free, planned/specified, and reactive approaches to generation from a corpus.
--------------------------------------
Neil Zeghidour is a Research Scientist at Google Brain in Paris, and teaches automatic speech processing at the MVA master of ENS Paris-Saclay. He previously graduated with a PhD in Machine Learning from Ecole Normale Superieure in Paris, jointly with Facebook AI Research. His main research interest is to integrate DSP and deep learning into fully learnable architectures for signal processing.
From psychoacoustics to deep learning: learning low-level processing of sound with neural networks
Mel-filterbanks are fixed, engineered audio features that emulate human perception and have been used throughout the history of audio understanding up to today. However, their undeniable qualities are counterbalanced by the fundamental limitations of handmade representations. In this talk, I will present LEAF, a new, lightweight, fully learnable neural network that can be used as a drop-in replacement for mel-filterbanks. LEAF learns all operations of audio feature extraction, from filtering to pooling, compression and normalization, and can be integrated into any neural network at a negligible parameter cost to adapt to the task at hand. I will show how LEAF outperforms mel-filterbanks on a wide range of audio signals, including speech, music, audio events and animal sounds, providing a general-purpose learned frontend for audio classification.
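As a rough sketch of what a "fully learnable frontend" means here (an assumption-laden toy, not the LEAF implementation: LEAF constrains its filters to Gabor shapes and learns PCEN compression), filtering, pooling and compression become ordinary differentiable layers trained jointly with the classifier:

    # Toy learnable frontend, illustrating the concept behind LEAF (not its code).
    import torch
    import torch.nn as nn

    class LearnableFrontend(nn.Module):
        def __init__(self, n_filters=40, win=400, hop=160):
            super().__init__()
            # learnable filterbank (LEAF constrains these to Gabor filters)
            self.filters = nn.Conv1d(1, n_filters, kernel_size=win,
                                     padding=win // 2, bias=False)
            # pooling down to frame rate (LEAF learns Gaussian lowpass pooling)
            self.pool = nn.AvgPool1d(kernel_size=win, stride=hop)
            # learnable compression offset (LEAF uses learnable PCEN instead)
            self.offset = nn.Parameter(torch.full((n_filters, 1), 1e-3))

        def forward(self, wav):                     # wav: (batch, 1, samples)
            x = self.filters(wav).abs()             # rectified sub-band signals
            x = self.pool(x)                        # frame-rate envelopes
            return torch.log(x + self.offset)       # compressed features

    features = LearnableFrontend()(torch.randn(2, 1, 16000))  # (2, 40, frames)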
--------------------------------------
Xtextures - Convolutional neural networks for texture synthesis and cross synthesis
Neural style transfer applied to images has received considerable interest and has triggered many research activities aiming to use the underlying strategies for the manipulation of music or sound. While the many fundamental differences between sounds and images limit the usefulness of a direct translation, recent research in the Analysis/Synthesis team has demonstrated that an approach rather similar to the one used to manipulate painting style in images allows for quasi-transparent analysis/resynthesis of sound textures. Instead of working on 2D images, in the case of sound textures the convolutional networks operate on the complex STFT.
The presentation will introduce the Xtextures command line software, which is available in the Forum and allows using these techniques not only for the resynthesis of textures but also, in a more creative way, for the texturization of arbitrary sounds.
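For readers unfamiliar with the image-style-transfer machinery being referenced, the sketch below shows the generic Gram-matrix texture statistics it relies on (an illustration of the general technique, not the Xtextures code, which applies convolutional networks to the complex STFT):

    # Generic Gram-matrix texture statistics, as used in image style transfer.
    import torch

    def gram_matrix(features):
        # features: (channels, height, width) feature maps from one CNN layer
        c = features.shape[0]
        f = features.reshape(c, -1)
        return f @ f.T / f.shape[1]                 # channel-by-channel correlations

    def texture_loss(layers_ref, layers_gen):
        # the synthesized signal is optimized so that its feature statistics
        # match those of the reference texture across several CNN layers
        return sum(((gram_matrix(a) - gram_matrix(b)) ** 2).mean()
                   for a, b in zip(layers_ref, layers_gen))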
--------------------------------------
Philippe Esling received an MSc in Acoustics, Signal Processing and Computer Science in 2009 and a PhD on multiobjective time series matching in 2012. He was a post-doctoral fellow in the Department of Genetics and Evolution at the University of Geneva in 2012. He has been a tenured associate professor at IRCAM (Paris 6) since 2013. In this short time span, he has authored and co-authored over 15 peer-reviewed papers in prestigious journals such as ACM Computing Surveys, Proceedings of the National Academy of Sciences, IEEE TASLP and Nucleic Acids Research. He received a young researcher award for his work in audio querying in 2011 and a PhD award for his work in multiobjective time series data mining in 2013. In applied research, he developed and released the first computer-aided orchestration software, Orchids, commercialized in fall 2014 and already used by a wide community of composers. He has supervised six Master's interns and a C++ developer for a full year, and is currently directing two PhD students. He is the lead investigator in time series mining at IRCAM, main collaborator in the international France-Canada SSHRC partnership, and the supervisor of an international workgroup on orchestration.
Tools for Creative AI and Noise
We will present the latest creative tools developed by the RepMus team (ACIDS project), enabling real-time audio synthesis, music generation and production, and synthesizer control, all released as open-source code along with Max4Live and VST devices. This presentation will focus on a deeper rhetorical question about the place of noise in AI, both as an end and as a means.
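As one hedged example of how such open-source real-time synthesis models (for instance RAVE, from the ACIDS project) are typically used outside of the Max4Live/VST devices: they can be distributed as exported TorchScript files and run from Python. The file name, calling convention and tensor layout below are assumptions for illustration, not a documented API.

    # Hedged usage sketch; the export name and tensor layout are assumptions.
    import torch

    model = torch.jit.load("exported_model.ts")     # hypothetical exported model
    audio = torch.randn(1, 1, 2 ** 16)              # assumed (batch, channel, samples)
    with torch.no_grad():
        resynthesis = model(audio)                  # pass audio through the model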
--------------------------------------
Voice
Fabio Cifariello Ciardi (1960) is a composer interested in using sound and technology to mine real-world phenomena. Since 2006 he has been interested in the instrumental transcription of speech. He studied with Tristan Murail, Philippe Manoury (IRCAM), and Franco Donatoni (Accademia S. Cecilia). His compositions have received awards in competitions such as L. Russolo 1992, the ICMC 1993 CD selection, IMEB-Bourges 1998, Valentino Bucchi 1999, and the AITS "Best sound in Italian motion pictures 2011" (Rome, Italy). His works have been commissioned by the Venice Biennale, Orchestra Haydn di Trento e Bolzano, Divertimento Ensemble - Ernst von Siemens Music Foundation, Institut für Neue Musik Freiburg, Singapore University, Stockholm Electronic Music Studio, and IMEB Bourges.
In 2006, Arte.tv produced Piroschka Dossi and Nico Weber's documentary "Contre-attaque - Quand l'Art prend l'économie pour Cible: La Spéculation" about Cifariello Ciardi's sonifications of stock market data.
He has developed software algorithms for dissonance calculation, sound spatialization, financial data sonification, speech instrumental transcription, and several computer-aided composition tools.
Cifariello Ciardi is a tenured professor of Composition at the Trento Conservatory (www.conservatorio.tn.it) and one of the founding members of Edison Studio (www.edisonstudio.it).
Shrinking voices: strategies and tools for the instrumental transcription of speech segments
In the past as well as today, the prosody of speech has piqued composers' interest, even where purely instrumental music is concerned. Today, however, despite the flourishing of increasingly effective audio-to-MIDI converters, scoring speech patterns for acoustic instruments remains an elusive issue. This is particularly the case when the original source is not concurrently audible and/or when simple f0 tracking does not satisfy the composer's needs.
Within this context, the presentation will illustrate a set of Open Music modules specifically tailored for the analysis and the compositional manipulation of prosodic descriptors.
Using temporal and spectral information represented in the SDIF format, algorithms for pitch rectification (Mertens & d'Alessandro, 1995) and harmonicity-based segmentation (Parncutt & Strasburger, 1994; MacCallum & Einbond, 2007) have been implemented, together with several utilities for data filtering, clustering and quantization. All modules share the same input-output data format to allow smooth user interaction and feedback.
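As an informal illustration of the kind of pre-processing such a transcription chain requires (a Python sketch under an assumed input file name, not the Open Music modules themselves), one can extract a speech f0 contour and quantize it to the nearest equal-tempered semitone before any further stylization or segmentation:

    # Minimal f0 extraction and semitone quantization sketch (not the OM modules).
    import librosa
    import numpy as np

    y, sr = librosa.load("speech.wav", sr=None)              # hypothetical input file
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    midi = librosa.hz_to_midi(f0)                            # continuous pitch in MIDI units
    quantized = np.where(voiced, np.round(midi), np.nan)     # nearest semitone; unvoiced frames dropped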
Although the speed of spectral changes in speech production exceeds both human and instrumental capabilities, the computer-assisted transcription of the spoken voice is worth considering for at least two reasons. Firstly, it places the prosodic features of speech at the roots of musical communication. Secondly, and on a more general note, it might contribute to the further development of tools for the orchestration of dynamically changing spectra. Orchestral and ensemble renderings of speech excerpts will be presented to outline the current state of the research.
References: Mertens, P. & d'Alessandro, Ch. (1995). Pitch contour stylization using a tonal perception model. Proceedings of the International Congress of Phonetic Sciences 13(4), 228-231, Stockholm. MacCallum, J. & Einbond, A. (2007). Real-time analysis of sensory dissonance. In International Symposium on Computer Music Modeling and Retrieval (pp. 203-211). Springer, Berlin, Heidelberg. Parncutt, R. & Strasburger, H. (1994). Applying psychoacoustics in composition: "harmonic" progressions of "nonharmonic" sonorities. Perspectives of New Music, 88-129.
--------------------------------------
Greg Beller works as an artist, a researcher, a teacher and a computer designer for the contemporary arts. At the nexus of arts and sciences at IRCAM, he has been successively a PhD student working on generative models of expressivity and their applications to speech and music, a computer music designer, the director of the Research/Creation Interfaces Department, and the product manager of the IRCAM Forum. Founder of the Synekine Project, he is currently pursuing a second PhD on "Natural Interfaces for Computer Music" at the HfMT Hamburg, in the creation and performance of artistic moments.
Melodic Scale and Virtual Choir, Max ISiS
In this presentation, Greg Beller will present recent developments in voice processing. Melodic Scale is a Max For Live device that automatically modifies a melodic line in real time by changing its scale, mode or temperament. Virtual Choir is a Max For Live device that creates a choir effect in real time, harmonizing the voice on different musical scales. Max ISiS is a Max GUI for the ISiS singing synthesis software.
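A toy sketch of the core operation behind a "Melodic Scale"-type effect (this is not the device's code; the scale and reference pitch are arbitrary choices for illustration): incoming pitches are snapped to the nearest degree of a chosen scale before resynthesis or pitch shifting.

    # Toy pitch-to-scale quantization, illustrating the idea behind Melodic Scale.
    import numpy as np

    def snap_to_scale(freq_hz, degrees=(0, 2, 4, 5, 7, 9, 11), ref_hz=440.0):
        midi = 69 + 12 * np.log2(freq_hz / ref_hz)            # frequency -> MIDI pitch
        octave, pitch_class = divmod(midi, 12)
        nearest = min(degrees, key=lambda d: abs(d - pitch_class))
        return ref_hz * 2 ** ((octave * 12 + nearest - 69) / 12)

    print(snap_to_scale(450.0))   # snaps to the nearest C-major degree (here A4, 440 Hz)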
--------------------------------------
Axel Roebel is research director at IRCAM and head of the Sound Analysis-Synthesis team (AS). He received the Diploma in electrical engineering from Hannover University in 1990 and the Ph.D. degree (summa cum laude) in computer science from the Technical University of Berlin in 1993. In 1994 he joined the German National Research Center for Information Technology (GMD-First) in Berlin, where he continued his research on using artificial neural networks for modeling time series of nonlinear dynamical systems. In 1996 he became assistant professor for digital signal processing in the Communication Science Department of the Technical University of Berlin. In 2000, he obtained a research scholarship at CCRMA, Stanford University, where he started an investigation into adaptive sinusoidal modeling. In 2000 he joined the Sound Analysis-Synthesis team of IRCAM, where he obtained his Habilitation from Sorbonne Université in 2011 and where he became research director in 2013. He has developed state-of-the-art speech and music analysis and transformation algorithms and is the author of numerous libraries for signal analysis, synthesis and transformation, for example SuperVP, a software for music and speech signal analysis and transformation that has been integrated into numerous professional audio tools. He has recently started to investigate signal processing algorithms based on deep learning. He has published more than 100 publications in international journals and conferences.
Nicolas Obin is an associate professor at the Faculty of Sciences of Sorbonne Université and a researcher in the Sound Analysis and Synthesis team at Ircam (STMS lab - Ircam/CNRS/Sorbonne Université). He holds a PhD in computer science on the modeling of speech prosody and speaking style for text-to-speech synthesis (2011), for which he obtained the best PhD thesis award from La Fondation Des Treilles in 2011. He is a researcher in audio signal processing, machine learning, and statistical modeling of sound signals, with a specialization in speech processing. His main area of research is the generative modeling of expressivity in spoken and singing voices, with applications to various fields such as speech synthesis, conversational agents, and computational musicology. He is actively involved in promoting digital science and technology for the arts, culture, and heritage. In particular, he has collaborated with renowned artists (Georges Aperghis, Philippe Manoury, Roman Polansky, Philippe Parreno, Eric Rohmer, André Dussolier) and has helped to reconstitute the digital voices of public figures, as in the artificial cloning of André Dussolier's voice (2011), the short film Marilyn (P. Parreno, 2012) and the documentary Juger Pétain (R. Saada, 2014). He regularly gives guest lectures at renowned institutions (Collège de France, Ecole Normale Supérieure, Sciences Po) and organizations (CNIL, AIPPI), and appears in the press and the media (Le Monde, Télérama, TF1, France 5, Arte, Pour la Science).
Yann Teytaut is a Ph.D. student at IRCAM in the Sound Analysis/Synthesis team. A former Master's student at IRCAM in the ATIAM program (Acoustics, Signal processing and Computer science applied to Music), he became familiar with alignment algorithms, and specifically audio-to-score alignment, during an internship at Antescofo SAS. His current work investigates deep learning-based speech and singing alignment with text/phonemes towards the analysis and transformation of musical singing voice style (ANR project ARS).
Deep learning for Voice processing
Deep neural networks are increasingly dominating the research activities in the Analysis/Synthesis team and elsewhere. The session will present some of the recent results of the research activities related to voice processing with deep neural networks. The presentation will notably discuss the following topics (a short illustration of the CTC criterion follows the list):
- Analysis: F0 analysis with convolutional neural networks, Speech/Singing alignment with the CTC loss (ANR project ARS)
- Neural vocoder (ANR projects ARS and theVoice)
- Voice Conversion (ANR project theVoice).
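As a generic, minimal illustration of the CTC criterion mentioned for speech/singing alignment (plain PyTorch, not the team's ARS project code), frame-wise phoneme posteriors can be aligned to a target phoneme sequence without any pre-computed frame-level labels:

    # Generic CTC loss example in PyTorch (illustrative, not the ARS project code).
    import torch
    import torch.nn as nn

    frames, batch, n_phonemes = 200, 1, 40                    # 40 phoneme classes + blank id 0
    logits = torch.randn(frames, batch, n_phonemes + 1, requires_grad=True)
    log_probs = logits.log_softmax(dim=-1)                    # (T, N, C) frame-wise posteriors
    targets = torch.randint(1, n_phonemes + 1, (batch, 30))   # a 30-phoneme utterance

    ctc = nn.CTCLoss(blank=0)
    loss = ctc(log_probs, targets,
               input_lengths=torch.full((batch,), frames),
               target_lengths=torch.full((batch,), 30))
    loss.backward()    # gradients flow back to the acoustic model producing the logits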
--------------------------------------
David Guennec is a researcher in computer science with a passion for the history of sound reproduction, specializing in the field of new voice technologies. After a PhD on speech synthesis, he moved on to the creation of vocal assistants covering the entire voice processing chain, from speech recognition to synthesis and natural language understanding. Currently working at ViaDialog, he focuses mainly on speech synthesis and recognition.
ViaSpeech is a collection of conversational artificial intelligence modules by ViaDialog. These technologies allow companies and large organizations to automate part of their telephone call flows, to detect the emotions expressed during customer exchanges, to respond with expressive synthetic voices, and to understand and answer textual exchanges by e-mail, SMS, chat, and consumer messaging platforms. ViaDialog is a reference in Europe in the Customer Interaction Management industry, with an R&D laboratory in Lannion where they design high-performance, industrial, low-latency tools with a high level of GDPR protection. Their teams of data scientists and PhDs in artificial intelligence cover all scientific and technological aspects and design high-performance solutions in French, English, German, Italian, and Spanish. Their solutions are part of a larger ecosystem of third-party platforms and are natively integrated in the ViaFlow solution, a pioneer in the automation of business actions in customer relationship centers.
Towards helpful, customer-specific Text-To-Speech synthesis
The subject of automatic speech synthesis began to be popularised as early as the 1990s. Each of us has had to deal with the automatic answering-machine voices that made us all suffer in the early days. Today, however, progress in both language comprehension and the acoustic quality of speech synthesis has helped us make giant leaps forward, and new vocal services are currently seeing their quality and capabilities improve significantly, with increasingly human-sounding and expressive voices. In this presentation, I will briefly review recent advances in speech synthesis. After this introduction, I will discuss topics related to the customization of synthetic voices to the customer's needs, on several levels. First, at the level of the main components of oral expression: language, speech style, language register and gender, for example. Then, I will address issues at the level of the utterance, prosodic for the most part (pitch and flow manipulation). Finally, I will finish by discussing the subsidiary elements to be taken into consideration in order to best meet the needs of customers and end-users of synthetic voices in our constantly changing world.
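As a small, generic illustration of utterance-level prosody control (standard W3C SSML attributes; no ViaDialog-specific API is implied), pitch and speaking rate can be expressed as markup on the text sent to a synthesizer:

    # Generic SSML prosody example built as a Python string (illustrative only).
    ssml = """<speak>
      <prosody pitch="+5%" rate="90%">
        Your request has been registered.
      </prosody>
    </speak>"""
    print(ssml)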
--------------------------------------
With the support of: