Speakers and presentations

IRCAM Forum Workshops Hors les Murs Montreal, February 2021


SPEAKERS

IRCAM PRESENTERS

Carpentier Thibaut

Conforti Simone - “Within the sound”, exploring new sounds via TS2

Esling Philippe

Giavitto Jean-Louis - Designing cyber-temporal systems with Antescofo

GUEST COMPOSERS

Blondeau Sasha, Cella Carmine Emanuele, Di Castri Zosha

Fernandez José Miguel - Centralized score: AntesCollider and other artistic applications in Antescofo

Hervé Jean-Luc, O'Callaghan James, Leroux Philippe, Spiropoulos Georgia

MCGILL DIGITAL COMPOSITION STUDIO STUDENT COMPOSERS

Barash Omer, Blanchard Joy, Han Xue, Regnier Jonas

SPEAKERS

Anésio Azevedo Costa Neto / Instituto Federal São Paulo - Universidade Brasília - IDMIL - McGill University
Anésio Azevedo is a philosophy teacher at the Instituto Federal de São Paulo (IFSP) and a Ph.D. candidate at the Universidade de Brasília (UnB), Brasília, Brazil. Under the name "stellatum_," Anésio explores sounds, either recorded in the Cerrado (a Brazilian biome) or produced by himself. Combining different sonic characteristics into ambiances, Anésio aims to expand one's perception of Nature by offering cues for perceiving its own length of time.
Cerrado—Applying spatialization techniques to expanded perceptive fields
As an art researcher, I start from the assumption that each specific environment has a kind of listening that becomes fundamental to what we perceive in that environment. The aim of this research presentation is to show how I enhance the sensation of immersiveness by modifying the distribution, trajectories, and intensity of sound sources along three spatial axes, and how this has helped me develop an expanded ambiance in which people can experience some of the complexity of Nature that underlies what we ordinarily perceive. The artistic idea draws not only on technical research but also on the continuous process of gathering audiovisual data as building blocks for my performances.

Aurélien Antoine, Philippe Depalle, Stephen McAdams (McGill University)

Aurélien Antoine is a post-doctoral fellow at McGill University working with Stephen McAdams and Philippe Depalle. His current research focuses on modeling of orchestration effects and techniques from machine-readable symbolic score information and audio signals through data mining and machine learning techniques. This work benefits from the resources available in the Orchard database and aims to expand it. Its outcomes will also contribute to the understanding and use of the different orchestration effects and techniques.
Harnessing the Computational Modelling of the Perception of Orchestral Effects for Computer-Aided Orchestration Tools (EN)
Recent developments in the field of computer-aided orchestration have provided interesting approaches for addressing some of the many orchestration challenges, supported by advances in computational capacities and artificial intelligence methods. Nevertheless, harnessing the many sides of this musical art, which involves combining the acoustic properties of a large ensemble of varied instruments, has not yet been achieved. One interesting aspect to investigate is the perceptual effects shaped by the instrument combinations, such as blend, segregation, and orchestral contrasts, to name but three. These effects result from three auditory grouping processes, namely concurrent, sequential, and segmental grouping. Therefore, research in Auditory Scene Analysis (ASA) could be utilised to establish computational models that process symbolic musical score and audio signal information to identify specific orchestral effects. Our work in this area could help to understand and identify the different musical properties and techniques involved in achieving these effects that are appreciated by composers. These developments could benefit systems designed to perform orchestration analysis from machine-readable musical scores. Moreover, grasping the different parameters responsible for the perception of orchestral effects could be incorporated into computer-aided orchestration tools designed to search for the optimal instrument combinations by adding perceptual characteristics to their search methods.

Denis Beuret 
Denis Beuret is a Swiss composer, trombonist, video artist, computer developer, improviser, and cultural mediator. He studied percussion, trombone, computer music, conducting, and orchestration, as well as cultural mediation. He specializes in sound research: extended playing techniques of the bass trombone and the integration of electronics in concerts. He has developed an augmented bass trombone, equipped with various sensors that allow him to control musical programs according to his movements and playing. He has presented his work on several occasions at IRCAM, as well as at ImproTech Paris - New York 2012.
Ensemble virtuel, a program that grooves
This program generates melodies, bass, drums, and appropriate chords in real time and plays them all in rhythm. It analyzes the pitches and dynamics of four audio sources (microphones or files), adjusts to the speed of a sound source, and meters its musical output in real time according to the rhythms or rhythmic subdivisions selected. As for orchestration, it is possible to choose whatever sounds you want, since the program analyzes audio or MIDI and generates MIDI. For this presentation and demo, Denis Beuret will play the trombone and use a pedalboard that allows him to control audio effects in real time.
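To make the signal flow concrete, here is a minimal sketch, in Python rather than the Max environment such tools are usually built in, of the analysis-to-MIDI step the description implies: estimate the pitch and level of an audio buffer and quantize them to a MIDI note and velocity. All names and constants are illustrative, not taken from Beuret's program.

```python
# Hypothetical sketch (not Denis Beuret's code): map the pitch and dynamics
# of one audio buffer to a MIDI note-on pair (note number, velocity).
import numpy as np

SR = 44100  # sample rate in Hz (assumption)

def estimate_pitch(buf, sr=SR, fmin=60.0, fmax=1000.0):
    """Crude autocorrelation pitch estimate of a mono buffer, in Hz."""
    buf = buf - buf.mean()
    ac = np.correlate(buf, buf, mode="full")[len(buf) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def to_midi(freq, rms):
    """Quantize a frequency to the nearest MIDI note; map RMS to velocity."""
    note = int(round(69 + 12 * np.log2(freq / 440.0)))
    velocity = int(np.clip(rms * 400, 1, 127))
    return note, velocity

# A 440 Hz test tone should come out as MIDI note 69 (A4).
t = np.arange(2048) / SR
buf = 0.2 * np.sin(2 * np.pi * 440 * t)
note, vel = to_midi(estimate_pitch(buf), np.sqrt(np.mean(buf ** 2)))
print(note, vel)  # -> 69, 56
```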
 

Monica Bolles

Monica Bolles has been working with spatial audio since 2011, when she first gained access to her local planetarium's 15.1-channel surround system. Since then she has been continuously building toolsets in Max/MSP to create large, textured soundscapes that explore space, movement, and interaction. Tapping into her roots in traditional audio engineering, she works with composers and live performers to explore methods of translating their work to spatial environments while exploring the role the audio engineer plays as a performer and musician. As an artist, she has been focusing on building custom instruments that explore data sonification and use gestural control to create improvised spatial audio experiences. As a producer, she puts together teams to build large immersive works that bring together live performance, dance, 360-projections, spatial audio, and other new technologies.
Orbits: An exploration in spatial audio and sonification
Orbits is a custom designed instrument for spatial music live performance that utilizes data relating to the rotational patterns of Venus and Earth around the Sun. This piece has been interpreted in spatial audio arrays ranging from 8 channels to 140 channels and is performed through the use of an iPad and Mi.Mu gloves. In this presentation, Monica Bolles will provide a short demo of the instrument and discuss her approaches to sonifying data for multichannel speaker arrays, her use of gestural control for live performance, and the challenges and solutions she has developed for touring live spatial music.

Linda Bouchard / Resident at Matralab - Concordia University
Born in Val d'Or, Québec, Linda lived in New York City from 1979 to 1991, where she was active as a composer, orchestrator, conductor, teacher, and producer. She was composer-in-residence with the National Arts Centre Orchestra (1992-1995) and has been living in San Francisco since 1997. Her works have received awards in the US and Canada, including a Prix Opus Composer of the Year award in Quebec, the Fromm Music Foundation Award, the Princeton Composition Contest, SOCAN composition awards, and residencies from the Rockefeller Foundation, Civitella Ranieri, the Camargo Foundation, and others. Bouchard's music has been recorded on ECM (Germany), CRI (USA), and Marquis Classics (Canada). Since 2010, Linda has been creating multimedia works that have been performed to critical acclaim in North America. In 2017, she received a multiyear grant from the Canada Council for the Arts to develop tools to interpret data into musical parameters.
Live Structures (Structures vivantes)
Live Structures is a research and composition project that explores different ways to interpret data into graphic notation and compositions. The Live Structures project started in October 2017, supported by an Explore and Create Grant from the Canada Council for the Arts received by Bouchard. One of the goals of the Live Structures project is to interpret data from the analysis of complex sounds into a visual musical notation. The tool, developed in collaboration with Joseph Browne of matralab at Concordia University, is called Ocular Scores™. So far, three iterations of the Ocular Scores Tool have been created, each performing multiple functions: a) the ability to draw an image from the analysis of complex sounds that can be used as gestural elements to compose new works or to compare a complex sound against another complex sound, b) the ability to draw full transcriptions of a performance for future interpretation, and c) the ability to draw images in real time and to manipulate those images to create interactive projected scores to be performed live by multiple performers. These various applications and how they can inspire composers and performers will be demonstrated with a live performer (tbd).

Jeffrey Boyd / University of Calgary

Jeffrey Boyd is a professor of computer science at the University of Calgary. His interests include computational musicology, sonification, interactive art, and video and sensing applied to human movement. Friedemann Sallis is a professor emeritus in the Division of Music at the University of Calgary. Martin Ritter holds a DMA from the University of British Columbia and is a PhD candidate in Computational Media Design at the University of Calgary.
The hallucinogenic belfry: analyzing the first forty measures of Keith Hamel's Touch for piano and interactive electronics (2012)
Computational musicology is emerging out of the necessity of finding methods to deal with music (both art and popular) that escapes conventional Western notation. To better understand this music, we use computational methods to decompose recordings of performances of contemporary music into digital musical objects. In this paper, we examine the first 40 measures of Touch for Piano and Interactive Electronics (Hamel 2012). Hamel uses the spectra of bells as a basis for a score that mimics the timbre of bells, and combines it with the piano's timbre, electronic processing, and spatial rendering with eight speakers surrounding the audience. For musical objects, we elect to use the over 200 bell samples used in the electronic portion of the piece. An exhaustive computer search for occurrences of each bell sample over four recordings of performances by the same pianist (Megumi Masaki) in two venues over 100 directions (sampled with an ambisonic microphone) yields a database of thousands of bell event detections. Our search produced 1) numerous objects (samples) explicitly coded into the electronic 'score' (not a surprise); 2) a surprisingly large number of objects not explicitly coded. The latter group corresponds to pitches in higher registers, labelled "brass bell dry" or "glock" in the electronic score. Inspection of our code, and verification by listening, confirm that these objects are not produced from the bell samples in the electronic source. On the contrary, they are the product of piano pitches carefully harmonized in real time to gradually bring them closer to the bell samples in the course of the forty-measure segment. By disseminating these sounds in the concert space, the composer invites the audience to gradually enter his hallucinogenic belfry, where the musical work takes place.
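The abstract describes an exhaustive search for bell-sample occurrences without spelling out the implementation; one standard way to do such a search is normalized cross-correlation between each sample and the recording. The sketch below is a generic illustration under that assumption, not the authors' code.

```python
# Generic template-matching sketch (not the authors' implementation):
# locate occurrences of a short bell sample inside a longer recording.
import numpy as np
from scipy.signal import correlate

def detect_sample(recording, sample, threshold=0.6):
    """Return sample offsets where `sample` plausibly occurs in `recording`."""
    tmpl = (sample - sample.mean()) / (sample.std() + 1e-12)
    corr = correlate(recording, tmpl, mode="valid")
    # Normalize by the sliding signal energy so the score is roughly -1..1.
    win = np.ones(len(tmpl))
    energy = np.sqrt(correlate(recording ** 2, win, mode="valid")) + 1e-12
    score = corr / (energy * np.sqrt(len(tmpl)))
    hits = np.flatnonzero(score > threshold)
    # Keep only local maxima so one bell event is reported once.
    return [h for h in hits
            if score[h] == score[max(0, h - 1000):h + 1000].max()]
```

Looped over four recordings, 100 microphone directions, and 200+ samples, a detector of this general kind would yield the sort of event database the authors describe.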

Amy Brandon / Dalhousie University
Canadian composer and guitarist Amy Brandon's pieces have been described as '... mesmerizing' (Musicworks Magazine) and ‘Otherworldly and meditative ... [a] clashing of bleakness with beauty …’ (Minor Seventh). Upcoming 2019-20 events include premieres by KIRKOS Ensemble (Ireland), Exponential Ensemble (NYC) as well as performances and installations at the Winnipeg New Music Festival, the Canadian Music Centre and the centre d’experimentation musicale in Quebec. She has received Canadian and international composition awards including the Leo Brouwer Guitar Composition Competition (Grand Prize) and is currently completing an interdisciplinary PhD at Dalhousie University in Halifax, Nova Scotia.
Composing for AR space: creating interactive spatial scores for the METAVision headset
In the last several years, the compositional and performance possibilities within VR and AR environments have grown, with composers such as Paola Prestini and Giovanni Santini, among others, writing works for various VR, AR, and 360 technologies. My own compositions from 2017-19 have focused on the particular affordances of the METAVision AR headset and inhabit the intersection of graphic score, controller, and improvisational movement. The primary goal of these works (Hidden Motive, 7 Malaguena Fragments for Augmented Guitar, flesh projektor) is the discovery and manipulation by the improviser (or musician) of the affordances of the AR space, especially its reactivity to hand gestures. The works explore how individual bodies interact within a performative augmented-reality environment, in particular how those interactive spaces can meld with 'real-world' objects such as instruments. In this demonstration I will show previous works for the METAVision AR headset as well as demonstrate a multi-channel work currently in development.

Taylor Brook / Columbia University
Taylor Brook is a Canadian composer who has been based in New York since 2011. Brook writes music for the concert stage and for electronic media, as well as music for video, theatre, and dance. Described as “gripping” and “engrossing” by the New York Times, Brook's compositions have been performed around the world by ensembles and soloists such as Anssi Karttunen, Mira Benjamin, Ensemble Ascolta, JACK Quartet, Mivos Quartet, Nouvel Ensemble Moderne, Quatuor Bozzini, Talea Ensemble, and others. His music is often concerned with finely tuned microtonal sonorities and exploring the perceptual qualities of sound. In 2018 Brook completed a Doctor of Musical Arts (DMA) in music composition at Columbia University. He holds a master's degree in music composition from McGill University. Currently, Brook is a Core Lecturer at Columbia University and the technical director of TAK Ensemble.
Human Agency and Meaning of Computer-Generated Music in Virtutes Occultae
This paper will explore concepts around compositional control that arise from computer-generated music and computer improvisation. Drawing from an analysis of my electroacoustic composition, Virtutes Occultae, I will explore the implications of computer improvisation on the role of the composer, how value is attributed to experimental art, and the broader relationship to data and automation in society at large.
In creating the software to generate music for Virtutes Occultae, I was confronted with decisions regarding the degree of control or chaos I would infuse into the improvising algorithm. The amount of randomization and weighted probability integrated into the software set the levels of unpredictability; the unpredictability of the computer improvisation became artistically stimulating, even leading me to imitate the computer improviser in the more traditionally through-composed sections.
Recent commercial ventures (AIVA, Jukedeck, Melodrive, etc.) boast algorithms that generate commercial jingles and soundtrack music automatically: choose a mood and a style to create an original piece of music with the click of a button. The AIVA engine promotes an uncanny function where one may select an existing piece of music, say a Chopin Nocturne, and move a slider between “similar” and “vaguely similar” to create a derivative work. What does this method of creating music mean for how we value music? While this software creates music for commercial purposes, I have employed similar techniques in non-commercial art in Virtutes Occultae and other works. Unpacking the ramifications of what computer-generated music means for the role of an artist and their relation to their art is a complex and multifarious subject that must be considered.

Michael L. Century / Rensselaer Polytechnic Institute
Michael Century is Professor of New Media and Music in the Arts Department at Rensselaer Polytechnic Institute in Troy, N.Y., which he joined in 2002. Musically at home in classical, contemporary, and improvisational settings, Century holds degrees in musicology from the University of Toronto and the University of California, Berkeley. Long associated with The Banff Centre for the Arts, he directed the Centre's inter-arts program from 1979 to 1988 and founded its Media Arts program in 1988. Before RPI, he was a new media researcher, inter-arts producer, and arts policy maker (Government of Canada, Canadian Heritage and Department of Industry, 1993-98). His works for live and electronically processed instruments have been performed and broadcast in festivals internationally.
Performance-demonstration of Pauline Oliveros’s Expanded Instrument System for HoA using Spat
Over the arc of her career as a composer and performer, Pauline Oliveros (1932-2016) maintained an abiding interest in expanding the aperture of temporal experience, and often referred to her own Expanded Instrument System (EIS) as a "time machine": a device to permit present, past, and future to occur, in her own words, "simultaneously with transformations". With permission from the Pauline Oliveros Trust, I am continuing to develop my own live-electronic music using accordion within a HoA system using Spat, and I propose a demonstration of it for the Forum. The demonstration will involve my own musical performance and the programming collaboration of Matthew D. Gantt. The EIS played a significant role not only in Oliveros's recorded oeuvre but also in the broader history of live electronic music. My research into the history of the system would also permit me to give a short overview of Oliveros as an artist and of the way her system for manipulating and modulating improvisatory music with time delays developed over half a century. This development began with classic works for tape, followed successively by the manual and foot-controlled outboard delay machines of the 1980s, MIDI-controlled Max patches (early 1990s), a full digital transcription in Max/MSP (2002), and finally various modules for basic spatialization. The demonstration proposed here uses EIS in conjunction with Spat and provides a significant development in the power of the system, in both aesthetic and technical aspects.
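The heart of the EIS is long time delays whose outputs are transformed and fed back into the present. As a point of reference only, the underlying idea can be sketched as a bare-bones feedback delay line; the real system is a large Max/MSP patch with many interacting modules.

```python
# Minimal feedback-delay sketch of the EIS idea (not the EIS patch itself):
# past material re-enters the present, transformed here only by attenuation.
import numpy as np

def feedback_delay(x, sr, delay_s=3.0, feedback=0.6, mix=0.5):
    """y[n] = x[n] + mix * d[n], where the delay buffer d holds the
    input from delay_s seconds ago plus its own attenuated past."""
    d = int(delay_s * sr)
    y = np.zeros(len(x))
    buf = np.zeros(len(x) + d)
    for n in range(len(x)):
        delayed = buf[n]                        # signal from delay_s ago
        y[n] = x[n] + mix * delayed
        buf[n + d] = x[n] + feedback * delayed  # write input plus feedback
    return y
```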

Christopher Chandler / Union College
Christopher Chandler is a composer, sound artist, and the co-founder and executive director of the [Switch~ Ensemble]. He serves as Assistant Professor of Music at Union College in Schenectady, NY, where he teaches courses in music theory, composition, and technology. His acoustic and electroacoustic work draws on field recordings, found sound objects, and custom generative software. His music has been performed across the United States, Canada, and France by leading ensembles including Eighth Blackbird, the American Wild Ensemble, the Oberlin Contemporary Music Ensemble, the Cleveland Chamber Symphony, and Le Nouvel Ensemble Moderne. He has received recognition and awards for his music including a BMI Student Composer Award, an ASCAP/SEAMUS Commission, two first prizes in the Austin Peay State University Young Composer's Award, the American Modern Ensemble's Annual Composition Competition, and the Nadia Boulanger Composition Prize from the American Conservatory in Fontainebleau, France. Christopher received a Ph.D. in composition from the Eastman School of Music, an M.M. in composition from Bowling Green State University, and a B.A. in composition and theory from the University of Richmond.
The Generative Sound File Player: A Corpus-Based Approach to Algorithmic Music
The Generative Sound File Player is a composition and performance tool for algorithmically organizing sound. Built in Max and incorporating the bach library and MuBu, the software allows the user to load, analyze, and parametrically control the presentation of any number of sound files. At its core is bach's new bell (bach evaluation language on lllls) scripting language, which allows for rich and powerful control of sound through text-based input or a graphical interface. The software sits at the intersection of generative music, concatenative synthesis, and interactive electronics. For the IRCAM Forum Workshop, I propose giving a demonstration of its core functionality, creative applications, and recently developed spatialization capabilities.
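The corpus-based logic involved (analyze a pool of files, then select among them according to a controllable parameter) can be suggested with a small sketch. This is a Python analogy under stated assumptions (a folder of WAV files, spectral centroid as the feature), not the actual Max/bach implementation.

```python
# Toy corpus-based selector (a Python analogy to the idea, not the Max tool):
# weight sound files by how close their spectral centroid is to a target.
import numpy as np
from scipy.io import wavfile

def spectral_centroid(path):
    sr, x = wavfile.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                    # mix to mono
    mag = np.abs(np.fft.rfft(x.astype(float)))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return (freqs * mag).sum() / (mag.sum() + 1e-12)

def generate_sequence(paths, target_hz, length=16, sharpness=0.01):
    """Draw a playback sequence favouring files whose centroid is close to
    target_hz -- a crude parametric 'brightness' control over the corpus."""
    centroids = np.array([spectral_centroid(p) for p in paths])
    weights = np.exp(-sharpness * np.abs(centroids - target_hz))
    rng = np.random.default_rng()
    return list(rng.choice(paths, size=length, p=weights / weights.sum()))
```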

Carlos Delgado

Carlos Delgado's music has been heard in concerts, festivals, and radio broadcasts in England, Finland, France, Germany, Hungary, Italy, Japan, Romania, Spain, and the United States. As a composer specialized in electroacoustic chamber music and multimedia, his works have been presented at venues such as Merkin Recital Hall in New York; the Rencontre Internationale de Science & Cinema (RISC) in Marseille, France; and St. Giles Cripplegate/Barbican in London. He has participated in festivals including EMUFest (Rome), the ManiFeste 2015 Académie (IRCAM, Paris), and the 2018 New York City Electroacoustic Music Festival, and has appeared as a laptop performer at Symphony Space and the Abrons Art Center (New York), the Musica Senza Frontiere Festival in Perugia, and many others. His works are available on the New World Records, Living Artist, Capstone Records, and Sonoton ProViva labels. He holds a Ph.D. in music composition from New York University.
Multidimensional Movement: Gestural Control of Spatialization in Live Performance
Lev is a gestural control software instrument I have developed that allows for the spontaneous control of sound, video, and spatialization in live performance. Lev takes video data from a laptop computer's built-in camera and splits it into three matrices, arranged in the form of a doorway or gate along the outer edges of its field of view. Each of these matrices defines a separate play zone which a live performer uses to generate data that control the instrument's audio and video output. The play zone for the performer's right hand provides control of pitch and duration, while the left hand's zone can be mapped to control various parameters such as amplitude, pitch-bend, filtering, modulation, etc. Bridging the two is the third matrix, located along the upper edge of the camera's field of view, which defines a control zone for spatialization (panning and reverberation). The gestural data generated by all three play zones can simultaneously be mapped to control video-processing parameters such as chromakeying, brightness, contrast, saturation, etc. The result is a multidimensional performance that amplifies a live performer's spontaneously produced gestures by expanding their reach into the domains of sound, spatial location, movement, and video. The program was written in Max/MSP and is named after Lev Termen, inventor of the theremin.
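For readers who want the gist in code, the zone logic could be approximated as below. This is a hypothetical Python/OpenCV reconstruction; Lev itself is a Max/MSP program, and the zone widths and mappings here are invented.

```python
# Hypothetical reconstruction of Lev's zone logic in Python/OpenCV
# (the actual instrument is written in Max/MSP; values are invented).
import cv2
import numpy as np

ZONE = 80  # width in pixels of the left/right/top control zones (assumption)

def motion(prev, curr):
    """Mean absolute frame difference in a zone, scaled to 0..1."""
    return float(np.mean(cv2.absdiff(prev, curr))) / 255.0

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    right = motion(prev[:, w - ZONE:], gray[:, w - ZONE:])  # pitch / duration
    left = motion(prev[:, :ZONE], gray[:, :ZONE])           # amplitude, etc.
    top = motion(prev[:ZONE, :], gray[:ZONE, :])            # spatialization
    print(f"right={right:.3f} left={left:.3f} top={top:.3f}")
    prev = gray
```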

Catherine Guastavino, François-Xavier Féron, Cédric Camier

François-Xavier Féron is a researcher at the French National Centre for Scientific Research (CNRS). He is affiliated with the Science and Technology of Music and Sound laboratory (STMS - CNRS, Ircam, Sorbonne Université) and is a member of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT).
Cédric Camier is an electroacoustic composer and research engineer in acoustics affiliated with Saint-Gobain Recherche. He is also a member of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT).
Catherine Guastavino is an associate professor (William Dawson Scholar, School of Information Studies, McGill University) and a member of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT).
The sound centrifuge: spatial effects induced by circular trajectories at high velocity
“If revolutions of sound in space go beyond a certain barrier of revolutions per second, they become something else,” explained Karlheinz Stockhausen during a conversation with Jonathan Cott in 1971. Later, the Portuguese composer Emmanuel Nunes, in close collaboration with computer music designer Eric Daubresse, experimented at Ircam with sounds moving at very high velocities and observed new perceptual effects. These musical experimentations inspired us to initiate a line of research on the perception of spatial figures back in 2009. Through a series of controlled scientific experiments conducted at CIRMMT, we were able to document the perceptual mechanisms at play in tracking moving sounds. We estimated perceptual thresholds for auditory motion perception and velocity perception and investigated the influence of reverberation, spatialization techniques, and loudspeaker configurations. Throughout this process, new tools based on a hybrid spatialization method, combining numerical propagation and angle-based amplitude panning, were developed to move sounds around the listener at very high velocities. At such velocities, the revolution frequency is of the same order of magnitude as audible frequencies. New effects were obtained by manipulating the listener position, the direction of revolution, the velocity, and the nature of the sound material using our custom-built “sound centrifuge” developed in Max/MSP. They include spatial ambiguities, Doppler pitch-shifting and amplitude modulation induced by apparent velocity variation, timbre enrichment, spatial beating (an amplitude-modulation pattern that is a function of the revolution frequency and the sound's fundamental frequency), and a spatial wagon-wheel effect, all dependent on the listening position. These effects were first used for creative purposes in two multichannel electroacoustic pieces by composer Cédric Camier created in 2017 and 2019. In this demonstration, the effects will be presented parametrically as core elements of spatial studies based on velocity.
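As a rough back-of-envelope illustration of why such velocities matter, the sketch below computes the instantaneous Doppler ratio for a source revolving around an off-centre listener; all values are invented, not those of the authors' experiments. When the revolution frequency itself approaches audible rates, this periodic pitch and amplitude modulation is heard as sidebands, that is, as a change of timbre.

```python
# Illustrative Doppler computation for a circularly moving source
# (invented parameters; not the authors' experimental setup).
import numpy as np

C = 343.0                          # speed of sound, m/s
R = 2.0                            # trajectory radius, m
F_REV = 5.0                        # revolutions per second
LISTENER = np.array([1.0, 0.0])    # off-centre listening position, m

t = np.linspace(0, 1, 48000)
theta = 2 * np.pi * F_REV * t
src = np.stack([R * np.cos(theta), R * np.sin(theta)], axis=1)
vel = np.gradient(src, t, axis=0)             # source velocity vectors
r = LISTENER - src
v_radial = np.sum(vel * r, axis=1) / np.linalg.norm(r, axis=1)
ratio = C / (C - v_radial)                    # instantaneous pitch ratio

print(f"tangential speed {2 * np.pi * F_REV * R:.1f} m/s, "
      f"pitch ratio {ratio.min():.3f}..{ratio.max():.3f}")
```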

Sophie Dupuis and Émilie Fortin

Sophie Dupuis is a composer from New Brunswick interested in interdisciplinary art music and music for small and large ensembles. She is recognized for her impressive technique and endless imagination. She finds her voice in her childhood spent in the picturesque scenery of the Maritimes and, conversely, in her attraction to raw, electrical, and harsh sounds. Émilie Fortin is an adventurous musician and teacher who explores every possible facet of the trumpet. A versatile performer, she freelances for several ensembles and orchestras. She has contributed to the creation of over fifteen works internationally with various composers in an effort to enrich the repertoire of her instrument. She is also the artistic director of Bakarlari, a collective of soloists. Their collaboration exists to push the limits of traditional musical language and create an immersive concert experience.
Reconceptualizing an interdisciplinary work: Known Territories 2.0
In October 2017, Sophie Dupuis composed the piece "Known Territories" for trumpet, tape, and two dancers. The work refers to memories and childhood: how the future is always linked to the past, and how the present is always charged with both dimensions. "Known Territories" was performed at Array Music in Toronto, where the composer was studying at the time. After much discussion, it became clear that the piece could not easily travel, given the scale of the original production. Dupuis and Fortin therefore decided to make a version for solo trumpet and tape, without taking anything away from the narrative framework of the original work; the tape consists of excerpts triggered at specific moments during the piece, giving Émilie the space to express herself freely through her playing. She will integrate several elements from dance, mime, and theatre into her performance. The composition represents memories of Sophie's childhood spent in a rural New Brunswick village, in an isolated house on a large lot that seemed to become magical at dusk, with its sparkling frog songs and fireflies. The melancholy she feels when these memories come back to her, as well as the comfort and discomfort they bring at the same time, is conveyed through the music. "Known Territories" goes off the beaten path of traditional musical performance, asking the performer to go beyond her physical limits while being supported by electronics that add another dimension to the work.

Nicola Giannini / Université de Montréal, CIRMMT

Nicola Giannini is a sound artist and electroacoustic music composer based in Montreal, Canada. His practice focuses on immersive music, both acousmatic and performed. He is interested in sounds that evoke physical materials. His works have been presented at the Toronto International Electroacoustic Symposium (TIES) and the Akousma Festival (Canada), Cube Fest, ICMC, and the New York City Electroacoustic Music Festival (USA), TEDxLondon, the Sound-Image Colloquium and the Serge Postgraduate Conference (UK), Sound Spaces (Sweden), File Festival (Brazil), Coloquio Internacional: Espacio-Inmersividad and the Electroacoustic Music International Exhibition (Mexico), and the Palazzo Strozzi Museum, Audior, and Tempo Reale (Italy), among others. In July 2017 he was a guest composer at EMS in Stockholm.
His piece “Eyes Draw Circles of Light” won first prize in the Jeux de temps / Times Play (JTTP) 2019 competition organized by the Canadian Electroacoustic Community, and an honorable mention at the XII Fundación Destellos Electroacoustic Competition 2019 (Argentina). His piece “For Hannah” was a finalist in the international composition competition Città di Udine 2018.
Originally from Italy, Nicola holds a master's degree in electroacoustic composition from the Conservatory of Florence, with honorable mention. Nicola is a doctoral student (D.Mus.) at the Université de Montréal under the supervision of Robert Normandeau and is presently a research assistant with the Groupe de recherche en immersion spatiale (GRIS). He is one of the student coordinators at CIRMMT for the research axis on expanded musical practice.
Eyes Draw Circles of Light, an acousmatic piece for a dome of speakers
Eyes Draw Circles of Light explores specific aspects of the human unconscious that characterize the brief moment when we are about to fall asleep. Through sound spatialization, the piece creates a multidimensional representation of the unconscious that evokes the relationship between psyche and body. It highlights the fast, involuntary body movements (hypnic jerks) that may occur at that moment. The work is a collaboration with the artists Elisabetta Porcinai and Alice Nardi, who wrote a poem for it, and aims to find a balance between elegance and experimentation. The text was interpreted by Porcinai. The work was composed in the immersive music studios at the Université de Montréal.

Matthew D. Gantt / Rensselaer Polytechnic Institute, Harvestworks

Matthew D. Gantt is an artist, composer, and educator based in Troy, NY. His practice focuses on (dis)embodiment in virtual spaces, procedural systems facilitated by idiosyncratic technology, and the recursive nature of digital production and consumption. He has presented or performed at a range of institutional and grassroots spaces including Panoply Performance Laboratory, Harvestworks, the New Museum, The Stone, and Issue Project Room, and internationally at IRCAM (Paris) and Koma Elektronik (Berlin), among others. He has been an artist-in-residence at Pioneer Works, Bard College, and Signal Culture, and is a current PhD candidate at Rensselaer Polytechnic Institute. Gantt releases music with Orange Milk and Oxtail Recordings, teaches experimental music and media across academic and DIY contexts, and worked as a studio assistant to electronics pioneer Morton Subotnick from 2016 to 2018.
Sound and Virtuality: Creative VR, Ambisonics and Expanded Composition
Virtual reality offers the contemporary composer a number of affordances beyond the creation of games, simulations, or simple 'audio-visual' music. This demonstration will showcase new approaches for applying techniques common to generative music, modular-style patching, and electronic composition to immersive environments via OSC bridging of Unity/VR, Max/MSP, and IRCAM's Spat~/Panoramix. Hands-on demonstrations of VR works-in-progress will show both 'composer-friendly' methodologies for working with real-time spatial sound and immersive media, and new conceptual frames for approaching contemporary VR, such as digital kinetic sculpture, immersive media as simultaneous site and score for performance, and VR as both sonic instrument and concert hall.
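The OSC bridge between a game engine and a Max/Spat patch can be as simple as a stream of position messages. Below is a minimal sketch of the sending side in Python, assuming a patch listening on localhost port 9000 and an address pattern such as /source/1/xyz; the actual port and addresses depend on the receiving patch.

```python
# Minimal OSC sender sketch (assumed address layout and port; the receiving
# Max/Spat patch defines the real ones). Requires the python-osc package.
import math
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)

# Stream a slow circular source trajectory, as a game engine might per frame.
for frame in range(600):
    t = frame / 60.0                                   # 60 fps
    x, y, z = 2 * math.cos(t), 2 * math.sin(t), 1.5
    client.send_message("/source/1/xyz", [x, y, z])
    time.sleep(1 / 60)
```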

Louis Goldford / Columbia University

Louis Goldford is a composer of acoustic and mixed music whose works are often inspired by transcription and psychoanalysis. He has collaborated with ensembles such as the Talea Ensemble, JACK Quartet, Yarn/Wire, Ensemble Dal Niente, the Meitar Ensemble, and Rage Thormbones. Recent projects include premieres with violinist Marco Fusi and with musicians of the Cité internationale des arts and the Conservatoire supérieur de Paris. Louis completed his Cursus at IRCAM in 2019. His works have been presented at music festivals across Europe and North America and at international conferences, and he has performed in Taiwan, Poland, and the United States. He is a Dean's Fellow at Columbia University in New York, where he studies with Georg Friedrich Haas, Zosha Di Castri, George Lewis, Brad Garton, and Fred Lerdahl. In workshops and individual lessons Louis has also worked with Brian Ferneyhough, Philippe Leroux, Yan Maresz, Chaya Czernowin, and others.
Assisted Orchestration, Spatialization, and Workflow in Two Recent Compositions
In this presentation concentrating on my latest two works from 2019, I will discuss my extended use of Orchidea, the latest assisted orchestration platform in the Orch* tools lineage, and spatialization using OpenMusic and the Ircam Spat~ package for Max.
These pieces include “Au-dessus du carrelage de givre” for tenor, electronics, and video, premiered at the Soirée du Cursus at the Ircam ManiFeste, 18 June 2019 at the Centquatre, Paris, as well as “Tell Me, How Is It That I Poisoned Your Soup?” for 12 players and live electronics, premiered by the Talea Ensemble in New York City, 31 March 2019 at the DiMenna Center for Classical Music. Score, sound, and video excerpts will be presented alongside examples from the software tools used to generate specific passages.
Because these tools generate many intermediate files, including Max and OpenMusic patches, orchestral analyses and resyntheses, and video work, it quickly becomes necessary to organize one's studio workflow deliberately. I will offer some solutions for coordinating and synchronizing project files across many computers using the Git version control system, with particular emphasis on how such tools may be harnessed by artists and creators.

Andrea Gozzi / SAGAS - University of Florence
Andrea Gozzi is a musician and musicologist. He graduated in music from the University of Paris 8 Vincennes-Saint-Denis and is a member of the team of Tempo Reale, Florence's centre for musical research, production, and pedagogy, founded by Luciano Berio. A PhD student at SAGAS (University of Florence), he is also a lecturer in sound design at the LABA Academy in Florence and a lecturer in rock history and sound design at DAMS (University of Florence). As a musician, he has worked with Italian and international artists, both live and in the studio. Having participated in events such as LIVE 8 in Rome in 2005, he has also played in France, England, Germany, and Canada. He has published books and essays on the history of rock and musical biographies in Italy and Canada.
Listen to the theatre! Exploring Florentine performative spaces
A music performance space constitutes the frame as well as the content of the listeners' experience. The acoustic environment forces continuous negotiations that differ according to a listener's role and position as composer, performer, or audience member. The aim of my research is to investigate the acoustics of a performative space, the Teatro del Maggio Musicale Fiorentino in Florence, following two complementary paths, both based on an interactive model. The first offers an impulse-response experience: the user can virtually explore the opera hall by choosing among binaural reproductions from 13 different listening positions. The second concerns the aural and visual perception of a performance of the romance "Una furtiva lagrima" from Donizetti's opera L'elisir d'amore. Through ambisonics, 360-degree video, and virtual reality, the user will experience this performance from three different positions in the theatre: on stage, in the orchestra pit, and in the audience seating.
Florian Grond and Wieslaw Woszczyk / McGill University
Florian Grond is an interaction designer working as a research associate in the Sound Recording Department of the Schulich School of Music at McGill University. His interdisciplinary research and design interests focus on the immersive use of sound, with several years of experience in sound recording with microphone arrays and reproduction with multi-speaker setups. He currently applies his 3D sound capture and mixing expertise, in collaboration with Randolph Jordan, to this year's Canada Pavilion at the Art Biennale in Venice. For many years he has also been active as an independent media artist, exhibiting his works in solo and group exhibitions at several venues in North America, Europe, and Asia. His artistic and research projects apply creative sonic practices to multimodal participatory design in the context of disability, the arts, and assistive technology. In recent years, he has started various collaborations with colleagues with disabilities from the local community, academia, and the arts, resulting in research output, artistic creations, and the curating of exhibitions.
Wieslaw Woszczyk is an internationally recognized audio researcher and educator with leading expertise in emerging technology trends in audio. Woszczyk holds the James McGill Professor Research Chair position and a full professorship at McGill University, and is the founding director of the Graduate Program in Sound Recording (1978), and founding director of the CIRMMT Centre for Interdisciplinary Research in Music Media and Technology, an inter-university, inter-faculty, interdisciplinary research center established at McGill University in 2001. An AES member since 1976, Woszczyk is a Fellow of the Audio Engineering Society (1996) and the former Chair of its Technical Council (1996-2005), Governor (twice, in 1991-1993 and 2008-2010) and President (2006-2007). He also served on the Review Board of the AES Journal. Woszczyk received the Board of Governors Award in 1991 and a group Citation Award in 2001 for “pioneering the technology enabling collaborative multichannel performance over the broadband Internet.”
Exploring the possibilities of navigating and presenting music performances with a 6DoF capture system
Capturing multiple sound fields in a music performance space is nowadays possible using several synced higher-order microphone arrays separated in space. Combined with post-processing software for interpolating between these points of capture, this new technology is known as 6DoF, as it affords three translational degrees of freedom across the scene in addition to the three rotational degrees of freedom already available at a single point. This offers new possibilities in post-production with regard to selecting from the capture and balancing the recorded acoustic scene. In the context of VR, 6DoF offers new possibilities for immersing the listener in original audio content that covers an extended space, e.g., orchestral ensembles. Listeners can explore the performance space with their own navigation strategies, using interactive tools that enable rotation and translation. Depending on the spatial resolution of the sound-field capture, a range of possibilities is available in sound balancing, including variations in perspective, separation of sources and ambience, and degree of sharpness and diffusion of the image in a 3D presentation. The authors will share their findings from the exploration of 6DoF captures using first-order and third-order Ambisonics microphone arrays at multiple locations in a performance space. The authors will also discuss various trade-offs that may be considered in further improving the quality and flexibility of capturing music via multiple sound fields. Consideration will also be given to reproduction in standard channel-based formats for 2D and 3D sound projection.
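In its crudest form, the interpolation step can be thought of as a position-dependent crossfade between arrays. The toy sketch below illustrates this for two first-order B-format captures; a real 6DoF renderer is considerably more sophisticated.

```python
# Toy position-dependent crossfade between two B-format captures
# (a drastic simplification of real 6DoF interpolation).
import numpy as np

def interpolate_bformat(sig_a, sig_b, pos_a, pos_b, listener):
    """sig_a, sig_b: (n_samples, 4) first-order B-format (W, X, Y, Z) arrays
    captured at positions pos_a and pos_b; weights favour the closer array."""
    d_a = np.linalg.norm(listener - pos_a)
    d_b = np.linalg.norm(listener - pos_b)
    w_a = d_b / (d_a + d_b + 1e-12)
    return w_a * sig_a + (1.0 - w_a) * sig_b
```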

Catherine Guastavino, Valérian Fraisse (McGill University, CIRMMT), Simone D’Ambrosio, Etienne Legast (Audiotopie) & Maryse Lavoie (Ville de Montréal)

Catherine Guastavino is an Associate Professor at McGill University in the School of Information Studies, a member of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), and an associate member of the McGill Schulich School of Music. Her research interests include auditory perception and cognition, soundscape, spatial hearing and spatial audio, music psychology, and multisensory perception. Valérian Fraisse received his M.Sc. in music technology (ATIAM) from IRCAM and was a graduate research trainee at McGill in the summer of 2019. Simone D'Ambrosio and Etienne Legast are composers from the art collective Audiotopie, specialized in sound art in public spaces. Maryse Lavoie is a noise control officer at the Ville de Montréal and a former member of CIRMMT. She received a Ph.D. in psychoacoustics from the Université de Montréal with a dissertation on classical guitar timbre.
Spatial sound art in public spaces
We present an interdisciplinary collaboration between McGill, Audiotopie, and the Plateau-Mont-Royal borough around the design and evaluation of spatial sound installations in Montreal. In the summer of 2019, the new public square Place des Fleurs-de-Macadam was equipped with a sound-level meter for acoustic monitoring and the Dryade set-up of 8 speakers developed by Audiotopie. The analysis of questionnaires previously collected on-site, together with acoustic monitoring, allowed us to identify the main activities conducted in and around the square throughout the day. These results informed the compositional process in terms of temporal evolution over the course of the day, so that the sound art would resonate with the desired ambiances. Two contrasting compositional strategies identified in the literature were deployed: an integrated static composition, meant to blend in with the ambient soundscape, and an oppositional spatial composition, meant to emerge and grab attention as users move through the space.
A total of 700 questionnaires, collected from passers-by before and during the installation, revealed that the public square was perceived as more pleasant, calmer, and less loud in the presence of sound art. These results highlight the potential to purposefully shape public spaces with sound art. The work was conducted as part of the Sounds in the City partnership, which brings together researchers, sound artists, city makers, and city users to look at urban sound from a novel, resource-oriented perspective and nourish creative solutions to make cities sound better.

Rob Hamilton / Rensselaer Polytechnic Institute

Composer and researcher Rob Hamilton explores the converging spaces between sound, music and interaction. His creative practice includes mixed and virtual-reality performance works built within fully rendered networked game environments, procedural music engines and mobile musical ecosystems. His research focuses on the cognitive implications of sonified musical gesture and motion and the role of perceived space in the creation and enjoyment of sound and music. Dr. Hamilton received his Ph.D. from Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA) and currently serves as Assistant Professor of Music and Media in the Department of Arts at Rensselaer Polytechnic Institute in Troy, NY. 
Virtual Instrument Design for the 21st Century
This talk will describe the design, development, and implementation of Coretet, a virtual reality musical instrument that explores the translation of performance gestures and mechanics from traditional bowed string instruments into an inherently non-physical implementation. Built using Unreal Engine 4 and Pure Data, Coretet offers musicians a flexible and articulate musical instrument to play, as well as a networked performance environment capable of supporting and presenting a traditional four-member string quartet. The talk discusses the technical implementation of Coretet and explores the musical and performative possibilities enabled by the translation of physical instrument design into virtual reality, realized in the composition Trois Machins de la Grâce Aimante (Coretet no. 1).

Lena Heng, Stephen McAdams (McGill University)

Lena is currently doing an interdisciplinary PhD in the Music Perception and Cognition Lab at McGill University, Canada, under the supervision of Prof. Stephen McAdams. Their research interests are in the areas of timbre perception, music hermeneutics, cognitive representation, and emotion perception in music. As a member of Ding Yi Music Company and an adjunct lecturer at the Nanyang Academy of Fine Arts, they are particularly keen on integrating their research interests with performance, music understanding, and listening. Their work on this aspect earned them the Research Alive award from the McGill Schulich School of Music in 2018/19.
Prior to their graduate studies, Lena obtained a B.Mus (Hons.) from the Nanyang Academy of Fine Arts under the tutelage of Mr Wong Sun Tat, and a B.Soc.Sci (Hons.) in Psychology from the National University of Singapore. In 2016, Lena was awarded the NAC - Graduate Arts Scholarship, as well as the McGill University Graduate Excellence Fellowship and McGill University Student Excellence Award for their studies at McGill University.
Conference presentations: SMPC 2019; ICTM 2016.
Book chapter: Negotiations within Singapore’s Cultural Sphere: Case Studies (in print).
Timbre’s function in perception of affective intents. Can it be learned?
Timbre has been identified by music perception scholars as an important component in the communication of affect in music. While its function as a carrier of perceptually useful information about sound source mechanics has been established, studies of whether and how it functions as a carrier of information for communicating affect in music are still in their infancy. If timbre functions as a carrier of affective content in music, how it is used may be learned differently across different musical traditions. The amount of information timbre carries across different parts of a phrase may also vary according to musical context.
To investigate this, three groups of listeners with different musical training (Chinese musicians, Western musicians, and nonmusicians, n = 30 per group) were recruited for a listening experiment. They were presented with a) phrases and measures, and b) individual notes, of recorded excerpts interpreted with a variety of affective intents by performers on Western and Chinese instruments.
These excerpts were analyzed to determine acoustic aspects that correlate with timbre characteristics. The analysis revealed consistent use of temporal, spectral, and spectrotemporal attributes in judging affective intent in music, suggesting purposeful use of these properties within the sounds by listeners. Comparison of listeners' perception across notes and longer segments also revealed differences in perception with increased musical context. How timbre is used for musical communication thus appears to differ across musical traditions. The importance of timbre also appears to vary with position within a musical phrase.
Erica Y. Huynh (McGill University), Joël Bensoam (IRCAM), Stephen McAdams (McGill University)
Erica Huynh is a Music Technology (Interdisciplinary) student at the Schulich School of Music of McGill University. Her dissertation will continue the research conducted during her Master's at the Music Perception and Cognition Lab, under the supervision of Prof. Stephen McAdams, and in collaboration with IRCAM's Joël Bensoam. Her research involves understanding how novel stimuli become incorporated into listeners' mental models of sounds and how action categories and object categories are shaped by those mental models.
Bowed plates and blown strings: Odd combinations of excitation methods and resonance structures impact perception
How do our mental models limit our perception of sounds in the physical world? We evaluated the perception of sounds synthesized by one of the most convincing methods: physical modeling. We used Modalys, a digital physical-modeling platform, to simulate interactions between two classes of mechanical properties of musical instruments: excitation methods and resonance structures. The excitation method sets into vibration the resonance structure, which acts as a filter that amplifies, suppresses, and radiates sound components. We simulated and paired three excitation methods (bowing, blowing, striking) and three resonance structures (string, air column, plate), forming nine excitation-resonator interactions. These interactions were either typical of acoustic musical instruments (e.g., bowed string) or atypical (e.g., blown plate). Listeners rated the extent to which the stimuli resembled bowing, blowing, and striking excitations (Experiment 1), or the extent to which they resembled string, air column, and plate resonators (Experiment 2). They assigned the highest resemblance ratings to (1) the excitations and (2) the resonators that actually produced the sound. These effects were strongest for stimuli representing typical excitation-resonator interactions; for atypical interactions, listeners confused different excitations or resonators for one another. We address how perceptual data can inform physical-modeling approaches, given that Modalys effectively conveyed the excitations and resonators of typical but not atypical interactions. Our findings emphasize that our mental models of how musical instruments are played are very specific, shaped by previous exposure and by the perceived mechanical plausibility of excitation-resonator interactions.

Jack Kelly (McGill University)

Jack Kelly is a Ph.D. candidate and researcher in the sound recording area of the Department of Music Research at McGill University. His research focuses on presence as a perceptual attribute in immersive music reproduction. Using a 22-channel convolution reverb engine, paired with a set of three-dimensional, anechoic recordings of acoustic instruments, his work aims to shed light on the interactions between room and sound source, in an effort to gain a deeper understanding of the factors that form the sensation of 'being there' in virtual musical experiences. Jack has been an active classical recording engineer since 2013. He received an M.Mus. in sound recording (Tonmeister program) from McGill (2016). His interest in the field grew from his background in composition and performance, developed during his time at Concordia University, where he received a BFA in Electroacoustic Studies (2012).

SPAT tools for presence research in immersive music reproduction
Immersive media has become more and more familiar to the average consumer. However, the techniques and technologies used to develop immersive media content are still in their infancy, and the need for tools that enable artists and engineers to create compelling 3D content is increasingly felt. One of the major challenges for content creators is to produce auditory experiences that give the listener a sense of 'being there' in the virtual environment. To shed some light on this question, investigations into the listener's sensation of presence in three-dimensional acoustic music reproduction are underway. Exploratory research has been conducted to highlight which aspects of the relationship between direct sound and reverberation have the greatest influence on the listener's sensation of presence. In order to conduct this research, a tool was required to create and manipulate immersive reverberation in real time. The development of this tool has been made possible by a collection of objects from IRCAM's SPAT toolkit, implemented in Max/MSP. This presentation will explore how SPAT5 / SPAT Revolution is being used for presence research in immersive music. Topics will include generating and spatializing early reflections, 22-channel impulse-response convolution, spatialization of channel-based microphone arrays, and binauralization.
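For orientation, the core DSP operation mentioned here, multichannel impulse-response convolution, reduces to one convolution per loudspeaker feed. The sketch below is an offline Python analogy under that assumption; the actual engine is built from Spat objects running in real time in Max.

```python
# Offline analogy to a multichannel convolution reverb: convolve one dry
# (anechoic) signal with one measured impulse response per loudspeaker.
import numpy as np
from scipy.signal import fftconvolve

def render_feeds(anechoic, irs):
    """anechoic: (n_samples,) dry mono recording.
    irs: (n_channels, ir_len) impulse responses, one per loudspeaker.
    Returns (n_samples + ir_len - 1, n_channels) loudspeaker feeds."""
    return np.stack([fftconvolve(anechoic, ir) for ir in irs], axis=1)

# Hypothetical usage with 22 measured IRs stored in an invented file:
# feeds = render_feeds(dry_violin, np.load("irs_22ch.npy"))
```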

Mantautas Krukauskas / Lithuanian Academy of Music and Theatre, Music Innovation Studies Centre

Mantautas Krukauskas (b. 1980) is a composer and sound artist, and a teacher at the Department of Composition of the Lithuanian Academy of Music and Theatre in Vilnius, where he is also a co-founder and, since 2016, Head of the Music Innovation Studies Centre, an academic lab for studies, art, and research with a focus on music technology, innovation in music and music education, interactive arts, immersive media, and interdisciplinarity. His compositions, including chamber music, sound art and other works, and music for theatre and dance productions, have been performed in Lithuania, Austria, Germany, France, Canada, the USA, and other countries. His professional profile also includes electronic music performance and work in the creative industries in music production and arrangement. Mantautas Krukauskas has been actively involved in a diverse field of activities, including the coordination and management of international artistic, research, and educational programmes. His interests comprise interdisciplinarity, creativity, music and media technologies, and the synergy of different aesthetic and cultural approaches.
Some conceptions for effective use of immersive sound techniques for music composition and orchestration
Immersive and spatial sound technologies are gaining more widespread use, especially in electroacoustic music composition. Although spatial audio is widely considered a relatively new field, its roots can be traced to much older times, including in orchestral music. The accessibility of new tools with intuitive interfaces has widened composers' possibilities for working with the analysis and modelling of spatialization without a steep learning curve or deep specialized knowledge. This has enabled us to shift our attention from technological challenges towards artistic ones. On one side, mostly in acousmatic music, a major part of the research concerns technical aspects rather than the content itself; on the other side, we have more abstract inquiries and insights into conceptions of space in music. Studies concerning spatialization techniques, as well as the application of discoveries in spatial audio to acoustic music and orchestration, are quite lacking. Existing know-how is mostly held by experts and shared interpersonally.
The author of this presentation has been working actively with spatialization since 2013, acquiring expertise in diverse techniques for adapting and mixing sonic material in 3D space. The scope of this work also includes exploring the use of space as a compositional parameter and working with music and sound in interdisciplinary contexts. This experience has led to the discovery of certain trends and directions for the artistically effective application of immersive sound techniques, both for spatial audio and for traditional composition and orchestration.
This presentation will define the most widespread artistic contexts for spatial sound application and describe spatialization techniques that lead to effective artistic impact. The author will also demonstrate some models and experiments in using relevant immersive-sound architectonic principles for acoustic music composition and orchestration.

Hanna Kim / University of Toronto
Hailed as “truly inspired” by Ludwig van Toronto, Hanna Kim encompasses a wide range of traditional, neo-romantic, minimalistic, and improvisational styles in her compositional work. She is the recipient of several awards, including the 2019 Lothar Klein Memorial Fellowship in Composition, the 2019 St. James Cathedral Composition Competition, the 2014 Miriam Silcox Scholarship, and the 2013 Joseph Dorfman Composition Competition (Germany). Ms. Kim has won numerous score calls and has been asked to compose new works for concert performances across a variety of styles. A native of South Korea, Ms. Kim is currently working toward her doctoral degree (DMA) at the University of Toronto, Canada. In addition to her passion for being a scholar of music, Kim is also an active church musician, currently serving as the Minister of Music at the Calvary Baptist Church in Toronto.
Timbre Saturation for Chamber Orchestra
My presentation will be about a new piece for chamber orchestra, written as the dissertation for my doctoral degree at the University of Toronto. The piece specifically considers economy attained through efficient use of the instruments, and it calls for fewer players than the standard orchestra. Typically, when it comes to scoring chord progressions for a large orchestra, two methods are common: one is doubling, and the other is harmonizing. My composition project begins by questioning whether those common practices cause sound overload. Furthermore, what if a composer eliminates any melodic/intervallic elements that could be considered ‘wasteful’, such as doubling? By seeking the most economical orchestration, the goal of this project is to suggest a way to optimize orchestral effects and colors with a limited number of instruments. My dissertation work will rely on the minimum number of instruments needed for a large-ensemble piece to be as effective as a full-sized orchestra, focusing particularly on the use of color (timbre). The piece requires twenty-eight players: 1st flute, 2nd flute (+ piccolo), English horn, 1st clarinet in Bb, 2nd clarinet in Bb, 3rd clarinet in Bb (+ bass clarinet), tenor saxophone, bassoon, 1st horn, 2nd horn, trumpet in C, 1st trombone, 2nd trombone, mandolin, piano (+ celesta), harp, 1st percussion, 2nd percussion, timpani, and strings (minimum 2-2-2-2-1, maximum 8-8-6-4-3). The piece will have three movements, with a performing duration of approximately fifteen minutes.

Dongryul Lee / University of Illinois at Urbana-Champaign

Dongryul Lee’s music is deeply oriented around acoustical phenomena and virtuosic classical performance practice. He seeks to write music that creates profound aural experiences with both dramaturgy and pathos. His compositions have been performed by ensembles such as Avanti!, Contemporanea, Jupiter, MIVOS, Callithumpian Consort, GMCL, S.E.M., Conference Ensemble, Paramirabo, and Illinois Modern Ensemble among others. He was awarded the third prize in the first Bartók World Competition (Budapest, 2018); the Presser Award for the performance of Unending Rose with Kairos quartett (Berlin, May 2020); the Special Prize Piero Pezzé in the Composition Competition Città di Udine (Italy, 2018); and the Second Prize in the Composition Competition GMCL (Portugal, 2017). His Parastrata has been performed in four cities in Europe and North America. Lee holds degrees in computer science and composition from Yonsei University and the Eastman School respectively, and is an ABD doctoral candidate and lecturer at the University of Illinois at Urbana-Champaign.
A Thousand Carillons: Acoustical Implementation of Bell Spectra Using the Finite Element Method and Its Compositional Realization
I will present an implementation of virtual bells based on the Finite Element Method (FEM) from engineering physics. The FEM has been widely used in engineering analysis and acoustics, especially for the creation and optimization of carillons. Beginning with a brief introduction to spectral music inspired by bell sounds, I will introduce the theoretical basis of the FEM and its application to isoparametric 2-D elements, including the Principle of Virtual Work and FE shape functions. The creation of 3-D virtual bell geometries with structural analysis, and their optimization process, will follow, with acoustical background on campanology. For my computational realizations, I follow the groundbreaking research of Schoof and Roozen-Kroon, which formed the basis on which the first prototype of the major-third bell was designed and cast. Just tuning and 72-TET within a 5-cent threshold will also be discussed in relation to the creation of spectral profiles of optimal bell tone-colors. The presentation will be accompanied by original SuperCollider examples and images of bell geometries created in COMSOL Multiphysics, to visualize and sonify the characteristics of bells and their sounds.
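Independent of the FEM pipeline itself, the flavour of sonifying a bell's spectral profile can be suggested with simple additive synthesis from partial ratios and decay times, sketched below in Python. The minor-third ratios, amplitudes, and decays are textbook approximations, not Lee's COMSOL results (his own examples use SuperCollider).

```python
import numpy as np

def bell_tone(f0, duration=6.0, sr=44100):
    """Additive synthesis of a bell-like tone from partial ratios.

    Ratios follow the classic minor-third church bell profile
    (hum, prime, tierce, quint, nominal); amplitudes and decay
    times are rough illustrative values, not measured data.
    """
    ratios = [0.5, 1.0, 1.2, 1.5, 2.0]
    amps   = [0.6, 1.0, 0.7, 0.4, 0.8]
    decays = [6.0, 3.5, 2.5, 1.5, 1.2]  # seconds to reach about -60 dB
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for r, a, d in zip(ratios, amps, decays):
        # 6.91 ~= ln(1000), so amplitude falls by 60 dB over d seconds.
        out += a * np.exp(-t * 6.91 / d) * np.sin(2 * np.pi * f0 * r * t)
    return out / np.max(np.abs(out))

tone = bell_tone(440.0)  # prime partial at A4
```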
Stephen McAdams, Meghan Goodchild, Alistair Russell, Beatrice Lopez, Kit Soden, Er Jun Li, Alfa Barri, Shi Tong Li, and Félix Baril (McGill University)
Stephen McAdams studied music composition and theory with Julia Hansen at De Anza College in California (1971-1973) before entering the realm of perceptual psychology (BSc in Psychology, McGill University, 1977; PhD in Hearing and Speech Sciences, Stanford University, 1984). He also studied electronic music with alcides lanza and Edgar Valcárcel at McGill University in 1975-1976. In 1986, he founded the Music Perception and Cognition team at the world-renowned music research centre IRCAM-Centre Pompidou in Paris. While there he organized the first Music and the Cognitive Sciences conference in 1988, which subsequently gave rise to the three international societies dedicated to music perception and cognition, as well as the International Conference on Music Perception and Cognition. He was Research Scientist and then Senior Research Scientist in the French Centre National de la Recherche Scientifique (CNRS) from 1989 to 2004. He took up residence at McGill University in 2004, where he is Professor and Canada Research Chair in Music Perception and Cognition. He directed the Centre for Interdisciplinary Research in Music, Media and Technology (CIRMMT) in the Schulich School of Music from 2004 to 2009. His research interests include multimodal scene analysis, musical timbre perception, sound source perception, and the cognitive and affective dynamics of musical listening. He is currently working on a psychological foundation for a theory of orchestration.
The Orchestration Analysis and Research Database (Orchard)
To provide a tool for researching the role of auditory grouping effects in orchestration practice, and thereby the role of timbre as a structuring force in music, a first-of-its-kind online database was created, and an analysis taxonomy and methodology were established. The taxonomy includes grouping processes of three kinds: concurrent, sequential, and segmental. These play a role in sonic blends that give rise to new timbres through perceptual fusion, voice separation on the basis of timbre, integration of multiple instrumental lines into surface textures, the formation of orchestral layers of varying prominence based on timbral salience, contrasts based on changes in instrumentation and register, and progressive orchestration in larger-scale gestures. The taxonomy served in the creation of the data model for the database. Scores of 85 orchestral movements from Haydn to Vaughan Williams were analyzed while listening to commercial recordings; experts in teams of two analyzed the scores in terms of the taxonomy individually and then compared their results before entering them into the database. A query builder allows the construction of hierarchical queries on different levels of the taxonomy and the specification of composer, piece, movement, and instrumentation. Query results display the annotated score at the appropriate page and provide a sound clip of the corresponding measures from the commercial recording used in the combined score/aural analysis. The database has proved useful in exploring the diversity of each orchestral effect and has the potential to be used for data mining and machine learning for knowledge discovery.
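To make the query builder's role concrete, here is a hypothetical Python sketch of the kind of hierarchical filter it constructs. The record fields and values are invented for illustration and are not Orchard's actual data model.

```python
# Hypothetical annotation records, loosely modelled on the taxonomy
# described above (concurrent / sequential / segmental groupings).
annotations = [
    {"effect": "blend", "group": "concurrent", "composer": "Haydn",
     "piece": "Symphony No. 104", "movement": 1,
     "instrumentation": {"flute", "oboe", "violin I"}},
    {"effect": "stratification", "group": "concurrent",
     "composer": "Vaughan Williams", "piece": "A London Symphony",
     "movement": 2, "instrumentation": {"horn", "viola", "cello"}},
]

def query(records, group=None, effect=None, composer=None, instruments=None):
    """Filter annotations hierarchically: taxonomy level first,
    then effect, then metadata such as composer and instrumentation."""
    for r in records:
        if group and r["group"] != group:
            continue
        if effect and r["effect"] != effect:
            continue
        if composer and r["composer"] != composer:
            continue
        if instruments and not instruments <= r["instrumentation"]:
            continue
        yield r

# All concurrent-grouping blends involving both flute and oboe:
hits = list(query(annotations, group="concurrent", effect="blend",
                  instruments={"flute", "oboe"}))
```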

Landon Morrison / Harvard University
Landon Morrison is a College Fellow at Harvard University, where he recently began teaching after completing his PhD in music theory at the Schulich School of Music of McGill University in Montreal. His dissertation, titled “Sounds, Signals, Signs: Transductive Currents in Post-Spectral Music at IRCAM,” examines the relationship between contemporary compositional practices, technological development, and psychoacoustics within the context of post-spectral music created at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM). More broadly, his research aims to draw music theory and media studies into an interdisciplinary dialogue that tracks the transductive flow of sounds within new media environments. Recent and forthcoming publications include analytically driven articles in Circuit: musiques contemporaines, Nuove Musiche, and Music Theory Online, as well as a chapter on the history of rhythm quantization to be published in the Oxford Handbook of Time in Music.
Computer-Assisted Orchestration, Format Theory, and Constructions of Timbre
In a multi-authored paper unveiling Jonathan Harvey’s Speakings (2008), Gilbert Nouno et al. document the results of a collaborative IRCAM project driven by the “artistic aim of making an orchestra speak through computer music processes” (2009). The ensuing music dramatizes this aim, depicting an audible program where the orchestra progresses through stages of baby-like “babbling,” adult “chatter,” and finally, “ritual language” in the form of a Tibetan mantra. Turning this narrative on its head, my paper takes Speakings as a point of departure for a genealogical analysis of computer-assisted orchestration techniques, showing how, in order to make an orchestra speak, it was first necessary to make software listen.
Through a close examination of archival documents and “e-sketches,” I follow Harvey’s transition from a hybrid software setup (Melodyne and a custom partial-tracking program) to the newly-developed Orchidée application (Carpentier and Bresson 2010), which notably uses music information retrieval (MIR) methods. I frame this shift in relation to format theory (Sterne 2012), showing how categories used to encode sound files with audio descriptors of low-level timbral attributes (spectral, temporal, and perceptual) are themselves contingent on a number of factors, including: a) sedimented layers of psychoacoustics research grounded in the metrics of timbre similarity (Wessel 1979); b) a delegation of this knowledge to software tools at IRCAM, which one finds already with the implementation of Terhardt’s algorithm for pitch salience in the IANA program (Terhardt 1982); and c) the wider network of institutional negotiations surrounding the establishment of standardized file formats like the MPEG-7 protocol (Peeters 2004).

Ben Neill (Ramapo College of New Jersey)

Composer/performer Ben Neill is the inventor of the mutantrumpet, a hybrid electro-acoustic instrument, and is widely recognized as a musical innovator through his recordings, performances and installations. Neill’s music blends influences from electronic, jazz, and minimalist music, blurring the lines between digital media and acoustic instrument performance.
Neill began developing the mutantrumpet in the early 1980s with synthesizer inventor Robert Moog. In 1992, while in residency at the STEIM research and development lab for new instruments in Amsterdam, he made a new, fully computer-interactive version. In 2008 he created a new instrument at STEIM, and he returned there in 2016-17 to design the latest version. Currently an Artist in Residence in the Nokia Bell Labs Experiments in Art and Technology program, Neill has recorded eleven CDs of his music on labels including Universal/Verve, Thirsty Ear, Astralwerks, and Six Degrees. ITSOFOMO, his major collaborative piece with visual artist David Wojnarowicz created in 1989, was recently performed at the Whitney Museum. Other performances include BAM Next Wave Festival, Big Ears Festival, Bing Concert Hall at Stanford University, Lincoln Center, Whitney Museum of American Art, Cité de la Musique Paris, Moogfest, Spoleto Festival, Umbria Jazz, Bang On A Can Festival, ICA London, Istanbul Jazz Festival, Vienna Jazz Festival, and the Edinburgh Festival. Neill has worked with many musical innovators including La Monte Young, John Cage, John Cale, Pauline Oliveros, Rhys Chatham, DJ Spooky, David Behrman, Mimi Goese, King Britt, and Nicolas Collins. He leads concerts of Young’s The Second Dream with an international brass ensemble.
Fantini Futuro
Fantini Futuro is a new audio-visual performance work for Ben Neill’s Mutantrumpet V4, countertenor, Baroque keyboards, and interactive video projections. It is being created for the 64-channel Antechamber at Nokia Bell Labs, where Neill is an Artist in Residence in the Experiments in Art and Technology program.
The piece is based on the music and life of early Baroque trumpeter/composer Girolamo Fantini, who was responsible for bringing the trumpet indoors from the hunt and the battlefield to the realm of art music. Fantini was a musical celebrity in his time and wrote one of the earliest collections of music for trumpet alone as well as with keyboards. He also pioneered the use of mutes to expand the dynamic range of the instrument for indoor use, along with numerous other playing techniques. Fantini Futuro remixes and collages Fantini’s material both compositionally and in real time during performance through live sampling. The work draws connections between the dynamic energy of early Baroque musical and architectural vocabularies and minimalist patterns and processes, reflecting on the improvisatory performance practices of Fantini’s time through a variety of interactive technologies.
The visual component consists primarily of architectural imagery from places where Fantini lived and performed. The animated architectural imagery is controlled live from Neill’s mutantrumpet, creating the sensation that the performers are situated in virtual worlds built from these historical elements.

Jason Noble (McGill University)

Jason Noble has been described as “a master at translating feeling and imparting emotion through music,” and his compositions have been called “a remarkable achievement, indeed brilliant, colourful, astounding, challenging.” His work seeks balance between innovation and accessibility, motivated by a belief that contemporary music can be genuinely progressive and communicative at the same time. A lifelong chorister and occasional conductor, Jason has composed for many of Canada’s finest choirs, including full mass settings for Pro Coro Canada (Edmonton) and Christ Church Cathedral Choir (Montreal, QC), and works for the Vancouver Chamber Choir (Vancouver, BC), Amabile Choirs (London, ON), Soundstreams (Toronto, ON), voces boreales (Montreal), Nova Voce (Halifax, NS), and Newman Sound (St. John’s, NL).
Jason’s compositions have been performed across Canada and in the USA, Argentina, Mexico, France, Belgium, the Netherlands, Germany, Denmark, and Italy, and featured in publications, recordings, and broadcasts. He has held numerous composition residencies, including Pro Coro Canada at the Banff Centre, the St. John’s International Sound Symposium, the Bathurst Chamber Music Festival, the Edge Island Festival for Choirs and Composers, the Newfoundland and Labrador Registered Music Teachers’ Association, and the Sudbury Symphony Orchestra.
Also an accomplished scholar, Jason is currently a postdoctoral researcher at McGill University, working on the ACTOR project (Analysis, Creation, and Teaching of Orchestration). His PhD at McGill was funded by the prestigious Vanier Scholarship (SSHRC). His research appears in Music Perception and Music Theory Online. He has presented at numerous national and international conferences and invited guest lectures.
A case study of the perceptual challenges and advantages of homogeneous orchestration: fantaisie harmonique (2019) for two guitar orchestras
“Orchestration” is defined by Stephen McAdams as “the choice, combination or juxtaposition of sounds to achieve a musical end.” This typically invokes a palette of heterogeneous sounds, especially those of the instruments of the symphony orchestra, but McAdams’ definition equally applies to homogeneous sound palettes encountered in ensembles of uniform composition, such as the guitar orchestra. Since orchestral effects predicated on perceptual difference—such as stratification and segmentation—often rely on timbral heterogeneity, orchestrating for homogeneous ensembles presents a different set of challenges where such effects are desirable. On the other hand, since orchestral effects predicated on perceptual similarity—such as blend and textural integration—are facilitated by timbral homogeneity, orchestrating for homogeneous ensembles may present a different set of advantages.
In this presentation, I will discuss my composition fantaisie harmonique (2019), for two guitar orchestras (one of classical guitars and another of electric guitars). Several musical devices are exploited which would be difficult or impossible to achieve with equal effectiveness in a more heterogeneous ensemble, including: (1) an elaborate tuning system combining elements of equal temperament and just intonation, in which each orchestra is divided into six different tuning groups, (2) an extensive hocketing section, (3) massed sonorities exploiting a variety of perceptual principles. The piece was conceived with spatial deployment of the instrumental groups in mind, and a forthcoming recording of the piece will use spatialization as a perceptual cue to distinguish groups, filling one of the traditional roles of timbral heterogeneity in more orthodox approaches to orchestration.
Robert Normandeau (Université de Montréal)
Robert Normandeau’s work as a composer is devoted mainly to acousmatic music, although he has also composed some mixed works. More specifically, his compositions employ aesthetic criteria whereby he creates a ‘cinema for the ear’ in which ‘meaning’ as well as ‘sound’ become the elements that shape his works. More recently he has composed a cycle of immersive multiphonic works for domes of loudspeakers. Alongside concert music, he composed incidental music, especially for the theatre, over a period of twenty years.
He also worked as an artistic director for over twenty years, notably for the concert series Clair de terre (Association pour la création et la recherche électroacoustiques du Québec, ACREQ) from 1989 to ’93 at the Planétarium de Montréal, and for Rien à voir and Akousma (Réseaux) from 1997 to 2006. He has been Professor of electroacoustic music composition at the Université de Montréal since 1999, after completing the first PhDMus in Electroacoustic Composition (1992) under Marcelle Deschênes and Francis Dhomont. He leads the Groupe de recherche immersion spatiale (Spatial Immersion Research Group, GRIS), which produces sound spatialisation software.
He received three Prix Opus from the Conseil québécois de la musique (CQM): two in 1999 — “Composer of the Year” and “Record of the Year — Contemporary Music” for Figures (IMED 0944) — and one in 2013 — “Record of the Year — Contemporary Music” for Palimpseste (IMED 12116). The Académie québécoise du théâtre (AQT) has given him two Masque Awards (“Best Music for Theatre”): one in 2002 for the play Malina and the second in 2005 for the play La cloche de verre, both directed by stage director Brigitte Haentjens. Robert Normandeau is an award winner of numerous international competitions, including the Golden Nica at Prix Ars Electronica, Linz (Austria, 1996).
ControlGRIS/SpatGRIS3: The spatialization tools developed at UdeM
Since 2008, the Spatial Immersion Research Group (GRIS) of the Faculty of Music of the Université de Montréal has been developing spatialization tools intended for immersive environments with multiple speakers. We first produced plugins designed for integration into the most popular digital audio workstations, and subsequently published a standalone spatialization application, initially intended for speaker domes and, in its second version, for all types of speaker setups, such as those found in acousmoniums, art galleries, or concert halls. We are now at version 3 of SpatGRIS, whose main improvements are the removal of the dependency on JackRouter, a HAL plug-in that has been discontinued and is incompatible with macOS 10.15 (Catalina), and greater flexibility for the user in terms of inputs/outputs. We have also started a close collaboration with the developer of BlackHole to provide users with a 128-channel working environment. This talk will present the main characteristics of these tools.
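As a minimal sketch of how such a tool can be driven remotely, the Python snippet below sweeps one source around the listener over OSC using the python-osc package. The port number and address pattern are placeholders, not the documented ControlGRIS/SpatGRIS protocol; consult the GRIS documentation for the real message format.

```python
import math
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 18032)  # assumed local OSC port

# Sweep source 1 around the listener on a horizontal circle over ~6 s.
for i in range(360):
    az = math.radians(i)
    x, y, z = math.cos(az), math.sin(az), 0.0
    client.send_message("/source/1/xyz", [x, y, z])  # hypothetical address
    time.sleep(1 / 60)
```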
Ofer Pelz and Matan Gover (CIRMMT)
Ofer Pelz composes music for diverse combinations of instruments and electroacoustic media, and he is also an active improviser. He is the co-founder, with Preston Beebe, of the Whim Ensemble. He studied composition, music theory, and music technology in Jerusalem, Paris, and Montreal. His work has been recognized with many international prizes, including two ACUM awards and an Ernst von Siemens grant. Meitar Ensemble, Cairn Ensemble, Ardeo String Quartet, The Israel Contemporary Players, Le Nouvel Ensemble Moderne, Architek Percussion, Geneva Camerata, and Neue Vocalsolisten are among the ensembles that have played Pelz’s music, which is performed regularly in Europe, the USA, Canada, and Israel, including at La Biennale di Venezia and the ManiFeste festival (IRCAM/Centre Pompidou).
https://oferpelz.com/
Matan Gover is a multi-disciplinary musician and software developer. He is currently combining his two passions in McGill University’s Music Technology department, where he researches computational processing of vocal music supervised by Prof. Philippe Depalle. Matan won the America-Israel Cultural Foundation scholarship for piano performance, and completed his B.Mus. in orchestral conducting summa cum laude in Jerusalem. Matan is a professional choir singer, and has written orchestrations and arrangements for ensembles such as the Jerusalem Symphony Orchestra and Jerusalem Academy Chamber Choir. Matan has been a professional software developer since the age of 15. He now works at LANDR Audio Inc. on an A.I.-powered music processing engine that performs automatic music mastering.
http://matangover.com
Sound Tracks: An interactive video game composition
In this demo, we will present and discuss our piece entitled Sound Tracks, originally written for a live@CIRMMT concert and performed by Ensemble Aka. Sound Tracks lies on the continuum between a video game and a musical composition. The game’s graphical user interface consists of a set of ‘tracks’, one track per musician. Each track contains moving graphical symbols that represent musical gestures, and these symbols approach the viewers from the horizon. When the symbols reach the musicians, the musicians must play the corresponding musical gesture. This interface is inspired by the well-known video game Guitar Hero; a sketch of the underlying timing logic appears after the list of themes below.
The graphical interface and game rules replace the traditional musical score: the unfolding of a performance is not predetermined as in most classical music but plays out in real-time according to rules, chance operations, and improvisatory decisions taken by the performers.
In our demo, we will show a performance or recording of this work and discuss the ideas that underpin its creation, as well as questions that arose while performing it with several ensembles. The following themes will be discussed:
- Gamified composition: How does replacing traditional music notation and musical development with a game-based interface affect the mindset of the musicians and the audience during a performance?
- Improvisation vs. predetermination: How do musicians improvise in a given musical framework? How much information should be dictated by a score, versus leaving room for interpretation?
- Timbre and orchestration: How does a small set of musical gestures spread across multiple instruments get combined into complex timbres and textures? How do different instruments interpret the same musical ideas?
- The technology behind this game piece: the implementation and control of the application in performance.
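A minimal sketch of the timing logic such an interface implies (all names and numbers are illustrative, not taken from the Sound Tracks implementation): each symbol is spawned early enough that, travelling at constant speed from the horizon, it reaches the hit line exactly at its scheduled musical time.

```python
from dataclasses import dataclass

TRAVEL_TIME = 3.0  # seconds a symbol takes from horizon to hit line

@dataclass
class Symbol:
    gesture: str      # e.g. "tremolo", "staccato burst"
    hit_time: float   # when the musician should play it (seconds)

def visible_symbols(symbols, now, track_length=1.0):
    """Return (gesture, position) pairs for symbols currently on screen.

    position runs from 0.0 (horizon) to 1.0 (the musician's hit line).
    """
    out = []
    for s in symbols:
        spawn = s.hit_time - TRAVEL_TIME
        if spawn <= now <= s.hit_time:
            out.append((s.gesture, (now - spawn) / TRAVEL_TIME * track_length))
    return out

track = [Symbol("tremolo", 4.0), Symbol("staccato burst", 5.5)]
print(visible_symbols(track, now=3.0))  # tremolo is two-thirds of the way in
```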

Diego Quiroz (McGill University)

Diego Quiroz is a recording engineer, musician, and teacher of music production, audio engineering, and immersive audio. He has recorded classical music in Montreal, Canada, with symphony orchestras, jazz orchestras, and numerous baroque, contemporary, and modern ensembles. He graduated from the Master of Sound Recording program at McGill University in Montreal and, magna cum laude, from the Musical Synthesis major at Berklee College of Music in Boston, USA. He is currently a PhD candidate in Sound Recording at McGill University, working with highly renowned professors such as George Massenburg, Richard King, Wieslaw Woszczyk, and Martha de Francisco. Diego has contributed to numerous AES conference papers on subjects including perceptual evaluation of immersive audio and height-channel formats, gestural control for audio, audio perception in VR/AR, and high-resolution audio. His thesis proposal encompasses input devices and gestural control for 3D audio production. Other interests include ambisonics, binaural audio, and VR/AR in audio.

Gestural Control for immersive recordings using Leap Motion for SPAT:Revolution
Innovative immersive recording techniques that involve height channels are being steadily developed, but they are not matched by better methods of panning control. Only traditional knobs and faders from existing control surfaces have been paired with spatialization-engine parameters, resulting in cumbersome and inadequate panning of complex multi-channel stems. A Leap Motion controller has been linked to some of the spatialization parameters of SPAT:Revolution in an effort to explore the use of gestural methods in sound design and mixing. The user can learn an alternative way of panning control that allows a higher degree of flexibility during sophisticated mixing sessions.
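The mapping idea can be sketched as follows: convert a palm position from a Leap Motion-style tracker into azimuth/elevation/distance and send it over OSC. The /source/1/aed address follows the aed convention of SPAT-family tools but should be verified against the SPAT:Revolution OSC documentation, and the tracker values here are simulated rather than read from the Leap SDK.

```python
import math
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # assumed SPAT OSC input port

def palm_to_aed(x, y, z):
    """Map tracker coordinates (mm, y up, -z forward) to spherical ones."""
    norm_mm = max(math.sqrt(x * x + y * y + z * z), 1e-9)
    distance = norm_mm / 1000.0                  # metres
    azimuth = math.degrees(math.atan2(x, -z))    # 0 degrees = straight ahead
    elevation = math.degrees(math.asin(y / norm_mm))
    return azimuth, elevation, distance

# Simulated palm position: 20 cm right, 10 cm up, 30 cm forward.
az, el, dist = palm_to_aed(200.0, 100.0, -300.0)
client.send_message("/source/1/aed", [az, el, dist])  # hypothetical address
```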

Laurie Radford / University of Calgary

Laurie Radford is a Canadian composer, sound artist, music technologist, educator and researcher who creates music for diverse combinations of instruments and voices, electroacoustic media, and performers in interaction with computer-controlled signal processing of sound and image. His music fuses timbral and spatial characteristics of instruments and voices with mediated sound and image in a sonic art that is rhythmically visceral, formally exploratory and sonically engaging. His music has been performed widely and he has received commissions and performances from ensembles including the Aventa Ensemble, Esprit Orchestra, New Music Concerts, Le Nouvel Ensemble Modern, L'Ensemble contemporain de Montréal, Meitar Ensemble, Paramirabo, Thin Edge New Music Collective, Trio Fibonacci, the Penderecki, Bozzini and Molinari String Quartets, and the Winnipeg, Calgary, Edmonton and Montréal Symphony Orchestras. Radford has taught composition, electroacoustic music and music technology at McGill University, Concordia University, Bishop’s University, University of Alberta, City University (London, UK), and is presently Professor of Sonic Arts and Composition at the University of Calgary.
Getting into Place/Space: The Pedagogy of Spatial Audio
The acquisition of skills and knowledge about the spaces we inhabit is linked to the development of proprioceptive skills from the earliest moments of physical and psychological negotiation with the world around us. The acquisition of “spatial intelligence” extends to the development of skills in comprehending the acoustic spaces through which we move, unconsciously and consciously measuring, comparing, and committing to memory the sound signals and sources fleetingly inhabiting these spaces, while building a library of sounding spaces to which we return and refer. Musicians, composers, and sound artists develop their craft via time-honoured instruction in listening intently to instruments and sources of sound, considering their means of activation and the many parameters of sound generation moving in time. It is taken for granted that this sonic activity occurs within a particular bounded space, and that that space affects the perception of sound and, by extension, the manner of performance practice and sound activation required. Yet, aside from certain electroacoustic studies that consider the ramifications of the technological mediation of space through stereo and multichannel sound projection environments, immersive cinema and video game design research, and spatial ear-training methods for audio engineers, there is very little overt instruction in spatial listening targeted at musicians and sound artists: a directed and methodical study of acoustic space as experienced in the myriad places and spaces in which music and sound art are presented. The current proliferation of dedicated high-density loudspeaker arrays for spatial design, as well as improved and accessible technologies for capturing, preserving, and reproducing the complexities of acoustic spaces, provides a fertile environment in which to initiate and develop such a pedagogy of spatial listening. This presentation will consider some of the fundamental parameters of spatial listening and the perception of space that contribute to the skills involved in the compositional design and control of spatial audio. An outline of a methodology of spatial sound pedagogy is proposed and illustrated with a series of spatial audio exercises that consider experiential learning, terminology and concepts, technologies for deployment, and creative application.

Jacques Rémus / Ipotam Mécamusique

Originally a biologist (agronomist and marine researcher), Jacques Rémus chose, at the end of the seventies, to devote himself to music and the exploration of various forms of creation. A saxophonist, he took part in founding the group Urban Sax. He has also appeared in many concerts, from experimental music (Alan Silva, Steve Lacy) to street music (Bread and Puppet). After studies at music conservatories, the G.R.M., and the G.M.E.B., he wrote music for dance, theatre, “total shows”, television, and cinema. He is above all the author of installations and shows featuring sound sculptures and musical machines such as the “Bombyx”, the “Double String Quartet”, “Concertomatique”, “Leon and the Hands’ Song”, “Carillons” Nos. 1, 2 and 3, and the “Washing Machines Orchestra”, as well as those presented at the Musée des Arts Forains (Paris). Since 2014, his work has focused on the development of “Thermophones”.
The Thermophones: recent developments and spatialization
Jacques Rémus developed the Thermophones project by experimenting with various phenomena related to the recent science of thermoacoustics. Several installations and concerts were first imagined with pipes using the principle of “pyrophones”, based on the Rijke effect. In association with Steve Garrett of Penn State University and several researchers working in French laboratories (LIMSI-CNRS in Saclay, Paris VI, and LAUM in Le Mans), Jacques Rémus began to develop an instrumentarium of pipes heated by electrical resistors: the Thermophones. A presentation was given at the IRCAM Forum in November 2015. Since then, work with these pipes and their pure, deep, powerful, and disturbing sounds has made it possible to develop specific musical writing and to produce installations where the spatialization of acoustic sound sources adapts to the configuration of the installation site. The uniqueness of the sounds, which consist essentially of the fundamental wave of each pipe, opens the possibility of using the pipes as “augmentable” instruments for other types of writing, where the capture of their song can be a source of audio and video transformations. If transport conditions allow, the presentation will include a demonstration on one or more Thermophones. The Thermophones are controlled by Max.
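The relation between pipe length and pitch that underlies such an instrumentarium can be made concrete with textbook acoustics: a Rijke-type pipe, open at both ends, resonates near f = c / 2L. The sketch below is standard physics, not Rémus's actual designs, and ignores the complications a real thermophone introduces.

```python
C = 343.0  # speed of sound in air at ~20 degrees C, in m/s

def pipe_length(f0):
    """Length (m) of an open-open pipe with fundamental f0 (Hz).

    Ignores end corrections and the temperature gradient created by
    the heating element, both of which shift the pitch in practice.
    """
    return C / (2.0 * f0)

for f0 in [55.0, 110.0, 220.0]:  # A1, A2, A3
    print(f"{f0:6.1f} Hz -> {pipe_length(f0):.2f} m")
```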

Lindsey Reymore / Ohio State University
Lindsey is in the final semester of the PhD program in Music Theory at The Ohio State University. Her research focuses on timbre semantics and cross-modal language; other recent projects address multimodal emotion associations in music and dance, instrument-specific absolute pitch, and seventeenth-century harmony. In 2018, Lindsey received the Early Career Researcher Award from the European Society for the Cognitive Sciences of Music (ESCOM) at the International Conference on Music Perception and Cognition and is co-chair of the upcoming conference to be held at Ohio State, May 11-15, "Future Directions of Music Cognition."
Instrument Qualia, Timbre Trait Profiles, and Semantic Orchestration Analysis
I present a method of computational orchestration analysis built from empirical studies of timbre semantics. In open-ended interviews, 23 musicians were asked to describe their phenomenal experiences of the sounds of 20 Western instruments. A content analysis of the transcribed interviews suggested 77 qualitative categories underlying the musicians’ descriptions. In a second study, 460 musician participants rated subsets of the same 20 instruments according to these 77 categories. Principal Component Analyses and supplementary polls produced a final 20-dimensional model of the cognitive linguistics of timbre qualia. The model dimensions include: rumbling/low, soft/singing, watery/fluid, direct/loud, nasal/reedy, shrill/noisy, percussive, pure/clear, brassy/metallic, raspy/grainy, ringing/long decay, sparkling/brilliant, airy/breathy, resonant/vibrant, hollow, woody, muted/veiled, sustained/even, open, and focused/compact. In a third study, 243 participants rated subsets of a group of 34 orchestral instruments using the 20-dimensional model. These ratings were used to generate Timbre Trait Profiles for each of these instruments, which serve as the foundation for the orchestration analysis method. A computational program is under development (anticipated Feb-March 2020) to generate a semantic orchestration plot from a musical piece given as input. The analysis will provide information on how the semantic dimensions of timbre evolve throughout a work, initially using a model that combines the Timbre Trait Profiles of the instruments in the orchestration and applies intensity modifiers based on dynamic indications. In addition to the semantic orchestration plots, I aim to translate musical data into a real-time, animated visual analysis that can be played along with the piece to illustrate dynamic timbral changes resulting from orchestration.
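As a loose guess at what one analysis step could look like computationally (the actual program may differ), the sketch below treats Timbre Trait Profiles as vectors over the semantic dimensions and combines them with dynamic-based intensity modifiers; all ratings and weights are invented.

```python
import numpy as np

DIMENSIONS = ["rumbling/low", "soft/singing", "brassy/metallic"]  # 3 of the 20

profiles = {  # hypothetical mean ratings per dimension, range 0-1
    "horn":  np.array([0.4, 0.6, 0.7]),
    "viola": np.array([0.2, 0.7, 0.1]),
}
dynamic_weight = {"p": 0.5, "mf": 1.0, "f": 1.5}  # intensity modifiers

def frame_profile(events):
    """Combine profiles of all sounding instruments in one score frame.

    events: list of (instrument, dynamic) pairs, e.g. [("horn", "f")].
    Returns a unit-length semantic vector for the frame.
    """
    vecs = [profiles[inst] * dynamic_weight[dyn] for inst, dyn in events]
    total = np.sum(vecs, axis=0)
    return total / max(np.linalg.norm(total), 1e-9)

print(dict(zip(DIMENSIONS, frame_profile([("horn", "f"), ("viola", "p")]))))
```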

Ludovic Landolt

Born in 1993, Ludovic Landolt is a Franco-Swiss artist currently working between Metz and Paris. He has exhibited at Point Commun (Annecy), Tiret Point Tiret Gallery (Paris), in.plano (Paris), and the Palais de Tokyo, and has presented his work at the art centre BBB (Toulouse). A self-taught lute and sitar player, Landolt invokes in his work a contemplative vision influenced by music, literature, and poetry. He advances deep listening rather than mere hearing, and designs sculptures and sound instruments whose playing methods are activated during performances or happenings, according to experimental protocols. Concretely, he proposes to question the meaning and use of sounds. He practices the strategy game Go, as well as Zazen meditation, both as epistemological tools applied to his sonic and visual language. He is currently a resident of the Locus Sonus research laboratory within the Aix-en-Provence Art School (ESAAIX) and the Institute for Research and Studies on the Arab and Muslim Worlds (Aix-Marseille University / CNRS).
Eternel Music - Bourdon and harmonic occurrences of bells

Two years ago, while taking a walk, I fortuitously came across a sound event that would deeply mark my work. On the ground floor of a ringing chamber of a church in the City of London, a group of bell ringers simultaneously activated a set of bells to create variations and musical sequences using an unexpected method called change ringing. This way of ringing immediately appealed to me, and it opened a field of study as intriguing as it is delightfully old-fashioned: campanology, the study of bells and their sounds. Since then I have developed much sympathy for these instruments, which I like listening to while remembering the fusion of metals at the origin of their resonance, a supernatural union similar to the grafting of plants. Through this aesthetic experience, their ringing is part of a sensitive culture and invites those who pay attention to a particular form of listening. This is why I would like to speak about the quality of this listening and the occurrences of these instruments, gates between the visible and the invisible worlds, specific to each acoustic culture. The polyphony and tonality of these instruments are orchestrated according to a vocal story in which the range of sounds exceeds that of the human voice. Beyond their sacred uses, bells turn out to be above all sounding and musical objects, spatialized in both the time and the space in which their sounds are broadcast.

Marlon Schumacher and Núria Gimenez Comas

Marlon Schumacher’s academic background is multi-disciplinary, with pedagogical and artistic degrees in music theory, digital media, and composition (under Marco Stroppa) from the UMPA Stuttgart, and a PhD in Music Technology from McGill University (co-supervised by IRCAM). His main research topics are spatial sound perception/synthesis, computer-aided composition, and musical interaction; he has contributed to the field through scientific publications, academic services, several open-source software releases, and artistic/research projects. He is an active performer/composer creating works for a broad variety of media and formats, including instrumental and intermedia pieces, crowd performances, and installations. He currently works as professor for music informatics at the Institute for Music Informatics and Musicology of the University of Music in Karlsruhe, where he has curated an international lecture series and, as director of the ComputerStudio, designed research labs. He is an associate member of CIRMMT, IDMIL, and RepMuS, among others, and an organizing member of the annual conference on Music and Sonic Arts (MUSA).
Núria Giménez-Comas began her musical studies with piano, already with the idea of later studying composition. She entered ESMUC (Escola Superior de Música de Catalunya), where she worked on instrumental and electroacoustic composition with composer Christophe Havel. Working with electroacoustic music changed her way of thinking about instrumental music, prompting a deep reflection on the notion of timbre. At ESMUC she finished her bachelor's degree with Mauricio Sotelo, with a dissertation on "Music and Mathematics".
She then spent two years of training at the Geneva Conservatory, studying electroacoustic composition with Luis Naon and instrumental composition with Michael Jarrell, deepening her work on harmony and instrumentation in particular. She completed her Master in Composition at the Haute École de Musique in Geneva with a dissertation on sound perception supervised by Eric Daubresse, developing concepts such as the "sound image".
Very interested in mixed music, she was selected for Cursus 1 and Cursus 2 for composers at IRCAM. During this research year she worked on soundscape listening in a project for string quartet and electronics, premiered by the Diotima Quartet in collaboration with Voix Nouvelles de Royaumont, using a new 3D Ambisonics spatialisation system.
She has worked on many collaborative projects, including video art and live video performance with Canadian artist Dan Browne, and a long exchange with the poet Laure Gauthier on a large-format « Poetic-Architecture » piece for actress-singer, choir, and electronics.
Also deeply attached to orchestration, she has worked with orchestras such as the OCG, Brussels Philharmonic, Geneva Camerata, and Orchestra of Cadaqués, under conductors such as Michel Tabachnik and David Robertson, and has taken part in the Orchidea orchestration project. She has received commissions from musicians, orchestras, and institutions such as Proxima Centauri, Geneva Camerata, Radio France, the Cadaqués Orchestra, Grame-Ircam, GMEM, Mixtur Festival, CNDM, and the BBVA Foundation, with support from the French Ministry of Culture, INAEM, and the SGAE Foundation. Her pieces have been played at festivals including ManiFeste, Présences, Archipel, Biennale Musiques en Scène (Lyon), BCNClàssics, and Ibermúsica. She recently worked in an artistic research residency at IRCAM-ZKM with Marlon Schumacher, "Sculpting Space", on synthesized textures in 3D space, producing several immersive installations in promenade format. She has won awards in many competitions, including the Prize of the Colegio de España de Paris-INAEM (2012) and first prize in the III Edison Denisov Competition.
She has given master classes and conference talks about her music, her research, and mixed music at the Instituto Cervantes (Bordeaux), the IRCAM Cursus, the ICST in Zurich, and the Hochschule in Stuttgart, among others. Very interested in creative pedagogical projects, she has worked with institutions such as the UAB ("Composers in the Classroom") and on other pedagogical commissions.
She has served on competition juries including the Colegio de España de Paris-INAEM, the SGAE Young Composers Prize, and the 2018 National Music Prize.
Sculpting space
This collaborative artistic research project has explored and developed the notion of spatial sound synthesis from artistic and perceptual viewpoints, truly amalgamating spatialization algorithms with sound synthesis engines according to an integral musical idea. From a research viewpoint, the study of the mechanics of auditory perception should help us better understand how these (musically, mostly separate) dimensions interact and give rise to novel sound sensations.
The project explores spatialization beyond the prevalent concept of sound-source panning: extended sources (with extension and shape), spectral spatialization techniques, and the notion of a "synthetic soundscape" in the sense of working with densities and distributions in various synthesis dimensions (in frequency as well as in space). From an artistic perspective, this approach can be seen as following a tradition of exploiting psychoacoustic principles as a musical model (as in the French spectral school, for example) in order to create novel, sometimes paradoxical, sound compositions and musical forms.
For the technical realization, composer Núria Giménez-Comas has used graphical descriptions of mass densities with the sound synthesis and spatialisation tools in OpenMusic.
Marlon Schumacher's contribution is dedicated to research and development of new tools informed by cognitive mechanisms studied in spatial auditory perception and scene analysis, more precisely the OMChroma/OMPrisma framework, combining synthesis/spatialization models, perceptual processing, and room effects.
The novel sound-processing framework of the presented tools allows complex DSP graphs to be built directly in the composition environment. This extends the compositional process to a virtual "lutherie" or "orchestration", blurring the lines between musical material and instruments. The framework also makes it possible to design controls for higher-level spatial attributes such as diffusion, sharpness, and definition.
Concretely, we propose combining spatialization systems in an unorthodox loudspeaker setup (see technical rider) to synthesize a broad spectrum of “spatial sound objects” ranging from close-proximity sound sources to distant textures or ambiances, which the listener is free to individually explore and appreciate.
The idea is to create not only the sensation of given sound sources projected into space, but multiple spatial sound morphologies around and within the audience that densify and grow across different emerging sound layers.
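To make the density idea tangible, here is a toy Python rendering in which grain onsets, frequencies, and azimuths are drawn from explicit distributions, so the composer specifies densities rather than individual sources. It is purely illustrative: OMChroma/OMPrisma operate inside OpenMusic, not Python.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # grains over a 30-second texture

# Temporal density: uniform onsets, sorted.
times = np.sort(rng.uniform(0.0, 30.0, size=N))

# Frequency density: a log-normal cluster around 800 Hz.
freqs = rng.lognormal(mean=np.log(800.0), sigma=0.4, size=N)

# Spatial density: azimuths centred behind the audience, with the
# spread widening over time (a "densifying, growing" morphology).
spread_deg = 30.0 + 150.0 * times / 30.0
azimuths = rng.normal(180.0, spread_deg / 3.0) % 360.0

# Each (onset, frequency, azimuth) triple can be handed to any
# synthesis/spatialization back end.
grains = list(zip(times, freqs, azimuths))
```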

Nadine Schütz / IRCAM

Nadine Schütz presents herself as a sound architect, or environmental interpreter. Based on both theoretical and poetic research, she explores sound space through compositions, performances, installations, and ambiences that link space and listening, landscape and music, the urban and the human. Her works have been presented in Zurich, Venice, Naples, New York, Moscow, Tokyo, and Kyoto. Among her current projects are a sound device for the TGI square in Paris, elementary instruments for the Pleyel urban crossing in Saint-Denis, and the Jardin des Réflexions for the renovated Place de La Défense. For four years she headed the multimedia laboratory of the Landscape Institute at ETH Zurich, where she installed a new studio for the spatial simulation of soundscapes during her doctorate, Cultivating Sound, completed in 2017. She is currently a guest composer at IRCAM at the Centre Pompidou in Paris.
Land Sound Design 
Like an environmental interpreter, I pursue an approach of in situ urban and landscape composition, developing the acoustic qualities and sound ambiances offered by a site. Space and place are explored as a creative score that informs and directs its own transformation. This presentation lays out the artistic and methodological questions that drive my research conducted in collaboration with IRCAM, notably with the Acoustic and Cognitive Spaces team, and in exchange with the Perception and Sound Design and the Sound Systems and Signals: Audio/Acoustics, InstruMents teams. Space and place are as much the departure point as the destination of my creations, but the fundamental principles of auditory perception also guide the sound orchestration of the environment. Spatialization tools play an important role in this process: first, they allow the acoustic characteristics of a site to be imported into the studio, where the sound environment is then modeled. The composition of the project takes place within this virtual environment and generates new, contextualized sound material.
Composing with echoes
Space and places are decrypted in the same way one deciphers a score. This approach guides my work on sound. The Jardin des Réflexions, an ongoing project for the Place de la Défense, gives rise to new experiments. The acoustic models of this sonic and physical intervention have resulted in an evolving series of compositions, always linked to the place of origin but also developing the autonomy of each piece. This demo focuses on the composer's exploitation of impulse responses recorded in situ. The combination of different measurements from different source and microphone positions defines a spatial register for imaginary scenes. By isolating their spatio-temporal and spectral characteristics, these acoustic imprints structure a spatial and tonal orchestration, always imbued with the acoustic spirit of the place. Acoustic space becomes an instrument of composition.
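One concrete way to begin working with such in-situ impulse responses, sketched here in Python under assumed file names: split a measured IR into its early reflections and its late tail so the two can be treated separately. The 80 ms split point is a conventional rule of thumb, not a detail of the composer's method.

```python
import numpy as np
import soundfile as sf  # assumed available for audio file I/O

# Load an impulse response measured in situ (mono assumed here).
ir, sr = sf.read("insitu_ir.wav")
if ir.ndim > 1:
    ir = ir[:, 0]

# Before ~80 ms lie the early reflections that carry the spatial
# "imprint" of the place; after lies the diffuse reverberant tail.
split = int(0.080 * sr)
fade_len = 256
fade = np.hanning(2 * fade_len)

early = ir[:split].copy()
late = ir[split - fade_len:].copy()
early[-fade_len:] *= fade[fade_len:]  # fade out (second half of window)
late[:fade_len] *= fade[:fade_len]    # fade in (first half of window)

sf.write("ir_early.wav", early, sr)
sf.write("ir_late.wav", late, sr)
```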

Christopher Trapani / University of Texas

The American/Italian composer Christopher Trapani was born in New Orleans, Louisiana. He earned a Bachelor’s degree from Harvard, a Master’s degree at the Royal College of Music, and a doctorate from Columbia University. He spent a year in Istanbul on a Fulbright grant, studying microtonality in Ottoman music, and nearly seven years in Paris, including several working at IRCAM. Christopher’s honors include the 2016-17 Rome Prize, a 2019 Guggenheim Fellowship, and the 2007 Gaudeamus Prize. He has received commissions from the Fromm Foundation (2019), the Koussevitzky Foundation (2018), and Chamber Music America (2015). His debut CD, Waterlines, was released on New Focus Recordings in 2018. He currently serves as Visiting Assistant Professor of Composition and Interim Director of the Electronic Music Studios at The University of Texas at Austin’s Butler School of Music.
The orchestra increased: Spinning in Infinity
Spinning in Infinity (written for Festival Présences 2015; computer music design: Greg Beller) presents a unique approach to spatialization and orchestration. A spatialized array of 12 loudspeakers is embedded in the orchestra to produce a fusion between electronic and acoustic sounds. The capabilities of the acoustic instruments are enhanced by a complementary microtonal backdrop that fuses with the onstage players. These kaleidoscopic orchestrations are developed using concatenative synthesis: samples are chosen and retuned in real time to create a kind of sonic color wheel, with two-dimensional spiral-shaped trips through timbre space in CataRT, between brass and winds, for example, or between bright and dull, pitch and noise...
These reworked excerpts are then synchronized with the orchestra using the adaptive tempo-tracking of Antescofo, creating a sort of augmented orchestra whose dimensions are in constant flux: a dozen microtonal flutes can appear from nowhere, replaced the next second by a brass choir shaded by an unreal arsenal of mutes, or a peal of chimes triggered by a single live percussionist; waves of color in constant motion...
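The unit-selection step at the heart of CataRT-style concatenative synthesis can be sketched in a few lines: corpus samples live in a descriptor space, and a target trajectory (here a spiral, echoing the "sonic color wheel") is answered at each step by the nearest unit. Descriptor values are invented for illustration; the actual piece runs CataRT in Max in real time.

```python
import numpy as np

# Corpus: (brightness, noisiness) descriptors for each sample index.
corpus = np.array([
    [0.1, 0.1],  # dull, pitched      (e.g. low brass)
    [0.8, 0.2],  # bright, pitched    (e.g. high winds)
    [0.9, 0.9],  # bright, noisy      (e.g. cymbal wash)
])

def nearest_unit(target):
    """Index of the corpus sample closest to the target descriptor."""
    return int(np.argmin(np.linalg.norm(corpus - target, axis=1)))

# A spiral trip through 2-D timbre space:
for t in np.linspace(0.0, 1.0, 8):
    angle, radius = 4 * np.pi * t, 0.5 * t
    target = 0.5 + radius * np.array([np.cos(angle), np.sin(angle)])
    print(f"t={t:.2f} -> unit {nearest_unit(target)}")
```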

Roxanne Turcotte / CMCQ SMCQ

Active as a composer and sound designer since 1980, Turcotte has built her aesthetics around a cinema-like art of integration, acousmatic and immersive music. She is asked to sit on composition juries, and she regularly leads composition training sessions and workshops. Turcotte has received grants from the Canada Council for the Arts (CCA) and the Conseil des arts et des lettres du Québec (CALQ). Her music has won numerous awards and distinctions, including in the USA (1987, ’89), a nomination for the Oslo World Music Days (LCL, 1990), the Hugh Le Caine awards (Canada, 1985, ’89), Luigi Russolo (Italy, 1989), the 6th Radio Art Competition (France, 2005), and Bourges (France). Her electroacoustic works have been programmed by several events: Florida Electroacoustic Music Festival (1996), Montréal en lumière (2000), Bourges (2003), Perpignan (2000), Futura (2006), Marseille (2002), Barcelona (2004), Geneva (2005), Akousma, 16 tracks (Bestiaire), Aix-en-Provence (2007), Edmonton (2014), Montpellier (Festival Klang, 2017), and Festival MNM 2019.
Les oiseaux de Nias
A behavioral and comparative sound study of how birds function in society, and of the consequences of the dehumanization of a mankind devoted to its self-destruction, a universal disorder. The decrease in biodiversity from here to Nias Island testifies to the disbelief of humans and climate-sceptic governments. The gradual extinction of animal species, caused by unlimited industrial power and ideological battles, makes small groups of dissident individuals emerge from the revolting populations. This is a call for freedom and for safeguarding our living space.

Marcel Zaes

Marcel Zaes (b. 1984) is an artist and artistic researcher, currently pursuing his Ph.D. in Music & Multimedia Composition at Brown University. Marcel investigates mechanical time within an interdisciplinary framework. His creative practice consists of assemblages of self-made software that acts as a mechanical timekeeper and human performers who play «against» it, thereby creating an affective potential for rethinking the gap between what is conceived of as «human» versus «mechanical» temporality. Marcel’s work has most recently been performed and exhibited at ISEA Hong Kong, the Center for New Music San Francisco, the Biennial of Contemporary Arts Lisbon, Cabaret Voltaire Zurich, Columbia University in New York, Dampfzentrale Bern, and Fridman Gallery New York.

www.marcelzaes.com

#otherbeats: Resisting the Grid — Performing Asynchrony
Much of today’s music builds on mechanized forms of timekeeping, such as the metronome, the drum machine, or the visual grid in any contemporary music app. As my dissertation piece at Brown University, I made “#otherbeats,” an experimental web audio piece that challenges the idea of a binary between human- and machine-made time grids. For “#otherbeats,” I asked 50 participants from around the globe to record their idiosyncratic versions of time grids. All these collected recordings, an oral history of timekeeping, are curated and made audible on a website; users can navigate, play, and live-mix the archive as a musical composition at https://otherbeats.net. “#otherbeats” is ambiguous in several ways: it is a piece about mechanical time grids, but it does not directly use them. It is about rhythmic regularity as much as it is about deviance and defiance. It is an instrument as much as a composition, and it is human as much as machinic. It is a network of sound as much as a collection of seemingly unconnected sounds. As a piece, in its qualities of “mixing” and in its visual aesthetics, it references musical undergrounds that are heavily based on mechanized timekeepers, such as the queer African diasporic Chicago house of the late 1970s; “#otherbeats” thereby acknowledges how deeply rhythm-making, including in experimental musical cultures, is indebted to these scenes and pays tribute to them. This piece is an invitation to think about alternate temporalities in electronic music. In my presentation, I will present and discuss an excerpt of “#otherbeats,” the technicalities that lie at its base (in-browser sound processing with the Web Audio API), and the sociopolitical theory embedded in it.
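For a feel of the opposition the piece stages, here is a hypothetical Python sketch contrasting a strictly mechanical grid with a jittered, drifting one. It is unrelated to the actual Web Audio implementation at otherbeats.net, whose in-browser scheduling works differently; the jitter and drift values are invented.

```python
import random

def mechanical_grid(bpm, beats):
    """Onset times (s) of a perfectly regular metronome."""
    period = 60.0 / bpm
    return [i * period for i in range(beats)]

def human_grid(bpm, beats, jitter=0.02, drift=0.001):
    """Onsets with per-beat Gaussian jitter and a slow tempo drift,
    a crude stand-in for human timekeeping."""
    period = 60.0 / bpm
    t, out = 0.0, []
    for _ in range(beats):
        out.append(t + random.gauss(0.0, jitter))
        period *= 1.0 + drift  # gradual slowing
        t += period
    return out

# Per-beat deviation of the "human" grid from the mechanical one:
print([round(a - b, 3) for a, b in
       zip(human_grid(120, 8), mechanical_grid(120, 8))])
```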

Tiange Zhou (UC San Diego)

Born in China in 1990, Tiange Zhou is a composer, photographer, designer, and improvisational dancer. She is interested in interactive audiovisual art and other integrated art forms. Tiange's music has received performances by musicians from Mivos Quartet, I.C.E., S.E.M. Ensemble, Philip Glass Ensemble, Yale Camerata, Neue Vocalsolisten, Sandbox Ensemble, Talujon Ensemble, Third Coast Percussion, Loadbang, Yale Philharmonia, and Albany Symphony. She received the Baumgardner Fellowship as resident composer at the Norfolk Chamber Music Festival in 2016; her work Lament for Adonis was a finalist for the American Prize in the chorus division and was collected by the American classical radio station WQXR. Her chamber work hEArT won first prize in the Kirkos Kammer International Composition Competition in Ireland, and her solo violin piece A Mirror for a Dream was chosen as one of the contemporary pieces for the 6th International Solo Violin Competition at Musical Summer Malaga 2016. She was also invited to present her music at IRCAM Forum Shanghai 2019.
Tiange is pursuing her Ph.D. in composition at the University of California, San Diego, where she studies with Prof. Roger Reynolds for composition. She completed her master's degree at Yale University and a Bachelor's degree at Manhattan School of Music. She also completed an exchange program at Staatliche Hochschule für Musik und Darstellende Kunst, Stuttgart.
Besides concert music, Tiange participates in numerous collaborative projects with artists and scientists in different genres. She has completed several projects about contemporary social-psychological dilemmas that concern her.

LUX FLUX: Designing sound and light work in Max/MSP through DMX
When musicians talk about interactive audio-visual creation in recent years, they usually mean how sound can drive the parameters of video components. It is easy to forget that lighting design has accompanied live music performance since the beginning of theatre history. The nature of light is closer to that of sound: both have abstract characters and can carry artistic continuity while having a specific narrative aspect. Musicians often use lighting terminology to describe sound, such as bright, dim, colorful, pale, and shadowy, because of these similarities.
On the other hand, it should be equally exciting to compose lighting events with music over time by applying sound-design methodologies.
In this demonstration, I will show how to make intriguing lighting designs with Max/MSP programming through a USB-to-DMX interface. I will share several easy-to-learn Max patches that help audiences understand the primary methods of lighting design and its interactivity with audio signal processing. During the demo I will present my audio-lighting work Sexet, using six LED lights as a chamber ensemble.
DMX is a lighting control protocol that allows users to have ultimate control over their lighting needs. The technology to access this protocol is very close to MIDI controlling. Therefore, it is very convenient to use MAX/MSP to program exciting creative works.
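As a rough sketch of the idea (hypothetical, and not the presenter's Max patches): a DMX universe is simply 512 channels holding byte values 0-255 that are refreshed continuously, which is why driving it feels close to sending MIDI controller data. In TypeScript:

```typescript
// A DMX universe is 512 channels of byte values (0-255), sent repeatedly.
const DMX_CHANNELS = 512;
const universe = new Uint8Array(DMX_CHANNELS);

// Map an audio level (0..1, e.g. from an RMS follower) onto one fixture's
// dimmer and colour channels; indices here are 0-based into the frame.
function audioToLight(level: number, baseChannel: number): void {
  const v = Math.max(0, Math.min(255, Math.round(level * 255)));
  universe[baseChannel] = v;           // dimmer follows loudness
  universe[baseChannel + 1] = v;       // red rises with loudness
  universe[baseChannel + 2] = 255 - v; // blue fades as the sound gets louder
}

// Stand-in for a USB-to-DMX driver call; a real interface would take this frame.
const sendFrame = (frame: Uint8Array): void => { /* hand frame to the interface */ };

// Refresh at roughly 40 Hz, a typical DMX update rate.
// (Math.random() stands in for a real audio analysis value.)
setInterval(() => { audioToLight(Math.random(), 0); sendFrame(universe); }, 25);
```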

IRCAM SPEAKERS

Thibaut Carpentier / IRCAM
Thibaut Carpentier is a research engineer in the STMS Lab (Sciences and Technologies of Music and Sound) at Ircam, Paris. He studied acoustics at the Ecole Centrale and signal processing at Télécom ParisTech, before joining CNRS (French National Center for Scientific Research) in 2009. As a member of the Acoustics & Cognition Team, his work focuses on sound spatialization, artificial reverberation, room acoustics, and computer tools for 3D composition and mixing. He is the lead developer and head of the Spat project as well as the 3D mixing and post-production workstation Panoramix. In 2018, he was awarded the CNRS Cristal medal.


Simone Conforti / IRCAM
Composer, computer music designer, sound designer, and software developer. Born in Winterthur, he graduated in flute and electronic music; he teaches in the pedagogy department at IRCAM in Paris and works as a computer music designer at CIMM Venice. Specialized in interactive and multimedia arts, his work also includes an intense activity in music-oriented technology design; in this field he has developed many algorithms, ranging from sound spatialization and space virtualization to sound masking and generative music. Co-founder and CTO of MUSICO, he previously co-founded MusicFit and MUSST, has worked for Architettura Sonora, and has been a researcher for the University of Basel, the MARTLab research center in Florence, the HEM in Geneva, and the HEMU in Lausanne. He has been a professor of electroacoustics at the conservatories of Florence and Cuneo.
“Within the sound”: the search for new sounds via TS2

This workshop presents TS2 and its latest features in the context of sound design and sound composition, focusing on how to discover sonic properties that are normally invisible or inaudible when we look at or listen to sounds without decomposing their structure.
Through the manipulation of time and spectral properties, TS2 makes it possible to retrieve details that are normally hidden from the human ear.
Thanks to this special sonic lens, it is possible to generate new sounds from the original source and to widen the exploration and creation of sonic content.

Philippe Esling / IRCAM
Philippe Esling received an MSc in acoustics, signal processing, and computer science in 2009 and a PhD on multiobjective time series matching in 2012. He was a post-doctoral fellow in the Department of Genetics and Evolution at the University of Geneva in 2012. He has been a tenured associate professor at IRCAM (Paris 6) since 2013. In this short time span, he has authored and co-authored over 15 peer-reviewed papers in prestigious journals such as ACM Computing Surveys, Proceedings of the National Academy of Sciences, IEEE TASLP, and Nucleic Acids Research. He received a young researcher award for his work in audio querying in 2011 and a PhD award for his work in multiobjective time series data mining in 2013. In applied research, he developed and released the first computer-aided orchestration software, Orchids, commercialized in fall 2014 and already used by a wide community of composers. He has supervised six Master's interns and a C++ developer for a full year, and is currently directing two PhD students. He is the lead investigator in time series mining at IRCAM, the main collaborator in the international France-Canada SSHRC partnership, and the supervisor of an international workgroup on orchestration.


Jean-Louis Giavitto / IRCAM

A senior computer scientist at CNRS, my work focuses on the development of new programming paradigms based on temporal and spatial relationships. In a previous life, I applied this research to the modelling and simulation of biological systems, especially in the field of morphogenesis, at the University of Evry and at Genopole, where I co-founded the IBISC laboratory (Informatics, Integrative Biology and Complex Systems). Since my arrival at IRCAM (January 2011), my work has focused on the representation and manipulation of musical objects for musical analysis, composition, and performance on stage. I am especially interested in the specification of real-time interactions involving fine temporal relationships during performances, in the context of the Antescofo system used for the production of mixed music pieces at IRCAM and elsewhere in the world. This technology now benefits everyone thanks to the creation of a spin-off company. I participated with Arshia Cont in the creation of the INRIA MuTAnt project-team within the RepMus team, and I am also deputy director of the joint research lab Ircam-CNRS-Sorbonne University.
Designing cyber-temporal systems with Antescofo

The Antescofo system couples machine listening with a dedicated programming language for compositional and performative purposes. It allows real-time synchronization of human musicians with computers during live performance, especially in the context of mixed music (the live association of acoustic instruments played by human musicians and electronic processes run on computers). The listening module of the Antescofo software infers the variability of the performance with respect to the idealized score, through score-following and tempo-detection algorithms. The Antescofo real-time programming language provides generic, expressive support for the design of complex musical scenarios between human musicians and computer media in real-time interaction. It makes explicit the composer's intentions on how computers and musicians are to perform together (for example, should they play in a “call and response” manner, or should the musician take the lead, etc.).

In this way, the programmer/composer describes the interactive scenario with an augmented score, in which musical objects stand next to computer programs, specifying temporal organizations for their live coordination. During each performance, human musicians “implement” the instrumental part of the score, while the system evaluates the electronic part, taking into account the information provided by the listening module. The Antescofo system is used at IRCAM for the implementation of numerous mixed music pieces, but also for purely electronic pieces.
The presentation will focus on the Antescofo real-time programming language. The language is built on the synchrony hypothesis, where atomic actions are instantaneous; Antescofo extends this approach with durative actions. This approach, and its benefits, will be compared to other approaches in the field of mixed music and audio processing. In Antescofo, as in many modern languages, processes are first-class values. This makes it possible to program complex temporal behaviors in a simple way, by composing parameterized processes. Beyond processes, Antescofo actors are autonomous, parallel objects that respond to messages and are used to implement parallel electronic voices. Temporal patterns can be used to make these actors react to the occurrence of arbitrary logical and temporal conditions.

During this lecture, we will explain how Antescofo pushes the recognition/triggering paradigm, currently predominant in mixed music, toward the more musically expressive paradigm of synchronization, in which “time-lines” are aligned and synchronized following performative and temporal constraints. The conceptual sketch below illustrates the difference.
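The following minimal TypeScript sketch is purely conceptual (it is not Antescofo syntax, and all names are hypothetical): it shows how scheduling actions in beats rather than seconds makes a durative electronic gesture stretch automatically with the tempo inferred from the player.

```typescript
// Conceptual sketch: actions are scheduled in beats, so their unfolding
// follows the live tempo estimate rather than the wall clock.
type TempoEstimate = { bpm: number };

class BeatScheduler {
  private beatPos = 0;                                  // musical time, in beats
  private pending: { atBeat: number; run: () => void }[] = [];

  // Schedule an action a given number of beats from the current position.
  after(beats: number, run: () => void): void {
    this.pending.push({ atBeat: this.beatPos + beats, run });
  }

  // Called periodically; advances musical time with the *latest* tempo
  // estimate, so a ritardando automatically stretches pending actions.
  tick(elapsedSeconds: number, tempo: TempoEstimate): void {
    this.beatPos += elapsedSeconds * (tempo.bpm / 60);
    const due = this.pending.filter(a => a.atBeat <= this.beatPos);
    this.pending = this.pending.filter(a => a.atBeat > this.beatPos);
    due.forEach(a => a.run());
  }
}

// A two-beat electronic gesture that stays aligned with the player.
const sched = new BeatScheduler();
sched.after(0, () => console.log("start gesture"));
sched.after(2, () => console.log("end gesture, two beats later at the live tempo"));

// Simulated performance with a gradual ritardando.
let bpm = 120;
for (let t = 0; t < 3; t += 0.1) {
  bpm -= 1;
  sched.tick(0.1, { bpm });
}
```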

The second part of this presentation, devoted to the AntesCollider library and the notion of centralized score, is described in José Miguel Fernandez's entry below.

INVITED COMPOSERS

Sasha Blondeau 

Sasha J. Blondeau (1986) is a French composer of contemporary mixed, instrumental, and electroacoustic music. He holds a PhD in musical composition from the Ircam-Sorbonne Université-CNRS program. He also composes pieces for theatre and is interested in the interaction between instrumental and “electroacoustic” writing within the same space of expressivity. Sasha J. Blondeau first studied piano and saxophone (Gap) and then analysis, writing, and composition at the Lyon CRR. In 2007, he joined the CNSMD de Lyon in the composition classes of Denis Lorrain and François Roux. He obtained his PhD in composition at Ircam in 2017, working from 2013 to 2017 in the Musical Representations team (research directors: Jean-Louis Giavitto (CNRS) and Dominique Pradelle (Paris Sorbonne, Philosophy)) on the Antescofo language and the new possibilities it opens for writing electronics. He was a resident of the Cité Internationale des Arts de Paris from July 2013 to June 2015. In 2014, he attended the Ircam ManiFeste Academy and the Summer Composition Institute at Harvard in the United States. He is a laureate of the Francis and Mica Salabert Foundation Prize 2012 and the Sacem “Claude Arrieu” Prize 2018.

 

Carmine Emanuele Cella

Carmine Emanuele Cella is an Italian composer and researcher working on the relationship between mathematics and music. In 2007-2008, he worked as a researcher in IRCAM's Analysis/Synthesis team in Paris on audio indexing, and since January 2019 he has been Assistant Professor of Music and Technology at the University of California, Berkeley.

Zosha Di Castri 

Zosha Di Castri is a Canadian composer/pianist living in New York. Her work (which has been performed in Canada, the US, South America, Asia, and Europe) extends beyond purely concert music, including projects with electronics, sound arts, and collaborations with video and dance. She has worked with such ensembles as the BBC Symphony, San Francisco Symphony, Montreal Symphony Orchestra, the National Arts Centre Orchestra, the L.A. Philharmonic, the Chicago Symphony Orchestra, the New York Philharmonic, ICE, Wet Ink, Ekmeles, JACK Quartet, Yarn/Wire, the NEM, and Talea Ensemble, among others. Zosha is currently the Francis Goelet Assistant Professor of Music at Columbia University.

José Miguel Fernandez
José Miguel Fernández studied music and composition at the University of Chile and at the Laboratory for Musical Research and Production (LIPM) in Buenos Aires, Argentina. He then studied composition at the Conservatoire National Supérieur de Musique et de Danse de Lyon and followed the Cursus program in composition at IRCAM. He composes instrumental, electroacoustic, and mixed music works. His works have been performed in the Americas, Europe, Asia, and Oceania, and he has produced mixed music and electroacoustic concerts at several festivals. José Miguel Fernández won the international electroacoustic music competition in Bourges (2000), the Grame-EOC international composition competition in Lyon (2008), and the Giga-Hertz Award from the ZKM/EXPERIMENTALSTUDIO in Germany (2010). In 2014, he was chosen by IRCAM for the artistic research residency on interaction in mixed music works, and in 2018 for a residency with the Société des Arts Technologiques in Montreal on writing electronics for an audiovisual project. He is currently in the music doctorate program (research in composition) at IRCAM, organized in collaboration with Sorbonne Université and the UPMC. His research focuses primarily on writing for electronics and on new tools for the creation of mixed and electroacoustic music. In parallel to his activity as a composer, he works on a range of educational and creative projects connected with computer music.
Centralized score: AntesCollider and other artistic applications in Antescofo

The Antescofo system couples machine listening with a dedicated real-time programming language, allowing the synchronization of human musicians and computers during live performance; a detailed description is given in Jean-Louis Giavitto's presentation above. This talk focuses on AntesCollider and the notion of centralized score.

AntesCollider is a library built on top of Antescofo that makes it possible to control a SuperCollider server directly from Antescofo. The Antescofo language is used to implement the control of complex sound-synthesis processes in a dynamic and expressive fashion, while the SuperCollider server implements the audio processing. Although the system relies on two distinct entities in a client-server architecture (the Antescofo interpreter and the SuperCollider server), the augmented score addresses all the controls and sound-processing details within the same unified document (a schematic sketch of such client-server control follows below). This is an example of the notion of centralized score introduced by José Miguel Fernandez: a centralized score gathers in one document all the information needed for the definition of the temporal media (electronics, performer score, interactions, and audio constructions) within the same language. This notion is motivated by the development of more dynamic, precise, and musically expressive electronic scores, subsuming several temporal media, enabling new couplings between computers and musicians, and renewing the problem of interpretation both at the level of the composer and of the instrumentalist.
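As a rough illustration of the client-server exchange described above (not AntesCollider's implementation), the following TypeScript sketch sends a single OSC /s_new message to a local SuperCollider server. scsynth listens on UDP port 57110 by default; the SynthDef name "sine" and its "freq" control are hypothetical and would need to be loaded on the server beforehand.

```typescript
import * as dgram from "dgram";

// Pad a buffer to a multiple of 4 bytes, as OSC requires.
function pad4(buf: Buffer): Buffer {
  const rem = buf.length % 4;
  return rem === 0 ? buf : Buffer.concat([buf, Buffer.alloc(4 - rem)]);
}

// OSC strings are null-terminated and 4-byte aligned.
function oscString(s: string): Buffer {
  return pad4(Buffer.concat([Buffer.from(s, "ascii"), Buffer.alloc(1)]));
}

// Encode a flat OSC message: address, type-tag string, big-endian arguments.
function oscMessage(address: string, args: (number | string)[]): Buffer {
  let tags = ",";
  const chunks: Buffer[] = [];
  for (const a of args) {
    if (typeof a === "string") {
      tags += "s"; chunks.push(oscString(a));
    } else if (Number.isInteger(a)) {
      tags += "i"; const b = Buffer.alloc(4); b.writeInt32BE(a); chunks.push(b);
    } else {
      tags += "f"; const b = Buffer.alloc(4); b.writeFloatBE(a); chunks.push(b);
    }
  }
  return Buffer.concat([oscString(address), oscString(tags), ...chunks]);
}

// /s_new starts a synth: name, node ID, add action, target, then control pairs.
const sock = dgram.createSocket("udp4");
const msg = oscMessage("/s_new", ["sine", 1000, 0, 0, "freq", 440.5]);
sock.send(msg, 57110, "127.0.0.1", () => sock.close());
```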

The presentation will be illustrated by several artistic productions that rely on AntesCollider, including “Las Pintas,” an audiovisual piece developed for the SAT in Montreal.

Jean-Luc Hervé 

Jean-Luc Hervé was born in 1960. He studied composition at the Conservatoire de Paris with Gérard Grisey, where he received a Premier Prix in composition. In 1997 he received the “Goffredo Petrassi” prize for his composition Ciels for orchestra. He was composer-in-research at IRCAM and received a fellowship from the DAAD in Berlin (2003). The profound effect of a residency at the Villa Kujoyama in Kyoto, along with a doctoral thesis in aesthetics and subsequent research at IRCAM, helped to shape Hervé's compositional outlook. He founded the group Biotop(e) with Thierry Blondeau and Oliver Schneller in 2004. His works have been performed by ensembles such as the Orchestre National de France, Orchestre Philharmonique de Radio-France, Orchestra Sinfonica dell'Emilia-Romagna “Arturo Toscanini”, L'Instant Donné, Court-Circuit, Ensemble Intercontemporain, 2E2M, Contrechamps, Berliner Symphonie Orchester, KNM Berlin, Musik Fabrik, and the Orchestra della Toscana. He currently teaches composition at the Conservatoire de Boulogne-Billancourt.

James O'Callaghan

James O’Callaghan is a composer and sound artist based in Montréal. His music has been described as “very personal… with its own colour anchored in the unpredictable” (Goethe-Institut). His work spans chamber, orchestral, live electronic, and acousmatic idioms, audio installations, and site-specific performances. It often employs field recordings, amplified found objects, computer-assisted transcription of environmental sounds, and unique performance conditions. His music has received over thirty prizes and nominations, including the Salvatore Martirano Award (2016), the ISCM Young Composer Award (2017), and the Jan V. Matejcek Award (2018), as well as nominations for a JUNO Award (2014) and the Gaudeamus Award (2016). Active as an arts organiser, he co-founded and co-directed the Montréal Contemporary Music Lab. Originally from Vancouver, he received a Bachelor of Fine Arts degree from Simon Fraser University in 2011 and a Master of Music degree from McGill University in 2014.
Alone and unalone: conceptual concerns in simultaneous headphone and speaker diffusion
Two recent related works of mine, Alone and unalone for sextet and electronics and With and without walls (acousmatic), employ simultaneous loudspeaker and in-ear diffusion, with headphones supplied to the audience. The works examine the relationship between individual and collective experience. When we listen to music together, as in a concert, we share a common reality, but we simultaneously have individual, unsharable experiences in our own heads. Confronting the philosophical problem of other minds, the pieces endeavour to teeter between solipsism and the kind of empathy-building that occurs through art.
This talk is designed to accompany the performance of Alone and unalone by Ensemble Paramirabo on April 3, 2020 as part of the symposium. In it, I will discuss the technical configuration of the headphone diffusion system I have designed, as well as compositional strategies for combining and moving sound between in-ear and loudspeaker spatialization (a schematic sketch of such routing follows below). The system provides unique immersive possibilities in spatial imagery: I will illustrate several of these possibilities with examples from the work and discuss the compositional process, as well as the conceptual and artistic motivations behind these strategies.
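As a hypothetical illustration of such routing (not the composer's actual system), the following Web Audio sketch in TypeScript crossfades a source between a loudspeaker bus and a headphone bus using an equal-power law; in a real deployment the two buses would feed different physical outputs, whereas here ctx.destination stands in for both.

```typescript
// Two buses: in a real setup these would feed the PA and the headphone amps.
const ctx = new AudioContext();
const speakerBus = ctx.createGain();
const headphoneBus = ctx.createGain();
speakerBus.connect(ctx.destination);
headphoneBus.connect(ctx.destination);

// position 0 = fully in the room, 1 = fully in the listener's ears.
// An equal-power law keeps the perceived loudness roughly constant.
function setPosition(position: number): void {
  speakerBus.gain.value = Math.cos(position * Math.PI / 2);
  headphoneBus.gain.value = Math.sin(position * Math.PI / 2);
}

// Route a test source into both buses, then glide it "into the head".
const osc = ctx.createOscillator();
osc.connect(speakerBus);
osc.connect(headphoneBus);
setPosition(0);
osc.start();

let p = 0;
const glide = setInterval(() => {
  p = Math.min(1, p + 0.01);
  setPosition(p);
  if (p === 1) clearInterval(glide);
}, 50);
```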


Philippe Leroux

Philippe Leroux is a French composer born in 1959. In 1978, he entered the Conservatoire National Supérieur de Musique de Paris in the classes of Ivo Malec, Claude Ballif, Pierre Schaeffer, and Guy Reibel, where he obtained three first prizes. During this period, he also studied with Olivier Messiaen, Franco Donatoni, Betsy Jolas, Jean-Claude Eloy, and Iannis Xenakis.
In 1993, he was named resident at the Villa Medici (Prix de Rome), where he stayed until October 1995. Philippe Leroux's music, always very lively and often full of surprises, is marked by an original use of striking sound gestures that organize themselves into a rich network of relationships. He is the author of more than eighty works: symphonic, vocal, with electronics, chamber music, and acousmatic. His works are performed and broadcast at many events around the world, including the Donaueschingen Festival, the Festival Présences de Radio-France, and the Agora Festival. He has received numerous awards: in 2015 he was named a member of the Royal Society of Canada, the Academy of Fine Arts of the Institut de France awarded him the Simone and Cino Del Duca Foundation Music Composition Prize, and his CD Quid sit Musicus received the Grand Prix du Disque 2015 awarded by the Académie Charles Cros. He has published several articles on contemporary music and has given lectures and composition courses at institutions such as the University of California at Berkeley and Harvard. From 2001 to 2006 he taught composition at IRCAM as part of the computer music curriculum, and in 2005/2006 at McGill University in Montreal (Canada) as part of the Langlois Foundation. From 2007 to 2009, he was in residence at the Arsenal de Metz and the Orchestre National de Lorraine, and from 2009 to 2011, he was a guest professor at the Université de Montréal (UdeM). Since September 2011 he has been Associate Professor of Composition at the Schulich School of Music at McGill University, where he also directs the Digital Composition Studio. He is currently in residence with the MEITAR ensemble in Tel Aviv. His discography includes about thirty CDs, including five solo recordings.

Georgia Spiropoulos 

Trained in piano and all the other disciplines surrounding composition in Athens, Georgia Spiropoulos also practices jazz and is passionate about traditional Greek music. She studied with P. Leroux and took classes with M. Levinas. During the Cursus at IRCAM, she worked with J. Harvey, T. Murail, B. Ferneyhough, P. Hurel, and M. Stroppa. Spiropoulos completed her Master's degree at the EHESS, in collaboration with anthropologists and Hellenists. This focus on the oral origins of music is fueled by other fields of exploration: improvisation, performance and multidisciplinary art, voice, and language. During the festival ManiFeste-2015, IRCAM dedicated a concert-portrait to her. She was a Distinguished Visiting Chair in Music and Acting Director of the McGill Digital Composition Studios at McGill University, Schulich School of Music.