Discover the speakers and their presentations

IRCAM Forum Workshops Montreal, April 2-5, 2020

A - Aurélien Antoine / Azevedo Anesio

B - Baril Félix Frédéric / Beuret Denis / Bolles Monica / Bouchard Linda / Boyd Jeffrey, Friedemann Sallis, Martin Ritter / Brandon Amy / Brook Taylor

C - Cadars Sylvain / Cahen Roland / Cella Carmine / Carpentier Thibaut / Century Michael / Chandler Christopher / Conforti Simone

D - Delgado Carlos / Delisle Julie

E - Esling Philippe

F - Fernandez José Miguel / Féron François-Xavier, Camier Cédric, Guastavino Catherine / Fortin Emilie and Dupuis Sophie / Foulon Raphaël

G - Gantt Matthew / Giannini Nicola / Goldford Louis / Gozzi Andrea / Grond Florian and Woszczyk Wieslaw

H - Hamilton Rob / Heng Lena / Hoff Jullian and Layec Charlotte / Huynh Erica

J - Juras Jordan and Luciani Davide

K - Kafejian Sergio and Esther Lamneck / Kim Hanna / Krukauskas Mantautas

L - Landolt Ludovic / Lee Dongryul / Lemouton Serge / Lengelé Christophe

M - Madlener Frank / Maestre Esteban / Majeau-Battez Emmanuelle / McAdams Stephen, Russel Alistair, Goodchild Meghan, Lopez Beatrice and Kit Soden / Morciano Lara / Morrison Landon

N - Nagy Zvonimir / Neill Ben / Noble Jason

O - O'Callaghan James

P - Pelz Ofer and Matan Gover

R - Radford Laurie / Raynaud Eric / Rémus Jacques / Reymore Lindsey

S - Savoie Monique / Schumacher Marlon and Núria Gimenez Comas / Schütz Nadine / Spiropoulos Georgia

T - Trapani Christopher / Turcotte Roxanne / Tutschku Hans

Z - Zaes Marcel / Zhou Tiange

------------------------------- 

Aurélien Antoine / McGill University

Aurélien Antoine is a post-doctoral fellow at McGill University working with Stephen McAdams and Philippe Depalle. His current research focuses on modeling orchestration effects and techniques from machine-readable symbolic score information and audio signals using data mining and machine learning techniques. This work benefits from the resources available in the Orchard database and aims to expand it. Its outcomes will also contribute to the understanding and use of different orchestration effects and techniques.
Harnessing the Computational Modelling of the Perception of Orchestral Effects for Computer-Aided Orchestration Tools
Recent developments in the field of computer-aided orchestration have provided interesting approaches for addressing some of the many orchestration challenges, supported by advances in computational capacities and artificial intelligence methods. Nevertheless, harnessing the many sides of this musical art, which involves combining the acoustic properties of a large ensemble of varied instruments, has not yet been achieved. One interesting aspect to investigate is the perceptual effects shaped by instrument combinations, such as blend, segregation, and orchestral contrasts, to name but three. These effects result from three auditory grouping processes, namely concurrent, sequential, and segmental grouping. Therefore, research in Auditory Scene Analysis (ASA) could be utilised to establish computational models that process symbolic musical score and audio signal information to identify specific orchestral effects. Our work in this area could help to understand and identify the different musical properties and techniques involved in achieving these effects, which composers value. These developments could benefit systems designed to perform orchestration analysis from machine-readable musical scores. Moreover, models of the parameters responsible for the perception of orchestral effects could be incorporated into computer-aided orchestration tools designed to search for optimal instrument combinations, adding perceptual characteristics to their search methods.
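To make one of these grouping cues concrete: in ASA, synchronous onsets across parts promote concurrent grouping, which listeners tend to hear as blend. The toy detector below, over symbolic onset times, is a minimal sketch of such a cue; the onset lists are hypothetical, and the models described above combine many more cues than this.

```python
# Toy illustration of one concurrent-grouping cue from ASA: onset synchrony
# between two instrumental parts promotes perceptual blend.
def onset_synchrony(onsets_a, onsets_b, tol=0.03):
    """Fraction of part A's onsets matched within `tol` seconds in part B."""
    matched = sum(any(abs(a - b) <= tol for b in onsets_b) for a in onsets_a)
    return matched / len(onsets_a)

flute = [0.0, 0.5, 1.0, 1.5, 2.0]   # hypothetical onset times (seconds)
oboe = [0.0, 0.51, 1.0, 1.52, 2.2]
print(onset_synchrony(flute, oboe))  # 0.8 -> high synchrony, a cue for blend
```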

Anésio Azevedo Costa Neto / Instituto Federal São Paulo - Universidade Brasília - IDMIL - McGill University

Anésio Azevedo is a philosophy teacher at the Instituto Federal de São Paulo (IFSP) and a Ph.D. candidate at the Universidade de Brasília (UnB), Brazil. Under the name "stellatum_," Anésio explores sounds either recorded in the Cerrado (a Brazilian biome) or produced by himself. Combining different sonic characteristics into ambiances, Anésio aims to expand one's perception of Nature by offering cues for perceiving its own length of time.
 
Cerrado—Applying spatialization techniques to expanded perceptive fields
As an art researcher, I start from the assumption that each environment has its own kind of listening, one that is fundamental to what we perceive in that environment. The aim of this research presentation is to show how I enhance the sensation of immersiveness by modifying the distribution, trajectories, and intensity of sound sources along three spatial axes, and how this has helped me develop an expanded ambience in which people can experience some of Nature's complexity that underlies what we ordinarily perceive. The artistic idea drives not only the technical research but also the continuous process of gathering audiovisual data as building blocks for my performances.

Félix Frédéric Baril / McGill University

Félix Frédéric Baril was born in 1979 in Montreal. He began a baccalaureate in composition in 1999 at the Université de Montréal, where he studied with Michel Longtin, and then studied with Denys Bouliane at McGill University, finishing his Master in Composition in 2006. His thesis, on the possibilities of organic development of musical material, was chosen for the Dean's Honour List. Baril began a doctorate in composition in 2007 at McGill University. Between 2012 and 2015 he designed, with Denys Bouliane, an audio system for the reproduction of orchestral scores, a set of computer tools dedicated to composition and music research. Baril's musical works have won a number of prizes and bursaries at home and abroad, including the prestigious William Schuman Prize of the BMI Student Composer Awards in New York (2001). Between 2003 and 2007 he was a three-time laureate of the SOCAN Foundation Awards for Young Composers. He has received scholarships from the FRQSC, CIRMMT, and McGill University. Baril has participated in numerous internships: "Les classiques de demain" at the National Arts Centre, the "Rencontres de musique nouvelle" at Domaine Forget, and "Voix Nouvelles" at the Royaumont Foundation. Baril works as a postdoctoral researcher under the supervision of Stephen McAdams at the McGill University Music Perception and Cognition Lab, where he is responsible for the development of OrchView, an expanding software platform focused on orchestration research.
OrchView - Tools for the Analysis of Orchestration
Designed for Mac, Windows and iPad, OrchView is a stand-alone application for music analysis. OrchView provides music researchers with a powerful set of annotation tools built for the multiple research axes of the ACTOR project (Analysis, Creation + Teaching of Orchestration).
Annotation data is automatically gathered during the analysis process of the researcher (orchestration techniques, effects, selected instruments, measure range, etc.). It is then uploaded online to be integrated in ORCH.A.R.D. (Orchestration, Analysis & Research Database). A MusicXML score format provides flexibility in the visual representation of the analyses. It can then be exported to a PDF file along with the annotations.
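To make the shape of the annotation data described above concrete, here is a hypothetical record rendered as JSON from Python. The actual ORCH.A.R.D. schema is not documented here, so every field name is illustrative.

```python
# A hypothetical shape for one OrchView annotation record; the real
# ORCH.A.R.D. schema may differ in every detail.
import json
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    work: str
    analyst: str
    effect: str            # e.g. an orchestral grouping effect
    technique: str
    instruments: list
    measure_start: int
    measure_end: int

record = Annotation("Debussy: La Mer, I", "F. F. Baril", "blend",
                    "doubling", ["flute", "oboe"], 12, 16)
print(json.dumps(asdict(record), indent=2))  # ready for upload to a database
```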
OrchView already includes a set of Orchestral Grouping Effects tools. These tools correspond to the ongoing research being done by Stephen McAdams and his team. Orchestration Techniques tools are currently being implemented.
OrchView is currently being designed by Félix Frédéric Baril and programmed by Baptiste Bohelay. It is an original concept by Kit Soden.

Denis Beuret

Denis Beuret is a Swiss composer, trombonist, video artist, computer developer, improviser, and cultural mediator. He studied percussion, trombone, computer music, conducting, and orchestration, as well as cultural mediation. He specializes in sound research: extended playing techniques of the bass trombone and the integration of electronics in concerts. He has developed an augmented bass trombone, equipped with various sensors that allow him to control musical programs according to his movements and playing. He has presented his work on several occasions at IRCAM, as well as at ImproTech Paris - New York 2012.
Virtual Ensemble, A Program that Grooves
This program generates melodies, bass lines, drums, and appropriate chords in real time and plays them all in rhythm. It analyzes the pitches and dynamics of four audio sources (microphones or files), adjusts to the speed of a sound source, and times its musical output in real time according to the selected rhythms or note values. Concerning orchestration, it is possible to choose the sounds you want, since the program analyzes audio or MIDI and generates MIDI. For this presentation and demo, Denis Beuret will play the trombone and use a pedalboard that lets him control audio effects in real time.
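As a rough illustration of the kind of analysis-to-generation step such a program performs, the sketch below estimates the pitch of an incoming audio buffer by autocorrelation and proposes chord tones as MIDI notes. It is a minimal stand-in assuming mono float buffers; the search range and triad voicing are invented, not taken from Beuret's program.

```python
# Sketch: pitch estimation by autocorrelation, then a simple chord proposal.
import numpy as np

def estimate_f0(buf, sr, fmin=60.0, fmax=1000.0):
    corr = np.correlate(buf, buf, mode="full")[len(buf) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)      # plausible period range
    lag = lo + np.argmax(corr[lo:hi])
    return sr / lag

def chord_for(f0):
    root = int(round(69 + 12 * np.log2(f0 / 440.0)))   # nearest MIDI note
    return [root, root + 4, root + 7]                  # illustrative major triad

sr = 44100
buf = np.sin(2 * np.pi * 220 * np.arange(2048) / sr)   # stand-in for mic input
print(chord_for(estimate_f0(buf, sr)))                 # -> [57, 61, 64]
```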

Monica Bolles

Monica Bolles has been working with spatial audio since 2011, when she first gained access to her local planetarium's 15.1-channel surround system. Since then she has been continuously building toolsets in Max/MSP to create large textured soundscapes that explore space, movement, and interaction. Tapping into her roots in traditional audio engineering, she works with composers and live performers to explore methods of translating their work to spatial environments while exploring the role the audio engineer plays as a performer and musician. As an artist, she has been focusing on building custom instruments that explore data sonification and use gestural control to create improvised spatial audio experiences. As a producer, she puts together teams to build large immersive works that bring together live performance, dance, 360° projections, spatial audio, and other new technologies.
Orbits: An exploration in spatial audio and sonification
Monica Bolles has a B.S. in Music Production from CU Denver and an M.S. in Creative Technology and Design from CU Boulder. She has worked as a professional audio engineer in theater, live, and studio settings since 2011 and has most recently been based at Tippet Rise, experimenting with and recording classical music artists for 9.1 Auro-3D playback.
She currently focuses on her artistic practice and has most recently been designing and building large-scale immersive experiences. In early 2019 she produced and designed a custom spatial audio system for the live immersive performance N/TOPIA featuring guitarist Janet Feder. The piece was premiered at the Conference on World Affairs at Fiske Planetarium in Boulder, CO, and received the Immersive Multisensory Award to be a featured performance at the 2019 Cube Fest at Virginia Tech's Cube (a black box theater that houses a 140-channel loudspeaker array and an immersive Cyclorama).
She has also spent the last year working alongside Kelly Snook to help build an instrument for sonifying the universe. She created the piece Orbits, which was premiered at DU's Making Media Matter and performed at Cube Fest as part of its data-driven evening. The piece is a live exploration of sounds generated by the revolutions of Venus and Earth around the Sun.
She has presented and hosted workshops as part of Ableton Loop (2018), SXSW (2019), IMERSA Summit (2013-2019), NIME (2017-2018), and more.

Linda Bouchard / Residency at matralab - Concordia University

Born in Val d’Or, Québec, Linda lived in New York City from 1979 to 1991, where she was active as a composer, orchestrator, conductor, teacher, and producer. She was composer-in-residence with the National Arts Centre Orchestra (1992-1995) and has been living in San Francisco since 1997. Her works have received awards in the US and Canada, including a Prix Opus Composer of the Year in Quebec, the Fromm Music Foundation Award, the Princeton Composition Contest, SOCAN Composition awards, and residencies from the Rockefeller Foundation, Civitella Ranieri, the Camargo Foundation, and others. Bouchard’s music is recorded in Germany on ECM, in the USA on CRI, and in Canada on Marquis Classics. Since 2010, Linda has been creating multimedia works that have been performed to critical acclaim in North America. In 2017, she received a multiyear grant from the Canada Council for the Arts to develop tools to interpret data into musical parameters. More info: www.livestructures.com.
Live Structures
Live Structures is a research and composition project that explores different ways to interpret data into graphic notation and compositions. The Live Structures project started in October 2017, supported by an Explore and Create Grant from the Canada Council for the Arts received by Bouchard. One of the goals of the Live Structures project is to interpret data from the analysis of complex sounds into a visual musical notation. The tool, developed in collaboration with Joseph Browne of matralab at Concordia University, is called Ocular Scores™. So far, three iterations of the Ocular Scores Tool have been created, each performing multiple functions: a) the ability to draw an image from the analysis of complex sounds that can be used as gestural elements to compose new works or to compare a complex sound against another complex sound, b) the ability to draw full transcriptions of a performance for future interpretation, and c) the ability to draw images in real time and to manipulate those images to create interactive projected scores to be performed live by multiple performers. These various applications and how they can inspire composers and performers will be demonstrated with a live performer (tbd).

Jeffrey Boyd / University of Calgary

Jeffrey Boyd is a professor of computer science at the University of Calgary.  His interests include computational musicology, sonification, interactive art, and video and sensing applied to human movement.  Friedemann Sallis is a professor emeritus in the Division of Music at the University of Calgary. Martin Ritter holds a DMA from the University of British Columbia and is a PhD candidate in Computational Media Design at the University of Calgary.
The hallucinogenic belfry: analyzing the first forty measures of Keith Hamel's Touch for piano and interactive electronics (2012)
Computational musicology is emerging out of the necessity of finding methods to deal with music (both art and popular) that escapes conventional Western notation. To better understand this music, we use computational methods to decompose recordings of performances of contemporary music into digital musical objects. In this paper, we examine the first 40 measures of Touch for Piano and Interactive Electronics (Hamel 2012). Hamel uses the spectra of bells as a basis for a score that mimics the timbre of bells, and combines it with the piano's timbre, electronic processing, and spatial rendering with eight speakers surrounding the audience. For musical objects, we elect to use the over 200 bell samples used in the electronic portion of the piece. An exhaustive computer search for occurrences of each bell sample over four recordings of performances by the same pianist (Megumi Masaki) in two venues over 100 directions (sampled with an ambisonic microphone) yields a database of thousands of bell event detections. Our search produced 1) numerous objects (samples) explicitly coded into the electronic 'score' (not a surprise); 2) a surprisingly large number of objects not explicitly coded. The latter group corresponds to pitches in higher registers, labelled "brass bell dry" or "glock" in the electronic score. Inspection of our code, and verification by listening, confirm that these objects are not produced from the bell samples in the electronic source. On the contrary, they are the product of piano pitches carefully harmonized in real time to gradually bring them closer to the bell samples in the course of the forty-measure segment. By disseminating these sounds in the concert space, the composer invites the audience to gradually enter his hallucinogenic belfry, where the musical work takes place.
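Although the authors' search pipeline is not reproduced here, its core operation, locating every occurrence of a known sample in a recording, is classic template matching. The sketch below shows the idea with normalized cross-correlation over mono float arrays at a shared sample rate; the threshold and interface are assumptions, not the authors' code.

```python
# Sketch: find where a bell sample occurs in a recording by normalized
# cross-correlation (in practice nearby hits would be clustered into events).
import numpy as np
from scipy.signal import fftconvolve

def detect_sample(recording, sample, sr, threshold=0.6):
    # cross-correlation via FFT convolution with the time-reversed template
    corr = fftconvolve(recording, sample[::-1], mode="valid")
    # normalize by local energy so loud passages do not dominate
    window = np.ones(len(sample))
    local = np.maximum(fftconvolve(recording**2, window, mode="valid"), 0.0)
    energy = np.sqrt(local * np.sum(sample**2))
    score = corr / np.maximum(energy, 1e-12)
    hits = np.flatnonzero(score > threshold)
    return hits / sr   # candidate detection times, in seconds
```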

Amy Brandon / Dalhousie University

Canadian composer and guitarist Amy Brandon's pieces have been described as '... mesmerizing' (Musicworks Magazine) and ‘Otherworldly and meditative ... [a] clashing of bleakness with beauty …’ (Minor Seventh). Upcoming 2019-20 events include premieres by KIRKOS Ensemble (Ireland), Exponential Ensemble (NYC) as well as performances and installations at the Winnipeg New Music Festival, the Canadian Music Centre and the centre d’experimentation musicale in Quebec. She has received Canadian and international composition awards including the Leo Brouwer Guitar Composition Competition (Grand Prize) and is currently completing an interdisciplinary PhD at Dalhousie University in Halifax, Nova Scotia.
Composing for AR space: creating interactive spatial scores for the METAVision headset
In the last several years, the compositional and performance possibilities within VR and AR environments have grown, with composers such as Paola Prestini and Giovanni Santini, among others, writing works for various VR, AR, and 360° technologies. My own compositions from 2017-19 have focused on the particular affordances of the METAVision AR headset, which sits at the intersection of graphic score, controller, and improvisational movement. The primary goal of these works (Hidden Motive, 7 Malaguena Fragments for Augmented Guitar, flesh projektor) was the discovery and manipulation by the improviser (or musician) of the affordances of the AR space, especially its reactivity to hand gestures. The works explore how individual bodies interact within a performative augmented-reality environment, in particular how those interactive spaces can meld with 'real-world' objects such as instruments. In this demonstration I will show previous works for the METAVision AR headset as well as a multi-channel work currently in development.

Taylor Brook / Columbia University

Taylor Brook is a Canadian composer who has been based in New York since 2011. Brook writes music for the concert stage and for electronics, as well as music for video, theatre, and dance. Described as “gripping” and “engrossing” by the New York Times, Brook’s compositions have been performed around the world by ensembles and soloists such as Anssi Karttunen, Mira Benjamin, Ensemble Ascolta, JACK Quartet, Mivos Quartet, Nouvel Ensemble Moderne, Quatuor Bozzini, Talea Ensemble, and others. His music is often concerned with finely tuned microtonal sonorities and with exploring the perceptual qualities of sound. In 2018 Brook completed a Doctor of Musical Arts (DMA) in music composition at Columbia University. He holds a master’s degree in music composition from McGill University. Currently, Brook is a Core Lecturer at Columbia University and the technical director of TAK Ensemble.
Human Agency and Meaning of Computer-Generated Music in Virtutes Occultae
This paper will explore concepts around compositional control that arise from computer-generated music and computer improvisation. Drawing from an analysis of my electroacoustic composition, Virtutes Occultae, I will explore the implications of computer improvisation on the role of the composer, how value is attributed to experimental art, and the broader relationship to data and automation in society at large.
In creating the software to generate music for Virtutes Occultae, I was confronted with decisions regarding the degree of control or chaos I would infuse into the improvising algorithm. The amount of randomization and the weighted probabilities integrated into the software set the level of unpredictability; the unpredictability of the computer improvisation became artistically stimulating, even leading me to imitate the computer improviser in more traditionally through-composed sections.
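The sketch below illustrates this control-versus-chaos dial with weighted random choice: a single parameter shifts the generator between mostly stepwise, predictable motion and free leaps. The pitch material and mapping are hypothetical, not drawn from Virtutes Occultae.

```python
# A toy improviser: weighted probabilities set the balance between
# predictable and surprising continuations.
import random

def improvise(length=16, chaos=0.3):
    scale = [60, 62, 63, 65, 67, 68, 70, 72]   # C-minor-ish MIDI pitches
    notes, current = [], random.choice(scale)
    for _ in range(length):
        steps = [p for p in scale if abs(p - current) <= 2]  # stepwise options
        # with probability `chaos`, leap anywhere in the scale instead
        pool = scale if random.random() < chaos else steps
        current = random.choice(pool)
        notes.append(current)
    return notes

print(improvise(chaos=0.1))   # tame, mostly stepwise
print(improvise(chaos=0.9))   # unpredictable
```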
Recent commercial ventures (AIVA, Jukedeck, Melodrive, etc.) boast algorithms that generate commercial jingles and soundtrack music automatically: choose a mood and a style to create an original piece of music with the click of a button. The AIVA engine promotes an uncanny function where one may select an existing piece of music, say a Chopin Nocturne, and move a slider between “similar” and “vaguely similar” to create a derivative work. What does this method of creating music mean for how we value music? While this software creates music for commercial purposes, I have employed similar techniques in non-commercial art in Virtutes Occultae and other works. Unpacking the ramifications of what computer-generated music means for the role of an artist and their relation to their art is a complex and multifarious subject that must be considered.

Sylvain Cadars / IRCAM (Sound Engineer)

Sylvain Cadars has been a sound engineer and acoustician at IRCAM in Paris for ten years. He holds a master's degree in acoustics and computer science for music from the Université Paris Jussieu and also studied electronics in Paris. For the past ten years he has worked on live concerts, recordings, and operas of contemporary music with electronics, collaborating with composers such as Pierre Boulez, Hector Parra, Philippe Manoury, Alberto Posadas, and Franck Bedrossian, and with various ensembles in Europe (Klangforum Wien, ICE New York, Ensemble Musikfabrik, Ensemble intercontemporain, Ars Nova, Court-circuit, etc.).

Roland Cahen / ENSCi Les Ateliers

Roland Cahen is an electroacoustic music composer, sound designer, teacher, and researcher in electroacoustic music and sound design. His research topics include sound design, spatial sound, kinetic music, sound navigation, multimodal interfaces, and the design of electroacoustic sound devices. He is a member of the Centre de Recherche en Design (CRD, ENSCi les Ateliers / ENS Paris-Saclay) and the professor in charge of the sound design studio of ENSCi – Les Ateliers (National Superior School of Industrial Creation). He has been in charge of experimental studios such as RAAGTime (multimodal interface for secured universal access to automobile driving, MPSA), Upmix Café (hearing one another speak while listening to spatialized music in music cafés), and Entendre l'Invisible (using spatialization in auditory scenography for the popularization of modern physics), and has taken part in IRCAM projects and developments.
Kinetic Design 
Kinetic music aims to produce sound choreography in the space where the sound is diffused. It uses sound spatialisation in such a way that both the composer and the listener focus on the kinetic aspects of sound, as opposed to using spatialisation only illustratively or for rendering effects. In kinetic music, as in theatre, dance, or the visual arts, each zone, position, or direction can take on a musical value, a form of density that the sound space itself embodies. Kinetic music aims to add a new form of expression and new compositional methods to existing spatial music concepts and techniques. Sound spatialisation has already been the subject of abundant literature; the focus of this paper is to demonstrate the specificity of kinetic music. Much of that literature covers generalities (principles, philosophy of space and music, history), numerous tools and techniques, analyses of musical intentions, and abstractions about spatial figures, but very little of it concentrates on the auditory experience or spatial sound aesthetics, and none on the design process.

Thibaut Carpentier / IRCAM

Thibaut Carpentier is a research engineer in the STMS Lab (Sciences and Technologies of Music and Sound) at Ircam, Paris. He studied acoustics at the Ecole Centrale and signal processing at Télécom ParisTech, before joining CNRS (French National Center for Scientific Research) in 2009. As a member of the Acoustics & Cognition Team, his work focuses on sound spatialization, artificial reverberation, room acoustics, and computer tools for 3D composition and mixing. He is the lead developer and head of the Spat project as well as the 3D mixing and post-production workstation Panoramix. In 2018, he was awarded the CNRS Cristal medal.

Carmine Cella

Carmine Emanuele Cella is an internationally renowned composer with advanced studies in applied mathematics. He studied at the Conservatory of Music G. Rossini in Italy, where he received master's degrees in piano, computer music, and composition, and at the Accademia di S. Cecilia in Rome, where he earned a PhD in musical composition. He also studied philosophy and mathematics and obtained a PhD in mathematical logic at the University of Bologna, with a thesis entitled "On Symbolic Representations of Music" (2011). After working at IRCAM in 2007-2008 as a researcher and again in 2011-2012 as a composer in residence, Carmine Emanuele Cella conducted research in applied mathematics at the École Normale Supérieure de Paris from 2015 to 2016 with Stéphane Mallat. Also in 2016, he was in residence at the American Academy in Rome, where he worked on his opera Pane, sale sabbia, premiered in June 2017 at the National Opera of Kiev. From 2017 to 2018, he worked on computer-assisted orchestration at IRCAM (a long-standing topic proposed by Boulez) and proposed innovative solutions that are gathering consensus in the community. Since January 2019, Carmine has been assistant professor in music and technology at CNMAT, University of California, Berkeley.
"Can Picasso think in shapes?" 
This talk will present my recent work in searching for good signal representations that permit high-level manipulation of musical concepts. After the definition of a geometric approach to signal representation, I will present my theory of sound-types and its application to music. Finally, I will propose musical applications including assisted orchestration and augmented instruments.

Michael L. Century / Rensselaer Polytechnic Institute

Michael Century is Professor of New Media and Music in the Arts Department at Rensselaer Polytechnic Institute in Troy, NY, which he joined in 2002. Musically at home in classical, contemporary, and improvisational settings, Century holds degrees in musicology from the Universities of Toronto and California at Berkeley. Long associated with The Banff Centre for the Arts, he directed the Centre's inter-arts program from 1979 to 1988 and founded its Media Arts program in 1988. Before RPI, he was a new media researcher, inter-arts producer, and arts policy maker (Government of Canada: Canadian Heritage and Department of Industry, 1993-98). His works for live and electronically processed instruments have been performed and broadcast at festivals internationally.
Performance-demonstration of Pauline Oliveros’s Expanded Instrument System for HoA using Spat
Over the arc of her career as a composer and performer, Pauline Oliveros (1932-2016) maintained an abiding interest in expanding the aperture of temporal experience, and often referred to her own Expanded Instrument System (EIS) as a "time machine": a device to permit present, past, and future to occur, in her own words, "simultaneously with transformations". With permission from the Pauline Oliveros Trust, I am continuing to develop, and here propose a demonstration for the Forum of, my own live electronic music using accordion within a higher-order Ambisonics (HoA) system using Spat. The demonstration will involve my own musical performance and the programming collaboration of Matthew D. Gantt. The EIS played a significant role not only in Oliveros's recorded oeuvre but also in the broader history of live electronic music. My research into the history of the system will also allow me to give a short account of Oliveros as an artist and of how her system for manipulating and modulating improvisatory music with time delays developed over half a century. This development began with classic works for tape, followed successively by manual and foot-controlled outboard delay machines in the 1980s, MIDI-controlled Max patches (early 1990s), a full digital transcription in Max/MSP (2002), and finally various modules for basic spatialization. The demonstration proposed here uses EIS in conjunction with Spat and provides a significant development in the power of the system, in both aesthetic and technical respects.

Christopher Chandler / Union College

Christopher Chandler is a composer, sound artist, and the co-founder and executive director of the [Switch~ Ensemble]. He serves as Assistant Professor of Music at Union College in Schenectady, NY, where he teaches courses in music theory, composition, and technology. His acoustic and electroacoustic work draws on field recordings, found sound objects, and custom generative software. His music has been performed across the United States, Canada, and France by leading ensembles including Eighth Blackbird, the American Wild Ensemble, the Oberlin Contemporary Music Ensemble, the Cleveland Chamber Symphony, and Le Nouvel Ensemble Moderne. He has received recognition and awards for his music, including a BMI Student Composer Award, an ASCAP/SEAMUS Commission, two first prizes from the Austin Peay State University Young Composer's Award, first prize in the American Modern Ensemble's Annual Composition Competition, and the Nadia Boulanger Composition Prize from the American Conservatory in Fontainebleau, France. Christopher received a Ph.D. in composition from the Eastman School of Music, an M.M. in composition from Bowling Green State University, and a B.A. in composition and theory from the University of Richmond.
The Generative Sound File Player: A Corpus-Based Approach to Algorithmic Music
The Generative Sound File Player is a composition and performance tool for algorithmically organizing sound. Built in Max and incorporating the bach library and MuBu, the software allows the user to load, analyze, and parametrically control the presentation of any number of sound files. At its core is bach's powerful new bell (bach evaluation language on lllls) scripting language, which allows for rich and detailed control of sound through text-based input or a graphical interface. The software sits at the intersection of generative music, concatenative synthesis, and interactive electronics. For the IRCAM Forum Workshop 2020, I propose giving a demonstration of its core functionality, creative applications, and recently developed spatialization capabilities.
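For readers unfamiliar with corpus-based selection, the sketch below shows the underlying idea in Python: describe each file in a corpus by a few audio features, then retrieve the file nearest a requested target. It is a conceptual stand-in assuming a hypothetical corpus/ folder of WAV files, not the tool's actual Max/bach/MuBu implementation.

```python
# Sketch of corpus-based retrieval: analyze files, pick the nearest match.
import glob
import numpy as np
import librosa

def analyze(path):
    y, sr = librosa.load(path, sr=None, mono=True)
    centroid = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))
    rms = float(np.mean(librosa.feature.rms(y=y)))
    return np.array([centroid, rms])

corpus = {p: analyze(p) for p in glob.glob("corpus/*.wav")}  # hypothetical folder

def pick(target_centroid_hz, target_rms):
    target = np.array([target_centroid_hz, target_rms])
    return min(corpus, key=lambda p: np.linalg.norm(corpus[p] - target))

print(pick(2000.0, 0.1))  # the file nearest a bright, quiet target
```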

Simone Conforti / IRCAM

Composer, computer music designer, sound designer, and software developer. Born in Winterthur, he holds degrees in flute and electronic music, teaches in the pedagogy department at IRCAM in Paris, and works as a computer music designer at CIMM Venice. Specialised in interactive and multimedia arts, his work also includes an intense activity of music-oriented technology design; in this field he has developed many algorithms, ranging from sound spatialisation and space virtualisation to sound masking and generative music. Co-founder and CTO of MUSICO, he previously co-founded MusicFit and MUSST, has worked for Architettura Sonora, and has been a researcher for the University of Basel, the MARTLab research centre in Florence, the HEM Geneva, and the HEMU in Lausanne. He has been professor of electroacoustics at the conservatories of Florence and Cuneo.

Carlos Delgado

Carlos Delgado's music has been heard in concerts, festivals, and radio broadcasts in England, Finland, France, Germany, Hungary, Italy, Japan, Romania, Spain, and the United States. As a composer specialized in electroacoustic chamber music and multimedia, his works have been presented at venues such as Merkin Recital Hall in New York; the Rencontre Internationale de Science & Cinema (RISC) in Marseille, France; and St. Giles Cripplegate/Barbican in London. He has participated in festivals including EMUFest (Rome), the ManiFeste 2015 Académie (IRCAM, Paris), and the 2018 New York City Electroacoustic Music Festival, and has appeared as a laptop performer at Symphony Space and the Abrons Art Center (New York), at the Musica Senza Frontiere Festival in Perugia, and at many other venues. His works are available on the New World Records, Living Artist, Capstone Records, and Sonoton ProViva labels. He holds a Ph.D. in music composition from New York University.
Multidimensional Movement: Gestural Control of Spatialization in Live Performance
Lev is a gestural control software instrument I have developed that allows for the spontaneous control of sound, video, and spatialization in live performance. Lev takes video data from a laptop computer’s built-in video camera and splits them into three matrices, arranged in the form of a doorway or gate along the outer edges of its field of view. Each of these matrices defines three separate play zones which a live performer uses to generate data that control the instrument’s audio and video output. The play zone for the performer’s right hand provides for control of pitch and duration, while the left hand’s zone can be mapped to control various parameters such as amplitude, pitch-bend, filtering, modulation, etc. Bridging the two is the third matrix, located along the upper edge of the camera’s field of view, which defines a control zone for spatialization (panning and reverberation). The gestural data generated by all three play zones can simultaneously be mapped to control video processing parameters such as chromakeying, brightness, contrast, saturation, etc. The result is a multidimensional performance that amplifies a live performer’s spontaneously produced gestures by expanding their reach into the domains of sound, spatial location, movement, and video. The program was written in Max/MSP, and it is named after Lev Termen, inventor of the Theremin.
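The sketch below gives a rough picture of this zone-based gestural mapping, using OpenCV and a webcam. The zone layout and parameter names are illustrative; Lev itself is written in Max/MSP and its exact mappings are not reproduced here.

```python
# Sketch: split the camera's field of view into motion-tracking zones and
# turn the gesture energy in each zone into control parameters.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

for _ in range(300):                          # a few seconds of frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev)          # frame difference = gesture energy
    prev = gray
    h, w = motion.shape
    left = motion[:, : w // 5]                # left-hand zone, e.g. amplitude
    right = motion[:, -w // 5:]               # right-hand zone, e.g. pitch/duration
    top = motion[: h // 5, :]                 # upper zone, e.g. spatialization
    amp, pitch, space = (float(np.mean(z)) / 255.0 for z in (left, right, top))
    # ...send amp / pitch / space to the synthesis and panning engines here...
cap.release()
```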

Julie Delisle / McGill University

Julie Delisle is a postdoctoral fellow at the Music Perception and Cognition Laboratory (McGill University, Montreal), where she works in collaboration with Stephen McAdams and Robert Hasegawa on the ACTOR Project. First trained as a classical flutist at the Conservatoire de musique de Montréal (Prix with Great Distinction) and at the Hochschule für Musik Freiburg (Germany), she then studied computer science, sound and music technology, and musicology. In 2018, she completed an interdisciplinary thesis at the Université de Montréal on flute timbre with the support of a scholarship from the Social Sciences and Humanities Research Council (SSHRC) of Canada. Her research work focuses on the acoustics and timbre of musical instruments, the study of playing techniques, the influence of electroacoustic and digital technology on composition and orchestration, and the development and application of methodologies related to digital musicology.

Philippe Esling / IRCAM

Philippe Esling received an MSc in Acoustics, Signal Processing, and Computer Science in 2009 and a PhD on multiobjective time series matching in 2012. He was a post-doctoral fellow in the Department of Genetics and Evolution at the University of Geneva in 2012 and has been an associate professor with tenure at IRCAM (Université Paris 6) since 2013. In this short time span, he has authored and co-authored over 15 peer-reviewed papers in prestigious journals such as ACM Computing Surveys, Proceedings of the National Academy of Sciences, IEEE TASLP, and Nucleic Acids Research. He received a young researcher award for his work in audio querying in 2011 and a PhD award for his work in multiobjective time series data mining in 2013. In applied research, he developed and released the first computer-aided orchestration software, Orchids, commercialized in fall 2014 and already used by a wide community of composers. He has supervised six master's interns and a C++ developer for a full year, and is currently directing two PhD students. He is the lead investigator in time series mining at IRCAM, the main collaborator in the international France-Canada SSHRC partnership, and the supervisor of an international workgroup on orchestration.

José Miguel Fernandez / IRCAM

José Miguel Fernández studied music and composition at the University of Chile and at the Laboratory for Musical Research and Production (LIPM) in Buenos Aires, Argentina. He then studied composition at the Conservatoire National Supérieur de Musique et de Danse de Lyon and followed the Cursus program in composition at IRCAM. He composes instrumental, electroacoustic, and mixed music works. His works have been performed in the Americas, Europe, Asia, and Oceania, and he has produced mixed music and electroacoustic concerts at several festivals. José Miguel Fernández won the international electroacoustic music competition in Bourges (2000), the Grame-EOC international composition competition in Lyon (2008), and the Giga-Hertz Award from the ZKM/EXPERIMENTALSTUDIO in Germany (2010). In 2014, he was chosen by IRCAM for the artistic research residency on interaction in mixed music works, and in 2018 for a residency with the Société des Arts Technologiques in Montreal on writing electronics for an audiovisual project. He is currently in the doctoral program in music (research in composition) at IRCAM, organized in collaboration with Sorbonne Université and the UPMC. His research focuses primarily on writing for electronics and on new tools for the creation of mixed and electroacoustic music. In parallel to his activity as a composer, he works on a range of educational and creative projects connected with computer music.
François-Xavier Féron, Cédric Camier, Catherine Guastavino / STMS Lab (CNRS, Ircam, Sorbonne Université)
François-Xavier Féron holds a master's degree in musical acoustics and a PhD in musicology (Sorbonne University). He was a postdoctoral researcher at the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT, McGill University, Montreal, 2008-2009) and then at the Institut de Recherche et Coordination Acoustique/Musique (Ircam, Paris, 2009-2013). In 2013, he joined the CNRS as a research fellow. He then worked at the Laboratoire Bordelais de Recherche en Informatique and at the Studio de Création et de Recherche en Informatique et Musiques Expérimentales (LaBRI-SCRIME, Université de Bordeaux) before joining in 2018, the Analysis of Musical Practices team at Ircam (STMS-Ircam, Sorbonne University). His research, at the frontier between musical acoustics and musicology, focuses on contemporary musical practices in the 20th and 21st centuries (from the process of creation to the work of interpretation through the analysis of works and perceptive phenomena).
Cédric Camier's career combines science and music. He holds a doctorate in musical acoustics from the Ecole Polytechnique and is a research engineer at the Saint-Gobain Recherche R&D centre. His work focuses on tools for the restitution of auralised and spatialised sound environments as well as acoustic and perceptive diagnostic devices. A composer with degrees from the University of Montreal and the CRR of Poitiers, he composes and performs acousmatic, mixed or improvised music, mainly based on science-inspired transformation processes or synthesis of dynamic fields. Jean Piché, Denis Gougeon, Philippe Leroux and Annette Van de Gorne were among his teachers. His works have been performed in several countries and have received support from the Canada Council for the Arts and CIRMMT.
Catherine Guastavino is a professor at McGill University and a member of the Centre for Interdisciplinary Research in Music, Media and Technology (CIRMMT). She leads the Ville Sonore research partnership, which brings together university researchers, urban planning and acoustics professionals, artists and citizens to rethink the role of sound in our sensitive experiences of the city. Her research focuses on sound ambiences, auditory perception of space, sound spatialization, cognitive processes of categorization and the psychology of music. 
The sound centrifuge: spatial effects induced by circular trajectories at high velocity
“If revolutions of sound in space go beyond a certain barrier of revolutions per second, they become something else” explained Karlheinz Stockhausen during a conversation with Jonathan Cott in 1971. Later, the Portuguese composer Emmanuel Nunes, in close collaboration with computer music designer Eric Daubresse, experimented at Ircam with sounds moving at very high velocities and observed new perceptual effects. These musical experimentations inspired us to initiate a line of research on the perception of spatial figures back in 2009.
Through a series of controlled scientific experiments conducted at CIRMMT, we were able to document the perceptual mechanisms at play in tracking moving sounds. We estimated perceptual thresholds for auditory motion perception and velocity perception and investigated the influence of reverberation, spatialization techniques, and loudspeaker configurations. Throughout this process, new tools based on a hybrid spatialization method, combining numerical propagation and angle-based amplitude panning, were developed to move sounds around the listener at very high velocities. At such velocities, the revolution frequency is of the same order of magnitude as audible frequencies. New effects were obtained by manipulating the listener position, the direction of revolution, the velocity, and the nature of the sound material using our custom-built “sound centrifuge” developed in Max/MSP. They include spatial ambiguities, Doppler pitch-shifting and amplitude modulation induced by apparent velocity variation, timbre enrichment, spatial beating (an amplitude modulation pattern that is a function of the revolution frequency and the sound's fundamental frequency), and a spatial wagon-wheel effect, all dependent on the listening position.
These effects were first used for creative purposes in two multichannel electro-acoustic pieces by composer Cédric Camier created in 2017 and 2019. In this demonstration, effects will be presented parametrically as core elements of spatial studies based on velocity.
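To see why audio-rate revolution frequencies produce such effects, the following numerical sketch synthesizes a pure tone circling a listener and applies the resulting Doppler shift and distance attenuation. It is a simplified far-field model with illustrative parameter values, not the authors' Max/MSP implementation.

```python
# Sketch: a tone on a fast circular trajectory, heard by an off-center
# listener, acquires audio-rate Doppler and amplitude modulation.
import numpy as np

sr = 48000
t = np.arange(int(sr * 2.0)) / sr

f0 = 440.0              # source frequency (Hz)
f_rev = 30.0            # revolutions per second: already in the audio-rate regime
R = 2.0                 # trajectory radius (m)
c = 343.0               # speed of sound (m/s)
listener_offset = 1.0   # listener displaced from the circle's center (m)

# Source-listener distance from the law of cosines; its rate of change is
# the radial velocity that drives the Doppler shift.
angle = 2 * np.pi * f_rev * t
d = np.sqrt(R**2 + listener_offset**2 - 2 * R * listener_offset * np.cos(angle))
v_r = np.gradient(d) * sr                    # radial velocity (m/s)
f_inst = f0 / (1 + v_r / c)                  # Doppler-shifted frequency
phase = 2 * np.pi * np.cumsum(f_inst) / sr   # integrate frequency to phase
signal = np.sin(phase) / d                   # inverse-distance amplitude

# At f_rev = 30 Hz both modulations are themselves audio-rate, producing
# sidebands heard as timbre enrichment and beating.
```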

Sophie Dupuis and Emilie Fortin

Sophie Dupuis is a composer from New Brunswick interested in interdisciplinary art music and in music for small and large ensembles. She is recognized for her impressive technique and endless imagination. She finds her voice in a childhood spent amid the picturesque scenery of the Maritimes and, conversely, in her attraction to raw, electrical, harsh sounds.
Émilie Fortin is an adventurous musician and teacher who explores every possible facet of the trumpet. A versatile performer, she is a freelancer for several ensembles and orchestras. She has contributed to the creation of over fifteen works internationally with various composers in an effort to enrich the repertoire of her instrument. She is also the artistic director of Bakarlari, a collective of soloists.
Their collaboration exists to push the limits of traditional musical language and to create an immersive concert experience.

Raphaël Foulon / IRCAM

Raphaël Foulon is a video artist and researcher. He designs his own visual creation tools and explores various forms of expression such as algorithmic generation, live-cinema and multimedia feedback. His approach consists in exploring the themes of perception of nature and artifice, technological servitude and the transcendence of concepts related to reality. Thus, he privileges materials and textures from the living world, from ancestral artistic traditions and examines their synergistic association with modern means of expression. 

Matthew D. Gantt / Rensselaer Polytechnic Institute, Harvestworks

Matthew D. Gantt is an artist, composer, and educator based in Troy, NY. His practice focuses on (dis)embodiment in virtual spaces, procedural systems facilitated by idiosyncratic technology, and the recursive nature of digital production and consumption. He has presented or performed at a range of institutional and grassroots spaces including Panoply Performance Laboratory, Harvestworks, New Museum, The Stone, Issue Project Room, and internationally at IRCAM (Paris) and Koma Elektronik (Berlin), among others. He has been an artist-in-residence at Pioneer Works, Bard College, and Signal Culture, and is a current PhD candidate at Rensselaer Polytechnic Institute. Gantt releases music with Orange Milk and Oxtail Recordings, teaches experimental music and media across academic and DIY contexts, and worked as a studio assistant to electronics pioneer Morton Subotnick from 2016 to 2018.
Sound and Virtuality: Creative VR, Ambisonics and Expanded Composition
Virtual reality offers the contemporary composer a number of affordances beyond the creation of games, simulations, or simple 'audio-visual' music. This demonstration will showcase new approaches for applying techniques common to generative music, modular-style patching and electronic composition to immersive environments via OSC bridging of Unity/VR, Max/MSP, and IRCAM's Spat~/Panoramix. Hands-on demonstrations of VR works-in-progress will show both 'composer-friendly' methodologies for working with real-time spatial sound and immersive media, as well as new conceptual frames for approaching contemporary VR, such as digital kinetic sculpture, immersive media as simultaneous site and score for performance, and VR as both sonic instrument and concert hall.
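The OSC bridging mentioned above can be pictured with a few lines of code. The sketch below uses the python-osc package as a stand-in for a game engine's OSC sender; the /source/1/xyz address follows Spat-style conventions but, like the port number, is an assumption to be checked against the actual receiving patch.

```python
# Sketch: stream a circular source trajectory to a spatializer over OSC.
import math
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)    # Max/Spat listening port (assumed)

# Circle a virtual sound source around the listener, as a VR object might.
for i in range(400):
    theta = i * 0.05
    x, y, z = 3 * math.cos(theta), 3 * math.sin(theta), 1.5
    client.send_message("/source/1/xyz", [x, y, z])
    time.sleep(0.02)                           # ~50 position updates per second
```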

Nicola Giannini at EMS in Stockholm, August 2017. Image © Emanuele Porcinai

Nicola Giannini / Université de Montréal, CIRMMT

Nicola Giannini is a sound artist and electroacoustic music composer based in Montreal, Canada. His practice focuses on immersive music, both acousmatic and performed. His works have been presented in Canada, the USA, Brazil, Colombia, Mexico, the UK, Sweden, and Italy. His piece “Eyes Draw Circles of Light” won first prize at the JTTP 2019 competition organized by the Canadian Electroacoustic Community and an honorable mention at the XII Fundación Destellos Competition 2019. His piece “For Hannah” was a finalist in the international competition Città di Udine. Originally from Italy, Nicola holds a master's degree in electroacoustic composition from the Conservatory of Florence. Nicola is a doctoral student at the Université de Montréal under the supervision of Robert Normandeau, and a research assistant with the Groupe de recherche en immersion spatiale. Nicola is one of the student coordinators at CIRMMT for the Research Axis on Expanded Musical Practice.
Eyes Draw Circles of Light, acousmatic piece for a dome of speakers
Eyes Draw Circles of Light explores specific aspects of the human unconscious that characterize the brief moment when we are about to fall asleep. Through sound spatialization, the piece creates a multidimensional representation of the unconscious that evokes the relationship between psyche and body, underlining the fast, involuntary body movements (hypnic jerks) that may occur at that moment. The work is a collaboration with the artists Elisabetta Porcinai and Alice Nardi, who wrote a poem for it, and aims to find a balance between elegance and experimentation, femininity and masculinity. The text was interpreted by Elisabetta and then elaborated by the composer. The work was composed in the immersive music studios at the Université de Montréal.

Louis Goldford / Columbia University

Louis Goldford is a composer of acoustic and mixed music whose works are often inspired by transcription and psychoanalysis. He has collaborated with ensembles such as the Talea Ensemble, JACK Quartet, Yarn/Wire, Ensemble Dal Niente, the Meitar Ensemble, and Rage Thormbones. Recent projects include premieres with violinist Marco Fusi, with musicians of the Cité internationale des arts and the Conservatoire supérieur de Paris. Additionally, Louis completed his Cursus at IRCAM in 2019. His works have been presented at music festivals across Europe and North America, and at international conferences. Louis has also performed in Taiwan, Poland and the United States. He is a Dean's Fellow at Columbia University in New York, where he studies with Georg Friedrich Haas, Zosha Di Castri, George Lewis, Brad Garton and Fred Lerdahl. In workshops and individual lessons Louis also worked with Brian Ferneyhough, Philippe Leroux, Yan Maresz, Chaya Czernowin and others.
Assisted Orchestration, Spatialization, and Workflow in Two Recent Compositions
In this presentation concentrating on my latest two works from 2019, I will discuss my extended use of Orchidea, the latest assisted orchestration platform in the Orch* tools lineage, and spatialization using OpenMusic and the Ircam Spat~ package for Max.
These pieces include “Au-dessus du carrelage de givre” for tenor, electronics, and video, premiered at the Soirée du Cursus at the Ircam ManiFeste, 18 June 2019 at the Centquatre, Paris, as well as “Tell Me, How Is It That I Poisoned Your Soup?” for 12 players and live electronics, premiered by the Talea Ensemble in New York City, 31 March 2019 at the DiMenna Center for Classical Music. Score, sound, and video excerpts will be presented alongside examples from the software tools used to generate specific passages.
Because these tools generate a large number of intermediate files, including Max and OpenMusic patches, orchestral analyses and resyntheses, and video work, it quickly becomes necessary to organize one’s studio workflow deliberately. I will offer some solutions for coordinating and synchronizing project files across several computers using the Git version control system, with particular emphasis on how such tools may be harnessed by artists and creators.

Andrea Gozzi / SAGAS - University of Florence

Musician and musicologist, Andrea Gozzi graduated in music from the University of Paris 8 Vincennes-Saint-Denis and is a member of the team of Tempo Reale, Florence's centre for musical research, production, and pedagogy founded by Luciano Berio. A PhD student at SAGAS (University of Florence), he is also a lecturer in Sound Design at the LABA Academy in Florence and a lecturer in Rock History and Sound Design at DAMS (University of Florence). As a musician, he has worked with Italian and international artists, both live and in the studio. He took part in events such as LIVE 8 in Rome in 2005 and has also played in France, England, Germany, and Canada. He has published books and essays on the history of rock and musical biographies in Italy and Canada.
Listen to the theatre! Exploring Florentine performative spaces
A music performance space constitutes the frame as well as the content of the listeners’ experience. The acoustic environment forces continuous negotiations that differ according to a listener’s role and position as composer, performer or audience member. The aim of my research is to investigate the acoustics of a performative space, the Teatro del Maggio Musicale Fiorentino in Florence, following two complementary paths, both based on an interactive model. The first offers an impulse-response experience: the user can virtually explore the opera hall by choosing between the binaural reproductions of 13 different listening positions. The second is about the aural and visual perception of a performance of the romance “Una furtiva lagrima” from Donizetti’s opera L’elisir d’amore. The user, through ambisonics, 360 degree videos and virtual reality, will experience this performance from three different positions in the theatre: on stage, in the orchestra pit and in the audience seating.
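The first, impulse-response path lends itself to a compact illustration: convolving a dry excerpt with the binaural impulse response measured at one of the 13 positions yields that seat's rendering. The sketch below assumes placeholder file names and a mono source; it is not the project's actual tooling.

```python
# Sketch: render one listening position by binaural IR convolution.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("una_furtiva_lagrima_dry.wav")         # mono dry source
ir, sr_ir = sf.read("teatro_position_07_binaural.wav")   # stereo IR (L, R)
assert sr == sr_ir and dry.ndim == 1 and ir.shape[1] == 2

left = fftconvolve(dry, ir[:, 0])
right = fftconvolve(dry, ir[:, 1])
out = np.stack([left, right], axis=1)
out /= np.max(np.abs(out))                               # avoid clipping
sf.write("position_07_rendered.wav", out, sr)            # listen on headphones
```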

Florian Grond and Wieslaw Woszczyk / McGill University

Florian Grond is an interaction designer working as a research associate in the Sound Recording Department of the Schulich School of Music at McGill University. His interdisciplinary research and design interests focus on the immersive use of sound, with several years of experience in sound recording with microphone arrays and reproduction with multi-speaker setups. He currently applies his 3D sound capture and mixing expertise, in collaboration with Randolph Jordan, to this year’s Canada Pavilion at the art Biennale in Venice. For many years he has also been active as an independent media artist, exhibiting his works in solo and group exhibitions at several venues in North America, Europe, and Asia. His artistic and research projects apply creative sonic practices to multimodal participatory design in the context of disability, the arts, and assistive technology. In recent years, he has started various collaborations with colleagues with disabilities from the local community, academia, and the arts, resulting in research output, artistic creations, and the curating of exhibitions.

Wieslaw Woszczyk is an internationally recognized audio researcher and educator with leading expertise in emerging technology trends in audio. Woszczyk holds the James McGill Professor Research Chair position and a full professorship at McGill University, and is the founding director of the Graduate Program in Sound Recording (1978), and founding director of the CIRMMT Centre for Interdisciplinary Research in Music Media and Technology, an inter-university, inter-faculty, interdisciplinary research center established at McGill University in 2001. An AES member since 1976, Woszczyk is a Fellow of the Audio Engineering Society (1996) and the former Chair of its Technical Council (1996-2005), Governor (twice, in 1991-1993 and 2008-2010) and President (2006-2007). He also served on the Review Board of the AES Journal. Woszczyk received the Board of Governors Award in 1991 and a group Citation Award in 2001 for “pioneering the technology enabling collaborative multichannel performance over the broadband Internet.”
Exploring the possibilities of navigating and presenting music performances with a 6DoF capture system
Capturing multiple sound fields in a music performance space is nowadays possible using several synced higher-order microphone arrays separated in space. With post-processing software for the interpolation between these points of capture, this new technology is known as 6DoF, as it affords 3 translational degrees of freedom across the scene, in addition to the already existing 3 rotational degrees of freedom within a single point. This offers new possibilities in the post-production step with regard to selection from the capture and balancing of the recorded acoustic scene. In the context of VR, 6DoF offers new possibilities for immersing the listener in original audio content that covers an extended space, e.g. orchestral ensembles. This gives listeners the possibility to explore the performance space with their own navigation strategies, using interactive tools enabling rotation and translation. Depending on the spatial resolution of sound field capture, a range of possibilities is available in sound balancing, including variations in perspective, separation of sources and ambience, and the degree of sharpness and diffusion of the image in a 3D presentation. The authors will share their findings from the exploration of 6DoF captures using first-order and third-order Ambisonics microphone arrays at multiple locations in a performance space. The authors will also discuss various trade-offs that may be considered in further improving the quality and flexibility of capturing music via multiple sound fields. Consideration will also be given to reproduction in standard channel-based formats for 2D and 3D sound projection.
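As a toy model of the translational interpolation involved, the sketch below crossfades two first-order Ambisonics (B-format) captures by listener proximity. Real 6DoF renderers interpolate far more carefully in the spherical-harmonic domain; the array positions and signals here are stand-ins.

```python
# Naive 6DoF sketch: inverse-distance crossfade between two B-format
# (W, X, Y, Z) captures as the listener translates between arrays.
import numpy as np

def interpolate_bformat(sig_a, pos_a, sig_b, pos_b, listener_pos):
    """sig_a, sig_b: arrays of shape (num_samples, 4); positions: 3-vectors."""
    da = np.linalg.norm(listener_pos - pos_a)
    db = np.linalg.norm(listener_pos - pos_b)
    wa, wb = 1.0 / max(da, 1e-6), 1.0 / max(db, 1e-6)   # nearer array dominates
    wa, wb = wa / (wa + wb), wb / (wa + wb)
    return wa * sig_a + wb * sig_b

# Listener most of the way from array A (origin) toward array B (4 m away):
a = np.random.randn(48000, 4)                 # stand-in for captured signals
b = np.random.randn(48000, 4)
mix = interpolate_bformat(a, np.zeros(3), b, np.array([4.0, 0, 0]),
                          np.array([2.7, 0, 0]))
```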

 Rob Hamilton / Rensselaer Polytechnic Institute

Composer and researcher Rob Hamilton explores the converging spaces between sound, music and interaction. His creative practice includes mixed and virtual-reality performance works built within fully rendered networked game environments, procedural music engines and mobile musical ecosystems. His research focuses on the cognitive implications of sonified musical gesture and motion and the role of perceived space in the creation and enjoyment of sound and music. Dr. Hamilton received his Ph.D. from Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA) and currently serves as Assistant Professor of Music and Media in the Department of Arts at Rensselaer Polytechnic Institute in Troy, NY. 

Virtual Instrument Design for the 21st Century

http://homepages.rpi.edu/~hamilr4/cv/

Lena Heng / McGill University

Lena Heng is currently doing an interdisciplinary PhD in the Music Perception and Cognition Lab at McGill University Canada, under the supervision of Prof. Stephen McAdams. Their research interests are in the area of timbre perception, music hermeneutics, cognitive representation, and emotion perception in music. As a musician, Lena is particularly keen on integrating their research interests with performance, listening, and meaning-making in music. Their work on this aspect has earned them the Research Alive award from McGill Schulich School of Music in 2018/19. Prior to their graduate studies, Lena obtained a B.Mus (Hons.) from the Nanyang Academy of Fine Arts and a B.Soc.Sci (Hons.) in Psychology from the National University of Singapore. In 2016, Lena was awarded the NAC - Graduate Arts Scholarship, as well as the McGill University Graduate Excellence Fellowship and McGill University Student Excellence Award for their studies at McGill University. 
Timbre’s function in perception of affective intents. Can it be learned?
Timbre has been identified by music perception scholars as an important component in the communication of affect in music. While its function as a carrier of perceptually useful information about sound source mechanics has been established, studies of whether and how it functions as a carrier of information for communicating affect in music are still in their infancy. If timbre functions as a carrier of affective content in music, how it is used may be learned differently across different musical traditions. The amount of information timbre carries across different parts of a phrase may also vary according to musical context.
To investigate this, three groups of listeners with different musical training (Chinese musicians, Western musicians, and nonmusicians, n = 30 per group) were recruited for a listening experiment. They were presented with a) phrases and measures, and b) individual notes, of recorded excerpts interpreted with a variety of affective intents by performers on Western and Chinese instruments.
These excerpts were analyzed to determine acoustic attributes correlated with timbre characteristics. Analysis revealed consistent use of temporal, spectral, and spectrotemporal attributes in judging affective intent in music, suggesting purposeful use of these properties within the sounds by listeners. Comparison between listeners' perceptions of notes and of longer segments also revealed differences in perception with increased musical context. Timbre thus appears to be implicated differently in musical communication across musical traditions, and its importance also appears to vary for different positions within a musical phrase.
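
The three attribute families mentioned above are typically approximated with standard audio descriptors. As a rough illustration only (not the authors' actual analysis pipeline), each family can be computed in a few lines of Python with the librosa library; the file name is a placeholder:

```python
import librosa
import numpy as np

y, sr = librosa.load("excerpt.wav", mono=True)  # placeholder file

# Spectral attribute: centroid, a common correlate of perceived brightness.
centroid = float(librosa.feature.spectral_centroid(y=y, sr=sr).mean())

# Temporal attribute: a crude (log-)attack time, measured from the first
# frame of the RMS envelope to its peak.
rms = librosa.feature.rms(y=y)[0]
hop = 512  # librosa's default hop length
attack_s = max(int(rms.argmax()), 1) * hop / sr
log_attack_time = float(np.log10(attack_s))

# Spectrotemporal attribute: spectral flux, i.e. frame-to-frame change.
S = np.abs(librosa.stft(y))
flux = float(np.sqrt((np.diff(S, axis=1) ** 2).sum(axis=0)).mean())

print(f"centroid: {centroid:.0f} Hz, log-attack: {log_attack_time:.2f}, flux: {flux:.2f}")
```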

Jullian Hoff and Charlotte Layec / Université de Montréal 

Jullian Hoff (composer): My creations are divided between performative works (mixed music & generative audiovisual) and fixed media (acousmatic & video music). I am inspired by themes such as lyrical abstraction, the place of humans in the face of technology, technoculture and posthumanism. I hold a master's degree in electroacoustic composition from the Université de Montréal, where I worked on human-machine interactions in the digital arts.
Charlotte Layec, clarinetist, received her musical training in France and then in Montreal. A versatile artist, she moves among different musical aesthetics, combining classical and contemporary music (Émerillon trio) with electroacoustic music and free improvisation (Ensemble ILÉA). Her interpretive qualities have led her to perform, among others, with the OSM under the direction of Kent Nagano in 2016, as well as with the NEM directed by Lorraine Vaillancourt in 2018.
"Verklärter Rohr", use of audio descriptors in mixed music
The project "Verklärter Rohr" (or "transfigured tube") is a live performance for bass clarinet, electronic instruments and generative video. This innovative piece explores different spaces of dialogue between a musician (being sensitive and spontaneous) and logical and mathematical operators who control digital instruments (set of audio descriptors, musical algorithms, electroacoustic lutherie and a generative video program ).
"Verklärter Rohr" is a journey around a bass clarinet, transformed by means of digital audio treatments that propose multiple phantasmagoria. The musician (Charlotte Layec) interacts in real time with the audio and video device to create a musical atmosphere that is sometimes dreamlike and vaporous, sometimes pointillist and virtuoso.

Erica Huynh / McGill University 

Erica Huynh is a PhD Candidate in Music Technology (Interdisciplinary) at the Schulich School of Music of McGill University. Under the supervision of Stephen McAdams at the Music Perception and Cognition Lab, her research examines how listeners identify excitation methods and resonance structures when they are combined in ways that are typical (e.g., bowed string) or atypical (e.g., bowed air column) of acoustic musical instruments. This research is conducted in collaboration with Joël Bensoam from IRCAM and Jens Hjortkjær from Technical University of Denmark. Erica is particularly interested in integrating topics in psychology and music, as well as practicing methods in experimental design and data analysis.
Bowed plates and blown strings: Odd combinations of excitation methods and resonance structures impact perception
How do our mental models limit our perception of sounds in the physical world? We have evaluated the perception of sounds synthesized by one of the most convincing methods: physical modeling. We used Modalys, a digital physical modeling platform, to simulate interactions between two classes of mechanical properties of musical instruments: excitation methods and resonance structures. The excitation method sets into vibration the resonance structure, which acts as a filter that amplifies, suppresses, and radiates sound components. We simulated and paired three excitation methods (bowing, blowing, striking) and three resonance structures (string, air column, plate), forming nine excitation-resonator interactions. These interactions were either typical of acoustic musical instruments (e.g., bowed string) or atypical (e.g., blown plate). Listeners rated the extent to which the stimuli resembled bowing, blowing, and striking excitations (Experiment 1), or the extent to which they resembled string, air column, and plate resonators (Experiment 2). They assigned the highest resemblance ratings to: (1) excitations that actually produced the sound and (2) resonators that actually produced the sound. These effects were strongest for stimuli representing typical excitation-resonator interactions. However, listeners confused different excitations or resonators for one another for stimuli representing atypical interactions. We address how perceptual data can inform physical modeling approaches, given that Modalys effectively conveyed excitations and resonators of typical but not atypical interactions. Our findings emphasize that our mental models for how musical instruments are played are very specific and limited to previous exposure and perceived mechanical plausibility of excitation-resonator interactions.
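
Modalys itself is not reproduced here, but the excitation-resonator pairing can be caricatured with elementary signal processing. The sketch below (Python; the mode lists and excitation shapes are toy assumptions, not Modalys data) drives a small bank of two-pole resonators with strike-, bow-, or blow-like excitations, so any of the nine pairings can be auditioned:

```python
import numpy as np
from scipy.signal import lfilter, sawtooth

SR = 44100
MODES = {  # (frequency ratio, amplitude) per mode -- toy caricatures
    "string":     [(k, 1.0 / k) for k in range(1, 9)],                    # harmonic
    "air_column": [(k, 1.0 / k) for k in (1, 3, 5, 7)],                   # odd partials
    "plate":      [(r, 1.0 / r) for r in (1.0, 1.59, 2.14, 2.30, 2.65)],  # inharmonic
}

def excitation(kind, n, f0):
    t = np.arange(n) / SR
    if kind == "strike":  # brief wideband burst
        e = np.zeros(n)
        e[:256] = np.random.randn(256) * np.linspace(1.0, 0.0, 256)
        return e
    if kind == "bow":     # sustained, harmonically rich drive
        return 0.1 * sawtooth(2 * np.pi * f0 * t)
    return 0.05 * np.random.randn(n)  # "blow": sustained noisy drive

def render(excite, resonator, f0=220.0, dur=2.0):
    n = int(dur * SR)
    e = excitation(excite, n, f0)
    out = np.zeros(n)
    for ratio, amp in MODES[resonator]:
        f, tau = ratio * f0, 0.8 / ratio   # higher modes decay faster
        r = np.exp(-1.0 / (tau * SR))      # pole radius from decay time
        w = 2 * np.pi * f / SR
        out += amp * lfilter([1 - r], [1, -2 * r * np.cos(w), r * r], e)
    return out / np.abs(out).max()

typical, atypical = render("bow", "string"), render("blow", "plate")
```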

Davide Luciani and Jordan Juras

Davide Luciani is an electronic music composer and media designer based in Berlin, currently attending the M.A. in Sound Studies and Sonic Arts at UdK Berlin. He has directed and curated a wide variety of projects between art and design. His collaborations have been hosted at institutions and venues such as the Venice Biennale, Berlin Atonal, Ström Festival, and the Bayreuth Festspiele. In 2014, together with sound artist Fabio Perletta, he co-founded Mote, a multidisciplinary design studio whose practice addresses arts and music.
Jordan Juras is a Berlin-based sound and computer artist. He is a Music Information Retrieval researcher at Native Instruments. He completed a M.Music in Music Technology at NYU, and is a McGill alumnus (graduated in 2012 with a B.Sc in Physics and minor in Philosophy of Science). 
Son.AR: Sounding the limit of space
We would like to present Son.AR, an experimental app that offers a sonic AR navigation system within the context of a multimedia guide. The app prototype was developed for the Screen City Biennial 2019 in Norway. It aimed to contribute to the field of public art by mediating artworks and the audience's experience, extending a conventional information guide into a city-wide augmented audio map.
Binaural spatialisation is combined with compass and location tracking to allow users to interact with their environment by moving towards, around, and away from fixed virtual sound object locations. This technology allows us to create an augmented sonic superposition on a city’s topography, simulating the virtual space in the users’ earphones. In Son.AR, auditory perception is a central feature for the user experience.
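
The underlying geometry can be made concrete with a small sketch. The code below is a simplified stand-in for whatever Son.AR actually uses (an equirectangular approximation, adequate at city scale); it converts GPS coordinates and a compass heading into the head-relative azimuth and distance a binaural renderer would need:

```python
import math

def source_relative_to_user(user_lat, user_lon, heading_deg, src_lat, src_lon):
    # Local east/north offsets in metres (equirectangular approximation).
    R = 6371000.0
    d_north = math.radians(src_lat - user_lat) * R
    d_east = math.radians(src_lon - user_lon) * R * math.cos(math.radians(user_lat))
    # Compass bearing to the source, then azimuth relative to where the
    # user is facing: 0 = straight ahead, negative = to the left.
    bearing = math.degrees(math.atan2(d_east, d_north)) % 360.0
    azimuth = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    distance = math.hypot(d_east, d_north)
    return azimuth, distance

def gain_for(distance, ref_m=1.0, rolloff=1.0):
    # Inverse-distance attenuation so objects fade in as the user approaches.
    return (ref_m / max(distance, ref_m)) ** rolloff
```
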
The app suggests new possibilities for time-based media and for public and social design, expanding contemporary uses in arts, language, culture, and entertainment.
From an artistic perspective, the app makes it possible to conduct research at the forefront of music technology: dynamic composition tools, spatial audio techniques, and the perceptual play between sound identity, object, and the expectations of virtual representation.
For the Screen City Biennial, the authors used morphing as an approach to sound composition. Natural sounds were morphed into synthetic counterparts and vice versa: an amorphous Möbius strip emblematic of the precarious balance between environment and technology. New sound ecologies arise, serving as a tool to improve our consciousness of the world.
http://2019.screencitybiennial.org/sonarapp

Sergio Kafejian / São Paulo University

Sergio Kafejian is a Brazilian composer and researcher who obtained his master's degree from Brunel University (London) and his PhD from UNESP, and conducted postdoctoral research at NYU Steinhardt in 2017. Kafejian has won several composition prizes, such as the Bourges International Electroacoustic Music Contest (1998 and 2008), the Concurso Ritmo e Som (1994 and 1998), the Gilberto Mendes Contest for Orchestra (2008) and the FUNARTE Classical Composition Prizes (2008 and 2014). His professional output consists of instrumental and electroacoustic compositions as well as pedagogic projects involving contemporary improvisation, composition and performance. Kafejian has been teaching composition, electroacoustic music and contemporary music at Santa Marcelina College since 2001. He is currently a postdoctoral researcher at the University of São Paulo (USP), sponsored by the São Paulo Research Foundation (FAPESP). His recent research has focused on how the use of extended techniques and sonorities, whether acoustic or electronic, has created new musical paradigms.
CTIP II: A practical demonstration of an interactive system
In this demo, we will demonstrate the functioning of CTIP II, the second interactive system developed by the author in collaboration with clarinetist and improviser Dr. Esther Lamneck (NYU Steinhardt). Like its predecessor, the system extracts data relating to timbre, frequency, and amplitude, and accordingly chooses one of its available interactive states. Each interactive state splits the incoming sound into three layers and routes them to one (or more) of its digital sound processors and recording buffers. In turn, the processed signals can be routed to the processors according to detected information related to dynamics, density and timbre. The interactions between the player and the system are therefore the result of what the system offers in terms of responsiveness to the player's musical behavior, as well as of how the player perceives and explores the action-response pairs. The elaboration of CTIP II was guided by the following aims: (1) to create a system that interacts with an improviser the way a human would; (2) to create a system that can be managed either automatically or by the performer; (3) to elaborate a system that can be used in different musical situations. Among existing interactive-system projects, CTIP II has been influenced by: (1) William Hsu's ARHS (2010); (2) Doug Van Nort's FILTER and GREI (2010); (3) Adam Linson's ODESSA system (Linson et al., 2015). CTIP II is based on Max/MSP and uses the following IRCAM tools: zsa.descriptors, mubu.gmm, and add_synth~.
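
The routing logic described above can be illustrated schematically. The sketch below is a loose Python analogue: the state names, thresholds, and processor labels are invented for illustration, while the real system works inside Max/MSP on zsa.descriptors and mubu.gmm output.

```python
import random

STATES = ["granular_dialogue", "spectral_freeze", "canonic_delays"]

def choose_state(centroid_hz, rms, density):
    # Pick an interactive state from rough timbre/dynamics/density data.
    if rms < 0.01:
        return "spectral_freeze"       # quiet playing: sustain a halo
    if centroid_hz > 3000 and density > 5:
        return "granular_dialogue"     # bright, busy playing: fragment it
    return random.choice(STATES)       # otherwise stay unpredictable

def route_layers(state):
    # Map the three incoming sound layers to processors for a state.
    routing = {
        "granular_dialogue": ("granulator", "pitch_shifter", "record_buffer"),
        "spectral_freeze":   ("freezer", "long_reverb", "record_buffer"),
        "canonic_delays":    ("delay_a", "delay_b", "delay_c"),
    }
    return dict(zip(("layer_1", "layer_2", "layer_3"), routing[state]))

print(route_layers(choose_state(centroid_hz=4200.0, rms=0.2, density=8)))
```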

Hanna Kim / University of Toronto

Hailed as “truly inspired” by Ludwig van Toronto, Hanna Kim encompasses a wide range of traditional, neo-romantic, minimalistic, and improvisational styles for her compositional work. She is the recipient of several awards, including the 2019 Lothar Klein Memorial Fellowship in Composition, the 2019 St. James Cathedral Composition Competition, the 2014 Miriam Silcox Scholarship, and the 2013 Joseph Dorfman Composition Competition (Germany). Ms. Kim has won numerous score calls, and has been asked to compose new works for concert performances across a variety of styles. A native of South Korea, Ms. Kim is currently working toward her doctoral degree (DMA) at the University of Toronto, Canada. In addition to her passion for being a scholar of music, Kim is also an active church musician. She currently serves as the Minister of Music at the Calvary Baptist Church in Toronto. 
Timbre Saturation for Chamber Orchestra
My presentation will be about a new piece for chamber orchestra that I wrote as a dissertation for my doctoral degree at the University of Toronto. The piece specifically considers economy, attained through efficient use of the instruments. The work is scored for a smaller number of players than the standard orchestra. Typically, when it comes to scoring chord progressions for a large orchestra, two methods are common: one is doubling, and the other is harmonizing. My composition project begins by questioning whether those common practices cause any sound overload. Furthermore, what if a composer eliminates any melodic/intervallic elements that could be thought of as 'wastefulness', such as doubling? By seeking to create the most economical orchestration, the goal of this project is to suggest a solution that optimizes orchestral effects and colors with a limited number of instruments. In addition, my dissertation work will rely on the minimum number of instruments needed for a large ensemble piece to be as effective as a full-sized orchestra, focusing particularly on the use of color (timbre). The piece requires twenty-five players: 1st flute, 2nd flute (+ picc), 1st clarinet in Bb, 2nd clarinet in Bb, 3rd clarinet in Bb (+ bass cl), tenor saxophone, bassoon, horn, trumpet in C, 1st trombone, 2nd trombone, piano (+ celesta), harp, 1st percussion, 2nd percussion, timpani, and strings (2, 2, 2, 2, 1). The piece will have three movements. The performance duration will be approximately fifteen minutes.

Mantautas Krukauskas / Lithuanian Academy of Music and Theatre, Music Innovation Studies Centre

Mantautas Krukauskas (b. 1980) is a composer and sound artist, and a teacher at the Department of Composition of the Lithuanian Academy of Music and Theatre in Vilnius, where he is also a co-founder and, since 2016, Head of the Music Innovation Studies Centre, an academic lab for studies, art and research with a focus on music technology, innovation in music and music education, interactive arts, immersive media, and interdisciplinarity. His compositions, including chamber music, sound art and other works, and music for theatre and dance productions, have been performed in Lithuania, Austria, Germany, France, Canada, the USA, and other countries. His professional profile also includes electronic music performance and work in the creative industries in music production and arrangement. Mantautas Krukauskas has been actively involved in a diverse range of activities, including the coordination and management of international artistic, research and educational programmes. His interests comprise interdisciplinarity, creativity, music and media technologies, and the synergy of different aesthetic and cultural approaches.
Some conceptions for effective use of immersive sound techniques for music composition and orchestration
Immersive and spatial sound technologies are gaining more widespread use, especially in electroacoustic music composition. Although spatial audio is widely considered a relatively new field, its roots can be traced much further back, including in orchestral music. The accessibility of new tools with intuitive interfaces has widened composers' possibilities for working with the analysis and modelling of spatialization without a steep learning curve or deep specialised knowledge. This has enabled a shift of attention from technological challenges towards artistic ones. On the one hand, mostly in acousmatic music, a major part of the research concerns technical aspects rather than the content itself; on the other hand, we have more abstract inquiries and insights into conceptions of space in music. Studies that concern spatialization techniques, as well as the application of discoveries in spatial audio to acoustic music and orchestration, are quite lacking. Existing know-how is mostly held by experts and shared interpersonally.
The author of this presentation has been working actively with spatialization since 2013, acquiring expertise in diverse techniques for adapting and mixing sonic material in 3D space. The scope of this work also includes exploring the use of space as a compositional parameter and working with music and sound in interdisciplinary contexts. This experience has led to the discovery of certain trends and directions for the artistically effective application of immersive sound techniques, both for spatial audio and for traditional composition and orchestration.
This presentation will focus on defining the most widespread artistic contexts for spatial sound application and will describe spatialization techniques that lead to effective artistic impact. The author will also demonstrate some models and experiments in using relevant immersive-sound architectonic principles for acoustic music composition and orchestration.

Dongryul Lee / University of Illinois at Urbana-Champaign

Dongryul Lee’s music is deeply oriented around acoustical phenomena and virtuosic classical performance practice. He seeks to write music that creates profound aural experiences with both dramaturgy and pathos. His compositions have been performed by ensembles such as Avanti!, Contemporanea, Jupiter, MIVOS, Callithumpian Consort, GMCL, S.E.M., Conference Ensemble, Paramirabo, and Illinois Modern Ensemble among others. He was awarded the third prize in the first Bartók World Competition (Budapest, 2018); the Presser Award for the performance of Unending Rose with Kairos quartett (Berlin, May 2020); the Special Prize Piero Pezzé in the Composition Competition Città di Udine (Italy, 2018); and the Second Prize in the Composition Competition GMCL (Portugal, 2017). His Parastrata has been performed in four cities in Europe and North America. Lee holds degrees in computer science and composition from Yonsei University and the Eastman School respectively, and is an ABD doctoral candidate and lecturer at the University of Illinois at Urbana-Champaign.
A Thousand Carillons: Acoustical Implementation of Bell Spectra Using the Finite Element Method and Its Compositional Realization
I will present an implementation of virtual bells based on the Finite Element Method (FEM) from engineering physics. The FEM has been widely used in engineering analysis and acoustics, especially for the creation and optimization of carillons. Beginning with a brief introduction to spectral music inspired by bell sounds, I will introduce the theoretical basis of the FEM and its application to isoparametric 2-D elements, including the Principle of Virtual Work and FE shape functions. The creation of 3-D virtual bell geometries with structural analysis, and their optimization process, will follow, with acoustical background on campanology. For my computational realizations, I follow the groundbreaking research of Schoofs and Roozen-Kroon, which formed the basis on which the first prototype of the major-third bell was designed and cast. A brief discussion of just tuning and of 72-TET, which approximates just intervals within a threshold of 5 cents, will follow, for the creation of spectral profiles of optimal bell tone-colors. The presentation will be accompanied by original SuperCollider examples and images of bell geometries created in COMSOL Multiphysics, to visualize and sonify the characteristics of bells and their sounds.
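
The 5-cent claim is easy to verify numerically: one 72-TET step is 1200/72 ≈ 16.7 cents, so the worst-case quantization error is about 8.3 cents, and most low-order just intervals land well inside 5 cents. A minimal Python check (the partial ratios below are idealized textbook values for a conventional minor-third bell's hum, prime, tierce, quint, and nominal, not FEM results; a major-third design would replace 6/5 with 5/4):

```python
import math

STEP = 1200.0 / 72.0  # one 72-TET step = 16.67 cents

def cents(ratio):
    return 1200.0 * math.log2(ratio)

def quantize_72tet(c):
    q = round(c / STEP) * STEP
    return q, c - q  # quantized value and residual error

# Idealized bell partials relative to the prime (textbook ratios).
for name, ratio in [("hum", 0.5), ("prime", 1.0), ("tierce", 6/5),
                    ("quint", 3/2), ("nominal", 2.0)]:
    c = cents(ratio)
    q, err = quantize_72tet(c)
    print(f"{name:8s} {c:8.1f} cents -> {q:8.1f} (error {err:+5.1f})")
```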

Serge Lemouton / Ircam (Computer Music Designer)

After studying violin, musicology, writing and composition, Serge Lemouton specialized in the various fields of computer music at the Sonvs department of the Conservatoire national supérieur de musique de Lyon. Since 1992, he has been a computer music designer at Ircam. He collaborates with researchers in the development of computer tools and participates in the musical projects of composers including Florence Baschet, Laurent Cuniot, Michael Jarrell, Jacques Lenot, Jean-Luc Hervé, Michaël Levinas, Magnus Lindberg, Tristan Murail, Marco Stroppa, Frédéric Durieux and others. He has notably produced and performed in real time several works by Philippe Manoury, including K..., la frontière, On-Iron, Partita 1 and 2, and the opera Quartett by Luca Francesconi.

Christophe Lengelé / Université de Montréal

Christophe Lengelé is currently pursuing a doctorate in music (composition and sound design) at the Université de Montréal. His field of activity and research brings together spatial sound design, electronic and electroacoustic composition, and performance, with a special focus on the development of live experimental audio tools and interfaces built from open-source software. After studying law and economics and working as a marketing and market analyst in international companies for a few years, he decided to quit the business field in 2006 and returned to school to train in electroacoustic composition, obtaining a Master of Arts in computer music. He seeks to bring together the spheres of composition and improvisation, and focuses on performing variable, spatio-temporally open sound pieces with a global custom live tool, which he has been developing in SuperCollider since 2011 in order to play the place and the music at the same time.
Real-time creation of spatialized polyrhythms in electroacoustic music

Frank Madlener / IRCAM

Frank Madlener is a pianist and conductor, and a cultural and artistic manager; he has overseen many festivals and musical events. Since 2006, Madlener has served as the director of IRCAM (Institut de Recherche et Coordination Acoustique/Musique), a dynamic and innovative research institute founded at the Centre Pompidou in the late 1970s, where he had previously held the position of artistic director. As director of IRCAM, Madlener explores the relationship between man and machine and how technology affects the composer. His goal is to implement technology in compositions for particular instruments as well as for the entire orchestra. He exposes broad audiences to innovative and experimental pieces of music.

Esteban Maestre / McGill University

Esteban Maestre (Barcelona, 1979) is a researcher in musical acoustics and audio signal processing with background in Electrical Engineering (BSc'00, MSc'03) and Computer Science (DEA'06, PhD'09). He has held research and/or teaching positions at Universitat Politecnica de Catalunya, Universitat Pompeu Fabra, Stanford University, and McGill University. In studying novel means for interaction with sound and music, he has worked on diverse topics related to sound analysis/synthesis, vibroacoustic modeling, physical modeling of musical instruments, motion capture analysis of performing musicians, machine learning applied to motor control of sound rendering, and spatial audio.
Towards virtual acoustic replication of real acoustic violins
With the motivation of preserving historical musical instruments as interactive digital acoustic artifacts that can be played and heard in real time, we are currently pursuing the development of novel means to model and virtually recreate the sound radiation characteristics of real acoustic violins. We work by measuring the directivity of an acoustic violin and constructing an efficient digital filter model that can be used for real-time processing of a whitened version of the electrical signal coming from a silent violin as played by a musician. In a low-reflectivity chamber, we employ a microphone array to characterize a radiativity transfer function for an acoustic violin by exciting the bridge with an impact hammer and measuring the sound pressure at 4320 points on a sphere surrounding the instrument. We then design a digital filter that allows us to obtain the signal corresponding to the radiated sound pressure wavefront in any direction, for many time-varying directions simultaneously, offering possibilities for real-time spatialization and virtual acoustic reality applications. This presentation will provide an overview of our most recent progress on this project, ultimately aimed at virtually reconstructing the sound of the Stradivari Messiah.
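
As a schematic of the real-time stage only (the measurement and filter-design stages are the hard part and are not modeled here), the Python fragment below filters a whitened bridge signal with the impulse response measured nearest to a requested radiation direction. The array names and the nearest-neighbour lookup are placeholders for the authors' actual interpolated filter model:

```python
import numpy as np
from scipy.signal import fftconvolve

def nearest_direction(grid_dirs, target):
    # grid_dirs: (N, 3) unit vectors for the N measured directions
    # (N = 4320 in the measurement described above).
    target = target / np.linalg.norm(target)
    return int(np.argmax(grid_dirs @ target))

def radiated_pressure(whitened_sig, radiativity_irs, grid_dirs, direction):
    # Pick the measured radiativity impulse response closest to the
    # requested direction and filter the whitened silent-violin signal.
    idx = nearest_direction(grid_dirs, np.asarray(direction, dtype=float))
    return fftconvolve(whitened_sig, radiativity_irs[idx])[: len(whitened_sig)]
```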

Emanuelle Majeau-Bettez / McGill University - IRCAM

Emanuelle Majeau-Bettez is a doctoral candidate in musicology and feminist studies at McGill University, where she is supervised by David Brackett and Lisa Barg. Her research focuses on both historical and current collaborative components of composer Éliane Radigue's career. Amongst other publications, Emanuelle is the author of Radigue’s “parcours d’œuvre” on the Ircam B.R.A.H.M.S. database. Emanuelle serves on the editorial board of Circuit, musiques contemporaines and is part of the Musical Improvisation and Collective Action [MICA] research team at Ircam. Outside of her academic studies, Emanuelle is a devoted piano teacher at École de musique de Verdun and Camp musical Père Lindsay, and she is completely passionate about surfing.
Oozing out of the walls: immersive space according to Éliane Radigue

During the broadcast of Psi 847 (1973), a work composed by Éliane Radigue on the ARP 2500 synthesizer, composer Tom Johnson noticed certain motifs emanating from incongruous corners: from the loudspeakers, certainly, but also from the side wall, or from a specific point near the ceiling. Although this type of spatialization is not exclusive to Radigue, this talk will demonstrate that the composer's approach draws attention to acoustic games which, in many other types of music, go completely unnoticed. At the risk of shocking some engineers by placing her loudspeakers in a "completely anti-acoustic" manner, it has always been essential for Radigue to make the place sound: attending to the acoustic response of a space so that the sound cannot be traced to a particular source and thus builds an interesting story everywhere. Radigue is thus an "auditory architect" as defined by Barry Blesser: at the antipodes of sound-engineering techniques that transform a place to create optimal, directed listening zones, Radigue "focuses on how listeners experience space".


Lara Morciano / EA 7410 (SACRe) Paris Sciences et Lettres Research University. Conservatoire L. Cherubini de Florence 

Lara Morciano began studying the piano at a young age and obtained her diploma with the highest distinctions at the age of 16 at the Conservatory Tito Schipa in Lecce, Italy. Parallel to her concert activities, she studied composition at the Conservatory Santa Cecilia in Rome, where she earned several diplomas (Composition, Choral Music and Choral Conducting, Piano Reduction of Scores, and Analysis), subsequently earning her Master in Composition with Franco Donatoni at the Santa Cecilia National Academy. In France, after a composition diploma obtained at the Strasbourg Conservatory (in the class of Ivan Fedele), she attended the Cursus in composition and computer music at Ircam in 2005-2006. In 2009 she was awarded a Master of Arts in Musicology at the University of Paris 8, followed by a PhD in Sciences, Arts, Creation and Research (SACRe) at PSL Research University, ENS, CNSMDP and Ircam, under the direction of Gérard Assayag. Her compositions have been performed at many festivals, including the Philharmonie de Paris; Ircam - Centre Pompidou; Présences, Création Mondiale - Radio France; ZKM - Karlsruhe; the Venice Biennale; the New York City Electroacoustic Music Festival; the Onassis Foundation, Athens; the International Gaudeamus Music Week - Amsterdam; the Warsaw Autumn Festival; Opéra de Dijon; Musica - Strasbourg; Ultima - Oslo; BIFEM - Australia; and Nuova Consonanza - Rome. She works with ensembles such as the Ensemble Intercontemporain and Court-Circuit, and with performers including Hae Sun Kang, Mario Caroli, Claude Delangle and Garth Knox. Her music has been broadcast by France Musique, Rai 3 - RadioTelevisione Italiana, ABC Classic FM and the Louis Vuitton Foundation, among others. She has received commissions from numerous institutions including the French Ministry of Culture, the Ensemble Intercontemporain, Ircam - Centre Pompidou, Radio France, the Venice Biennale, the ZKM Karlsruhe, and GRAME / Auditorium Orchestre de Lyon. She won the Tremplin 2008 competition (Ircam and Ensemble Intercontemporain), the Giga-Hertz Award international composition competition in Germany in 2012, and the ICMA Audience Award for Best Music Presentation at the International Computer Music Conference (ICMC) 2019 in New York.
Between real and virtual: gesture capture, transducers, listening space
In my cycle of works for piano, motion capture, transducers and real-time electronics, the research is based on the correspondence between gesture and the characteristics of the sound signal, as well as on the synchronization and interaction between the hand movements of the performer and various sound processes. Experimentation with embedded sensors, real-time control of sound synthesis from gestural capture, and the integration of gesture analysis to synchronize the musician's actions with electronic processing make possible a performance in which the instrumental and electronic realizations are strictly interdependent and interactive. By using transducers to process and diffuse sounds (including synthesis manipulated through motion capture) directly into the piano, it is possible to create a double world of sonorities, characterized by the ambiguity between the tempered world of the acoustic piano and that of a completely detuned virtual resonant instrument.

Stephen McAdams, Meghan Goodchild, Alistair Russell, Beatrice Lopez and Kit Soden / McGill University

Stephen McAdams studied music composition and theory at De Anza College in California before turning to perceptual psychology at McGill (BSc, 1977). He then studied psychoacoustics and auditory neuroscience at Northwestern University (1977-1979), continuing on to complete a PhD in Hearing and Speech Sciences at Stanford University (1984). In 1986, he founded the Music Perception and Cognition team at IRCAM-Centre Pompidou in Paris and organized the first conference on Music and the Cognitive Sciences there in 1988. He was a research scientist in the French CNRS (1989-2004) and then returned to McGill to direct the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT, 2004-2009). He holds the Canada Research Chair in Music Perception and Cognition. He is currently interested in the perception of musical timbre applied to a psychological foundation for a theory of musical orchestration.
Meghan Goodchild is the Research Data Management Systems Librarian at Queen’s University and Scholars Portal of the Ontario Council of University Libraries (OCUL). In her role, she develops research data infrastructure locally at Queen’s and collaborates on initiatives that support OCUL, such as providing user support and project management of development work for Scholars Portal’s Dataverse platform. She has expertise in interdisciplinary research projects, data visualization, digital preservation, and data repositories. Meghan previously worked as a postdoctoral fellow and project manager for the Orchestration and Perception Project at McGill University led by Stephen McAdams. She holds a Master of Information Studies (MISt) and a PhD in Music Theory from McGill University.
In the senior years of his BSc in Mathematics at McGill University, Alistair Russell designed and built the OrchARD web application using the Django web framework and integrated it with the jQuery QueryBuilder to facilitate complex queries of the extensive OrchARD knowledge base. He worked at the Music Perception and Cognition Lab between 2014 and 2017 under the guidance and supervision of Meghan Goodchild and Stephen McAdams. Alistair has since gone on to work as a Production Coordinator for a software engineering support team at Industrial Light & Magic in Vancouver, BC. He has worked in the visual effects industry for the past three years, starting his career as a technical specialist before moving towards a leadership position.
Beatrice Lopez holds a BSc in Computer Science from McGill University. She worked as the web database developer for OrchARD, building on the work of previous developer Alistair Russell. She is interested in all things web and software development, and has also jumped on the bandwagon of amateur artificial intelligence and machine learning. She dreams of building things that can impact the world in a beautiful and positive way. She likes coffee, the colour blue, code, natural light, and the great outdoors.
Kit Soden is a composer, researcher, and music educator based in Montreal, QC, Canada. He is currently pursuing a PhD in composition at McGill University, studying with John Rea and Stephen McAdams, and working as a research assistant for the Analysis, Creation, and Teaching of Orchestration (ACTOR) project. As a composer, Kit is inspired by the interaction and relationship between timbre and narrativity in music, and particularly in the use of orchestration to enhance the expressive dramaturgy of a composition. Currently working on two opera projects, Kit also composes works for solo instruments, electronics and live ensemble, various chamber ensembles, and large wind band.

The Orchestration Analysis and Research Database (Orchard)
To provide a tool for researching the role of auditory grouping effects in orchestration practice, and thereby the role of timbre as a structuring force in music, a first-of-its-kind online database was created, and an analysis taxonomy and methodology were established. The taxonomy includes grouping processes of three kinds: concurrent, sequential and segmental. These play a role in sonic blends that give rise to new timbres through perceptual fusion, voice separation on the basis of timbre, integration of multiple instrumental lines into surface textures, the formation of orchestral layers of varying prominence based on timbral salience, contrasts based on changes in instrumentation and register, and progressive orchestration in larger-scale gestures. The taxonomy served in the creation of the data model for the database. Scores of 85 orchestral movements from Haydn to Vaughan Williams were analyzed while listening to commercial recordings by experts in teams of two, who analyzed the scores in terms of the taxonomy individually and then compared their results before entering them into the database. A query builder allows for the construction of hierarchical queries on different levels of the taxonomy, and for the specification of composer, piece, movement, and instrumentation. Results of the queries display the annotated score at the appropriate page and provide a sound clip of the corresponding measures from the commercial recording used in the combined score/aural analysis. The database has proved useful in exploring the diversity of each orchestral effect and has the potential to be used for data mining and machine learning for knowledge discovery.
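
To make the query structure concrete, here is a toy Python analogue of such a hierarchical query; the record fields are invented for illustration, and the actual Orchard schema and query builder are richer:

```python
# Toy record structure; one entry per analyzed orchestral effect.
annotations = [
    {"composer": "Debussy", "piece": "La Mer", "movement": 1,
     "process": "concurrent", "effect": "blend",
     "instruments": {"flute", "oboe"}, "measures": (33, 36)},
    # ... thousands more entries ...
]

def query(process=None, effect=None, instruments=None):
    # Hierarchical filter: grouping process, then effect, then required
    # instrumentation (all optional, mirroring a query-builder form).
    for a in annotations:
        if process and a["process"] != process:
            continue
        if effect and a["effect"] != effect:
            continue
        if instruments and not set(instruments) <= a["instruments"]:
            continue
        yield a

hits = list(query(process="concurrent", effect="blend", instruments={"flute"}))
```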

Landon Morrison / Harvard University

Landon Morrison is a College Fellow at Harvard University, where he recently began teaching after completing his PhD in music theory at the Schulich School of Music of McGill University in Montreal. His dissertation, titled "Sounds, Signals, Signs: Transductive Currents in Post-Spectral Music at IRCAM," examines the relationship between contemporary compositional practices, technological development, and psychoacoustics within the context of post-spectral music created at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM). More broadly, his research aims to draw music theory and media studies into an interdisciplinary dialogue that tracks the transductive flow of sounds within new media environments. Recent and forthcoming publications include analytically driven articles in Circuit: musiques contemporaines, Nuove Musiche, and Music Theory Online, as well as a chapter on the history of rhythm quantization to be published in the Oxford Handbook of Time in Music.
Computer-Assisted Orchestration, Format Theory, and Constructions of Timbre
In a multi-authored paper unveiling Jonathan Harvey’s Speakings (2008), Gilbert Nouno et al. document the results of a collaborative IRCAM project driven by the “artistic aim of making an orchestra speak through computer music processes” (2009). The ensuing music dramatizes this aim, depicting an audible program where the orchestra progresses through stages of baby-like “babbling,” adult “chatter,” and finally, “ritual language” in the form of a Tibetan mantra. Turning this narrative on its head, my paper takes Speakings as a point of departure for a genealogical analysis of computer-assisted orchestration techniques, showing how, in order to make an orchestra speak, it was first necessary to make software listen.
Through a close examination of archival documents and “e-sketches,” I follow Harvey’s transition from a hybrid software setup (Melodyne and a custom partial-tracking program) to the newly-developed Orchidée application (Carpentier and Bresson 2010), which notably uses music information retrieval (MIR) methods. I frame this shift in relation to format theory (Sterne 2012), showing how categories used to encode sound files with audio descriptors of low-level timbral attributes (spectral, temporal, and perceptual) are themselves contingent on a number of factors, including: a) sedimented layers of psychoacoustics research grounded in the metrics of timbre similarity (Wessel 1979); b) a delegation of this knowledge to software tools at IRCAM, which one finds already with the implementation of Terhardt’s algorithm for pitch salience in the IANA program (Terhardt 1982); and c) the wider network of institutional negotiations surrounding the establishment of standardized file formats like the MPEG-7 protocol (Peeters 2004).

Zvonimir Nagy / Texas A&M University

Zvony Nagy is a composer and music scholar based in Texas where he serves on the music faculty at Tarleton State University (Texas A&M). He holds a Doctor of Music in Composition degree from Northwestern University. Nagy composes acoustic, choral, electroacoustic, and multimedia music. His compositions are informed by cognitive and computer sciences, with a focus on the relationship between music and embodiment. His compositions also employ computer-assisted processes for music composition, notation, and analysis, as well as self-referential and dynamic systems, along with more intuitive approaches to compositional techniques and processes. Nagy’s music is released on PARMA Recordings, Albany Records, and MSR Classics; a selection of his compositions is published by Aldebaran Editions and Musik Fabrik. His book Embodiment of Musical Creativity: The Cognitive and Performative Causality of Musical Composition (Routledge, 2017) offers an innovative look at the interdisciplinary nature of creativity in musical composition. 
Code as Music, Music as Code: An Introduction to the Music Encoding of Compositions with Object-Oriented Programming
Computer-assisted composition is considered both a tool in the creation, modification, analysis, or optimization of the compositional process and a means for the automated generation of music by a computer. Reception of this approach to compositional creativity has been divided, with some composers working towards a more integrated application: an encoding methodology that fuses the conceptualization of music encoding with the contextualization of encoded music, while allowing the composer to continue playing an active role within this creative space.
In this presentation, the author introduces an approach to computer-assisted composition that aims to provide a more unified approach to encoding music by combining the symbolic music information generation and retrieval process with a digital representation of the musical structure. The former is achieved by the conceptual encoding of given data structures using Abjad, a Python application programming interface that enables composers to encode musical representations and levels of musical structure in an incremental way (e.g., through the assemblage of basic building blocks of musical objects: pitches, chords, rhythms, and meters, as well as dynamics, articulations, etc.). The latter is realized in the form of a typeset score using LilyPond, a music engraving program written in C++ that is integrated with the Abjad package and that, in turn, gives composers direct control over the notation and engraving of the notational objects.
This modular approach to the encoding of a musical composition consists of first selecting the Python data structures and then organizing those conceptualized attributes into functions and classes within the object-oriented programming environment of the Abjad package. The result is a fully engraved musical score that represents the encoded symbolic data structures of music: a contextualized representation of the composition in the form of a musical score. In turn, the output of the Abjad package takes the form of an encoded LilyPond score, making Abjad a programmatic way of generating LilyPond files. The process not only allows for a more interactive encapsulation of the creative process within the environment for algorithmic composition, but also accounts for the formation of compositional inheritances through object-oriented programming.
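
A minimal example of this workflow, assuming a recent Abjad 3.x API, might look as follows; the musical material is arbitrary:

```python
import abjad

# Incremental assembly of low-level musical objects...
notes = [abjad.Note(pitch, abjad.Duration(1, 8))
         for pitch in [0, 2, 4, 5, 7, 9, 11, 12]]  # a C major scale
staff = abjad.Staff(notes)
abjad.attach(abjad.Dynamic("p"), staff[0])           # attach a dynamic
abjad.attach(abjad.Articulation("staccato"), staff[2])

# ...then delegation of engraving to LilyPond.
print(abjad.lilypond(staff))  # the encoded LilyPond score as text
abjad.show(staff)             # invokes LilyPond to render the notation
```

The object-oriented "inheritance" the author mentions enters when such building blocks are wrapped or subclassed in the composer's own Python classes.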

Ben Neill / Ramapo College of New Jersey

Ben Neill is a composer/performer and creator of the Mutantrumpet, a hybrid electro-acoustic instrument. The Mutantrumpet was originally developed with Robert Moog and subsequently at STEIM in Amsterdam. Neill has released eleven albums of his music on labels including Universal/Verve, Thirsty Ear, Astralwerks, and Six Degrees. Performances have included Big Ears Festival, BAM Next Wave Festival, Lincoln Center, Whitney Museum, Getty Museum, Moogfest, Spoleto Festival, Umbria Jazz, Bang On A Can, ICA London, Vienna Jazz Festival, Bing Concert Hall at Stanford, and the Edinburgh Festival. Neill has worked closely with many innovative musicians and artists including La Monte Young, John Cage, John Cale, Pauline Oliveros, Petr Kotik, Rhys Chatham, DJ Spooky, David Behrman, Mimi Goese, Nicolas Collins, and David Wojnarowicz. He is currently an Artist in Residence in the Nokia Bell Labs Experiments in Art and Technology program, and a Professor of Music at Ramapo College of New Jersey.
Fantini Futuro
Fantini Futuro is a new audio-visual performance work for Ben Neill's Mutantrumpet V4, countertenor, Baroque keyboards, and interactive video projections. It is being created for the 64-channel Antechamber at Nokia Bell Labs, where Neill is an Artist in Residence in the Experiments in Art and Technology program.
The piece is based on the music and life of early Baroque trumpeter/composer Girolamo Fantini, who was responsible for bringing the trumpet indoors from the hunt and the battlefield to the realm of art music. Fantini was a musical celebrity in his time and wrote one of the earliest collections of music for trumpet alone, as well as with keyboards. He also pioneered the use of mutes to expand the dynamic range of the instrument for indoor use, along with numerous other playing techniques. Fantini Futuro remixes and collages Fantini's material both compositionally and in real time during performance through live sampling. The work draws connections between the dynamic energy of early Baroque musical and architectural vocabularies and minimalist patterns and processes, reflecting on the improvisatory musical performance practices of Fantini's time through a variety of interactive technologies.
The visual component consists primarily of architectural imagery from places where Fantini lived and performed. The animated imagery is controlled live from Neill's Mutantrumpet, creating the sensation that the performers are situated in virtual worlds built from these historical elements.

Jason Noble / McGill University

Jason Noble’s compositions have been described as “remarkable achievements” and “terrific sonic painting,” with performances across Canada and in USA, Argentina, Mexico, France, Belgium, the Netherlands, Germany, Denmark, and Italy, and numerous publications, recordings, and broadcasts. He has held residencies including Pro Coro Canada at the Banff Centre, the St. John’s International Sound Symposium, the Bathurst Chamber Music Festival, the Edge Island Festival for Choirs and Composers, the Newfoundland and Labrador Registered Music Teachers’ Association, and the Sudbury Symphony Orchestra. Also an accomplished scholar, Jason is currently a postdoctoral researcher at McGill University, working on the ACTOR project (Analysis, Creation, and Teaching of Orchestration). His PhD at McGill was funded by the prestigious Vanier Scholarship (SSHRC). His research appears in Music Perception and Music Theory Online, with forthcoming publications in Organised Sound and the Routledge Companion series. He has presented at numerous national and international conferences and invited guest lectures.
A case study of the perceptual challenges and advantages of homogeneous orchestration: fantaisie harmonique (2019) for two guitar orchestras
“Orchestration” is defined by Stephen McAdams as “the choice, combination or juxtaposition of sounds to achieve a musical end.” This typically invokes a palette of heterogeneous sounds, especially those of the instruments of the symphony orchestra, but McAdams’ definition equally applies to homogeneous sound palettes encountered in ensembles of uniform composition, such as the guitar orchestra. Since orchestral effects predicated on perceptual difference—such as stratification and segmentation—often rely on timbral heterogeneity, orchestrating for homogeneous ensembles presents a different set of challenges where such effects are desirable. On the other hand, since orchestral effects predicated on perceptual similarity—such as blend and textural integration—are facilitated by timbral homogeneity, orchestrating for homogeneous ensembles may present a different set of advantages.
In this presentation, I will discuss my composition fantaisie harmonique (2019), for two guitar orchestras (one of classical guitars and another of electric guitars). Several musical devices are exploited which would be difficult or impossible to achieve with equal effectiveness in a more heterogeneous ensemble, including: (1) an elaborate tuning system combining elements of equal temperament and just intonation, in which each orchestra is divided into six different tuning groups, (2) an extensive hocketing section, (3) massed sonorities exploiting a variety of perceptual principles. The piece was conceived with spatial deployment of the instrumental groups in mind, and a forthcoming recording of the piece will use spatialization as a perceptual cue to distinguish groups, filling one of the traditional roles of timbral heterogeneity in more orthodox approaches to orchestration.
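
As an illustration of the arithmetic behind such a tuning plan (the six group assignments below are hypothetical; the actual scheme of fantaisie harmonique is the composer's own), each group can be assigned the deviation of one just interval from its nearest 12-TET degree:

```python
import math

def cents(ratio):
    return 1200.0 * math.log2(ratio)

# Hypothetical assignment of one just interval per tuning group.
groups = {"group 1": (9, 8), "group 2": (5, 4), "group 3": (11, 8),
          "group 4": (3, 2), "group 5": (13, 8), "group 6": (7, 4)}

for name, (p, q) in groups.items():
    just = cents(p / q)
    nearest_12tet = round(just / 100.0) * 100.0
    print(f"{name}: {p}/{q} deviates {just - nearest_12tet:+6.1f} cents "
          f"from 12-TET; retune the group by that offset")
```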

James O'Callaghan

James O'Callaghan is a composer and sound artist based in Montréal. His music has been described as "very personal... with its own colour anchored in the unpredictable" (Goethe-Institut). His work spans chamber, orchestral, live electronic and acousmatic idioms, audio installations, and site-specific performances. It often employs field recordings, amplified found objects, computer-assisted transcription of environmental sounds, and unique performance conditions. His music has been the recipient of over thirty prizes and nominations, including the Salvatore Martirano Award (2016), the ISCM Young Composer Award (2017), and the Jan V. Matejcek Award (2018), and nominations for a JUNO Award (2014) and the Gaudeamus Award (2016). Active as an arts organiser, he co-founded and co-directed the Montréal Contemporary Music Lab. Originally from Vancouver, he received a Bachelor of Fine Arts degree from Simon Fraser University in 2011, and a Master of Music degree from McGill University in 2014.
Alone and unalone: conceptual concerns in simultaneous headphone and speaker diffusion
Two recent related works of mine, Alone and unalone for sextet and electronics, and With and without walls (acousmatic), employ simultaneous loudspeaker and in-ear diffusion, with headphones supplied for the audience. The works examine the relationship between individual and collective experience. When we listen to music together, as in a concert, we share a common reality, but we simultaneously have individual, unsharable experiences in our own heads. Confronting the philosophical problem of other minds, the pieces endeavour to teeter between solipsism and the kind of empathy-building that occurs through art.
This talk is designed to accompany the performance of Alone and unalone by Ensemble Paramirabo on April 3, 2020 as part of the symposium. In it, I will discuss the technical configuration of the headphone diffusion system I have designed, as well as compositional strategies for combining and moving sound between in-ear and loudspeaker spatialization. The system provides unique immersive possibilities in spatial imagery: I will illustrate several of these possibilities with examples from the work and discuss the compositional process, as well as the conceptual and artistic motivations behind these strategies.

Ofer Pelz and Matan Gover / CIRMMT

Ofer Pelz composes music for diverse combinations of instruments and electroacoustic media, and he is also an active improviser. He is the co-founder, with Preston Beebe, of the Whim Ensemble. He studied composition, music theory, and music technology in Jerusalem, Paris, and Montreal. Pelz's work has been recognized with many international prizes, including two ACUM awards and the Ernst von Siemens Grant. The Meitar Ensemble, Cairn Ensemble, Ardeo String Quartet, the Israel Contemporary Players, Le Nouvel Ensemble Moderne, Architek Percussion, Geneva Camerata, and Neue Vocalsolisten are among the ensembles that have played Pelz's music. His music is played regularly in Europe, the USA, Canada and Israel, at La Biennale di Venezia and ManiFeste (IRCAM/Centre Pompidou), among others.
Matan Gover is a multi-disciplinary musician and software developer. He is currently combining his two passions in McGill University’s Music Technology department, where he researches computational processing of vocal music supervised by Prof. Philippe Depalle. Matan won the America-Israel Cultural Foundation scholarship for piano performance, and completed his B.Mus. in orchestral conducting summa cum laude in Jerusalem. Matan is a professional choir singer, and has written orchestrations and arrangements for ensembles such as the Jerusalem Symphony Orchestra and Jerusalem Academy Chamber Choir. Matan has been a professional software developer since the age of 15. He now works at LANDR Audio Inc. on an A.I.-powered music processing engine that performs automatic music mastering.
Sound Tracks: An interactive video game composition
In this demo, we will present and discuss our piece entitled Sound Tracks, originally written for a live@CIRMMT concert and performed by Ensemble Aka. Sound Tracks lies on the continuum between a video game and a musical composition. The game's graphical user interface consists of a set of 'tracks', one track per musician. Each track contains moving graphical symbols that represent musical gestures, and these symbols approach the viewer from the horizon. When a symbol reaches a musician, the musician must play the corresponding musical gesture. The interface is inspired by the well-known video game Guitar Hero.
The graphical interface and game rules replace the traditional musical score: the unfolding of a performance is not predetermined as in most classical music but plays out in real-time according to rules, chance operations, and improvisatory decisions taken by the performers.
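
A bare-bones version of the underlying data flow might look like this (Python; the class, field, and gesture names are invented for illustration, not the piece's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class GestureEvent:
    # One moving symbol on a player's track.
    track: str              # which musician's track
    gesture: str            # e.g. "tremolo", "air noise", "staccato burst"
    hit_time: float         # seconds until the symbol reaches the play line
    approach: float = 4.0   # seconds the symbol travels in from the horizon

def spawn_schedule(events):
    # A symbol must appear `approach` seconds before its hit time.
    return sorted(((max(e.hit_time - e.approach, 0.0), e) for e in events),
                  key=lambda pair: pair[0])

score = [GestureEvent("clarinet", "tremolo", 6.0),
         GestureEvent("percussion", "staccato burst", 7.5)]
for t, event in spawn_schedule(score):
    print(f"t={t:.1f}s: show '{event.gesture}' on {event.track}'s track")
```
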
In our demo, we will demonstrate a performance or recording of this work, as well as discuss the ideas that underpin its creation and questions that arose while performing it with several ensembles. The following themes will be discussed:
- Gamified composition: How does replacing traditional music notation and musical development with a game-based interface affect the mindset of the musicians and the audience during a performance?
- Improvisation vs. predetermination: How do musicians improvise in a given musical framework? How much information should be dictated by a score, versus leaving room for interpretation?
- Timbre and orchestration: How does a small set of musical gestures spread across multiple instruments get combined into complex timbres and textures? How do different instruments interpret the same musical ideas?
- The technology behind this game piece: the implementation and control of the application in performance.

Laurie Radford / University of Calgary

Laurie Radford is a Canadian composer, sound artist, music technologist, educator and researcher who creates music for diverse combinations of instruments and voices, electroacoustic media, and performers in interaction with computer-controlled signal processing of sound and image. His music fuses timbral and spatial characteristics of instruments and voices with mediated sound and image in a sonic art that is rhythmically visceral, formally exploratory and sonically engaging. His music has been performed widely and he has received commissions and performances from ensembles including the Aventa Ensemble, Esprit Orchestra, New Music Concerts, Le Nouvel Ensemble Modern, L'Ensemble contemporain de Montréal, Meitar Ensemble, Paramirabo, Thin Edge New Music Collective, Trio Fibonacci, the Penderecki, Bozzini and Molinari String Quartets, and the Winnipeg, Calgary, Edmonton and Montréal Symphony Orchestras. Radford has taught composition, electroacoustic music and music technology at McGill University, Concordia University, Bishop’s University, University of Alberta, City University (London, UK), and is presently Professor of Sonic Arts and Composition at the University of Calgary.
Getting into Place/Space: The Pedagogy of Spatial Audio
Human acquisition of skills and knowledge about the spaces we inhabit is linked to the development of proprioceptive skills from the earliest moments of physical and psychological negotiation with the world around us. The acquisition of "spatial intelligence" extends to the development of skills in comprehending the acoustic spaces through which we move, unconsciously and consciously measuring, comparing and committing to memory the sound signals and sources fleetingly inhabiting these spaces, while building a library of sounding spaces to which we return and refer. Musicians, composers and sound artists develop their craft via time-honoured instruction in listening intently to instruments and sources of sound, considering their means of activation and the many parameters of sound generation moving in time. It is taken for granted that this sonic activity occurs within a particular bounded space, and that that space affects the perception of sound and, by extension, the manner of performance practice and sound activation required. Yet, aside from certain electroacoustic studies that consider the ramifications of the technological mediation of space through stereo and multichannel sound projection environments, immersive cinema and video game design research, and spatial ear-training methods for audio engineers, there is very little overt instruction in spatial listening: a directed and methodical study, targeted at musicians and sound artists, of acoustic space as experienced in the myriad places and spaces in which music and sound art are presented. The current proliferation of dedicated high-density loudspeaker arrays for spatial design, as well as improved and accessible technologies for capturing, preserving and reproducing the complexities of acoustic spaces, provides a fertile environment in which to initiate and develop such a pedagogy of spatial listening. This presentation will consider some of the fundamental parameters of spatial listening and the perception of space that contribute to the skills involved in the compositional design and control of spatial audio. An outline of a methodology of spatial sound pedagogy is proposed and illustrated with a series of spatial audio exercises that consider experiential learning, terminology and concepts, technologies for deployment, and creative application.
Eric Raynaud / IRCAM
Fraction, whose real name is Eric Raynaud, is a sound artist and composer of experimental electronic music and audio-visual works, born in Brittany and living in Paris. His work is particularly concerned with forms of sound immersion and their interactions with visual media. His first production appeared on the German label Shitkatapult before he joined the Parisian label InFiné in 2008. With the support of the CNC-Dicream (2010), he created the immersive audio-visual performance DROMOS (used by Apple). In the wake of these ambitious works, he aims in particular to forge links between 3D sound immersion, contemporary art and architecture, with a particular interest in questions involving science and the environment. His work has been presented at many national and international festivals and events for electronic, experimental, digital and audiovisual culture, such as MIRA, MUTEK, GogBot, MEQ, Maintenant, Sonica, Lab360, Z-KU, Gaîté Lyrique, SAT Montreal, Resonate, Kikk, etc.
 

Jacques Rémus / Ipotam Mécamusique

Originally a biologist (agronomist and marine researcher), Jacques Rémus chose at the end of the seventies to devote himself to music and the exploration of various forms of creation. A saxophonist, he took part in the founding of the group Urban Sax. He has also appeared in many concerts, from experimental music (Alan Silva, Steve Lacy) to street music (Bread and Puppet). After studies at music conservatories, the G.R.M. and the G.M.E.B., he wrote music for dance, theater, "total shows," television and cinema. He is above all the author of installations and shows featuring sound sculptures and musical machines, such as the "Bombyx," the "Double String Quartet," "Concertomatique," "Leon and the Hands' Song," "Carillons" nos. 1, 2 and 3, and the "Washing Machines Orchestra," as well as those presented at the Musée des Arts Forains (Paris). Since 2014, his work has focused on the development of "Thermophones."

 
Lindsey Reymore / Ohio State University
Lindsey is in the final semester of the PhD program in Music Theory at The Ohio State University. Her research focuses on timbre semantics and cross-modal language; other recent projects address multimodal emotion associations in music and dance, instrument-specific absolute pitch, and seventeenth-century harmony. In 2018, Lindsey received the Early Career Researcher Award from the European Society for the Cognitive Sciences of Music (ESCOM) at the International Conference on Music Perception and Cognition and is co-chair of the upcoming conference to be held at Ohio State, May 11-15, "Future Directions of Music Cognition."
Instrument Qualia, Timbre Trait Profiles, and Semantic Orchestration Analysis
I present a method of computational orchestration analysis built from empirical studies of timbre semantics. In open-ended interviews, 23 musicians were asked to describe their phenomenal experiences of the sounds of 20 Western instruments. A content analysis of the transcribed interviews suggested 77 qualitative categories underlying the musicians’ descriptions. In a second study, 460 musician participants rated subsets of the same 20 instruments according to these 77 categories. Principal Component Analyses and supplementary polls produced a final 20-dimensional model of the cognitive linguistics of timbre qualia. The model dimensions are: rumbling/low, soft/singing, watery/fluid, direct/loud, nasal/reedy, shrill/noisy, percussive, pure/clear, brassy/metallic, raspy/grainy, ringing/long decay, sparkling/brilliant, airy/breathy, resonant/vibrant, hollow, woody, muted/veiled, sustained/even, open, and focused/compact. In a third study, 243 participants rated subsets of a group of 34 orchestral instruments using the 20-dimensional model. These ratings were used to generate Timbre Trait Profiles for each of these instruments, which serve as the foundation for the orchestration analysis method. A computational program is under development (anticipated Feb-March 2020) to generate a semantic orchestration plot given a musical piece as input. The analysis will provide information on how the semantic dimensions of timbre evolve throughout a work, initially using a model that combines Timbre Trait Profiles according to the orchestration and applies intensity modifiers based on dynamic indications. In addition to the semantic orchestration plots, I aim to translate the musical data into a real-time, animated visual analysis that can be played along with the piece to illustrate dynamic timbral changes resulting from orchestration.
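As a rough illustration of the combination step described above, the sketch below is purely hypothetical: the dimensions, instrument profiles, and dynamic weights are invented placeholders (the real profiles come from the participant ratings, and the real program reads a full score). It shows only the idea of summing each sounding instrument's Timbre Trait Profile, weighted by an intensity modifier derived from its dynamic marking.

```python
# A hypothetical sketch of the combination step; all values are invented.

# A tiny 4-dimensional stand-in for the 20-dimensional timbre qualia model.
DIMS = ["rumbling/low", "shrill/noisy", "airy/breathy", "brassy/metallic"]

# Invented per-instrument Timbre Trait Profiles (real ones come from ratings).
PROFILES = {
    "horn":  [0.6, 0.2, 0.3, 0.8],
    "flute": [0.1, 0.3, 0.8, 0.1],
}

# Assumed intensity modifiers for dynamic indications.
DYNAMIC_WEIGHT = {"pp": 0.4, "mf": 1.0, "ff": 1.6}


def semantic_profile(sounding):
    """Combine profiles of simultaneously sounding (instrument, dynamic) pairs."""
    total = [0.0] * len(DIMS)
    for instrument, dynamic in sounding:
        weight = DYNAMIC_WEIGHT[dynamic]
        for i, value in enumerate(PROFILES[instrument]):
            total[i] += weight * value
    norm = sum(total) or 1.0                     # avoid division by zero
    return {dim: round(t / norm, 3) for dim, t in zip(DIMS, total)}


# One moment in a score: horn fortissimo against a pianissimo flute.
print(semantic_profile([("horn", "ff"), ("flute", "pp")]))
```

Evaluating such a profile at every beat or bar would yield the kind of semantic orchestration plot described above.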

Monique Savoie / Société des Arts Technologiques

A video artist oriented towards installation art and the electroacoustic music of the 80s, Monique Savoie has been involved in the Montreal avant-garde in various capacities across different artistic domains. In 1995, she led the 6th International Symposium on Electronic Art, ISEA95 Montreal, an event which was key to placing Montreal on the international stage for digital arts. In 1996, she founded the Society for Arts and Technology (SAT), which she heads while directing its artistic and development sectors. With more than 36,000 members, the SAT provides a networking hub at a time when digital culture is radically transforming our ways of thinking, creating and diffusing art. Since 1996, she has built a multidisciplinary centre for research, creation, production, training and diffusion dedicated to the advancement and preservation of digital culture. The SAT is also a major centre for Montreal's technological arts, with an international reputation in the domain. Recognized for her work by the city of Montreal, Monique Savoie was awarded the “Bâtisseuse du 21e siècle" prize in 2014 and was made a Chevalier (Knight) of the Ordre des arts et des lettres de la République française (Order of Arts and Letters) of France in 2017. From August 2016 until 2022, she represents the SAT as a partner of the Gaîté Lyrique in Paris.

Marlon Schumacher and Núria Gimenez Comas

Marlon Schumacher’s academic background is multi-disciplinary, with pedagogical and artistic degrees in music theory, digital media and composition (under Marco Stroppa) from the UMPA Stuttgart, and a PhD in Music Technology from McGill University (co-supervised by IRCAM). His main research topics are spatial sound perception/synthesis, computer-aided composition and musical interaction; he has contributed to the field through scientific publications, academic services, several open-source software releases and artistic/research projects. Marlon Schumacher is an active performer/composer creating works for a broad variety of media and formats, including instrumental and intermedia pieces, crowd performances and installations. He currently works as professor for music informatics at the Institute for Music Informatics and Musicology of the University of Music Karlsruhe, where he has curated an international lecture series and, as director of the ComputerStudio, designed research labs. He is an associate member of, among others, CIRMMT, IDMIL and RepMuS, and an organizing member of the annual conference on Music and Sonic Arts (MUSA).
Sculpting space
This collaborative artistic research project has explored and developed the notion of spatial sound synthesis from artistic and perceptual viewpoints, truly amalgamating spatialization algorithms with sound synthesis engines according to an integral musical idea or approach. From a research viewpoint, the study of the mechanics of auditory perception should help us gain a better understanding of how these (musically mostly separate) dimensions interact and give rise to novel sound sensations.
The project explores spatialization beyond the prevalent concept of point-source spatialization: extended sources (with extension and shape), spectral spatialization techniques, and the notion of a "synthetic soundscape" in the sense of working with densities and distributions in various synthesis dimensions (in frequency as well as in space). From an artistic perspective, this approach can be seen as following a tradition of exploiting psychoacoustic principles as a musical model (e.g., the French spectral school) in order to create novel, sometimes paradoxical, sound compositions and musical forms.
To this end, for the technical realization, composer Núria Gimenez-Comas has used graphical descriptions of mass densities with the sound synthesis and spatialization tools in OpenMusic.
This approach is made possible through…
Marlon Schumachers contribution will be dedicated to r&d of new tools informed by cognitive mechanisms studied in spatial auditory perception and scene analysis. more precisely the OMChroma/OMPrisma framework, combining synthesis/spatializationmodels, perceptual processing, and room effects.
The novel sound-processing framework of the presented tools allows building complex dsp-graphs -directly in the composition environment-. This extends the compositional process to a virtual “lutherie” or “orchestration”, blurring the lines between musical material and instruments. The framework provides the possibility of designing controls for higher-level spatial attributes, such as diffusion, sharpness, definition, etc.
Concretely, we propose combining spatialization systems in an unorthodox loudspeaker setup (see technical rider) to synthesize a broad spectrum of “spatial sound objects” ranging from close-proximity sound sources to distant textures or ambiances, which the listener is free to individually explore and appreciate.
The idea is to create not only the sensation of given sound sources projected into space but multiple spatial sound morphologies around and within the audience, that is densifying-increasing, in different emerging sound layers.
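The following toy sketch is purely illustrative (the actual work uses graphical density descriptions with the OMChroma/OMPrisma tools in OpenMusic, not this code): it shows one way to read "densities and distributions in frequency as well as in space," by sampling a density description into grain parameters that a synthesis/spatialization engine could consume.

```python
# Illustrative only: sampling a mass-density description into grains.
import math
import random

def sample_cloud(n, f_low=200.0, f_high=4000.0, center_az=0.0, az_spread=60.0):
    """Sample n grains: log-uniform in frequency, normally distributed
    in azimuth (degrees), i.e., a density in frequency AND in space."""
    grains = []
    for _ in range(n):
        freq = math.exp(random.uniform(math.log(f_low), math.log(f_high)))
        azimuth = random.gauss(center_az, az_spread)
        grains.append({"freq_hz": round(freq, 1),
                       "azimuth_deg": round(azimuth, 1)})
    return grains

# A dense, spatially narrow layer versus a sparse, wide one:
narrow = sample_cloud(200, az_spread=10.0)
wide = sample_cloud(20, az_spread=90.0)
print(narrow[:3], wide[:3])
```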

Nadine Schütz / IRCAM

Nadine Schütz presents herself as a sound architect, or environmental interpreter. Grounded in both theoretical and poetic research, she explores sound space through compositions, performances, installations and ambiences that link space and listening, landscape and music, the urban and the human. Her works have been presented in Zurich, Venice, Naples, New York, Moscow, Tokyo and Kyoto. Among her current projects are a sound device for the TGI square in Paris, elementary instruments for the Pleyel Urban Crossing in Saint-Denis, and the Jardin des Réflexions for the renovated Place de La Défense. For four years she headed the multimedia laboratory of the Landscape Institute at ETH Zurich, where she installed a new studio for the spatial simulation of soundscapes during her doctorate, Cultivating Sound, completed in 2017. She is currently a guest composer at IRCAM at the Centre Pompidou, Paris.
Presentation of her artistic work and collaboration leads with IRCAM research teams
1. “Composing with echoes”: investigate how the acoustic footprint of a place can be incorporated into new sound content and into different spatial and temporal scales of a composition for the same place. (EAC)
2. Instrument - Space: the search for a compositional correlation between place, architectural/physical structures and sound events through their respective reading/design/definition as instrument and space - sound source and resonance body - at the same time. (EAC / S3AM)
3. Acoustic-semantic: develop the SpeaK tool towards a (spatial) lexicology of sound scenes, in order to create acoustic and semantic links between the description of a sound scene and a single sound / an individual sound event in the composition. (PDS / EAC)
4. Prefiguration in the studio / Mastering outside the studio / Multi-modal aspects, in particular the interactions between the auditory and visual: methodological issues in the process of creating works for specific places / environments.

Georgia Spiropoulos / IRCAM

Trained in piano and all the other disciplines surrounding composition in Athens, Georgia Spiropoulos also practices jazz and is passionate about traditional Greek music. She studied with P. Leroux and took classes with M. Levinas. During the Cursus at IRCAM, she worked with J. Harvey, T. Murail, B. Ferneyhough, P. Hurel, and M. Stroppa. Spiropoulos completed her Master’s degree at the EHESS, in collaboration with anthropologists and Hellenists. Her focus on the oral origins of music is fueled by other fields of exploration: improvisation, performance and multidisciplinary art, voice, language. During the festival ManiFeste-2015, IRCAM dedicated a concert-portrait to her. She was a Distinguished Visiting Chair in Music and Acting Director of the McGill Digital Composition Studios at McGill University, Schulich School of Music.
 

Christopher Trapani / University of Texas

The American/Italian composer Christopher Trapani was born in New Orleans, Louisiana. He earned a Bachelor’s degree from Harvard, a Master’s degree at the Royal College of Music, and a doctorate from Columbia University. He spent a year in Istanbul on a Fulbright grant, studying microtonality in Ottoman music, and nearly seven years in Paris, including several working at IRCAM. Christopher’s honors include the 2016-17 Rome Prize, a 2019 Guggenheim Fellowship, and the 2007 Gaudeamus Prize. He has received commissions from the Fromm Foundation (2019), the Koussevitzky Foundation (2018), and Chamber Music America (2015). His debut CD, Waterlines, was released on New Focus Recordings in 2018. He currently serves as Visiting Assistant Professor of Composition and Interim Director of the Electronic Music Studios at The University of Texas at Austin’s Butler School of Music.
The orchestra augmented: Spinning in Infinity
Spinning in Infinity (written for Festival Présences 2015, RIM: Greg Beller) presents a unique approach to spatialization and orchestration. A spatialized array of 12 loudspeakers is embedded in the orchestra to produce a fusion between electronic and acoustic sounds. The capabilities of acoustic instruments are enhanced by a complementary microtonal backdrop that fuses with the onstage players. These kaleidoscopic orchestrations are developed using concatenative synthesis; samples are chosen and retuned in real-time to create a kind of sonic color wheel, with 2-dimensional spiral-shaped trips through timbre space in CataRT—between brass and winds, for example, or between bright and dull, pitch and noise...
These reworked excerpts are then synchronized with the orchestra using the adaptive tempo-tracking of Antescofo, creating a sort of augmented orchestra whose dimensions are in constant flux: a dozen microtonal flutes can appear from nowhere, replaced the next second by a brass choir shaded by an unreal arsenal of mutes, or a peal of chimes triggered by a single live percussionist—waves of color in constant motion…
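As a schematic sketch of the "sonic color wheel" idea (an assumed simplification: the corpus, descriptors, and numbers below are invented, and the piece itself uses CataRT and Antescofo rather than this code), one can walk a spiral through a 2-D timbre space and pick the nearest sample at each step:

```python
# Invented corpus and descriptors; illustrates nearest-neighbour selection
# along a spiral trajectory, in the spirit of concatenative synthesis.
import math

# (sample name, brightness, noisiness), descriptor values in [0, 1].
CORPUS = [
    ("muted trumpet", 0.30, 0.20),
    ("flute",         0.50, 0.40),
    ("crotale",       0.90, 0.10),
    ("cymbal scrape", 0.80, 0.90),
]

def nearest_sample(x, y):
    """Pick the corpus unit closest to the target point in descriptor space."""
    return min(CORPUS, key=lambda s: (s[1] - x) ** 2 + (s[2] - y) ** 2)[0]

def spiral(steps=8, turns=2.0):
    """Yield points spiralling outward from the centre of the unit square."""
    for i in range(steps):
        t = i / (steps - 1)
        radius = 0.5 * t
        angle = 2.0 * math.pi * turns * t
        yield 0.5 + radius * math.cos(angle), 0.5 + radius * math.sin(angle)

for x, y in spiral():
    print(f"({x:.2f}, {y:.2f}) -> {nearest_sample(x, y)}")
```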
 

Roxanne Turcotte / CMCQ SMCQ

Active as a composer and sound designer since 1980, Turcotte has built her aesthetics around a cinema-like art of integration, acousmatic and immersive music. She is regularly invited to sit on composition juries and leads composition training sessions and workshops. Turcotte has received grants from the Canada Council for the Arts (CCA) and the Conseil des arts et des lettres du Québec (CALQ). Her music has won numerous awards and distinctions, including prizes in the USA (1987, ’89), a nomination for the Oslo World Music Days (LCL, 1990), the Hugh Le Caine prize (Canada, 1985, ’89), the Luigi Russolo prize (Italy, 1989), the 6th Radio Art Competition (France, 2005), and Bourges (France). Her electroacoustic works have been programmed by several events: Florida Electroacoustic Music Festival (1996), Montréal en lumière (2000), Bourges (2003), Perpignan (2000), Futura (2006), Marseille (2002), Barcelona (2004), Geneva (2005), Akousma, 16 tracks (Bestiaire), Aix-en-Provence (2007), Edmonton (2014), Montpellier (Festival Klang, 2017), and Festival MNM 2019.
 

Hans Tutschku / Harvard University

Hans Tutschku is a composer of instrumental and electroacoustic music. In 1982 he joined the “Ensemble for intuitive music Weimar” and later studied theatre and composition in Berlin, Dresden, The Hague, Paris, and Birmingham. He has collaborated on film, theatre and dance productions, and participated in concert cycles with Karlheinz Stockhausen. Since 2004 he has directed the electroacoustic studios at Harvard University. Improvisation with electronics has been a core activity for the past 35 years. He has won several international competitions, among others: Hanns Eisler Preis, Bourges, CIMESP Sao Paulo, Prix Ars Electronica, Prix Noroit, Prix Musica Nova, ZKM Giga-Hertz, CIME ICEM and Klang!. In 2005 he received the culture prize of the city of Weimar. Besides his regular courses at the university, he has taught international workshops for musicians and non-musicians on aspects of art appreciation, listening, creativity, composition, improvisation, live electronics, and sound spatialization in more than 20 countries.
Combining database sound creation with ambisonics spatialization
Working with large sound databases to compose dense sonic textures has been at the core of my research for the past years. I am now combining the output of a multi-voice playback engine, which produces hundreds of voices simultaneously, with their spatialization in an ambisonics dome, up to 7th order. The entire environment is programmed in Max/MSP, using the bach and cage libraries as well as several different spatialization tools, including Ircam Spat and the ICST tools. Models of flocking birds and networks of physical masses are used to guide the spatial aspects of sound clouds.
The presentation will demonstrate the creation of the database, the search engine, possibilities of symbolic sound representations and their editing possibilities, as well as my approach to sound spatialization.
The talk will be combined with a performance of my acousmatic composition ‘dark matter – avalanche’ (2018), a tribute to Iannis Xenakis.
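To give a flavour of the flocking models mentioned above, here is a minimal boids-like sketch. It is an assumption and a simplification (only a cohesion rule, in Python rather than the actual Max/MSP environment): voice positions drift as a flock, and such positions could be streamed to a spatializer.

```python
# Simplified flocking (cohesion only), as a stand-in for the models
# that guide sound-cloud spatialization in the actual Max/MSP system.
import random

class Voice:
    """One voice of the cloud, with a 3-D position and velocity."""
    def __init__(self):
        self.pos = [random.uniform(-1.0, 1.0) for _ in range(3)]
        self.vel = [random.uniform(-0.05, 0.05) for _ in range(3)]

def step(flock, cohesion=0.02, damping=0.95):
    """Advance the flock one frame: every voice drifts toward the centroid."""
    center = [sum(v.pos[i] for v in flock) / len(flock) for i in range(3)]
    for v in flock:
        for i in range(3):
            v.vel[i] = damping * v.vel[i] + cohesion * (center[i] - v.pos[i])
            v.pos[i] += v.vel[i]

flock = [Voice() for _ in range(8)]     # a stand-in for hundreds of voices
for _ in range(100):
    step(flock)                         # positions would be sent to a panner
print([[round(c, 2) for c in v.pos] for v in flock[:3]])
```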

Marcel Zaes

Marcel Zaes (b. 1984) is an artist, performer, and composer, holding master’s degrees in media art and music composition from Bern University of the Arts and Zurich University of the Arts, with additional composition studies with Peter Ablinger and Alvin Curran. Currently, he is pursuing his Ph.D. in Music & Multimedia Composition at Brown University. Both as a scholar and as an artist, he explores the ways in which rhythm forms the basis for community; that is, rhythm affords the sociality that is traditionally called “making music together” or “dancing together,” even when no such action is involved at all. He investigates mechanical forms of rhythm, their politics and their socio-cultural significance within an interdisciplinary framework that encompasses sound and media studies, new media, critical race studies, and performance and dance studies.
Resisting the Grid — Performing Asynchrony
Much of today’s music production builds on mechanical timekeeping, be it the metronome, drum machine, step sequencer, sampler, the quartz clock in the CPU or the visual grid in any contemporary DAW. As an artist-scholar I am deeply engaged with such mechanical forms of rhythm, and with the techniques that emerge at the boundaries between these rhythm machines and their human users. As a PhD candidate, I am currently composing new music (mixed electronic-instrumental music) which — and here lies my primary motivation — challenges the modernist hailing of the mechanical, of the grid, and of synchrony as the major accomplishments of the industrial revolution. My work encompasses the design of compositional-technological tools (in Max/MSP and Python 3) that emphasize the anti-tropes of timekeeping: the temporal lag (Rebecca Schneider), the break (Fred Moten), rupture (Tricia Rose), failure (Jack Halberstam), the humanizing of the machine (Louis Chude-Sokei), and the ambiguous politics of irregularity. In my artistic work, I have repeatedly detached the creation of inner sound texture from its more “social” temporal shape (Tara Rodgers); for instance, I have had viola players provide continuous timbre while a digital algorithm applies amplitude envelopes and shapes the perceived rhythm along irregular grids. In my presentation, I will outline my project and play examples from 1-2 of my most recent musical works, most likely including “Pulsations” and “The December Sketch,” using these examples as room-making devices to think about “resistance” and alternative temporal shapes.
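A minimal sketch of an "irregular grid" in the spirit described above (a hypothetical fragment, not the author's actual tools, which are built in Max/MSP and Python 3): a metronomic pulse train where every pulse rushes or drags by a random lag, so the grid remains audible yet is resisted.

```python
# Hypothetical sketch: pulses that resist the grid via random displacement.
import random

def irregular_grid(n_beats, period=0.5, max_lag=0.12, seed=1):
    """Return onset times in seconds: a metronomic grid, each pulse
    displaced by a random lag within +/- max_lag."""
    random.seed(seed)
    return [round(k * period + random.uniform(-max_lag, max_lag), 3)
            for k in range(n_beats)]

# Eight pulses at 120 BPM, each rushing or dragging by up to 120 ms;
# these could trigger the amplitude envelopes described above.
print(irregular_grid(8))
```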

Tiange Zhou / UC San Diego

Born in China in 1990, Tiange Zhou is a composer, photographer, designer, and improvisational dancer. She is interested in interactive audiovisual art and other integrated art forms. Her acoustic compositions, electronic works, and installation art pieces have been performed and exhibited across Asia, Europe and North America. She was also invited as a speaker to present her research on audiovisualogy at the Ircam Forum Shanghai 2019. Tiange is pursuing her Ph.D. in composition at the University of California, San Diego, where she studies composition with Prof. Roger Reynolds. She completed her master's degree at Yale University and a Bachelor's degree at the Manhattan School of Music, and completed an exchange program at the Staatliche Hochschule für Musik und Darstellende Kunst, Stuttgart. Besides independent projects, Tiange participates in numerous collaborative projects with artists and scientists in different genres. She has carried out several projects on contemporary social-psychological dilemmas that concern her.

LUX FLUX: Designing sound and light work in Max/MSP through DMX
When musicians talk about interactive audio-visual creations in recent years, most of the time they mean how sound can drive the parameters of video components. It is easy to forget that lighting design has accompanied live music performance since the beginning of theatre history. The nature of light is closer to that of sound: both have abstract characters and can carry artistic continuity while having a specific narrative aspect. Musicians often use lighting terminology to describe sound, such as bright, dim, colorful, pale, shadow, etc., because of these similarities.
On the other hand, it should also be exciting to compose lighting events together with music over time by applying sound-design methodologies.
In this demonstration, I will share how to make intriguing lighting designs with Max/MSP programming through a USB-to-DMX interface. I will share several easy-to-learn Max patches that help audiences understand the primary methods of lighting design and its interactivity with audio signal processing. I will present my audio-lighting work Sexet during the demo, using six LED lights as a chamber ensemble.
DMX is a lighting control protocol that gives users complete control over their lighting needs. The technology for accessing this protocol is very close to MIDI control; therefore, it is very convenient to use Max/MSP to program exciting creative works.
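As a language-neutral illustration of why DMX is convenient to drive from an environment like Max/MSP (a sketch under the assumption of a simple USB-to-DMX serial interface; the demo itself uses Max patches): a DMX universe is a repeating frame of up to 512 one-byte channel values, so mapping audio amplitude to light is a matter of writing scaled values into that frame.

```python
# Illustrative sketch: building a DMX frame from an audio amplitude.
def amplitude_to_dmx(amplitude, channels=(1, 2, 3), universe_size=512):
    """Write a 0..1 audio amplitude into a DMX frame as 0..255 values.
    Slot 0 is the DMX start code (0); slots 1..512 are channel levels."""
    frame = bytearray(universe_size + 1)
    level = max(0, min(255, int(amplitude * 255)))
    for channel in channels:
        frame[channel] = level          # e.g. the R, G, B channels of one LED
    return frame

frame = amplitude_to_dmx(0.7)
print(list(frame[:6]))
# A real sender streams such frames continuously, e.g.:
# serial_port.write(frame)   # hypothetical USB-to-DMX serial interface
```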