Posted by: bwschwartz, 2 years, 3 months ago
While musical synthesizers have become increasingly affordable, the barrier to entry in terms of knowledge remains: it can be difficult to craft a desired sound from scratch without signal processing knowledge and extensive experience with electronic instruments.
This presentation will address that problem, proposing a novel paradigm that combines Automatic Synthesizer Programming (ASP) with Musical Source Separation (MSS), drawing on work from the presenter's Master's Thesis in Music Technology at NYU Steinhardt (completed Spring 2022 under the advisement of Dr. Brian McFee). A Python prototype will be introduced, which takes a polyphonic audio file containing a target bass sound and returns the parameters for a Max/MSP additive synthesizer that approximate that timbre.
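The two-stage pipeline described above can be sketched as a pair of function signatures. This is a minimal illustration, not the presenter's actual implementation: the function names are hypothetical, the separation stage is a pass-through stub standing in for a neural MSS model, and the parameter-estimation stage stands in for the neural ASP module with a crude FFT-peak pitch estimate.

```python
import numpy as np

def separate_bass(mix: np.ndarray, sr: int) -> np.ndarray:
    """MSS stage (stub): isolate the bass stem from a polyphonic mix.
    The real prototype uses a neural source-separation model here."""
    return mix  # placeholder: pass-through

def estimate_synth_params(bass: np.ndarray, sr: int) -> dict:
    """ASP stage (stub): map the isolated bass to synthesizer controls.
    The real prototype predicts synth parameters with a neural network;
    here we only estimate f0 crudely from the FFT magnitude peak."""
    spec = np.abs(np.fft.rfft(bass))
    f0 = np.argmax(spec) * sr / len(bass)
    return {"f0_hz": f0}

# One second of a lone 55 Hz "bass" tone as a stand-in for a mix
sr = 16000
t = np.arange(sr) / sr
mix = np.sin(2 * np.pi * 55.0 * t)
params = estimate_synth_params(separate_bass(mix, sr), sr)
```

In the actual system, the dictionary of estimated controls would be sent on to the Max/MSP synthesizer rather than inspected directly.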
The presentation will discuss the motivation behind this new paradigm, which is intended to provide an intuitive sound design interface. From there, it will detail the prototype implementation – which combines two neural network modules – focusing on the application of Google’s Differentiable Digital Signal Processing library. An objective analysis of the MSS module relative to baselines will follow. The presentation will culminate in a live demonstration of the automatically programmable Max synthesizer, showing how bass timbres can be creatively accessed from a few varied recordings.
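Since the target instrument is an additive synthesizer, the parameter space an ASP system must estimate is essentially a fundamental frequency plus one amplitude per harmonic. A minimal NumPy sketch of that rendering model (an assumption for illustration, not the presentation's Max/MSP patch or the DDSP library's API):

```python
import numpy as np

def additive_tone(f0_hz, harmonic_amps, dur_s=1.0, sr=44100):
    """Render a tone as a sum of harmonically related sinusoids.

    f0_hz          -- fundamental frequency in Hz
    harmonic_amps  -- amplitude of harmonics 1, 2, 3, ...
    """
    t = np.arange(int(dur_s * sr)) / sr
    audio = np.zeros_like(t)
    for k, amp in enumerate(harmonic_amps, start=1):
        f = k * f0_hz
        if f >= sr / 2:  # drop partials above the Nyquist frequency
            break
        audio += amp * np.sin(2 * np.pi * f * t)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

# Example: an E1 bass (~41.2 Hz) with 1/k harmonic amplitude decay
tone = additive_tone(41.2, [1.0 / k for k in range(1, 9)])
```

Estimating controls like these from an isolated bass stem is what the DDSP-style differentiable synthesis makes trainable end-to-end: the synthesizer itself is part of the network, so reconstruction error can backpropagate into the parameter estimates.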