Juan José Burred

J.J. Burred is an independent researcher, software developer and musician based in Paris. With a background in machine learning and signal processing, his work focuses on developing innovative tools for music and sound creation, analysis and search. After earning a PhD from the Technical University of Berlin, he worked as a researcher at IRCAM and Audionamix on topics such as source separation, automatic music analysis, sound classification, content-based search and sound synthesis. His current work centers on exploring machine learning techniques for new sound analysis/synthesis methods aimed at musical creation.

Factorsynth and Factoid: demo and presentation of upcoming features

Factorsynth is a Max and Max for Live tool (and a partner product of the IRCAM Forum) that uses a machine learning technique (matrix factorization) to decompose any input sound into a set of temporal and spectral elements. Once these elements have been extracted, they can be modified and recombined to perform powerful transformations, such as removing notes or motifs, creating new ones, randomizing melodies or timbres, changing rhythmic patterns and creating complex sound textures. I will demonstrate different use cases of Factorsynth as a studio or performance tool, and will introduce new features of the upcoming version 2. I will also present Factoid, my latest release, a device related to Factorsynth and aimed at randomizing temporal structure.
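To make the decomposition idea concrete, the following is a minimal sketch of the general technique the abstract names (non-negative matrix factorization of a magnitude spectrogram into spectral templates and temporal activations), not Factorsynth's actual implementation. The libraries (librosa, scikit-learn, soundfile), file names and parameters such as the number of components are illustrative assumptions.

```python
import numpy as np
import librosa
import soundfile as sf
from sklearn.decomposition import NMF

# Load a sound and compute its magnitude spectrogram (hypothetical file name).
y, sr = librosa.load("input.wav", sr=None)
n_fft, hop = 2048, 512
S = librosa.stft(y, n_fft=n_fft, hop_length=hop)
V = np.abs(S)  # non-negative matrix: frequency bins x time frames

# Factorize V ~= W @ H: columns of W are spectral elements,
# rows of H are their temporal activations (the element count is a free choice).
model = NMF(n_components=8, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(V)   # shape (n_freq_bins, 8)
H = model.components_        # shape (8, n_frames)

# Edit and recombine the elements, e.g. mute one component's activations
# (loosely analogous to removing a note or motif) before reconstruction.
H_edit = H.copy()
H_edit[3, :] = 0.0           # silence the fourth element
V_edit = W @ H_edit

# Resynthesize using the phase of the original mixture.
y_out = librosa.istft(V_edit * np.exp(1j * np.angle(S)), hop_length=hop)
sf.write("output.wav", y_out, sr)
```

Other recombinations follow the same pattern: shuffling rows of H changes temporal structure while keeping the timbres in W, and swapping columns of W between two sounds exchanges their spectral content.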