AudioStellar, an open source corpus-based musical instrument for latent sound structure discovery and sonic experimentation by Agustin Spinetto

AudioStellar is a free experimental sampler that uses AI to generate a 2D point map from a folder of audio samples. The sounds included in the map can be played in novel ways that are impossible to achieve with traditional DAWs, samplers, or even custom code. We are a research team composed of members from the Universidad Nacional de Tres de Febrero (Buenos Aires) and Temple University Japan Campus (Tokyo). The project has been presented in talks, workshops and seminars, as well as in performances and concerts at numerous international venues: MUTEK, IRCAM, NIME, ICMC, AIMC, Universitat Pompeu Fabra, and Tokyo University of the Arts, among others. As a result of this global presence, we have recorded an average of 550 visits per month from more than 30 countries and 700 downloads in the last 6 months. Users from all over the world share their experiences and contribute to the development of the program through our forums, creating an ever-growing collaborative community.

logo

Generating a visual representation of short audio clips’ similarities is not only useful for organizing and exploring an audio sample library, but it also opens up a new range of possibilities for sonic experimentation.

We present AudioStellar, an open source software package that enables creative practitioners to create AI-generated 2D visualizations (i.e., a latent space) of their own audio corpus without programming or machine learning knowledge. Sound artists can play their input corpus by interacting with this learned latent space through a user interface that provides built-in modes to experiment with. AudioStellar can interact with other software through MIDI sync, sequencing, audio effects, and more. Creating novel forms of interaction is encouraged through OSC communication or by writing custom C++ code using the provided framework.


AudioStellar Demo Video

Maps

The software processes a folder of user-selected sounds to generate an intelligent sound map, placing each sound as a point in a 2D space. On the map, nearby dots correspond to similar sounds, while distant dots represent different sounds. The dots are grouped into colored clusters to make it more intuitive to differentiate the diverse timbres that make up the collection.

map 1

The map is a visual interface with a double function: it reveals the latent, pre-existing structure in the relationships between the audio samples, and it also allows the sounds to be played in novel ways through the Units.

map 2
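As a rough illustration of how such a map can be computed, the sketch below builds a 2D layout from a folder of samples in Python. The feature choice (MFCC means), projection (t-SNE), clustering (DBSCAN), folder path and parameter values are all assumptions for illustration, not necessarily AudioStellar's actual pipeline.

    # Sketch: build a 2D similarity map from a folder of audio samples.
    # Feature, projection and clustering choices are illustrative assumptions.
    import glob
    import numpy as np
    import librosa
    from sklearn.manifold import TSNE
    from sklearn.cluster import DBSCAN

    paths = sorted(glob.glob("samples/*.wav"))    # hypothetical sample folder
    features = []
    for path in paths:
        audio, sr = librosa.load(path, sr=22050, mono=True)
        mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
        features.append(mfcc.mean(axis=1))        # one feature vector per sample

    # Project the high-dimensional feature vectors onto a 2D plane:
    # nearby points correspond to timbrally similar samples.
    xy = TSNE(n_components=2, perplexity=5).fit_transform(np.array(features))

    # Group nearby points into clusters, one color per cluster.
    labels = DBSCAN(eps=3.0, min_samples=3).fit_predict(xy)
    for path, (x, y), label in zip(paths, xy, labels):
        print(f"{path}: ({x:.1f}, {y:.1f}) cluster {label}")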

 

Units

AudioStellar provides several Units that allow interaction with sound samples through logics and criteria that cannot be achieved with traditional sampling techniques. Each session can have multiple Units, each with controllable mixing parameters, just like the channels of a mixer. Units take advantage of the unique interface created by AudioStellar to play the chosen collection of audio samples. Moreover, all Units can incorporate effects and be controlled via OSC or MIDI.

 

Explorer Unit 

It allows listening to the different sounds one by one in a precise way, as well as creating user-traced spatial trajectories that behave like loops. It makes it possible to explore the sound collection by listening through the generated map and discovering latent timbral relationships.

explorer unit

 

Particle Unit

Particles are autonomous agents that move around the map, playing any sound they touch. These particles have multiple control parameters and can move through the map as swarms or as explosions.

particle unit
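To make the Particle Unit's behavior concrete, here is a minimal sketch of the underlying idea: an agent drifts across the map with a little randomness and triggers any sample point that falls within its radius. The positions, velocity, radius and file names below are invented values for illustration.

    # Sketch: one autonomous particle wandering the map and triggering
    # any sample it touches. All values are illustrative assumptions.
    import math
    import random

    samples = {(0.2, 0.3): "kick.wav", (0.5, 0.5): "snare.wav", (0.7, 0.8): "hat.wav"}
    pos = [0.0, 0.0]
    vel = [0.02, 0.015]
    radius = 0.05

    for step in range(100):
        pos[0] += vel[0] + random.uniform(-0.005, 0.005)   # noisy, swarm-like drift
        pos[1] += vel[1] + random.uniform(-0.005, 0.005)
        for point, name in samples.items():
            if math.dist(pos, point) < radius:
                print(f"step {step}: particle touched {name}")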

Sequence Unit

It defines a sequence of sounds that are played back using distance as rhythm, a tool that transcends traditional musical languages. By using several Sequence Units in parallel, it is possible to explore an unexpected rhythmic universe.

sequence unit
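The "distance as rhythm" idea can be sketched in a few lines: the pause between two consecutive triggers is proportional to the 2D distance between their points on the map. The coordinates and the scaling factor below are made-up values for illustration.

    # Sketch: play a looped sequence where map distance sets the rhythm.
    # Point positions and the distance-to-seconds scale are assumptions.
    import math
    import time

    sequence = [(0.10, 0.20), (0.15, 0.22), (0.80, 0.75), (0.40, 0.50)]
    seconds_per_unit = 2.0   # hypothetical scaling from map distance to seconds

    def play(point):
        print(f"trigger sample at {point}")    # stand-in for actual playback

    for _ in range(4):                          # loop the sequence a few times
        for current, nxt in zip(sequence, sequence[1:] + sequence[:1]):
            play(current)
            gap = math.dist(current, nxt) * seconds_per_unit
            time.sleep(gap)                     # longer distance -> longer pause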


Morph Unit

This unit creates sound textures by mixing a region of samples played at different intensities. It is designed to execute sound gestures driven by external physical trajectories or controllers.

morph unit
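One way to picture what the Morph Unit does is distance-weighted mixing: every sample inside a region gets a gain that falls off with its distance from a cursor position. The region, cursor, radius and falloff curve below are assumptions for illustration.

    # Sketch: distance-weighted mixing of a region of samples around a cursor.
    # Sample names, positions and the falloff curve are illustrative assumptions.
    import math

    region = {"pad_a.wav": (0.30, 0.40), "pad_b.wav": (0.35, 0.45), "pad_c.wav": (0.50, 0.40)}
    cursor = (0.33, 0.42)   # e.g. driven by a MIDI controller or an OSC trajectory
    radius = 0.2

    for name, point in region.items():
        d = math.dist(cursor, point)
        gain = max(0.0, 1.0 - d / radius)   # louder the closer the sample is to the cursor
        print(f"{name}: gain {gain:.2f}")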

OSC Unit

This unit facilitates connecting AudioStellar with external programming environments (Max, PureData, Python), as it exposes a library of numerous OSC methods for creating custom heuristics.
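As a rough example of driving AudioStellar from Python over OSC, the sketch below uses the python-osc package. The port number and the OSC address paths are placeholders for illustration; the actual methods and addresses are documented by AudioStellar.

    # Sketch: sending OSC messages to AudioStellar from Python with python-osc.
    # Host, port and address paths are hypothetical placeholders.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 8888)      # hypothetical OSC port

    # Hypothetical messages: trigger the sound nearest to a map position,
    # then move a morph cursor along a short trajectory.
    client.send_message("/explorer/play", [0.42, 0.77])
    for t in range(10):
        client.send_message("/morph/position", [0.2 + 0.05 * t, 0.5])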