Posted by: agusajs, 2 months, 3 weeks ago
Generating a visual representation of the similarities between short audio clips is not only useful for organizing and exploring an audio sample library, but also opens up a new range of possibilities for sonic experimentation.
We present AudioStellar, an open-source software tool that enables creative practitioners to create AI-generated 2D visualizations (i.e., a latent space) of their own audio corpus without programming or machine learning knowledge. Sound artists can play their input corpus by interacting with this learned latent space through a user interface that provides built-in modes to experiment with. AudioStellar can interact with other software via MIDI syncing, sequencing, audio effects, and more. Creating novel forms of interaction is encouraged through OSC communication or by writing custom C++ code using the provided framework.
AudioStellar is a free, experimental sampler that uses AI to generate a 2D point map from a folder of audio samples. The sounds included in the map can be played in novel ways that are impossible to achieve with traditional DAWs, samplers, or even custom code.
Maps
The software processes a folder of user-selected sounds to generate an intelligent sound map, placing each sound as a point in a 2D space. On the map, nearby dots correspond to similar sounds, while distant dots represent different sounds. The dots are grouped into colored clusters to more intuitively differentiate the diverse timbres that make up the collection.
The map is a visual interface with a dual function: it reveals the latent, pre-existing structure in the relationships between the audio samples, and it allows the sounds to be played back in novel ways through the Units.
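For readers curious how such a map can be built, the sketch below shows the general recipe: summarize each clip with audio features, project the features onto a 2D plane, and cluster the resulting points. The feature type (MFCCs), projection (t-SNE), clustering (DBSCAN), folder path, and parameter values are illustrative assumptions, not necessarily what AudioStellar uses internally.

```python
# Minimal sketch: audio folder -> 2D similarity map with colored clusters.
# MFCC / t-SNE / DBSCAN and the "samples/*.wav" path are illustrative assumptions.
import glob
import numpy as np
import librosa
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

def describe(path):
    # Summarize a clip's timbre with mean MFCCs.
    y, sr = librosa.load(path, sr=22050, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

paths = sorted(glob.glob("samples/*.wav"))
X = np.array([describe(p) for p in paths])

# Project the feature vectors to 2D: similar clips land close together.
coords = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(X)

# Group neighboring dots into clusters (the colored groups on the map).
labels = DBSCAN(eps=3.0, min_samples=3).fit_predict(coords)

for p, (x, y), c in zip(paths, coords, labels):
    print(f"{p}: ({x:.1f}, {y:.1f}) cluster {c}")
```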
Units
AudioStellar provides several Units that allow interaction with sound samples through new logics and criteria that are not achievable with traditional sampling techniques. Each session can have multiple Units, each with controllable mixing parameters, just like the channels of a mixer. Units take advantage of the unique interface created by AudioStellar to play the chosen collection of audio samples. Moreover, all Units can incorporate effects and be controlled by OSC or MIDI.
Explorer Unit
It allows listening to the different sounds one by one in a precise way, as well as creating user-traced spatial trajectories that behave like loops. It makes it possible to explore the sound collection by listening to the generated map and discovering latent timbral relationships.
Particle Unit
Particles are autonomous agents that move around the map, reproducing any sound they touch. These particles have multiple control parameters and can move through the map as swarms or as explosions.
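As an illustration of the idea, the sketch below moves a small swarm of random-walking particles over a set of 2D points and reports every point that falls within a trigger radius. The motion rule, radius, and printouts are assumptions made for the example; the real unit exposes richer parameters and actually plays the touched samples.

```python
# Minimal sketch of particles "touching" map dots; motion rule and radius are assumptions.
import math
import random

dots = [(random.random(), random.random()) for _ in range(50)]  # stand-in for the map

class Particle:
    def __init__(self):
        self.x, self.y = random.random(), random.random()

    def step(self, speed=0.02, radius=0.05):
        # Random-walk one step, then return the indices of dots within the trigger radius.
        angle = random.uniform(0.0, 2.0 * math.pi)
        self.x += speed * math.cos(angle)
        self.y += speed * math.sin(angle)
        return [i for i, (dx, dy) in enumerate(dots)
                if math.hypot(dx - self.x, dy - self.y) < radius]

swarm = [Particle() for _ in range(8)]
for frame in range(100):
    for particle in swarm:
        for i in particle.step():
            print(f"frame {frame}: trigger sample {i}")  # a real engine would play the clip
```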
Sequence Unit
It defines a sequence of sounds that are played back using distance as rhythm, a tool that transcends traditional musical languages. By running several Sequence Units in parallel, it is possible to explore an unexpected rhythmic universe.
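One way to picture "distance as rhythm" is sketched below: the pause between two consecutive sounds is taken to be proportional to their distance on the map, so nearby points play in quick succession and distant ones leave long gaps. The proportionality rule, the coordinates, and the time scale are assumptions made for illustration.

```python
# Minimal sketch of distance-as-rhythm; the rule and values are illustrative assumptions.
import math
import time

sequence = [(0.10, 0.20), (0.15, 0.22), (0.60, 0.70), (0.62, 0.71)]  # map coordinates
seconds_per_unit = 2.0  # how much time one unit of map distance is worth

for (x0, y0), (x1, y1) in zip(sequence, sequence[1:]):
    print(f"play sound at ({x0}, {y0})")  # a real engine would trigger the sample here
    time.sleep(math.hypot(x1 - x0, y1 - y0) * seconds_per_unit)  # near -> fast, far -> slow
print(f"play sound at {sequence[-1]}")
```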
Morph Unit
This unit creates sound textures by mixing a region of samples played at different intensities. It is designed to execute sound gestures using external physical trajectories or controllers.
OSC Unit
This unit facilitates the connection of AudioStellar with external programming environments (Max, Pure Data, Python), as it exposes a library of OSC methods for creating custom heuristics.
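A minimal sketch of driving AudioStellar from Python over OSC is shown below, using the python-osc package. The host, port, OSC address, and arguments are hypothetical placeholders; the actual OSC methods exposed by the unit are listed in AudioStellar's documentation.

```python
# Minimal sketch of sending OSC to AudioStellar from Python.
# Host, port, address, and arguments below are hypothetical placeholders.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # hypothetical host and port

# Hypothetical message: ask the unit to play the sound nearest to an (x, y) map position.
client.send_message("/play_nearest", [0.42, 0.73])
```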