Bridging Audio, Visual, and AI Domains using the Elixir language, by Thibaut Barrère

This research project focuses on developing an integrated framework for music technology, leveraging Elixir to create a comprehensive ecosystem for audio processing, visual representation, and artificial intelligence integration.

Presented by: Thibaut Barrère

Biography

As an independent developer, I am designing and implementing a consolidated system that handles diverse aspects of music creation, performance, and analysis within a single technological stack.

The framework, built on Elixir, interfaces with MIDI devices and audio hardware, supporting soft real-time audio stream creation, MIDI event handling, and multiple sound cards. It also drives live, reactive interfaces, both web-based and non-web, including SVG piano rolls and other dynamic graphical representations of musical data. The system's hot-reloading capabilities facilitate rapid prototyping and live performances.
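As a rough illustration of the kind of building block involved (a minimal sketch, not the project's actual code), the snippet below shows a plain Elixir GenServer that buffers hypothetical note events and renders them as an SVG piano-roll fragment; the module name, message shapes, and scaling constants are all invented for the example.

    defmodule PianoRoll do
      # Sketch only: buffers {note, start_ms, duration_ms} events and renders
      # them as SVG rectangles. Names and scaling constants are illustrative.
      use GenServer

      def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, [], opts)

      # Called by whatever process receives MIDI input.
      def note_on(pid, note, start_ms, duration_ms),
        do: GenServer.cast(pid, {:note, note, start_ms, duration_ms})

      def to_svg(pid), do: GenServer.call(pid, :to_svg)

      @impl true
      def init(notes), do: {:ok, notes}

      @impl true
      def handle_cast({:note, _, _, _} = event, notes), do: {:noreply, [event | notes]}

      @impl true
      def handle_call(:to_svg, _from, notes) do
        # One rectangle per note: x maps to time, y to pitch (high notes on top).
        rects =
          for {:note, note, start_ms, duration_ms} <- Enum.reverse(notes) do
            ~s(<rect x="#{start_ms / 10}" y="#{(127 - note) * 4}" width="#{duration_ms / 10}" height="4"/>)
          end

        svg =
          ~s(<svg xmlns="http://www.w3.org/2000/svg" width="800" height="512">) <>
            Enum.join(rects) <> "</svg>"

        {:reply, svg, notes}
      end
    end

In the real system, such an SVG fragment would be pushed to the live web or non-web interface whenever new events arrive.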

Controlling DMX lighting from Elixir

To enhance performance, the framework interfaces with C and Rust, making efficient use of drivers and specialized interfaces. It leverages Erlang's native clustering capabilities to interconnect multiple nodes on a network, enabling distributed processing and synchronization across devices.
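To give a concrete sense of the clustering side (a minimal sketch with made-up node names, not the project's actual topology), the standard Erlang/Elixir distribution primitives look like this:

    # Start each node with a name and a shared cookie, e.g.:
    #   iex --name audio@192.168.1.10 --cookie studio -S mix

    # Connect to another node on the network.
    Node.connect(:"lights@192.168.1.11")

    # List the nodes currently visible from this one.
    Node.list()
    #=> [:"lights@192.168.1.11"]

    # Run code on the remote node; LightScene.apply/1 is a hypothetical
    # module living on that node.
    :erpc.call(:"lights@192.168.1.11", LightScene, :apply, [%{channel: 1, value: 255}])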

The integration of Large Language Models (LLMs) within the same technological stack allows musical knowledge, scores, and theory to be extracted directly from these models. The framework extends beyond audio, incorporating light projections via the DMX protocol and implementing live image recognition and video processing using Elixir libraries.
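The DMX side can be sketched as follows, assuming an Enttec-style USB serial interface and the circuits_uart library; the device path, baud rate, framing details, and channel values are assumptions for illustration, not the project's actual setup.

    defmodule Dmx do
      # Sketch only: push one DMX frame through an Enttec-style USB interface.
      @send_dmx_label 6

      def open(device \\ "ttyUSB0") do
        {:ok, uart} = Circuits.UART.start_link()
        :ok = Circuits.UART.open(uart, device, speed: 115_200, active: false)
        uart
      end

      # channels: a list of 0..255 values, one per DMX channel.
      def send_frame(uart, channels) do
        # DMX data starts with a zero start code, then up to 512 channel bytes.
        data = <<0>> <> :binary.list_to_bin(channels)
        len = byte_size(data)

        # Enttec-style framing: 0x7E, label, length (little endian), data, 0xE7.
        packet = <<0x7E, @send_dmx_label, len::little-size(16), data::binary, 0xE7>>
        Circuits.UART.write(uart, packet)
      end
    end

    # Usage: set the first three channels (e.g. the RGB of a fixture) to full red.
    # uart = Dmx.open()
    # Dmx.send_frame(uart, [255, 0, 0] ++ List.duplicate(0, 509))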

This project brings together diverse capabilities within a single, coherent system, allowing for personal exploration of the intersections between music, technology, and artificial intelligence.

As development progresses, this integrated approach may lead to new insights and creative possibilities in digital music creation and performance.