Introduction to Offline Algorithmic Audio with bellplay~ by Felipe Tovar-Henao

This workshop offers a practical, hands-on introduction to offline audio generation, analysis, and processing in bellplay~, a scripting-based environment built on the bell programming language. Unlike real-time environments such as Max or SuperCollider, bellplay~ renders sound offline. This approach enables techniques such as multi-pass and look-ahead processing, computationally expensive operations, non-causal behavior, and analysis-driven transformations, without concern for CPU limits or polyphonic voice management and allocation. Additionally, it is designed to bridge symbolic (i.e., notation-based) music operations with audio, facilitating the composition of acoustic, electronic, and mixed music within a single environment. For more information, visit: https://bellplay.net

➡️ This presentation is part of the IRCAM Forum Workshops Paris / Enghien-les-Bains, March 2026


bellplay~ Workshop



Before

You will receive setup instructions in advance. Please install and configure bellplay~ on your laptop (Mac or Windows) before the session so we can use the time efficiently.


During

The workshop builds progressively toward writing your own granular processing script, using this as a vehicle for introducing the environment and its core features. The session runs 60 minutes.

Foundations: launching the environment, writing and running a script, generating a simple sound and placing it on a timeline, and rendering audio to disk.
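The offline workflow described above — generate a sound, place it at an onset on a timeline, render the result to disk — is language-agnostic. The sketch below illustrates it in plain Python (not bell, bellplay~'s own scripting language); all function names here are illustrative, not part of the bellplay~ API.

```python
import math
import struct
import wave

SR = 44100  # sample rate in Hz

def sine_tone(freq, dur, amp=0.5):
    """Generate a sine tone as a list of float samples."""
    return [amp * math.sin(2 * math.pi * freq * n / SR)
            for n in range(int(dur * SR))]

def place(timeline, samples, onset):
    """Mix samples into the timeline at a given onset (in seconds),
    growing the timeline as needed -- no real-time voice allocation."""
    start = int(onset * SR)
    end = start + len(samples)
    if end > len(timeline):
        timeline.extend([0.0] * (end - len(timeline)))
    for i, s in enumerate(samples):
        timeline[start + i] += s

def render(timeline, path):
    """Write the mono float timeline to a 16-bit WAV file on disk."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SR)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in timeline)
        f.writeframes(frames)

# Two overlapping tones on one timeline, rendered in a single pass.
timeline = []
place(timeline, sine_tone(440.0, 0.5), onset=0.0)
place(timeline, sine_tone(660.0, 0.5), onset=0.25)
render(timeline, "out.wav")
```

Because nothing runs in real time, the whole timeline can be inspected or reprocessed before rendering — the property that enables the multi-pass and look-ahead techniques mentioned earlier.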

Core project — granulation: importing a sample into a buffer; generating grains by defining onset position, duration, gain, and stereo placement through scripted rules; looping and layering grains to build dense textures; and rendering, adjusting parameters, and re-rendering iteratively.
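The grain-generation logic of the core project — scripted rules for onset position, duration, gain, and stereo placement, layered into a texture — can be sketched outside bellplay~ as well. The Python below is a minimal stand-in (not bell code): it synthesizes its own source buffer rather than importing a sample, and every parameter range is an arbitrary choice for illustration.

```python
import math
import random

SR = 44100
random.seed(1)  # reproducible grain cloud

# Stand-in source buffer (1 s of a 220 Hz sine); in the workshop this
# would be a sample imported into a buffer instead.
source = [math.sin(2 * math.pi * 220 * n / SR) for n in range(SR)]

def grain(src, onset, dur):
    """Extract a grain from the source and apply a Hann window
    so each grain fades in and out without clicks."""
    start = int(onset * SR)
    seg = src[start:start + int(dur * SR)]
    return [s * 0.5 * (1 - math.cos(2 * math.pi * i / (len(seg) - 1)))
            for i, s in enumerate(seg)]

def scatter(src, n_grains, total_dur):
    """Layer randomized grains into a stereo timeline."""
    out_len = int(total_dur * SR)
    left, right = [0.0] * out_len, [0.0] * out_len
    for _ in range(n_grains):
        dur = random.uniform(0.02, 0.1)            # grain duration (s)
        onset = random.uniform(0, 1.0 - dur)       # read position in source
        when = random.uniform(0, total_dur - dur)  # placement on timeline
        gain = random.uniform(0.1, 0.4)
        pan = random.random()                      # 0 = left, 1 = right
        g = grain(src, onset, dur)
        start = int(when * SR)
        for i, s in enumerate(g):                  # layer grains additively
            left[start + i] += s * gain * (1 - pan)
            right[start + i] += s * gain * pan
    return left, right

left, right = scatter(source, n_grains=200, total_dur=2.0)
```

Re-running `scatter` with different ranges (or a different seed) is the offline analogue of the render–adjust–re-render loop the workshop practices.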

Optional topics (time permitting): running basic analysis — such as onset detection or spectral centroid — and using the results to shape grain behavior; exporting rendered audio or buffer data for use outside the environment.
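To make the analysis-driven idea concrete, here is one of the mentioned descriptors — spectral centroid — computed in plain Python via a naive DFT, plus a hypothetical mapping from centroid to grain duration. Neither function reflects bellplay~'s actual analysis API; the 4 kHz normalization cap is an arbitrary assumption.

```python
import math

SR = 44100

def spectral_centroid(frame):
    """Brightness measure: magnitude-weighted mean frequency of a frame
    (naive O(n^2) DFT -- fine offline, too slow for real time)."""
    n = len(frame)
    num = den = 0.0
    for k in range(1, n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        num += (k * SR / n) * mag  # bin frequency, weighted by magnitude
        den += mag
    return num / den if den else 0.0

def centroid_to_grain_dur(c, lo=0.02, hi=0.1):
    """Hypothetical rule: brighter frames get shorter grains."""
    t = min(c / 4000.0, 1.0)  # normalize; the 4 kHz cap is arbitrary
    return hi - t * (hi - lo)

# A pure tone centered exactly on DFT bin 8 of a 512-sample frame:
frame = [math.sin(2 * math.pi * 8 * t / 512) for t in range(512)]
centroid = spectral_centroid(frame)   # close to 8 * SR / 512 Hz
```

Because analysis and synthesis share one offline script, a value like `centroid` can directly drive the grain rules from the core project, with no scheduling or latency concerns.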


After

By the end of the session you will have a working granulation script and a clear understanding of how bellplay~ handles audio algorithmically. You will also receive example scripts and documentation to continue working independently after the workshop.

This workshop is aimed at composers and sound artists interested in direct, fine-grained control over sound through code, without the constraints of real-time interaction or visual programming.